Article

Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces

Taylor & Francis
Cognition and Emotion

Abstract

Decoding someone's facial expressions provides insights into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to provide measurements of emotional facial expressions. Previous studies provided initial evidence for the sensitivity of such systems to detect facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. Thus, we presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) either to actively mimic them, to look at them passively (n = 21), or to inhibit their own facial reactions (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were registered continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG was still highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
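Although the analysis pipeline itself is not described in this excerpt, the comparison logic rests on scoring each channel as a baseline-corrected change after picture onset. The sketch below only illustrates that idea; the sampling rates, window lengths, and array names are assumptions, not the authors' implementation.

```python
import numpy as np

def delta_response(trace, fs, onset_s, baseline_s=1.0, response_s=3.0):
    """Baseline-corrected mean response after stimulus onset.

    trace   : 1-D array for one trial (e.g., zygomaticus RMS or an AFC
              'happy' score), sampled at fs Hz.
    onset_s : picture onset within the trial, in seconds.
    Returns the post-onset mean minus the pre-onset (baseline) mean,
    so positive values indicate an increase after picture onset.
    """
    onset = int(onset_s * fs)
    baseline = trace[onset - int(baseline_s * fs):onset].mean()
    response = trace[onset:onset + int(response_s * fs)].mean()
    return response - baseline

# Hypothetical usage for one participant and condition:
# emg_zyg_trials   : (n_trials, n_samples) EMG RMS sampled at 1000 Hz
# afc_happy_trials : (n_trials, n_frames) AFC scores sampled at 30 fps
# emg_deltas = [delta_response(t, fs=1000, onset_s=2.0) for t in emg_zyg_trials]
# afc_deltas = [delta_response(t, fs=30, onset_s=2.0) for t in afc_happy_trials]
```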


... A high positive correlation was found between the probability of joy and zygomaticus muscle activity, and between anger and corrugator muscle activity [41]. While strong prototypical affect expressions were measured here, Höfling et al. [42] compared the ability of the AFC software FaceReader (Noldus) with EMG to measure subtle affect expressions. Here, subjects were not asked to imitate the affect stimuli, but to behave passively. ...
... We also expected a positive correlation between the EMG activity of the corrugator muscle and the action unit brow lowerer for anger [41]. However, there is also evidence that measuring subtle affect expressions may be more difficult for the Affdex software [40,42]. ...
... We expected comparable results for Affdex and EMG measurements [41,42]. However, there was also evidence for reduced measurement performance of Affdex for subtle affect expressions, as expected for facial mimicry [40]. ...
Article
Full-text available
Facial mimicry is the automatic imitation of the facial affect expressions of others. It serves as an important component of interpersonal communication and affective co-experience. Facial mimicry has so far been measured by electromyography (EMG), which requires a complex measuring apparatus. Recently, software for measuring facial expressions has become available, but it is still unclear how well it is suited for measuring facial mimicry. This study investigates the comparability of the automated facial coding software Affdex with EMG for measuring facial mimicry. For this purpose, facial mimicry was induced in 33 subjects by presenting naturalistic affect-expressive video sequences (anger, joy). The responses of the subjects were measured simultaneously by facial EMG (corrugator supercilii muscle, zygomaticus major muscle) and by Affdex (action units lip corner puller and brow lowerer, and the affects joy and anger). Subsequently, the correlations between the measurement results of EMG and Affdex were calculated. After the presentation of the joy stimulus, there was an increase in zygomaticus muscle activity (EMG) about 400 ms after stimulus onset and an increase in joy and lip corner puller activity (Affdex) about 1200 ms after stimulus onset. The joy and lip corner puller activity detected by Affdex correlated significantly with the EMG activity. After presentation of the anger stimulus, corrugator muscle activity (EMG) also increased approximately 400 ms after stimulus onset, whereas anger and brow lowerer activity (Affdex) showed no response. During the entire measurement interval, anger activity and brow lowerer activity (Affdex) did not correlate with corrugator muscle activity (EMG). Using Affdex, the facial mimicry response to a joy stimulus can be measured, but it is detected approximately 800 ms later compared to the EMG. Thus, electromyography remains the tool of choice for studying subtle mimic processes like facial mimicry.
... In another study, Höfling and colleagues tested volitional mimicry, passive viewing (spontaneous mimicry), and inhibition of mimicry conditions. While volitional mimicry could be detected by the EMG delta and FaceReader valence output, spontaneous facial mimicry during passive viewing could only be detected using the EMG delta measure, not the FaceReader valence output [34]. In this study, the performance of the FaceReader AU 4 and 12 outputs was neither estimated nor compared with corresponding EMG CS and ZM recordings. ...
... Cross-correlation will be used to test the extent of detection latency. (6) Previous studies validating the detection of spontaneous facial mimicry using the automated FACS employed earlier versions of commercial software, such as FaceReader 7 [33,34]. In this study, FaceReader 9.0, which incorporated updates such as deep learning-based algorithms (see Section 2.6.1) and two new open-source software, OpenFace 2.2.0 [41] and Py-Feat 0.6.0 ...
... However, without bilateral estimation and the East Asian module, Py-Feat achieved a comparable or even higher F1 score in smile mimicry detection than FaceReader. These advantages seem to have contributed to the improved detection of spontaneous facial mimicry in this study compared with the performance of FaceReader 7.0 used in Höfling and colleagues (2021), in which spontaneous facial mimicry during passive viewing was detected only by the EMG delta and not by the FaceReader valence output [34]. However, based on time series and cross-correlation visualization (Figures 1 and 2), FaceReader appeared to apply temporal smoothing across frame-wise estimations to reduce temporal noise, resulting in smoother AU time series than the other automated FACS software and possibly introducing some latency in AU estimation compared to OpenFace and Py-Feat. ...
Article
Full-text available
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via an automated FACS. Participants were alternately presented with real-time model performance and prerecorded videos of dynamic facial expressions, while simultaneous ZM signals and frontal facial videos were acquired. AU12 was estimated from the facial videos using FaceReader, Py-Feat, and OpenFace. The automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry by live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM. Our results suggested that while the automated FACS could not replace facial EMG in mimicry detection, it could serve a purpose when effect sizes are large. Researchers should be cautious with the automated FACS outputs, especially when studying clinical populations. In addition, developers should consider the EMG validation of AU estimation as a benchmark.
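The latency figures reported above come from comparing the automated AU time series with the EMG signal; a common way to quantify such a lag is to locate the peak of the normalized cross-correlation. The snippet below is a generic sketch of that computation (signal names and sampling rate are assumed), not the code used in the study.

```python
import numpy as np

def peak_lag_ms(emg, au, fs):
    """Lag (in ms) at which the AU trace best matches the EMG trace.

    emg, au : 1-D arrays of equal length sampled at fs Hz, e.g.
              zygomaticus RMS and AU12 intensity for one trial.
    A positive return value means the AU trace trails the EMG trace.
    """
    emg = (emg - emg.mean()) / emg.std()
    au = (au - au.mean()) / au.std()
    lags = np.arange(-(len(emg) - 1), len(emg))
    xcorr = np.correlate(au, emg, mode="full") / len(emg)
    return lags[np.argmax(xcorr)] * 1000.0 / fs

# A returned lag of roughly 100-300 would be consistent with the
# AU12-versus-ZM latency reported above.
```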
... In comparison to human FACS coding, automatic facial coding (AFC) offers several advantages: it is dramatically more time efficient because it can analyze a large number of facial expressions without human effort [10]. Moreover, AFC is less intrusive and less susceptible to motion artifacts [11], but also less sensitive to more subtle facial responses compared to psychophysiological measures like electromyography [12,13]. ...
... Only a few studies have tested the validity of AFC on more naturalistic facial expressions of untrained participants who posed facial expressions. Two studies documented that AFC is sensitive to posed joy and anger, but with larger sensitivity for joyful compared to angry faces [13,18]. Two other studies, in which participants posed all six emotions, reported substantial differences in sensitivity for specific emotion categories [21,25]. ...
... Accordingly, more naturalistic research settings have to be approached in future studies [50]. Until further technological progress is made, AFC may not yet be capable of detecting very subtle emotional facial expressions, in contrast to other research methods like EMG [13]. ...
Article
Full-text available
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error prone. This discrepancy requires a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology). These were compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants compared to standardized pictures for all emotions. Third, although the profiles of relevant AUs overlapped substantially across the two datasets, there were also substantial differences in their AU profiles. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
... Consistent with these results, we found that AFC parameters of standardized inventories and unstandardized facial expressions from untrained participants in a typical laboratory setting substantially differ in the relative intensity of AU activity, the resulting AU profiles, and overall classification accuracies. Furthermore, the classification performance of AFC decreases if spontaneous facial responses toward emotional stimuli like scenes or faces are investigated [23,24]. Hence, the validity of AFC to detect emotional facial expressions is further decreased compared to prototypical facial expressions from standardized inventories. ...
... Accordingly, the present study aimed to investigate the influence of the prototypicality of the picture material (standardized vs. unstandardized facial expressions) used to train machine learning algorithms on the classification of standardized and non-standardized emotional expressions. This is highly relevant for emotion researchers, who are primarily interested in the valid measurement of naturalistic facial expressions that are less intense and less prototypical [21,24]. ...
Article
Full-text available
Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures to infer emotion categorization from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason might be that AFC is typically trained with standardized and prototypical facial expression inventories. Because AFC would be useful to analyze less prototypical research material as well, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting and tested them on identical or cross-over material. All machine learning models' accuracies were comparable when trained and tested on held-out data from the same dataset (acc. = 83.4% to 92.5%). Strikingly, we found a substantial drop in accuracies for models trained with the highly prototypical standardized dataset when tested on the unstandardized dataset (acc. = 52.8% to 69.8%). However, when they were trained with unstandardized expressions and tested with standardized datasets, accuracies held up (acc. = 82.7% to 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research or even naturalistic scenarios, future developments should include more naturalistic facial expressions for training. This approach will improve the generalizability of AFC to encode more naturalistic facial expressions and increase robustness for future applications of this promising technology.
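The evaluation design described above amounts to training a classifier on one dataset and testing it both on held-out samples from the same dataset and on the other dataset. A minimal sketch of that cross-over evaluation, assuming AU-intensity feature matrices and an off-the-shelf classifier rather than the authors' exact models, could look like this:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def within_and_cross_accuracy(X_train_src, y_train_src, X_other, y_other, seed=0):
    """Train on one dataset; test on held-out samples and on the other dataset.

    X_train_src, y_train_src : AU-intensity features and emotion labels from
                               the training source (e.g., a standardized inventory).
    X_other, y_other         : the same kind of features from the other source
                               (e.g., untrained participants in the lab).
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_train_src, y_train_src, test_size=0.3, stratify=y_train_src,
        random_state=seed)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_tr, y_tr)
    within = accuracy_score(y_te, clf.predict(X_te))        # held-out, same dataset
    cross = accuracy_score(y_other, clf.predict(X_other))   # cross-over material
    return within, cross
```

Calling this once per direction (standardized expressions as the training source, then unstandardized) yields the four accuracy comparisons described above.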
... Höfling et al. (2020) report strong correlations of FER parameters with the emotion ratings of participants spontaneously responding to pleasant emotional scenes, but find no evidence for valid FER detection of spontaneous unpleasant facial reactions. Other studies report a decrease in FER emotion recognition for more subtle and naturalistic facial expressions (Höfling et al., 2021) and find a superiority of humans in decoding such emotional facial responses (Yitzhak et al., 2017; Dupré et al., 2020). However, the datasets applied are still comprised of images collected in a controlled lab setting, with little variation in lighting, camera angle, or age of the subjects, which might further decrease FER performance under less restricted recording conditions. ...
... Although emotion recognition performance is generally lower for such facial expressions, FER tools perform similarly to or better than humans for most emotion categories of non-standardized (except for anger and fear) and standardized facial expressions. Facial expressions of joy are detected best among the emotion categories in both standardized and non-standardized facial expressions, which also replicates existing findings (Stöckli et al., 2018; Höfling et al., 2021). However, FER performance varies strongly between systems and emotion categories. ...
Article
Full-text available
Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, applicability to more naturalistic facial expressions still remains uncertain. Hence, we test and compare performance of three different FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is most accurate) and they are mostly on par with human raters. For the non-standardized stimuli, performance drops remarkably for all three systems, but Azure still performs similarly to humans. In addition, all systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, emotion recognition by automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; clearly there is need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.
... These software tools have been validated for the classification of standardized prototypical facial expressions, and their reliability has been tested in several research studies (e.g., Beringer et al., 2019; Kulke et al., 2020; Küntzler et al., 2021; Stöckli et al., 2018; Zaharieva et al., 2024). According to some studies, facial expression analysis may have a worse ability to detect subtle affect (Stöckli et al., 2018) than does EMG, as well as to detect facial mimicry (emotional contagion through a feedback mechanism, e.g., Höfling et al., 2021; Westermann et al., 2024). ...
Article
Full-text available
The interaction between music and the environment has been widely investigated in various domains; however, the effects of music on the perception of outdoor environments have not been adequately examined. A better understanding of audio-visual interactions between music and the natural environment is important for music psychology, because the field is currently employing natural sounds, yet their pairing remains poorly understood. Furthermore, this understanding is vital for soundscape research, given that individuals are increasingly listening to music on headphones in natural settings. This has practical implications wherever music and the natural environment are paired. This study explored the audio-visual interaction between music and the perception of natural environments. Four types of natural images were presented based on their attractiveness/unattractiveness and visual openness/closedness. At the same time, the participants listened to sad or happy music. Both self-reported assessment data and data obtained through automated software analysis of emotional facial expressions represented in the form of emotional engagement were analyzed. The results showed that, compared to listening to sad music or no music, exposure to happy music resulted in an increase in self-reported environmental preference. However, sad music did not significantly decrease self-reported environmental preference or self-reported pleasant feelings compared to the control no-music condition. Analysis of the engagement in facial emotional expressions showed that sad music decreased engagement compared to the no-music condition in all types of environments; however, when listening to happy music, the participants’ engagement was lower in unattractive environments but not in attractive environments compared to when they did not listen to any music.
... The raw EMG data were re-referenced off-line to bipolar measures and filtered with a 30 Hz high-pass filter, a 250 Hz low-pass filter, and a 50 Hz notch filter [60,61]. We then rectified and integrated the EMG signal via the root-mean-square (RMS) technique with a 100-ms time window using AcqKnowledge (ver. ...
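The filtering and smoothing steps quoted here correspond to standard offline EMG preprocessing; a rough Python equivalent is sketched below. The 1000 Hz sampling rate and zero-phase filtering are assumptions for illustration, and the excerpt itself refers to AcqKnowledge for this step.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_emg(bipolar, fs=1000):
    """30-250 Hz band-pass, 50 Hz notch, then 100-ms moving RMS."""
    b, a = butter(4, [30, 250], btype="bandpass", fs=fs)
    emg = filtfilt(b, a, bipolar)                  # 30 Hz high-pass + 250 Hz low-pass
    bn, an = iirnotch(50, Q=30, fs=fs)
    emg = filtfilt(bn, an, emg)                    # 50 Hz mains notch
    win = int(0.1 * fs)                            # 100-ms integration window
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(emg ** 2, kernel, mode="same"))  # moving RMS
```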
... A growing body of literature has explored the reliability of automatic facial coding in detecting emotions by comparing it with traditional methods. Results have shown that automatic facial coding generates values that are comparable and highly correlated with those generated by untrained human coders (Krumhuber et al., 2021a, 2021b), trained human coders (Girard & Cohn, 2015; Girard et al., 2013; Gupta et al., 2022), and electromyography, which is currently considered the psychophysiological "gold standard" (Beringer et al., 2019; Höfling et al., 2021; Kulke et al., 2020). ...
Article
The main diagnostic criteria for major depressive disorder (MDD) are consistent experiences of high levels of negative emotions and low levels of positive emotions. Therefore, modification of these emotions is essential in the treatment of MDD. In the current study, we harnessed a computational approach to explore whether experiencing negative emotions during psychological treatment is related to subsequent changes in these emotions. Facial expressions were automatically extracted from 175 sessions of 58 patients with MDD. Within sessions, a U-shaped trajectory of change in valence was observed in which patients expressed an increase in negative emotions in the middle of the session. Between sessions, a consistent increase in valence was observed. A trajectory of within-sessions decrease followed by an increase in valence was positively associated with greater perceived positive emotions and subsequent decreases in depressive symptoms. These findings highlight the importance of targeting negative emotions during treatment to achieve more favorable outcomes.
... These can reliably classify discrete emotions as well as facial actions (Littlewort et al., 2011; Lewinski et al., 2014). Given that most classifiers have been trained based on the theoretical principle proposed by the Facial Action Coding System (FACS, Ekman et al., 2002; Calvo et al., 2018), recognition performance is found to be comparable to human coders (Skiendziel et al., 2019; Krumhuber et al., 2021a) and other physiological measurements (Kulke et al., 2020; Höfling et al., 2021), sometimes even outperforming human raters (Krumhuber et al., 2021b). In most cases, the distinctive appearance of highly standardised expressions benefits the featural analysis by machines (Pantic and Bartlett, 2007). ...
Article
Full-text available
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
... Büdenbender et al. (2023), Höfling et al. (2022), and Sato et al. (2019) explored the application of AFC in untrained participants who performed posed expressions, highlighting the challenges and potential in coding their facial expressions, and emphasising the importance of incorporating more naturalistic expressions in training AFC models. Furthermore, Höfling et al. (2021) investigated the differentiation of facial expressions in various social interaction scenarios, demonstrating the efficacy of FaceReader in the mimicking condition and the superiority of electromyogram (EMG) measures in passive viewing and inhibition conditions. Notably, Höfling et al. (2020) compared the sensitivity of FaceReader to established psychophysiological measures and found comparable results for pleasant emotions, but limitations in distinguishing between neutral and unpleasant stimuli. ...
Article
Full-text available
Introduction This work explores the use of an automated facial coding software - FaceReader - as an alternative and/or complementary method to manual coding. Methods We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos—obtained during real-life parent-infant interactions in the home—were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories - namely Positive, Neutral, Negative, and Surprise - before contingency tables were employed to examine the software’s detection rate and quantify the agreement between manual and automated coding. By employing binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of gender on our models, seeking to estimate its influence on the predictive accuracy. Results We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers’ faces were more important for predicting Positive and Neutral expressions, whilst fathers’ faces were more important in predicting Negative and Surprise expressions. Discussion We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.
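The modelling approach summarized above (binary logistic regression predicting each manually coded category from FaceReader output, with a gender interaction term) can be set up along the following lines; the variable names, file name, and formula are illustrative assumptions rather than the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per coded segment with, e.g.,
#   manual_positive (0/1 manual code), fr_positive (FaceReader intensity, 0-1),
#   parent_gender ('mother' / 'father')
df = pd.read_csv("parent_infant_coding.csv")  # hypothetical file

model = smf.logit(
    "manual_positive ~ fr_positive * C(parent_gender)", data=df
).fit()
print(model.summary())  # the interaction term tests whether the
                        # FaceReader-to-manual mapping differs by gender
```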
... However, this type of software presents a number of limitations compared to facial EMG. For instance, these tools performed more poorly for negative valence and for inhibited facial expressions (Höfling et al., 2020, 2021). Furthermore, although it achieves acceptable reliability with standardized prototypical images of facial expressions (e.g., Lewinski, 2015), as well as simulated facial expressions (e.g., Kulke et al., 2020), iMotions software performs worse with naturally expressed facial emotions, notably fearful faces (Stöckli et al., 2018). ...
Article
Ho, M.H., Kemp, B.T., Eisenbarth, H. & Rijnders, R.J.P. Designing a neuroclinical assessment of empathy deficits in psychopathy based on the Zipper Model of Empathy. NEUROSCI BIOBEHAV REV YY(Y) XXX-XXX, 2023. The heterogeneity of the literature on empathy highlights its multidimensional and dynamic nature and affects unclear descriptions of empathy in the context of psychopathology. The Zipper Model of Empathy integrates current theories of empathy and proposes that empathy maturity is dependent on whether contextual and personal factors push affective and cognitive processes together or apart. This concept paper therefore proposes a comprehensive battery of physiological and behavioral measures to empirically assess empathy processing according to this model with an application for psychopathic personality. We propose using the following measures to assess each component of this model: (1) facial electromyography; (2) the Emotion Recognition Task; (3) the Empathy Accuracy task and physiological measures (e.g., heart rate); (4) a selection of Theory of Mind tasks and an adapted Dot Perspective Task, and; (5) an adjusted Charity Task. Ultimately, we hope this paper serves as a starting point for discussion and debate on defining and assessing empathy processing, to encourage research to falsify and update this model to improve our understanding of empathy.
... The differential distribution of IEMG across male and female participants in the experimental condition is shown in Figure 9. Previous studies have shown that when positive emotions are evoked, frown-muscle (corrugator) activity decreases while zygomaticus and peripheral muscle activity increases [40]. Conversely, when a negative response occurs, frown-muscle activity is higher than during a positive response [38]. ...
Article
Full-text available
With the advent of the "her economy" era, the new energy automobile market has also ushered in the "her era", and female consumers have gradually become the main force of domestic and foreign vehicle consumption, thus contributing to the sustainable and rapid development of many female new energy automobile market segments. In this context, this study explores the icon cognitive preferences of female drivers based on gender differences in icon cognition by taking the human–machine interface icons in new energy automobiles as a case study. Firstly, we conducted behavioral response experiments and facial electromyography experiments on 20 male and female participants to analyze their cognitive preferences for icons by combining the four dimensions of "semantic dimension, conceptual dimension, contextual dimension and pragmatic dimension". The results showed that the four-dimensional graphic deconstruction format had a significant effect on the improvement of icon recognition performance. At the same time, we designed 10 formats of icons as experimental stimulus materials and combined them with subjective scales to jointly explore the reasons for the bias of different gender participants towards icons. The results show that there are significant gender differences in icon perception on a four-dimensional basis, with males more likely to be disturbed by icon constituent elements (semantic dimension), while females are more likely to be disturbed by icon metaphors (semantic dimension) and usage environment and interface context (contextual dimension). This study helps to explore the best balance between studying women's driving experiences in new energy vehicles and the sustainable product life cycle, and then improve the accuracy of women drivers' decision-making behavior in new energy vehicles to ensure driving safety.
... Consequently, both expressions may have had the same inherent meaning and also similar implications in the context of our study. Thus, we suggest that, especially as spontaneous facial mimicry reactions are typically more subtle and not of the same intensity as prototypical (posed) emotion expressions [41], smiling in response to laughter should have been an appropriate proxy for shared laughter (i.e. laughter mimicry) that signals affiliation and fosters bonding [20]. ...
Article
Full-text available
Laughter is an ambiguous phenomenon in response to both positive and negative events and a social signal that coordinates social interactions. We assessed (i) who laughs and why, and (ii) if the type of laughter and whether the observer approves of it impact on facial mimicry as a proxy for shared laughter. For this, 329 participants watched funny, schadenfreude and disgusting scenes and then saw individuals who purportedly reacted to each scene while participants' facial expressions were recorded and analysed. Participants laughed more in response to funny than in response to schadenfreude scenes and least in response to disgust scenes, and laughter within each scene could be explained both by situational perceptions of the scenes as well as by individual differences. Furthermore, others’ laughter in response to funny scenes was perceived as more appropriate, elicited more closeness and more laughter mimicry than others' laughter in response to schadenfreude and especially in response to disgust scenes. Appropriateness and closeness as well as individual differences could explain laughter mimicry within each scene. This is in line with the notion that laughter is not per se an affiliative signal and that different types of laughter have distinct social implications. This article is part of the theme issue ‘Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience’.
... As the facial expression or emotion occurs and/or intensifies, the confidence score rises from 0 (no expression) to 100 (expression fully present). The software was tested in various preliminary explorations conducted by the Affectiva company [43], and in several independent research studies [44][45][46][47]. These studies seem to confirm that the software is reliable for recognizing basic, subtle emotional facial expressions in standardized images when participants do not intend to conceal their facial reactions; notably, the software demonstrates precision similar to that of the Facial Action Coding System and facial electromyography. ...
Article
Full-text available
Many studies have demonstrated that exposure to simulated natural scenes has positive effects on emotions and reduces stress. In the present study, we investigated emotional facial expressions while viewing images of various types of natural environments. Both automated facial expression analysis by iMotions’ AFFDEX 8.1 software (iMotions, Copenhagen, Denmark) and self-reported emotions were analyzed. Attractive and unattractive natural images were used, representing either open or closed natural environments. The goal was to further understand the actual features and characteristics of natural scenes that could positively affect emotional states and to evaluate face reading technology to measure such effects. It was predicted that attractive natural scenes would evoke significantly higher levels of positive emotions than unattractive scenes. The results showed generally small values of emotional facial expressions while observing the images. The facial expression of joy was significantly higher than that of other registered emotions. Contrary to predictions, there was no difference between facial emotions while viewing attractive and unattractive scenes. However, the self-reported emotions evoked by the images showed significantly larger differences between specific categories of images in accordance with the predictions. The differences between the registered emotional facial expressions and self-reported emotions suggested that the participants more likely described images in terms of common stereotypes linked with the beauty of natural environments. This result might be an important finding for further methodological considerations.
... intentionally [18,19]. Recently, spontaneous and natural facial expressions have become a topic of great interest [17][18][19][20][21]. Generally, spontaneous facial expressions are weaker and more diverse than intentionally made facial expressions, which makes it challenging to evaluate spontaneous facial expressions [21][22][23][24]. Among the investigations of spontaneous facial expressions, in one study, researchers showed that human preferences for images could be predicted [25]. ...
Article
Full-text available
With advances in digital technologies, the number of images we are subjected to every day has increased significantly. Predicting and recommending human subjective preferences for images is useful for selecting image data efficiently to avoid the unnecessary use of valuable storage space. In this study, we investigate the use of a machine learning model for estimating human preferences for images from spontaneous facial features extracted from video images of human faces while they are performing a natural preference evaluation task. We use two image categories and compare the results between categories. We also conduct an experiment to assess the performance of human raters in predicting the preferences of others from facial videos. As a standard to compare predictive performance from facial expressions, we also test prediction from high-level image features by training a deep learning model using the obtained experimental data. The results show that the spontaneous facial features produce prediction performance comparable with, and for lunch box images, marginally better than, the image features specifically trained for our dataset, and clearly outperform the human raters. We further examine which facial expression features are important for prediction and show that the important facial features differ between image categories. Our results show that facial expressions can be used to predict the preference for images, to some extent, although we need to be careful when generalizing the learned model to other image categories. Our machine learning approach also provides insights into the differences in the cognitive mechanisms used for preference evaluation for different image categories.
... There is evidence that an emotional prototype face is likely to be a better approximation of the centre of emotional 'face space' than a neutral face [57], and a neutral face is not without emotion. For example, neutral and angry faces have been found to elicit comparable negative facial responses when passively viewed, which may indicate that neutral faces are perceived to be negatively valenced [58]. Finally, 15-image linear morph sequences were generated for each emotion, ranging in equally spaced emotional intensities from the emotional prototype (an emotionally ambiguous face; 5% emotional signal) to the emotional exemplar (the full intensity unambiguous emotion; 100% emotional signal). ...
Article
Full-text available
State anxiety appears to influence facial emotion processing (Attwood et al. 2017 R. Soc. Open Sci. 4, 160855). We aimed to (i) replicate these findings and (ii) investigate the role of trait anxiety, in an experiment with healthy UK participants (N = 48, 50% male, 50% high trait anxiety). High and low state anxiety were induced via inhalations of 7.5% carbon dioxide enriched air and medical air, respectively. High state anxiety reduced global emotion recognition accuracy (p = 0.01, ηp² = 0.14), but it did not affect interpretation bias towards perceiving anger in ambiguous angry–happy facial morphs (p = 0.18, ηp² = 0.04). We found no clear evidence of a relationship between trait anxiety and global emotion recognition accuracy (p = 0.60, ηp² = 0.01) or interpretation bias towards perceiving anger (p = 0.83, ηp² = 0.01). However, there was greater interpretation bias towards perceiving anger (i.e. away from happiness) during heightened state anxiety, among individuals with high trait anxiety (p = 0.03, dz = 0.33). State anxiety appears to impair emotion recognition accuracy, and among individuals with high trait anxiety, it appears to increase biases towards perceiving anger (away from happiness). Trait anxiety alone does not appear to be associated with facial emotion processing.
Article
Full-text available
Several psychological brand performance indicators that predict a brand’s intermediate market share have been identified. So far, rating studies have exclusively investigated brand effects in terms of linear relationships, and their specific and possibly nonlinear interactions have yet to be examined in comparison. Hence, we investigated the relative importance of three well-established psychological performance indicators, attitude toward the brand, perceived quality, and brand experience, in predicting brand loyalty. A sample of 1,077 participants completed an online survey and rated subsets of 105 international brands from various product and service industries. Relations between attitude, perceived quality, and experience in predicting loyalty toward a brand were analyzed using semi-parametric additive mixed regression models. We replicated that all three predictors significantly impacted brand loyalty and revealed a pronounced nonlinear relationship between attitude and loyalty. The inclusion of nonlinear interactions between predictors improved model fit. In particular, the nonlinear interaction between perceived quality and attitude substantially impacted brand loyalty. In addition, these effects differ by type of industry, specifically fast-moving consumer goods, automotive, fashion, electronics, and finance/insurance. These findings draw attention to nonlinear patterns between specific psychological features of brands. Future research should address nonlinear effects and the specific interactions of other essential predictors of brand equity.
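As a rough illustration of the semi-parametric additive approach mentioned above, the sketch below fits smooth main effects plus a tensor-product interaction between attitude and perceived quality. It ignores the mixed (random-effects) structure and uses simulated stand-in data, so it is an outline of the technique rather than the reported analysis.

```python
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(0)
n = 500
# Illustrative stand-in for the survey data:
# columns 0-2 = attitude, perceived quality, brand experience; y = loyalty
X = rng.uniform(1, 7, size=(n, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, n))

# Smooth main effects plus a quality-by-attitude interaction surface
gam = LinearGAM(s(0) + s(1) + s(2) + te(0, 1)).fit(X, y)
gam.summary()  # effective degrees of freedom > 1 indicate nonlinearity
```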
Chapter
Full-text available
It is increasingly evident that brain functions are strongly involved in processes of knowledge and behavior. Currently, there are efforts to bring scientific advances about the brain into the field of education, with the aim of broadening the understanding of teaching-learning and instruction-assessment processes and of the interactions between students and teachers in the development of skills through formal education. Numerous studies reflect the convergence between the neurosciences and pedagogy, addressing various aspects that are the object of study of the latter. The purpose of this work is to describe the practice of school neuropsychology, with emphasis on neurocognitive assessment within schools. It also seeks to detail the objectives and contributions of this discipline, indicating how it converges with other areas of knowledge and how it orients its interventions to maximize student learning and teacher performance.
Chapter
Full-text available
In the process of neurodevelopment, multiple factors influence the acquisition of skills, competencies, and learning, as well as the development of functions that facilitate a person's adaptation (Fejerman, 2010). The brain is particularly vulnerable during childhood, and any alteration in its functioning can significantly affect a person's life in the long term. Worldwide, it is estimated that 20% of children experience mental disorders related to difficulties in learning and language development, this being the main cause of disability in the early stages of life (Boyle et al., 1994; Merikangas et al., 2010). Chapter 9 presents data from public and private institutions on Clinical Child Neuropsychology services and the most frequent disorders they treat.
Book
Full-text available
This book explores how psychology addresses and resolves some of the most pressing challenges of today's society. From inclusive education and workplace stress to neuropsychology and clinical intervention, it covers a wide range of social and personal problems that psychology professionals help to mitigate. Through innovative and practical approaches, it highlights the contributions of psychology to improving well-being, mental health, and quality of life in diverse contexts, demonstrating its essential role in a world in constant transformation.
Article
Full-text available
Facial emotion recognition (FER) represents a significant outcome of the rapid advancements in artificial intelligence (AI) technology. In today's digital era, the ability to decipher emotions from facial expressions has evolved into a fundamental mode of human interaction and communication. As a result, FER has penetrated diverse domains, including but not limited to medical diagnosis, customer feedback analysis, the automation of automobile driver systems, and the evaluation of student comprehension. Furthermore, it has matured into a captivating and dynamic research field, capturing the attention and curiosity of contemporary scholars and scientists. The primary objective of this paper is to provide an exhaustive review of FER systems. Its significance goes beyond offering a comprehensive resource; it also serves as a valuable guide for emerging researchers in the FER domain. Through a meticulous examination of existing FER systems and methodologies, this review equips them with essential insights and guidance for their future research pursuits. Moreover, this comprehensive review contributes to the expansion of their knowledge base, facilitating a profound understanding of this rapidly evolving field. In a world increasingly dependent on technology for communication and interaction, the study of FER holds a pivotal role in human‐computer interaction (HCI). It not only provides valuable insights but also unlocks a multitude of possibilities for future innovations and applications. As we continue to integrate AI and facial emotion recognition into our daily lives, the importance of comprehending and enhancing FER systems becomes increasingly evident. This paper serves as a stepping stone for researchers, nurturing their involvement in this exciting and ever‐evolving field.
Article
Full-text available
Introduction Consumers’ emotional responses are the prime target for marketing commercials. Facial expressions provide information about a person’s emotional state, and technological advances have enabled machines to automatically decode them. Method With automatic facial coding, we investigated the relationships between facial movements (i.e., action unit activity) and self-reported emotion as well as advertisement and brand effects. To this end, we recorded and analyzed the facial responses of 219 participants while they watched a broad array of video commercials. Results Facial expressions significantly predicted self-reported emotion as well as advertisement and brand effects. Interestingly, facial expressions had incremental value beyond self-reported emotion in the prediction of advertisement and brand effects. Hence, automatic facial coding appears to be useful as a non-verbal quantification of advertisement effects beyond self-report. Discussion This is the first study to measure a broad spectrum of automatically scored facial responses to video commercials. Automatic facial coding is a promising non-invasive and non-verbal method to measure emotional responses in marketing.
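The incremental-value finding corresponds to comparing a model that predicts advertisement or brand effects from self-reported emotion alone with one that adds the facial (action unit) predictors, and testing the gain in explained variance. The sketch below illustrates that comparison with hypothetical variable and file names, not the authors' actual models.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("commercials.csv")  # hypothetical: one row per participant x ad

base = smf.ols("ad_attitude ~ valence + arousal", data=df).fit()
full = smf.ols("ad_attitude ~ valence + arousal + au12 + au4", data=df).fit()

print(base.rsquared, full.rsquared)  # incremental R^2 of the facial predictors
print(full.compare_f_test(base))     # F-test for the added block
```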
Article
Full-text available
Emotion regulation plays a central role in mental health and illness, but little is known about even the most basic forms of emotion regulation. To examine the acute effects of inhibiting negative and positive emotion, we asked 180 female participants to watch sad, neutral, and amusing films under 1 of 2 conditions. Suppression participants (N = 90) inhibited their expressive behavior while watching the films; no suppression participants (N = 90) simply watched the films. Suppression diminished expressive behavior in all 3 films and decreased amusement self-reports in sad and amusing films. Physiologically, suppression had no effect in the neutral film, but clear effects in both negative and positive emotional films, including increased sympathetic activation of the cardiovascular system. On the basis of these findings, we suggest several ways emotional inhibition may influence psychological functioning.
Article
Full-text available
Facial expressions provide insight into a person’s emotional experience. Automatically decoding these expressions has been made possible by tremendous progress in the field of computer vision. Researchers are now able to decode emotional facial expressions with impressive accuracy in standardized images of prototypical basic emotions. We tested the sensitivity of a well-established automatic facial coding software program to detect spontaneous emotional reactions in individuals responding to emotional pictures. We compared automatically generated scores for valence and arousal of the FaceReader (FR; Noldus Information Technology) with the current psychophysiological gold standard of measuring emotional valence (Facial Electromyography, EMG) and arousal (Skin Conductance, SC). We recorded physiological and behavioral measurements of 43 healthy participants while they looked at pleasant, unpleasant, or neutral scenes. When viewing pleasant pictures, FR Valence and EMG were both comparably sensitive. However, for unpleasant pictures, FR Valence showed an expected negative shift, but the signal did not differentiate well between responses to neutral and unpleasant stimuli, which were distinguishable with EMG. Furthermore, FR Arousal values had a stronger correlation with self-reported valence than with arousal, while SC was sensitive and specifically associated with self-reported arousal. This is the first study to systematically compare FR measurement of spontaneous emotional reactions to standardized emotional images with established psychophysiological measurement tools. This novel technology has yet to make strides to surpass the sensitivity of established psychophysiological measures. However, it provides a promising new measurement technique for non-contact assessment of emotional responses.
Article
Full-text available
In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist to analyze dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers, and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases that conveyed the basic six emotions (happiness, sadness, anger, fear, surprise, and disgust) either in posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, there was considerable variance in recognition accuracy ranging from 48% to 62%. Subsequent analyses per type of expression revealed that performance by the two best performing classifiers approximated those of human observers, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions, and highlight the need for more spontaneous facial databases that can act as a benchmark in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions that have been recorded in controlled environments.
Article
Full-text available
Human faces express emotions, informing others about their affective states. In order to measure expressions of emotion, facial Electromyography (EMG) has widely been used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry, and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected by both the software and by EMG above chance, while neutral expressions were more often falsely identified as negative by EMG compared to the software. Overall, EMG and software values correlated highly. In conclusion, Affectiva Affdex software can identify facial expressions and its results are comparable to EMG findings.
Article
Full-text available
This study validates automated emotion and action unit (AU) coding applying FaceReader 7 to a dataset of standardized facial expressions of six basic emotions (Standardized and Motivated Facial Expressions of Emotion). Percentages of correctly and falsely classified expressions are reported. The validity of coding AUs is provided by correlations between the automated analysis and manual Facial Action Coding System (FACS) scoring for 20 AUs. On average 80% of the emotional facial expressions are correctly classified. The overall validity of coding AUs is moderate with the highest validity indicators for AUs 1, 5, 9, 17 and 27. These results are compared to the performance of FaceReader 6 in previous research, with our results yielding comparable validity coefficients. Practical implications and limitations of the automated method are discussed.
Article
Full-text available
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Article
Full-text available
Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under the baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. In contrast to the photograph condition, where all target emotions were shown clearly, the scenario condition elicited the target emotions clearly only for happy and surprised expressions. The photograph and scenario conditions yielded different profiles for the intensities of emotions and facial action units associated with all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions but suggest the possibility that the theory may need to be modified based on empirical evidence.
Article
Full-text available
This article discusses a new systems model of dyadic nonverbal interaction. The model builds on earlier theories by integrating partners’ parallel sending and receiving nonverbal processes into a broader, dynamic ecological system. It does so in two ways. First, it moves the level of description beyond the individual level to the coordination of both partners’ contributions to the interaction. Second, it recognizes that the relationships between (a) individuals’ characteristics and processes and (b) the social ecology of the interaction setting are reciprocal and best analyzed at the systems level. Thus, the systems model attempts to describe and explain the dynamic interplay among individual, dyadic, and environmental processes in nonverbal interactions. The potential utility and the limitations of the systems model are discussed and the implications for future research considered. Although the systems model is focused explicitly on face-to-face nonverbal communication, it has considerable relevance for digital communication. Specifically, this model provides a useful framework for examining the social effects of mobile device use and as a template for studying human–robot interactions.
Article
Full-text available
The few previous studies testing whether or not microexpressions are indicators of deception have produced equivocal findings, which may have resulted from restrictive operationalizations of microexpression duration. In this study, facial expressions of emotion produced by community participants in an initial screening interview in a mock crime experiment were coded for occurrence and duration. Various expression durations were tested to determine whether they differentiated truthtellers from liars with respect to their intent to commit a malicious act in the future. We operationalized microexpressions as expressions whose duration was shorter than that of spontaneously occurring, non-concealed, non-repressed facial expressions of emotion based on empirically documented findings, that is ≤0.50 s, and then more systematically ≤0.40, ≤0.30, and ≤0.20 s. We also compared expressions occurring between 0.50 and 6.00 s and all expressions ≤6.00 s. Microexpressions of negative emotions occurring ≤0.40 and ≤0.50 s differentiated truthtellers and liars. Expressions of negative emotions occurring ≤6.00 s also differentiated truthtellers from liars, but this finding did not survive when expressions ≤1.00 s were filtered from the data. These findings provide the first systematic evidence for the existence of microexpressions at various durations and for their possible ability to differentiate truthtellers from liars about their intent to commit an act of malfeasance in the future.
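The duration thresholds above amount to filtering coded expression events by onset-to-offset length. A minimal sketch, assuming a hypothetical list of coded events with times in seconds:

```python
# Minimal sketch: filtering hypothetical coded expression events by duration,
# illustrating the <= 0.50, 0.40, 0.30, 0.20 s operationalizations discussed above.
events = [
    {"emotion": "anger",   "onset": 12.30, "offset": 12.68},
    {"emotion": "fear",    "onset": 40.10, "offset": 40.95},
    {"emotion": "disgust", "onset": 75.00, "offset": 75.18},
]

def filter_by_duration(events, max_duration):
    """Return events whose duration does not exceed max_duration seconds."""
    return [e for e in events if (e["offset"] - e["onset"]) <= max_duration]

for threshold in (0.50, 0.40, 0.30, 0.20):
    micro = filter_by_duration(events, threshold)
    print(threshold, [e["emotion"] for e in micro])
```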
Article
Full-text available
Most experimental studies of facial expression processing have used static stimuli (photographs), yet facial expressions in daily life are generally dynamic. In its original photographic format, the Karolinska Directed Emotional Faces (KDEF) has been frequently utilized. In the current study, we validate a dynamic version of this database, the KDEF-dyn. To this end, we applied animation between neutral and emotional expressions (happy, sad, angry, fearful, disgusted, and surprised; 1,033-ms unfolding) to 40 KDEF models, with morphing software. Ninety-six human observers categorized the expressions of the resulting 240 video-clip stimuli, and automated face analysis assessed the evidence for 6 expressions and 20 facial action units (AUs) at 31 intensities. Low-level image properties (luminance, signal-to-noise ratio, etc.) and other purely perceptual factors (e.g., size, unfolding speed) were controlled. Human recognition performance (accuracy, efficiency, and confusions) patterns were consistent with prior research using static and other dynamic expressions. Automated assessment of expressions and AUs was sensitive to intensity manipulations. Significant correlations emerged between human observers’ categorization and automated classification. The KDEF-dyn database aims to provide a balance between experimental control and ecological validity for research on emotional facial expression processing. The stimuli and the validation data are available to the scientific community.
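The 31 intensity levels mentioned above come from morphing between a neutral and a full emotional expression. A minimal sketch of generating evenly spaced intensity steps, assuming two hypothetical aligned grayscale images; a simple pixel blend stands in here for the landmark-based morphing software actually used:

```python
# Minimal sketch: linear blending between a neutral and a full-expression image
# at 31 evenly spaced intensity levels (a simplification of landmark-based morphing).
import numpy as np

rng = np.random.default_rng(0)
neutral = rng.random((64, 64))      # placeholder for an aligned neutral face image
expression = rng.random((64, 64))   # placeholder for the full-expression image

intensities = np.linspace(0.0, 1.0, 31)          # 31 intensity levels from 0% to 100%
morphs = [(1 - a) * neutral + a * expression for a in intensities]
print(len(morphs), morphs[15].shape)             # 31 frames, mid-intensity frame shape
```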
Article
Full-text available
Experience and expression are orthogonal emotion dimensions: we do not always show what we feel, nor do we always feel what we show. However, the experience and expression dimensions of emotion are rarely considered simultaneously. We propose a model outlining the intersection of goals for emotion experience and expression. We suggest that these goals may be aligned (e.g., feeling and showing) or misaligned (e.g., feeling but not showing). Our model posits these states can be separated into goals to (a) experience and express, (b) experience but not express, (c) express but not experience, or (d) neither experience nor express positive and negative emotion. We contend that considering intersections between experience and expression goals will advance understanding of emotion regulation choice and success.
Article
Full-text available
It is generally thought to be adaptive that fear relevant stimuli in the environment can capture and hold our attention; and in psychopathology attentional allocation is thought to be cue-specific. Such hypervigilance toward threatening cues or difficulty to disengage attention from threat has been demonstrated for a variety of stimuli, for example, toward evolutionary prepared animals or toward socially relevant facial expressions. Usually, specific stimuli have been examined in individuals with particular fears (e.g., animals in animal fearful and faces in socially fearful participants). However, different kinds of stimuli are rarely examined in one study. Thus, it is unknown how different categories of threatening stimuli compete for attention and how specific kinds of fears modulate these attentional processes. In this study, we used a free viewing paradigm: pairs of pictures with threat-related content (spiders or angry faces) or neutral content (butterflies or neutral faces) were presented side by side (i.e., spiders and angry faces, angry and neutral faces, spiders and butterflies, butterflies and neutral faces). Eye-movements were recorded while spider fearful, socially anxious, or non-anxious participants viewed the picture pairs. Results generally replicate the finding that unpleasant pictures more effectively capture attention in the beginning of a trial compared to neutral pictures. This effect was more pronounced in spider fearful participants: the higher the fear the quicker they were in looking at spiders. This was not the case for high socially anxious participants and pictures of angry faces. Interestingly, when presented next to each other, there was no preference in initial orientation for either spiders or angry faces. However, neutral faces were looked at more quickly than butterflies. Regarding sustained attention, we found no general preference for unpleasant pictures compared to neutral pictures.
Article
Full-text available
Much emotion research has focused on the end result of the emotion process, categorical emotions, as reported by the protagonist or diagnosed by the researcher, with the aim of differentiating these discrete states. In contrast, this review concentrates on the emotion process itself by examining how (a) elicitation, or the appraisal of events, leads to (b) differentiation, in particular, action tendencies accompanied by physiological responses and manifested in facial, vocal, and gestural expressions, before (c) conscious representation or experience of these changes (feeling) and (d) categorizing and labeling these changes according to the semantic profiles of emotion words. The review focuses on empirical, particularly experimental, studies from emotion research and neighboring domains that contribute to a better understanding of the unfolding emotion process and the underlying mechanisms, including the interactions among emotion components.
Article
Full-text available
Using one of the key bibliometric methods, namely the index of citations, from a comprehensive multidisciplinary bibliographic electronic database, Web of Science, this article provides a circumscribed descriptive analysis of 1000 most-cited papers in the research field of visible nonverbal behavior. Using this method, we outline the most influential topics and research programs, and sketch the development of relevant features over the years. Topics include nonverbal behavior, facial expression, personal space, gesture, thin slices, and others, but exclude vocal or auditory cues. The results show that the 1000 most cited papers on visible nonverbal behavior emerged in the 1960s, and peaked in 2008. Revealing the strong interdisciplinary nature of the field, the 1000 papers come from 297 journals. Further, 33 journals had 7 or more papers, contributing to more than 50% (n = 515) of the 1000 most cited papers. The most cited paper (Whalen et al. in Emotion 1(1):70–83, 2001. 10.1037/0033-2909.111.2.256, a neuroscience paper) is cited 1341 times, and Paul Ekman has the highest number of papers (17) as first or last author. Results are compared with two other corpora of papers (i.e., a random sample control group and a current papers group) to provide a more thorough understanding of possible future directions in visible nonverbal behavior. Results differ from those that emerge from other citation indexes and are intended to give a flavor of key peer reviewed papers (excluding books and chapters) contributing to the development of scientific knowledge on visible nonverbal behavior.
Article
Full-text available
Facial mimicry (FM) is an automatic response to imitate the facial expressions of others. However, neural correlates of the phenomenon are as yet not well established. We investigated this issue using simultaneously recorded EMG and BOLD signals during perception of dynamic and static emotional facial expressions of happiness and anger. During display presentations, BOLD signals and zygomaticus major (ZM), corrugator supercilii (CS) and orbicularis oculi (OO) EMG responses were recorded simultaneously from 46 healthy individuals. Subjects reacted spontaneously to happy facial expressions with increased EMG activity in ZM and OO muscles and decreased CS activity, which was interpreted as FM. Facial muscle responses correlated with BOLD activity in regions associated with motor simulation of facial expressions [i.e., inferior frontal gyrus, a classical Mirror Neuron System (MNS)]. Further, we also found correlations for regions associated with emotional processing (i.e., insula, part of the extended MNS). It is concluded that FM involves both motor and emotional brain structures, especially during perception of natural emotional expressions.
Article
Full-text available
The goal of this study was to validate AFFDEX and FACET, two algorithms classifying emotions from facial expressions, in iMotions’s software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare the classification quality. Results show a large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, 110 participants’ facial expressions were measured while being exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED) and the Radboud Faces Database (RaFD). Accuracy again differed for distinct emotions, and FACET performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but performs worse for more natural facial expressions. We discuss potential sources for limited validity and suggest research directions in the broader context of emotion research.
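The Matching Scores referred to above are per-emotion accuracies: the proportion of stimuli whose intended emotion equals the classifier's top label. A minimal sketch with hypothetical labels (not the study's data):

```python
# Minimal sketch: per-emotion matching scores from intended vs. predicted labels.
from collections import defaultdict

intended  = ["happy", "angry", "fear", "happy", "sad", "angry"]
predicted = ["happy", "angry", "surprise", "happy", "sad", "fear"]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, pred_label in zip(intended, predicted):
    total[true_label] += 1
    correct[true_label] += int(true_label == pred_label)

for emotion in sorted(total):
    print(emotion, f"{correct[emotion] / total[emotion]:.0%}")
```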
Article
Full-text available
We collected and Facial Action Coding System (FACS) coded over 2,600 free-response facial and body displays of 22 emotions in China, India, Japan, Korea, and the United States to test 5 hypotheses concerning universals and cultural variants in emotional expression. New techniques enabled us to identify cross-cultural core patterns of expressive behaviors for each of the 22 emotions. We also documented systematic cultural variations of expressive behaviors within each culture that were shaped by the cultural resemblance in values, and identified a gradient of universality for the 22 emotions. Our discussion focused on the science of new expressions and how the evidence from this investigation identifies the extent to which emotional displays vary across cultures.
Article
Full-text available
According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication.
Article
Full-text available
Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyography recordings were recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.
Article
Full-text available
Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.
Article
Full-text available
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.
Article
Full-text available
In this study, we validated automated facial coding (AFC) software, FaceReader (Noldus, 2014), on 2 publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FACS) index of agreement. In 2005, matching scores of 89% were reported for FaceReader. However, previous research used a version of FaceReader that implemented older algorithms (version 1.0) and did not contain FACS classifiers. In this study, we tested the newest version (6.0). FaceReader recognized 88% of the target emotional labels in the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and Amsterdam Dynamic Facial Expression Set (ADFES). The software reached a FACS index of agreement of 0.67 on average in both datasets. The results of this validation test are meaningful only in relation to human performance rates for both basic emotion recognition and FACS coding. Human emotion recognition for the 2 datasets was 85%; therefore, FaceReader is as good at recognizing emotions as humans. To receive FACS certification, a human coder must reach an agreement of 0.70 with the master coding of the final test. Even though FaceReader did not attain this score, action units (AUs) 1, 2, 4, 5, 6, 9, 12, 15, and 25 might be used with high accuracy. We believe that FaceReader has proven to be a reliable indicator of basic emotions in the past decade and has the potential to become similarly robust with FACS.
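A minimal sketch of the kind of agreement index reported above, under the common assumption (stated here, not taken from the abstract) that agreement is twice the number of shared AUs divided by the total number of AUs coded by both sources:

```python
# Minimal sketch: an AU agreement index between a human FACS coder and software,
# computed as 2 * shared AUs / (AUs coded by human + AUs coded by software).
def facs_agreement(aus_coder_a, aus_coder_b):
    shared = len(aus_coder_a & aus_coder_b)
    total = len(aus_coder_a) + len(aus_coder_b)
    return 2 * shared / total if total else 1.0

human = {1, 2, 4, 12, 25}        # hypothetical AUs scored by the human coder
software = {1, 4, 12, 25, 26}    # hypothetical AUs scored by the software
print(f"Agreement index: {facs_agreement(human, software):.2f}")
```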
Article
Full-text available
Facial expression is central to human experience. Its efficiency and valid measurement are challenges that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka “spontaneous”) facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind for the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action.
Article
Full-text available
Increasing evidence suggests that Duchenne (D) smiles may not only occur as a sign of spontaneous enjoyment, but can also be deliberately posed. The aim of this paper was to investigate whether people mimic spontaneous and deliberate D and non-D smiles to a similar extent. Facial EMG responses were recorded while participants viewed short video-clips of each smile category which they had to judge with respect to valence, arousal, and genuineness. In line with previous research, valence and arousal ratings varied significantly as a function of smile type and elicitation condition. However, differences in facial reactions occurred only for smile type (i.e., D and non-D smiles). The findings have important implications for questions relating to the role of facial mimicry in expression understanding and suggest that mimicry may be essential in discriminating among various meanings of smiles.
Article
Full-text available
Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver intensity of spontaneous facial action database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for presence, absence, and intensity of facial action units according to the facial action unit coding system. Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combinations to comprise more molar facial expressions. To provide a baseline for use in future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.
Article
Full-text available
Successful interpersonal functioning often requires both the ability to mask inner feelings and the ability to accurately recognize others' expressions-but what if effortful control of emotional expressions impacts the ability to accurately read others? In this study, we examined the influence of self-controlled expressive suppression and mimicry on facial affect sensitivity-the speed with which one can accurately identify gradually intensifying facial expressions of emotion. Muscle activity of the brow (corrugator, related to anger), upper lip (levator, related to disgust), and cheek (zygomaticus, related to happiness) were recorded using facial electromyography while participants randomized to one of three conditions (Suppress, Mimic, and No-Instruction) viewed a series of six distinct emotional expressions (happiness, sadness, fear, anger, surprise, and disgust) as they morphed from neutral to full expression. As hypothesized, individuals instructed to suppress their own facial expressions showed impairment in facial affect sensitivity. Conversely, mimicry of emotion expressions appeared to facilitate facial affect sensitivity. Results suggest that it is difficult for a person to be able to simultaneously mask inner feelings and accurately "read" the facial expressions of others, at least when these expressions are at low intensity. The combined behavioral and physiological data suggest that the strategies an individual selects to control his or her own expression of emotion have important implications for interpersonal functioning. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Article
Full-text available
This paper reports initial analyses of the reliability and validity, as well as clinical cut-off values, of the German adaptation of the Social Interaction Anxiety Scale and the Social Phobia Scale (Mattick & Clarke, 1989). The scales were administered to 43 patients with social phobia, 69 patients with other mental disorders, and 24 control participants without mental disorders. The obtained values for internal consistency and test-retest correlation indicate very high reliability. Evidence of convergent validity was provided by high correlations with construct-related measures of social phobia, whereas correlations with measures of depression and anxiety were, as expected, lower. The two scales discriminate individuals with social phobia very well from persons without mental disorders and from anxiety patients, whereas discrimination from depressive patients is less pronounced. The cut-off values obtained lie clearly below the American ...
Article
Full-text available
Structural models of emotion represent the fact that emotions are perceived as systematically interrelated. These interrelations may reveal a basic property of the human conception of emotions, or they may represent an artifact that is due to semantic relations learned along with the emotion lexicon. The 1st alternative was supported by results from a series of scalings of 20 emotional facial expressions, results that could not easily be attributed to word similarity. Similarity data on the facial expressions were obtained from 30 undergraduates and 42 4–5 yr olds. For both groups, similarity was measured without the use of emotion labels by asking Ss to group together people who appear to feel alike. The structure of emotions obtained from both children and adults was as predicted: a roughly circular order in a 2-dimensional space, the axes of which could be interpreted as pleasure–displeasure and arousal–sleepiness. The form and meaning of this structure was supported through 2 additional scalings of the facial expressions with adults: a multidimensional scaling based on direct ratings of similarity–dissimilarity and unidimensional scalings on the pleasure–displeasure and arousal–sleepiness dimensions. (27 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
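The circular, two-dimensional structure described above is typically recovered by multidimensional scaling of pairwise similarity judgments. A minimal sketch with a hypothetical dissimilarity matrix (illustrative values only):

```python
# Minimal sketch: MDS of pairwise expression dissimilarities into two dimensions,
# the kind of analysis that yields a pleasure/arousal circumplex.
import numpy as np
from sklearn.manifold import MDS

labels = ["happy", "excited", "angry", "sad", "calm"]
dissim = np.array([           # symmetric, hypothetical dissimilarities
    [0.0, 0.3, 0.9, 0.8, 0.5],
    [0.3, 0.0, 0.7, 0.9, 0.6],
    [0.9, 0.7, 0.0, 0.5, 0.8],
    [0.8, 0.9, 0.5, 0.0, 0.4],
    [0.5, 0.6, 0.8, 0.4, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for name, (x, y) in zip(labels, coords):
    print(f"{name:8s} {x:+.2f} {y:+.2f}")
```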
Article
Full-text available
Emotional cues facilitate motor responses that are associated with approach or avoidance. Previous research has shown that evaluative processing of positive and negative facial expression stimuli is also linked to motor schemata of facial muscles. To further investigate the influence of different types of emotional stimuli on facial reactions, we conducted a study with pictures of emotional facial expressions (KDEF) and scenes (IAPS). Healthy participants were asked to respond to the positive or negative facial expressions (KDEF) and scenes (IAPS) with specific facial muscles in a valence-congruent (stimulus valence matches muscle related valence) or a valence-incongruent condition (stimulus valence is contrary to muscle related valence). Additionally, they were asked to rate pictures in terms of valence and arousal. Muscular response latencies were recorded by an electromyogram. Overall, response latencies were shorter in response to facial expressions than to complex pictures of scenes. For both stimulus categories, response latencies with valence-compatible muscles were shorter compared to reactions with incompatible muscles. Moreover, correlations between picture ratings and facial muscle reactions for happy facial expressions as well as positive scenes reflect a direct relationship between perceived intensity of the subjective emotional experience and physiological responding. Results replicate and extend previous research, indicating that incompatibility effects are reliable across different stimulus types and are not limited to facial mimicry. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The Karolinska Directed Emotional Faces (KDEF; Lundqvist, Flykt, & Öhman, 1998) is a database of pictorial emotional facial expressions for use in emotion research. The original KDEF database consists of a total of 490 JPEG pictures (72x72 dots per inch) showing 70 individuals (35 women and 35 men) displaying 7 different emotional expressions (Angry, Fearful, Disgusted, Sad, Happy, Surprised, and Neutral). Each expression is viewed from 5 different angles and was recorded twice (the A and B series). All the individuals were trained amateur actors between 20 and 30 years of age. For participation in the photo session, beards, moustaches, earrings, eyeglasses, and visible make-up were exclusion criteria. All the participants were instructed to try to evoke the emotion that was to be expressed and to make the expression strong and clear. In a validation study (Goeleven et al., 2008), a series of the KDEF images were used and participants rated emotion, intensity, and arousal on 9-point Likert scales. In that same study, a test-retest reliability analysis was performed by computing the percentage similarity of emotion type ratings and by calculating the correlations for the intensity and arousal measures over a one-week period. With regard to the intensity and arousal measures, a mean correlation across all pictures of .75 and .78 respectively was found. (APA PsycTests Database Record (c) 2019 APA, all rights reserved)
Article
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence, natural language processing, to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first of its kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.
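One of the simplest fusion strategies discussed in such surveys is early, feature-level fusion: concatenating per-modality feature vectors before a single classifier is trained on the joint vector. A minimal sketch with hypothetical feature vectors:

```python
# Minimal sketch: feature-level (early) fusion of audio, visual, and text features
# by simple concatenation into one joint vector per sample.
import numpy as np

audio_features = np.array([0.2, 0.7, 0.1])        # e.g. prosodic descriptors (hypothetical)
visual_features = np.array([0.9, 0.4])            # e.g. facial action unit evidence (hypothetical)
text_features = np.array([0.05, 0.6, 0.3, 0.8])   # e.g. sentiment embedding slice (hypothetical)

fused = np.concatenate([audio_features, visual_features, text_features])
print(fused.shape)  # one joint vector, fed to a downstream classifier
```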
Article
Evidence on the coherence between emotion and facial expression in adults from laboratory experiments is reviewed. High coherence has been found in several studies between amusement and smiling; low to moderate coherence between other positive emotions and smiling. The available evidence for surprise and disgust suggests that these emotions are accompanied by their "traditional" facial expressions, and even components of these expressions, only in a minority of cases. Evidence concerning sadness, anger, and fear is very limited. For sadness, one study suggests that high emotion-expression coherence may exist in specific situations, whereas for anger and fear, the evidence points to low coherence. Insufficient emotion intensity and inhibition of facial expressions seem unable to account for the observed dissociations between emotion and facial expression.
Article
G*Power (Erdfelder, Faul, & Buchner, 1996) was designed as a general stand-alone power analysis program for statistical tests commonly used in social and behavioral research. G*Power 3 is a major extension of, and improvement over, the previous versions. It runs on widely used computer platforms (i.e., Windows XP, Windows Vista, and Mac OS X 10.4) and covers many different statistical tests of the t, F, and χ² test families. In addition, it includes power analyses for z tests and some exact tests. G*Power 3 provides improved effect size calculators and graphic options, supports both distribution-based and design-based input modes, and offers all types of power analyses in which users might be interested. Like its predecessors, G*Power 3 is free.
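For illustration, the same kind of a-priori sample-size calculation that G*Power performs can be sketched with statsmodels (an assumption for this example; G*Power itself is a stand-alone program), here for an independent-samples t test with a medium effect size:

```python
# Minimal sketch: a-priori power analysis for an independent-samples t test,
# solving for the required sample size per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required n per group: {n_per_group:.1f}")
```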
Article
Automatic facial reactions to near-threshold presented facial displays of emotion can be due to motor-mimicry or evaluation. To examine the mechanisms underlying such automatic facial responses we presented facial displays of joy, anger, and disgust for 16.67ms with a backwards masking technique and assessed electromyographic activity over the zygomaticus major, the levator labii, and the corrugator supercilii. As expected, we found that participants responded to displays of joy with contractions of the zygomaticus major and to expressions of anger with contractions of the corrugator supercilii. Critically, facial displays of disgust automatically activated the corrugator supercilii rather than the levator labii. This supports the notion that evaluative processes mediate facial responses to near-threshold presented facial displays of emotion rather than direct mimicry of emotional facial features.
Article
The paper presents a German version of the "Berkeley Expressivity Questionnaire" (BEQ; Gross & John, 1995). The instrument uses 16 items to assess three dimensions of expressivity: negative expressivity, positive expressivity, and impulse strength. In study 1 (n = 385), the factor structure and the psychometric properties of the BEQ were determined using confirmatory factor analysis. In a longitudinal study (study 2) the stability and validity of the BEQ were investigated: At t1, 220 participants filled out the BEQ. At t2 (6 months later), in addition to the BEQ self-report, the judgments of two raters as well as personality characteristics, positive and negative affectivity, and psychological and physical symptoms were assessed. The results show that the dimensions of the BEQ are stable and positively correlated with the raters' judgments. While negative expressivity and impulse strength are related to neuroticism, negative affectivity, physical complaints, and depression, positive expressivity is correlated with extraversion, openness, and positive affectivity. Women showed higher scores on all three dimensions of the BEQ, which were negatively related to age.
Article
The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors erroneously reported partial eta-squared values as representing classical eta-squared values. Finally, they discuss broader impacts of inaccurately reported eta-squared values for theory development, meta-analytic reviews, and intervention programs.
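The distinction stressed above can be made concrete with hypothetical sums of squares from a two-factor ANOVA: classical eta-squared divides an effect's sum of squares by the total sum of squares, whereas partial eta-squared divides it by the effect plus error sums of squares, so the two diverge in multifactor designs.

```python
# Minimal sketch with hypothetical sums of squares from a two-factor ANOVA.
ss_effect_A = 40.0   # sum of squares for factor A (hypothetical)
ss_effect_B = 25.0   # sum of squares for factor B (hypothetical)
ss_error    = 135.0  # error sum of squares (hypothetical)
ss_total    = ss_effect_A + ss_effect_B + ss_error

eta_sq_A = ss_effect_A / ss_total                          # classical eta-squared
partial_eta_sq_A = ss_effect_A / (ss_effect_A + ss_error)  # partial eta-squared
print(f"eta^2 = {eta_sq_A:.3f}, partial eta^2 = {partial_eta_sq_A:.3f}")
```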
Article
Based on a model in which the facial muscles can be both automatically/involuntarily controlled and voluntarily controlled by conscious processes, we explore whether spontaneously evoked facial reactions can be evaluated in terms of criteria for what characterises an automatic process. In three experiments subjects were instructed to not react with their facial muscles, or to react as quickly as possible by wrinkling the eyebrows (frowning) or elevating the cheeks (smiling) when exposed to pictures of negative or positive emotional stimuli, while EMG activity was measured from the corrugator supercilii and zygomatic major muscle regions. Consistent with the proposition that facial reactions are automatically controlled, the results showed that the corrugator muscle reaction was facilitated to negative stimuli and the zygomatic muscle reaction was facilitated to positive stimuli. The results further showed that, despite the fact that subjects were required to not react with their facial muscles at all, they could not avoid producing a facial reaction that corresponded to the negative and positive stimuli.