
It's all in the mind: Linking internal representations of emotion with facial expression recognition

Authors:
Connor T. Keating1 & Jennifer L. Cook1
1University of Birmingham
Email: CXK655@bham.ac.uk
University/Lab Homepage: https://jencooklab.com
Twitter Handle: @ConnorTKeating
Keywords: Emotion recognition; Emotion representations
Background and aims of the research
Over the past six decades, researchers have extensively studied emotion recognition
(e.g., Ekman & Friesen, 1975; Bassili, 1979; Young et al., 1997; Goldman & Sripada, 2005;
Zheng et al., 2017; Sowden, Schuster, Keating, Fraser & Cook, 2021). Despite being closely related to (and potentially important for) emotion recognition, internal representations of emotion have only recently become a focus of investigation (e.g., Jack, Garrod, Yu, Caldara & Schyns, 2012; Jack, Garrod & Schyns, 2014; Jack, Sun, Delis, Garrod & Schyns, 2016; Chen, Garrod, Ince, Schyns & Jack, 2021). Such studies have typically adopted
psychophysical approaches to index the way in which facial expressions appear in the
“mind’s eye” (i.e., internal representations) and compared these across emotions (e.g., Jack,
Garrod & Schyns, 2014; Chen et al., 2018), cultures (e.g., Jack, Caldara & Schyns, 2012;
Jack, Sun, Delis, Garrod & Schyns, 2016), and participant groups (e.g., Pichon et al., 2020).
Despite great progress in these areas, research has not yet investigated the extent to which
internal representations influence emotion recognition. For example, studies have not
explored whether the precision/clarity of internal representations contributes to emotion
recognition abilities and/or difficulties. In our recent study, we tested the hypothesis that individuals with less clear internal representations of emotion would obtain lower scores on an emotion recognition task.
Methodology
To test this hypothesis, participants completed two tasks that employed dynamic point light displays (a series of dots that convey biological motion) of angry, happy and sad facial expressions (PLFs). In the first task (taken from Sowden, Schuster, Keating, Fraser & Cook, 2021; Keating, Fraser, Sowden & Cook, 2021), participants viewed emotional PLFs
and rated how angry, happy and sad the facial expressions appeared. We calculated emotion
recognition accuracy scores by subtracting the mean of the two incorrect ratings from the
correct rating. For example, for a trial that displayed an angry expression, the mean rating of
the two incorrect emotions (happy and sad) was subtracted from the rating for the correct
emotion (angry).
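As a minimal sketch of this scoring scheme (the rating scale, column names and data layout below are illustrative assumptions, not taken from the original task):

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with the emotion displayed
# and the participant's ratings on each of the three emotion scales.
trials = pd.DataFrame({
    "displayed":    ["angry", "happy", "sad"],
    "rating_angry": [72, 10, 5],
    "rating_happy": [8, 80, 12],
    "rating_sad":   [14, 6, 69],
})

EMOTIONS = ["angry", "happy", "sad"]

def accuracy_score(row):
    """Correct-emotion rating minus the mean of the two incorrect ratings."""
    correct = row[f"rating_{row['displayed']}"]
    incorrect = [row[f"rating_{emo}"] for emo in EMOTIONS if emo != row["displayed"]]
    return correct - sum(incorrect) / len(incorrect)

trials["accuracy"] = trials.apply(accuracy_score, axis=1)
print(trials[["displayed", "accuracy"]])
```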
The second task was an adapted version of a task we had employed previously
(Keating, Sowden & Cook, 2022). In this task, on each trial, participants moved a
dial to manipulate the speed of a PLF until it moved at the speed of a typical angry, happy or
sad expression. This task operates on the premise that, compared to participants with clear
internal representations, those with less clear representations of emotion would attribute more
variable speeds to the expressions. For instance, someone with a clear internal representation
of anger would be consistent in their attributions (e.g., by attributing 120% speed, 121% speed
and 119% speed to the angry expression). In contrast, someone with a less clear internal
representation would be more variable (e.g., by attributing 120% speed, 60% speed and 180%
speed to an angry expression). Therefore, to index the clarity (or lack thereof) of participants’
internal representations, we calculated variability by taking the standard deviation of the
speeds attributed to the angry, happy and sad expressions respectively. Mean variability was
calculated by taking a mean of the variability scores for the angry, happy and sad PLFs.
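The sketch below illustrates this variability index; the example speed values, and the choice of sample versus population standard deviation, are assumptions made for illustration:

```python
import numpy as np

# Hypothetical speed attributions (% of the original recording speed), one list
# per emotion, with one value per trial on which that emotion was "set".
attributed_speeds = {
    "angry": [120, 121, 119],   # consistent settings -> a clearer representation
    "happy": [110, 95, 128],
    "sad":   [60, 75, 52],      # variable settings -> a less clear representation
}

# Variability = standard deviation of the speeds attributed to each emotion
# (ddof=1 gives the sample SD; this choice is an assumption).
variability = {emo: np.std(speeds, ddof=1)
               for emo, speeds in attributed_speeds.items()}

# Mean variability = the average of the three per-emotion SDs.
mean_variability = np.mean(list(variability.values()))

print(variability)
print(f"Mean variability: {mean_variability:.1f}")
```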
Our preliminary results
Our preliminary results suggest that people who have less clear internal
representations of emotion find it more difficult to recognise emotional facial expressions.
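The analysis linking the two tasks is not detailed here, but one straightforward way to quantify such an association is a correlation between per-participant summary scores. The following sketch, with invented data and a Spearman correlation chosen purely for illustration, shows the general idea:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant summary scores (one value per participant).
mean_variability = np.array([12.4, 30.1, 8.7, 22.5, 41.0, 15.3])   # task 2: SD of attributed speeds
mean_accuracy    = np.array([55.2, 38.9, 61.7, 44.0, 29.5, 50.8])  # task 1: recognition accuracy

# A negative correlation would indicate that participants with less clear
# (more variable) internal representations recognise expressions less accurately.
rho, p = spearmanr(mean_variability, mean_accuracy)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```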
However, further work is needed to replicate these findings and to determine the direction of causality. In the case of the latter, it could be that those with less clear internal
representations of facial expressions do not have consistent “templates” to compare observed
expressions to, thus resulting in poorer emotion recognition. Alternatively, it could be that
those who struggle to read emotional expressions do not build up clear internal
representations as they do not know the correct “label” or give an incorrect “label” to
expressions they observe. In addition, further work needs to be done to a) identify how other
emotional processes are implicated in emotion recognition (e.g., the interoceptive experience
of emotion) and b) identify how different traits (e.g., autistic and alexithymic) are implicated
in these different emotional sub-abilities.
Next steps
In our next experiment, we aim to test how features of internal emotional experiences,
such as the consistency and overlap between emotions, contribute to internal representations
and facial emotion recognition. By doing so, we hope to construct a mechanistic model of
emotion recognition that elucidates how different emotional sub-abilities and traits are
associated with one another. We hope that such work will illuminate potential pathways for
supporting emotion recognition in clinical and sub-clinical groups (see Keating & Cook,
2020).
Acknowledgements
We thank the British Psychological Society Cognitive Section for awarding us the
grant. The funds were used to offset the costs of online testing (e.g., recruiting participants
via Prolific).
References
Bassili, J. N. (1979). Emotion recognition: The role of facial movement and the relative
importance of upper and lower areas of the face. Journal of Personality and Social
Psychology, 37(11), 2049–2058. https://doi.org/10.1037/0022-3514.37.11.2049
Chen, C., Crivelli, C., Garrod, O. G., Schyns, P. G., Fernández-Dols, J. M., & Jack, R. E.
(2018). Distinct facial expressions represent pain and pleasure across
cultures. Proceedings of the National Academy of Sciences, 115(43), E10013-E10021.
https://doi.org/10.1073/pnas.1807862115
Chen, C., Garrod, O. G., Ince, R. A., Schyns, P. G., & Jack, R. E. (2021). Facial expressions reveal cross-cultural variance in emotion signaling. Journal of Vision, 21(9), 2500. https://doi.org/10.1167/jov.21.9.2500
Ekman, P., & Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions
from facial clues. Prentice Hall.
Goldman, A. I., & Sripada, C. S. (2005). Simulationist models of face-based emotion
recognition. Cognition, 94(3), 193-213.
Jack, R. E., Caldara, R., & Schyns, P. G. (2012). Internal representations reveal cultural
diversity in expectations of facial expressions of emotion. Journal of Experimental
Psychology: General, 141(1), 19-25. https://doi.org/10.1037/a0023463
Jack, R. E., Garrod, O. G., & Schyns, P. G. (2014). Dynamic facial expressions of emotion
transmit an evolving hierarchy of signals over time. Current Biology, 24(2), 187-192.
https://doi.org/10.1016/j.cub.2013.11.064
Jack, R. E., Garrod, O. G., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of
emotion are not culturally universal. Proceedings of the National Academy of
Sciences, 109(19), 7241-7244. https://doi.org/10.1073/pnas.1200155109
Jack, R. E., Sun, W., Delis, I., Garrod, O. G., & Schyns, P. G. (2016). Four not six: Revealing
culturally common facial expressions of emotion. Journal of Experimental Psychology:
General, 145(6), 708. https://doi.org/10.1037/xge0000162
Keating, C. T., & Cook, J. L. (2020). Facial expression production and recognition in autism
spectrum disorders: A shifting landscape. Child and Adolescent Psychiatric
Clinics, 29(3), 557-571. https://doi.org/10.1016/j.chc.2020.02.006
Keating, C. T., Fraser, D. S., Sowden, S., & Cook, J. L. (2021). Differences between autistic and non-autistic adults in the recognition of anger from facial motion remain after controlling for alexithymia. Journal of Autism and Developmental Disorders, 1-17. Advance online publication. https://doi.org/10.1007/s10803-021-05083-9
Keating, C. T., Sowden, S., & Cook, J. L. (2022). Comparing internal representations of
facial expression kinematics between autistic and non‐autistic adults. Autism
Research, 15(3), 493-506. https://doi.org/10.1002/aur.2642
Pichon, S., Bediou, B., Antico, L., Jack, R., Garrod, O., Sims, C., Green, C. S., Schyns, P., &
Bavelier, D. (2020). Emotion perception in habitual players of action video
games. Emotion. https://doi.org/10.1037/emo0000740
Sowden, S., Schuster, B. A., Keating, C. T., Fraser, D. S., & Cook, J. L. (2021). The role of
movement kinematics in facial emotion expression production and
recognition. Emotion. https://doi.org/10.1037/emo0000835
Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997).
Facial expression megamix: Tests of dimensional and category accounts of emotion
recognition. Cognition, 63(3), 271-313.
Zheng, W. L., Zhu, J. Y., & Lu, B. L. (2017). Identifying stable patterns over time for
emotion recognition from EEG. IEEE Transactions on Affective Computing, 10(3),
417-429. https://doi.org/10.1109/TAFFC.2017.2712143