Article

Bidirectional contextual influence between faces and bodies in emotion perception


Abstract

Recent evidence shows that body context may alter the categorization of facial expressions. However, less is known about how facial expressions influence the categorization of emotional bodies. We hypothesized that context effects would be displayed bidirectionally, from bodies to faces and from faces to bodies. Participants viewed emotional face-body compounds and were required to categorize emotions of faces (Condition 1), bodies (Condition 2), or full persons (Condition 3). Results showed evidence for bidirectional context effects: faces were influenced by bodies, and bodies were influenced by faces. However, because the specific confusability patterns differ for faces and bodies (e.g., disgust and anger expressions are confusable in the face, but less so in the body), we found unique patterns of contextual influence in each expression channel. Together, the findings suggest that the emotional expressions of faces and bodies contextualize each other bidirectionally and that emotion categorization is sensitive to the perceptual focus determined by task instructions. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


... A facial expression is a centric target, and body gesture is a prominent surrounding cue (Ekman, 1993; Izard, 1994). Facial expressions and body gestures are processed automatically and holistically as a single emotional perception (Abo Foul et al., 2018; Aviezer et al., 2012a; Lecker et al., 2020). Growing evidence indicates that body gestures can alter fixations and perceptions of facial expressions (Aviezer et al., 2008, 2012b). ...
... Emotional perception is easier to observe in congruent combinations of facial cues and body gestures than with each of these in isolation (Aviezer et al., 2012a; Karaaslan et al., 2020; Lecker et al., 2020). ...
... One possible explanation for this difference could be that body gestures do not play an equal role in different facial expressions. Previous studies have shown that body gestures boost the recognition of expressions of disgust, but not as much for other facial expressions of emotion (Aviezer et al., 2008, 2012a; Lecker et al., 2020). ...
... To date, research has not examined potential asymmetrical effects of emotion scenes in addition to emotion postures. Such work would further elucidate the debate over proposed mechanisms for contextualized emotion perception (e.g., Lecker et al., 2020;Mondloch et al., 2013). ...
... A concern with stimuli used in previous research is that participants may become increasingly familiar with the face identities with repeated exposure, which could facilitate their ability to directly compare stimuli across trials (Burton, 2013). For instance, Lecker et al. (2020) used 6 face identities to display 4 distinct emotions across 96 trials, so each participant viewed each face identity 16 times. For the current study, we used 48 distinct face identities displaying one emotion each, with face identity distributed randomly amongst stimulus blocks (see below), thus minimizing comparison effects relative to previous research using fully crossed designs. ...
... Previous research has shown that emotion faces paired with incongruent non-facial cues result in systematic changes in face and non-face categorizations (see Lecker et al., 2020). The following analyses first examined whether categorizations matching the face were uniquely influenced by incongruent postures or scenes (face vs. non-face). ...
Article
Full-text available
There is ongoing debate as to whether emotion perception is determined by facial expressions or context (i.e., non-facial cues). The present investigation examined the independent and interactive effects of six emotions (anger, disgust, fear, joy, sadness, neutral) conveyed by combinations of facial expressions, bodily postures, and background scenes in a fully crossed design. Participants viewed each face-posture-scene (FPS) combination for 5 s and were then asked to categorize the emotion depicted in the image. Four key findings emerged from the analyses: (1) for fully incongruent FPS combinations, participants categorized images using the face in 61% of instances and the posture and scene in 18% and 11% of instances, respectively; (2) postures (with neutral scenes) and scenes (with neutral postures) exerted differential influences on emotion categorizations when combined with incongruent facial expressions; (3) contextual asymmetries were observed for some incongruent face-posture pairings and their inverse (e.g., anger-fear vs. fear-anger), but not for face-scene pairings; (4) finally, scenes boosted the effect of posture when combined with a congruent posture and attenuated it when combined with a congruent face. Overall, these findings highlight independent and interactional roles of posture and scene in emotion face perception. Theoretical implications for the study of emotions in context are discussed.
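As a note on the design, a fully crossed face-posture-scene stimulus set can be enumerated directly. The short Python sketch below is purely illustrative (the variable names and use of itertools are assumptions, not the authors' materials); it reproduces the combination counts implied by the abstract's six emotion categories.

```python
from itertools import product

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "neutral"]

# Fully crossed face-posture-scene (FPS) design: every emotion in each
# channel is paired with every emotion in the other two channels.
fps_combinations = list(product(EMOTIONS, repeat=3))
print(len(fps_combinations))  # 216 = 6 x 6 x 6

# Fully incongruent combinations: all three channels convey different emotions.
fully_incongruent = [(f, p, s) for f, p, s in fps_combinations
                     if len({f, p, s}) == 3]
print(len(fully_incongruent))  # 120 = 6 x 5 x 4
```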
... Interpreting emotions of people around us is central to human experience, a process traditionally considered to involve the isolated face (Ekman & Friesen, 1976). Nevertheless, when categorizing facial expressions, people utilize the context in which the expression is embedded (e.g., body postures, scenes, sounds) in a manner that can shift perceivers' face categorization (Atias et al., 2019; Aviezer et al., 2008, 2012a; Gendron et al., 2013; Lecker et al., 2019; Reschke et al., 2019). Sources of contextual influence may also include the viewer's language, which may play a key role in the perception of others' emotions (Barrett et al., 2007; Lindquist & Gendron, 2013). ...
... A growing body of literature shows that humans use body language when deciphering facial emotional expressions. Robust evidence shows, for example, that a stereotypical disgust face appearing with an angry body context is typically classified as conveying anger (Aviezer et al., 2008, 2011; Lecker et al., 2019; Meeren et al., 2005; Noh & Isaacowitz, 2013; but see Reschke et al., 2018 for evidence that this effect is dependent on the type of stereotypical faces used). Such face-body compounds are processed holistically (Aviezer et al., 2012a; Mondloch, 2012; Mondloch et al., 2013) and automatically (Aviezer et al., 2011), even when participants attempt to focus on the face and ignore the task-irrelevant body. ...
Article
Full-text available
Semantic emotional labels can influence the recognition of isolated facial expressions. However, it is unknown if labels also influence the susceptibility of facial expressions to context. To examine this, participants categorized expressive faces presented with emotionally congruent or incongruent bodies, serving as context. Face-body composites were presented together, aligned in their natural form, or spatially misaligned with the head shifted horizontally beside the body—a condition known to reduce the contextual impact of the body on the face. Critically, participants responded either by choosing emotion labels or by perceptually matching the target expression with expression probes. The results show a label dominance effect: Face-body congruency effects were larger with semantic labels than with perceptual expression matching, indicating that facial expressions are more prone to contextual influence when categorized with emotion labels, an effect only found when faces and bodies were aligned. These findings suggest that the role of conceptual language in face-body context effects may be larger than previously assumed.
... Two additional studies investigated whether faces and bodies influence each other in a symmetrical/bidirectional way (Kret et al., 2013; Lecker et al., 2019). Both studies showed that body expressions modulated the perceived facial expressions and vice versa. ...
... However, Lecker et al. (2019) posited that the magnitude of context effects may depend on the confusability of specific expressions: some expressions may be more confusable in faces than in bodies (e.g., anger and disgust), and vice versa. ...
Article
The human "person" is a common percept we encounter. Research on person perception has been focused either on face or body perception-with less attention paid to whole person perception. We review psychological and neuroscience studies aimed at understanding how face and body processing operate in concert to support intact person perception. We address this question considering: a.) the task to be accomplished (identification, emotion processing, detection), b.) the neural stage of processing (early/late visual mechanisms), and c.) the relevant brain regions for face/body/person processing. From the psychological perspective, we conclude that the integration of faces and bodies is mediated by the goal of the processing (e.g., emotion analysis, identification, etc.). From the neural perspective, we propose a hierarchical functional neural architecture of face-body integration that retains a degree of separation between the dorsal and ventral visual streams. We argue for two centers of integration: a ventral semantic integration hub that is the result of progressive, posterior-to-anterior, face-body integration; and a social agent integration hub in the dorsal stream STS.
... Beyond the perceptual analysis of the expression, prior knowledge about the situation may help to disambiguate a sender's message. In real life, emotion expressions are never perceived in isolation, as faces are typically seen attached to a body and against the backdrop of some scenery (Aviezer et al., 2008, 2017; Kret et al., 2013; Lecker et al., 2020). Previous research has shown that considering this contextual information is critical for accurately classifying emotion displays, i.e. whether they belong to one emotion category or another (Aviezer et al., 2017; Gendron et al., 2013). ...
Article
Full-text available
Most past research has focused on the role played by social context information in emotion classification, such as whether a display is perceived as belonging to one emotion category or another. The current study aims to investigate whether the effect of context extends to the interpretation of emotion displays, i.e. smiles that could be judged either as posed or spontaneous readouts of underlying positive emotion. A between-subjects design (N = 93) was used to investigate the perception and recall of posed smiles, presented together with a happy or polite social context scenario. Results showed that smiles seen in a happy context were judged as more spontaneous than the same smiles presented in a polite context. Also, smiles were misremembered as having more of the physical attributes (i.e., Duchenne marker) associated with spontaneous enjoyment when they appeared in the happy than in the polite context condition. Together, these findings indicate that social context information is routinely encoded during emotion perception, thereby shaping the interpretation and recognition memory of facial expressions.
... Similarly, future research should investigate the influence of faces on label ratings, to gain a more complete understanding of the potential bidirectional relationship between language and emotional expressions. We expect that label ratings would be influenced by paired face information; similar bidirectional relationships have previously been shown between face-body and face-voice pairings [74-80], and we believe this would hold for face-language pairings. ...
Article
Full-text available
Whether language information influences recognition of emotion from facial expressions remains the subject of debate. The current studies investigate how variations in the emotion labels paired with expressions influence participants’ judgments of the emotion displayed. Static (Study 1) and dynamic (Study 2) facial expressions depicting eight emotion categories were paired with emotion labels that systematically varied in arousal (low and high). Participants rated the arousal, valence, and dominance of expressions paired with labels. Isolated faces and isolated labels were also rated. As predicted, the label presented influenced participants’ judgments of the expressions. Across both studies, higher arousal labels were associated with (1) higher ratings of arousal for sad, angry, and scared expressions, and (2) higher ratings of dominance for angry, proud, and disgust expressions. These results indicate that emotion labels influence judgments of facial expressions.
Article
Research has shown that context influences how sincere a smile appears to observers. That said, most studies on this topic have focused exclusively on situational cues (e.g. smiling while at a party versus smiling during a job interview) and few have examined other elements of context. One important element concerns any knowledge an observer might have about the smiler as an individual (e.g. their habitual behaviours, traits or attitudes). In this manuscript, we present three experiments that explored the influence of such knowledge on ratings of smile sincerity. In Experiments 1 and 2, participants rated the sincerity of Duchenne and non-Duchenne smiles after having been exposed to cues about the smiler's tendency to reciprocate (this person always, never or occasionally returns favours). In Experiment 3 they performed the same task but with cues about the smiler's love of learning (this person always, never or occasionally enjoys learning new tasks). The results show that cues about the smiler's reciprocity tendency influenced participants' ratings of smile sincerity and did so in a stronger manner than cues about the smiler's love of learning. Overall, these results both strengthen and broaden the literature on the role of context on judgements of smile sincerity.
Preprint
Full-text available
Accurately recognizing other individuals is fundamental for successful social interactions. While the neural underpinnings of this skill have been studied extensively in humans, less is known about the evolutionary origins of the brain areas specialized for recognizing faces or bodies. Studying dogs (Canis familiaris), a non-primate species with the ability to perceive faces and bodies similarly to humans, promises insights into how visuo-social perception has evolved in mammals. We investigated the neural correlates of face and body perception in dogs (N = 15) and humans (N = 40) using functional MRI. Combining uni- and multivariate analysis approaches, we identified activation levels and patterns that suggested potentially homologous occipito-temporal brain regions in both species responding to faces and bodies compared to inanimate objects. Crucially, only human brain regions showed activation differences between faces and bodies and partly responded more strongly to humans compared to dogs. Moreover, only dogs represented both faces and dog bodies in olfactory regions. Overall, our novel findings revealed a prominent role of the occipito-temporal cortex in the perception of animate entities in dogs and humans but suggest a divergent evolution of face and body perception. This may reflect differences in the perceptual systems these species rely on to recognize others.
Article
Objectives: It is commonly argued that older adults show difficulties in standardized tasks of emotional expression perception, yet most previous work has relied on classic sets of static, decontextualized, and stereotypical facial expressions. In real life, facial expressions are dynamic and embedded in a rich context, two key factors that may aid emotion perception. Specifically, body language provides important affective cues that may disambiguate facial movements.
Method: We compared emotion perception of dynamic faces, bodies, and their combination in a sample of older (age 60-83, n = 126) and young (age 18-30, n = 124) adults. We used the Geneva Multimodal Emotion Portrayals (GEMEP) set, which includes a full view of expressers’ faces and bodies, displaying a diverse range of positive and negative emotions, portrayed dynamically and holistically in a non-stereotypical, unconstrained manner. Critically, we digitally manipulated the dynamic cue such that perceivers viewed isolated faces (without bodies), isolated bodies (without faces), or faces with bodies.
Results: Older adults showed better perception of positive and negative dynamic facial expressions, while young adults showed better perception of positive isolated dynamic bodily expressions. Importantly, emotion perception of faces with bodies was comparable across ages.
Discussion: Dynamic emotion perception in young and older adults may be more similar than previously assumed, especially when the task is more realistic and ecological. Our results emphasize the importance of contextualized and ecological tasks in emotion perception across ages.
Article
Full-text available
East Asians tend towards holistic styles of thinking whereas Westerners generally think more analytically. Recent work has shown that Western participants perceive emotional expressions in a somewhat holistic manner, however. Specifically, Westerners interpret emotional facial expressions differently when presented with a body displaying a congruent versus incongruent emotional expression. Here, we examined how processing these face-body combinations varies according to cultural differences in thinking style. Consistent with their proclivity towards contextual focus, Japanese perceivers focused more on the body when judging the emotions of face-body composites. Moreover, in line with their greater tendency towards holistic perceptual processing, we found that pairing facial expressions of emotion with emotionally congruent bodies facilitated Japanese participants’ recognition of faces’ emotions to a greater degree than it did for Canadians. Similarly, incongruent face-body combinations impaired facial emotion recognition more for Japanese than Canadian participants. These findings extend work on cultural differences in emotion recognition from interpersonal to intrapersonal contexts with implications for intercultural understanding.
Article
Full-text available
Although most research in the field of emotion perception has focused on the isolated face, recent studies have highlighted the integration of emotional faces and bodies. Even when instructed to be ignored, incongruent emotional body context can automatically alter the categorization of distinct and prototypical facial expressions. Previous work suggested that face-body integration is rapid, automatic, and even persists with spatial misalignment. However, the temporal limits of face-body integration remain unclear. Using a novel measure of temporal visual integration, the current report examines the effect of introducing a temporal gap between the body and face. When presented simultaneously, faces and bodies showed strong integration, as evidenced by the face being strongly influenced by the information conveyed by the task-irrelevant body. By contrast, when faces and bodies were presented with a temporal lag, we failed to find evidence for integration of bodily and facial emotion cues. These main findings were replicated across 3 experiments and suggest that the integration between emotional faces and bodies may be more fragile than previously assumed.
Article
Full-text available
As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
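To make the shape of that analysis concrete, here is a minimal sketch of a multivariate data reduction over pooled expression models. The abstract does not name the specific technique, so non-negative matrix factorization and all data dimensions and variable names below are stand-in assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in data: one row per validated dynamic expression model
# (60+ emotion words x 2 cultures), one column per facial-movement feature
# (e.g., flattened action-unit amplitudes). Values must be non-negative for NMF.
rng = np.random.default_rng(1)
models = rng.random((120, 84))

# Reduce the pooled models to 4 latent expressive patterns, mirroring the
# abstract's report of 4 culturally common patterns.
reduction = NMF(n_components=4, init="nndsvda", random_state=0)
weights = reduction.fit_transform(models)  # (120, 4): each model as a pattern mix
patterns = reduction.components_           # (4, 84): latent facial-movement patterns

# Each model is approximated by weights @ patterns; inspecting each row of
# `patterns` shows which face movements define that latent pattern.
```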
Article
Full-text available
Contrary to a common presupposition, the word disgust may refer to more than one emotion. From an array of 3 facial expressions (produced in our lab), participants (N = 44) in Study 1 selected the one that best matched 11 types of emotion-eliciting events: anger, sadness, and 9 types of disgust (7 types of physical disgust plus moral disgust and simply feeling ill). From an array of 4 facial expressions (two from Matsumoto & Ekman, 1988; two produced in lab), participants (N = 120) in Study 2 selected the one that best matched 14 types of disgust-eliciting events (8 physical and 6 moral). In both studies, the modal facial expression for physical disgust was the "sick face" developed by Widen, Pochedly, Pieloch, and Russell (2013), which shows someone about to vomit. The modal facial expression for the moral violations was the standard disgust face or, when available, an anger face. If facial expression is a constituent of an emotion, physical disgust and moral disgust are separate emotions.
Article
Full-text available
How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.
Article
Full-text available
The accuracy and speed with which emotional facial expressions are identified are influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether similarity is based on physical characteristics or underlying dimensions of valence and arousal. Using a 3-alternative forced-choice task in which stimuli were presented briefly (Exp 1a) or for an unlimited time (Exp 1b), we provide evidence that congruency effects are more complex than either model predicts; the effects are asymmetrical and cannot be accounted for by similarity alone. Fearful postures are especially influential when paired with facial expressions, but not when presented in a flanker task (Exp 2). We suggest refinements to each model that may account for our results and suggest that additional studies be conducted prior to drawing strong theoretical conclusions.
Chapter
Full-text available
Two studies tested the hypothesis that in judging people's emotions from their facial expressions, Japanese, more than Westerners, incorporate information from the social context. In Study 1, participants viewed cartoons depicting a happy, sad, angry, or neutral person surrounded by other people expressing the same emotion as the central person or a different one. The surrounding people's emotions influenced Japanese but not Westerners' perceptions of the central person. These differences reflect differences in attention, as indicated by eye-tracking data (Study 2): Japanese looked at the surrounding people more than did Westerners. Previous findings on East–West differences in contextual sensitivity generalize to social contexts, suggesting that Westerners see emotions as individual feelings, whereas Japanese see them as inseparable from the feelings of the group.
Article
Full-text available
A key feature of facial behavior is its dynamic quality. However, most previous research has been limited to the use of static images of prototypical expressive patterns. This article explores the role of facial dynamics in the perception of emotions, reviewing relevant empirical evidence demonstrating that dynamic information improves coherence in the identification of affect (particularly for degraded and subtle stimuli), leads to higher judgments of emotional intensity and arousal, and helps to differentiate between genuine and fake expressions. The findings underline that using static expressions not only poses problems of ecological validity, but also limits our understanding of what facial activity does. Implications for future research on facial activity, particularly for social neuroscience and affective computing, are discussed.
Article
Full-text available
Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants' fixations were measured, their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, vice versa. Observers' facial reactions showed signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and of positive emotionality (increased zygomaticus activity) to happy facial expressions. As predicted, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that congruency between the emotional signals from the face and body facilitates recognition of the emotion.
Article
Full-text available
Factor-analytic evidence has led most psychologists to describe affect as a set of dimensions, such as displeasure, distress, depression, excitement, and so on, with each dimension varying independently of the others. However, there is other evidence that rather than being independent, these affective dimensions are interrelated in a highly systematic fashion. The evidence suggests that these interrelationships can be represented by a spatial model in which affective concepts fall in a circle in the following order: pleasure (0°), excitement (45°), arousal (90°), distress (135°), displeasure (180°), depression (225°), sleepiness (270°), and relaxation (315°). This model was offered both as a way psychologists can represent the structure of affective experience, as assessed through self-report, and as a representation of the cognitive structure that laymen utilize in conceptualizing affect. Supportive evidence was obtained by scaling 28 emotion-denoting adjectives in 4 different ways: R. T. Ross's (1938) technique for a circular ordering of variables, a multidimensional scaling procedure based on perceived similarity among the terms, a unidimensional scaling on hypothesized pleasure-displeasure and degree-of-arousal dimensions, and a principal-components analysis of 343 Ss' self-reports of their current affective states. (70 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
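The circular ordering above lends itself to a worked example. The Python sketch below places each affect term at the angle listed in the abstract and derives coordinates and angular distances; the dictionary and function names are illustrative assumptions, not part of Russell's materials.

```python
import math

# Each affect term sits on the unit circle at the angle given in the abstract.
CIRCUMPLEX_DEGREES = {
    "pleasure": 0, "excitement": 45, "arousal": 90, "distress": 135,
    "displeasure": 180, "depression": 225, "sleepiness": 270, "relaxation": 315,
}

def coordinates(term: str) -> tuple[float, float]:
    """Return (valence-like x, arousal-like y) coordinates on the unit circle."""
    theta = math.radians(CIRCUMPLEX_DEGREES[term])
    return (math.cos(theta), math.sin(theta))

def angular_distance(a: str, b: str) -> float:
    """Smallest angle in degrees between two affect terms; opposites are 180 apart."""
    d = abs(CIRCUMPLEX_DEGREES[a] - CIRCUMPLEX_DEGREES[b]) % 360
    return min(d, 360 - d)

print(coordinates("excitement"))                    # (~0.707, ~0.707): pleasant, aroused
print(angular_distance("pleasure", "displeasure"))  # 180.0: bipolar opposites
```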
Article
Full-text available
This paper introduces the freely available Bochum Emotional Stimulus Set (BESST) which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions, as corresponds to common experimental settings. The average naturalness ratings for the stimuli range between medium and high.
Article
Full-text available
Facial expressions are of eminent importance for social interaction as they convey information about other individuals' emotions and social intentions. According to the predominant "basic emotion" approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual's face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research.
Article
Full-text available
Whole body expressions are among the main visual stimulus categories that are naturally associated with faces, and the neuroscientific investigation of how body expressions are processed has entered the research agenda this last decade. Here we describe the stimulus set of whole body expressions termed the bodily expressive action stimulus test (BEAST), and we provide validation data for use of these materials by the community of emotion researchers. The database was composed of 254 whole body expressions from 46 actors expressing 4 emotions (anger, fear, happiness, and sadness). In all pictures the face of the actor was blurred and participants were asked to categorize the emotions expressed in the stimuli in a four-alternative forced-choice task. The results show that all emotions are well recognized, with sadness being the easiest, followed by fear, whereas happiness was the most difficult. The BEAST appears to be a valuable addition to currently available tools for assessing recognition of affective signals. It can be used in explicit recognition tasks as well as in matching tasks and in implicit tasks, combined either with facial expressions, with affective prosody, or presented with affective pictures as context, in healthy subjects as well as in clinical populations.
Article
Full-text available
We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away from viewers), North-European as well as Mediterranean models (male and female), and nine discrete emotions (joy, anger, fear, sadness, surprise, disgust, contempt, pride, and embarrassment). Study 1 showed that the ADFES received excellent recognition scores. Recognition was affected by social categorization of the model: displays of North-European models were better recognized by Dutch participants, suggesting an ingroup advantage. Head-turning did not affect recognition accuracy. Study 2 showed that participants more strongly perceived themselves to be the cause of the other's emotion when the model's face turned toward the respondents. The ADFES provides new avenues for research on emotion expression and is available for researchers upon request.
Article
Full-text available
What does the "facial expression of disgust" communicate to children? When asked to label the emotion conveyed by different facial expressions widely used in research, children (N = 84, 4 to 9 years) were much more likely to label the "disgust face" as anger than as disgust, indeed just as likely as they were to label the "angry face" as anger. Shown someone with a disgust face and asked to generate a possible cause and consequence of that emotion, children provided answers indistinguishable from what they provided for an angry face--even for the minority who had labeled the disgust face as disgust. A majority of adults (N = 22) labeled the same disgust faces shown to the children as disgust and generated causes and consequences that implied disgust.
Article
Full-text available
Why bodies? It is rather puzzling that given the massive interest in affective neuroscience in the last decade, it still seems to make sense to raise the question 'Why bodies' and to try to provide an answer to it, as is the goal of this article. There are now hundreds of articles on human emotion perception ranging from behavioural studies to brain imaging experiments. These experimental studies complement decades of reports on affective disorders in neurological patients and clinical studies of psychiatric populations. The most cursory glance at the literature on emotion in humans, now referred to by the umbrella term of social and affective neuroscience, shows that over 95 per cent of them have used faces as stimuli. Of the remaining 5 per cent, a few have used scenes or auditory information including human voices, music or environmental sounds. But by far the smallest number has looked into whole-body expressions. As a rough estimate, a search on PubMed today, 1 May 2009, yields 3521 hits for emotion x faces, 1003 hits for emotion x music and 339 hits for emotion x bodies. When looking in more detail, the body x emotion category in fact yields a majority of papers on well-being, nursing, sexual violence or organ donation. But the number of cognitive and affective neuroscience studies of emotional body perception as of today is lower than 20. Why then have whole bodies and bodily expressions not attracted the attention of researchers so far? The goal of this article is to contribute some elements for an answer to this question. I believe that there is something to learn from the historical neglect of bodies and bodily expressions. I will next address some historical misconceptions about whole-body perception, and in the process I intend not only to provide an impetus for this kind of work but also to contribute to a better understanding of the significance of the affective dimension of behaviour, mind and brain as seen from the vantage point of bodily communication. Subsequent sections discuss available evidence for the neurofunctional basis of facial and bodily expressions as well as neuropsychological and clinical studies of bodily expressions.
Article
Full-text available
This study used a technique for assessing the relative impact of facial-gestural expressions, as opposed to contextual information regarding the elicitor and situation, on the judgment of emotion. In Study 1, 28 undergraduates rated videotapes of spontaneous facial-gestural expressions and separately rated the emotionally loaded color slides that elicited those expressions. The source clarities of the expressions and slides were matched using correlation and distance measures, and 18 expressions and 9 slides were selected. In Study 2, 72 undergraduate receivers were shown systematic pairings of these expressions and slides and rated the emotional state of the expresser, who was supposedly watching that slide under public or private situational conditions. Expressions were found to be more important sources for all emotion judgments. For female receivers slides were relatively more important in the public than in the private situation.
Article
Full-text available
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.
Article
Full-text available
In our natural world, a face is usually encountered not as an isolated object but as an integrated part of a whole body. The face and the body both normally contribute in conveying the emotional state of the individual. Here we show that observers judging a facial expression are strongly influenced by emotional body language. Photographs of fearful and angry faces and bodies were used to create face-body compound images, with either matched or mismatched emotional expressions. When face and body convey conflicting emotional information, judgment of facial expression is hampered and becomes biased toward the emotion expressed by the body. Electrical brain activity was recorded from the scalp while subjects attended to the face and judged its emotional expression. An enhancement of the occipital P1 component as early as 115 ms after presentation onset points to the existence of a rapid neural mechanism sensitive to the degree of agreement between simultaneously presented facial and bodily emotional expressions, even when the latter are unattended.
Keywords: emotion communication, event-related potentials, visual perception
Article
Full-text available
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two-alternative forced-choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task-irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.
Article
The present research tested the notion that emotion expressions and context are bidirectionally related. Specifically, in two studies focusing on moral violations (N = 288) and positive moral deviations (N = 245), respectively, we presented participants with short vignettes describing behaviors that were either (im)moral, (im)polite, or unusual, together with a picture of the emotional reaction of a person who had supposedly witnessed the event. Participants rated both the emotional reactions observed and their own moral appraisal of the situation described. In both studies, we found that situational context influences how emotional reactions to this context are rated and that, in turn, the emotional expression shown in reaction to a situation influences the appraisal of the situation. That is, neither the moral events nor the emotion expressions were judged in an absolute fashion. Rather, the perception of one also depended on the other.
Article
A growing literature shows that body postures influence recognition of static facial expressions; a fearful face, for example, is perceived as angry when presented on an angry body posture. In daily life, however, people conveying emotions are moving. Here we provide the first examination of such congruency effects for stimuli with naturalistic movement. Adults and children were asked to label the facial expression in static or dynamic whole-person displays comprising congruent (e.g., sad face on sad body) and incongruent (e.g., sad face on fearful body) expressions. Recognition was impaired on incongruent trials, especially for dynamic stimuli and despite eye-tracking data confirming that both age groups attended to the face, as instructed. Our findings highlight the importance of integrating whole-person and dynamic stimuli into research and theories of emotion perception.
Article
With a small but increasing number of exceptions, the cognitive sciences enthusiastically endorsed the idea that there are basic facial expressions of emotions that are created by specific configurations of facial muscles. We review evidence that suggests an inherent role for context in emotion perception. Context does not merely change emotion perception at the edges; it leads to radical categorical changes. The reviewed findings suggest that configurations of facial muscles are inherently ambiguous, and they call for a different approach towards the understanding of facial expressions of emotions. The costs of sticking with the modal view, and the advantages of an expanded view, are succinctly reviewed.
Article
Recent judgment studies have shown that people are able to fairly correctly attribute emotional states to others' bodily expressions. It is, however, not clear which movement qualities are salient, and how this applies to emotional gesture during speech-based interaction. In this study we investigated how the expression of emotions that vary on three major emotion dimensions (arousal, valence, and potency) affects the perception of dynamic arm gestures. Ten professional actors enacted 12 emotions in a scenario-based social interaction setting. Participants (N = 43) rated all emotional expressions with muted sound and blurred faces on six spatiotemporal characteristics of gestural arm movement that were found to be related to emotion in previous research (amount of movement, movement speed, force, fluency, size, and height/vertical position). Arousal and potency were found to be strong determinants of the perception of gestural dynamics, whereas the differences between positive and negative emotions were less pronounced. These results confirm the importance of arm movement in communicating major emotion dimensions and show that gesture forms an integrated part of multimodal nonverbal emotion communication.
Article
The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.
Article
The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions.
Article
Although age-related declines in facial expression recognition are well documented, previous research has relied mostly on isolated faces devoid of context. The authors investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had the highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were the lowest. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Article
Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face-body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from 3 experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.
Article
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.
Article
Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration. Specifically, we examine whether it is a relatively controlled or automatic process. In Experiment 1 participants were motivated and instructed to avoid using the context while categorizing contextualized facial expression, or they were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner.
Article
Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels.
Article
Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that full-light and even point-light displays can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.
Article
People's faces show fear in many different circumstances. However, when people are terrified, as well as showing emotion, they run for cover. When we see a bodily expression of emotion, we immediately know what specific action is associated with a particular emotion, leaving little need for interpretation of the signal, as is the case for facial expressions. Research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are automatically perceived and understood, and their role in emotional communication and decision-making.
Article
Neuropsychological and neuroimaging evidence suggests that the human brain contains facial expression recognition detectors specialized for specific discrete emotions. However, some human behavioral data suggest that humans recognize expressions as similar and not discrete entities. This latter observation has been taken to indicate that internal representations of facial expressions may be best characterized as varying along continuous underlying dimensions. To examine the potential compatibility of these two views, the present study compared human and support vector machine (SVM) facial expression recognition performance. Separate SVMs were trained to develop fully automatic optimal recognition of one of six basic emotional expressions in real-time with no explicit training on expression similarity. Performance revealed high recognition accuracy for expression prototypes. Without explicit training of similarity detection, magnitude of activation across each emotion-specific SVM captured human judgments of expression similarity. This evidence suggests that combinations of expert classifiers from separate internal neural representations result in similarity judgments between expressions, supporting the appearance of a continuous underlying dimensionality. Further, these data suggest similarity in expression meaning is supported by superficial similarities in expression appearance.
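To illustrate the classifier logic the abstract describes (one detector per basic emotion, with graded activations standing in for similarity), here is a minimal Python sketch. The random placeholder features, the choice of LinearSVC, and all names are assumptions for illustration; the study's actual features and SVM configuration are not given here.

```python
import numpy as np
from sklearn.svm import LinearSVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Placeholder data: one feature vector per expression image plus an integer
# emotion label. Real inputs would be image-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))
y = rng.integers(0, len(EMOTIONS), size=600)

# Train one binary detector per emotion ("this emotion vs. the rest"),
# with no explicit training on expression similarity.
detectors = [LinearSVC().fit(X, (y == k).astype(int)) for k in range(len(EMOTIONS))]

def activation_profile(x: np.ndarray) -> np.ndarray:
    """Signed margin distance from each emotion-specific detector; the study
    relates such graded activations to human expression-similarity judgments."""
    return np.array([clf.decision_function(x.reshape(1, -1))[0] for clf in detectors])

profile = activation_profile(X[0])
print(EMOTIONS[int(np.argmax(profile))])  # detector with the strongest activation
```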
Social and emotional functions in facial expression and communication: The readout hypothesis
  • R Buck
Buck, R. (1994). Social and emotional functions in facial expression and communication: The readout hypothesis. Biological Psychology, 38, 95-115. http://dx.doi.org/10.1016/0301-0511(94)90032-9
The role of context in interpreting facial expression: Comment on Russell and Fehr (1987)
  • P Ekman
  • M O'Sullivan
Ekman, P., & O'Sullivan, M. (1988). The role of context in interpreting facial expression: Comment on Russell and Fehr (1987). Journal of Experimental Psychology: General, 117, 86-88. http://dx.doi.org/10.1037/0096-3445.117.1.86
Computational models and the human perception of emotional body language (EBL)
  • M Fridin
  • A Barliya
  • E Schechtman
  • B De Gelder
  • T Flash
Fridin, M., Barliya, A., Schechtman, E., de Gelder, B., & Flash, T. (2009). Computational models and the human perception of emotional body language (EBL). Proceedings of AISB (pp. 16-19). Edinburgh, Scotland: SSAISB.
Asymmetries of influence: Differential effects of body postures on perceptions of emotional facial expressions
  • C J Mondloch
  • N L Nelson
  • M Horner
Mondloch, C. J., Nelson, N. L., & Horner, M. (2013). Asymmetries of influence: Differential effects of body postures on perceptions of emotional facial expressions. PLoS ONE, 8(9), e73605. http://dx.doi.org/10.1371/journal.pone.0073605