Article · PDF available

The "Disgust face" conveys anger to children

Authors:
  • Sherri C. Widen
  • James A. Russell

Abstract

What does the "facial expression of disgust" communicate to children? When asked to label the emotion conveyed by facial expressions widely used in research, children (N = 84, 4 to 9 years) were much more likely to label the "disgust face" as anger than as disgust; indeed, they were just as likely to label it as anger as they were the "angry face." Shown someone with a disgust face and asked to generate a possible cause and consequence of that emotion, children provided answers indistinguishable from those they provided for an angry face, even the minority who had labeled the disgust face as disgust. A majority of adults (N = 22) labeled the same disgust faces shown to the children as disgust and generated causes and consequences that implied disgust.
... There are many challenges in implementing emotions in virtual agents and social robots. While understanding emotions, especially basic emotions, is considered to be intuitive for humans [6], both children and adults have difficulty in distinguishing between some emotions in human images, such as anger and disgust [7,8]. Further, limitations in movements of social robots or in the design of virtual agents can affect how affective expressions are implemented. ...
... Furthermore, adults [8] and children [7] are reported to have difficulty in distinguishing between disgust and anger, which are two of the expressions that did not receive a high accuracy in our study. Recognition of the affective expressions is also expected to be improved if the robot's expressions are observed in a specific context of human-robot interaction rather than watching them in isolation and with no explicit contextual cues. ...
Article
Full-text available
This article proposes design guidelines for 11 affective expressions for the Miro robot, and evaluates the expressions through an online video study with 116 participants. All expressions were recognized significantly above the chance level. For six of the expressions, the correct response was selected significantly more than the others, while more than one emotion was associated with some of the other expressions. Design decisions and the robot’s limitations that led to selecting other expressions, along with the correct expression, are discussed. We also investigated how participants’ abilities to recognize human and animal emotions, their tendency to anthropomorphize, and their familiarity with and attitudes towards animals and pets might have influenced the recognition of the robot’s affective expressions. Results show a significant impact of human emotion recognition, difficulty in understanding animal emotions, and anthropomorphism tendency on recognition of the robot’s expressions. We did not find such effects for familiarity with or attitudes towards animals and pets in terms of how they influenced participants’ recognition of the designed affective expressions. We further studied how the robot is perceived in general and showed that it is mostly perceived to be gender neutral, and, while it is often associated with a dog or a rabbit, it can also be perceived as a variety of other animals.
... Furthermore, previous research also suggests that observers spend more time looking at the mouth regions of disgusted facial expressions (more specifically, the upper lip in [40]). Previous research also suggests high confusion rates between disgust and anger, especially disgusted expressions being misclassified as angry [53][54][55][56][57]. Jack et al. [56] found that the resolution of this misclassification occurs when the upper lip raiser action unit is activated in the expression dynamics. ...
... The enhanced classification of disgust with fixation at the mouth was associated with a reduction in the misclassifications of disgusted expressions as angry. These two expressions are commonly confused, with anger typically being confused for disgust more often than vice versa [53,[55][56][57]63,65]. This same asymmetry in confusion rates was also evident in our data; moreover, foveation of the mouth reduced misclassifications of disgusted expressions as angry whereas it did not reduce misclassifications of angry expressions as disgusted. ...
Article
Full-text available
Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
... For example, there is an open question in the literature concerning the emotion of disgust. Several studies have shown that children and adults associate the facial expression of disgust with one and the same basic emotion: anger (Aviezer et al., 2008; Widen & Russell, 2010). ...
Thesis
Full-text available
This thesis project aims to study the relevance of using equine-assisted mediation in therapy with people who have addictive disorders. First, it will examine the influence of patients' attachment style on their level of autonomy, drawing on theoretical models such as attachment theory (Bowlby, 1969-82; Hazan, 1987) and the theory of autonomous motivation (Deci & Ryan, 2000). Second, the objective will be to explore, describe, and evaluate the processes at work during the therapeutic intervention with the horse. This research falls within the framework of understanding and evaluating complex interventions, a major research axis of the APEMAC laboratory. The central question of this thesis project concerns the place of attachment theory in health psychology interventions, particularly in programs for preventing renewed consumption and relapse. How are motivation and attachment related? How might attachment disorders hinder the recovery process and the maintenance of abstinence in these patients? Can the use of the horse in therapy increase patients' sense of internal security and foster the development of their self-regulation skills and autonomous motivation? In short, can patients' autonomy be increased by offering them an intervention that targets attachment disorders? Data will be collected at the Centre de Soins de Suite et de Réadaptation en Addictologie "la Fontenelle". Throughout this research, we plan to carry out various quantitative assessments using psychometric instruments. We will also use qualitative methods, conducting clinical interviews.
... This is the first longitudinal study to investigate PAE-related effects on mentalizing ability from late childhood to adolescence. Overall RME accuracy improved with age for all participants, which is consistent with previous findings suggesting that the ability to recognize emotions from facial expressions increases with age (Fox, 2001; Herba & Phillips, 2004; Widen & Russell, 2007). However, the mentalizing impairment seen in children with FAS at 11 years persisted at ... [Table 3: Overall and valence accuracy on the RME by AA/day and FASD diagnostic group at the 11- and 17-year follow-up assessments. Values are means (SD) for the total mean raw scores, except for AA/day, where values are Pearson r or standardized regression coefficients (β) after adjustment for confounders.] ...
Article
Background: The ability to identify and interpret facial emotions plays a critical role in effective social functioning, which may be impaired in individuals with fetal alcohol spectrum disorders (FASD). We previously reported deficits in children with fetal alcohol syndrome (FAS) and partial FAS (PFAS) on the "Reading the Mind in the Eyes" (RME) test, which assesses interpretation of facial emotion. This adolescent follow-up study was designed to determine whether this impairment persists or represents a developmental delay; to classify the RME stimuli by valence (positive, negative or neutral) and determine whether RME deficits differ by affective valence; and to explore how components of executive function mediate these associations. Methods: The RME stimuli were rated and grouped according to valence. 62 participants who had been administered the RME in late childhood (mean ± SD = 11.0 ± 0.4 yr) were re-administered this test during adolescence (17.2 ± 0.6 yr). Overall and valence-specific RME accuracy was examined in relation to prenatal alcohol exposure (PAE) and FASD diagnosis. Results: Children with FAS (n = 8) and PFAS (n = 15) performed more poorly on the RME than nonsyndromal heavily exposed (HE; n = 19) and controls (n = 20). By adolescence, the PFAS group performed similarly to HE and controls, whereas the FAS group continued to perform more poorly. No deficits were seen for positively-valenced items in any of the groups. For negative and neutral items, in late childhood those with FAS and PFAS performed more poorly than HE and controls, but by adolescence only the FAS group continued to perform more poorly. Test-retest reliability was moderate across the two ages. At both timepoints, the effects in the FAS group were partially mediated by Verbal Fluency but not by other aspects of executive function. Conclusions: Individuals with full FAS have greater difficulty interpreting facial emotions than nonsyndromal HE and healthy controls in both childhood and adolescence. By contrast, RME deficits in individuals with PFAS in childhood represent a developmental delay.
... Past work has explored these dimensional and categorical mappings of emotion in adults (e.g., Cowen & Keltner, 2021); however, it is still unclear what these relations might look like in children, and how they develop. Some kinds of relations can be inferred through patterns of errors observed in verbal-response paradigms, such as the consistency of children's confusion about anger versus disgust (Leitzke & Pollak, 2016; Widen & Russell, 2010b). Yet, for the most part, information about how children perceive and think about underlying relations among emotion cues is limited. ...
Article
Full-text available
The present study examined how children spontaneously represent facial cues associated with emotion. 106 three‐ to six‐year‐old children (48 male, 58 female; 9.4% Asian, 84.0% White, 6.6% more than one race) and 40 adults (10 male, 30 female; 10% Hispanic, 30% Asian, 2.5% Black, 57.5% White) were recruited from a Midwestern city (2019–2020), and sorted emotion cues in a spatial arrangement method that assesses emotion knowledge without reliance on emotion vocabulary. Using supervised and unsupervised analyses, the study found evidence for continuities and gradual changes in children's emotion knowledge compared to adults. Emotion knowledge develops through an incremental learning process in which children change their representations using combinations of factors—particularly valence—that are weighted differently across development.
... This was in line with the results for Miro's expressions: while happy and sad were recognized accurately by more than half of the participants, most participants had difficulty understanding Miro's angry and disgusted expressions. This is expected, as angry and disgusted expressions are reportedly hard for many children [37] and adults [38] to distinguish. Therefore, it is challenging to determine whether and how these expressions can be improved in Miro, which needs to be investigated in future work. ...
Chapter
Social robots that are capable of showing affective expressions can improve human-robot interaction and users’ experiences in many ways. This capability is important in many application areas. In this paper, we describe the design and evaluation of 11 affective expressions for Miro, an animal-like robot. Affective expressions were inspired by the animal and human behavior literature. The designs were evaluated through a video study on Mechanical Turk with 88 participants. Five of the expressions—happy, sad, excited, surprised, and tired—were correctly recognized by more than half of the participants. While fewer participants were able to recognize the other, more complex affective expressions, we observed a significant correlation between the recognition of the robot’s displayed affective states and participants’ understanding of human emotions. This suggests that the reduced accuracy in recognizing the other affective expressions may be due to the general challenges involved in recognizing complex emotions.
Article
Full-text available
Infants’ ability to discriminate facial expressions has been widely explored, but little is known about the rapid and automatic ability to discriminate a given expression against many others in a single experiment. Here we investigated the development of facial expression discrimination in infancy with fast periodic visual stimulation coupled with scalp electroencephalography (EEG). EEG was recorded in eighteen 3.5- and eighteen 7-month-old infants presented with a female face expressing disgust, happiness, or a neutral emotion (in different stimulation sequences) at a base stimulation frequency of 6 Hz. Pictures of the same individual expressing other emotions (either anger, disgust, fear, happiness, sadness, or neutrality, randomly and excluding the expression presented at the base frequency) were introduced every six stimuli (at 1 Hz). Frequency-domain analysis revealed an objective (i.e., at the predefined 1-Hz frequency and harmonics) expression-change brain response in both 3.5- and 7-month-olds, indicating the visual discrimination of various expressions from disgust, happiness and neutrality from these early ages. At 3.5 months, the responses to the discrimination from disgust and happiness expressions were located mainly on medial occipital sites, whereas a more lateral topography was found for the response to the discrimination from neutrality, suggesting that expression discrimination from an emotionally neutral face relies on visual cues distinct from those involved in discrimination from a disgusted or happy face. Finally, expression discrimination from happiness was associated with a reduced activity over posterior areas and an additional response over central frontal scalp regions at 7 months as compared to 3.5 months. This result suggests developmental changes in the processing of happiness expressions as compared to negative/neutral ones within this age range.
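The frequency-tagging logic described in this abstract (faces presented at a 6 Hz base rate, an expression change every sixth stimulus at 1 Hz, and a brain response measured at 1 Hz and its harmonics) can be illustrated with a minimal simulation. This is a sketch under assumed parameters (sampling rate, component amplitudes, noise level), not the study's actual analysis pipeline; all values and names below are illustrative.

```python
import numpy as np

# Hypothetical sketch of a frequency-tagging analysis: a 6 Hz base
# response plus a weaker 1 Hz "oddball" (expression-change) response,
# buried in noise, recovered via the amplitude spectrum.

fs = 512                      # sampling rate in Hz (assumed)
dur = 20                      # seconds of simulated recording
t = np.arange(0, dur, 1 / fs)

rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 6 * t)    # base-rate response (6 Hz)
       + 0.3 * np.sin(2 * np.pi * 1 * t)  # oddball response (1 Hz)
       + 0.5 * rng.standard_normal(t.size))  # stand-in for EEG noise

# Frequency-domain analysis: amplitude spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Amplitude at the frequency bin closest to f (in Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Sum the oddball response over 1 Hz harmonics, skipping the base
# frequency (6 Hz) and its harmonics, as is common in such designs.
harmonics = [f for f in range(1, 13) if f % 6 != 0]
oddball_amp = sum(amp_at(f) for f in harmonics)
noise_floor = np.median(spectrum[(freqs > 0.5) & (freqs < 13)])
print(oddball_amp, amp_at(6), noise_floor)
```

In real data, the summed oddball amplitude would be compared against the noise level of neighboring frequency bins to establish an objective response, which is the sense in which the abstract calls the 1-Hz response "objective".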
Article
Social interactions often require the simultaneous processing of emotions from facial expressions and speech. However, the development of the gaze behavior used for emotion recognition, and the effects of speech perception on the visual encoding of facial expressions is less understood. We therefore conducted a word-primed face categorization experiment, where participants from multiple age groups (six-year-olds, 12-year-olds, and adults) categorized target facial expressions as positive or negative after priming with valence-congruent or -incongruent auditory emotion words, or no words at all. We recorded our participants' gaze behavior during this task using an eye-tracker, and analyzed the data with respect to the fixation time toward the eyes and mouth regions of faces, as well as the time until participants made the first fixation within those regions (time to first fixation, TTFF). We found that the six-year-olds showed significantly higher accuracy in categorizing congruently primed faces compared to the other conditions. The six-year-olds also showed faster response times, shorter total fixation durations, and faster TTFF measures in all primed trials, regardless of congruency, as compared to unprimed trials. We also found that while adults looked first, and longer, at the eyes as compared to the mouth regions of target faces, children did not exhibit this gaze behavior. Our results thus indicate that young children are more sensitive than adults or older children to auditory emotion word primes during the perception of emotional faces, and that the distribution of gaze across the regions of the face changes significantly from childhood to adulthood.
Chapter
Disgust has been described as a basic emotion that evolved largely as a defensive mechanism. Although previously described as the ‘forgotten emotion,’ research on the emotion of disgust has seen a considerable rise in the last few decades. Accordingly, investigation into individual differences in disgust responding has become increasingly prominent in the emotion literature over the last three decades. With this increase in attention, a parallel evolution in the development of disgust assessment has also emerged. Each measure has its own strengths and weaknesses, and the choice of measure may vary depending on the research question of interest, the study population, and the research methodology. The present chapter reviews the assessment of disgust through available self-report, behavioral, and implicit measures. Implications for understanding disgust and its measurement are discussed.
Article
Full-text available
Historically, research characterizing the development of emotion recognition has focused on identifying specific skills and the age periods, or milestones, at which these abilities emerge. However, advances in emotion research raise questions about whether this conceptualization accurately reflects how children learn about, understand, and respond to others' emotions in everyday life. In this review, we propose a developmental framework for the emergence of emotion reasoning, that is, how children develop the ability to make reasonably accurate inferences and predictions about the emotion states of other people. We describe how this framework holds promise for building upon extant research. Our review suggests that use of the term emotion recognition can be misleading and imprecise, with the developmental processes of interest better characterized by the term emotion reasoning. We also highlight how the age at which children succeed on many tasks reflects myriad developmental processes. This new framing of emotional development can open new lines of inquiry about how humans learn to navigate their social worlds.
Article
Full-text available
This study is concerned with the recognition of the human facial expressions of emotions proposed by Ekman and Friesen (1978a). The first objective is to examine the developmental pattern of the recognition of some of these expressions, and the second is to verify whether recognition is affected by the accentuation of the expression. Ninety children, between the ages of 5 and 10 years, and 30 young adults participated in the study. They were asked to choose, from several words (happiness, anger, surprise, and disgust), the one that best described the emotion portrayed by the facial expression. The results indicate that recognition accuracy increases with age for the expressions of surprise and disgust. Two kinds of errors were more common than others for the children: the interpretation of disgust expressions as anger expressions, and the interpretation of surprise expressions as disgust expressions. Finally, the results generally do not support the prediction concerning the accentuation of facial expression.
Article
Full-text available
Children (N = 160), aged 3 to 4 years, generated stories describing the causes of six different emotions: happiness, surprise, fear, anger, disgust, and sadness. The emotion was specified to the child either by a word (such as scared or disgusted) or by a photograph of a facial expression said to be a universal, biologically based signal for that emotion. For no emotion did the face produce significantly better performance than did the word. For fear and disgust, the word produced significantly better performance than did the face.
Article
Full-text available
In the last 20 years, it has been established that children's understanding of emotion changes with age. A review of the extensive literature reveals at least nine distinct components of emotion understanding that have been studied (from the simple attribution of emotions on the basis of facial cues to the emotions involved in moral judgments). Despite this large corpus of findings, there has been little research in which children's understanding of all these various components has been simultaneously assessed. The goal of the current research was to examine the development of these nine components and their interrelationship. For this purpose, 100 children of 3, 5, 7, 9 and 11 years were tested on all nine components. The results show that: (1) children display a clear improvement with age on each component; (2) three developmental phases may be identified, each characterized by the emergence of three of the nine components; (3) correlational relations exist among components within a given phase; and (4) hierarchical relations exist among components from successive phases. The results are discussed in terms of their theoretical and practical implications.
Chapter
This chapter deals with three broad issues: Whether emotions are epiphenomenal, how emotions play a crucial role in determining appraisal processes, and what the mechanisms are by which emotions may influence interpersonal behavior. We present evidence from studies indicating that emotions play a crucial role in the regulation of social behavior. Social regulation by emotion is particularly clear in a process we call social referencing—the active search by a person for emotional information from another person, and the subsequent use of that emotion to help appraise an uncertain situation. Social referencing has its roots in infancy, and we propose that it develops through a four-level sequence of capacities to process emotional information from facial expression. We discuss whether the social regulatory functions of emotion are innate or socially learned, whether feeling plays an important role in mediating the effects of emotional expressions of one person on the behavior of another, and whether stimulus context is important in accounting for differences in reaction to the same emotional information.
Article
A structural model of emotions was used to reveal patterns in how children interpret the emotional facial expressions of others. Three-, four-, and five-year-olds, and adults (n = 38 in each group) were asked to match 15 emotion-descriptive words (happy, excited, surprised, afraid, scared, angry, mad, disgusted, miserable, sad, sleepy, calm, relaxed, wide awake, and, as a check on response bias, insipid) with still photographs of actors showing different facial expressions. Whereas prior research had indicated that preschool-aged children are "inaccurate" in associating labels with faces, our results indicated that such research may have severely underestimated children's knowledge of emotions. In this study children used terms systematically to refer to a specifiable range of expressions, centered around a focal point. Multidimensional scaling of the word/facial expression associations yielded a two-dimensional structure able to account for the interrelationships among emotions, and this structure was the same for all age groups. The nature of this structure, the blurry boundaries between emotion words, and developmental shifts in the referents of emotion words suggested the primacy of two dimensions, pleasure-displeasure and arousal-sleep, in children's interpretation of emotion.
Article
12-month-old infants were observed responding to 3 stimulus toys: 1 pleasant, 1 ambiguous, and 1 aversive. One-third of the infants (N = 16 in the final analysis) were randomly assigned to each of 3 maternal display conditions. In 1 condition mothers displayed positive affect in face, voice, and gestures; in 1, mothers displayed negative, disgust affect; and in 1, mothers were silent and neutral. After all 3 toys had been presented once, infants in the Mother Positive and Mother Negative conditions were shown the toys again, this time with their mothers silent and neutral. The specificity of effects was examined by comparing infant responses to the stimulus toys with responses to free-play toys. Maternal displays influenced responses only to the stimulus toys as predicted by the social referencing hypothesis. The results also suggested that maternal negative affect displays have a more immediate effect on infant behavior than do positive affect displays. Finally, the data indicated that the infants carried over their appraisals to the second presentation of the toys even though their mothers ceased delivering the affectively toned messages.
Article
This study compared 17 abused and 17 matched, nonabused children on their ability to identify six facial expressions of emotions and on teacher ratings of social competency. Abused children were less skilled in decoding facial expressions of emotions and were rated less socially competent than nonabused children. The findings suggest a strategy for studying the development of emotion recognition skills by abused and nonabused children.