Conference Paper

Emotional faces of children and adults: What changes in their perception

Authors:
  • Università degli Studi della Campania "Luigi Vanvitelli"
  • Università degli studi della Campania "Luigi Vanvitelli", Caserta, Italy

... He et al. [17] reported an "own age effect" in both young and older adults. Children and middle-aged adults seem not to be subject to it; [11] reported no age effects in children and adults asked to decode children's and young adults' emotional faces. ...
... The abovementioned factors, all contributing to the interpretation of emotional faces, have been shown to affect decoding accuracy differently. Indeed, it has been shown that factors such as the face's familiarity, cultural closeness, the actors' skills, black versus white photos, dynamic versus static faces, the subjects' education, and their visual acuity can also play a role [1,2,5,8,11,25,27]. The decoding of emotional faces therefore cannot be studied in isolation. ...
Article
Full-text available
This paper proposes a systematic approach to investigating the impact of factors such as the gender and age of participants and the gender and age of faces on the decoding accuracy of emotional expressions of disgust, anger, sadness, fear, happiness, and neutrality. The emotional stimuli consisted of 76 posed and 76 naturalistic faces of different ages (young, middle-aged, and older) selected from the FACES and SFEW databases. Either a posed or a naturalistic face-decoding task was administered. The posed-face decoding task involved three differently aged groups (young, middle-aged, and older adults). The naturalistic-face decoding task involved two groups of older adults. For the posed decoding task, older adults were found to be significantly less accurate than middle-aged and young participants, and middle-aged participants significantly less accurate than young participants. Old faces were decoded significantly less accurately than young and middle-aged faces for disgust and anger, and than young faces for fear and neutrality. Female faces were decoded significantly more accurately than male faces for anger and sadness, and significantly less accurately for neutrality. For the naturalistic decoding task, older adults were significantly less accurate in decoding naturalistic rather than posed faces of disgust, fear, and neutrality, contradicting the expectation that older adults would be supported by prior naturalistic emotional experience. Young faces were decoded more accurately than old and middle-aged faces for disgust and anger, and than old faces for neutrality. Female faces were decoded significantly more accurately than male faces for fear, and significantly less accurately for anger. Significant effects and significant interdependencies were observed among the age of participants, the emotional categories, the age and gender of faces, and the type of stimuli (naturalistic vs. posed), which did not allow the effects of each variable to be distinctly isolated. Nevertheless, the data collected in this paper weaken both the assumption that women have an enhanced ability to display and decode emotions and the assumption that participants are better at decoding faces closer to their own age (the "own age bias" theory). Considerations are made on how these data could guide the development of assessment tools and preventive interventions, and the design of emotionally and socially believable virtual agents and robots to assist and coach emotionally vulnerable people in their daily routines.
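As a rough illustration of how decoding accuracy might be aggregated across the factors examined above (participant age group, emotion, face age, and face gender), the following Python sketch assumes a hypothetical long-format trial table; the column names and values are illustrative placeholders, not the authors' actual materials or analysis pipeline.

```python
# Hedged sketch: aggregate decoding accuracy over the factors discussed above.
# The trial table and its column names are hypothetical placeholders.
import pandas as pd

# One row per trial: participant, their age group, the emotion shown,
# the age and gender of the stimulus face, and whether the label was correct.
trials = pd.DataFrame({
    "participant":           [1, 1, 2, 2, 3, 3],
    "participant_age_group": ["young", "young", "middle", "middle", "older", "older"],
    "emotion":               ["anger", "fear", "anger", "fear", "anger", "fear"],
    "face_age":              ["young", "old", "young", "old", "young", "old"],
    "face_gender":           ["female", "male", "female", "male", "female", "male"],
    "correct":               [1, 1, 1, 0, 0, 0],
})

# Mean decoding accuracy per participant age group and emotion ...
by_group = trials.groupby(["participant_age_group", "emotion"])["correct"].mean()

# ... and per face age and face gender, mirroring the comparisons reported above.
by_face = trials.groupby(["face_age", "face_gender"])["correct"].mean()

print(by_group, by_face, sep="\n\n")
```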
... The effect of age on emotion decoding has been investigated by several studies focusing on how this ability evolves during growth and with aging. The results of these studies seem to converge on reporting that equally aged children are better at recognizing some emotions than others, for example happiness and anger compared to sadness and fear (Esposito et al., 2018; Chronaki et al., 2015; Lawrence et al., 2015; Rodger et al., 2015). However, how the learning and growing processes affect children's ability to recognize emotions over time is still a matter of investigation. ...
... Nevertheless, results for children and adolescents are conflicting. In fact, while some studies have shown the existence of the OAB in children, who tend to recognize the facial expressions of their peers better (Rhodes & Anastasi, 2005), others have shown that children can recognize emotional expressions equally well regardless of whether these are expressed by peers or adults (Esposito et al., 2018). Studies comparing elders' and young adults' ability to recognize facial expressions, and the effect of the OAB on this process, have reached discordant results. ...
... While some studies have shown that this is true for adults [17], [18], who tend to recognize the facial expressions of peers more quickly and accurately than those expressed by younger or older people, results for children and adolescents are conflicting. In fact, while some studies have shown the existence of the Own Age Bias in children, who tend to recognize the facial expressions of their peers better [19], others have shown that children can recognize emotional expressions equally well, regardless of whether these are expressed by their peers or by adults [20]. Even the gender of the face expressing an emotion could affect the accuracy of facial expression recognition. ...
Article
Full-text available
Considering the increasing use of assistive technologies in the shape of virtual agents, it is necessary to investigate the factors that characterize and affect the interaction between the user and the agent; among these emerges the way in which people interpret and decode synthetic emotions, i.e., emotional expressions conveyed by virtual agents. For these reasons, a study is proposed involving 278 participants split into differently aged groups (young, middle-aged, and elders). Within each age group, some participants were administered a "naturalistic decoding task," a recognition task of human emotional faces, while others were administered a "synthetic decoding task," namely emotional expressions conveyed by virtual agents. Participants were required to label pictures of female and male humans or virtual agents of different ages (young, middle-aged, and old) displaying static expressions of disgust, anger, sadness, fear, happiness, surprise, and neutrality. Results showed that young participants achieved better recognition performances (compared to older groups) for anger, sadness, and neutrality, while female participants achieved better recognition performances (compared to males) for sadness, fear, and neutrality; sadness and fear were better recognized when conveyed by real human faces, while happiness, surprise, and neutrality were better recognized when conveyed by virtual agents. Young faces were better decoded when expressing anger and surprise, middle-aged faces were better decoded when expressing sadness, fear, and happiness, while old faces were better decoded in the case of disgust; on average, female faces were better decoded than male ones.
... Some coginfocom researchers have studied linguistic representation of reasoning [35]. The dynamics of human use of gesture during dialogue is a core topic in coginfocom [36], [37], [38], [39], [40], [41], as is emotion [42], the linguistic expression of emotion [43], emotion voicing [44], [45], [46], emotion depiction [47], [48], influence of emotion on reasoning [49], and the synthesis of modalities of expression [50], [51], [52], [53], [54], [55], [56]. ...
... Some have addressed linguistic representation of reasoning [16]. Inside the coginfocom community the dynamics of gesture in dialogue has been addressed [17], [18], [19], [20], [21], [22], as have emotion [23], the language of emotion [24], voice of emotion [25], [26], [27], image of emotion [28], [29], impacts of emotion on reasoning [30], and modality synthesis [31], [32], [33], [34], [35], [36], [37]. ...
... The development of emotional skills, among which we also consider the recognition of emotions from faces, is an ongoing process from childhood to adulthood. Moreover, the accuracy and rapidity with which individuals recognize emotions are still changing (increasing) during adolescence [6][7][8][9]. When it comes to individuals with ASD, studies have shown that their ability to recognize emotions may be impaired compared with typically developing individuals [10][11]. ...
... In this context, it was hypothesized that males rely less on subtle differences in facially expressed arousal when processing faces (Thayer & Johnsen, 2000), which could lead to over- or misinterpretations of neutral expressions. Furthermore, regarding facial emotion recognition, a moderate female superiority seems to exist (e.g., Donges, Kersting, & Suslow, 2012; Montagne et al., 2005; Andric-Petrovic et al., 2019; Thompson & Voyer, 2014), although this effect seemingly depends, inter alia, on the properties of the stimulus such as valence, the specific emotion, the gender of the displayed face, or the subtleness of the emotion (e.g., Connolly, Lefevre, Young, & Lewis, 2019; Esposito et al., 2018; Hoffmann et al., 2010; Thompson & Voyer, 2014). The result regarding neutral words contradicts previous studies that showed no gender difference (Deckert, 2014). ...
Article
Full-text available
Subjective emotional arousal in typically developing adults was investigated in an explorative study. 177 participants (20–70 years) rated facial expressions and words for self-experienced arousal and perceived intensity, and completed the Difficulties in Emotion Regulation Scale and the Hospital Anxiety and Depression Scale (HADS-D). Exclusion criteria were psychiatric or neurological diseases, or clinically relevant scores on the HADS-D. Arousal regarding faces and words was significantly predicted by emotional clarity. Separate analyses showed the following significant results: arousal regarding faces and arousal regarding words constantly predicted each other; negative faces were predicted by age and intensity; neutral faces by gender and impulse control; positive faces by gender and intensity; negative words by emotional clarity; and neutral words by gender. Males showed higher arousal scores than females regarding neutral faces and neutral words; for the other arousal scores, no explicit group differences were shown. Cluster analysis yielded three distinct emotional characteristics groups: an "emotional difficulties disposition group" (mainly females; highest emotion regulation difficulties, depression, and anxiety scores; by trend the highest arousal), a "low emotional awareness group" (exclusively males; lowest awareness regarding currently experienced emotions; by trend intermediate arousal), and a "low emotional difficulties group" (exclusively females; lowest values throughout). No age effect was shown. Results suggest that arousal elicited by facial expressions and words are specialized parts of a greater emotional processing system and that typically developing adults show some kind of stable, modality-unspecific dispositional baseline of emotional arousal. Emotional awareness and clarity, and impulse control are probably trait aspects of emotion regulation that influence emotional arousal in typically developing adults and can be regarded as aspects of meta-emotion. Different emotional personality styles were shown between as well as within gender groups.
Chapter
Full-text available
This study aims to contribute to the research on emotion recognition through facial cues during preschool age, which is a critical period for human cognitive development. The work investigates differences between 3-year and 5-year-old children in the ability to decode facial expressions of five emotions (happiness, anger, surprise, sadness and fear), by using three different types of stimuli: children's faces, stylized faces, and picto. To this aim, 133 children aged between 3 and 5.6 years (mean age = 4.2; SD = 0.9; 65 females) were recruited and assigned to six different groups according to their age (3-year-olds versus 5-year-olds) and the type of stimuli presented (children's faces, stylized faces, picto) during the experimental session. Children were presented with one of the three sets of stimuli, and they were required to perform an emotional recognition task. Three repeated measures ANOVAs were separately conducted on the recognition accuracy scores associated with the three different types of stimuli. Results show that children's age and emotional category play a crucial role in the emotion recognition process, even though to a different extent for each type of stimuli. Instead, gender differences were observed only in response to children's facial expressions. These results will be discussed in the proposed paper, considering the implications they may have for human–computer interaction research.
Keywords: Emotion recognition, Facial emotion expressions, Preschool children, Affective computing
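To make the analysis step concrete, here is a minimal Python sketch of a repeated-measures ANOVA on recognition accuracy with emotion as the within-subject factor, using statsmodels' AnovaRM. The data are synthetic and the layout is a simplification of the chapter's design: between-subject factors such as age group and stimulus type are omitted here and would require separate or mixed-model analyses.

```python
# Minimal sketch of a repeated-measures ANOVA on recognition accuracy.
# Synthetic data; in the study each child would contribute one accuracy
# score per emotion for the stimulus set they were shown.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
emotions = ["happiness", "anger", "surprise", "sadness", "fear"]
records = []
for child in range(1, 21):                      # 20 hypothetical children
    for emo in emotions:
        accuracy = rng.uniform(0.4, 1.0)        # placeholder accuracy score
        records.append({"child": child, "emotion": emo, "accuracy": accuracy})
df = pd.DataFrame(records)

# Within-subject factor: emotion. Age group and stimulus type (between-subject
# factors) are left out of this simplified sketch.
res = AnovaRM(df, depvar="accuracy", subject="child", within=["emotion"]).fit()
print(res)
```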
Article
Full-text available
There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. A large-scale study examines whether women are consistently more expressive than men or whether the effects are dependent on the emotion expressed. Studies of gender differences in expressivity have been somewhat restricted to data collected in lab settings or which required labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants’ own webcams. Using a new automated facial coding technology we coded facial activity. We find that women are not universally more expressive across all facial actions. Nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. It appears that generally women express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and is dependent on the discrete emotional state.
Conference Paper
Full-text available
Every year, another generation of smartphones is released that is more capable and more powerful than the prior top-class devices. Because of the increased performance and ergonomics of smartphones, the shift away from personal computers is continuously accelerating. Due to this effect, many developers are interested in adapting PC-based solutions to mobile platforms. In this paper, we focus on adapting face recognition algorithms to the Android mobile platform. These algorithms are already part of a Windows desktop application, and our aim is to create an architecture in which the application logic shares the same source code across platforms. That is, the article is about the cross-platform development of image processing algorithms. According to our long-term plans, we develop our face recognition algorithms under Windows, but every stable release will also be built as part of an Android application. In addition, multiple user interfaces must be developed, one for each platform, and we also need interfaces to reach the functionalities of the common application logic from the user interfaces. Besides presenting the concept of the architecture, we also quantify the performance of the same algorithms on the different platforms.
Article
Full-text available
Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6–16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6–16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.
Article
Full-text available
According to a common sense theory, facial expressions signal specific emotions to people of all ages and therefore provide children easy access to the emotions of those around them. The evidence, however, does not support that account. Instead, children's understanding of facial expressions is poor and changes qualitatively and slowly over the course of development. Initially, children divide facial expressions into two simple categories (feels good, feels bad). These broad categories are then gradually differentiated until an adult system of discrete categories is achieved, likely in the teen years. Children's understanding of most specific emotions begins not with facial expressions, but with their understanding of the emotion's antecedents and behavioral consequences.
Article
Full-text available
The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8–11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and the affective reactions associated with viewing facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Finally, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental period studied, affective reactions elicited by facial expressions were characterised by a decrease in arousal over the course of late childhood.
Article
Full-text available
We describe a set of face processing tests suitable for use with children aged from 4 to 10 years, which include tests of expression, lipreading and gaze processing as well as identification. The tests can be administered on paper or using a computer, and comparisons between the performance on computer- and paper-based versions suggest that format of administration makes little difference. We present results obtained from small samples of children at four different age groups (Study 1, computer-based tests), and larger samples at three age groups (Study 2, paper-based tests) from preschool to 10 years of age. The tests were found to be developmentally sensitive. There were quite strong correlations between performance on different tests of the same face processing ability (e.g. gaze processing), and generally rather weaker correlations between tests of different abilities.
Article
Full-text available
Quantitative and qualitative reviews of the literature on sex differences in facial expression processing (FEP) have yielded conflicting findings regarding children. This study was designed to review quantitatively the literature on sex differences in FEP from infancy through adolescence and to evaluate consistency between the course of FEP development and predictions derived from preliminary theoretical models. Results, which indicate a female advantage at FEP, are consistent with predictions derived from an integrated neurobehavioral/social constructivist model. These findings suggest a need for research examining both neurological maturation and socialization as important factors in the development of sex differences in FEP and related skills. Possible directions for future study are discussed, with emphasis on the need to integrate the infant literature with research focused on older children and adults.
Article
Full-text available
The authors tested gender differences in emotion judgments by utilizing a new judgment task (Studies 1 and 2) and presenting stimuli at the edge of conscious awareness (Study 2). Women were more accurate than men even under conditions of minimal stimulus information. Women's ratings were more variable across scales, and they rated correct target emotions higher than did men.
Article
Full-text available
Most research on the perception of emotional expressions is conducted using static faces as stimuli. However, facial displays of emotion are a highly dynamic phenomenon, and a static photograph is a very unnatural representation of them. The goal of the present research was to assess the role of stimulus dynamics as well as subjects' sex in the perception of emotional expressions. In the experiment, subjects rated the intensity of expressions of anger and happiness presented as photographs (static stimuli) and animations (dynamic stimuli). The impact of both stimulus dynamics and emotion type on the perceived intensity was observed. The emotions on 'angry faces' were judged as more intense than on 'happy faces', and the intensity ratings were higher in the case of animation rather than photography. Moreover, gender differences in the rated intensity were found. For male subjects, higher intensity ratings for dynamic than for static expressions were noted in the case of anger, whereas in the case of happiness no differences were observed. For female subjects, however, differences for both anger and happiness were significant. The results suggest that the dynamic characteristic of facial display is an important factor in the perception of the intensity of emotional expressions. Its effect, however, depends on the subjects' sex and emotional valence.
Article
Full-text available
The development of children's ability to recognize facial emotions and the role of configural information in this development were investigated. In the study, 100 5-, 7-, 9-, and 11-year-olds and 26 adults needed to recognize the emotion displayed by upright and upside-down faces. The same participants needed to recognize the emotion displayed by the top half of an upright or upside-down face that was or was not aligned with a bottom half that displayed another emotion. The results showed that the ability to recognize facial emotion develops with age, with a developmental course that depends on the emotion to be recognized. Moreover, children at all ages and adults exhibited both an inversion effect and a composite effect, suggesting that children rely on configural information to recognize facial emotions.
Article
Full-text available
Age differences in emotion recognition from lexical stimuli and facial expressions were examined in a cross-sectional sample of adults aged 18 to 85 (N = 357). Emotion-specific response biases differed by age: Older adults were disproportionately more likely to incorrectly label lexical stimuli as happiness, sadness, and surprise and to incorrectly label facial stimuli as disgust and fear. After these biases were controlled, findings suggested that older adults were less accurate at identifying emotions than were young adults, but the pattern differed across emotions and task types. The lexical task showed stronger age differences than the facial task, and for lexical stimuli, age groups differed in accuracy for all emotional states except fear. For facial stimuli, in contrast, age groups differed only in accuracy for anger, disgust, fear, and happiness. Implications for age-related changes in different types of emotional processing are discussed.
Conference Paper
The present paper identifies differences in the expression features of compassion, sympathy, and empathy in British English and Polish that need to be tuned accordingly in socially interactive robots to enable them to operate successfully in these cultures. The results showed that English compassion is characterised by more positive valence and more of a desire to act than Polish wspolczucie. Polish empatia is also characterised by a more negative valence than English empathy, which has a wider range of application. When used in positive contexts, English sympathy corresponds to Polish sympatia; however, it also acquires elements of negative valence in English. The results further showed that although the processes of emotion recognition and expression in robotics must be tuned to culture-specific emotion models, the more explicit patterns of responsiveness (British English for the compassion model in our case) are also recommended for the transfer, to make the cognitive and sensory infocommunication more readily interpretable by the interacting agents.
Conference Paper
The identification of emotional hints from speech has a large number of applications. Machine learning researchers have analyzed sets of acoustic parameters as potential cues for the identification of discrete emotional categories or, alternatively, of the dimensions of emotions. Experiments have been carried out on recordings of simulated or induced emotions, although recently more research has been conducted on spontaneous emotions. However, it is well known that emotion expression depends not only on cultural factors but also on the individual and on the specific situation. In this work we deal with the tracking of annoyance shifts during real phone calls to complaint services. The audio files analyzed show different ways of expressing annoyance, such as disappointment, impotence, or anger. However, variations of parameters derived from intensity, combined with some spectral information and suprasegmental features, have been shown to be very robust for each speaker and annoyance rate. The work also discusses the annotation problem and proposes an extended rating scale in order to include annotators' disagreements. Our frame classification results validated the annotation procedure. Experimental results also showed that shifts in customer annoyance rates could potentially be tracked during phone calls.
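To illustrate the kind of frame-level intensity and spectral parameters mentioned above, the sketch below extracts RMS energy, MFCCs, and spectral centroid with librosa. The file path and frame settings are placeholders, and the actual feature set used in the paper may differ.

```python
# Hedged sketch: frame-level acoustic features of the kind discussed above.
# "call.wav" is a placeholder path; the paper's exact feature set may differ.
import numpy as np
import librosa

y, sr = librosa.load("call.wav", sr=16000)      # mono, 16 kHz
hop = 160                                       # 10 ms hop at 16 kHz

rms = librosa.feature.rms(y=y, frame_length=400, hop_length=hop)          # intensity proxy
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)        # spectral envelope
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)  # spectral brightness

# Stack into one feature matrix, one column per frame, for frame-level classification.
features = np.vstack([rms, mfcc, centroid])
print(features.shape)   # (15, n_frames)
```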
Conference Paper
In the context of automatic behavioral analysis, we aim to classify empathy in human-human spoken conversations. Empathy underlies the human ability to recognize, understand, and react to the emotions, attitudes, and beliefs of others. While empathy and its different manifestations (e.g., sympathy, compassion) have been widely studied in psychology, very little has been done in the computational research literature. In this paper, we present a case study in which we investigate the occurrences of empathy in human-human call-center conversations. In order to propose an operational definition of empathy, we adopt the modal model of emotions, in which the appraisal processes of the unfolding of emotional states are modeled sequentially. We have designed a binary classification system to detect the presence of empathic manifestations in spoken conversations. The automatic classification system has been evaluated on spoken conversations by exploiting and comparing the performance of lexical, acoustic, and psycholinguistic features.
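A minimal sketch of such a binary classifier is shown below, assuming pre-extracted per-segment feature vectors and labels (the arrays here are random placeholders); it uses scikit-learn's logistic regression with cross-validation rather than the authors' actual pipeline.

```python
# Hedged sketch of a binary empathy classifier over pre-extracted features.
# Feature values and labels are random placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_segments = 200
lexical = rng.normal(size=(n_segments, 50))     # e.g. word-count / psycholinguistic statistics
acoustic = rng.normal(size=(n_segments, 20))    # e.g. prosodic and spectral statistics
X = np.hstack([lexical, acoustic])              # simple early fusion of the feature groups
y = rng.integers(0, 2, size=n_segments)         # 1 = empathic manifestation present

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("cross-validated F1:", scores.mean())
```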
Article
Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signals available in faces normalized for contrast and luminance displaying the six emotional expressions, plus neutral. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized with minimum signals (35% on average), whereas fear required the maximum signals (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups including the youngest remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age - disgust, neutral, and anger; expressions that show a more gradual improvement with age - sadness, surprise; and those that remain stable from early childhood - happiness and fear, indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations. © 2015 John Wiley & Sons Ltd.
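The QUEST procedure itself is a Bayesian adaptive method; as a simpler stand-in, the sketch below runs a two-down/one-up staircase (which converges near the 70.7%-correct point) on a simulated observer to estimate the minimum percentage of facial signal needed for recognition. The observer model, step size, and trial count are assumptions for illustration only.

```python
# Hedged sketch: a two-down/one-up staircase as a simple stand-in for QUEST.
# The simulated observer, step size, and trial count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def observer(signal_pct, threshold=55.0, slope=0.15):
    """Simulated probability of a correct response given % of facial signal."""
    p = 1.0 / (1.0 + np.exp(-slope * (signal_pct - threshold)))
    return rng.random() < p

signal = 90.0            # start near a full-signal face (percent of signal)
step = 5.0
correct_streak = 0
reversals = []
last_direction = None

for _ in range(80):                        # 80 simulated trials
    if observer(signal):
        correct_streak += 1
        if correct_streak == 2:            # two correct in a row -> make it harder
            correct_streak = 0
            if last_direction == "up":
                reversals.append(signal)
            signal = max(0.0, signal - step)
            last_direction = "down"
    else:
        correct_streak = 0                 # an error -> make it easier
        if last_direction == "down":
            reversals.append(signal)
        signal = min(100.0, signal + step)
        last_direction = "up"

# Threshold estimate: mean signal level at the last few reversals.
print("estimated threshold (% signal):", np.mean(reversals[-6:]))
```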
Article
The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
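Identity-strength manipulation of this kind can be approximated by linearly blending an aligned face image with an average face; the sketch below assumes pre-aligned grayscale arrays (random placeholders here) and 20% steps, as in the description above.

```python
# Hedged sketch: linear morphs between an aligned face and an average face.
# Random arrays stand in for real, spatially aligned grayscale face images.
import numpy as np

rng = np.random.default_rng(0)
identity_face = rng.uniform(0, 255, size=(128, 128))   # placeholder target identity
average_face = rng.uniform(0, 255, size=(128, 128))    # placeholder average face

# Identity strength in 20% steps: 1.0 = original face, 0.2 = mostly average face.
morphs = {}
for strength in (0.2, 0.4, 0.6, 0.8, 1.0):
    morphs[strength] = strength * identity_face + (1.0 - strength) * average_face

print(sorted(morphs))   # [0.2, 0.4, 0.6, 0.8, 1.0]
```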
Article
A procedure has been developed for measuring visibly different facial movements. The Facial Action Code was derived from an analysis of the anatomical basis of facial movement. The method can be used to describe any facial movement (observed in photographs, motion picture film or videotape) in terms of anatomically based action units. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed.
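To give a concrete sense of coding facial movement as anatomically based action units, here is a small illustrative subset of well-known FACS action units and an example coding; the dictionary and helper function are our own simplification, not part of the Facial Action Coding System itself.

```python
# Hedged sketch: a tiny, illustrative subset of FACS action units (AUs).
# The dictionary and helper below are a simplification, not the full system.
ACTION_UNITS = {
    1:  "Inner brow raiser",
    2:  "Outer brow raiser",
    4:  "Brow lowerer",
    6:  "Cheek raiser",
    12: "Lip corner puller",
    15: "Lip corner depressor",
}

def describe(observed_aus):
    """Translate a set of observed AU codes into their anatomical descriptions."""
    return [f"AU{au}: {ACTION_UNITS[au]}" for au in sorted(observed_aus)]

# A felt ("Duchenne") smile is conventionally coded as AU6 + AU12.
print(describe({6, 12}))
```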
Article
Adolescence is a time of dramatic physical, cognitive, emotional, and social changes as well as a time for the development of many social-emotional problems. These characteristics raise compelling questions about accompanying neural changes that are unique to this period of development. Here, we propose that studying adolescent-specific changes in face processing and its underlying neural circuitry provides an ideal model for addressing these questions. We also use this model to formulate new hypotheses. Specifically, pubertal hormones are likely to increase motivation to master new peer-oriented developmental tasks, which will in turn, instigate the emergence of new social/affective components of face processing. We also predict that pubertal hormones have a fundamental impact on the re-organization of neural circuitry supporting face processing and propose, in particular, that, the functional connectivity, or temporal synchrony, between regions of the face-processing network will change with the emergence of these new components of face processing in adolescence. Finally, we show how this approach will help reveal why adolescence may be a period of vulnerability in brain development and suggest how it could lead to prevention and intervention strategies that facilitate more adaptive functional interactions between regions within the broader social information processing network.
Article
Interest in sex-related differences in psychological functioning has again come to the foreground with new findings about their possible functional basis in the brain. Sex differences may be one way how evolution has capitalized on the capacity of homologous brain regions to process social information between men and women differently. This paper focuses specifically on the effects of emotional valence, sex of the observed and sex of the observer on regional brain activations. We also discuss the effects of and interactions between environment, hormones, genes and structural differences of the brain in the context of differential brain activity patterns between men and women following exposure to seen expressions of emotion and in this context we outline a number of methodological considerations for future research. Importantly, results show that although women are better at recognizing emotions and express themselves more easily, men show greater responses to threatening cues (dominant, violent or aggressive) and this may reflect different behavioral response tendencies between men and women as well as evolutionary effects. We conclude that sex differences must not be ignored in affective research and more specifically in affective neuroscience.
Article
We report three experiments investigating the recognition of emotion from facial expressions across the adult life span. Increasing age produced a progressive reduction in the recognition of fear and, to a lesser extent, anger. In contrast, older participants showed no reduction in recognition of disgust, rather there was some evidence of an improvement. The results are discussed in terms of studies from the neuropsychological and functional imaging literature that indicate that separate brain regions may underlie the emotions fear and disgust. We suggest that the dissociable effects found for fear and disgust are consistent with the differential effects of ageing on brain regions involved in these emotions.
Article
Research on the development of face recognition in infancy has shown that infants respond to faces as if they are special and recognize familiar faces early in development. Infants also show recognition and differential attachment to familiar people very early in development. We tested the hypothesis that infants' responses to familiar and unfamiliar faces differ at different ages. Specifically, we present data showing age-related changes in infants' brain responses to mother's face versus a stranger's face in children between 18 and 54 months of age. We propose that these changes are based on age-related differences in the perceived salience of the face of the primary caregiver versus strangers.
Article
Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. A literature review was performed of published studies examining the development of emotion expression recognition in normal children and psychiatric populations, and of the development of neural systems important for emotion processing. Few studies have explored the development of emotion expression recognition throughout childhood and adolescence. Behavioural studies suggest continued development throughout childhood and adolescence (reflected by accuracy scores and speed of processing), which varies according to the category of emotion displayed. Factors such as sex, socio-economic status, and verbal ability may also affect this development. Functional neuroimaging studies in adults highlight the role of the amygdala in emotion processing. Results of the few neuroimaging studies in children have focused on the role of the amygdala in the recognition of fearful expressions. Although results are inconsistent, they provide evidence throughout childhood and adolescence for the continued development of and sex differences in amygdalar function in response to fearful expressions. Studies exploring emotion expression recognition in psychiatric populations of children and adolescents suggest deficits that are specific to the type of disorder and to the emotion displayed. Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence. Methodological inconsistencies and disparate findings make any conclusion difficult, however. Further studies are required examining the relationship between the development of emotion expression recognition and that of underlying neural systems, in particular subcortical and prefrontal cortical structures. These will inform understanding of the neural bases of normal and abnormal emotional development, and aid the development of earlier interventions for children and adolescents with psychiatric disorders.
Article
The ability to recognize emotions that were easily identifiable and those that were more difficult to identify, as expressed by male and female faces, was studied in 48 nondisabled children and 76 children with learning disabilities (LD) ages 9 through 12. On the basis of their performance on the Rey Auditory Verbal Learning Test and the Benton Visual Retention Test, the LD group was divided into three subgroups: those with verbal (VD), nonverbal (NVD), and both verbal and nonverbal (BD) deficits. A shortened version of Ekman and Friesen's Pictures of Facial Affect, including pictures of both men and women, was the measure of ability to identify facial expressions of affect. Children of both genders in all three groups of children with LD, as well as their normally achieving peers, were more accurate in identifying expressions of affect from female faces, notwithstanding differences in sensitivity to such emotional communication in favor of the nondisabled and VD groups. However, a significant interaction was found between gender and emotional recognition difficulty level, with female faces being more expressive for emotions that were difficult to recognize.