Are Facial Displays Social? Situational Influences in the Attribution of Emotion to Facial Expressions

Facultad de Psicologia, Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, 28049, Madrid, Spain.
The Spanish Journal of Psychology (Impact Factor: 0.74). 12/2002; 5(2):119-24. DOI: 10.1017/S1138741600005898
Source: PubMed


Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, one that emphasizes situational information, would change participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result held across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.



    • "Moreover, facial expressions are more associated to social, rather than non‐social situations (Fernández‐Dols et al. 2002). The difference in views, between facial expressions being only communicative, and as innate response to emotional stimuli is minute but nonetheless important. "
    ABSTRACT: The question of whether educational games or entertainment games should serve as the basis for learning environments is complex, and deciding it requires varied data. While participants' verbal accounts of their experience are important, other modes of expression are also meaningful as data sources. The availability of valid and reliable methods for evaluating games is central to building successful ones, and such methods should preferably include external measurements that are less affected by what participants choose to share. The present study considers a method that uses software to analyse facial expressions during gameplay, testing its ability to reveal inherent differences between educational and entertainment games. Participants (N=11) played two games, an entertainment game and an educational game, while their facial expressions were measured continuously. The main finding was significantly higher degrees of expressions associated with negative emotions (anger [p < 0.001], fear [p < 0.001], and disgust [p < 0.001]) while playing the educational game, indicating that participants were more negative towards this game type. The combination of the cognitive load inherent in learning and the negative emotions found in the educational game may explain why educational games have sometimes been less successful. The results suggest that the method used in the present study might be useful as part of the evaluation of educational games.
    ECGBL 2015, The 9th European Conference on Games Based Learning, Steinkjer, Norway; 10/2015
    • "In relation to the production/perception issue, metadata about the emotion-inducing context (type of environment, what the encoder is doing when filmed, possible presence of others, etc.) must be specified in descriptions of emotional content. Such metadata are especially necessary due to the strong context dependence of emotions [35] [36]. So databases must accurately specify the emotional content of the recordings they contain, both in terms of production, perception and contextual information. "
    ABSTRACT: DynEmo is a database available to the scientific community. It contains dynamic and natural emotional facial expressions (EFEs) displaying subjective affective states rated by both the expresser and observers. Methodological and contextual information is provided for each expression. This multimodal corpus meets psychological, ethical, and technical criteria. It is quite large, containing two sets of 233 and 125 recordings of EFEs of ordinary Caucasian people (ages 25 to 65; 182 females and 176 males) filmed in natural but standardized conditions. In Set 1, EFE recordings are associated with the affective state of the expresser (self-reported after the emotion-inducing task, using dimensional, action-readiness, and emotional-label items). In Set 2, EFE recordings are associated both with the affective state of the expresser and with a time line (continuous annotations) of observers' ratings of the emotions displayed throughout the recording. The time line allows any researcher interested in analysing non-verbal human behavior to segment the expressions into emotions.
    The International Journal of Multimedia & Its Applications 10/2013; 5(5):61-80. DOI: 10.5121/ijma.2013.5505
    ABSTRACT: To examine schizophrenia patients' visual attention to social contextual information during a novel mental state perception task. Groups of healthy participants (n = 26) and schizophrenia patients (n = 24) viewed 7 image pairs depicting target characters presented context-free and context-embedded (i.e., within an emotion-congruent social context). Gaze position was recorded with the EyeLink I Gaze Tracker while participants performed a mental state inference task. Mean eye movement variables were calculated for each image series (context-embedded v. context-free) to examine group differences in social context processing. The schizophrenia patients demonstrated significantly fewer saccadic eye movements when viewing context-free images and significantly longer eye-fixation durations when viewing context-embedded images. Healthy individuals significantly shortened eye-fixation durations when viewing context-embedded images, compared with context-free images, to enable rapid scanning and uptake of social contextual information; however, this pattern of visual attention was not pronounced in schizophrenia patients. In association with limited scanning and reduced visual attention to contextual information, schizophrenia patients' assessment of the mental state of characters embedded in social contexts was less accurate. In people with schizophrenia, inefficient integration of social contextual information in real-world situations may negatively affect the ability to infer mental and emotional states from facial expressions.
    Journal of Psychiatry & Neuroscience (Impact Factor: 5.86). 02/2008; 33(1):34-42.