Are Facial Displays Social? Situational Influences in the Attribution of Emotion to Facial Expressions

Facultad de Psicologia, Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, 28049, Madrid, Spain.
The Spanish Journal of Psychology (Impact Factor: 0.74). 12/2002; 5(2):119-24. DOI: 10.1017/S1138741600005898
Source: PubMed

ABSTRACT Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, one emphasizing situational information, would change participants' attributions. In two studies, participants judged whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result held across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

Available from: José-Miguel Fernandez-Dols, Aug 13, 2015
  • Source
    • "In relation to the production/perception issue, metadata about the emotion-inducing context (type of environment, what the encoder is doing when filmed, possible presence of others, etc.) must be specified in descriptions of emotional content. Such metadata are especially necessary given the strong context dependence of emotions [35][36]. Databases must therefore accurately specify the emotional content of the recordings they contain, in terms of production, perception, and context."
    ABSTRACT: DynEmo is a database available to the scientific community. It contains dynamic and natural emotional facial expressions (EFEs) displaying subjective affective states rated by both the expresser and observers. Methodological and contextual information is provided for each expression. This multimodal corpus meets psychological, ethical, and technical criteria. It is quite large, containing two sets of 233 and 125 recordings of EFEs of ordinary Caucasian people (ages 25 to 65; 182 females, 176 males) filmed in natural but standardized conditions. In Set 1, EFE recordings are associated with the affective state of the expresser (self-reported after the emotion-inducing task, using dimensional, action-readiness, and emotional-label items). In Set 2, EFE recordings are associated both with the affective state of the expresser and with a time line (continuous annotations) of observers' ratings of the emotions displayed throughout the recording. The time line allows any researcher interested in analysing non-verbal human behavior to segment the expressions into emotions.
    10/2013; 5(5):61-80. DOI:10.5121/ijma.2013.5505
  • Source
    ABSTRACT: To examine schizophrenia patients' visual attention to social contextual information during a novel mental state perception task. Groups of healthy participants (n = 26) and schizophrenia patients (n = 24) viewed 7 image pairs depicting target characters presented context-free and context-embedded (i.e., within an emotion-congruent social context). Gaze position was recorded with the EyeLink I Gaze Tracker while participants performed a mental state inference task. Mean eye movement variables were calculated for each image series (context-embedded v. context-free) to examine group differences in social context processing. The schizophrenia patients demonstrated significantly fewer saccadic eye movements when viewing context-free images and significantly longer eye-fixation durations when viewing context-embedded images. Healthy individuals significantly shortened eye-fixation durations when viewing context-embedded images, compared with context-free images, to enable rapid scanning and uptake of social contextual information; however, this pattern of visual attention was not pronounced in schizophrenia patients. In association with limited scanning and reduced visual attention to contextual information, schizophrenia patients' assessment of the mental state of characters embedded in social contexts was less accurate. In people with schizophrenia, inefficient integration of social contextual information in real-world situations may negatively affect the ability to infer mental and emotional states from facial expressions.
    Journal of Psychiatry & Neuroscience (JPN) 02/2008; 33(1):34-42 · 7.49 Impact Factor
  • ABSTRACT: This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants made significantly more errors when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.
    Emotion 09/2008; 8(4):530-9. DOI:10.1037/a0012724 · 3.88 Impact Factor