Article

Multisensory emotions: perception, combination and underlying neural processes

Department of Psychiatry, RWTH Aachen University, Aachen, Germany.
Reviews in the Neurosciences (Impact Factor: 3.31). 08/2012; 23(4):381-92. DOI: 10.1515/revneuro-2012-0040
Source: PubMed

ABSTRACT: In our everyday lives, we perceive emotional information via multiple sensory channels. This is particularly evident for emotional faces and voices in a social context. In recent years, a multitude of studies have addressed the question of how affective cues conveyed by the auditory and visual channels are integrated. Behavioral studies show that hearing and seeing emotional expressions can support and influence each other, a notion supported by investigations of the underlying neurobiology. Numerous electrophysiological and neuroimaging studies have identified brain regions subserving the integration of multimodal emotions and have provided new insights into the neural processing steps underlying the synergistic confluence of affective information from voice and face. In this paper, we provide a comprehensive review covering current behavioral, electrophysiological and functional neuroimaging findings on the combination of emotions from the auditory and visual domains. Behavioral advantages arising from multimodal redundancy are paralleled by specific integration patterns at the neural level, from encoding in early sensory cortices to late cognitive evaluation in higher association areas. In summary, these findings indicate that bimodal emotions interact at multiple stages of the audiovisual integration process.

Related publications:
  • ABSTRACT: Emotional verbal messages are typically encountered in meaningful contexts, for instance, during face-to-face communication in social situations. Yet, they are often investigated by confronting single participants with isolated words on a computer screen, thus potentially lacking ecological validity. In the present study we recorded event-related brain potentials (ERPs) during emotional word processing in communicative situations provided by videos of a speaker, assuming that emotion effects should be augmented by the presence of a speaker addressing the listener. Indeed, compared to non-communicative situations or isolated word processing, emotion effects were more pronounced, started earlier and lasted longer in communicative situations. Furthermore, while the brain responded most strongly to negative words when presented in isolation, a positivity bias with more pronounced emotion effects for positive words was observed in communicative situations. These findings demonstrate that communicative situations, in which verbal emotions are typically encountered, strongly enhance emotion effects, underlining the importance of social and meaningful contexts in processing emotional and verbal messages.
    NeuroImage 01/2015; 109:273–282. DOI:10.1016/j.neuroimage.2015.01.031 · 6.13 Impact Factor
  • ABSTRACT: People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of the intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2% and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. Accurate recognition of disgust and fear requires simultaneous audio-visual cues, whereas anger and happiness can be well recognized based on evidence from a single modality. The large data set we introduce can be used to probe other questions concerning the audio-visual perception of emotion.
    IEEE Transactions on Affective Computing 10/2014; 5(4):377-390. DOI:10.1109/TAFFC.2014.2336244 · 3.47 Impact Factor
  • ABSTRACT: In everyday life, multiple sensory channels jointly trigger emotional experiences, and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone jointly create the emotional experience. This example, in which auditory and visual input relates to social communication, has received considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts, including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and their interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.
    Frontiers in Psychology 12/2014; 5:1351. DOI:10.3389/fpsyg.2014.01351 · 2.80 Impact Factor