Multisensory emotions: Perception, combination and underlying neural processes
Department of Psychiatry, RWTH Aachen University, Aachen, Germany. Reviews in the Neurosciences.
08/2012; 23(4):381-92. DOI: 10.1515/revneuro-2012-0040
In our everyday lives, we perceive emotional information via multiple sensory channels. This is particularly evident for emotional faces and voices in a social context. Over the past few years, a multitude of studies have addressed the question of how affective cues conveyed by the auditory and visual channels are integrated. Behavioral studies show that hearing and seeing emotional expressions can support and influence each other, a notion which is supported by investigations of the underlying neurobiology. Numerous electrophysiological and neuroimaging studies have identified brain regions subserving the integration of multimodal emotions and have provided new insights into the neural processing steps underlying the synergistic confluence of affective information from voice and face. In this paper we provide a comprehensive review covering current behavioral, electrophysiological and functional neuroimaging findings on the combination of emotions from the auditory and visual domains. Behavioral advantages arising from multimodal redundancy are paralleled by specific integration patterns on the neural level, from encoding in early sensory cortices to late cognitive evaluation in higher association areas. In summary, these findings indicate that bimodal emotions interact at multiple stages of the audiovisual integration process.
- "Ample evidence has demonstrated that the congruency of multimodal stimuli may facilitate the perception and identification of emotional (e.g. Paulmann and Pell, 2011; Paulmann et al., 2009; see Klasen et al., 2012 for a review) and non-emotional speech signals (e.g. Schwartz et al., 2004; van Wassenhove et al., 2005) and is mandatorily processed (e.g. de Gelder and Vroomen, 2000) already during early perceptual processing stages (e.g. de Gelder et al., 1999; Gerdes et al., 2013; Pourtois et al., 2000, 2002; Stekelenburg and Vroomen, 2007) probably involving specialized structures (e.g. de Gelder and Van den Stock, 2011), while incongruent audiovisual input can even lead to perceptual illusions (cf. "
ABSTRACT: Emotional verbal messages are typically encountered in meaningful contexts, for instance, during face-to-face communication in social situations. Yet, they are often investigated by confronting single participants with isolated words on a computer screen, thus potentially lacking ecological validity. In the present study we recorded event-related brain potentials (ERPs) during emotional word processing in communicative situations provided by videos of a speaker, assuming that emotion effects should be augmented by the presence of a speaker addressing the listener. Indeed, compared to non-communicative situations or isolated word processing, emotion effects were more pronounced, started earlier and lasted longer in communicative situations. Furthermore, while the brain responded most strongly to negative words when presented in isolation, a positivity bias with more pronounced emotion effects for positive words was observed in communicative situations. These findings demonstrate that communicative situations – in which verbal emotions are typically encountered – strongly enhance emotion effects, underlining the importance of social and meaningful contexts in processing emotional and verbal messages.
NeuroImage 01/2015; 109:273–282. DOI:10.1016/j.neuroimage.2015.01.031
- "The questions addressed by a handful of studies on multimodal face and voice processing focused on two major issues: (a) Which brain regions/networks uniquely support multimodal face and voice processing (functional magnetic resonance imaging [fMRI] studies), and (b) what cognitive processes are associated with multimodal face and voice processing. Thus far, most studies have been conducted using fMRI methodology (e.g., Klasen et al., 2012). In spite of the fact that many details of the brain architecture involved in different aspects of multimodal face and voice processing are missing, the network of main regions involved has been delineated. "
ABSTRACT: In the present review, social communication will be discussed in the context of social cognition, and cold and hot cognition. The review presents research on prosody, processing of faces, multimodal processing of voice and face, and the impact of emotion on constructing semantic meaning. Since the focus of this mini review is on brain processes involved in these cognitive functions, the bulk of evidence presented will be from event-related potential (ERP) studies, as this methodology offers the best temporal resolution of the cognitive events under study. The argument is made that social communication is accomplished via fast-acting sensory processes and later, top-down processes. Future directions both in terms of methodology and research questions are also discussed.
Advances in Cognitive Psychology 12/2013; 9(4):173-183. DOI:10.2478/v10053-008-0145-6
- "Multisensory integration mechanisms are of special importance during the perception and processing of emotions. Research from our own and other groups provides evidence for a neural network involving the amygdala, insula, frontal areas, FG, and STS which is responsible for the integration of cross- or multisensory information related to emotional perception during stimulation with dynamic stimulus material of different modalities (Ethofer et al., 2006; Kreifelts et al., 2009; Seubert et al., 2010a,b; Klasen et al., 2011, 2012; Muller et al., 2011, 2012; Regenbogen et al., 2012a,b). Although we gained novel insights into multisensory integration processes with regard to object and emotion perception during the last years, a systematic investigation of multisensory integration in relation to differences between age groups and age-related pathologies using functional imaging means is still missing. "
ABSTRACT: The rapid demographic shift occurring in our society implies that understanding healthy aging and age-related diseases is one of our major future challenges. Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Due to the inherent complexity of sensory perceptions, we are commonly presented with complex multisensory stimulation, and the brain integrates the information from the individual sensory channels into a unique and holistic percept. The cerebral processes involved are essential for our perception of sensory stimuli and become especially important during the perception of emotional content. Despite ongoing deterioration of the individual sensory systems during aging, there is evidence for an increase in, or maintenance of, multisensory integration processing in aging individuals. Within this comprehensive literature review on multisensory integration, we aim to highlight basic mechanisms and potential compensatory strategies the human brain utilizes to help maintain multisensory integration capabilities during healthy aging, and thereby to facilitate a broader understanding of age-related pathological conditions. Furthermore, our goal was to identify where further research is needed.
Frontiers in Human Neuroscience 12/2013; 7:863. DOI:10.3389/fnhum.2013.00863