Attention modulates the processing of emotional expression triggered by foveal faces.

School of Human and Life Sciences, Roehampton University, Whitelands College, Holybourne Avenue, London SW15 4JD, UK.
Neuroscience Letters (Impact Factor: 2.06). 03/2006; 394(1):48-52. DOI: 10.1016/j.neulet.2005.10.002
Source: PubMed

ABSTRACT: To investigate whether the processing of emotional expression for faces presented within foveal vision is modulated by spatial attention, event-related potentials (ERPs) were recorded in response to stimulus arrays containing one fearful or neutral face at fixation, which was flanked by a pair of peripheral bilateral lines. When attention was focused on the central face, an enhanced positivity was elicited by fearful as compared to neutral faces. This effect started at 160 ms post-stimulus and remained present for the remainder of the 700 ms analysis interval. When attention was directed away from the face towards the line pair, the initial phase of this emotional positivity remained present, but emotional expression effects beyond 220 ms post-stimulus were completely eliminated. These results demonstrate that when faces are presented foveally, the initial rapid stage of emotional expression processing is unaffected by attention. In contrast, attentional task instructions are effective in inhibiting later, more controlled stages of expression analysis.
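The windowed-amplitude logic behind this result can be illustrated with a minimal sketch (not the authors' actual pipeline): an emotional-expression effect is quantified as the fearful-minus-neutral mean-amplitude difference within the early (160-220 ms) and late (220-700 ms) analysis windows. The sampling rate and the synthetic condition averages below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 500                               # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.7, 1 / fs)      # epoch from -100 to 700 ms

def mean_amplitude(erp, t, t_start, t_end):
    """Mean amplitude of an ERP waveform within [t_start, t_end) seconds."""
    window = (t >= t_start) & (t < t_end)
    return erp[window].mean()

# Toy condition averages: the "fearful" waveform carries an added
# positivity from 160 ms onward, mimicking the reported effect.
neutral = rng.normal(0.0, 0.2, t.size)
fearful = neutral + 1.5 * (t >= 0.16)

# Effect = fearful minus neutral, per analysis window.
early = mean_amplitude(fearful, t, 0.16, 0.22) - mean_amplitude(neutral, t, 0.16, 0.22)
late = mean_amplitude(fearful, t, 0.22, 0.70) - mean_amplitude(neutral, t, 0.22, 0.70)
print(f"early effect: {early:.2f} uV, late effect: {late:.2f} uV")
```

In the study itself, the late effect was eliminated when attention was directed to the peripheral lines, while the early effect survived; the sketch only shows how such window means would be computed.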

  • Source
    ABSTRACT: Research on unconscious or unaware vision has demonstrated that unconscious processing can be flexibly adapted to the current goals of human agents. The present review focuses on one area of research, masked visual priming. This method uses visual stimuli presented in a temporal sequence to lower the visibility of one of these stimuli. In this way, a stimulus can be masked and even rendered invisible. Despite its invisibility, a masked stimulus, if used as a prime, can influence a variety of executive functions, such as response activation, semantic processing, or attention shifting. There are also limitations on the processing of masked primes. While masked priming research demonstrates the top-down-dependent use of unconscious vision during task-set execution, it also highlights that the set-up of a new task-set depends on conscious vision as its input. This basic distinction captures a major qualitative difference between conscious and unconscious vision.
    Consciousness and Cognition 06/2014; 27C:268-287. DOI:10.1016/j.concog.2014.05.009 · 2.31 Impact Factor
  • Source
    ABSTRACT: Exogenous or automatic attention to emotional distractors has been observed for emotional scenes and faces. In the language domain, however, automatic attention capture by emotional words has scarcely been investigated. In the current event-related potential study, we explored distractor effects elicited by positive, negative and neutral words in a concurrent but distinct target-distractor paradigm. Specifically, participants performed a digit categorization task in which task-irrelevant words were flanked by numbers. The results of both temporo-spatial principal component and source localization analyses revealed the existence of early distractor effects that were specifically triggered by positive words. At the scalp level, task-irrelevant positive words, compared to neutral and negative words, elicited larger amplitudes in an anterior negative component that peaked around 120 ms. Also, at the voxel level, positive distractor words increased activity in orbitofrontal regions compared to negative words. These results suggest that positive distractor words quickly and automatically capture attentional resources, diverting them from the task to which attention was voluntarily directed.
    Frontiers in Psychology 02/2015; 6(24). DOI:10.3389/fpsyg.2015.00024 · 2.80 Impact Factor
  • Source
    ABSTRACT: Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitudes by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
    Journal of Cognitive Neuroscience 09/2014; DOI:10.1162/jocn_a_00734 · 4.69 Impact Factor

