Article

Do emotions or gender drive our actions? A study of motor distractibility


Abstract

People's interaction with the social environment depends on the ability to attend to social cues, with human faces being a key vehicle of this information. This study explores whether directing attention to the gender or emotion of a face interferes with ongoing actions. In two experiments, participants reached for one of two possible targets by relying on one of two features of a face, namely, the emotion (Experiment 1) or gender (Experiment 2) of a non-target stimulus (a task-relevant distractor). Participants' reaching movements deviated toward the task-relevant distractor in both experiments. However, when attending to the gender of the face, the distractor effect was modulated by both gender (the task-relevant feature) and emotion (the task-irrelevant feature), with the largest movement deviation observed toward angry male faces. Endogenous allocation of attention toward faces elicits a competing motor response to the ongoing action, and the emotional content of the face contributes to this process at a more automatic and implicit level.


... While there is no evidence for an effect of approach-avoidance tendencies on the fast and slow mechanisms underlying reaching corrections, recent work has shown that the spatial characteristics of reaching execution are modulated by task-irrelevant emotional distractors (Ambron & Foroni, 2015; Ambron, Rumiati, & Foroni, 2016). Positive and negative distractors presented simultaneously with the target were shown to bias reaching toward the location of the distractor relative to neutral stimuli (Ambron & Foroni, 2015). ...
... This effect also occurred implicitly, even when the emotional dimension was not task-relevant (Ambron et al., 2016). This work demonstrates an effect of emotional stimuli on action execution and adds to the existing literature showing that task-irrelevant, non-emotional distractors cause changes in both the trajectory and the speed of movement (Buetti, Juan, Rinck, & Kerzel, 2012; Meegan & Tipper, 1998; Pratt & Abrams, 1994; Tipper et al., 1997; Welsh & Elliott, 2004). ...
... Reaching deviations that occur as a result of presenting a distractor may not reflect motor planning processes that occur prior to the initiation of a movement, but may reflect motor planning that occurs in parallel with action execution. As suggested by other studies (Ambron & Foroni, 2015; Ambron et al., 2016; Welsh & Elliott, 2004), the emotional valence of a distractor may cause arm deviations either towards or away from the distractor location. These deviations may occur early or later in the movement, which can provide an indication of the contribution from fast or slow pathways guiding visuomotor corrections (Day & Lyon, 2000). ...
Thesis
This thesis examines the interaction between emotional stimuli and motor processes. Emotions are thought to be intimately linked to action, often triggering specific biases that automatically guide our behaviour for successful interaction. For example, emotional behaviour is influenced in a very specific way by two opposing appetitive and defensive motivational systems that trigger specific actions. Appetitive stimuli trigger approach, whereas aversive stimuli trigger avoidance, freezing or attack. These approach-avoidance tendencies are likely to interact with and bias various motor processes, such as action selection, planning and execution, revealing a privileged relationship between emotion and motor processes. However, this interaction remains poorly understood. Investigating how emotional tendencies influence different motor processes will further our understanding of how emotions influence actions. To this end, five experiments were conducted. Experiment One examined how approach-avoidance actions are selected. Experiment Two investigated how emotional cues in the environment bias and prepare for action to anticipated emotional stimuli. Experiment Three evaluated the effect of approach-avoidance tendencies on action execution and motor planning. Experiment Four assessed whether tendencies also influence our ability to suppress action execution. Finally, Experiment Five investigated whether updating of actions in response to emotional changes in the environment is influenced by response tendencies. Overall, this thesis found that approach-avoidance tendencies influence the selection of actions via a combination of top-down, goal-directed processes, and bottom-up, automatic processes. Predisposed tendencies did not influence any other motor process. Instead, positive and negative stimuli influenced action planning, execution and inhibition. Thus, motor processes are differentially influenced by approach-avoidance tendencies and emotional valence.
This thesis demonstrates an intimate relationship between emotional stimuli and the motor system that appears to be influenced by both top-down and bottom-up mechanisms.
... Because of their particular biological and social relevance (Bruce and Young, 1998), faces may represent a unique category of stimuli that is difficult for both young and older adults to ignore. Facial expression in particular, even when task irrelevant, may be difficult to ignore (Ambron and Foroni, 2015; Ambron et al., 2016). Age-related differences in emotional face processing also could have contributed to the earlier findings of preferential processing of positive distractor information by older adults. ...
... More broadly, future investigations could also consider the impact of task-irrelevant emotional expressions on categorization of gender or ethnicity in faces as a means to understanding potential preferential processing of positive emotion information in older adults. Evidence suggests multiple facial features are processed when only one particular feature is explicitly attended to (Mouchetant-Rostaing et al., 2000; Ito and Urland, 2005; Ambron et al., 2016; Li and Tse, 2016). For example, recent work by Li and Tse (2016) with young participants showed that task-irrelevant emotional expressions in target faces interacted with the processing of gender or race when the faces were categorized on the latter dimensions, and Ambron et al. (2016) found that when gender in a distractor face was used to cue the non-face target to be reached for, task-irrelevant facial expression modulated the degree to which reaching paths deviated to the distractor faces. ...
... Notably, this effect was particular to facial emotion, as gender did not modulate the reaching trajectory when task irrelevant. ...
Article
Full-text available
Cognitive aging may be accompanied by increased prioritization of social and emotional goals that enhance positive experiences and emotional states. The socioemotional selectivity theory suggests this may be achieved by giving preference to positive information and avoiding or suppressing negative information. Although there is some evidence of a positivity bias in controlled attention tasks, it remains unclear whether a positivity bias extends to the processing of affective stimuli presented outside focused attention. In two experiments, we investigated age-related differences in the effects of to-be-ignored non-face affective images on target processing. In Experiment 1, 27 older (64–90 years) and 25 young adults (19–29 years) made speeded valence judgments about centrally presented positive or negative target images taken from the International Affective Picture System. To-be-ignored distractor images were presented above and below the target image and were either positive, negative, or neutral in valence. The distractors were considered task relevant because they shared emotional characteristics with the target stimuli. Both older and young adults responded slower to targets when distractor valence was incongruent with target valence relative to when distractors were neutral. Older adults responded faster to positive than to negative targets but did not show increased interference effects from positive distractors. In Experiment 2, affective distractors were task irrelevant as the target was a three-digit array and did not share emotional characteristics with the distractors. Twenty-six older (63–84 years) and 30 young adults (18–30 years) gave speeded responses on a digit disparity task while ignoring the affective distractors positioned in the periphery. Task performance in either age group was not influenced by the task-irrelevant affective images. 
In keeping with the socioemotional selectivity theory, these findings suggest that older adults preferentially process task-relevant positive non-face images but only when presented within the main focus of attention.
... Sensorimotor interactions create internal models that detect changes in body states and the environment in order to make predictions for different actions. 1-4 As a protective mechanism, the perception of actual or potential negative circumstances, such as alarming and distressing situations, can induce a negative emotional state. 5,6 Stress arises when this mechanism is overloaded over time, as the organism is regulated by endocrine, autonomic, and somatic responses that are integrated in the central nervous system. ...
... Our main finding regarding the amplitude of the RP indicated that motor preparation in pain led to greater RP amplitudes, compensating for the need to mobilize sensorimotor resources. 1,3,4,9 An interaction between emotion and action has been shown in the accomplishment of a motor response to a target in an emotional context. 9 Neuroimaging studies demonstrated that unpleasant stimuli increased activation in the motor-related areas of the parietofrontal regions. ...
Article
Full-text available
The readiness potential (RP), which is a slow negative electrical brain potential that occurs before voluntary movement, can be interpreted as a measure of intrinsic brain activity originating from self-regulating mechanisms. Early and late components of the RP may indicate clinical-neurophysiological features such as motivation, preparation, intention, and initiation of voluntary movements. In the present study, we hypothesized that electrical pain stimuli modulate the preparatory brain activity for movement. The grand average evoked potentials were measured at sensorimotor regions with EEG during an experimental protocol consisting of painful and non-painful stimuli. Our results demonstrated that painful stimuli were preceded by an enhanced RP when compared to non-painful stimuli at the Cz channel (p < 0.05). Furthermore, the mean amplitude of the RP at the early phase was significantly higher for the painful stimuli when compared to the non-painful stimuli (p < 0.05). Our results indicate that electrical painful stimuli, which can be considered an unpleasant and stressful condition, modulate motor preparation at sensorimotor regions to a different extent than non-painful electrical stimuli. Since the early component of the RP represents cortical activation due to anticipation of the stimuli and the allocation of attentional resources, our results suggest that painful stimuli may affect the motor preparation processes and the prediction of the movement at the cortical level.
... Moreover, during the bring-to-the-body phase (Phase B), Peak Velocity was lower for pleasant and unpleasant stimuli than neutral stimuli. A main effect of phase [F(1, 20) = 49.622, p < 0.001] and of valence [F(2, 40) = 10.744, p < 0.001] was also revealed. Post-hoc comparisons showed that Peak Velocity was lower during Phase A when compared to Phase B. Additionally, Peak Velocity was lower for pleasant and unpleasant than neutral stimuli. ...
... The valence effects reflected in Time to Peak Velocity, Movement Time and Peak Velocity can thus not be attributed to differences in trajectory length. Previous work showed that kinematics is affected by the emotional context induced by valence-laden pictures. 20,21 In the present study, the source of emotion is inherent to the goal of action. ...
Article
Full-text available
The basic underpinnings of homeostatic behavior include interacting with positive items and avoiding negative ones. As the planning aspects of goal-directed actions can be inferred from their movement features, we investigated the kinematics of interacting with emotion-laden stimuli. Participants were instructed to grasp emotion-laden stimuli and bring them toward their bodies while the kinematics of their wrist movement was measured. The results showed that the time to peak velocity increased for bringing pleasant stimuli towards the body compared to unpleasant and neutral ones, suggesting greater ease in undertaking the task with pleasant stimuli. Furthermore, bringing unpleasant stimuli towards the body increased movement time in comparison with both pleasant and neutral ones, while the time to peak velocity for unpleasant stimuli was the same as that of neutral stimuli. There was no change in the trajectory length among emotional categories. We conclude that during the “reach-to-grasp” and “bring-to-the-body” movements, the valence of the stimuli affects the temporal but not the spatial kinematic features of motion. To the best of our knowledge, we show for the first time that the kinematic features of a goal-directed action are tuned by the emotional valence of the stimuli.
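The kinematic measures named in this abstract (peak velocity, time to peak velocity, movement time, trajectory length) can be illustrated with a minimal sketch. The 2-D trajectory format, fixed sampling step, and absence of filtering are simplifying assumptions for illustration, not the study's actual recording pipeline:

```python
import math

def kinematic_indices(positions, dt):
    """Peak velocity, time to peak velocity, movement time, and path
    length from a sampled 2-D trajectory (hypothetical format: a list
    of (x, y) samples taken every `dt` seconds)."""
    speeds = []
    path_length = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        step = math.hypot(x1 - x0, y1 - y0)  # distance between samples
        path_length += step
        speeds.append(step / dt)             # finite-difference speed
    peak_velocity = max(speeds)
    time_to_peak = speeds.index(peak_velocity) * dt
    movement_time = (len(positions) - 1) * dt
    return peak_velocity, time_to_peak, movement_time, path_length
```

For a movement that accelerates and then decelerates, `time_to_peak` falls mid-movement; comparing it across valence conditions is the kind of contrast the abstract reports.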
... Some studies showed that sex is processed temporally earlier than emotion (Atkinson, Tipples, Burt, & Young, 2005), whereas others demonstrated that negative emotions are preferentially caught by attention so that they can be processed automatically (Vuilleumier, Armony, Driver, & Dolan, 2001). A recent study also investigated how reaching kinematics is affected by the perception of sex and emotion by implementing a motor distractibility paradigm (Ambron, Rumiati, & Foroni, 2016). Their results showed that when directing attention to sex, participants' reaching trajectory was more distracted by the angry male than by the angry female and the happy male characters. ...
Article
Theorists have long postulated that facial properties such as emotion and gender are potent social stimuli that influence how individuals act. Yet extant scientific findings were mainly derived from investigations of the prompt motor response upon the presentation of affective stimuli, mostly delivered by means of pictures, videos, or text. A theoretical question remains unaddressed concerning how the perception of emotion and gender modulates the dynamics of a continuous coordinated behavior. Conceived in the framework of the dynamical approach to interpersonal motor coordination, the present study aimed to address this question by adopting the coupled-oscillators paradigm. Twenty-one participants performed in-phase and anti-phase coordination with two avatars (male and female) displaying three emotional expressions (neutral, happy, and angry) at different frequencies (100% and 150% of the participant’s own preferred frequency) by executing rhythmic left-right horizontal oscillatory movements. Time to initiate movement (TIM), mean relative phase error (MnRP) and standard deviation of relative phase (SDRP) were calculated as indices of reaction time, deviation from the intended pattern of coordination, and coordination stability, respectively. Results showed a marginally shorter TIM with the angry avatar when initiating in-phase coordination at 150% frequency, whereas a significantly longer TIM was found with the happy avatar in the condition of anti-phase coordination at 100% frequency. Additionally, MnRP was significantly lower with the female avatar than the male avatar, and with the angry avatar than the neutral avatar when performing anti-phase coordination at 150% frequency. A significantly lower SDRP was found with the neutral male avatar relative to the neutral female avatar, but the happy female avatar yielded a significantly lower SDRP than the neutral female avatar.
Our research complements scientific understanding of the effect of perceived emotion and gender on movement initiation by investigating the dynamics of continuous coordinated behavior. Our results suggest that social perception is embodied in the individual’s interactive behavior and can therefore be captured through behavioral assessment.
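The coordination indices above (MnRP, SDRP) are built from circular statistics on relative-phase samples. As a hedged illustration, not the authors' analysis code, the circular mean and circular standard deviation of a set of relative-phase angles could be computed as:

```python
import math

def circular_mean_sd(phases):
    """Circular mean direction and circular SD of phase angles (radians).
    A minimal sketch of the statistics behind indices like MnRP/SDRP;
    the exact definitions used in any given study may differ."""
    n = len(phases)
    s = sum(math.sin(p) for p in phases)
    c = sum(math.cos(p) for p in phases)
    # mean resultant length, clamped to guard against rounding above 1
    r = min(1.0, math.hypot(s, c) / n)
    mean = math.atan2(s, c)              # circular mean direction
    sd = math.sqrt(-2.0 * math.log(r))   # circular standard deviation
    return mean, sd
```

MnRP could then be taken as the deviation of `mean` from the intended pattern (0 for in-phase, π for anti-phase), and SDRP as `sd`: tightly clustered phases give a small `sd`, dispersed phases a large one.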
... Finally, one team has recently used a pointing task to investigate the relationship between action and emotional displays, using emotional displays as distractors rather than as targets (Ambron & Foroni, 2015; Ambron, Rumiati, & Foroni, 2016). This approach is interesting because in everyday life we do not only interact with people head-on; our courses of action are also susceptible to their influence. ...
Thesis
Everyday action decision-making entails taking into account affordances provided by the environment, along with social information susceptible to guide our decisions. But within social contexts conveying potentially threatening information and multiple targets for action, as when entering a subway car, how do we decide very quickly where to sit while gauging the presence of a potential danger? The work conducted during my PhD aimed at investigating action and attentional processes in a realistic social context providing action opportunities. In the first study, spontaneous action choices and kinematics revealed that threat-related angry and fearful displays impact people’s free choice differently, i.e., they favoured the selection of actions that avoided angry and approached fearful individuals. The second study further showed that attention was allocated to the space of the scene corresponding to the endpoint of the actions prioritized by those angry and fearful displays. Crucially, the third study evidenced that this effect disappeared when action opportunities were removed from the experimental context. Saccadic behaviour recorded in the fourth study allowed us to track the development of attention allocation over time, and crucially revealed that attention was first quickly oriented toward threat before being directed toward the endpoint of the chosen action. Altogether, these findings suggest that action selection modulates attention allocation in response to social threat when embedded within realistic social contexts.
... In particular, we investigated the distracting role played by task-irrelevant foods (natural or transformed) in performing goal-directed reaching movements. We focused on spatial and temporal parameters of reaching movements (Ambron & Foroni, 2015; Ambron, Rumiati, & Foroni, 2016) and tested whether they were influenced by the presence of task-irrelevant food stimuli (i.e., distractor effect; Howard & Tipper, 1997; Welsh & Elliott, 2005), and whether this effect was modulated by participants' implicit and explicit evaluations of the different types of food. ...
Chapter
Full-text available
The ability to categorize food and nonfood correctly and to distinguish between different foods is essential for our survival. Because of our omnivore nature and because of the food-rich environment in which we live, categorization processes involving food are particularly complex. The extent of the literature on this subject is an indication of our limited understanding of the mental processes underlying food perception, categorization, and choice.
... The present results, together with the results reported by Foroni (2015), suggest that cognitive processes associated with L2 encoding and retrieval are less associated with embodied processes, reinforcing the idea that embodied cognition and emotional memory are linked. Some research already shows how differences in embodiment between L1 and L2 differentially affect individuals (e.g., Puntoni et al., 2009; Keysar et al., 2012), and future research should investigate memory processes as well as other domains where the differences between L1 and L2 may have a significant impact, implementing other paradigms used to investigate the impact of emotion on behavior (e.g., Ambron and Foroni, 2015; Ambron et al., 2016). ...
Article
Full-text available
Language and emotions are closely linked. However, previous research suggests that this link is stronger in a native language (L1) than in a second language (L2) learned later in life. The present study investigates whether such reduced emotionality in L2 is reflected in changes in emotional memory and embodied responses to L2 in comparison to L1. Late Spanish/English bilinguals performed a memory task involving an encoding and a surprise retrieval phase. Facial motor resonance and skin conductance (SC) responses were recorded during encoding. The results give first indications that the enhanced memory for emotional vs. neutral content (EEM effect) is stronger in L1 and less present in L2. Furthermore, the results give partial support for decreased facial motor resonance and SC responses to emotional words in L2 as compared to L1. These findings suggest that embodied knowledge involved in emotional memory is associated with increased affective encoding and retrieval of L1 compared to L2.
... The event-related potential (ERP) analyses showed that the processing of face gender occurred as early as 145–185 ms after stimulus onset (Mouchetant-Rostaing et al., 2000). The automatic processing of facial emotion and face gender not only captures attention but also influences motor action (Ambron and Foroni, 2015; Ambron et al., 2016). ...
Article
Full-text available
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).
... Participants underwent a series of computer-based tasks including (i) a food evaluation session, (ii) an irrelevant distractor experiment (Ambron, Rumiati, & Foroni, 2015), and (iii) a questionnaire session. The computer tasks were presented in a fixed order to allow unbiased assessment of the implicit and explicit associations toward food. ...
Article
Emotion is an indispensable part of human life, affecting normal physiological activities and daily decisions. Emotion recognition is a critical technology in artificial intelligence, human-computer interaction, and other fields. The brain is the information-processing and control center of the human body. Electroencephalogram (EEG) signals are generated directly by the central nervous system and are closely related to human emotions. Therefore, EEG signals can objectively reflect the human emotional state in real time. In recent years, with the development of brain-computer interfaces, the acquisition and analysis of human EEG signals has become increasingly mature, so more and more researchers study emotion recognition using EEG-based methods. EEG processing plays a vital role in emotion recognition. This paper presents a recent research report on emotion recognition. It introduces the related analysis methods and research content covering emotion induction, EEG preprocessing, feature extraction, and emotion classification, and compares the advantages and disadvantages of these methods. It also summarizes the problems existing in current research methods and discusses research directions for emotion classification based on EEG information.
Article
Full-text available
Human body postures convey useful information for understanding others' emotions and intentions. To investigate at which stage of visual processing emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials (ERPs) elicited by laterally-presented images of bodies with static postures, and implied-motion body images with neutral, fearful or happy expressions. At the early stage of visual structural encoding (N190), we found a difference in the sensitivity of the two hemispheres to observed body postures. Specifically, the right hemisphere showed a N190 modulation both for the motion content (i.e., all the observed postures implying body movements elicited greater N190 amplitudes compared to static postures) and for the emotional content (i.e., fearful postures elicited the largest N190 amplitude), while the left hemisphere showed a modulation only for the motion content. In contrast, at a later stage of perceptual representation, reflecting selective attention to salient stimuli, an increased early posterior negativity (EPN) was observed for fearful stimuli in both hemispheres, suggesting an enhanced processing of motivationally relevant stimuli. The observed modulations, both at the early stage of structural encoding and at the later processing stage, suggest the existence of a specialized perceptual mechanism tuned to emotion- and action-related information conveyed by human body postures.
Article
Full-text available
Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs of genuine facial expressions. Thus, after proper training, participants were inclined to express “felt” emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, which are parameters that validation methods used by other researchers do not capture. The final set is comprised of those pictures that received the highest recognition marks (e.g. accuracy with intended display) from independent judges, totaling 210 high quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of displayed emotion as well as FACS AU’s codes are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for noncommercial use by request at www.emotional-face.org.
Article
Full-text available
Studies indicate that perceiving emotional body language recruits fronto-parietal regions involved in action execution. However, the nature of such motor activation is unclear. Using transcranial magnetic stimulation (TMS) we provide correlational and causative evidence of two distinct stages of motor cortex engagement during emotion perception. Participants observed pictures of body expressions and categorized them as happy, fearful or neutral while receiving TMS over the left or right motor cortex at 150 and 300 ms after picture onset. In the early phase (150 ms), we observed a reduction of excitability for happy and fearful emotional bodies that was specific to the right hemisphere and correlated with participants’ disposition to feel personal distress. This ‘orienting’ inhibitory response to emotional bodies was also paralleled by a general drop in categorization accuracy when stimulating the right but not the left motor cortex. Conversely, at 300 ms, greater excitability for negative, positive and neutral movements was found in both hemispheres. This later motor facilitation marginally correlated with participants’ tendency to assume the psychological perspectives of others and reflected simulation of the movement implied in the neutral and emotional body expressions. These findings highlight the motor system’s involvement during perception of emotional bodies. They suggest that fast orienting reactions to emotional cues—reflecting neural processing necessary for visual perception—occur before motor features of the observed emotional expression are simulated in the motor system and that distinct empathic dispositions influence these two neural motor phenomena. Implications for theories of embodied simulation are discussed. Electronic supplementary material: the online version of this article (doi:10.1007/s00429-014-0825-6) contains supplementary material, which is available to authorized users.
Article
Full-text available
Several different explanations have been proposed to account for the search asymmetry (SA) for angry schematic faces (i.e., the fact that an angry face target among friendly faces can be found faster than vice versa). The present study critically tested the perceptual grouping account: (a) that the SA is not due to emotional factors, but to perceptual differences that render angry faces more salient than friendly faces, and (b) that the SA is mainly attributable to differences in distractor grouping, with angry faces being more difficult to group than friendly faces. In visual search for angry and friendly faces, the number of distractors visible during each fixation was systematically manipulated using the gaze-contingent window technique. The results showed that the SA emerged only when multiple distractors were visible during a fixation, supporting the grouping account. To distinguish between emotional and perceptual factors in the SA, we altered the perceptual properties of the faces (dented-chin face) so that the friendly face became more salient. In line with the perceptual account, the SA was reversed for these faces, showing faster search for a friendly face target. These results indicate that the SA reflects feature-level perceptual grouping, not emotional valence.
Article
Full-text available
We establish attentional capture by emotional distractor faces presented as a "singleton" in a search task in which the emotion is entirely irrelevant. Participants searched for a male (or female) target face among female (or male) faces and indicated whether the target face was tilted to the left or right. The presence (vs. absence) of an irrelevant emotional singleton expression (fearful, angry, or happy) in one of the distractor faces slowed search reaction times compared to the singleton absent or singleton target conditions. Facilitation for emotional singleton targets was found for the happy expression but not for the fearful or angry expressions. These effects were found irrespective of face gender and the failure of a singleton neutral face to capture attention among emotional faces rules out a visual odd-one-out account for the emotional capture. The present study thus establishes irrelevant, emotional, attentional capture.
Article
Full-text available
Emotional faces communicate both the emotional state and behavioral intentions of an individual. They also activate behavioral tendencies in the perceiver, namely approach or avoidance. Here, we compared more automatic motor responses to more conscious rating responses to happy, sad, angry, and disgusted faces in a healthy student sample. Happiness was associated with approach and anger with avoidance. However, behavioral tendencies in response to sadness and disgust were more complex. Sadness produced automatic approach but conscious withdrawal, probably influenced by interpersonal relations or personality. Disgust elicited withdrawal in the rating task, whereas no significant tendency emerged in the joystick task, probably driven by expression style. Based on our results, it is highly relevant to further explore actual reactions to emotional expressions and to differentiate between automatic and controlled processes, because emotional faces are used in various kinds of studies. Moreover, our results highlight the importance of effects of the poser's gender when applying emotional expressions as stimuli.
Article
Full-text available
The "face in the crowd effect" refers to the finding that threatening or angry faces are detected more efficiently among a crowd of distractor faces than happy or nonthreatening faces. Work establishing this effect has primarily utilized schematic stimuli and efforts to extend the effect to real faces have yielded inconsistent results. The failure to consistently translate the effect from schematic to human faces raises questions about its ecological validity. The present study assessed the face in the crowd effect using a visual search paradigm that placed veridical faces, verified to exemplify prototypical emotional expressions, within heterogeneous crowds. Results confirmed that angry faces were found more quickly and accurately than happy expressions in crowds of both neutral and emotional distractors. These results are the first to extend the face in the crowd effect beyond homogenous crowds to more ecologically valid conditions and thus provide compelling evidence for its legitimacy as a naturalistic phenomenon.
Article
Full-text available
In the present study we investigated the possibility of a dissociation between the visual control of reaching and the visual control of grasping in a prehension task. To this purpose we studied the kinematics of prehension movements in a patient with a right parietal lesion and in six right-handed healthy control subjects. The task we used was one in which the subjects had to reach and grasp target objects in the presence or absence of a simultaneously presented distractor object. All stimuli were presented in the space ipsilateral to the lesion. The distractor could be either of the same or different size to the target object and was presented either to the right or to the left of the target. The following parameters of the prehension ‘transport’ component were analysed: wrist trajectory, transport time, tangential peak velocity, acceleration. Maximal finger aperture, time to maximal finger aperture, peak acceleration and time to peak acceleration of grip aperture were the parameters of the ‘grasping’ component analysed. The results showed that, although the patient had no misreaching, her hand trajectory deviated abnormally towards the distractor position when the distractor was to the right (ipsilateral) side of the target. In contrast, the grasp kinematics was not affected by the distractors, even when the size of the right distractor was different from the target. It appears, therefore, that the attentional shift towards the ipsilesional side, typical of neglect patients, determines a surprising dissociation in motor control. In the presence of a right distractor, the patient plans and partially executes a reaching movement towards that object and simultaneously performs a grasping movement towards a second object, i.e. the centrally located target. The presentation of distractors had no effects on the prehension kinematics of the control subjects.
Article
Full-text available
Most attention research has viewed selection as essentially a perceptual problem, with attentional mechanisms required to protect the senses from overload. Although this might indeed be one of several functions that attention serves, the need for selection also arises when one considers the requirement of actions rather than perception. This review examines recent attempts to determine the role played by selective mechanisms in the control of action. Recent studies looking at reach-to-grasp responses to target objects in the presence of distracting objects within a three-dimensional space are discussed. The manner in which motor aspects of the reach-to-grasp response might be influenced by distractors is also highlighted, rather than merely addressing the perceptual consequences of distractors. The studies reviewed here emphasize several factors highlighting the importance of studying selective processes within three-dimensional environments from which attention and action have evolved.
Article
Full-text available
Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.
Article
Full-text available
Affect may have the function of preparing organisms for action, enabling approach and avoidance behavior. M. Chen and J. A. Bargh (1999) suggested that affective processing automatically resulted in action tendencies for arm flexion and extension. The crucial question is, however, whether automaticity of evaluation was actually achieved or whether their results were due to nonautomatic, conscious processing. When faces with emotional expressions were evaluated consciously, similar effects were obtained as in the M. Chen and J. A. Bargh study. When conscious evaluation was reduced, however, no action tendencies were observed, whereas affective processing of the faces was still evident from affective priming effects. The results suggest that tendencies for arm flexion and extension are not automatic consequences of automatic affective information processing.
Article
Full-text available
Evidence for a dichotomy between the planning of an action and its on-line control in humans is reviewed. This evidence suggests that planning and control each serve a specialized purpose utilizing distinct visual representations. Evidence from behavioral studies suggests that planning is influenced by a large array of visual and cognitive information, whereas control is influenced solely by the spatial characteristics of the target, including such things as its size, shape, orientation, and so forth. Evidence from brain imaging and neuropsychology suggests that planning and control are subserved by separate visual centers in the posterior parietal lobes, each constituting part of a larger network for planning and control. Planning appears to rely on phylogenetically newer regions in the inferior parietal lobe, along with the frontal lobes and basal ganglia, whereas control appears to rely on older regions in the superior parietal lobe, along with the cerebellum.
Article
Full-text available
In two experiments, event-related potentials were used to examine the effects of attentional focus on the processing of race and gender cues from faces. When faces were still the focal stimuli, the processing of the faces at a level deeper than the social category by requiring a personality judgment resulted in early attention to race and gender, with race effects as early as 120 msec. This time course corresponds closely to those in past studies in which participants explicitly attended to target race and gender (Ito & Urland, 2003). However, a similar processing goal, coupled with a more complex stimulus array, delayed social category effects until 190 msec, in accord with the effects of complexity on visual attention. In addition, the N170 typically linked with structural face encoding was modulated by target race, but not by gender, when faces were perceived in a homogenous context consisting only of faces. This suggests that when basic-level distinctions between faces and nonfaces are irrelevant, the mechanism previously associated only with structural encoding can also be sensitive to features used to differentiate among faces.
Article
Full-text available
Previous research with speeded-response interference tasks modeled on the Garner paradigm has demonstrated that task-irrelevant variations in either emotional expression or facial speech do not interfere with identity judgments, but irrelevant variations in identity do interfere with expression and facial speech judgments. Sex, like identity, is a relatively invariant aspect of faces. Drawing on a recent model of face processing according to which invariant and changeable aspects of faces are represented in separate neurological systems, we predicted asymmetric interference between sex and emotion classification. The results of Experiment 1, in which the Garner paradigm was employed, confirmed this prediction: Emotion classifications were influenced by the sex of the faces, but sex classifications remained relatively unaffected by facial expression. A second experiment, in which the difficulty of the tasks was equated, corroborated these findings, indicating that differences in processing speed cannot account for the asymmetric relationship between facial emotion and sex processing. A third experiment revealed the same pattern of asymmetric interference through the use of a variant of the Simon paradigm. To the extent that Garner interference and Simon interference indicate interactions at perceptual and response-selection stages of processing, respectively, a challenge for face processing models is to show how the same asymmetric pattern of interference could occur at these different stages. The implications of these findings for the functional independence of the different components of face processing are discussed.
Article
Full-text available
Do threatening or negative faces capture attention? The authors argue that evidence from visual search, spatial cuing, and flanker tasks is equivocal and that perceptual differences may account for effects attributed to emotional categories. Empirically, the authors examine the flanker task. Although they replicate previous results in which a positive face flanked by negative faces suffers more interference than a negative face flanked by positive faces, further results indicate that face perception is not necessary for the flanker-effect asymmetry and that the asymmetry also occurs with nonemotional stimuli. The authors conclude that the flanker-effect asymmetry with affective faces cannot be unambiguously attributed to emotional differences and may well be due to purely perceptual differences between the stimuli.
Article
Full-text available
The present paper reports three new experiments suggesting that the valence of a face cue can influence attentional effects in a cueing paradigm. Moreover, heightened trait anxiety resulted in increased attentional dwell-time on emotional facial stimuli, relative to neutral faces. Experiment 1 presented a cueing task, in which the cue was either an "angry", "happy", or "neutral" facial expression. Targets could appear either in the same location as the face (valid trials) or in a different location to the face (invalid trials). Participants did not show significant variations across the different cue types (angry, happy, neutral) in responding to a target on valid trials. However, the valence of the face did affect response times on invalid trials. Specifically, participants took longer to respond to a target when the face cue was "angry" or "happy" relative to neutral. In Experiment 2, the cue-target stimulus onset asynchrony (SOA) was increased and an overall inhibition of return (IOR) effect was found (i.e., slower responses on valid trials). However, the "angry" face cue eliminated the IOR effect for both high and low trait anxious groups. In Experiment 3, threat-related and jumbled facial stimuli reduced the magnitude of IOR for high, but not for low, trait-anxious participants. These results suggest that: (i) attentional bias in anxiety may reflect a difficulty in disengaging from threat-related and emotional stimuli, and (ii) threat-related and ambiguous cues can influence the magnitude of the IOR effect.
Article
Theories of visual attention deal with the limit on our ability to see (and later report) several things at once. These theories fall into three broad classes. Object-based theories propose a limit on the number of separate objects that can be perceived simultaneously. Discrimination-based theories propose a limit on the number of separate discriminations that can be made. Space-based theories propose a limit on the spatial area from which information can be taken up. To distinguish these views, the present experiments used small (less than 1 degree), brief, foveal displays, each consisting of two overlapping objects (a box with a line struck through it). It was found that two judgments that concern the same object can be made simultaneously without loss of accuracy, whereas two judgments that concern different objects cannot. Neither the similarity nor the difficulty of required discriminations, nor the spatial distribution of information, could account for the results. The experiments support a view in which parallel, preattentive processes serve to segment the field into separate objects, followed by a process of focal attention that deals with only one object at a time. This view is also able to account for results taken to support both discrimination-based and space-based theories.
Article
Emotional expressions are important cues that capture our attention automatically. Although a wide range of work has explored the role and influence of emotions on cognition and behavior, little is known about the way emotions influence motor actions. Moreover, considering how critical detecting emotional facial expressions in the environment can be, it is important to understand their impact even when they are not directly relevant to the task that we are performing. Our novel approach explores this issue from the attention and action perspective, using a task-irrelevant distractor paradigm, in which participants are asked to reach for a target while a non-target stimulus is also presented. We tested whether movement trajectory is influenced by irrelevant stimuli such as faces with or without emotional expressions. Results showed that the reaching path veered toward faces with emotional expressions, in particular happiness, but not toward neutral expressions. This reinforces the view of emotions as attention-capturing stimuli that are, however, also a potential source of distraction for motor actions.
Article
Bayes factors have been advocated as superior to p-values for assessing statistical evidence in data. Despite the advantages of Bayes factors and the drawbacks of p-values, inference by p-values is still nearly ubiquitous. One impediment to the adoption of Bayes factors is a lack of practical development, particularly a lack of ready-to-use formulas and algorithms. In this paper, we discuss and expand a set of default Bayes factor tests for ANOVA designs. These tests are based on multivariate generalizations of Cauchy priors on standardized effects, and have the desirable properties of being invariant with respect to linear transformations of measurement units. Moreover, these Bayes factors are computationally convenient, and straightforward sampling algorithms are provided. We cover models with fixed, random, and mixed effects, including random interactions, and do so for within-subject, between-subject, and mixed designs. We extend the discussion to regression models with continuous covariates. We also discuss how these Bayes factors may be applied in nonlinear settings, and show how they are useful in differentiating between the power law and the exponential law of skill acquisition. In sum, the current development makes the computation of Bayes factors straightforward for the vast majority of designs in experimental psychology.
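The default Bayes factors described above place a Cauchy prior on standardized effects, expressed as an inverse-gamma mixture over a variance-scale parameter g. A minimal sketch of the idea, for the simplest case (the JZS Bayes factor for a one-sample t test, after Rouder et al., 2009) rather than the full ANOVA machinery developed in the paper, can be computed by one-dimensional numerical integration; the function name and defaults here are illustrative, not part of the paper:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """JZS default Bayes factor (H1 vs H0) for a one-sample t test.

    Under H1 the standardized effect has a Cauchy(0, r) prior,
    written as g ~ inverse-gamma(1/2, r^2/2) mixed over a scale g.
    """
    nu = n - 1  # degrees of freedom

    # Marginal likelihood under H0 (up to a constant shared with H1).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Integrand for H1: likelihood scaled by (1 + n*g), averaged
    # over the inverse-gamma prior density on g.
    def integrand(g):
        prior = (r / np.sqrt(2 * np.pi)) * g ** (-1.5) * np.exp(-r**2 / (2 * g))
        lik = (1 + n * g) ** (-0.5) * (
            1 + t**2 / ((1 + n * g) * nu)
        ) ** (-(nu + 1) / 2)
        return lik * prior

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0
```

A t of zero yields a Bayes factor below 1 (evidence for the null), while a large t with moderate n yields strong evidence for an effect; the ANOVA tests in the paper generalize this same mixture construction to multivariate priors on effect vectors.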
Article
Transport and grasp kinematics were examined in a task in which subjects selectively reached to grasp a target object in the presence of non-target objects. In a variety of experiments significant interference effects were observed in temporal parameters, such as movement time, and spatial parameters, such as path. In general, the presence of non-targets slowed down the reach. Furthermore, reach paths were affected such that the hand veered away from near non-targets in reaches for far targets, even though the non-targets were not physical obstacles to the reaching hand. In contrast, the hand veered towards far non-targets in near reaches. We conclude that non-targets evoke competing responses, and the inhibitory mechanisms that resolve this competition are revealed in the reach path.
Article
The construct of associative prosopagnosia is strongly debated for two main reasons. The first is that, according to some authors, even patients with putative forms of associative visual agnosia necessarily present perceptual defects, that are the cause of their recognition impairment. The second is that in patients with right anterior temporal lobe (ATL) lesions (and sparing of the occipital and fusiform face areas), who can present a defect of familiar people recognition, with normal results on tests of face perception, the disorder is often multimodal, affecting voices (and to a lesser extent names) in addition to faces. The present review was prompted by the claim, recently advanced by some authors, that face recognition disorders observed in patients with right ATL lesions should be considered as an associative or amnestic form of prosopagnosia, because in them both face perception and retrieval of personal semantic knowledge from name are spared. In order to check this claim, we surveyed all the cases of patients who satisfied the criteria of associative prosopagnosia reported in the literature, to see if their defect was circumscribed to the visual modality or also affected other channels of people recognition. The review showed that in most patients the study had been limited to the visual modality, but that, when the other modalities of people recognition had been taken into account, the defect was often multimodal, affecting voice (and to a lesser extent name) in addition to face.
Article
Selective attention can be improved under conditions in which a high perceptual load is assumed to exhaust cognitive resources, leaving scarce resources for distractor processing. The present study examined whether perceptual load and acute stress share common attentional resources by manipulating perceptual and stress loads. Participants identified a target within an array of nontargets that were flanked by compatible or incompatible distractors. Attentional selectivity was measured by longer reaction times in response to the incompatible than to the compatible distractors. Participants in the stress group participated in a speech test that increased anxiety and threatened self-esteem. The effect of perceptual load interacted with the stress manipulation in that participants in the control group demonstrated an interference effect under the low perceptual load condition, whereas such interference disappeared under the high perceptual load condition. Importantly, the stress group showed virtually no interference under the low perceptual load condition, whereas substantial interference occurred under the high perceptual load condition. These results suggest that perceptual and stress related demands consume the same attentional resources.
Article
Interest in sex-related differences in psychological functioning has again come to the foreground with new findings about their possible functional basis in the brain. Sex differences may be one way in which evolution has capitalized on the capacity of homologous brain regions to process social information differently in men and women. This paper focuses specifically on the effects of emotional valence, sex of the observed and sex of the observer on regional brain activations. We also discuss the effects of and interactions between environment, hormones, genes and structural differences of the brain in the context of differential brain activity patterns between men and women following exposure to seen expressions of emotion, and in this context we outline a number of methodological considerations for future research. Importantly, results show that although women are better at recognizing emotions and express themselves more easily, men show greater responses to threatening cues (dominant, violent or aggressive), and this may reflect different behavioral response tendencies between men and women as well as evolutionary effects. We conclude that sex differences must not be ignored in affective research and more specifically in affective neuroscience.
Article
In two experiments, we explored whether emotional context influences imitative action tendencies. To this end, we examined how emotional pictures, presented as primes, affect imitative tendencies using a compatibility paradigm. In Experiment 1, when seen index finger movements (lifting or tapping) and pre-instructed finger movements (tapping) were the same (tapping-tapping, compatible trials), participants were faster than when they were different (lifting-tapping, incompatible trials). This compatibility effect was enhanced when the seen finger movement was preceded by negative primes compared with positive or neutral primes. In Experiment 2, using only negative and neutral primes, the influence of negative primes on the compatibility effect was replicated with participants performing two types of pre-instructed finger movements (tapping and lifting). This emotional modulation of the compatibility effect was independent of the participants' trait anxiety level. Moreover, the emotional modulation pertained primarily to the compatible conditions, suggesting facilitated imitation due to negatively valent primes rather than increased interference. We speculate that negative stimuli increase imitative tendencies as a natural response in potential flight-or-fight situations.
Article
The aim of this paper is to develop a theoretical model and a set of terms for understanding and discussing how we recognize familiar faces, and the relationship between recognition and other aspects of face processing. It is suggested that there are seven distinct types of information that we derive from seen faces; these are labelled pictorial, structural, visually derived semantic, identity-specific semantic, name, expression and facial speech codes. A functional model is proposed in which structural encoding processes provide descriptions suitable for the analysis of facial speech, for analysis of expression and for face recognition units. Recognition of familiar faces involves a match between the products of structural encoding and previously stored structural codes describing the appearance of familiar faces, held in face recognition units. Identity-specific semantic codes are then accessed from person identity nodes, and subsequently name codes are retrieved. It is also proposed that the cognitive system plays an active role in deciding whether or not the initial match is sufficiently close to indicate true recognition or merely a 'resemblance'; several factors are seen as influencing such decisions. This functional model is used to draw together data from diverse sources including laboratory experiments, studies of everyday errors, and studies of patients with different types of cerebral injury. It is also used to clarify similarities and differences between processes for object, word and face recognition.
Article
Prosopagnosia, the inability to recognize visually the faces of familiar persons who continue to be normally recognized through other sensory channels, is caused by bilateral cerebral lesions involving the visual system. Two patients with prosopagnosia generated frequent and large electrodermal skin conductance responses to faces of persons they had previously known but were now unable to recognize. They did not generate such responses to unfamiliar faces. The results suggest that an early step of the physiological process of recognition is still taking place in these patients, without their awareness but with an autonomic index.
Article
The role of visual information and the precise nature of the representations used in the control of prehension movements has frequently been studied by having subjects reach for target objects in the absence of visual information. Such manipulations have often been described as preventing visual feedback; however, they also impose a working memory load not found in prehension movements with normal vision. In this study we examined the relationship between working memory and visuospatial attention using a prehension task. Six healthy, right-handed adult subjects reached for a wooden block under conditions of normal vision, or else with their eyes closed having first observed the placement of the target. Furthermore, the role of visuospatial attention was examined by studying the effect, on transport and grasp kinematics, of placing task-irrelevant "flanker" objects (a wooden cylinder) within the visual field on a proportion of trials. Our results clearly demonstrated that the position of flankers produced clear interference effects on both transport and grasp kinematics. Furthermore, interference effects were significantly greater when subjects reached to the remembered location of the target (i.e., with eyes closed). The finding that the position of flanker objects influences both transport and grasp components of the prehension movement is taken as support for the view that these components may not be independently computed and that subjects may prepare a coordinated movement in which both transport and grasp are specifically adapted to the task in hand. The finding that flanker effects occur primarily when reaching to the remembered location of the target object is interpreted as supporting the view that attentional processes do not work efficiently on working memory representations.
Article
Normal subjects (n = 64) were exposed either to pictures of snakes and spiders or to pictures of flowers and mushrooms in a differential conditioning paradigm in which one of the pictures signaled an electric shock. In a subsequent extinction series, these stimuli were presented backwardly masked by another stimulus for half of the subjects, whereas the other half received non-masked extinction. In support of a hypothesis that suggests that nonconscious information-processing mechanisms are sufficient to activate responses to fear-relevant stimuli, differential skin conductance response to masked conditioning and control stimuli was obvious only for subjects conditioned to fear-relevant stimuli. These results were replicated in a second experiment (n = 32), which also demonstrated that the effect was unaffected by which visual half-field was used for stimulus presentation.
Article
Two experiments examined the effects of facial expressions of emotion as conditioned stimuli (CSs) on human electrodermal conditioning and on a continuous measure of expectancy of the shock unconditioned stimulus. In Experiment 1, the CS+ was a picture of a person displaying an angry face and CS- was a neutral face. For half of the subjects, the expressions were depicted by males, for the other half by females. Male subjects showed larger skin conductance responses to pictures of males than did females. The responding of female subjects was the same regardless of the sex of the person in the picture. In Experiment 2, the CS+ and CS- were pictures of an angry or a happy face. For half of the subjects, the expressions were depicted by adult males, for the other half by preadolescent males. Subjects displayed greater differentiation when an adult male depicting anger was employed as the CS+ than when a preadolescent male depicting anger was the CS+. There were no differences when an adult or a child displayed happiness.
Article
Previous research has demonstrated that when a stimulus is to be ignored, the path of motion towards a target (saccade or manual reach) deviates away from the to-be-ignored stimulus. Path deviations in saccade and reaching tasks have, however, been observed in very different situations. In the saccade tasks subjects initially attended to a cue, then disengaged attention while saccading to a target. By contrast, in the selective reaching tasks attention was continuously withdrawn from the to-be-ignored stimulus, as this was irrelevant throughout the experiment. In the two experiments reported here, cues similar to those studied in saccade tasks are examined with selective reaching procedures. Experiment 1 shows that when a coloured light-emitting diode cue, upon which subjects engage and then subsequently disengage attention, is close to the responding hand, the hand deviates away from the cue. Experiment 2 confirms this cue avoidance by showing that, compared with central fixation alone, the hand veers away from a central cue. These results confirm that the path deviations observed in saccades can also be obtained in manual reaching movements. Such findings support the notion that eye and hand movements are both affected by inhibitory mechanisms of attention.
Article
Contrasting theories of visual attention emphasize selection by spatial location, visual features (such as motion or colour) or whole objects. Here we used functional magnetic resonance imaging (fMRI) to test key predictions of the object-based theory, which proposes that pre-attentive mechanisms segment the visual array into discrete objects, groups, or surfaces, which serve as targets for visual attention. Subjects viewed stimuli consisting of a face transparently superimposed on a house, with one moving and the other stationary. In different conditions, subjects attended to the face, the house or the motion. The magnetic resonance signal from each subject's fusiform face area, parahippocampal place area and area MT/MST provided a measure of the processing of faces, houses and visual motion, respectively. Although all three attributes occupied the same location, attending to one attribute of an object (such as the motion of a moving face) enhanced the neural representation not only of that attribute but also of the other attribute of the same object (for example, the face), compared with attributes of the other object (for example, the house). These results cannot be explained by models in which attention selects locations or features, and provide physiological evidence that whole objects are selected even when only one visual attribute is relevant.
Article
Event-related potentials (ERPs) were recorded while subjects were involved in three gender-processing tasks based on human faces and on human hands. In one condition all stimuli were only of one gender, preventing any gender discrimination. In a second condition, faces (or hands) of men and women were intermixed but the gender was irrelevant for the subject's task; hence gender discrimination was assumed to be incidental. In the third condition, the task required explicit gender discrimination; gender processing was therefore assumed to be intentional. Gender processing had no effect on the occipito-temporal negative potential at approximately 170 ms after stimulation (N170 component of the ERP), suggesting that the neural mechanisms involved in the structural encoding of faces are different from those involved in the extraction of gender-related facial features. In contrast, incidental and intentional processing of face (but not hand) gender affected the ERPs between 145 and 185 ms from stimulus onset at more anterior scalp locations. This effect was interpreted as evidence for the direct visual processing of faces as described in Bruce and Young's model [Bruce, V. & Young, A. (1986) Br. J. Psychol., 77, 305-327]. Additional gender discrimination effects were observed for both faces and hands at mid-parietal sites around 45-85 ms latency, in the incidental task only. This difference was tentatively assumed to reflect an early mechanism of coarse visual categorization. Finally, intentional (but not incidental) gender processing affected the ERPs during a later epoch starting from approximately 200 ms and ending at approximately 250 ms for faces, and approximately 350 ms for hands. This later effect might be related to attention-based gender categorization or to a more general categorization activity.
Article
We used event-related fMRI to assess whether brain responses to fearful versus neutral faces are modulated by spatial attention. Subjects performed a demanding matching task for pairs of stimuli at prespecified locations, in the presence of task-irrelevant stimuli at other locations. Faces or houses unpredictably appeared at the relevant or irrelevant locations, while the faces had either fearful or neutral expressions. Activation of fusiform gyri by faces was strongly affected by attentional condition, but the left amygdala response to fearful faces was not. Right fusiform activity was greater for fearful than neutral faces, independently of the attention effect on this region. These results reveal differential influences on face processing from attention and emotion, with the amygdala response to threat-related expressions unaffected by a manipulation of attention that strongly modulates the fusiform response to faces.
Article
We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array, and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures, and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC and not the VAC is responsible for experimental effects described above. Intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the stimulus emotional content and on its associated biological importance.
Article
Two models of selective reaching have been proposed to account for deviations in movement trajectories in cluttered environments. The response vector model predicts that movement trajectories should deviate toward or away from the location of a distractor of little or large salience, respectively. In contrast, the response activation model predicts that a distractor with large salience should cause movement deviations towards it, whereas a distractor with little salience should not influence the movement. The precuing technique was combined with the distractor interference paradigm to test these predictions. Results indicate that when the target was presented at the precued (salient) location, movements were unaffected by a distractor. Conversely, when the distractor was presented at the precued location while the target was presented at an uncued (non-salient) location, participants demonstrated increased reaction times and trajectory deviations towards the location of the distractor. These findings are consistent with the model of response activation.
Article
Recognition of facial expressions of emotions is very important for communication and social cognition. Neuroimaging studies showed that numerous brain regions participate in this complex function. To study spatiotemporal aspects of the neural representation of facial emotion recognition we recorded neuromagnetic activity in 12 healthy individuals by means of a whole head magnetoencephalography system. Source reconstructions revealed that several cortical and subcortical brain regions produced strong neural activity in response to emotional faces at latencies between 100 and 360 ms that were much stronger than those to neutral as well as to blurred faces. Orbitofrontal cortex and amygdala showed affect-related activity at short latencies already within 180 ms after stimulus onset. Some of the emotion-responsive regions were repeatedly activated during the stimulus presentation period pointing to the assumption that these reactivations represent indicators of a distributed interacting circuitry.
Article
In this review we examine how attention is involved in detecting faces, recognizing facial identity and registering and discriminating between facial expressions of emotion. The first section examines whether these aspects of face perception are "automatic", in that they are especially rapid, non-conscious, mandatory and capacity-free. The second section discusses whether limited-capacity selective attention mechanisms are preferentially recruited by faces and facial expressions. Evidence from behavioral, neuropsychological, neuroimaging and psychophysiological studies from humans and single-unit recordings from primates is examined and the neural systems involved in processing faces, emotion and attention are highlighted. Avenues for further research are identified.
Mouchetant-Rostaing, Y., Giard, M.-H., Bentin, S., Aguera, P.-E., & Pernier, J. (2000). Neurophysiological correlates of face gender processing in humans. European Journal of Neuroscience, 12(1), 303–310. doi:10.1046/j.1460-9568.2000.00888.x

O'Craven, K. M., Downing, P. E., & Kanwisher, N. (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401(6753), 584–587. doi:10.1038/44134

Streit, M., Dammers, J., Simsek-Kraues, S., Brinkmeyer, J., Wölwer, W., & Ioannides, A. (2003). Time course of regional brain activations during facial emotion recognition in humans. Neuroscience Letters, 342(1–2), 101–104. doi:10.1016/S0304-3940(03)00274-X

Tipper, S. P., Howard, L. A., & Jackson, S. R. (1997). Selective reaching to grasp: Evidence for distractor interference effects. Visual Cognition, 4(1), 1–38. doi:10.1080/713756749