Article

The involvement of distinct visual channels in rapid attention towards fearful facial expressions


Abstract

This paper reports five experiments demonstrating that the low spatial frequency components of faces are critical to the production of rapid attentional responses towards fearful facial expressions. In our main experiments, low spatial frequency (LSF) or high spatial frequency (HSF) face pairs, consisting of one fearful and one neutral expression, were presented on a computer screen for a brief period. Participants were required to identify as quickly as possible the orientation of a bar target that immediately replaced one of the faces. Responses were faster when targets replaced the location of LSF fearful faces, compared with LSF neutral faces. By contrast, there were no differences between responses to targets replacing HSF fearful versus HSF neutral faces. This facilitation in spatial orienting occurred specifically with short time intervals between faces and target, and is consistent with a rapid processing of fear cues from LSF inputs that can serve to guide attention towards threat events.
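The LSF/HSF manipulation described in the abstract is typically implemented by filtering face images in the Fourier domain. The sketch below is illustrative only, not the authors' stimulus-preparation code; the Gaussian ramp and the cutoff of 8 cycles/image are assumptions, and a synthetic gradient-plus-grating image stands in for a face photograph:

```python
import numpy as np

def spatial_frequency_filter(image, cutoff_cpi, mode="low"):
    """Keep spatial frequencies below ("low") or above ("high") a cutoff.

    cutoff_cpi is the cutoff in cycles per image; a Gaussian ramp in the
    Fourier domain is used to avoid the ringing artefacts of a hard cutoff.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                  # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w                  # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass = np.exp(-(radius ** 2) / (2 * cutoff_cpi ** 2))
    mask = lowpass if mode == "low" else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

# Demo on a synthetic 128x128 "stimulus": a coarse vertical gradient
# (low-frequency content) plus a fine 40 cycles/image grating (high-frequency).
coarse = np.linspace(0.0, 1.0, 128)[:, None] * np.ones((1, 128))
fine = 0.5 * np.sin(2 * np.pi * 40 * np.arange(128) / 128)[None, :]
image = coarse + fine

lsf = spatial_frequency_filter(image, cutoff_cpi=8, mode="low")   # keeps the gradient
hsf = spatial_frequency_filter(image, cutoff_cpi=8, mode="high")  # keeps the grating
```

Because the complementary masks sum to one, the LSF and HSF versions add back up to the original image, a convenient sanity check when preparing matched stimulus pairs.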


... However, biases for responding to fearful facial expressions have been explored using a great number of paradigms, with mixed results. Studies of attentional cueing have shown both attentional capture and delayed disengagement by fearful faces [5,6], attentional capture only [7,3,8], or a complete absence of these effects [9,10,8,11]. Simple localisation tasks have shown that response times to detect fear expressions are faster compared to neutral faces [12,13], or no different [14,15]. ...
... Simple localisation tasks have shown that response times to detect fear expressions are faster compared to neutral faces [12,13], or no different [14,15]. Some studies have shown detection advantages for fear expressions masked by continuous flash suppression, backward-masking and sandwich-masking techniques [16,17,18], while others have not [8,19]. It may be that paradigm-related differences can account for these mixed findings. ...
Article
The present study explores the threat bias for fearful facial expressions using saccadic latency, with a particular focus on the role of low-level facial information, including spatial frequency and contrast. In a simple localisation task, participants were presented with spatially-filtered versions of neutral, fearful, angry and happy faces. Together, our findings show that saccadic responses are not biased toward fearful expressions compared to neutral, angry or happy counterparts, regardless of their spatial frequency content. Saccadic response times are, however, significantly influenced by the spatial frequency and contrast of facial stimuli. We discuss the implications of these findings for the threat bias literature, and the extent to which image processing can be expected to influence behavioural responses to socially-relevant facial stimuli.
... Biases for responding to fearful facial expressions have been explored using a great number of paradigms, and have produced mixed results. Studies of attentional cueing show both attentional capture and delayed disengagement by fearful faces [5,6], attentional capture only [3,7,8], or an absence of these effects [8][9][10][11]; simple localisation tasks show that response times to detect fear expressions are faster compared to neutral faces [12,13], or no different [14,15]; some studies show detection advantages for fear expressions masked by continuous flash suppression, backward-masking and sandwich-masking techniques [16][17][18], while others do not [8,19]. It may be that paradigm-related differences can account for these mixed findings. ...
Preprint
Full-text available
The present study explores the threat bias for fearful facial expressions using saccadic latency as the response mode, with a particular focus on the role of low-level facial information, including spatial frequency, physical contrast, and apparent, perceived contrast. In a simple localisation task, participants were presented with spatially-filtered versions of neutral, fearful, angry and happy faces. Faces were either composed of naturally-occurring, expression-related differences in contrast, normalised for RMS contrast, or normalised for their apparent, perceived contrast. Together, findings show that saccadic responses are not biased toward fearful expressions compared to neutral, angry or happy counterparts, regardless of their spatial frequency content. Saccadic response times are, however, significantly influenced by the physical contrast of facial stimuli, and the extent to which these are preserved or normalised at the physical (RMS matched) and psychophysical (perceptually matched) level. We discuss the implications of findings for the threat bias literature, and the extent to which image processing can be expected to influence behavioural responses to socially-relevant facial stimuli.
... In the field of face processing, electrophysiological evidence with EEG has shown that the visual system prioritizes attention towards fearful faces compared to other expressions (Holmes, Green, & Vuilleumier, 2005;Santesso et al., 2008), in particular using a component linked to attention termed the N2pc. The N2pc, characterized as a larger negativity appearing over electrodes contralateral to the side of the attended stimulus compared to ipsilateral electrodes, occurs approx. ...
... Together, these results establish that involuntary capture by emotional faces is reliably modulated by the distance to the observer. These findings support previous N2pc studies that also found an enhanced response to fearful faces (Eimer & Kiss, 2007;Holmes et al., 2005;Santesso et al., 2008). In Experiment 2, we did not find an N2pc at the far distance. ...
... This is surprising given the previous studies that reported an N2pc for emotional faces, where distance was not manipulated. However, in these experiments, screen distances were usually around 60-70 cm away from the participant (Holmes et al., 2005;Santesso et al., 2008), corresponding essentially to close space. It is therefore possible that the N2pc is attenuated in far space but that this effect has not been observed due to the habitual location of computer screens within (or close to) peripersonal space. ...
Article
Attention is an important function that allows us to selectively enhance the processing of relevant stimuli in our environment. Fittingly, a number of studies have revealed that potentially threatening/fearful stimuli capture attention more efficiently. Interestingly, in separate fMRI studies, threatening stimuli situated close to viewers were found to enhance brain activity in fear-relevant areas more than stimuli that were further away. Despite these observations, few studies have examined the effect of personal distance on attentional capture by emotional stimuli. Using electroencephalography (EEG), the current investigation addressed this question by investigating attentional capture of emotional faces that were either looming/receding, or were situated at different distances from the viewer. In Experiment 1, participants carried out an incidental task while looming or receding fearful and neutral faces were presented bilaterally. A significant lateralised N170 and N2pc were found for a looming upright fearful face, however no significant components were found for a looming upright neutral face or inverted fearful and neutral faces. In Experiment 2, participants made gender judgements of emotional faces that appeared on a screen situated within or beyond peripersonal space (respectively 50 cm or 120 cm). Although response times did not differ, significantly more errors were made when faces appeared in near as opposed to far space. Importantly, ERPs revealed a significant N2pc for fearful faces presented in peripersonal distance, compared to the far distance. Our findings show that personal distance markedly affects neural responses to emotional stimuli, with increased attention towards fearful upright faces that appear in close distance.
... While LSF carry coarse information such as rough configurational cues, HSF carry fine-grained visual information, such as texture and contrast. In the case of faces, LSF are believed to be particularly important in the swift detection of facial emotional expressions (Bar et al., 2006;Holmes, Green, & Vuilleumier, 2005;Mendez-Bertolo et al., 2016;Schyns & Oliva, 1999). Conversely, the processing of HSF is thought to be relatively slower and support more detailed feature processing of faces (Goffaux, Hault, Michel, Vuong, & Rossion, 2005). ...
... This debate is supported by evidence showing: i) the involvement of other brain regions, such as the orbitofrontal cortex, in the fast discrimination of emotional stimuli (Kawasaki et al., 2001) and ii) the use of HSF in the identification of fearful expressions (Stein, Seymour, Hebart, & Sterzer, 2014). Nevertheless, converging evidence suggests that the processing of coarse LSF information relies on amygdala activity (Mendez-Bertolo et al., 2016;Vuilleumier et al., 2003) and mediates the fast extraction and attentional engagement to threat cues but is not important to more sustained attentional processing (Holmes et al., 2005;Lojowska et al., 2015;Park et al., 2013). Our results provide a significant advance to this literature by showing how the representation of ongoing bodily states influences visual threat processing. ...
... Cardiac signals may be particularly effective in modulating attentional capture when visual processing is largely dependent on amygdala processing and less when it relies on additional processing at higher-order visual areas, such as to stimuli containing HSF. Moreover, while HSF contain fine-grained visual information used in the conscious discrimination of fearful expressions (Stein et al., 2014), the information conveyed by LSF is rather coarse and may be perceptually more ambiguous (Holmes et al., 2005;Park, Vasey, Kim, Hu, & Thayer, 2016;Vuilleumier et al., 2003). Such relative perceptual ambiguity in explicit appraisal may also contribute to enhance amygdala mediated stimulus processing (Adolphs, 2013). ...
Article
Despite the growing consensus that the continuous dynamic cortical representations of internal bodily states shape the subjective experience of emotions, physiological arousal is typically considered only a consequence and rarely a determinant of the emotional experience. Recent experimental approaches study how afferent autonomic signals from the heart modulate the processing of sensory information by focusing on the phasic properties of arterial baroreceptor firing that is active during cardiac systole and quiescent during cardiac diastole. For example, baroreceptor activation has been shown to enhance the processing of threat-signalling stimuli. Here, we investigate the role of cardiac afferent signals in the rapid engagement and disengagement of attention to fear stimuli. In an adapted version of the emotional attentional cueing paradigm, we timed the presentation of cues, either fearful or neutral faces, to coincide with the different phases of the cardiac cycle. Moreover, we presented cues with different spatial ranges to investigate how these interoceptive signals influence the processing of visual information. Results revealed a selective enhancement of attentional engagement to low spatial frequency fearful faces presented during cardiac systole relative to diastole. No cardiac cycle effects were observed to high spatial frequency nor broad spatial frequency cues. These findings expand our mechanistic understanding of how body-brain interactions may impact the visual processing of fearful stimuli and contribute to the increased attentional capture of threat signals.
... Fearful facial expressions are especially salient to the human visual system relative to other expressions [1][2]. Expressions of fear capture and orient visual spatial attention [3][4][5], receive preferential allocation of attentional resources [3][6][7][8], and emerge faster under conditions of visual suppression [9][10]. This bias for fearful expressions also occurs in peripheral vision [11][12], and when observers report being unaware of having been presented with a face [13][14][15]. ...
... Statistically significant Sidak-corrected comparisons showed that, in order to perceptually match a reference face, upright high spatial frequency fear expressions require 12.5, 27.3 and 20.52% less RMS contrast than neutral, angry and disgust faces (respectively). In the manipulated condition, control high frequency fear expressions require ... [Table caption: Michelson contrast settings for images of facial expressions when face images are perceptually matched to a reference stimulus whose contrast is fixed at 10% Michelson contrast. Lower RMS settings denote less physical contrast required for perceptual matching, thus implying relatively greater salience and higher apparent contrast.] ...
Article
Full-text available
Fearful facial expressions tend to be more salient than other expressions. This threat bias is to some extent driven by simple low-level image properties, rather than the high-level emotion interpretation of stimuli. It might be expected therefore that different expressions will, on average, have different physical contrasts. However, studies tend to normalise stimuli for RMS contrast, potentially removing a naturally-occurring difference in salience. We assessed whether images of faces differ in both physical and apparent contrast across expressions. We measured physical RMS contrast and the Fourier amplitude spectra of 5 emotional expressions prior to contrast normalisation. We also measured expression-related differences in perceived contrast. Fear expressions have a steeper Fourier amplitude slope compared to neutral and angry expressions, and consistently significantly lower contrast compared to other faces. This effect is more pronounced at higher spatial frequencies. With the exception of stimuli containing only low spatial frequencies, fear expressions appeared higher in contrast than a physically matched reference. These findings suggest that contrast normalisation artificially boosts the perceived salience of fear expressions; an effect that may account for perceptual biases observed for spatially filtered fear expressions.
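The RMS contrast measurement and normalisation discussed in this abstract are simple pixel-level operations. The following is a hedged numpy sketch, not the authors' measurement code; toy noise images stand in for face photographs, and the target RMS of 0.15 is an arbitrary illustration:

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: the standard deviation of pixel intensities
    (conventionally computed on an image scaled to the range [0, 1])."""
    return float(np.std(image))

def michelson_contrast(image):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lo, hi = float(image.min()), float(image.max())
    return (hi - lo) / (hi + lo)

def normalise_rms(image, target_rms):
    """Rescale intensities around the mean so RMS contrast equals target_rms --
    the normalisation step the abstract argues can mask natural differences."""
    mean = image.mean()
    return mean + (image - mean) * (target_rms / np.std(image))

# Toy stand-ins for face photographs: the "fearful" image starts out with
# lower physical contrast, as the study reports, and normalisation equates them.
rng = np.random.default_rng(1)
neutral = np.clip(0.5 + 0.20 * rng.standard_normal((64, 64)), 0.0, 1.0)
fearful = np.clip(0.5 + 0.12 * rng.standard_normal((64, 64)), 0.0, 1.0)

neutral_norm = normalise_rms(neutral, target_rms=0.15)
fearful_norm = normalise_rms(fearful, target_rms=0.15)
```

After normalisation both images have identical RMS contrast, which is precisely why, on the abstract's argument, a naturally lower-contrast fear expression ends up artificially boosted relative to its unnormalised state.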
... Research examining the psychological and neural mechanisms of threat perception has generally focused on bottom-up processing of threat, reinforcing the view that the perception of threat is automatic, and not subject to endogenous processes (Vuilleumier & Driver, 2007). This view has led to the development of several paradigms in which emotional stimuli are unexpected, distracting, or irrelevant to the task at hand (Armony & Dolan, 2002; Holmes, Green, & Vuilleumier, 2005; Keil, Moratti, Sabatinelli, Bradley, & Lang, 2005; Mogg et al., 2000; Mogg & Bradley, 1999a; Stormark & Hugdahl, 1996). For example, the dot-probe task simultaneously presents an emotional and a neutral stimulus peripherally, and one of these two stimuli is followed by an attentional probe (Holmes et al., 2005; Mogg et al., 2000; Mogg & Bradley, 1999a). ...
... This view has led to the development of several paradigms in which emotional stimuli are unexpected, distracting, or irrelevant to the task at hand (Armony & Dolan, 2002; Holmes, Green, & Vuilleumier, 2005; Keil, Moratti, Sabatinelli, Bradley, & Lang, 2005; Mogg et al., 2000; Mogg & Bradley, 1999a; Stormark & Hugdahl, 1996). For example, the dot-probe task simultaneously presents an emotional and a neutral stimulus peripherally, and one of these two stimuli is followed by an attentional probe (Holmes et al., 2005; Mogg et al., 2000; Mogg & Bradley, 1999a). Faster responses to probes in threat-associated locations (valid trial) compared with neutral locations are interpreted as evidence of bias to threatening stimuli. ...
Chapter
The perception of threat is important for survival and is therefore perceptually prioritized. This prioritization has largely been studied as a stimulus-driven (i.e., bottom-up) process. However, we suggest that the process of perception starts before a stimulus is encountered. This chapter explores the impact of prestimulus biases on the perceptual prioritization of threatening stimuli, in normal function and in anxiety. First, we review how the bottom-up aspects of threat perception have been examined empirically before examining how threat-related endogenous (i.e., top-down) factors can guide perception. We highlight major theories related to top-down guided threat perception and discuss some conceptual and methodological pitfalls that can occur when neglecting emotional top-down factors in threat perception. Next, we review neurobiological and peripheral factors related to threat perception guided by top-down processes. Differences between top-down threat perception in anxiety and healthy function are explored. Finally, we discuss limitations and future directions for the field.
... Fearful facial expressions are particularly salient to the human visual system, receiving preferential allocation of attentional resources, and inhibiting this attention from relocating to different stimuli [1][2][3][4]. This attentional effect is also found when fearful faces appear in peripheral vision [5][6]. ...
... In terms of the threat bias for fearful faces, this means that fearful faces may be prioritised because of their emotional relevance, or their low-level image properties. The latter, low-level approach has been a particular focus within visual psychophysics, where studies have shown that it is specifically the low spatial frequency information in fearful faces that gives rise to the saliency effects associated with fearful expressions [1,[11][12]. Low frequency components of fear expressions are thought to undergo rapid processing via low-frequency-sensitive subcortical pathways that directly access the amygdala [11][12]. ...
Article
Full-text available
It has been argued that rapid visual processing of fearful face expressions is driven by the fact that effective contrast is higher in these faces compared to other expressions, when the contrast sensitivity function is taken into account. This proposal has been upheld by data from image analyses, but is yet to be tested at the behavioural level. The present study conducts a traditional contrast sensitivity task for face images of various facial expressions. Findings show that visual contrast thresholds do not differ across facial expressions. We re-conduct the analysis of faces’ effective contrast, using the procedure developed by Hedger, Adams and Garner, and show that higher effective contrast in fearful face expressions relies on face images first being normalised for RMS contrast. When not normalised for RMS contrast, effective contrast in fear expressions is no different, or sometimes even lower, compared to other expressions. However, the effect of facial expression on detection in a backward masking study did not depend on the type of contrast normalisation used. These findings are discussed in relation to the implications of contrast normalisation for the salience of face expressions in behavioural and neurophysiological experiments, and the extent to which natural physical differences between facial stimuli are masked during stimulus standardisation and normalisation.
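The "effective contrast" referred to in this abstract weights an image's Fourier amplitude spectrum by a contrast sensitivity function (CSF). The sketch below uses a generic band-pass CSF rather than the exact model of Hedger, Adams and Garner; the assumed viewing size of 4 degrees and the CSF peak at 3 cycles/degree are illustrative assumptions:

```python
import numpy as np

def csf(f, peak=3.0):
    """Illustrative band-pass contrast sensitivity function (assumption:
    unimodal, equal to 1 at its peak); f in cycles per degree."""
    f = np.maximum(f, 1e-6)
    return (f / peak) * np.exp(1.0 - f / peak)

def effective_contrast(image, degrees=4.0):
    """CSF-weighted mean Fourier amplitude of a mean-removed image,
    assuming the image subtends `degrees` of visual angle."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h / degrees        # cycles per degree
    fx = np.fft.fftfreq(w) * w / degrees
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    amplitude = np.abs(np.fft.fft2(image - image.mean()))
    return float(np.mean(amplitude * csf(radius)))

rng = np.random.default_rng(2)
image = rng.random((64, 64))    # toy stand-in for a face photograph
ec = effective_contrast(image)
```

Note that this measure scales linearly with physical contrast, which is why, as the abstract argues, whether images are RMS-normalised beforehand can change which expression comes out with the highest effective contrast.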
... Studies applying this paradigm often find a search advantage for angry faces, which is not related to participants' anxiety (the so-called anger-superiority effect) for both schematic faces (Fox et al., 2000;Hahn & Gronlund, 2007) and real faces (Horstmann & Bauland, 2006;Moriya, Koster, & De Raedt, 2014;Pinkham, Griffin, Baron, Sasson, & Gur, 2010). Third, even within the dot-probe paradigm, some studies report attentional biases towards threatening stimuli in unselected samples (Brosch, Sander, Pourtois, & Scherer, 2008;Holmes, Green, & Vuilleumier, 2005;S. Müller, Rothermund, & Wentura, 2016;Petrova, Wentura, & Bermeitinger, 2013). ...
... Some studies found a significant bias towards threatening stimuli in unselected samples or healthy control participants (Bocanegra, Huijding, & Zeelenberg, 2012;Brosch, Pourtois, Sander, & Vuilleumier, 2011;Holmes et al., 2005;S. Müller et al., 2016), but others did not (Cooper & Langton, 2006;Murphy, Downham, Cowen, & Harmer, 2008;Putman, 2011;Reinecke, Cooper, Favaron, Massey-Chase, & Harmer, 2011;Sigurjónsdóttir, Sigurðardóttir, Björnsson, & Kristjánsson, 2015;Stevens, Rist, & Gerlach, 2009). ...
Article
Dot-probe studies usually find an attentional bias towards threatening stimuli only in anxious participants. Here, we investigated under what conditions such a bias occurs in unselected samples. According to contingent-capture theory, an irrelevant cue only captures attention if it matches an attentional control setting. Therefore, we first tested the hypothesis that an attentional control setting tuned to threat must be activated in (non-anxious) individuals. In Experiment 1, we used a dot-probe task with a manipulation of attentional control settings (“threat”-set vs. control set). Surprisingly, we found an (anxiety-independent) attentional bias to angry faces that was not moderated by attentional control settings. Since we presented two stimuli (i.e., a target and a distractor) on the target screen in Experiment 1 (a necessity to realise the test of contingent capture), but most dot-probe studies only employ a single target, we conducted Experiment 2 to test the hypothesis that attentional bias in the general population is contingent on target competition. Participants performed a dot-probe task involving presentation of a stand-alone target or a target competing with a distractor. We found an (anxiety-independent) attentional bias towards angry faces in the latter but not the former condition. This suggests that attentional bias towards angry faces in unselected samples is not contingent on attentional control settings, but on target competition.
... People can selectively attend to either high- or low-spatial-frequency information, and often do so automatically, depending on presentation time, distance, and most importantly, the diagnosticity of the information (e.g., Schyns & Oliva, 1999; for a review of spatial frequencies and face processing, see Ruiz-Soler & Beltran, 2006). Second, it is often assumed that low and high spatial frequencies differ in their capacities to automatically trigger emotion-related processes (e.g., Bannerman, Hibbard, Chalmers, & Sahraie, 2012;Holmes, Green, & Vuilleumier, 2005;Vuilleumier, Armony, Driver, & Dolan, 2003). It is hypothesized that a fast, potentially subcortical processing route triggers the amygdala by means of magnocellular processing, with greater sensitivity for low spatial frequencies (Morris, Öhman, & Dolan, 1999;Vuilleumier et al., 2003; see also Tamietto & de Gelder, 2010; but see Pessoa & Adolphs, 2010). ...
... Likewise, nonconsciously presented emotional LSF information has been found to influence implicit (but not explicit) behavioral judgments (Laeng et al., 2010), and to elicit brain activity comparable to visible emotional faces (Prete, Capotosto, Zappasodi, Laeng, & Tommasi, 2015), corroborating the assumption that such information is capable of triggering emotion-related processes. Differential processing of HSF and LSF information has also been observed behaviorally in studies focusing on fast and early processes of perception, attention, and spontaneous judgments (Bannerman et al., 2012;Holmes et al., 2005). This evidence suggests that conditions which promote automatic processing, such as short or nonconscious presentation durations, are advantageous for detecting processing differences between high and low spatial frequencies (Langner, Becker, & Rinck, 2012;Rohr & Wentura, 2014), in accordance with the assumption that the supposed processing pathways are especially important for this kind of processing (i.e., fast and early, nonconscious; Barrett & Bar, 2009;Pourtois, Schettino, & Vuilleumier, 2013;Tamietto & de Gelder, 2010). ...
Article
This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement—that is, better long-term memory for emotional than for neutral stimuli—and the emotion-induced recognition bias—that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account—that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
Article
Anxiety is characterized by the anticipation of aversive future events. The importance of prestimulus anticipatory factors, such as goals and expectations, is well-established in both visual perception and attention. Nevertheless, the prioritized perception of threatening stimuli in anxiety has been attributed to the automatic processing of these stimuli and the role of prestimulus factors has been neglected. The present review will focus on the role of top-down processes that occur before stimulus onset in the perceptual and attentional prioritization of threatening stimuli in anxiety. We will review both the cognitive and neuroscience literature, showing how top-down factors, and interactions between top-down and bottom-up factors may contribute to biased perception of threatening stimuli in normal function and anxiety. The shift in focus from stimulus-driven to endogenous factors and interactions between top-down and bottom-up factors in the prioritization of threat-related stimuli represents an important conceptual advance. In addition, it may yield important clues into the development and maintenance of anxiety, as well as inform novel treatments for anxiety.
... Previous dot-probe studies have shown a bias in spatial attention by comparing reaction times on trials in which a dot was presented at the location of a previously presented emotional stimulus with trials in which the dot was presented at the location of a neutral stimulus (Williams et al., 1996; Bradley et al., 1997; Carlson & Reinke, 2008; Holmes et al., 2005; de Valk et al., 2015). Lastly, the dot-probe has also been used successfully to investigate attentional biases in bonobos and other great apes (e.g., Kret et al., 2016;Kret et al., 2018;Tomonaga & Imura, 2009). ...
Article
Full-text available
Previous work has established that humans have an attentional bias towards emotional signals, and there is some evidence that this phenomenon is shared with bonobos, our closest relatives. Although many emotional signals are explicit and overt, implicit cues such as pupil size also contain emotional information for observers. Pupil size can impact social judgment and foster trust and social support, and is automatically mimicked, suggesting a communicative role. While an attentional bias towards more obvious emotional expressions has been shown, it is unclear whether this also extends to a more subtle implicit cue, like changes in pupil size. Therefore, the current study investigated whether attention is biased towards pupils of differing sizes in humans and bonobos. A total of 150 human participants (141 female), with a mean age of 19.13 (ranging from 18 to 32 years old), completed an online dot-probe task. Four female bonobos (6 to 17 years old) completed the dot-probe task presented via a touch screen. We used linear mixed multilevel models to examine the effect of pupil size on reaction times. In humans, our analysis showed a small but significant attentional bias towards dilated pupils compared to intermediate-sized pupils, and towards intermediate-sized pupils compared to small pupils. Our analysis did not show a significant effect in bonobos. These results suggest that the attentional bias towards emotions in humans extends to a subtle, unconsciously produced signal, namely changes in pupil size. Due to methodological differences between the two experiments, more research is needed before drawing a conclusion regarding bonobos. Supplementary information: The online version contains supplementary material available at 10.1007/s42761-022-00146-1.
... In adults, emotions modulate Event Related brain Potentials (ERPs) when faces contain lower spatial frequencies [24], and LSF information is crucial to produce an increase in fMRI activation to fearful faces relative to neutral faces in the amygdala, a key subcortical structure in emotional processing, as well as in the visual cortical areas [25]. Differential sensitivity to HSF and LSF contents of emotional expressions has also been found in some behavioral tasks: emotion categorization [26] and attentional responses to fear [27] occur more rapidly for LSF faces than for HSF faces, whereas participants' ratings of fear intensity increase in the presence of HSF information [25]. It is claimed that distinct streams in the visual system of primates are selectively sensitive to different ranges of spatial frequencies [28,29]. ...
Article
Full-text available
Research has shown that adults are better at processing faces of the most represented ethnic group in their social environment compared to faces from other ethnicities, and that they rely more on holistic/configural information for identity discrimination in own-race than other-race faces. Here, we applied a spatial filtering approach to the investigation of trustworthiness perception to explore whether the information on which trustworthiness judgments are based differs according to face race. European participants (N = 165) performed an online-delivered pairwise preference task in which they were asked to select the face they would trust more within pairs randomly selected from validated White and Asian broad spectrum, low-pass filter and high-pass filter trustworthiness continua. Results confirmed earlier demonstrations that trustworthiness perception generalizes across face ethnicity, but discrimination of trustworthiness intensity relied more heavily on the LSF content of the images for own-race faces compared to other-race faces. Results are discussed in light of previous work on emotion discrimination and the hypothesis of overlapping perceptual mechanisms subtending social perception of faces.
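The low-pass/high-pass filtering approach used in this line of work keeps only the coarse (LSF) or fine (HSF) content of a face image. A minimal frequency-domain sketch, assuming a square grayscale image and an illustrative hard cutoff (published studies differ in their exact filters and cutoff values, e.g. below 8 vs. above 24 cycles per degree):

```python
import numpy as np

def split_spatial_frequencies(img, cutoff_cpd, img_width_deg):
    """Split a square grayscale image (2-D array) into low- and high-spatial-
    frequency versions with a hard radial cutoff in cycles per degree (cpd)."""
    h, w = img.shape
    # FFT coefficient frequencies, converted from cycles/image to cycles/degree
    # (the image is assumed to span img_width_deg of visual angle on each side).
    fy = np.fft.fftfreq(h) * h / img_width_deg
    fx = np.fft.fftfreq(w) * w / img_width_deg
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency (cpd)
    spectrum = np.fft.fft2(img)
    lsf = np.real(np.fft.ifft2(np.where(radius <= cutoff_cpd, spectrum, 0)))
    hsf = img - lsf  # complementary high-pass image
    return lsf, hsf

# Toy example: a coarse vertical grating (0.5 cpd here) plus fine noise.
rng = np.random.default_rng(0)
grating = np.sin(np.linspace(0, 4 * np.pi, 64))[None, :] * np.ones((64, 1))
noisy = grating + 0.1 * rng.standard_normal((64, 64))
lsf, hsf = split_spatial_frequencies(noisy, cutoff_cpd=2.0, img_width_deg=4.0)
```

By construction the two outputs sum back to the original image, so the LSF and HSF versions partition the image's frequency content rather than discarding any of it.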
... In the present study, we therefore tested whether masked (subliminal) fearful faces capture attention. In contrast to the classic dot-probe task, where emotional faces are used as cues and an onset stimulus is shown as target (e.g., a dot as location target or an orientated shape as discrimination target; see Mogg and Bradley, 1999;Holmes et al., 2005;Cooper and Langton, 2006;Brosch et al., 2008;Puls and Rothermund, 2018), we used two stimuli in the target display, instead of only one onset target (cf. Wentura, 2018, 2019). ...
Article
Full-text available
In two experiments, we tested whether fearful facial expressions capture attention in an awareness-independent fashion. In Experiment 1, participants searched for a visible neutral face presented at one of two positions. Prior to the target, a backward-masked and, thus, invisible emotional (fearful/disgusted) or neutral face was presented as a cue, either at target position or away from the target position. If negative emotional faces capture attention in a stimulus-driven way, we would have expected a cueing effect: better performance where fearful or disgusted facial cues were presented at target position than away from the target. However, no evidence of capture of attention was found, neither in behavior (response times or error rates), nor in event-related lateralizations (N2pc). In Experiment 2, we went one step further and used fearful faces as visible targets, too. Thereby, we sought to boost awareness-independent capture of attention by fearful faces. However, still, we found no significant attention-capture effect. Our results show that fearful facial expressions do not capture attention in an awareness-independent way. Results are discussed in light of existing theories.
... Thus, the strong activity elicited by HSF on the N170 could emphasize the analysis of face details required for a precise categorization of a face. Nevertheless, results also differ from previous studies which found no effect of spatial frequencies (Holmes et al., 2005) or larger amplitude for LSF compared to HSF during face processing (Goffaux et al., 2003; Pourtois et al., 2005; Halit et al., 2006). Inconsistencies across studies regarding differences in P100 amplitude or latency according to SF might reflect methodological differences, either in the task and stimuli used or in the SF filtering choices, and require further investigation. ...
Article
Full-text available
Visual processing is thought to function in a coarse-to-fine manner. Low spatial frequencies (LSF), conveying coarse information, would be processed early to generate predictions. These LSF-based predictions would facilitate the further integration of high spatial frequencies (HSF), conveying fine details. The predictive role of LSF might be crucial in automatic face processing, where high performance could be explained by an accurate selection of clues in early processing. In the present study, we used a visual Mismatch Negativity (vMMN) paradigm by presenting an unfiltered face as standard stimulus, and the same face filtered in LSF or HSF as deviant, to investigate the predictive role of LSF vs. HSF during automatic face processing. If LSF are critical for predictions, we hypothesize that LSF deviants would elicit less prediction error (i.e., reduced mismatch responses) than HSF deviants. Results show that both LSF and HSF deviants elicited a mismatch response compared with their equivalent in an equiprobable sequence. However, in line with our hypothesis, LSF deviants evoke significantly reduced mismatch responses compared to HSF deviants, particularly at later stages. The difference in mismatch between HSF and LSF conditions involves posterior areas and right fusiform gyrus. Overall, our findings suggest a predictive role of LSF during automatic face processing and a critical involvement of HSF in the fusiform during the conscious detection of changes in faces.
... In line with this idea, Soares et al. (2014) found that snakes were more rapidly detected compared to spiders, potentially due to evolutionary mechanisms increasing our ability to detect more threatening and predatory stimuli. Facial stimuli also hold evolutionary value for social interactions, such as submissive or competitive behaviors (Öhman et al., 2012), and others have found that facial stimuli are prioritized regardless of task-relevant or attentional demands (Lavie et al., 2003; Reddy et al., 2004), leading to their rapid and efficient processing (Holmes et al., 2005, 2009; Mogg et al., 2008; Öhman et al., 2012) due to dedicated neural networks for such stimuli (Pitcher et al., 2009; Pourtois et al., 2013). As such, it is possible that facial stimuli, and other biologically threatening stimuli, are processed differently than simple stimuli that have recently gained threat-related attributes. ...
Article
Full-text available
Previous work suggests that threat-related stimuli are stored to a greater degree in working memory compared to neutral stimuli. However, most of this research has focused on stimuli with physically salient threat attributes (e.g., angry faces), failing to account for how a “neutral” stimulus that has acquired threat-related associations through differential aversive conditioning influences working memory. The current study examined how differentially conditioned safe (i.e., CS–) and threat (i.e., CS+) stimuli are stored in working memory relative to novel, non-associated (i.e., N) stimuli. Participants (n = 69) completed a differential fear conditioning task followed by a change detection task consisting of three conditions (CS+, CS–, N) across two loads (small, large). Results revealed individuals successfully learned to distinguish the CS+ from the CS– condition during the differential aversive conditioning task. Our working memory outcomes indicated successful load manipulation effects, but no statistically significant differences in accuracy, response time (RT), or Pashler’s K measures of working memory capacity between CS+, CS–, or N conditions. However, we observed significantly reduced RT difference scores for the CS+ compared to CS– condition, indicating greater RT differences between the CS+ and N condition vs. the CS– and N condition. These findings suggest that differentially conditioned stimuli have little impact on behavioral outcomes of working memory compared to novel stimuli that had not been associated with previous safe or aversive outcomes, at least in healthy populations.
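Pashler's K, used above as a capacity estimate, converts change-detection hit and false-alarm rates into a number of items held in working memory; a small sketch with invented rates:

```python
def pashlers_k(set_size, hit_rate, false_alarm_rate):
    """Pashler's (1988) capacity estimate for change detection:
    K = S * (H - FA) / (1 - FA), where S is the display set size,
    H the hit rate and FA the false-alarm rate."""
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

# Example: 6-item display, 80% hits, 20% false alarms.
print(round(pashlers_k(6, 0.80, 0.20), 3))  # 4.5
```

The denominator corrects the hit rate for guessing, which is what distinguishes Pashler's K from the simpler Cowan's K = S * (H - FA).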
... Concerning humans, different studies suggest the existence of a preferential link between LSF information and the emotional system, particularly threat detection. This plausible preferential link was obtained on the basis of neuroimaging (Morris et al. 1999; Pourtois et al. 2005; Vuilleumier et al. 2003), neural-network modeling (Mermillod, in press a; Mermillod et al. 2009), and behavioral experiments (Bocanegra & Zeelenberg, 2009; Holmes et al. 2005; Mermillod et al., in press b). However, whereas these studies hint at a preferential link between LSF visual information and emotional processes, possibly occurring at the level of the amygdala, they do not constitute formal evidence for the subcortical pathway assumed in the model. ...
Article
Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expression of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in neurosciences in order to provide coherence to the extant and future research on this topic. The roles of several of the brain's reward systems, and the amygdala, somatosensory cortices, and motor centers are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze are particularly useful in guiding interpretation of relevant findings from neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expression of emotion.
Article
The set of 30 stimulating commentaries on our target article helps to define the areas of our initial position that should be reiterated or else made clearer and, more importantly, the ways in which moderators of and extensions to the SIMS can be imagined. In our response, we divide the areas of discussion into (1) a clarification of our meaning of “functional,” (2) a consideration of our proposed categories of smiles, (3) a reminder about the role of top-down processes in the interpretation of smile meaning in SIMS, (4) an evaluation of the role of eye contact in the interpretation of facial expression of emotion, and (5) an assessment of the possible moderators of the core SIMS model. We end with an appreciation of the proposed extensions to the model, and note that the future of research on the problem of the smile appears to us to be assured.
... It has been well documented that salient emotional stimuli guided and directed attentional orienting (Armony & Dolan, 2002;Fox et al., 2001;Fox et al., 2002;Georgiou et al., 2005;Holmes et al., 2005;Koster et al., 2006;Mogg et al., 2000;Mogg & Bradley, 1999). Utilizing emotionally threatening stimuli (e.g., fearful or angry faces) as cues, researchers have shown that people were faster to detect targets replacing threatening cues in valid trials (attentional engagement) and were slower to disengage attention away from them in invalid trials (attentional disengagement) compared to neutral cues (Park et al., 2013;Pourtois & Vuilleumier, 2006; see Vuilleumier & Brosch, 2009, for a review). ...
Article
Full-text available
The current experiment examined the effect of fair-related stimuli on attentional orienting and the role of cardiac vagal tone indexed by heart rate variability (HRV). Neutral faces were associated with fair and unfair offers in the Ultimatum Game (UG). After the UG, participants performed the spatial cueing task in which targets were preceded by face cues that made fair or unfair offers in the UG. Participants showed faster attentional engagement to fair-related stimuli, which was more pronounced in individuals with lower resting HRV—indexing reduced cardiac vagal tone. Also, people showed delayed attentional disengagement from fair-related stimuli, which was not correlated with HRV. The current research provided initial evidence that fair-related social information influences spatial attention, which is associated with cardiac vagal tone. These results provide further evidence that the difficulty in attentional control associated with reduced cardiac vagal tone may extend to a broader social and moral context.
... Alternately, at least some behavioral studies based on visual search tasks have documented faster reaction times (RTs) in identifying angry faces within arrays of distractor faces compared to finding happy or neutral faces among distractors (Hansen & Hansen, 1988;Vuilleumier, 2002). Other behavior researchers have tested the "anger superiority" hypothesis using visual dot-probe tasks; some supporting evidence has indicated RTs to probes in locations previously occupied by a threatening cue are typically faster than RTs to probes in locations occupied by a neutral or nonthreatening alternative (Armony & Dolan, 2002;Holmes et al., 2005;Mogg & Bradley, 1999;Pourtois et al., 2004), especially among the highly anxious (Bar-Haim et al., 2007). ...
Article
Full-text available
Numerous investigators have tested contentions that angry faces capture early attention more completely than happy faces do in the context of other faces. However, syntheses of studies on early event‐related potentials related to the anger superiority hypothesis have yet to be conducted, particularly in relation to the N200 posterior‐contralateral (N2pc) component which provides a reliable electrophysiological index related to orienting of attention suitable for testing this hypothesis. Fifteen samples (N = 534) from 13 studies featuring the assessment of N2pc amplitudes during exposure to angry‐neutral and/or happy‐neutral facial expression arrays were included for meta‐analysis. Moderating effects of study design features and sample characteristics on effect size variability were also assessed. N2pc amplitudes elicited by affectively valenced expressions (angry and happy) were significantly more pronounced than those elicited by neutral expressions. However, the mean effect size difference between angry and happy expressions was ns. N2pc effect sizes were moderated by sample age, number of trials, and nature of facial images used (schematic vs. real) with larger effect sizes observed when samples were comparatively younger, more task trials were presented and schematic face arrays were used. N2pc results did not support anger superiority hypothesis. Instead, attentional resources allocated to angry versus happy facial expressions were similar in early stages of processing. As such, possible adaptive advantages of biases in orienting toward both anger and happy expressions warrant consideration in revisions of related theory.
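The pooling step in a meta-analysis like this one can be sketched as an inverse-variance-weighted average of per-study effect sizes (a fixed-effect sketch with invented numbers; the actual synthesis may have used a random-effects model):

```python
import math

def fixed_effect_meta(effects, variances):
    """Fixed-effect pooled effect size and standard error:
    weights w_i = 1 / v_i, pooled = sum(w_i * d_i) / sum(w_i),
    SE = sqrt(1 / sum(w_i))."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-study N2pc effect sizes (d) and their sampling variances.
d, se = fixed_effect_meta([0.30, 0.45, 0.20], [0.04, 0.09, 0.02])
z = d / se  # z statistic testing whether the pooled effect differs from zero
```

Moderator analyses like those reported above (sample age, trial counts, schematic vs. real faces) then test whether these weighted effects vary systematically across subgroups of studies.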
... Also, fearful faces, as opposed to neutral or joyful faces, facilitate the orientation of attention onto their location (Brosch, Pourtois, Sander, & Vuilleumier, 2011;Cisler & Koster, 2010;Vogt, De Houwer, Koster, Van Damme, & Crombez, 2008). However, the capture of spatial attention by fearful faces is rapid but fleeting (Holmes, Green, & Vuilleumier, 2005;Torrence, Wylie, & Carlson, 2017), as opposed to joyful faces that hold it for longer (Fox, Russo, & Dutton, 2002;Torrence et al., 2017;Williams, Moss, Bradshaw, & Mattingley, 2005). In an array of faces, a fearful face is rapidly processed, but then attention seems to oscillate in avoidance of the face (Becker & Detweiler-Bedell, 2009); such deployment of attention, from early capture to successive redirection, would be functional to locate the actual source of threat. ...
Article
Peripersonal space (PPS) refers to the space surrounding the body. PPS is characterised by distinctive patterns of multisensory integration and sensory-motor interaction. In addition, facial expressions have been shown to modulate PPS representation. In this study we tested whether fearful faces lead to a different distribution of spatial attention, compared to neutral and joyful faces. Participants responded to tactile stimuli on the cheeks, while watching looming neutral, joyful (Experiment 1) or fearful (Experiment 2) faces of an avatar, appearing in far or near space. To probe spatial attention, when the tactile stimulus was delivered, a static ball briefly appeared central or peripheral in participant's vision, respectively ≈1° or ≈10° to the left or right of the face. With neutral and joyful faces, simple reactions to tactile stimuli were facilitated in near rather than in far space, replicating classic PPS effects, and in the presence of central rather than peripheral ball, suggesting that attention may be focused in the immediate surrounding of the face. However, when the face was fearful, response to tactile stimuli was modulated not only by the distance of the face from the participant, but also by the position of the ball. Specifically, in near space only, response to tactile stimuli was additionally facilitated by the peripheral compared to the central ball. These results suggest that as fearful faces come closer to the body, they promote a redirection of attention towards the periphery. Given the sensory-motor functions of PPS, this fear-evoked redirection of attention would enhance the defensive function of PPS specifically when it is most needed, i.e. when the source of threat is nearby, but its location has not yet been identified.
... Holmes et al., 2005). N170 latency has been reported to increase as the intensity of fearful expressions increases (Leppanen et al., 2007a), with no effect of intensity of fearful eye whites alone (Feng et al., 2009) or the intensity of happy expressions (Leppanen et al., 2007a). ...
Thesis
Facial emotion recognition and theory of mind abilities are important aspects of social cognition. Genes within the X chromosome may influence these abilities as males show increased vulnerability to impaired social cognition compared to females. An influence of a single nucleotide polymorphism (SNP), rs7055196 (found within the X-linked EFHC2 gene), on facial fear recognition abilities has recently been reported in Turner Syndrome. This thesis explores the influence of SNP rs7055196 on aspects of social cognition in healthy males. Males possessing the G allele showed poorer facial fear recognition accuracy compared to males possessing the A allele. This group difference in fear recognition accuracy was not due to a difference in gaze fixations made to the eye or mouth regions. Males possessing the G allele also showed smaller N170 amplitudes in response to faces compared to males possessing the A allele. These results suggest males possessing the A allele may use a more holistic / configural face processing mechanism compared to males possessing the G allele, and this difference may account for the difference in fear recognition accuracy between the groups. Males possessing the G allele were also less accurate at inferring others’ mental states during the Reading the Mind in the Eyes task, and showed reduced activity in the right superior temporal gyrus, left inferior parietal lobule and left cingulate gyrus during this task compared to males possessing the A allele. SNP rs7055196 may therefore also influence theory of mind abilities, with males possessing the A allele showing better theory of mind than those possessing the G allele. This result may reflect higher empathising abilities in the males possessing the A allele. These results suggest an influence of SNP rs7055196 on social cognitive abilities in males. This may help to explain the sex difference in vulnerability to impaired social cognition.
... This shows that the intensity of the facial cues that are relevant to trustworthiness judgments affects our perceptual discrimination abilities, and suggests that faces including more intense social cues enjoy a processing advantage over those including less intense cues. This finding is congruent with the widely reported attentional and processing advantage of angry (e.g., LoBue, 2009) or fearful/threatening (e.g., Holmes, Green, & Vuilleumier, 2005) faces over neutral ones. Unlike these earlier findings, though, the processing advantage in our data was not restricted to faces with negative valence, as it extended to extremely trustworthy, as well as untrustworthy, faces. ...
Article
One of the most important sources of social information is the human face, on whose appearance we easily form social judgments: Adults tend to attribute a certain personality to a stranger based on minimal facial cues, and after a short exposure time. Previous studies shed light on the cognitive and neural mechanisms underlying the ability to discriminate facial properties conveying social signals, but the underlying processes supporting individual differences remain poorly understood. In the current study, we explored whether differences in sensitivity to facial cues to trustworthiness and in representing such cues in a multidimensional space are associated with individual variability in social attitude, as measured by the extraversion/introversion dimension. Participants performed a task where they assessed the similarity between faces that varied in the level of trustworthiness, and multidimensional scaling analyses were performed to describe perceptual similarity in a multidimensional representational space. Extraversion scores impacted RTs, but not accuracy or face representation, making less extraverted individuals slower in detecting similarity of faces based on physical cues to trustworthiness. These findings are discussed from an ontogenetic perspective, where reduced social motivation might constrain perceptual attunement to social cues from faces, without affecting the structuring of the face representational space.
... A third issue is that the stimulus presentation duration may not have been optimal for facilitating vigilance towards threatening faces. Although we presented stimuli at an SOA of 150 ms based on a review of the human literature (e.g., [18,26,27,29]), this duration may not have been optimal for chimpanzees. Vigilance towards threatening faces has been found at SOAs as short as 17 ms (i.e. ...
Article
Full-text available
Primates have evolved to rapidly detect and respond to danger in their environment. However, the mechanisms involved in attending to threatening stimuli are not fully understood. The dot-probe task is one of the most widely used experimental paradigms to investigate these mechanisms in humans. However, to date, few studies have been conducted in non-human primates. The aim of this study was to investigate whether the dot-probe task can measure attentional biases towards threatening faces in chimpanzees. Eight adult chimpanzees participated in a series of touch screen dot-probe tasks. We predicted faster response times towards chimpanzee threatening faces relative to neutral faces and faster response times towards faces of high threat intensity (scream) than low threat intensity (bared teeth). Contrary to prediction, response times for chimpanzee threatening faces relative to neutral faces did not differ. In addition, we found no difference in response times for faces of high and low threat intensity. In conclusion, we found no evidence that the touch screen dot-probe task can measure attentional biases specifically towards threatening faces in our chimpanzees. Methodological limitations of using the task to measure emotional attention in human and non-human primates, including stimulus threat intensity, emotional state, stimulus presentation duration and manual responding are discussed.
... This finding is consistent with Zhu and Liu (2014), who found that negative facial expressions were associated with significantly larger LPP amplitudes than were positive and neutral expressions. Previous studies (Holmes, Green & Vuilleumier, 2005; Whalen et al., 1998; Bradley, Mogg & Lee, 1997) reported a bias in individual attention allocation when recognizing different facial expressions. ...
Preprint
Full-text available
Faces play important roles in the social lives of humans. In addition to real faces, people also encounter numerous cartoon faces in daily life. These cartoon faces convey basic emotional states through facial expressions. Using a behavioral research methodology and event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as N170, vertex positive potential (VPP), and late positive potential (LPP) were used as dependent variables. The ERP results revealed that cartoon faces caused larger N170 and VPP amplitudes as well as a briefer N170 latency than did real faces; that real faces induced larger LPP amplitudes than did cartoon faces; and that angry faces induced larger LPP amplitudes than did happy faces. In addition, the results showed a significant difference in the brain regions associated with face processing as reflected in a right hemispheric advantage. The behavioral results showed that the reaction times for happy faces were shorter than those for angry faces; that females showed a higher facial expression recognition accuracy than did males; and that males showed a higher recognition accuracy for angry faces than happy faces. These results demonstrate differences in facial expression recognition and neurological processing between cartoon faces and real faces among adults. Cartoon faces showed a higher processing intensity and speed than real faces during the early processing stage. However, more attentional resources were allocated for real faces during the late processing stage.
... Therefore, it is not surprising that humans rapidly perceive emotion (in as little as 120-180 ms) in facial expressions (Eimer & Holmes, 2008;Prkachin, 2003;Stanners, Byrd, & Gabriel, 1985) even when they are not aware they are doing so (Dimberg, Thunberg, & Elmehed, 2000;Kiss & Eimer, 2008). Adults quickly detect an emotional face in a crowd (Becker, Anderson, Mortensen, Neufeld, & Neel, 2011;Hansen & Hansen, 1988;Pinkham, Griffin, Baron, Sasson, & Gur, 2010), and their responsiveness to angry or threat-related facial expressions is faster than that to other emotional faces (Hansen & Hansen, 1988;Holmes, Green, & Vuilleumier, 2005). Izard (2009) has proposed that basic emotions aid in the organization and motivation of rapid behavior in response to challenges in the environment, and research has found when viewing another's facial expression, adults' own emotional response is triggered very quickly (120 ms) (Eimer & Holmes, 2007;Tamietto et al., 2009;Vuilleumier & Pourtois, 2007). ...
... Much of this amygdalar input arrives via a relay in the pulvinar nucleus of the thalamus (Morris, Öhman, & Dolan, 1999). Several lines of evidence suggest that this might be the source of information about fearful facial expressions that initially reaches the amygdala (Holmes, Green, & Vuilleumier, 2005;Johnson, 2005;Méndez-Bértolo et al., 2016). ...
Article
The aim of this paper is to outline some of the parts of the brain to increase understanding of the aetiology of criminal behaviours. It goes without saying that any complete answer will encompass: evolutionary, genetic, biochemical, neuropsychological, and cognitive factors as well as social factors (familial and societal). Antisocial and social behaviours are underpinned by feeling, cognitions and actions, which are in turn, underpinned by the neurobiological actions in the brain. The daunting task of understanding the relations between brain function and offending is made potentially more tractable by the way in which the brain can be seen as being organised into discrete anatomical circuits, many of which have definable functions. The paper describes a number of these circuits in detail.
... Given that this effect did not replicate in the other experiments, it is most likely a spurious finding. Examination of the literature similarly suggests that biases at 100 ms are inconsistent, with some studies reporting a bias towards threat (Holmes, Green, & Vuilleumier, 2005; Cooper & Langton, 2006) and some studies showing a bias away from threat (Koster et al., 2005; Mogg et al., 1997). Our findings are, therefore, consistent with the literature (e.g., Bar-Haim et al., 2007) showing that attentional biases to threat are not consistently observed in non-anxious participants on the dot-probe task, even though neural measures do reveal the existence of biases (Eimer & Kiss, 2007; Grimshaw et al., 2014; Holmes, Bradley, Nielsen, & Mogg, 2009; Kappenman et al., 2014, 2015). ...
Article
Full-text available
In a dot-probe task, two cues – one emotional and one neutral – are followed by a probe in one of their locations. Faster responses to probes co-located with the emotional stimulus are taken as evidence of attentional bias. Several studies indicate that such attentional bias measures have poor reliability, even though ERP studies show that people reliably attend to the emotional stimulus. This inconsistency might arise because the emotional stimulus captures attention briefly (as indicated by ERP), but cues appear for long enough that attention can be redistributed before the probe onset, causing RT measures of bias to vary across trials. We tested this hypothesis by manipulating SOA (stimulus onset asynchrony between onset of the cues and onset of the probe) in a dot-probe task using angry and neutral faces. Across three experiments, the internal reliability of behavioural biases was significantly greater than zero when probes followed faces by 100 ms, but not when the SOA was 300, 500, or 900 ms. Thus, the initial capture of attention shows some level of consistency, but this diminishes quickly. Even at the shortest SOA internal reliability estimates were poor, and not sufficient to justify the use of the task as an index of individual differences in attentional bias.
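The internal reliability estimate referred to here is typically an odd/even split-half correlation of bias scores across participants, corrected with the Spearman-Brown formula. A simulated sketch (all numbers invented, and the availability of per-trial bias estimates is itself an assumption of the illustration):

```python
import numpy as np

def split_half_reliability(bias_by_trial):
    """Correlate odd- and even-trial mean bias across participants, then apply
    the Spearman-Brown correction r_sb = 2r / (1 + r) to estimate the
    reliability of the full-length measure."""
    odd = bias_by_trial[:, 0::2].mean(axis=1)
    even = bias_by_trial[:, 1::2].mean(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulate 100 participants x 40 trials: a stable individual bias (SD 10 ms)
# buried in large trial-to-trial noise (SD 80 ms), as in real dot-probe data.
rng = np.random.default_rng(1)
true_bias = rng.normal(20, 10, size=100)                        # per participant
per_trial = true_bias[:, None] + rng.normal(0, 80, size=(100, 40))
rel = split_half_reliability(per_trial)  # low despite a real underlying bias
```

The simulation illustrates the abstract's point: when trial-to-trial noise dwarfs stable individual differences, split-half reliability stays low even though a genuine bias exists at the group level.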
... Moreover, the relationship between cognitive bias and social anxiety is complex. It could be influenced by multiple factors, such as individual differences among participants, the involvement of distinct visual channels, and the presentation time of stimuli (Holmes et al., 2005; Cooper and Langton, 2006; Massar et al., 2011). Future research could examine gender differences in the relationship between attentional bias and social anxiety by employing various threatening stimuli (such as words and videos) at both short and long stimulus durations. ...
Article
Full-text available
There is some research showing that social anxiety is related to attentional bias to threat. However, other studies fail to find this relationship and propose that gender differences may play a role. The aim of this study was to investigate gender differences in the subcomponents of attentional bias to threat (hypervigilance and difficulty in disengaging) among children and adolescents with social anxiety. Overall, 181 youngsters aged between 10 and 14 participated in the current study. Images of disgusted faces were used as threat stimuli in an Exogenous Cueing Task, which was used to measure the subcomponents of attentional bias. Additionally, the Social Anxiety Scale for Children was used to measure social anxiety. The repeated measures ANOVA showed that male participants with high social anxiety showed difficulty in disengaging from threat, but this was not the case for female participants. Our results indicated that social anxiety is more related to attentional bias to threat among male children and adolescents than among females. These findings suggest that developing gender-specific treatments for social anxiety may improve treatment effects.
... There are a few reasons for this. Happiness signals infants to approach [29,30] and gestures may invite infants in, whereby fear signals a threat and something that should be monitored and avoided [22,31]. It may also be the case that gesturing is not needed when signaling fear because the face conveys enough information. ...
Article
Throughout the social referencing literature, mothers were used as emoters and trained to express prototypical expressions. The concern with using trained expressions is that this may not be how mothers naturally convey emotional information to their infants. Half of the mothers were trained to present prototypical vocal and emotional expressions of fear, happiness, and neutrality as they delivered a social referencing message to a toy and then allowed the infant time to interact with it. The other half were instructed to convey these emotions naturally to their infants. Untrained mothers used more affect and gestures when communicating compared to trained mothers. Older infants touched the toy most when hearing happiness and least when hearing fear, while younger infants did the opposite. Maternal training did not have an effect on infant interaction with the toys, which suggests that training may not be a necessary component of social referencing paradigms.
... Also, there have been mixed reports on whether negative emotions at LSF elicit significantly greater activations in the amygdala (see also Morawetz et al., 2011). Although the selective visual pathways associated with LSF and HSF information remain debatable, converging evidence has shown that LSF information is more relevant for the processing of emotional information (Holmes et al., 2005; Bar et al., 2006; Laeng et al., 2010; Bannerman et al., 2012). ...
Article
Full-text available
The current research examines whether trait anxiety is associated with negative interpretation bias when resolving valence ambiguity of surprised faces. To further isolate the neuro-cognitive mechanism, we presented angry, happy, and surprised faces at broad spatial frequency (BSF), high spatial frequency (HSF), and low spatial frequency (LSF) and asked participants to determine the valence of each face. High trait anxiety was associated with more negative interpretations of BSF (i.e., intact) surprised faces. However, the modulation of trait anxiety on the negative interpretation of surprised faces disappeared at HSF and LSF. The current study provides evidence that trait anxiety modulates negative interpretations of BSF surprised faces. However, the negative interpretation of LSF surprised faces appears to be a robust default response that occurs regardless of individual differences in trait anxiety.
... Corroborating this view, behavioural studies have shown preferential attentional effects to threat-relevant stimuli (e.g. fearful faces) at low spatial frequencies (Bocanegra & Zeelenberg, 2009;Holmes, Green, & Vuilleumier, 2005). Using speeded visual identification or classification tasks, previous studies have directly tested the low spatial frequency advantage in processing speed for threat-relevant stimuli. ...
Article
In the current research, we sought to examine the role of spatial frequency in the detection of threat using a speeded visual search paradigm. Participants searched for threat-relevant (snakes or spiders) or non-threat-relevant (frogs or cockroaches) targets in an array of neutral (flowers or mushrooms) distracters, and we measured search performance with images filtered to contain different levels (high and low) of spatial frequency information. The results replicate previous work demonstrating more rapid detection of threatening versus non-threatening stimuli [e.g. LoBue, V. & DeLoache, J. S. (2008). Detecting the snake in the grass: Attention to fear-relevant stimuli by adults and young children. Psychological Science, 19, 284-289. doi:10.1111/j.1467-9280.2008.02081.x]. Most importantly, the results suggest that low spatial frequency, or relatively coarse, visual information is sufficient for the rapid and accurate detection of threatening stimuli. Furthermore, the results also suggest that visual similarity between the stimuli used in the search tasks plays a significant role in speeded detection. The results are discussed in terms of the theoretical implications for the rapid detection of threat and the methodological implications of properly accounting for similarity between stimuli in visual search studies.
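The LSF/HSF manipulation used across these studies is typically implemented by filtering stimuli in the Fourier domain. As a hedged illustration only (not the filtering code of any study above), the sketch below splits a one-dimensional luminance profile into low- and high-frequency components via a discrete Fourier transform, with the cutoff expressed in cycles per image; the same principle extends to 2-D face images.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real-valued sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def split_spatial_frequencies(signal, cutoff_cycles):
    """Split a luminance profile into LSF and HSF parts at `cutoff_cycles`
    (cycles per image). Bin k of an N-point DFT corresponds to
    min(k, N - k) cycles across the image."""
    N = len(signal)
    X = dft(signal)
    low = [X[k] if min(k, N - k) <= cutoff_cycles else 0.0 for k in range(N)]
    high = [X[k] if min(k, N - k) > cutoff_cycles else 0.0 for k in range(N)]
    return idft(low), idft(high)

# Synthetic "image row": a coarse 2-cycle component plus a fine 20-cycle component.
N = 64
signal = [math.cos(2 * math.pi * 2 * n / N) + 0.5 * math.cos(2 * math.pi * 20 * n / N)
          for n in range(N)]
lsf, hsf = split_spatial_frequencies(signal, cutoff_cycles=8)
# lsf retains only the coarse 2-cycle component; hsf only the fine 20-cycle one.
```

With an 8-cycle cutoff, the coarse component survives the low-pass filter and the fine component the high-pass filter; published studies generally apply 2-D Gaussian or Butterworth filters to face photographs, but the frequency-domain logic is the same.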
... Experimentally, this has, for example, been demonstrated with the dot-probe task. Previous dot-probe studies have shown that emotional signals induce a bias in spatial attention, in that participants respond faster to a presented dot (the target, henceforth, "probe") when it appears at the location of a previously presented emotion compared with neutral stimulus (13)(14)(15)(16)(17). Although in humans a bias toward threatening compared with neutral stimuli is most commonly observed, some studies also report increased attention toward positive versus neutral stimuli (18)(19)(20)(21). ...
Article
Full-text available
Significance: Applying well-established psychological paradigms to our closest relatives represents a promising approach for providing insight into similarities and differences between humans and apes. Numerous articles have been published on the dot-probe task, showing that humans have an attentional bias toward emotions, especially when threatening. For social species like primates, efficiently responding to others’ emotions has great survival value. Observational research has shown that, compared with humans and chimpanzees, bonobos excel in regulating their own and others’ emotions, thereby preventing conflicts from escalating. The present study is an initial effort to apply a psychological test to the bonobo, and demonstrates that they, like humans, have heightened attention to emotional—compared with neutral—conspecifics, but are mostly drawn toward protective and affiliative emotions.
... Our choice to adopt a 500-ms stimulus presentation time was based on previous studies that traditionally have used this duration 21 . However, some studies 42,43 suggest that, in the dot-probe task used in a non-anxious population, an early attentional bias (hypervigilance) for threat stimuli is only visible with a shorter presentation time (e.g., in the order of 100 ms). Similar findings of reduced early attentional bias effects for threat stimuli with increasing cue presentation times have been reported with respect to the cuing task 44 . ...
Article
Full-text available
Previous studies on attentional bias towards emotional faces in individuals with autism spectrum disorders (ASD) provided mixed results. This might be due to differences in the examined attentional bias components and emotional expressions. This study assessed three bias components, hypervigilance, disengagement, and avoidance, using faces with a disgust, happy, or neutral expression in a dot-probe and external cuing task in 18 children with ASD and 21 typically developing (TD) children. The children with ASD initially displayed hypervigilance towards the disgust faces, followed by a general tendency to avoid looking back at the spatial location at which any face, irrespective of its emotional expression, had been presented. These results highlight the importance of differentiating between attentional bias components in research on ASD.
... Interestingly, cognitive evaluation of visual stimuli has been shown to be driven by a specific spatial frequency content [32,[39][40][41]. Recent behavioral and neuroimaging studies [31,[42][43][44][45][46][47][48][49][50] and computational data [51,52] suggest that emotional processing of visual stimuli may rely on the rapid processing of low spatial frequency information (LSF), especially in the case of threat. However, some behavioral studies report a relative flexibility in the use of spatial frequency information during the visual processing of emotional stimuli depending on a task's demands [1,32,41,53]. ...
Article
Full-text available
Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. 
Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands.
... Concerning humans, different studies suggest the existence of a preferential link between LSF information and the emotional system, particularly for threat detection. This plausible preferential link was obtained on the basis of neuroimaging (Morris et al. 1999; Pourtois et al. 2005; Vuilleumier et al. 2003), neural-network modeling (Mermillod, in press a; Mermillod et al. 2009), and behavioural experiments (Bocanegra & Zeelenberg, 2009; Holmes et al. 2005; Mermillod et al., in press b). However, whereas these studies hint at a preferential link between LSF visual information and emotional processes, possibly occurring at the level of the amygdala, they do not constitute formal evidence for the subcortical pathway assumed in the model. ...
... Several studies have reported larger N170 amplitudes when viewing negatively-valenced facial expressions such as fear and anger (e.g. Batty and Taylor, 2003; Leppänen et al., 2008; Pourtois et al., 2005; Stekelenburg and de Gelder, 2004), which has been interpreted as an innate attentional 'negativity bias' (Carretie et al., 2009; Holmes et al., 2005). The same controversy exists for emotional scenes, with some studies reporting a negativity bias for highly unpleasant threatening or fearful scenes in the time window of the N100. ...
Article
Full-text available
In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to emotional content of scenes. At 220–280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500–750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500–750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes activity changes were more pronounced over the whole viewing period. Taking into account all effects, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses, compared to emotional scenes.
Article
Peripersonal space (PPS) represents the region of space surrounding the body. A pivotal function of PPS is to coordinate defensive responses to threat. We have previously shown that a centrally-presented, looming fearful face, signalling a potential threat in one's surroundings, modulates spatial processing by promoting a redirection of sensory resources away from the face towards the periphery, where the threat may be expected - but only when the face is presented in near, rather than far, space. Here, we use electrophysiological measures to investigate the neural mechanism underlying this effect. Participants made simple responses to tactile stimuli delivered on the cheeks, while watching task-irrelevant neutral or fearful avatar faces looming towards them in either near or far space. Simultaneously with the tactile stimulation, a ball with a checkerboard pattern (probe) appeared to the left or right of the avatar face. Crucially, this probe could either be close to the avatar face, and thus more central in the participant's vision, or further away from the avatar face, and thus more peripheral in the participant's vision. Electroencephalography was continuously recorded. Behavioural results confirmed that in near space only, and for fearful relative to neutral faces, tactile processing was facilitated by the peripheral compared to the central probe. This behavioural effect was accompanied by a reduction of the mean N1 amplitude elicited by the peripheral probe for fearful relative to neutral faces. Moreover, the faster the participants responded to tactile stimuli with the peripheral probe, relative to the central, the smaller was their N1. Together, these results suggest that fearful faces intruding into PPS may increase the expectation of a visual event occurring in the periphery. This fear-induced effect would enhance the defensive function of PPS when it is most needed, i.e., when the source of threat is nearby but its location remains unknown.
Article
The human brain has evolved a multifaceted fear system, allowing threat detection to enable rapid adaptive responses crucial for survival. Although many cortical and subcortical brain areas are believed to be involved in the survival circuits detecting and responding to threat, the amygdala reportedly has a crucial role in the fear system. Here, we review evidence demonstrating that fearful faces, a specific category of salient stimuli indicating the presence of threat in the surroundings, are preferentially processed in the fear system and in the connected sensory cortices, even when they are presented outside of awareness or are irrelevant to the task. In the visual domain, we discuss evidence showing in hemianopic patients that fearful faces, via a subcortical colliculo-pulvinar-amygdala pathway, have privileged visual processing even in the absence of awareness and facilitate responses towards visual stimuli in the intact visual field. Moreover, evidence showing that somatosensory cortices prioritise fear-related signals, to the extent that tactile processing is enhanced in the presence of fearful faces, will also be reported. Finally, we will review evidence revealing that fearful faces have a pivotal role in modulating responses in peripersonal space, in line with the defensive functional definition of PPS.
Article
Full-text available
Faces convey an assortment of emotional information via low and high spatial frequencies (LSFs and HSFs). However, there is no consensus on the role of particular spatial frequency (SF) information during facial fear processing. Comparison across studies is hampered by the high variability in cut-off values for demarcating the SF spectrum and by differences in task demands. We investigated which SF information is minimally required to rapidly detect briefly presented fearful faces in an implicit and automatic manner, by sweeping through an entire SF range without constraints of predefined cut-offs for LSFs and HSFs. We combined fast periodic visual stimulation with electroencephalography. We presented neutral faces at 6 Hz, periodically interleaved every 5th image with a fearful face, allowing us to quantify an objective neural index of fear discrimination at exactly 1.2 Hz. We started from a stimulus containing either only very low or very high SFs and gradually increased the SF content by adding higher or lower SF information, respectively, to reach the full SF spectrum over the course of 70 seconds. We found that faces require at least SF information higher than 5.93 cycles per image (cpi) to implicitly differentiate fearful from neutral faces. However, exclusive HSF faces, even in a restricted SF range between 94.82 and 189.63 cpi already carry the critical information to extract the emotional expression of the faces.
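Spatial frequency cutoffs reported in cycles per image (cpi), as above, depend on viewing conditions once expressed in cycles per degree of visual angle. The sketch below shows this standard conversion; the stimulus size (10 cm) and viewing distance (57 cm) are hypothetical values chosen for illustration, not parameters taken from the study above.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus of width `size_cm`
    viewed from `distance_cm`."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def cpi_to_cpd(cycles_per_image, image_size_cm, distance_cm):
    """Convert a spatial frequency in cycles per image to cycles per degree."""
    return cycles_per_image / visual_angle_deg(image_size_cm, distance_cm)

# A hypothetical 10 cm face image viewed from 57 cm subtends ~10 degrees,
# so a 5.93 cpi cutoff corresponds to roughly 0.59 cycles per degree.
angle = visual_angle_deg(10, 57)
cutoff_cpd = cpi_to_cpd(5.93, 10, 57)
```

This conversion is why cpi-based cutoffs are not directly comparable across studies with different display geometries, one reason the demarcation of "low" and "high" spatial frequencies varies in the literature.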
Article
The ability to identify facial expressions rapidly and accurately is central to human evolution. Previous studies have demonstrated that this ability relies to a large extent on the magnocellular, rather than parvocellular, visual pathway, which is biased toward processing low spatial frequencies. Despite the generally consistent finding, no study to date has investigated the reliability of this effect over time. In the present study, 40 participants completed a facial emotion identification task (fearful, happy, or neutral faces) using facial images presented at three different spatial frequencies (low, high, or broad spatial frequency), at two time points separated by one year. Bayesian statistics revealed an advantage for the magnocellular pathway in processing facial expressions; however, no effect for time was found. Furthermore, participants' RT patterns of results were highly stable over time. Our replication, together with the consistency of our measurements within subjects, underscores the robustness of this effect. This capacity, therefore, may be considered in a trait-like manner, suggesting that individuals may possess various ability levels for processing facial expressions that can be captured in behavioral measurements.
Preprint
Full-text available
It has been argued that rapid visual processing of fearful facial expressions is driven by the fact that effective contrast is higher in these faces compared to other expressions, once the contrast sensitivity function is taken into account (Hedger, Garner, & Adams, 2015). This proposal has been upheld by data from image analyses, but is yet to be tested at the behavioural level. The present study conducts a traditional contrast sensitivity task with face images of various facial expressions. Findings show that visual contrast thresholds do not differ for different facial expressions. We re-conduct the analysis of faces’ effective contrast, using the procedure developed by Hedger, Garner, & Adams (2015), and show that the higher effective contrast in fearful face expressions relies on face images first being normalised for RMS contrast. When not normalised for RMS contrast, effective contrast in fear expressions is no different, or sometimes even lower, compared to other expressions. These findings are discussed in relation to the implications of contrast normalisation for the salience of face expressions in behavioural and neurophysiological experiments, and also the extent to which natural physical differences between facial stimuli are masked during stimulus standardisation and normalisation.
Article
Several lines of evidence suggest that angularity and curvilinearity are relied upon to infer the presence or absence of threat. This study examines whether angular shapes are more salient in threatening compared with nonthreatening emotionally neutral faces. The saliency of angular shapes was measured by the amount of local maxima in S(θ), a function that characterizes how the Fourier magnitude spectrum varies along specific orientations. The validity of this metric was tested and supported with images of threatening and nonthreatening real-world objects and abstract patterns that have predominantly angular or curvilinear features (Experiment 1). This metric was then applied to computer-generated faces that maximally correlate with threat (Experiment 2a) and to real faces that have been rated according to threat (Experiment 3). For computer-generated faces, angular shapes became increasingly salient as the threat level of the faces increased. For real faces, the saliency of angular shapes was not predictive of threat ratings after controlling for other well-established threat cues, however, other facial features related to angularity (e.g., brow steepness) and curvilinearity (e.g., round eyes) were significant predictors. The results offer preliminary support for angularity as a threat cue for emotionally neutral faces.
Article
This study investigated the characteristics of two distinct mechanisms of attention - stimulus enhancement and stimulus suppression - using an event-related potential (ERP) approach. Across three experiments, participants viewed sparse visual search arrays containing one target and one distractor. The main results of Experiments 1 and 2 revealed that whereas neural signals for stimuli that are not inherently salient could be directly suppressed without prior attentional enhancement, this was not the case for stimuli with motivational relevance (human faces). Experiment 3 showed that as task difficulty increased, so did the need for suppression of distractor stimuli. It also showed the preferential attentional enhancement of angry over neutral distractor faces, but only under conditions of high task difficulty, suggesting that the effects of distractor valence on attention are greatest when there are fewer available resources for distractor processing. The implications of these findings are considered in relation to contemporary theories of attention.
Article
Full-text available
The prompt recognition of pleasant and unpleasant odors is a crucial regulatory and adaptive need of humans. Reactive answers to unpleasant odors ensure survival in many threatening situations. Notably, although humans typically react to certain odors by modulating their distance from the olfactory source, the effect of odor pleasantness on the orienting of visuospatial attention is still unknown. To address this issue, we first trained participants to associate visual shapes with pleasant and unpleasant odors, and then we assessed the impact of this association on a visuospatial task. Results showed that the use of trained shapes as flankers modulates performance in a line bisection task. Specifically, it was found that the estimated midpoint was shifted away from the visual shape associated with the unpleasant odor, whereas it was moved toward the shape associated with the pleasant odor. This finding demonstrates that odor pleasantness selectively shifts human attention in the surrounding space.
Thesis
Full-text available
Facial expressions of fear convey important social and environmental information. Under natural conditions, fearful faces appear mainly in our peripheral visual field. However, the brain mechanisms underlying the perception of fearful facial expressions in the periphery remain largely unknown. Through behavioural studies and magnetoencephalographic and intracranial recordings, we demonstrated that the perception of fearful facial expressions is efficient in the far periphery. Perceiving fear in the periphery generates a rapid response in the amygdala and frontal cortex, as well as a later response in the occipital and ventral temporal visual areas. Attentional control can inhibit the early response to fearful expressions, but can also enhance the later posterior activity linked to face perception. Our results show not only that the networks involved in fear perception are adapted to peripheral vision, but they also put forward a new way of investigating the mechanisms of facial expression processing, which may lead to a better understanding of how social messages are processed in more ecological situations.
Article
The human perceptual system operates expedient processing within the early visual system. Low spatial frequency information is processed rapidly through the magnocellular layers, whereas high spatial frequency information is conveyed more slowly by the parvocellular layers. The purpose of the present paper is to assess whether low spatial frequency information elicits better emotional facial expression recognition in a classification task, relative to high spatial frequency and broad spatial frequency visual stimuli. At the behavioural level, however, in support of the so-called coarse-to-fine bias (Parker, Lishman, & Hughes, 1997; Schyns & Oliva, 1994, 1997) obtained with non-emotional scenes, this perceptual bias may act in favour of high spatial frequency information beyond 100 ms of visual presentation. Thus, these results point out some limits of recent studies from psychology and neuroimaging experiments supporting an automatic reflex instantiated by LeDoux's subcortical pathway beyond 100 ms.
Article
Facial expression of fear is an important vector of social and environmental information. In natural conditions, the frightened faces appear mainly in our peripheral visual field. However, the brain mechanisms underlying perception of fear in the periphery remain largely unknown. We have demonstrated, through behavioral, magnetoencephalographic and intracranial studies that the perception of fear facial expression is efficient in large peripheral visual field. Fear perception in the periphery produces an early response in the amygdala and the frontal cortex, and a later response in the occipital and infero-temporal visual areas. Attentional control is able to inhibit the early response to fear expression and to increase the later temporo-occipital activities linked to face perception. Our results show that networks involved in fear perception are adapted to the peripheral vision. Moreover, they validate a new form of investigation of facial expression processing, which may lead to a better understanding of how we process social messages in more ecological situations.
Article
Full-text available
The question of whether judgements of facial expression show the typical pattern of categorical perception was examined using three sets of 11 photographs, each constituting an 11-step continuum extending between two extreme prototypical exemplars: angry-sad, happy-sad, and angry-afraid, respectively. For each continuum, intermediate exemplars were created using a morphing procedure. Subjects first identified all faces in each continuum in terms of the extreme expressions, and then performed an ABX discrimination task on pairs of faces two steps (Experiments 1 and 2) or three steps (Experiment 3) apart. The classical categorical perception prediction that discrimination performance must peak around the point on the continuum at which identification reaches 50% was tested not on group means, as in earlier studies, but on a subject-by-subject basis. It was supported by the results for both adults (Experiment 1) and 9- to 10-year-old children (Experiment 3). For adults, two noncategorical interpretations of the main finding were discarded by showing that the result was not replicated with the same material presented upside down (Experiment 2).
Article
Full-text available
If face images are degraded by block averaging, there is a nonlinear decline in recognition accuracy as block size increases, suggesting that identification requires a critical minimum range of object spatial frequencies. The identification of faces was measured with equivalent Fourier low-pass filtering and block averaging preserving the same information and with high-pass transformations. In Experiment 1, accuracy declined and response time increased in a significant nonlinear manner in all cases as the spatial-frequency range was reduced. However, it did so at a faster rate for the quantized and high-passed images. A second experiment controlled for the differences in the contrast of the high-pass faces and found a reduced but significant and nonlinear decline in performance as the spatial-frequency range was reduced. These data suggest that face identification is preferentially supported by a band of spatial frequencies of approximately 8-16 cycles per face; contrast or line-based explanations were found to be inadequate. The data are discussed in terms of current models of face identification.
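The two degradations compared above, Fourier low-pass filtering and block averaging, can be sketched for any greyscale image array as follows. This is an illustrative reconstruction, not the authors' exact procedure; the circular frequency cutoff and the block size are arbitrary assumed parameters:

```python
import numpy as np

def fft_low_pass(img, cutoff):
    """Zero out all spatial frequencies above `cutoff` (in cycles per image)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Radial distance of each Fourier coefficient from the DC component.
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    f[dist > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def block_average(img, block):
    """Replace each block x block tile with its mean grey level (pixelation)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out
```

Unlike the smooth Fourier filter, block averaging introduces sharp tile edges, i.e. spurious high-frequency energy, which is one way to understand why the quantised images in the study degraded performance faster than equivalent low-pass filtering.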
Article
Full-text available
Are categorization and visual processing independent, with categorization operating late, on an already perceived input, or are they intertwined, with the act of categorization flexibly changing (i.e. cognitively penetrating) the early perception of the stimulus? We examined this issue in three experiments by applying different categorization tasks (gender, expressive or not, which expression and identity) to identical face stimuli. Stimuli were hybrids: they combined a man or a woman with a particular expression at a coarse spatial scale with a face of the opposite gender with a different expression at the fine spatial scale. Results suggested that the categorization task changes the spatial scales preferentially used and perceived for rapid recognition. A perceptual set effect is shown whereby the scale preference of an important categorization (e.g. identity) transfers to resolve other face categorizations (e.g. expressive or not, which expression). Together, the results suggest that categorization can be closely bound to perception.
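A hybrid stimulus of the kind described, coarse spatial scales from one face combined with fine scales from another, can be sketched by splitting each image into low-pass and high-pass residual components. The Gaussian frequency-domain filter and the `sigma` value here are illustrative assumptions, not the authors' reported parameters:

```python
import numpy as np

def gaussian_low_pass(img, sigma):
    """Attenuate high spatial frequencies with a Gaussian mask in the Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    mask = np.exp(-dist2 / (2.0 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def hybrid(coarse_src, fine_src, sigma=8.0):
    """Combine the coarse scales of one image with the fine scales of another."""
    low = gaussian_low_pass(coarse_src, sigma)
    # High-pass residual: whatever the low-pass filter removed from fine_src.
    high = fine_src - gaussian_low_pass(fine_src, sigma)
    return low + high
```

Viewed up close, such a hybrid is dominated by `fine_src`; from a distance (or in brief presentations), by `coarse_src`, which is what lets the categorization task select between the two scales.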
Article
Full-text available
The rapid detection of facial expressions of anger or threat has obvious adaptive value. In this study, we examined the efficiency of facial processing by means of a visual search task. Participants searched displays of schematic faces and were required to determine whether the faces displayed were all the same or whether one was different. Four main results were found: (1) When displays contained the same faces, people were slower in detecting the absence of a discrepant face when the faces displayed angry (or sad/angry) rather than happy expressions. (2) When displays contained a discrepant face, people were faster in detecting this when the discrepant face displayed an angry rather than a happy expression. (3) Neither of these patterns for same and different displays was apparent when face displays were inverted, or when just the mouth was presented in isolation. (4) The search slopes for angry targets were significantly lower than for happy targets. These results suggest that detection of angry facial expressions is fast and efficient, although it does not "pop out" in the traditional sense.
Article
Full-text available
Localized amygdalar lesions in humans produce deficits in the recognition of fearful facial expressions. We used functional neuroimaging to test two hypotheses: (i) that the amygdala and some of its functionally connected structures mediate specific neural responses to fearful expressions; (ii) that the early visual processing of emotional faces can be influenced by amygdalar activity. Normal subjects were scanned using PET while they performed a gender discrimination task involving static grey-scale images of faces expressing varying degrees of fear or happiness. In support of the first hypothesis, enhanced activity in the left amygdala, left pulvinar, left anterior insula and bilateral anterior cingulate gyri was observed during the processing of fearful faces. Evidence consistent with the second hypothesis was obtained by a demonstration that amygdalar responses predict expression-specific neural activity in extrastriate cortex.
Article
Full-text available
Recent findings demonstrate that faces with an emotional expression tend to attract attention more than neutral faces, especially when having some threat-related value (anger or fear). These findings suggest that discrimination of emotional cues in faces can at least partly be extracted at preattentive or unconscious stages of processing, and then serve to enhance awareness and behavioural responses toward emotionally relevant stimuli. Functional neuroimaging results have begun to delineate brain regions whose response to threat-related expressions is independent of voluntary attention (e.g. amygdala and orbitofrontal cortex), and other regions whose response occurs only with attention (e.g. superior temporal and anterior cingulate cortex). Moreover, visual responses in the fusiform cortex are enhanced for emotional faces, consistent with their greater perceptual saliency. Recent data from event-related evoked potentials and neurophysiology also suggest that rapid processing of emotional information may not only occur in parallel to, but promote a more detailed perceptual analysis of, sensory inputs and thus bias competition for attention toward the representation of emotionally salient stimuli.
Article
Full-text available
One of the functions of automatic stimulus evaluation is to direct attention toward events that may have undesirable consequences for the perceiver's well-being. To test whether attentional resources are automatically directed away from an attended task to undesirable stimuli, Ss named the colors in which desirable and undesirable traits (e.g., honest, sadistic) appeared. Across 3 experiments, color-naming latencies were consistently longer for undesirable traits but did not differ within the desirable and undesirable categories. In Experiment 2, Ss also showed more incidental learning for undesirable traits, as predicted by the automatic vigilance (but not a perceptual defense) hypothesis. In Experiment 3, a diagnosticity (or base-rate) explanation of the vigilance effect was ruled out. The implications for deliberate processing in person perception and stereotyping are discussed.
Article
Full-text available
The amygdala is thought to play a crucial role in emotional and social behaviour. Animal studies implicate the amygdala in both fear conditioning and face perception. In humans, lesions of the amygdala can lead to selective deficits in the recognition of fearful facial expressions and impaired fear conditioning, and direct electrical stimulation evokes fearful emotional responses. Here we report direct in vivo evidence of a differential neural response in the human amygdala to facial expressions of fear and happiness. Positron-emission tomography (PET) measures of neural activity were acquired while subjects viewed photographs of fearful or happy faces, varying systematically in emotional intensity. The neuronal response in the left amygdala was significantly greater to fearful as opposed to happy expressions. Furthermore, this response showed a significant interaction with the intensity of emotion (increasing with increasing fearfulness, decreasing with increasing happiness). The findings provide direct evidence that the human amygdala is engaged in processing the emotional salience of faces, with a specificity of response to fearful facial expressions.
Article
Full-text available
The paradigm of the fuzzy logical model of perception (FLMP) is extended to the domain of perception and recognition of facial affect. Two experiments were performed using a highly realistic computer-generated face varying on 2 features of facial affect. Each experiment used the same expanded factorial design, with 5 levels of brow deflection crossed with 5 levels of mouth deflection, as well as their corresponding half-face conditions, for a total stimulus set of 35 faces. Experiment 1 used a 2-alternative, forced-choice paradigm (either happy or angry), whereas Experiment 2 used 9 rating steps from happy to angry. Results indicate that participants evaluated and integrated information from both features to perceive affective expressions. Both choice probabilities and ratings showed that the influence of 1 feature was greater to the extent that the other feature was ambiguous. The FLMP fit the judgments from both experiments significantly better than an additive model. Our results question previous claims of categorical and holistic perception of affect.
Article
This study compared effects of inversion on perceptual processing of faces with distorted components (eyes and mouths) and faces distorted by altering spatial relations between components. In a rating task, inversion reduced the rated grotesqueness of spatially distorted faces but not that of faces with altered components. In a comparison task, pairs of faces were shown side by side; participants judged whether they were identical or different. Inversion greatly reduced the rate at which participants responded within 3 s to pairs that differed spatially, but not to pairs that differed componentially. Also, latencies for detecting spatial differences were lengthened by inversion more than latencies for detecting componential differences. Results support the hypothesis that inversion impairs encoding of spatial-relational information more than, or instead of, componential information, depending on the task.
Article
Three studies investigated whether individuals preferentially allocate attention to the spatial location of threatening faces presented outside awareness. Pairs of face stimuli were briefly displayed and masked in a modified version of the dot-probe task. Each face pair consisted of an emotional (threat or happy) and neutral face. The hypothesis that preattentive processing of threat results in attention being oriented towards its location was supported in Experiments 1 and 3. In both studies, this effect was most apparent in the left visual field, suggestive of right hemisphere involvement. However, in Experiment 2 where awareness of the faces was less restricted (i.e. marginal threshold conditions), preattentive capture of attention by threat was not evident. There was evidence from Experiment 3 that the tendency to orient attention towards masked threat faces was greater in high than low trait anxious individuals.
Article
The study investigated the time course of attentional biases for emotional facial expressions in high and low trait anxious individuals. Threat, happy, and neutral face stimuli were presented at two exposure durations, 500 and 1250msec, in a forced-choice reaction time (RT) version of the dot probe task. There was clear evidence of an attentional bias favouring threatening facial expressions, but not emotional faces in general, in high trait anxiety. Increased dysphoria was associated with a tendency to avoid happy faces. No evidence was found of avoidance following initial vigilance for threat in this nonclinical sample. Methodological and theoretical implications of the results are discussed.
Article
Subjects performed an idiographic, computerised version of the modified Stroop colour-naming task after having undergone a film-induced mood manipulation designed to produce either anxiety, elation, or a neutral mood. The Stroop stimuli were words related either to the subject's positive current concerns (e.g. goals, interests), to the subject's negative current concerns (e.g. personal worries), or to neither. The results indicated that words strongly related to subject's positive as well as to negative current concerns produced significantly more Stroop interference than did words unrelated or weakly related to their current concerns. Although the films strongly influenced the subjects' moods in predicted directions initially, mood changes were largely not maintained throughout the experiment. Thus, it is not surprising that no significant interactions with word type were found. These results indicate that the “emotional Stroop effect” occurs in normal subjects as well as in anxious patients, and occurs with positive as well as with negative material of strong personal relevance.
Article
Subjects were required to detect either an angry or a happy target face in a stimulus array of 12 photographs. It was found with neutral distractor faces that those high in trait anxiety detected angry faces faster than did low trait-anxious subjects, but the two groups did not differ in their speed of detection of happy targets. In addition, high trait-anxious subjects detected happy target faces slower than low trait-anxious subjects when the distractor faces were angry. Comparable findings were obtained whether or not there was anxious mood induction. It was concluded that high trait-anxious individuals have facilitated detection and processing of environmental threat relative to low trait-anxious subjects, which enhance performance when the target is threatening, but which impair performance when the distractors are threatening.
Article
To date little evidence is available as to how emotional facial expression is decoded, specifically whether a bottom-up (data-driven) or a top-down (schema-driven) approach is more appropriate in explaining the decoding of emotions from facial expression. A study is reported (conducted with N = 20 subjects each in Germany and Italy), in which decoders judged emotions from photographs of facial expressions. Stimuli represented a selection of photographs depicting both single muscular movements (action units) in an otherwise neutral face, and combinations of such action units. Results indicate that the meaning of action units changes often with context; only a few single action units transmit specific emotional meaning, which they retain when presented in context. The results are replicated to a large degree across decoder samples in both nations, implying fundamental mechanisms of emotion decoding.
Article
Neuroimaging studies have shown differential amygdala responses to masked ("unseen") emotional stimuli. How visual signals related to such unseen stimuli access the amygdala is unknown. A possible pathway, involving the superior colliculus and pulvinar, is suggested by observations of patients with striate cortex lesions who show preserved abilities to localize and discriminate visual stimuli that are not consciously perceived ("blindsight"). We used measures of right amygdala neural activity acquired from volunteer subjects viewing masked fear-conditioned faces to determine whether a colliculo-pulvinar pathway was engaged during processing of these unseen target stimuli. Increased connectivity between right amygdala, pulvinar, and superior colliculus was evident when fear-conditioned faces were unseen rather than seen. Right amygdala connectivity with fusiform and orbitofrontal cortices decreased in the same condition. By contrast, the left amygdala, whose activity did not discriminate seen and unseen fear-conditioned targets, showed no masking-dependent changes in connectivity with superior colliculus or pulvinar. These results suggest that a subcortical pathway to the right amygdala, via midbrain and thalamus, provides a route for processing behaviorally relevant unseen visual events in parallel to a cortical route necessary for conscious identification.
Article
Two experiments evaluated differential predictions from two cognitive formulations of anxiety. According to one view, attentional biases for threat reflect vulnerability to anxiety; and as threat inputs increase, high trait anxious individuals should become more vigilant, and low trait individuals more avoidant, of threat (Williams, Watts, MacLeod, & Mathews, 1988, 1997). However, according to a "cognitive-motivational" view, trait anxiety influences the appraisal of stimulus threat value, rather than the direction of attentional bias, and both high and low trait anxious individuals should exhibit greater vigilance for high rather than mild threat stimuli (Mogg & Bradley, 1998). To test these predictions, two experiments examined the effect of manipulating stimulus threat value on the direction of attentional bias. The stimuli included high threat and mild threat pictorial scenes presented in a probe detection task. Results from both studies indicated a significant main effect of stimulus threat value on attentional bias, as there was increased vigilance or reduced avoidance of threat as threat value increased. This effect was found even within low trait anxious individuals, consistent with the "cognitive-motivational" view. Theoretical and clinical implications are discussed.
Article
[This book] is written for students of cognitive psychology, and also for clinicians and researchers in the areas of cognition, stress and emotional disorders. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Six images of human faces were quantised into isoluminant square-shaped pixels (16 grey levels) at eight different spatial levels of quantisation. The subjects had to identify the faces, which were presented with different exposure durations (from 1 to 200 msec) and under one of two brightness conditions (variable brightness in Experiment 1 or isobrightness in Experiment 2). All finer quantisation levels led to better identification than the coarsest quantisation level (15 pixels per face in the horizontal dimension) at all exposure durations. The observation of an abrupt decrease in identification efficiency on moving from 18 or more pixels per face to 15 pixels per face, and the approximate equality in identification efficiency within a broad range of quantisation levels above 18 pixels per face, pose some problems for existing theories of face recognition. The implications of these findings for prototype-related, autocorrelation and microgenetic accounts of face and pattern processing are discussed.
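The quantisation manipulation described, pixelating a face into square blocks and restricting it to 16 grey levels, can be sketched as below. This is an illustrative reconstruction under assumed parameters (intensities in [0, 1], arbitrary block size), not the study's exact stimuli:

```python
import numpy as np

def quantise(img, levels=16, block=4):
    """Pixelate into block x block tiles, then snap each tile's mean to a
    discrete grey scale with `levels` steps. Assumes intensities in [0, 1]."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    # Round continuous tile means onto `levels` evenly spaced grey values.
    return np.round(out * (levels - 1)) / (levels - 1)
```

Varying `block` while holding `levels` fixed corresponds to the study's eight spatial levels of quantisation: larger blocks leave fewer "pixels per face" and remove progressively more of the mid-frequency band that supports identification.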
Article
The spatial frequencies most relied upon by subjects in a recall task for face recognition were found to lie in the midfrequency range. A linear systems analysis model cannot account for these masking data in terms of retinocortical processing limitations alone. In order to account for the greater disruption of the face recognition task by masks in the range of 2.2 cycles/deg, the existence of unequal filtering of spatial frequency components must be recognized. This unequal filtering may occur either during memory deposition or retrieval of the input stimulus in the recall task or at any time in between.
Article
Functional activity in the visual cortex was assessed using functional magnetic resonance imaging technology while participants viewed a series of pleasant, neutral, or unpleasant pictures. Coronal images at four different locations in the occipital cortex were acquired during each of eight 12-s picture presentation periods (on) and 12-s interpicture interval (off). The extent of functional activation was larger in the right than the left hemisphere and larger in the occipital than in the occipitoparietal regions during processing of all picture contents compared with the interpicture intervals. More importantly, functional activity was significantly greater in all sampled brain regions when processing emotional (pleasant or unpleasant) pictures than when processing neutral stimuli. In Experiment 2, a hypothesis that these differences were an artifact of differential eye movements was ruled out. Whereas both emotional and neutral pictures produced activity centered on the calcarine fissure (Area 17), only emotional pictures also produced sizable clusters bilaterally in the occipital gyrus, in the right fusiform gyrus, and in the right inferior and superior parietal lobules.
Article
by Richard B. Ivry and Lynn C. Robertson, MIT Press, 1998. $55.00 (315 pages) ISBN 0 262 09034 1.
Article
Book description: The first edition of The Cognitive Neurosciences helped to define the field. The second edition reflects the many advances that have taken place-particularly in imaging and recording techniques. From the molecular level up to that of human consciousness, the contributions cover one of the most fascinating areas of science--the relationship between the structural and physiological mechanisms of the brain/nervous system and the psychological reality of mind. The majority of the chapters in this edition of The Cognitive Neurosciences are new, and those from the first edition have been completely rewritten and updated. This major reference work is now available online as part of MIT CogNet, The Cognitive and Brain Sciences Community online.
Article
1. In order to examine the composition of the geniculostriate input to the superior colliculus, microelectrode recordings were undertaken in this structure of the rhesus monkey while parvocellular or magnocellular laminae of the LGN were reversibly inactivated by injecting minute quantities of lidocaine or MgCl2. 2. The inactivation of magnocellular laminae disrupted the visually driven activity of most cells in the topographically corresponding areas of the colliculus, but not in the superficial retinotectal recipient zone. The inactivation of parvocellular lamina had no effect on the visually driven activity of collicular cells. 3. Several controls were carried out to rule out the possibility of intervention with fibers of passage. We ascertained that the LGN injections did not affect the direct retinotectal pathway by comparing the effect of such inactivation with the effect produced by reversibly cooling visual cortex. These two manipulations yielded similar results: cells in the most superficial regions of the superior colliculus were unaffected by both cortical cooling and by magnocellular injections, while below this region the response of collicular cells was reduced or eliminated in both cases. 4. These results suggest that the indirect visual pathway to the superior colliculus via cortex is activated selectively by the broad-band system, which is relayed through magnocellular LGN. The color-opponent system does not appear to have a corticotectal input sufficient to drive collicular cells independently.
Article
We argue that it seems fruitful to regard the retino-geniculate-cortical pathway, and perhaps the visual pathways in general, as comprising distinct neuronal channels which begin with the major groupings of ganglion cells, and subserve distinct functions within the overall operation of the visual system. One problem for future work is to determine the extent and, equally importantly, the limitations of the idea of independently functioning neuronal channels operating within the visual system. Some evidence of those limitations is already available. Kulikowski and Tolhurst have provided evidence suggesting that pattern detection is mediated by the X-like system at high spatial frequencies and by the Y-like system at low frequencies, but that at intermediate frequencies, both systems are likely to contribute to this function. Again, there is already physiological and psychophysical evidence of inhibitory interaction between X- and Y-cell systems, which may contribute to their functioning. That is, although there is little evidence of excitatory interaction between W-, X- and Y-cell systems, at least up to the first cortical synapse, the functioning of, say, the X-cell system may depend on the inhibitory influences impinging on it from Y-cell activity. Further, it may prove to be the case that one cell 'system' may be involved in several distinct functions and considerable work may be required to establish whether or not these functions can be considered constituent parts of an overall function, such as 'ambient' or 'foveal' vision. In the following section we suggest a classification and terminology for visual neurones which may provide a framework for future work on these lines.
Article
In order to investigate the role of facial movement in the recognition of emotions, faces were covered with black makeup and white spots. Video recordings of such faces were played back so that only the white spots were visible. The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicated that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of these six emotions was also investigated using normally illuminated and spots-only displays. In both instances the results indicated that different facial regions are more informative for different emotions. The movement patterns characterizing the various emotional expressions, as well as common confusions between emotions, are also discussed.
Article
The dependence of the activity of single cells in the superficial layers of the superior colliculus of the cat upon the spatial frequency and contrast of a moving sinusoidal grating was analysed quantitatively. The responses of about 65% of the units tested were found to be spatial-frequency dependent. The range of sensitivity was relatively broad and somewhat similar to that observed in the complex cells of the visual cortex. No responses of collicular units were found above 2 c/deg. The spatial-frequency sensitive units also showed a marked sensitivity to variations of grating contrast. There appeared to be no simple correlation between receptive field characteristics and cell sensitivity to either spatial frequency or contrast. Chronic ablation of the cortical areas known to project to the superior colliculus (areas 17, 18, 19 and Clare-Bishop) did not abolish the sensitivity of collicular units to spatial periodical stimuli. The findings suggest the existence of some internal organization of collicular units, able to perform an analysis, albeit a primitive one, of spatial information.
Article
One of the major problems of living in a rich visual environment is deciding which particular object or location should be chosen for complete processing or attention; that is, deciding which object is most salient at any particular time. The pulvinar has enlarged substantially during evolution, although little has previously been known about its function. Recent studies suggest that the pulvinar contains neurons that generate signals related to the salience of visual objects. This evidence includes: (1) anatomical and physiological observations of visual function; (2) augmented responses in the pulvinar for visual stimuli presented in important contexts; (3) suppression of activity for stimuli presented in irrelevant conditions; (4) thalamic modulation producing behavioral changes in cued attention paradigms; and (5) similar changes with visual distracter tasks.