ABSTRACT: As a social species in a constantly changing environment, humans rely heavily on the informational richness and communicative capacity of the face. Thus, understanding how the brain processes information about faces in real time is of paramount importance. The N170 is a high temporal resolution electrophysiological index of the brain's early response to visual stimuli that is reliably elicited in carefully controlled laboratory-based studies. Although the N170 has often been reported to be largest in amplitude to faces, there has been debate about whether this effect might be an artifact of aspects of the controlled experimental stimulation schedules and materials. To investigate whether the N170 can be identified in more realistic conditions, with highly variable and cluttered visual images and accompanying auditory stimuli, we recorded EEG 'in the wild' while participants watched pop videos. Scene-cuts to faces generated a clear N170 response, and this was larger than the N170 to transitions where the videos cut to non-face stimuli. Within participants, these 'wild' face N170 amplitudes were moderately correlated with those observed in a typical laboratory experiment. Thus, we demonstrate that the face N170 is a robust and ecologically valid phenomenon, not an artifact arising as an unintended consequence of some property of the more typical laboratory paradigm.
Social Cognitive and Affective Neuroscience 12/2014.
ABSTRACT: For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
ABSTRACT: The facial first impressions literature has focused on trait dimensions, with less research on how social categories (like gender) may influence first impressions of faces. Yet, social psychological studies have shown the importance of categories like gender in the evaluation of behaviour. We investigated whether face gender affects the positive or negative evaluation of faces in terms of first impressions. In Study 1, we manipulated facial gender stereotypicality, and in Study 2, facial trustworthiness or dominance, and examined the valence of resulting spontaneous descriptions of male and female faces. For both male and female participants, counter-stereotypical (masculine- or dominant-looking) female faces were perceived more negatively than facially stereotypical male or female faces. In Study 3, we examined how facial dominance and trustworthiness affected rated valence across 1,000 male and female ambient face images, and replicated the finding that dominance is more negatively evaluated for female faces. In Study 4, the same effect was found with short stimulus presentations. These findings integrate the facial first impressions literature with evaluative differences based on social categories.
ABSTRACT: First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
Proceedings of the National Academy of Sciences 07/2014.
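The core of the attribute-based approach described in the abstract above is a linear mapping from objectively measured facial attributes to social-factor scores, evaluated by variance explained on unseen faces. The following is a minimal sketch of that idea with simulated stand-in data; the array sizes, noise level, and variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 500 faces x 20 measured attributes
# (feature positions, colours), and one social-factor score per face
# (e.g. approachability). In the study these come from rated ambient
# photographs; here they are simulated from a known linear rule.
n_faces, n_attributes = 500, 20
attributes = rng.normal(size=(n_faces, n_attributes))
true_weights = rng.normal(size=n_attributes)
factor_scores = attributes @ true_weights + rng.normal(scale=2.0, size=n_faces)

# Fit a linear model mapping attributes -> factor scores by least squares.
X = np.column_stack([np.ones(n_faces), attributes])  # prepend an intercept
coef, *_ = np.linalg.lstsq(X, factor_scores, rcond=None)

# R^2: the proportion of variance in the factor scores accounted for by
# linear changes in the measured attributes.
predicted = X @ coef
ss_res = np.sum((factor_scores - predicted) ** 2)
ss_tot = np.sum((factor_scores - factor_scores.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

In practice the model would be fitted on one set of faces and R^2 computed on held-out faces, which is how a figure like the 58% reported above would be estimated.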
ABSTRACT: Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but had only a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus, a region directly linked with processing facial expression, was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area, a region linked with processing facial identity, was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression, in which brain regions involved in processing invariant aspects of faces, such as identity, are highly sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, rely relatively more on edge-based cues.
ABSTRACT: Face-selective regions in the amygdala and posterior superior temporal sulcus (pSTS) are strongly implicated in the processing of transient facial signals, such as expression. Here, we measured neural responses in participants while they viewed dynamic changes in facial expression. Our aim was to explore how facial expression is represented in different face-selective regions. Short movies were generated by morphing between faces posing a neutral expression and a prototypical expression of a basic emotion (anger, disgust, fear, happiness or sadness). These dynamic stimuli were presented in a block design in four conditions: (1) same-expression change, same identity; (2) same-expression change, different identity; (3) different-expression change, same identity; (4) different-expression change, different identity. Thus, within a same-expression change condition the movies would show the same change in expression, whereas in the different-expression change conditions each movie would show a different change in expression. Facial identity remained constant during each movie, but in the different-identity conditions it varied between the movies in a block. The amygdala, but not the posterior STS, demonstrated a greater response to blocks in which each movie morphed from neutral to a different emotion category compared to blocks in which each movie morphed to the same emotion category. Neural adaptation in the amygdala was not affected by changes in facial identity. These results are consistent with a role of the amygdala in category-based representation of facial expressions of emotion.
ABSTRACT: The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.
ABSTRACT: Background: Impairments in social cognition have been described in schizophrenia and relate to core symptoms of the disorder. Social cognition is subserved by a network of brain regions, many of which have been implicated in schizophrenia. We hypothesized that deficits in connectivity between components of this social brain network may underlie the social cognition impairments seen in the disorder. Methods: We investigated brain activation and connectivity in a group of individuals with schizophrenia making social judgments of approachability from faces (n = 20), compared with a group of matched healthy volunteers (n = 24), using functional magnetic resonance imaging. Effective connectivity from the amygdala was estimated using the psychophysiological interaction approach. Results: While making approachability judgments, healthy participants recruited a network of social brain regions including amygdala, fusiform gyrus, cerebellum, and inferior frontal gyrus bilaterally and left medial prefrontal cortex. During the approachability task, healthy participants showed increased connectivity from the amygdala to the fusiform gyri, cerebellum, and left superior frontal cortex. In comparison to controls, individuals with schizophrenia overactivated the right middle frontal gyrus, superior frontal gyrus, and precuneus and had reduced connectivity between the amygdala and the insula cortex. Discussion: We report increased activation of frontal and medial parietal regions during social judgment in patients with schizophrenia, accompanied by decreased connectivity between the amygdala and insula. We suggest that the increased activation of frontal control systems and association cortex may reflect a compensatory mechanism for impaired connectivity of the amygdala with other parts of the social brain networks in schizophrenia.
ABSTRACT: Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face: when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus, a region directly linked to the processing of facial expressions, when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.
ABSTRACT: Borderline personality disorder (BPD) is a common and serious mental illness, associated with a high risk of suicide and self harm. Those with a diagnosis of BPD often display difficulties with social interaction and struggle to form and maintain interpersonal relationships. Here we investigated the ability of participants with BPD to make social inferences from faces.
Twenty participants with BPD and 21 healthy controls were shown a series of faces and asked to judge these according to one of six characteristics (age, distinctiveness, attractiveness, intelligence, approachability, trustworthiness). The number and direction of errors made (compared to population norms) were recorded for analysis.
Participants with a diagnosis of BPD displayed significant impairments in making judgements from faces. In particular, the BPD group judged faces as less approachable and less trustworthy than controls did. Furthermore, within the BPD group there was a correlation between scores on the Childhood Trauma Questionnaire (CTQ) and bias towards judging faces as unapproachable.
Individuals with a diagnosis of BPD have difficulty making appropriate social judgements about others from their faces. Judging more faces as unapproachable and untrustworthy indicates that this group may have a heightened sensitivity to perceiving potential threat, and this should be considered in clinical management and treatment.
PLoS ONE 11/2013; 8(11):e73440.
ABSTRACT: The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala's response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design fMRI paradigm, in which we compared the responses to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces, and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared to the control conditions involving faces. Overall, these findings are consistent with a role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.
Social Cognitive and Affective Neuroscience 10/2013.
ABSTRACT: Facial stereotypes are cognitive representations of the facial characteristics of members of social groups. In this study, we examined the extent to which facial stereotypes for occupational groups were based on physiognomic cues to stereotypical social characteristics. In Experiment 1, participants rated the occupational stereotypicality of naturalistic face images. These ratings were then regressed onto independent ratings of the faces on 16 separate traits. These traits, particularly those relevant to the occupational stereotype, explained the majority of variance in occupational stereotypicality ratings. In Experiments 2 and 3, we used trait ratings to reconstruct stereotypical occupation faces from a separate set of images, using face averaging techniques. These reconstructed facial stereotypes were validated by separate groups of participants as conforming to the occupational stereotype. These results indicate that facial cues and group stereotypes are integrated through shared semantic content in the cognitive representations of groups.
Social Psychological and Personality Science 09/2013; 4(5):615-623.
ABSTRACT: Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV > [unimodal auditory + unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
PLoS ONE 08/2013; 8(8):e70648.
ABSTRACT: Primary objective: To explore the relationships between verbal aggression, physical aggression and inappropriate sexual behaviour following acquired brain injury. Research design: Multivariate statistical modelling of observed verbal aggression, physical aggression and inappropriate sexual behaviour utilizing demographic, pre-morbid, injury-related and neurocognitive predictors. Methods and procedures: Clinical records of 152 participants with acquired brain injury were reviewed, providing an important data set as disordered behaviours had been recorded at the time of occurrence with the Brain Injury Rehabilitation Trust (BIRT) Aggression Rating Scale and complementary measures of inappropriate sexual behaviour. Three behavioural components (verbal aggression, physical aggression and inappropriate sexual behaviour) were identified and subjected to separate logistic regression modelling in a sub-set of 77 participants. Main outcomes and results: Successful modelling was achieved for both verbal and physical aggression (correctly classifying 74% and 65% of participants, respectively), with use of psychotropic medication and poorer verbal function increasing the odds of aggression occurring. Pre-morbid history of aggression predicted verbal but not physical aggression. No variables predicted inappropriate sexual behaviour. Conclusions: Verbal aggression, physical aggression and inappropriate sexual behaviour following acquired brain injury appear to reflect separate clinical phenomena rather than general behavioural dysregulation. Clinical markers that indicate an increased risk of post-injury aggression were not related to inappropriate sexual behaviour.
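The modelling approach in the abstract above amounts to fitting a logistic regression per behavioural component and reporting the percentage of participants correctly classified. The following is a minimal sketch of that analysis on simulated stand-in data; the sample size matches the sub-set described (77), but the predictor values, effect sizes, and variable names are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: for 77 participants, two predictors the
# study found informative (psychotropic medication use, verbal function)
# and a binary outcome (aggression observed or not). Simulated here.
n = 77
medication = rng.integers(0, 2, size=n).astype(float)
verbal_function = rng.normal(size=n)  # higher = better verbal function
true_logits = 1.2 * medication - 0.9 * verbal_function - 0.3
outcome = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

# Fit logistic regression by gradient descent on the log-likelihood.
X = np.column_stack([np.ones(n), medication, verbal_function])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    w -= 0.1 * X.T @ (p - outcome) / n      # gradient step

# Percentage correctly classified at the 0.5 threshold, analogous to
# the 74% / 65% figures reported for verbal and physical aggression.
fitted = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = float(np.mean((fitted > 0.5) == (outcome == 1)))
```

A positive fitted weight on `medication` and a negative weight on `verbal_function` would correspond to the reported direction of effects (medication use and poorer verbal function increasing the odds of aggression).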
ABSTRACT: Reversing the luminance values of a face (contrast negation) is known to disrupt recognition. However, the effects of contrast negation are attenuated in chimeric images, in which the eye region is returned to positive contrast (S. Gilad, M. Meng, & P. Sinha, 2009, Role of ordinal contrast relationships in face encoding, Proceedings of the National Academy of Sciences, USA, Vol. 106, pp. 5353-5358). Here, we probe further the importance of the eye region for the representation of facial identity. In the first experiment, we asked to what extent the chimeric benefit is specific to the eye region. Our results showed a benefit for including a positive eye region in a contrast negated face, whereas chimeric faces in which only the forehead, nose, or mouth regions were returned to positive contrast did not significantly improve recognition. In Experiment 2, we confirmed that the presence of positive contrast eyes alone does not account for the improved recognition of chimeric face images. Rather, it is the integration of information from the positive contrast eye region and the surrounding negative contrast face that is essential for the chimeric benefit. In Experiment 3, we demonstrated that the chimeric benefit is dependent on a holistic representation of the face. Finally, in Experiment 4, we showed that the positive contrast eye region needs to match the identity of the contrast negated part of the image for the chimera benefit to occur. Together, these results show the importance of the eye region for holistic representations of facial identity.
Journal of Experimental Psychology: Human Perception & Performance 05/2013.
ABSTRACT: Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of 1000 ambient images (images that are intended to be representative of those encountered in everyday life, see Jenkins, White, Van Montfort, & Burton, 2011). Experiment 2 then tested Oosterhof and Todorov's two-dimensional model on this extensive sample of face images. The original two dimensions were replicated and a novel 'youthful-attractiveness' factor also emerged. Experiment 3 successfully cross-validated the three-dimensional model using face averages directly constructed from the factor scores. These findings highlight the utility of the original trustworthiness and dominance dimensions, but also underscore the need to utilise varied face stimuli: with a more realistically diverse set of face images, social inferences from faces show a more elaborate underlying structure than hitherto suggested.
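The dimensional models tested in the abstract above are derived by factoring a faces-by-traits matrix of ratings into a small number of underlying dimensions. The following is a minimal principal-components sketch standing in for that kind of factor analysis, on simulated stand-in data; the counts (1000 faces, 12 traits, 3 dimensions) echo the abstract, but the data, trait structure, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in data: mean ratings of 1000 faces on 12 traits,
# simulated from three latent dimensions (playing the roles of
# trustworthiness, dominance and youthful-attractiveness) plus noise.
n_faces, n_traits, n_factors = 1000, 12, 3
latent = rng.normal(size=(n_faces, n_factors))
loadings = rng.normal(size=(n_factors, n_traits))
ratings = latent @ loadings + 0.5 * rng.normal(size=(n_faces, n_traits))

# Standardize traits, then take principal components of the trait
# correlation matrix; the top components are the candidate dimensions.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
corr = z.T @ z / n_faces
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
order = np.argsort(eigvals)[::-1]                # sort descending

# Proportion of rating variance captured by the top three dimensions,
# and per-face scores on those dimensions (used in the study to build
# face averages for cross-validation).
explained = eigvals[order][:n_factors].sum() / eigvals.sum()
factor_scores = z @ eigvecs[:, order[:n_factors]]
```

Note that a formal factor analysis (e.g. with rotation, as is common in this literature) differs from plain PCA in how it treats trait-specific variance; the sketch only illustrates the dimension-reduction step.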
ABSTRACT: Because moving depictions of face emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage 'authentic' mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment wherein participants made emotion discrimination and sex discrimination judgements to static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task by motion interaction. These regions were most active during emotion judgements to static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.
ABSTRACT: Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue.
Proceedings of the National Academy of Sciences 12/2012.
ABSTRACT: The amount of time an individual spends gazing at images is longer if the depicted person is sexually appealing. Despite an increasing use of such response latencies as a diagnostic tool in applied forensic settings, the underlying processes that drive the seemingly robust effect of longer response latencies for sexually attractive targets remain unknown. In the current study, two alternative explanations are presented and tested using an adapted viewing time paradigm that disentangled task- and stimulus-specific processes. Heterosexual and homosexual male participants were instructed to rate the sexual attractiveness of target persons differing in sex and sexual maturation from four experimentally assigned perspectives: heterosexual and homosexual perspectives for both sexes. This vicarious viewing time paradigm facilitated the estimation of the independent contributions of task (assigned perspective) and stimuli to viewing time effects. Results showed a large task-based effect as well as a relatively smaller stimulus-based effect. This pattern suggests that, when viewing time measures are used for the assessment of sexual interest, it should be taken into consideration that response latency patterns can be biased by judging images from a selected perspective.
Archives of Sexual Behavior 12/2012; 41(6):1389-1401.
ABSTRACT: Neural models of human face perception propose parallel pathways. One pathway (including posterior superior temporal sulcus, pSTS) is responsible for processing changeable aspects of faces such as gaze and expression, and the other pathway (including the fusiform face area, FFA) is responsible for relatively invariant aspects such as identity. However, to be socially meaningful, changes in expression and gaze must be tracked across an individual face. Our aim was to investigate how this is achieved. Using functional magnetic resonance imaging, we found a region in pSTS that responded more to sequences of faces varying in gaze and expression in which the identity was constant compared with sequences in which the identity varied. To determine whether this preferential response to same identity faces was due to the processing of identity in the pSTS or was a result of interactions between pSTS and other regions thought to code face identity, we measured the functional connectivity between face-selective regions. We found increased functional connectivity between the pSTS and FFA when participants viewed same identity faces compared with different identity faces. Together, these results suggest that distinct neural pathways involved in expression and identity interact to process the changeable features of the face in a socially meaningful way.