Andrew W. Young

CUNY Graduate Center, New York, New York, United States

Publications (172) · 799.92 Total Impact Points

  •
    ABSTRACT: The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions.
    Full-text · Article · Jan 2016 · NeuroImage
  • David M. Watson · Andrew W. Young · Timothy J. Andrews
    ABSTRACT: Neuroimaging studies have revealed topographically organised patterns of response to different objects in the ventral visual pathway. These patterns are thought to be based on the form of the object. However, it is not clear what dimensions of object form are important. Here, we determined the extent to which spatial properties (energy across the image) could explain patterns of response in these regions. We compared patterns of fMRI response to images from different object categories presented at different retinal sizes. Although distinct neural patterns were evident to different object categories, changing the size (and thus the spatial properties) of the images had a significant effect on these patterns. Next, we used a computational approach to determine whether more fine-grained differences in the spatial properties can explain the patterns of neural response to different objects. We found that the spatial properties of the image were able to predict patterns of neural response, even when categorical factors were removed from the analysis. We also found that the effect of spatial properties on the patterns of response varies across the ventral visual pathway. These results show how spatial properties can be an important organizing principle in the topography of the ventral visual pathway.
    No preview · Article · Nov 2015 · NeuroImage
  •
    ABSTRACT: People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability, and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling.
    Preview · Article · Oct 2015 · Frontiers in Psychology
  • Xiaoqian Yan · Timothy J Andrews · Andrew W Young
    ABSTRACT: The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face.
    Full-text · Article · Oct 2015 · Journal of Experimental Psychology: Human Perception & Performance

  • No preview · Article · Aug 2015 · Perception
  • Andrew I W James · Jan R Böhnke · Andrew W Young · Gary J Lewis
    ABSTRACT: Understanding the underpinnings of behavioural disturbances following brain injury is of considerable importance, but little at present is known about the relationships between different types of behavioural disturbances. Here, we take a novel approach to this issue by using confirmatory factor analysis to elucidate the architecture of verbal aggression, physical aggression and inappropriate sexual behaviour using systematic records made across an eight-week observation period for a large sample (n = 301) of individuals with a range of brain injuries. This approach offers a powerful test of the architecture of these behavioural disturbances by testing the fit between observed behaviours and different theoretical models. We chose models that reflected alternative theoretical perspectives based on generalized disinhibition (Model 1), a difference between aggression and inappropriate sexual behaviour (Model 2), or on the idea that verbal aggression, physical aggression and inappropriate sexual behaviour reflect broadly distinct but correlated clinical phenomena (Model 3). Model 3 provided the best fit to the data, indicating that these behaviours can be viewed as distinct, but with substantial overlap. These data are important both for developing models concerning the architecture of behaviour and for clinical management in individuals with brain injury.
    Full-text · Article · Jul 2015 · Proceedings of the Royal Society B: Biological Sciences
  •
    ABSTRACT: Converging evidence suggests that the fusiform gyrus is involved in the processing of both faces and words. We used fMRI to investigate the extent to which the representation of words and faces in this region of the brain is based on a common neural representation. In Experiment 1, a univariate analysis revealed regions in the fusiform gyrus that were only selective for faces and other regions that were only selective for words. However, we also found regions that showed both word-selective and face-selective responses, particularly in the left hemisphere. We then used a multivariate analysis to measure the pattern of response to faces and words. Despite the overlap in regional responses, we found distinct patterns of response to both faces and words in the left and right fusiform gyrus. In Experiment 2, fMR adaptation was used to determine whether information about familiar faces and names is integrated in the fusiform gyrus. Distinct regions of the fusiform gyrus showed adaptation to either familiar faces or familiar names. However, there was no adaptation to sequences of faces and names with the same identity. Taken together, these results provide evidence for distinct, but overlapping, neural representations for words and faces in the fusiform gyrus.
    No preview · Article · Jul 2015 · Cerebral Cortex
  •
    ABSTRACT: The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that the pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.
    No preview · Article · Mar 2015 · Cortex
  • Patrick Johnston · Rebecca Molyneux · Andrew W. Young
    ABSTRACT: As a social species in a constantly changing environment, humans rely heavily on the informational richness and communicative capacity of the face. Thus, understanding how the brain processes information about faces in real-time is of paramount importance. The N170 is a high-temporal resolution electrophysiological index of the brain’s early response to visual stimuli that is reliably elicited in carefully controlled laboratory-based studies. Although the N170 has often been reported to be of greatest amplitude to faces, there has been debate regarding whether this effect might be an artefact of certain aspects of the controlled experimental stimulation schedules and materials. To investigate whether the N170 can be identified in more realistic conditions with highly variable and cluttered visual images and accompanying auditory stimuli we recorded EEG ‘in the wild’, while participants watched pop videos. Scene-cuts to faces generated a clear N170 response, and this was larger than the N170 to transitions where the videos cut to non-face stimuli. Within participants, wild-type face N170 amplitudes were moderately correlated to those observed in a typical laboratory experiment. Thus, we demonstrate that the face N170 is a robust and ecologically valid phenomenon and not an artefact arising as an unintended consequence of some property of the more typical laboratory paradigm.
    Full-text · Article · Dec 2014 · Social Cognitive and Affective Neuroscience
  •
    ABSTRACT: The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.
    No preview · Article · Oct 2014 · Journal of Vision
  • Christopher A Longmore · Chang Hong Liu · Andrew W Young
    ABSTRACT: For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
    No preview · Article · Sep 2014 · Quarterly Journal of Experimental Psychology (2006)
  •
    ABSTRACT: The facial first impressions literature has focused on trait dimensions, with less research on how social categories (like gender) may influence first impressions of faces. Yet, social psychological studies have shown the importance of categories like gender in the evaluation of behaviour. We investigated whether face gender affects the positive or negative evaluation of faces in terms of first impressions. In Study 1, we manipulated facial gender stereotypicality, and in Study 2, facial trustworthiness or dominance, and examined the valence of resulting spontaneous descriptions of male and female faces. For both male and female participants, counter-stereotypical (masculine- or dominant-looking) female faces were perceived more negatively than facially stereotypical male or female faces. In Study 3, we examined how facial dominance and trustworthiness affected rated valence across 1,000 male and female ambient face images, and replicated the finding that dominance is more negatively evaluated for female faces. In Study 4, the same effect was found with short stimulus presentations. These findings integrate the facial first impressions literature with evaluative differences based on social categories.
    No preview · Article · Aug 2014
  •
    ABSTRACT: First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
    No preview · Article · Jul 2014 · Proceedings of the National Academy of Sciences
  • Richard J Harris · Andrew W Young · Timothy J Andrews
    ABSTRACT: Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but only had a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus - a region directly linked with processing facial expression - was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area - a region linked with processing facial identity - was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression in which brain regions involved in processing invariant aspects of faces, such as identity, are very sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, are relatively dependent on edge-based cues.
    Full-text · Article · Apr 2014 · NeuroImage
  • Richard J Harris · Andrew W Young · Timothy J Andrews
    ABSTRACT: Face-selective regions in the amygdala and posterior superior temporal sulcus (pSTS) are strongly implicated in the processing of transient facial signals, such as expression. Here, we measured neural responses in participants while they viewed dynamic changes in facial expression. Our aim was to explore how facial expression is represented in different face-selective regions. Short movies were generated by morphing between faces posing a neutral expression and a prototypical expression of a basic emotion (either anger, disgust, fear, happiness or sadness). These dynamic stimuli were presented in a block design in the following four stimulus conditions: (1) same-expression change, same identity; (2) same-expression change, different identity; (3) different-expression change, same identity; (4) different-expression change, different identity. So, within a same-expression change condition the movies would show the same change in expression, whereas in the different-expression change conditions each movie would have a different change in expression. Facial identity remained constant during each movie, but in the different-identity conditions the facial identity varied between each movie in a block. The amygdala, but not the posterior STS, demonstrated a greater response to blocks in which each movie morphed from neutral to a different emotion category compared to blocks in which each movie morphed to the same emotion category. Neural adaptation in the amygdala was not affected by changes in facial identity. These results are consistent with a role of the amygdala in category-based representation of facial expressions of emotion.
    No preview · Article · Jan 2014 · Neuropsychologia
  •
    ABSTRACT: Background: Impairments in social cognition have been described in schizophrenia and relate to core symptoms of the disorder. Social cognition is subserved by a network of brain regions, many of which have been implicated in schizophrenia. We hypothesized that deficits in connectivity between components of this social brain network may underlie the social cognition impairments seen in the disorder. Methods: We investigated brain activation and connectivity in a group of individuals with schizophrenia making social judgments of approachability from faces (n = 20), compared with a group of matched healthy volunteers (n = 24), using functional magnetic resonance imaging. Effective connectivity from the amygdala was estimated using the psychophysiological interaction approach. Results: While making approachability judgments, healthy participants recruited a network of social brain regions including amygdala, fusiform gyrus, cerebellum, and inferior frontal gyrus bilaterally and left medial prefrontal cortex. During the approachability task, healthy participants showed increased connectivity from the amygdala to the fusiform gyri, cerebellum, and left superior frontal cortex. In comparison to controls, individuals with schizophrenia overactivated the right middle frontal gyrus, superior frontal gyrus, and precuneus and had reduced connectivity between the amygdala and the insula cortex. Discussion: We report increased activation of frontal and medial parietal regions during social judgment in patients with schizophrenia, accompanied by decreased connectivity between the amygdala and insula. We suggest that the increased activation of frontal control systems and association cortex may reflect a compensatory mechanism for impaired connectivity of the amygdala with other parts of the social brain networks in schizophrenia.
    Full-text · Article · Jan 2014 · Schizophrenia Bulletin
  •
    ABSTRACT: Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face-when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus-a region directly linked to the processing of facial expressions-when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.
    Preview · Article · Nov 2013 · Psychological Science
  •
    ABSTRACT: Borderline personality disorder (BPD) is a common and serious mental illness, associated with a high risk of suicide and self-harm. Those with a diagnosis of BPD often display difficulties with social interaction and struggle to form and maintain interpersonal relationships. Here we investigated the ability of participants with BPD to make social inferences from faces. Twenty participants with BPD and 21 healthy controls were shown a series of faces and asked to judge these according to one of six characteristics (age, distinctiveness, attractiveness, intelligence, approachability, trustworthiness). The number and direction of errors made (compared to population norms) were recorded for analysis. Participants with a diagnosis of BPD displayed significant impairments in making judgements from faces. In particular, the BPD group judged faces as less approachable and less trustworthy than controls. Furthermore, within the BPD group there was a correlation between scores on the Childhood Trauma Questionnaire (CTQ) and a bias towards judging faces as unapproachable. Individuals with a diagnosis of BPD have difficulty making appropriate social judgements about others from their faces. Judging more faces as unapproachable and untrustworthy indicates that this group may have a heightened sensitivity to perceiving potential threat, and this should be considered in clinical management and treatment.
    Full-text · Article · Nov 2013 · PLoS ONE
  •
    ABSTRACT: The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.
    Full-text · Article · Oct 2013 · Social Cognitive and Affective Neuroscience
  •
    ABSTRACT: Facial stereotypes are cognitive representations of the facial characteristics of members of social groups. In this study, we examined the extent to which facial stereotypes for occupational groups were based on physiognomic cues to stereotypical social characteristics. In Experiment 1, participants rated the occupational stereotypicality of naturalistic face images. These ratings were then regressed onto independent ratings of the faces on 16 separate traits. These traits, particularly those relevant to the occupational stereotype, explained the majority of variance in occupational stereotypicality ratings. In Experiments 2 and 3, we used trait ratings to reconstruct stereotypical occupation faces from a separate set of images, using face averaging techniques. These reconstructed facial stereotypes were validated by separate groups of participants as conforming to the occupational stereotype. These results indicate that facial cues and group stereotypes are integrated through shared semantic content in the cognitive representations of groups.
    Full-text · Article · Sep 2013 · Social Psychological and Personality Science

Publication Stats

13k Citations
799.92 Total Impact Points


  • 1998-2015
    • CUNY Graduate Center
      New York, New York, United States
  • 1997-2015
    • The University of York
      • Department of Psychology
      • York Neuroimaging Centre (YNiC)
      York, England, United Kingdom
  • 1989-2007
    • Durham University
      • Department of Psychology
      Durham, England, United Kingdom
  • 2006
    • University of Hull
      • Department of Psychology
      Kingston upon Hull, England, United Kingdom
  • 2005
    • MRC Cognition and Brain Sciences Unit
      Cambridge, England, United Kingdom
  • 2004
    • King's College London
      • Department of Psychological Medicine
London, England, United Kingdom
  • 2000
    • Cardiff University
      • School of Psychology
      Cardiff, Wales, United Kingdom
  • 1996
    • Leiden University
Leiden, South Holland, Netherlands
  • 1995
    • University of Liverpool
      Liverpool, England, United Kingdom
  • 1994
    • The University of Edinburgh
      Edinburgh, Scotland, United Kingdom
  • 1977-1992
    • Lancaster University
      • Department of Psychology
Lancaster, England, United Kingdom
  • 1976
    • University of Aberdeen
      • School of Psychology
      Aberdeen, Scotland, United Kingdom