Andrew W. Young

The University of York, York, England, United Kingdom

Publications (161) · 768.85 Total Impact

  • Andrew I W James · Jan R Böhnke · Andrew W Young · Gary J Lewis
    ABSTRACT: Understanding the underpinnings of behavioural disturbances following brain injury is of considerable importance, but little is currently known about the relationships between different types of behavioural disturbance. Here, we take a novel approach to this issue by using confirmatory factor analysis to elucidate the architecture of verbal aggression, physical aggression and inappropriate sexual behaviour, using systematic records made across an eight-week observation period for a large sample (n = 301) of individuals with a range of brain injuries. This approach offers a powerful test of the architecture of these behavioural disturbances by testing the fit between observed behaviours and different theoretical models. We chose models that reflected alternative theoretical perspectives based on generalized disinhibition (Model 1), a difference between aggression and inappropriate sexual behaviour (Model 2), or the idea that verbal aggression, physical aggression and inappropriate sexual behaviour reflect broadly distinct but correlated clinical phenomena (Model 3). Model 3 provided the best fit to the data, indicating that these behaviours can be viewed as distinct, but with substantial overlap. These data are important both for developing models of the architecture of behaviour and for clinical management of individuals with brain injury.
    Proceedings of the Royal Society B: Biological Sciences 07/2015; 282(1811). DOI:10.1098/rspb.2015.0711 · 5.05 Impact Factor
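    The model-comparison logic described in this abstract can be illustrated with a brief, hypothetical sketch. This is not the authors' analysis code: it assumes the Python semopy package and made-up indicator columns (va1-va3, pa1-pa3, isb1-isb3) standing in for the recorded behaviour measures.

        # Sketch only: compare one-, two- and three-factor CFA models of
        # behavioural disturbance, in the spirit of the study above.
        # Assumes the semopy package and a pandas DataFrame `df` with
        # hypothetical indicator columns; not the authors' code.
        import semopy

        models = {
            "Model 1 (general disinhibition)":
                "disinhibition =~ va1 + va2 + va3 + pa1 + pa2 + pa3 + isb1 + isb2 + isb3",
            "Model 2 (aggression vs. inappropriate sexual behaviour)":
                "aggression =~ va1 + va2 + va3 + pa1 + pa2 + pa3\n"
                "sexual =~ isb1 + isb2 + isb3",
            "Model 3 (three correlated factors)":
                "verbal =~ va1 + va2 + va3\n"
                "physical =~ pa1 + pa2 + pa3\n"
                "sexual =~ isb1 + isb2 + isb3",
        }

        def compare_fit(df, models):
            """Fit each CFA specification and print its fit statistics."""
            for name, description in models.items():
                model = semopy.Model(description)
                model.fit(df)
                stats = semopy.calc_stats(model)  # fit indices (CFI, RMSEA, AIC, ...)
                print(name)
                print(stats.T)

    Comparing relative fit indices across the three specifications is the sense in which Model 3 "provided the best fit" in the abstract.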
  • ABSTRACT: Converging evidence suggests that the fusiform gyrus is involved in the processing of both faces and words. We used fMRI to investigate the extent to which the representation of words and faces in this region of the brain is based on a common neural representation. In Experiment 1, a univariate analysis revealed regions in the fusiform gyrus that were only selective for faces and other regions that were only selective for words. However, we also found regions that showed both word-selective and face-selective responses, particularly in the left hemisphere. We then used a multivariate analysis to measure the pattern of response to faces and words. Despite the overlap in regional responses, we found distinct patterns of response to both faces and words in the left and right fusiform gyrus. In Experiment 2, fMR adaptation was used to determine whether information about familiar faces and names is integrated in the fusiform gyrus. Distinct regions of the fusiform gyrus showed adaptation to either familiar faces or familiar names. However, there was no adaptation to sequences of faces and names with the same identity. Taken together, these results provide evidence for distinct, but overlapping, neural representations for words and faces in the fusiform gyrus.
    Cerebral Cortex 07/2015; DOI:10.1093/cercor/bhv147 · 8.67 Impact Factor
  • ABSTRACT: The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.
    Cortex 03/2015; 69. DOI:10.1016/j.cortex.2015.03.002 · 5.13 Impact Factor
  • Patrick Johnston · Rebecca Molyneux · Andrew W. Young
    ABSTRACT: As a social species in a constantly changing environment, humans rely heavily on the informational richness and communicative capacity of the face. Thus, understanding how the brain processes information about faces in real time is of paramount importance. The N170 is a high-temporal-resolution electrophysiological index of the brain’s early response to visual stimuli that is reliably elicited in carefully controlled laboratory-based studies. Although the N170 has often been reported to be of greatest amplitude to faces, there has been debate regarding whether this effect might be an artefact of certain aspects of the controlled experimental stimulation schedules and materials. To investigate whether the N170 can be identified in more realistic conditions with highly variable and cluttered visual images and accompanying auditory stimuli, we recorded EEG ‘in the wild’ while participants watched pop videos. Scene-cuts to faces generated a clear N170 response, and this was larger than the N170 to transitions where the videos cut to non-face stimuli. Within participants, wild-type face N170 amplitudes were moderately correlated with those observed in a typical laboratory experiment. Thus, we demonstrate that the face N170 is a robust and ecologically valid phenomenon and not an artefact arising as an unintended consequence of some property of the more typical laboratory paradigm.
    Social Cognitive and Affective Neuroscience 12/2014; DOI:10.1093/scan/nsu136 · 7.37 Impact Factor
  • ABSTRACT: The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.
    Journal of Vision 10/2014; 14(12). DOI:10.1167/14.12.9 · 2.39 Impact Factor
  • Christopher A Longmore · Chang Hong Liu · Andrew W Young
    ABSTRACT: For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
    The Quarterly Journal of Experimental Psychology 09/2014; 68(2):1-12. DOI:10.1080/17470218.2014.939666 · 2.13 Impact Factor
  • ABSTRACT: The facial first impressions literature has focused on trait dimensions, with less research on how social categories (like gender) may influence first impressions of faces. Yet, social psychological studies have shown the importance of categories like gender in the evaluation of behaviour. We investigated whether face gender affects the positive or negative evaluation of faces in terms of first impressions. In Study 1, we manipulated facial gender stereotypicality, and in Study 2, facial trustworthiness or dominance, and examined the valence of the resulting spontaneous descriptions of male and female faces. For both male and female participants, counter-stereotypical (masculine or dominant-looking) female faces were perceived more negatively than facially stereotypical male or female faces. In Study 3, we examined how facial dominance and trustworthiness affected rated valence across 1,000 male and female ambient face images, and replicated the finding that dominance is more negatively evaluated for female faces. In Study 4, the same effect was found with short stimulus presentations. These findings integrate the facial first impressions literature with evaluative differences based on social categories.
    British Journal of Psychology 08/2014; 106(2). DOI:10.1111/bjop.12085
  • ABSTRACT: First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
    Proceedings of the National Academy of Sciences 07/2014; 111(32). DOI:10.1073/pnas.1409860111 · 9.67 Impact Factor
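    The attribute-based linear modelling summarised here can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline: the variable names and train/test split are hypothetical, and only the linear-regression stage (the part reported to explain about 58% of the variance) is shown.

        # Sketch only: predict social-impression factor scores (approachability,
        # youthful-attractiveness, dominance) from objectively measured facial
        # attributes, and report variance explained on held-out faces.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        def fit_impression_model(attributes: np.ndarray, factor_scores: np.ndarray):
            # attributes: n_faces x n_attributes (feature positions, colours, ...)
            # factor_scores: n_faces x 3 (one column per impression dimension)
            X_train, X_test, y_train, y_test = train_test_split(
                attributes, factor_scores, test_size=0.2, random_state=0)
            model = LinearRegression().fit(X_train, y_train)
            variance_explained = r2_score(y_test, model.predict(X_test))
            return model, variance_explained

    Inverting such a model (predicting attribute values from chosen factor scores) is the step the abstract describes for visualising each dimension as a cartoon face-like image.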
  • Richard J Harris · Andrew W Young · Timothy J Andrews
    ABSTRACT: Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but had only a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus, a region directly linked with processing facial expression, was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area, a region linked with processing facial identity, was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression, in which brain regions involved in processing invariant aspects of faces, such as identity, are very sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, are relatively more dependent on edge-based cues.
    NeuroImage 04/2014; 97(100). DOI:10.1016/j.neuroimage.2014.04.032 · 6.36 Impact Factor
  • Richard J Harris · Andrew W Young · Timothy J Andrews
    ABSTRACT: Face-selective regions in the amygdala and posterior superior temporal sulcus (pSTS) are strongly implicated in the processing of transient facial signals, such as expression. Here, we measured neural responses in participants while they viewed dynamic changes in facial expression. Our aim was to explore how facial expression is represented in different face-selective regions. Short movies were generated by morphing between faces posing a neutral expression and a prototypical expression of a basic emotion (anger, disgust, fear, happiness or sadness). These dynamic stimuli were presented in a block design in four conditions: (1) same expression change, same identity; (2) same expression change, different identity; (3) different expression change, same identity; (4) different expression change, different identity. Within a same-expression change condition the movies showed the same change in expression, whereas in the different-expression change conditions each movie had a different change in expression. Facial identity remained constant during each movie, but in the different-identity conditions the facial identity varied between movies within a block. The amygdala, but not the posterior STS, demonstrated a greater response to blocks in which each movie morphed from neutral to a different emotion category compared with blocks in which each movie morphed to the same emotion category. Neural adaptation in the amygdala was not affected by changes in facial identity. These results are consistent with a role for the amygdala in category-based representation of facial expressions of emotion.
    Neuropsychologia 01/2014; 56(100). DOI:10.1016/j.neuropsychologia.2014.01.005 · 3.30 Impact Factor
  • ABSTRACT: Background: Impairments in social cognition have been described in schizophrenia and relate to core symptoms of the disorder. Social cognition is subserved by a network of brain regions, many of which have been implicated in schizophrenia. We hypothesized that deficits in connectivity between components of this social brain network may underlie the social cognition impairments seen in the disorder. Methods: We investigated brain activation and connectivity in a group of individuals with schizophrenia making social judgments of approachability from faces (n = 20), compared with a group of matched healthy volunteers (n = 24), using functional magnetic resonance imaging. Effective connectivity from the amygdala was estimated using the psychophysiological interaction approach. Results: While making approachability judgments, healthy participants recruited a network of social brain regions including amygdala, fusiform gyrus, cerebellum, and inferior frontal gyrus bilaterally and left medial prefrontal cortex. During the approachability task, healthy participants showed increased connectivity from the amygdala to the fusiform gyri, cerebellum, and left superior frontal cortex. In comparison to controls, individuals with schizophrenia overactivated the right middle frontal gyrus, superior frontal gyrus, and precuneus and had reduced connectivity between the amygdala and the insula cortex. Discussion: We report increased activation of frontal and medial parietal regions during social judgment in patients with schizophrenia, accompanied by decreased connectivity between the amygdala and insula. We suggest that the increased activation of frontal control systems and association cortex may reflect a compensatory mechanism for impaired connectivity of the amygdala with other parts of the social brain networks in schizophrenia.
    Schizophrenia Bulletin 01/2014; 40(1):152-160. DOI:10.1093/schbul/sbt086 · 8.45 Impact Factor
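    The connectivity measure mentioned in the Methods (the psychophysiological interaction, or PPI, approach) can be illustrated with a simplified sketch. This is not the study's analysis code: it omits haemodynamic deconvolution and other standard preprocessing, and all variable names are hypothetical.

        # Sketch only: a PPI regressor is the product of the (centred) seed-region
        # time course and the task regressor, entered into a GLM alongside the
        # main effects; its beta indexes task-dependent connectivity.
        import numpy as np

        def ppi_design(seed_ts: np.ndarray, task_regressor: np.ndarray) -> np.ndarray:
            # Columns: seed main effect, task main effect, interaction, intercept.
            seed = seed_ts - seed_ts.mean()
            task = task_regressor - task_regressor.mean()
            return np.column_stack([seed, task, seed * task, np.ones_like(seed)])

        def ppi_beta(target_ts: np.ndarray, design: np.ndarray) -> float:
            # Ordinary least squares; the third beta is the PPI (connectivity) term.
            betas, *_ = np.linalg.lstsq(design, target_ts, rcond=None)
            return float(betas[2])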
  • ABSTRACT: Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face: when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus, a region directly linked to the processing of facial expressions, when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.
    Psychological Science 11/2013; 25(1). DOI:10.1177/0956797613501521 · 4.43 Impact Factor
  • ABSTRACT: Borderline personality disorder (BPD) is a common and serious mental illness, associated with a high risk of suicide and self-harm. Those with a diagnosis of BPD often display difficulties with social interaction and struggle to form and maintain interpersonal relationships. Here we investigated the ability of participants with BPD to make social inferences from faces. Twenty participants with BPD and 21 healthy controls were shown a series of faces and asked to judge them according to one of six characteristics (age, distinctiveness, attractiveness, intelligence, approachability, trustworthiness). The number and direction of errors made (compared with population norms) were recorded for analysis. Participants with a diagnosis of BPD displayed significant impairments in making judgements from faces. In particular, the BPD group judged faces as less approachable and less trustworthy than controls. Furthermore, within the BPD group there was a correlation between scores on the Childhood Trauma Questionnaire (CTQ) and a bias towards judging faces as unapproachable. Individuals with a diagnosis of BPD have difficulty making appropriate social judgements about others from their faces. Judging more faces as unapproachable and untrustworthy indicates that this group may have a heightened sensitivity to potential threat, and this should be considered in clinical management and treatment.
    PLoS ONE 11/2013; 8(11):e73440. DOI:10.1371/journal.pone.0073440 · 3.23 Impact Factor
  • ABSTRACT: The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.
    Social Cognitive and Affective Neuroscience 10/2013; 9(11). DOI:10.1093/scan/nst162 · 7.37 Impact Factor
  • ABSTRACT: Facial stereotypes are cognitive representations of the facial characteristics of members of social groups. In this study, we examined the extent to which facial stereotypes for occupational groups were based on physiognomic cues to stereotypical social characteristics. In Experiment 1, participants rated the occupational stereotypicality of naturalistic face images. These ratings were then regressed onto independent ratings of the faces on 16 separate traits. These traits, particularly those relevant to the occupational stereotype, explained the majority of variance in occupational stereotypicality ratings. In Experiments 2 and 3, we used trait ratings to reconstruct stereotypical occupation faces from a separate set of images, using face averaging techniques. These reconstructed facial stereotypes were validated by separate groups of participants as conforming to the occupational stereotype. These results indicate that facial cues and group stereotypes are integrated through shared semantic content in the cognitive representations of groups.
    Social Psychological and Personality Science 09/2013; 4(5):615-623. DOI:10.1177/1948550612469820 · 2.56 Impact Factor
  • Cindy C Hagan · Will Woods · Sam Johnson · Gary G R Green · Andrew W Young
    ABSTRACT: Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
    PLoS ONE 08/2013; 8(8):e70648. DOI:10.1371/journal.pone.0070648 · 3.23 Impact Factor
  • Andrew I W James · Andrew W Young
    ABSTRACT: Primary objective: To explore the relationships between verbal aggression, physical aggression and inappropriate sexual behaviour following acquired brain injury. Research design: Multivariate statistical modelling of observed verbal aggression, physical aggression and inappropriate sexual behaviour utilizing demographic, pre-morbid, injury-related and neurocognitive predictors. Methods and procedures: Clinical records of 152 participants with acquired brain injury were reviewed, providing an important data set as disordered behaviours had been recorded at the time of occurrence with the Brain Injury Rehabilitation Trust (BIRT) Aggression Rating Scale and complementary measures of inappropriate sexual behaviour. Three behavioural components (verbal aggression, physical aggression and inappropriate sexual behaviour) were identified and subjected to separate logistic regression modelling in a sub-set of 77 participants. Main outcomes and results: Successful modelling was achieved for both verbal and physical aggression (correctly classifying 74% and 65% of participants, respectively), with use of psychotropic medication and poorer verbal function increasing the odds of aggression occurring. Pre-morbid history of aggression predicted verbal but not physical aggression. No variables predicted inappropriate sexual behaviour. Conclusions: Verbal aggression, physical aggression and inappropriate sexual behaviour following acquired brain injury appear to reflect separate clinical phenomena rather than general behavioural dysregulation. Clinical markers that indicate an increased risk of post-injury aggression were not related to inappropriate sexual behaviour.
    Brain Injury 08/2013; 27(10). DOI:10.3109/02699052.2013.804200 · 1.81 Impact Factor
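    The logistic regression modelling reported above can be sketched briefly. This is an illustrative reconstruction, not the study's code: the predictor column names are hypothetical placeholders for the demographic, pre-morbid, injury-related and neurocognitive variables described in the abstract.

        # Sketch only: predict whether aggression was observed from clinical
        # predictors and estimate classification accuracy by cross-validation
        # (cf. the reported 74% and 65% correct classification).
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def model_aggression(df: pd.DataFrame, outcome: str = "verbal_aggression"):
            predictors = ["psychotropic_medication", "verbal_function",
                          "premorbid_aggression", "injury_severity"]
            X, y = df[predictors], df[outcome]
            clf = LogisticRegression(max_iter=1000)
            accuracy = cross_val_score(clf, X, y, cv=5).mean()
            return clf.fit(X, y), accuracy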
  • Mladen Sormaz · Timothy J Andrews · Andrew W Young
    ABSTRACT: Reversing the luminance values of a face (contrast negation) is known to disrupt recognition. However, the effects of contrast negation are attenuated in chimeric images, in which the eye region is returned to positive contrast (S. Gilad, M. Meng, & P. Sinha, 2009, Role of ordinal contrast relationships in face encoding, Proceedings of the National Academy of Sciences, USA, Vol. 106, pp. 5353-5358). Here, we probe further the importance of the eye region for the representation of facial identity. In the first experiment, we asked to what extent the chimeric benefit is specific to the eye region. Our results showed a benefit for including a positive eye region in a contrast negated face, whereas chimeric faces in which only the forehead, nose, or mouth regions were returned to positive contrast did not significantly improve recognition. In Experiment 2, we confirmed that the presence of positive contrast eyes alone does not account for the improved recognition of chimeric face images. Rather, it is the integration of information from the positive contrast eye region and the surrounding negative contrast face that is essential for the chimeric benefit. In Experiment 3, we demonstrated that the chimeric benefit is dependent on a holistic representation of the face. Finally, in Experiment 4, we showed that the positive contrast eye region needs to match the identity of the contrast negated part of the image for the chimera benefit to occur. Together, these results show the importance of the eye region for holistic representations of facial identity.
    Journal of Experimental Psychology: Human Perception and Performance 05/2013; 39(6). DOI:10.1037/a0032449 · 3.36 Impact Factor
  • ABSTRACT: Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of 1000 ambient images (images that are intended to be representative of those encountered in everyday life, see Jenkins, White, Van Montfort, & Burton, 2011). Experiment 2 then tested Oosterhof and Todorov's two-dimensional model on this extensive sample of face images. The original two dimensions were replicated and a novel 'youthful-attractiveness' factor also emerged. Experiment 3 successfully cross-validated the three-dimensional model using face averages directly constructed from the factor scores. These findings highlight the utility of the original trustworthiness and dominance dimensions, but also underscore the need to utilise varied face stimuli: with a more realistically diverse set of face images, social inferences from faces show a more elaborate underlying structure than hitherto suggested.
    Cognition 01/2013; 127(1):105-118. DOI:10.1016/j.cognition.2012.12.001 · 3.63 Impact Factor
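    The dimensional analysis in Experiment 2 (recovering trustworthiness/approachability, dominance and a youthful-attractiveness factor from trait ratings) can be illustrated with a short sketch. This is not the authors' analysis script; it assumes a simple exploratory factor analysis over a hypothetical faces-by-traits ratings matrix and a recent scikit-learn (the rotation argument requires version 0.24 or later).

        # Sketch only: extract three rotated factors from mean trait ratings of
        # ambient face images and inspect how each trait loads on each factor.
        from sklearn.decomposition import FactorAnalysis

        def trait_factors(ratings, n_factors=3):
            # ratings: n_faces x n_traits array of mean trait ratings
            fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
            scores = fa.fit_transform(ratings)   # per-face factor scores
            loadings = fa.components_.T          # trait loadings on each factor
            return scores, loadings

    Face averages built from images scoring high or low on each factor (as in Experiment 3) then give a direct visual check on what each dimension captures.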
  • Patrick Johnston · Angela Mayes · Matthew Hughes · Andrew W Young
    ABSTRACT: Because moving depictions of face emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage 'authentic' mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment in which participants made emotion discrimination and sex discrimination judgements to static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task-by-motion interaction. These regions were most active during emotion judgements to static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.
    Cortex 01/2013; 49(9). DOI:10.1016/j.cortex.2013.01.002 · 5.13 Impact Factor

Publication Stats

12k Citations
768.85 Total Impact Points


  • 1997–2015
    • The University of York
      • Department of Psychology
      • York Neuroimaging Centre (YNiC)
      York, England, United Kingdom
  • 1989–2007
    • Durham University
      • Department of Psychology
      Durham, England, United Kingdom
  • 2002–2006
    • University of Hull
      • Department of Psychology
      Kingston upon Hull, England, United Kingdom
    • Birkbeck, University of London
      London, England, United Kingdom
  • 2005
    • MRC Cognition and Brain Sciences Unit
      Cambridge, England, United Kingdom
  • 2004
    • King's College London
      • Department of Psychological Medicine
      London, England, United Kingdom
  • 1998–2000
    • CUNY Graduate Center
      New York City, New York, United States
  • 1996
    • Leiden University
      Leiden, South Holland, Netherlands
  • 1990–1995
    • University of Liverpool
      • School of Psychology
      Liverpool, England, United Kingdom
  • 1994
    • The University of Edinburgh
      Edinburgh, Scotland, United Kingdom
  • 1992
    • Utrecht University
      Utrecht, Utrecht, Netherlands
  • 1977–1992
    • Lancaster University
      • Department of Psychology
      Lancaster, England, United Kingdom
  • 1991
    • University of Nottingham
      • School of Psychology
      Nottingham, England, United Kingdom
  • 1976
    • University of Aberdeen
      • School of Psychology
      Aberdeen, Scotland, United Kingdom