Neural correlates of processing facial identity based on features versus their spacing

Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ont., Canada.
Neuropsychologia (Impact Factor: 3.3). 05/2007; 45(7):1438-51. DOI: 10.1016/j.neuropsychologia.2006.11.016
Source: PubMed


Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.

  • Source
    • "When Yovel and Duchaine (2006) tested their participants with other faces wearing no make-up, they found that prosopagnosics showed a reduced sensitivity to both types of information. Note, though, that their face stimuli without make-up were criticized for having configural modifications beyond natural limits (as discussed in Maurer et al., 2007). It was also shown that prosopagnosics obtained significantly lower recognition scores than controls for both featural and configural information in another study using blurred (disrupted featural information with intact configural information) and scrambled (disrupted configural information with intact featural information) face stimuli (Lobmaier et al., 2010). "
    ABSTRACT: Congenital prosopagnosia, an innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (tested, among other measures, with the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information in faces. The test battery also revealed some new findings. Whereas controls recognized moving faces better than static faces, prosopagnosics did not show this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which our study is the first to demonstrate at the group level. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability of holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach’s alpha) on the CFMT compared to controls. We suggest that compensatory strategies employed by the prosopagnosics may underlie the wide variety of response patterns revealed by the reduced test reliability. This finding raises the question of whether classical face tests measure the same perceptual processes in controls and prosopagnosics.
    Preview · Article · Jan 2016 · i-Perception
  • Source
    • "The low temporal resolution of fMRI may not have been sufficiently sensitive to capture asymmetries that occur more intensely under conditions of high temporal constraints (Blanca, Zalabardo, García-Criado, & Siles, 1994; Peyrin, Mermillod, Chokron, & Marendaz, 2006). Another explanation is that the asymmetry may occur in other cortical areas that were not scanned (Maurer et al., 2007; Renzi, Schiavi, Carbon, Vecchi, Silvanto, & Cattaneo, 2013). "
    ABSTRACT: We review clinical and neurophysiological studies that show which brain areas are involved in face perception and how the right and left hemispheres perform holistic and analytic processing, depending on spatial frequency information. The hemispheric specialization of spatial frequency in face recognition is then reviewed and discussed, along with the limitations of previous work and suggestions for further investigation. Our conclusion is that functional sensory asymmetries may form the basis of high-level cognitive asymmetries. Keywords: face recognition, hemispheric specialization, holistic and analytic processing, spatial frequency.
    Full-text · Article · Dec 2014 · Psychology and Neuroscience
  • Source
    • "The enhanced processing of single regions (smiling mouth and angry eyes) at later categorization stages (P3b and LPP, and explicit recognition) is probably due more to their diagnostic value (see Calder et al., 2000; Calvo et al., 2014) than to saliency: Both the smiling mouth and the angry eyes facilitated categorization, and both are highly diagnostic of their respective expressions, yet the angry eyes are not salient. Altogether, our lateralization effects for N170 and EPN are consistent with those found by fMRI (e.g., Maurer et al., 2007), EEG (Scott and Nelson, 2006), and TMS (Renzi et al., 2013) research on face identity discrimination. Previous studies using these techniques have found an enhanced neural activity at several areas of the right or the left hemisphere during the processing of configural or featural aspects of the faces, respectively (see 1. Introduction). "
    ABSTRACT: This study investigated the neurocognitive mechanisms underlying the role of the eye and the mouth regions in the recognition of facial happiness, anger, and surprise. To this end, face stimuli were shown in three formats (whole face, upper half visible, and lower half visible), and behavioral categorization, computational modeling, and ERP (event-related potential) measures were combined. N170 (150-180ms post-stimulus; right hemisphere) and EPN (early posterior negativity; 200-300ms; mainly right hemisphere) were modulated by the expression of whole faces, but not by separate halves. This suggests that expression encoding (N170) and emotional assessment (EPN) require holistic processing, mainly in the right hemisphere. In contrast, the mouth region of happy faces enhanced left temporo-occipital activity (150-180ms), and also enhanced LPC (late positive complex; centro-parietal) activity (350-450ms) earlier than the angry eyes (450-600ms) or other face regions did. Relatedly, computational modeling revealed that the mouth region of happy faces was also visually salient by 150ms following stimulus onset. This suggests that analytical or part-based processing of the salient smile occurs early (150-180ms) and is lateralized to the left hemisphere, and is subsequently used as a shortcut to identify the expression of happiness (350-450ms). This would account for the happy face advantage in behavioral recognition tasks when the smile is visible.
    Full-text · Article · Feb 2014 · NeuroImage