Article

Neural correlates of processing facial identity based on features versus their spacing

Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ont., Canada.
Neuropsychologia (Impact Factor: 3.45). 05/2007; 45(7):1438-51. DOI: 10.1016/j.neuropsychologia.2006.11.016
Source: PubMed

ABSTRACT: Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.

Available from: Terri Loraine Lewis, Aug 16, 2015
    • "The low temporal resolution of fMRI may not have been sufficiently sensitive to capture asymmetries that occur more intensely under conditions of high temporal constraints (Blanca, Zalabardo, Gari-Criado, & Siles, 1994; Peyrin, Mermillod, Chokron, & Marendaz, 2006). Another explanation is that asymmetry may occur in other cortical areas that were not scanned (Maurer et al., 2007; Renzi, Schiavi, Carbon, Vecchi, Silvanto, & Cattaneo, 2013). "
    ABSTRACT: We present clinical and neurophysiological studies that show which brain areas are involved in face perception and how the right and left hemispheres perform holistic and analytic processing, depending on spatial frequency information. The hemispheric specialization of spatial frequency in face recognition is then reviewed and discussed, along with the limitations of previous work and suggestions for further investigation. Our conclusion is that functional sensory asymmetries may be the basis for high-level cognitive asymmetries. Keywords: face recognition, hemispheric specialization, holistic and analytic processing, spatial frequency.
    Psychology and Neuroscience 12/2014; 7(4):503-511. DOI:10.3922/j.psns.2014.4.09
    • "The enhanced processing of single regions (smiling mouth and angry eyes) at later categorization stages (P3b and LPP, and explicit recognition) is probably due more to their diagnostic value (see Calder et al., 2000; Calvo et al., 2014) than to saliency: Both the smiling mouth and the angry eyes facilitated categorization, and both are highly diagnostic of their respective expressions, yet the angry eyes are not salient. Altogether, our lateralization effects for N170 and EPN are consistent with those found by fMRI (e.g., Maurer et al., 2007), EEG (Scott and Nelson, 2006), and TMS (Renzi et al., 2013) research on face identity discrimination. Previous studies using these techniques have found an enhanced neural activity at several areas of the right or the left hemisphere during the processing of configural or featural aspects of the faces, respectively (see 1. Introduction). "
    ABSTRACT: This study investigated the neurocognitive mechanisms underlying the role of the eye and the mouth regions in the recognition of facial happiness, anger, and surprise. To this end, face stimuli were shown in three formats (whole face, upper half visible, and lower half visible) and behavioral categorization, computational modeling, and ERP (event-related potentials) measures were combined. N170 (150-180ms post-stimulus; right hemisphere) and EPN (early posterior negativity; 200-300ms; mainly, right hemisphere) were modulated by expression of whole faces, but not by separate halves. This suggests that expression encoding (N170) and emotional assessment (EPN) require holistic processing, mainly in the right hemisphere. In contrast, the mouth region of happy faces enhanced left temporo-occipital activity (150-180ms), and also the LPC (late positive complex; centro-parietal) activity (350-450ms) earlier than the angry eyes (450-600ms) or other face regions. Relatedly, computational modeling revealed that the mouth region of happy faces was also visually salient by 150ms following stimulus onset. This suggests that analytical or part-based processing of the salient smile occurs early (150-180ms) and lateralized (left), and is subsequently used as a shortcut to identify the expression of happiness (350-450ms). This would account for the happy face advantage in behavioral recognition tasks when the smile is visible.
    NeuroImage 02/2014; 92. DOI:10.1016/j.neuroimage.2014.01.048 · 6.36 Impact Factor
    • "In the scrambled faces, by changing both the original metric distances between the features (the second-order configuration) and the typical disposition of facial features (i.e., eyes above the nose, which is above the mouth; the first-order configuration), we assured that featural information was indeed manipulated independently of any other sources of configural information. Based on previous studies (Maurer et al., 2007; Rossion et al., 2000), we expect the RH to be more efficient in the analysis of facial configurations and the LH to be superior in the processing of facial features. "
    ABSTRACT: We investigated the lateralized processing of featural and configural information in face recognition in two divided visual field studies. In Experiment 1, participants matched the identity of a cue face containing either featural (scrambled faces) or configural (blurred faces) information with an intact test face presented subsequently in either the right visual field (RVF) or the left visual field (LVF). Unilateral presentation was controlled by monitoring eye movements. The results show an advantage of the left hemisphere (LH) over the right hemisphere (RH) for featural processing and a specialization of the RH for configural compared to featural processing. In Experiment 2, we focused on configural processing and its relationship to familiarity. Either learned or novel test faces were presented in the LVF or the RVF. Participants recognized learned faces better when they were presented in the LVF than in the RVF, suggesting that the RH has an advantage in the recognition of learned faces. Because the recognition of familiar faces relies strongly on configural information (Buttle & Raymond, 2003), we argue that the advantage of the RH over the LH in configural processing is a function of familiarity.
    Swiss Journal of Psychology 01/2014; 73(4-4):215-224. DOI:10.1024/1421-0185/a000140 · 0.57 Impact Factor