The Neural Basis of the Behavioral Face-Inversion Effect

McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
Current Biology (Impact Factor: 9.92). 01/2006; 15(24):2256-62. DOI: 10.1016/j.cub.2005.10.072
Source: PubMed

ABSTRACT Two of the most robust markers of "special" face processing are the behavioral face-inversion effect (FIE), the disproportionate drop in recognition of upside-down (inverted) faces relative to upright faces, and the face-selective fMRI response in the fusiform face area (FFA). However, the relationship between these two face-selective markers is unknown. Here we report that the behavioral FIE is closely associated with the fMRI response in the FFA, but not in other face-selective or object-selective regions. First, the FFA and the face-selective region in the superior temporal sulcus (fSTS), but not the occipital face area (OFA), showed a higher response to upright than to inverted faces; however, only in the FFA was this fMRI-FIE positively correlated across subjects with the behavioral FIE. Second, the FFA, but not the fSTS, showed greater neural sensitivity to differences between faces when they were upright than when they were inverted, suggesting a possible neural mechanism for the behavioral FIE. A similar trend was found in the OFA, but it was less robust than in the FFA. Taken together, our data suggest that among the face-selective and object-selective regions, the FFA is a primary neural source of the behavioral FIE.
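The key individual-differences analysis described in the abstract (relating each subject's behavioral FIE to the upright-minus-inverted response difference in a region) amounts to an across-subject Pearson correlation. A minimal sketch with synthetic, purely illustrative numbers — the subject count, accuracy ranges, and noise model are assumptions, not the paper's data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 12  # hypothetical sample size

# Hypothetical per-subject recognition accuracy for upright faces, and the
# accuracy drop with inversion (the behavioral FIE)
acc_upright = rng.uniform(0.80, 0.95, n_subjects)
behavioral_fie = rng.uniform(0.05, 0.30, n_subjects)
acc_inverted = acc_upright - behavioral_fie

# Hypothetical FFA fMRI-FIE (response to upright minus inverted faces),
# constructed here to covary with the behavioral effect plus noise
fmri_fie_ffa = 0.5 * behavioral_fie + rng.normal(0.0, 0.02, n_subjects)

# Across-subject correlation between the two FIE markers
r, p = pearsonr(behavioral_fie, fmri_fie_ffa)
print(f"across-subject correlation: r = {r:.2f}, p = {p:.3f}")
```

A positive, significant r in a region (here fabricated by construction) is what would link that region's inversion sensitivity to the behavioral effect; the paper reports this pattern only for the FFA.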

    • "Functional Magnetic Resonance Imaging (fMRI) studies have shown that these regions play a critical role in the recognition of facial identity. For instance, OFA and FFA fMRI activity is correlated with behavioral measures of face recognition ability (Yovel and Kanwisher, 2005; Kriegeskorte et al., 2007; Furl et al., 2011). In addition, brain injuries encompassing at least one of these regions often result in severe face recognition deficits (i.e., acquired prosopagnosia) (Barton, 2008; Rossion, 2008)."
    ABSTRACT: The ability to identify faces is mediated by a network of cortical and subcortical brain regions in humans. It is still a matter of debate which regions represent the functional substrate of congenital prosopagnosia (CP), a condition characterized by a lifelong impairment in face recognition and affecting around 2.5% of the general population. Here, we used functional Magnetic Resonance Imaging (fMRI) to measure neural responses to faces, objects, bodies, and body-parts in a group of seven CPs and ten healthy control participants. Using multi-voxel pattern analysis (MVPA) of the fMRI data, we demonstrate that neural activity within the "core" (i.e., occipital face area and fusiform face area) and "extended" (i.e., anterior temporal cortex) face regions in CPs showed reduced discriminability between faces and objects. Reduced differentiation between faces and objects in CP was also seen in the right parahippocampal cortex. In contrast, discriminability between faces and bodies/body-parts, and between objects and bodies/body-parts, was typical in CPs across the ventral visual system. In addition to the MVPA analysis, we also ran a traditional mass-univariate analysis, which failed to show any group differences in face and object discriminability. In sum, these findings demonstrate (i) face-object representation impairments in CP that encompass both the "core" and "extended" face regions, and (ii) the superior power of MVPA in detecting group differences.
    Frontiers in Human Neuroscience 11/2014; 8. DOI:10.3389/fnhum.2014.00925 · 2.90 Impact Factor
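The reduced "discriminability" that the MVPA analysis above reports can be illustrated with a classic correlation-based pattern analysis (Haxby-style: within-condition minus between-condition pattern correlations across independent runs). This is a sketch on synthetic voxel patterns — the voxel count, noise levels, and the `sep` parameter controlling face/object pattern separability are all assumptions for illustration, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 2000  # assumed ROI size

def make_patterns(sep):
    """Synthetic ROI patterns for two conditions (face, object) in two
    independent runs; `sep` controls how distinct the underlying
    condition-specific patterns are."""
    face_sig = rng.normal(0.0, 1.0, n_voxels)
    obj_sig = face_sig + rng.normal(0.0, sep, n_voxels)
    return {
        run: {
            "face": face_sig + rng.normal(0.0, 1.0, n_voxels),
            "object": obj_sig + rng.normal(0.0, 1.0, n_voxels),
        }
        for run in ("run1", "run2")
    }

def discriminability(runs):
    # Within-condition pattern correlation across runs...
    within = (np.corrcoef(runs["run1"]["face"], runs["run2"]["face"])[0, 1]
              + np.corrcoef(runs["run1"]["object"], runs["run2"]["object"])[0, 1]) / 2
    # ...minus between-condition correlation across runs
    between = (np.corrcoef(runs["run1"]["face"], runs["run2"]["object"])[0, 1]
               + np.corrcoef(runs["run1"]["object"], runs["run2"]["face"])[0, 1]) / 2
    return within - between

controls = discriminability(make_patterns(sep=1.0))  # distinct face/object patterns
cp_like = discriminability(make_patterns(sep=0.2))   # more overlapping patterns
print(f"face-object discriminability: controls {controls:.2f}, CP-like {cp_like:.2f}")
```

A positive within-minus-between score means the region's patterns distinguish faces from objects; a score near zero, as with the more overlapping `sep=0.2` patterns, mirrors the reduced face-object discriminability reported in the CP group.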
    • "The amplitude increase with inversion, also termed the N170 "face inversion effect" (FIE), is thus believed to reflect the disruption of early holistic processing stages specific to human faces and has been used as a hallmark of face specificity. At the neuronal level, this increase has been explained by the recruitment, in addition to face-sensitive neurons, of object-sensitive neurons (Itier and Taylor, 2002; Rossion et al., 1999; Sadeh and Yovel, 2010; Yovel and Kanwisher, 2005), other face-sensitive neurons tuned to the inverted orientation (Eimer et al., 2010), or eye-sensitive neurons (Itier et al., 2007). Itier et al. (2007) showed that, in contrast to intact faces, inversion of eyeless faces elicited a much reduced N170 FIE, and thus proposed that eyes play an important role in this early face-specific phenomenon, an idea reinforced by the replication of this finding in later studies (Itier et al., 2011; Kloth et al., 2013; Nemrodov and Itier, 2011)."
    ABSTRACT: Eyes are central to face processing; however, their role in early face encoding, as reflected by the N170 ERP component, is unclear. Using eye tracking to enforce fixation on specific facial features, we found that the N170 was larger for fixation on the eyes than for fixation on the forehead, nasion, nose, or mouth, which all yielded similar amplitudes. This eye sensitivity was seen in both upright and inverted faces and was lost in eyeless faces, demonstrating that it was due to the presence of eyes at the fovea. Upright eyeless faces elicited the largest N170 at nose fixation. Importantly, the N170 face inversion effect (FIE) was strongly attenuated in eyeless faces when fixation was on the eyes, less attenuated for nose fixation, and normal when fixation was on the mouth. These results suggest that the impact of eye removal on the N170 FIE is a function of the angular distance between the fixated feature and the eye location. We propose the Lateral Inhibition, Face Template and Eye Detector based (LIFTED) model, which accounts for all the present N170 results, including the FIE and its interaction with eye removal. Although eyes elicit the largest N170 response, reflecting the activity of an eye detector, the processing of upright faces is holistic and entails an inhibitory mechanism from neurons coding parafoveal information onto neurons coding foveal information. The LIFTED model provides a neuronal account of the holistic and featural processing involved in upright and inverted faces and offers precise predictions for further testing.
    NeuroImage 08/2014; 97:81–94. DOI:10.1016/j.neuroimage.2014.04.042 · 6.36 Impact Factor
    • "These results are consistent with the hypothesis that the FFA functions as an important neural locus of configural processing (Liu et al., 2010; Schiltz & Rossion, 2006), though it is not the only area underlying this special process (Rhodes et al., 2009). The FFA has been shown to be sensitive to first-order (Liu et al., 2010; Yovel & Kanwisher, 2005) and second-order configural information (Rhodes et al., 2009), as well as to holistic face templates (Betts & Wilson, 2010; James et al., 2013). Our results cannot discern which aspect of configural information the FFA is sensitive to, because all were disrupted in scrambled faces and preserved in blurred faces."
    ABSTRACT: We investigated how face-selective cortical areas process configural and componential face information and how the race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scanning, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, and blurred faces elicited responses similar to scrambled faces. Therefore, the FFA may be more tuned to configural than to componential information, whereas the OFA participates similarly in the perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
    Cognitive neuroscience 05/2014; 5(3-4):1-8. DOI:10.1080/17588928.2014.912207 · 2.38 Impact Factor