Liu J, Harris A, Kanwisher N. Stages of processing in face perception: an MEG study

Department of Brain and Cognitive Sciences, NE20-443, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
Nature Neuroscience (Impact Factor: 16.1). 10/2002; 5(9):910-6. DOI: 10.1038/nn909
Source: PubMed


Here we used magnetoencephalography (MEG) to investigate stages of processing in face perception in humans. We found a face-selective MEG response occurring only 100 ms after stimulus onset (the 'M100'), 70 ms earlier than previously reported. Further, the amplitude of this M100 response was correlated with successful categorization of stimuli as faces, but not with successful recognition of individual faces, whereas the previously described face-selective 'M170' response was correlated with both processes. These data suggest that face processing proceeds through two stages: an initial stage of face categorization, and a later stage at which the identity of the individual face is extracted.

    • "For example, the peak of the M170/N170 was shown to be delayed for inverted as compared to upright faces (Itier et al., 2006). Additionally, the M170 amplitude, unlike the amplitude of the N170, was shown to be larger for famous or personally familiar individuals (Kloth et al., 2006), or even for unfamiliar faces that were nonetheless successfully identified by participants (Liu et al., 2002). Moreover, although the amplitude of the M170 was found to be larger not only for faces but also for face components, scrambling the inner components of the face with or without the face contour reduced …"
    ABSTRACT: Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event-related field component of the MEG recording, like its EEG counterpart the N170, has repeatedly been shown to be associated with structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that the activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRCs connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cue processing may assist in understanding the functioning of the social rank biobehavioral system.
    Neuropsychologia 09/2015; 78. DOI:10.1016/j.neuropsychologia.2015.09.030 · 3.30 Impact Factor
    • "Bruce and Young (1986) omitted face detection from their cognitive model because there was no evidence at the time that faces required special analysis, and because the relative timing of face detection versus face recognition was unclear. There is now considerable evidence that faces are processed separately from objects: face-selective responses have been recorded at the single-cell level (Foldiak, Xiao, Keysers, Edwards & Perrett, 2004; Tsao, Moeller & Freiwald, 2008), as well as through neuroimaging (Kanwisher, McDermott & Chun, 1997; Liu, Harris & Kanwisher, 2002; McCarthy, Puce & Gore, 1997) and event-related potentials (Bentin, Allison, Puce, Perez & McCarthy, 1996; Botzel, Schulze & Stodieck, 1995; Jeffreys, 1989). Evidence from transcranial magnetic stimulation (Pitcher, Charles, Devlin, Walsh & Duchaine, 2009) and neuropsychology (Duchaine et al., 2006; Moscovitch, Winocur & Behrmann, 1997) further supports the view that face processing and object processing are dissociable."
    ABSTRACT: Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
    Developmental Science 05/2015; 15(12). DOI:10.1111/desc.12311 · 3.89 Impact Factor
    • "Important to understanding the retrieval dynamics in this behavioral paradigm is the shift we observed in the dominant coding of information in evoked responses from 200 ms to 400 ms post-stimulus. Information in the visual system up to 200 ms post-stimulus may hew closely to the form of the stimulus that was presented (Tanaka and Curran, 2001; VanRullen and Thorpe, 2001; Liu et al., 2002; Schiff et al., 2006; Rossion and Jacques, 2008). This is consistent with our finding that spatial patterns of activity evoked by different exemplars within a category were relatively distinct and that individual fractals were better classified at this time bin."
    ABSTRACT: Electrophysiological data disclose rich dynamics in patterns of neural activity evoked by sensory objects. Retrieving objects from memory reinstates components of this activity. In humans, the temporal structure of this retrieved activity remains largely unexplored, and here we address this gap using the spatiotemporal precision of magnetoencephalography (MEG). In a sensory preconditioning paradigm, 'indirect' objects were paired with 'direct' objects to form associative links, and the latter were then paired with rewards. Using multivariate analysis methods we examined the short-time evolution of neural representations of indirect objects retrieved during reward-learning about direct objects. We found two components of the evoked representation of the indirect stimulus, 200 ms apart. The strength of retrieval of one, but not the other, representational component correlated with generalization of reward learning from direct to indirect stimuli. We suggest the temporal structure within retrieved neural representations may be key to their function.
    eLife Sciences 01/2015; 4(4). DOI:10.7554/eLife.04919 · 9.32 Impact Factor