
Liu J, Harris A, Kanwisher N. Stages of processing in face perception: an MEG study

Department of Brain and Cognitive Sciences, NE20-443, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
Nature Neuroscience. 10/2002; 5(9):910-6. DOI: 10.1038/nn909
Source: PubMed

ABSTRACT

Here we used magnetoencephalography (MEG) to investigate stages of processing in face perception in humans. We found a face-selective MEG response occurring only 100 ms after stimulus onset (the 'M100'), 70 ms earlier than previously reported. Further, the amplitude of this M100 response was correlated with successful categorization of stimuli as faces, but not with successful recognition of individual faces, whereas the previously described face-selective 'M170' response was correlated with both processes. These data suggest that face processing proceeds through two stages: an initial stage of face categorization, and a later stage at which the identity of the individual face is extracted.
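The amplitude-behavior relationship described in the abstract (M100 amplitude tracking categorization success but not identification success) is, at its core, a correlation between a per-trial component amplitude and a binary behavioral outcome. Below is a minimal sketch of that kind of analysis on synthetic data, assuming SciPy's point-biserial correlation; the trial counts, amplitude values, and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch relating a per-trial MEG component amplitude (e.g. M100)
# to whether that trial's stimulus was successfully categorized as a face.
# Synthetic data; all numbers and names are illustrative assumptions.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
n_trials = 300
correct = rng.integers(0, 2, size=n_trials)                # 1 = categorized correctly
m100_amp = 50 + 5 * correct + rng.normal(0, 10, n_trials)  # component amplitude (a.u.)

# A positive point-biserial correlation would mirror the reported link
# between M100 amplitude and categorization success.
r, p = pointbiserialr(correct, m100_amp)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```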

    • "For example, the peak of the M170/N170, was shown to be delayed for inverted as compared to upright faces (Itier et al., 2006). Additionally, the M170 amplitude, unlike the amplitude of the N170, was shown to be larger for famous or personally familiar individuals (Kloth et al., 2006), or even for the unfamiliar faces that were nonetheless shown to be successfully identified by participants (Liu et al., 2002). Moreover, although the amplitude of M170 was found to be larger not only for faces, but also for face components, scrambling the inner components of the face with or without the face contour reduced "
    ABSTRACT: Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event-related field component of the MEG recording, like its EEG counterpart the N170, has repeatedly been shown to be associated with the structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRCs connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cue processing may assist in understanding the functioning of the social rank biobehavioral system.
    Article · Sep 2015 · Neuropsychologia
    • "Bruce and Young (1986) omitted face detection from their cognitive model because there was no evidence at the time that faces required special analysis, and because the relative timing of face detection versus face recognition was unclear. There is now considerable evidence that faces are processed separately from objects: face-selective responses have been recorded at a single cell level (Foldiak, Xiao, Keysers, Edwards & Perrett, 2004; Tsao, Moeller & Freiwald, 2008), as well as through neuroimaging (Kanwisher, McDermott & Chun, 1997; Liu, Harris & Kanwisher, 2002; McCarthy, Puce & Gore, 1997) and event-related potentials (Bentin, Allison , Puce, Perez & McCarthy, 1996; Botzel, Schulze & Stodieck, 1995; Jeffreys, 1989). Evidence from transcranial magnetic stimulation (Pitcher, Charles, Devlin, Walsh & Duchaine, 2009) and neuropsychology (Duchaine et al., 2006; Moscovitch, Winocur & Behrmann, 1997) further supports the view that face processing and object processing are dissociable. "
    ABSTRACT: Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
    Article · May 2015 · Developmental Science
    • "Important to understanding the retrieval dynamics in this behavioral paradigm is the shift we observed in the dominant coding of information in evoked responses from 200 ms to 400 ms poststimulus . Information in the visual system up to 200 ms post-stimulus may hew closely to the form of the stimulus that was presented (Tanaka and Curran, 2001; VanRullen and Thorpe, 2001; Liu et al., 2002; Schiff et al., 2006; Rossion and Jacques, 2008). This is consistent with our finding that spatial patterns of activity evoked by different exemplars within a category were relatively distinct and that individual fractals were better classified at this time bin. "
    ABSTRACT: eLife digest Seeing an object triggers a complex and carefully orchestrated dance of brain activity. The spatial pattern of the brain activity encoding the object can change multiple times even within the first second of seeing the object. These rapid changes appear to be a core feature of how the brain understands and processes objects. Yet little is known about how these patterns unfold through time when we remember an object. Remembering, or retrieving information about objects, is how we use our knowledge of the world to make good decisions. It is not clear whether, during remembering, there are rapid changes in the patterns similar to those that happen when directly seeing an object. Mapping brain activity during remembering could help us understand how stored information can guide decisions. Using recently developed methods in brain imaging and statistics, Kurth-Nelson et al. found that two distinct patterns of brain activity appeared when viewing particular objects. One occurred around 200 milliseconds after viewing an object, and the other appeared a bit later, by about 400 milliseconds. Later, when remembering the object, these patterns reappeared in the brain, but at different points in time. Furthermore, these two patterns had distinct roles in learning associated with the objects to guide later decisions. This work shows that rapid changes in the pattern of neuronal activity are central to how stored information is retrieved and used to make decisions. DOI: http://dx.doi.org/10.7554/eLife.04919.002
    Article · Jan 2015 · eLife Sciences
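The pattern-classification analysis described in the excerpt and digest above can be illustrated as time-resolved decoding: train a classifier on the spatial pattern of sensor activity at each time bin and ask when stimulus category becomes decodable. Below is a minimal sketch on synthetic data, assuming a scikit-learn-style workflow; the array shapes, classifier choice, and variable names are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch of time-resolved decoding of evoked MEG responses,
# in the spirit of the pattern-classification analysis described above.
# Synthetic data; shapes, names, and the classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_bins = 200, 102, 60              # epochs, channels, time bins
X = rng.standard_normal((n_trials, n_sensors, n_bins))  # evoked responses
y = rng.integers(0, 2, size=n_trials)                   # stimulus category per trial

# Classify the spatial pattern at each time bin separately; the resulting
# accuracy curve shows when category information becomes decodable
# (e.g. the ~200 ms and ~400 ms patterns discussed above).
scores = [
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_bins)
]
best = int(np.argmax(scores))
print(f"peak decoding accuracy {scores[best]:.2f} at time bin {best}")
```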