
Common and distinct brain activation to viewing dynamic sequences of face and hand movements

Department of Psychology and Krasnow Institute for Advanced Study, George Mason University, MSN 3F5, Fairfax, VA 22030, USA.
NeuroImage (Impact Factor: 6.36). 10/2007; 37(3):966-73. DOI: 10.1016/j.neuroimage.2007.05.058
Source: PubMed

ABSTRACT: The superior temporal sulcus (STS) and surrounding lateral temporal and inferior parietal cortices are an important part of a network involved in the processing of biological movement. It is unclear whether the STS responds to the movement of different body parts uniformly, or if the response depends on the body part that is moving. Here we examined brain activity during the recognition of sequences of face and hand movements, as well as radial grating motion, controlling for differences in movement dynamics between stimuli. A region of the right posterior STS (pSTS) showed common activation to both face and hand motion, relative to radial grating motion, with no significant difference between responses to face and hand motion in this region. Distinct responses to face motion relative to hand motion were observed in the right mid-STS, while the right posterior inferior temporal sulcus (pITS) and inferior parietal lobule (IPL) showed greater responses to hand motion relative to face motion. These findings indicate that while there may be distinct processing of different body-part motion in lateral temporal and inferior parietal cortices, the response of the pSTS is not body-part specific. This region may provide other parts of a network involved with processing human actions with a high-level visual description of biological motion.

    • "In the sensory realm, additional studies have reported that nearby or overlapping cortical regions (including the middle temporal gyrus and the STS) are activated during face processing, especially during the encoding of eye gaze direction (Puce et al. 1998; Pelphrey et al. 2005; Engell and Haxby 2007; Ethofer et al. 2011) and facial expression (Haxby et al. 2000; Winston et al. 2004; Engell and Haxby 2007; Said et al. 2011), and also the visual interpretation of biological motion (Puce et al. 1996; Beauchamp et al. 2003; Thompson et al. 2007; Fox et al. 2009; Jastorff and Orban 2009; Pinsk et al. 2009; Furl et al. 2011; Julian et al. 2012; Avidan et al. 2014). Presumably, all these processes involve interpretations of actions in other people, based on comparison with internal representations of analogous experiences in the observer."
    ABSTRACT: Previous studies have attributed multiple diverse roles to the posterior superior temporal cortex (STC), both visually driven and cognitive, including part of the default mode network (DMN). Here, we demonstrate a unifying property across this multimodal region. Specifically, the lateral intermediate (LIM) portion of STC showed an unexpected feature: a progressively decreasing fMRI response to increases in visual stimulus size (or number). Such responses are reversed in sign, relative to well-known responses in classic occipital temporal visual cortex. In LIM, this "reversed" size function was present across multiple object categories and retinotopic eccentricities. Moreover, we found a significant interaction between the LIM size function and the distribution of subjects' attention. These findings suggest that LIM serves as a part of the DMN. Further analysis of functional connectivity, plus a meta-analysis of previous fMRI results, suggests that LIM is a heterogeneous area including different subdivisions. Surprisingly, analogous fMRI tests in macaque monkeys did not reveal a clear homolog of LIM. This interspecies discrepancy supports the idea that self-referential thinking and theory of mind are more prominent in humans, compared with monkeys.
    Cerebral Cortex 12/2014; DOI:10.1093/cercor/bhu290 · 8.67 Impact Factor
    • "The deviation detection on the right could be more tightly integrated into a system responsive to social and affective signals (Puce et al., 2003), for which an inventory of combinatorially arranged categories such as phonemes is not required. For example, the right-hemisphere sensitivity to smaller stimulus deviations could be related to the processing of emotional or visual attention stimuli (Puce et al., 1998, 2000, 2003; Wheaton et al., 2004; Thompson et al., 2007)."
    ABSTRACT: The visual mismatch negativity (vMMN), deriving from the brain's response to stimulus deviance, is thought to be generated by the cortex that represents the stimulus. The vMMN response to visual speech stimuli was used in a study of the lateralization of visual speech processing. Previous research suggested that the right posterior temporal cortex has specialization for processing simple non-speech face gestures, and the left posterior temporal cortex has specialization for processing visual speech gestures. Here, visual speech consonant-vowel (CV) stimuli with controlled perceptual dissimilarities were presented in an electroencephalography (EEG) vMMN paradigm. The vMMNs were obtained using the comparison of event-related potentials (ERPs) for separate CVs in their roles as deviant vs. their roles as standard. Four separate vMMN contrasts were tested, two with the perceptually far deviants (i.e., "zha" or "fa") and two with the near deviants (i.e., "zha" or "ta"). Only far deviants evoked the vMMN response over the left posterior temporal cortex. All four deviants evoked vMMNs over the right posterior temporal cortex. The results are interpreted as evidence that the left posterior temporal cortex represents speech contrasts that are perceived as different consonants, and the right posterior temporal cortex represents face gestures that may not be perceived as different CVs.
    Frontiers in Human Neuroscience 07/2013; 7:371. DOI:10.3389/fnhum.2013.00371 · 2.90 Impact Factor
    • "One study reports decoding of dynamic expressions from human STS (Said et al., 2010), while other studies suggest that this region may integrate form and motion information during face perception (Puce et al., 2003). Although the posterior STS is sometimes face-selective, motion sensitivity in posterior STS is not specific to faces (Thompson et al., 2007). Non-face biological motion representation in the posterior STS has been widely studied (Giese and Poggio, 2003), and patients with right-hemisphere temporal lobe lesions anterior to MT+/V5 show impaired biological motion perception (Vaina and Gross, 2004)."
    ABSTRACT: Humans adeptly use visual motion to recognize socially relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We used functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas [motion in faces (Mf) areas], which responded more to dynamic faces compared with static faces, and face-selective areas, which responded selectively to faces compared with objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and nonconfusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in fundus of the superior temporal and middle superior temporal polysensory/lower superior temporal areas, confirming their already well established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than to static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion.
    The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 11/2012; 32(45):15952-62. DOI:10.1523/JNEUROSCI.1992-12.2012 · 6.75 Impact Factor