Article

Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718-724.

Laboratory of Integrative Neuroscience and Cognition, Georgetown University Medical Center, Washington, DC, USA.
Nature Neuroscience (Impact Factor: 14.98). 06/2009; 12(6):718-24. DOI: 10.1038/nn.2331
Source: PubMed

ABSTRACT: Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes, linking speech perception and production.

Available from: Josef P Rauschecker, Aug 28, 2015
    • "Parallel processing is most prominently known from the vertebrate visual system (Livingstone and Hubel 1988), where the color and shape of a stimulus are analyzed in parallel with any motion of the stimulus. A similar distribution of stimulus features across different pathways has been described in the auditory (Rauschecker and Scott, 2009) and the somatosensory systems (Gasser and Erlanger, 1929; Reed et al., 2005). In insects, parallel pathways have been described both in vision (Ribi and Scheel, 1981; Fischbach and Dittrich, 1989; Strausfeld et al., 2006; Paulk et al., 2008, 2009) and audition (Helversen and Helversen, 1995). "
    ABSTRACT: To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms like parallel processing and coincidence detection. Parallel processing (e.g. in the visual system), increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and memory or stimulus feature detection (e.g. in auditory delay lines). Within the dual olfactory pathway of the honeybee both of these mechanisms might be implemented by uniglomerular projection neurons (PNs) that transfer information from the primary olfactory centers, the antennal lobe (AL), to a multimodal integration center, the mushroom body (MB). PNs from anatomically distinct tracts respond to the same stimulus space, but have different physiological properties, characteristics that are prerequisites for parallel processing of different stimulus aspects. However, the PN pathways also display mirror-imaged like anatomical trajectories that resemble neuronal coincidence detectors as known from auditory delay lines. To investigate temporal processing of olfactory information, we recorded PN odor responses simultaneously from both tracts and measured coincident activity of PNs within and between tracts. Our results show that coincidence levels are different within each of the two tracts. Coincidence also occurs between tracts, but to a minor extent compared to coincidence within tracts. Taken together our findings support the relevance of spike timing in coding of olfactory information (temporal code).
    Frontiers in Physiology 07/2015; 6(208). DOI:10.3389/fphys.2015.00208 · 3.50 Impact Factor
    • "Findings from visuo-motor learning (den Ouden et al. 2010) and models of speech learning (Hickok and Poeppel 2007; Rauschecker and Scott 2009) also propose an interplay of prediction and feedback signals between auditory, motor, and association areas. This interpretation also fits with the concept of an efference copy of expected perceptual or action outcomes as an "online" top-down model, with parallel "offline" use of this model in mental imagery (Grush 2004; Rauschecker and Scott 2009). Our data thus match these combined concepts both in the regions of change and in the parallelism of changes across the Imagery and Listen tasks. "
    ABSTRACT: Skill learning results in changes to brain function, but at the same time individuals strongly differ in their abilities to learn specific skills. Using a 6-week piano-training protocol and pre- and post-fMRI of melody perception and imagery in adults, we dissociate learning-related patterns of neural activity from pre-training activity that predicts learning rates. Fronto-parietal and cerebellar areas related to storage of newly learned auditory-motor associations increased their response following training; in contrast, pre-training activity in areas related to stimulus encoding and motor control, including right auditory cortex, hippocampus, and caudate nuclei, was predictive of subsequent learning rate. We discuss the implications of these results for models of perceptual and of motor learning. These findings highlight the importance of considering individual predisposition in plasticity research and applications.
    Cerebral Cortex 07/2015; DOI:10.1093/cercor/bhv138 · 8.67 Impact Factor
    • "Regions in yellow, including posterior parts of temporal cortex and a more anterior region of ventral prefrontal cortex, are not recruited for easier intelligible speech, but increase activity when the processing load increases. These two patterns highlight a "core" speech processing network that is active for more basic auditory sentence processing (Davis and Johnsrude, 2003; Peelle et al., 2010a; Rauschecker and Scott, 2009), and an expanded associative network that is differentially engaged as linguistic demands increase (Peelle, 2012; Wingfield and Grossman, 2006). "
    ABSTRACT: The functional neuroanatomy of speech processing has been investigated using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) for more than 20 years. However, these approaches have relatively poor temporal resolution and/or challenges of acoustic contamination due to the constraints of echoplanar fMRI. Furthermore, these methods are contraindicated because of safety concerns in longitudinal studies and research with children (PET) or in studies of patients with metal implants (fMRI). High-density diffuse optical tomography (HD-DOT) permits presenting speech in a quiet acoustic environment, has excellent temporal resolution relative to the hemodynamic response, and provides noninvasive and metal-compatible imaging. However, the performance of HD-DOT in imaging the brain regions involved in speech processing is not fully established. In the current study, we use an auditory sentence comprehension task to evaluate the ability of HD-DOT to map the cortical networks supporting speech processes. Using sentences with two levels of linguistic complexity, along with a control condition consisting of unintelligible noise-vocoded speech, we recovered a hierarchical organization of the speech network that matches the results of previous fMRI studies. Specifically, hearing intelligible speech resulted in increased activity in bilateral temporal cortex and left frontal cortex, with syntactically complex speech leading to additional activity in left posterior temporal cortex and left inferior frontal gyrus. These results demonstrate the feasibility of using HD-DOT to map spatially distributed brain networks supporting higher-order cognitive faculties such as spoken language.
    NeuroImage 05/2015; 117. DOI:10.1016/j.neuroimage.2015.05.058 · 6.36 Impact Factor