Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718-724

Laboratory of Integrative Neuroscience and Cognition, Georgetown University Medical Center, Washington, DC, USA.
Nature Neuroscience (Impact Factor: 16.1). 06/2009; 12(6):718-24. DOI: 10.1038/nn.2331
Source: PubMed

ABSTRACT

Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production.


Available from: Josef P Rauschecker
    • "In principle, within a complex neuronal network, the primary auditory cortices (A1) extract the acoustic features from the signal (Hickok et al., 2004; Scott & Johnsrude, 2003), whereas some non-primary auditory areas, such as the superior temporal gyrus (STG) and the superior temporal sulcus (STS), map the acoustic features onto phonological representations (Hickok et al., 2004; Scott & Johnsrude, 2003; Rauschecker & Scott, 2009). At this level of analysis, some disagreement remains in predicting the lateralization of the involved structures, between those who support bilateral analysis of the signal (Hickok et al., 2004, 2007; Scott & Johnsrude, 2003) and those who assume left-hemispheric dominance for the phonological mapping (Scott & Johnsrude, 2003; Rauschecker & Scott, 2009) and the categorization processes (Obleser & Eisner, 2009). However, beyond some localizationist divergences, robust evidence has widely suggested that the cortical organization of the auditory fields relies on topographical principles suitable to explain how the brain processes phonemes (Romani et al., 1982; Talavage et al., 2004; Saenz & Langers, 2014)."
    ABSTRACT: By exploiting the N1 component of the auditory event-related potentials (AEPs), we measured and localized the processing involving the spectrotemporal and the abstract featural representation of the Salento Italian five-vowel system. Findings showed two distinct N1 sub-components: the N1a, peaking at 125-135 ms, localized in the primary auditory cortex (BA41) bilaterally, and the N1b, peaking at 145-155 ms and localized in the superior temporal gyrus (BA22) with a strong leftward lateralization. Crucially, while high vowels elicited higher amplitudes than non-high vowels in both the N1a and N1b, back vowels generated later responses than non-back vowels in the N1b only. Overall, these findings suggest a hierarchical processing in which, from the N1a to the N1b, the acoustic analysis shifts progressively toward the computation and representation of phonological features.
    Introduction: Speech comprehension requires accurate perceptual capacities, which consist of processing the rapid sequential information embedded in the acoustic signal and decoding it onto abstract units of representation. It is assumed that the mapping principles exploited by the human brain to construct a sound percept are determined by bottom-up acoustic properties that are affected by top-down features based on abstract featural information relating to articulator positions (Stevens 2002). Such features, called distinctive features, would represent the primitives for phonological computation and representation (Halle 2002). Therefore, one of the central aspects for understanding the speech-processing mechanisms is to discover how these phonetic and phonological operations are implemented at a neuronal level to shape the mental representations of the speech sounds.
    Full-text · Chapter · Feb 2016
    • "In contrast, the "what" pathway was proposed to originate from anterior lateral belt areas and to project toward the temporal pole. Human functional imaging studies support the "where" part of the hypothesis in that auditory spatial tasks tend to activate parietal and prefrontal areas that also are activated during visual spatial tasks (Rauschecker and Scott, 2009; Recanzone and Cohen, 2010"
    ABSTRACT: The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance. © 2015 Elsevier B.V. All rights reserved.
    Full-text · Article · Dec 2015 · Handbook of Clinical Neurology
    • "The core–belt–parabelt hierarchic model has become a useful framework for studying functional organization of the human auditory cortex (Hackett, 2003, 2007, 2008; Rauschecker and Scott, 2009). In humans, the auditory core region has been localized to the posteromedial two-thirds of the first transverse gyrus of Heschl (HG) on the superior temporal plane. "
    ABSTRACT: This chapter provides an overview of current invasive recording methodology and experimental paradigms used in the studies of human auditory cortex. Invasive recordings can be obtained from neurosurgical patients undergoing clinical electrophysiologic evaluation for medically refractory epilepsy or brain tumors. This provides a unique research opportunity to study the human auditory cortex with high resolution both in time (milliseconds) and space (millimeters) and to generate valuable information about its organization and function. A historic overview presents the development of the experimental approaches from the pioneering works of Wilder Penfield to modern day. Practical issues regarding research subject population, stimulus presentation, data collection, and analysis are discussed for acute (intraoperative) and chronic experiments. Illustrative examples are provided from experimental paradigms, including studies of spectrotemporal processing, functional connectivity, and functional lesioning in human auditory cortex. © 2015 Elsevier B.V. All rights reserved.
    No preview · Article · Dec 2015 · Handbook of Clinical Neurology