
Serial and Parallel Processing in the Human Auditory Cortex: A Magnetoencephalographic Study

Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki 444-8585, Japan.
Cerebral Cortex, February 2006; 16(1):18-30. DOI: 10.1093/cercor/bhi080
Source: PubMed

ABSTRACT

Although anatomical, histochemical and electrophysiological findings in both animals and humans have suggested parallel and serial modes of auditory processing, the precise activation timing of each cortical area is not well known, especially in humans. We investigated the timing of signal arrival at multiple cortical areas using magnetoencephalography in humans. Following click stimuli applied to the left ear, activations were found in six cortical areas in the right hemisphere: the posteromedial part of Heschl's gyrus (HG), corresponding to the primary auditory cortex (PAC); the anterolateral part of the HG region, on or posterior to the transverse sulcus; the posterior parietal cortex (PPC); the posterior and anterior parts of the superior temporal gyrus (STG); and the planum temporale (PT). The mean onset latencies of these cortical activities were 17.1, 21.2, 25.3, 26.2, 30.9 and 47.6 ms, respectively. These results suggested a serial model of auditory processing along the mediolateral axis of the supratemporal plane and, in addition, implied the existence of several parallel streams running postero-superiorly (from the PAC to the belt region and then to the posterior STG, PPC or PT) and anteriorly (PAC–belt–anterior STG).
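As a quick way to see the argument, the sketch below (Python; purely illustrative, not part of the study) pairs each reported source with its mean onset latency in the order listed above and checks that latencies increase along the two streams named in the abstract. The "belt" label for the anterolateral HG source and the exact stream composition are our reading of the abstract's wording.

```python
# Illustrative pairing of the six MEG sources with their mean onset latencies
# (ms), in the order given in the abstract ("respectively").
onset_latency_ms = {
    "posteromedial HG (PAC)": 17.1,
    "anterolateral HG (belt)": 21.2,
    "posterior parietal cortex (PPC)": 25.3,
    "posterior STG": 26.2,
    "anterior STG": 30.9,
    "planum temporale (PT)": 47.6,
}

# Two of the parallel streams suggested by the abstract; these groupings are
# an interpretation of its wording, not the authors' formal model.
streams = {
    "postero-superior": ["posteromedial HG (PAC)", "anterolateral HG (belt)", "posterior STG"],
    "anterior": ["posteromedial HG (PAC)", "anterolateral HG (belt)", "anterior STG"],
}

# A serial (feed-forward) account predicts non-decreasing onset latencies
# along each stream; this loop simply restates that prediction.
for name, areas in streams.items():
    latencies = [onset_latency_ms[a] for a in areas]
    assert latencies == sorted(latencies), f"{name} stream is not serial"
    print(f"{name}: " + " -> ".join(f"{t:.1f} ms" for t in latencies))
```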

CITATIONS

    • "As for the N1 generators, pioneering N1m investigations on tones, clicks, and bursts encoding suggested that the N1m cortical origins were located in primary auditory areas as the lower bank of the lateral sulcus (Diesch et al., 1996; Pantev et al., 1995). Recently, it has been shown that the N1m may be also originated in the supra temporal gyrus (STG) and in the planum temporale (Inui et al., 2006) suggesting a crucial role for the final (May & Tiitinen 2010) rather than for the initial stages (Näätänen & Picton 1987) of the sensorial data processing. MEG data have shown that the effects of the vowel spectral shape on the auditory activity are reflected in the N1m amplitude and latency modulations (Diesch et al. 1996; Diesch & Luce, 1997, 2000; Eulitz et al., 2004; Roberts et al., 2004; Mäkelä et al., 2003; Obleser et al., 2003a, 2004a; Shestakova et al., 2004; Scharinger et al., 2011). "
    ABSTRACT: By exploiting the N1 component of the auditory event-related potentials (AEPs), we measured and localized the processing of the spectrotemporal and abstract featural representations of the five-vowel system of Salento Italian. Findings showed two distinct N1 sub-components: an N1a peaking at 125-135 ms, localized bilaterally in the primary auditory cortex (BA41), and an N1b peaking at 145-155 ms, localized in the superior temporal gyrus (BA22) with a strong leftward lateralization. Crucially, while high vowels elicited higher amplitudes than non-high vowels in both the N1a and the N1b, back vowels generated later responses than non-back vowels in the N1b only. Overall, these findings suggest a hierarchical processing in which, from the N1a to the N1b, the acoustic analysis shifts progressively toward the computation and representation of phonological features.
    Introduction: Speech comprehension requires accurate perceptual capacities, consisting of the processing of rapid sequential information embedded in the acoustic signal and of its decoding onto abstract units of representation. It is assumed that the mapping principles exploited by the human brain to construct a sound percept are determined by bottom-up acoustic properties that are affected by top-down features based on abstract featural information relating to articulator positions (Stevens, 2002). Such features, called distinctive features, would represent the primitives for phonological computation and representation (Halle, 2002). Therefore, one of the central questions in understanding speech processing is how these phonetic and phonological operations are implemented at the neuronal level to shape the mental representations of speech sounds.
    Chapter · Feb 2016
    • "More specifically, assuming a 10-ms signal delay from cochlea to cortex (Liegeois Chauvel et al., 1991), the earliest onset latencies in the core, belt and parabelt were 17, 33 and 51 ms, respectively. These agree well with non-invasive results from the human auditory cortex , where corresponding serial activation occurs in the 17–48 ms range (Inui et al., 2006). The mean delay between response onset and maximum firing rate was 32 ms. "
    ABSTRACT: Incoming sounds are represented in the context of preceding events, and this requires a memory mechanism that integrates information over time. Here, it was demonstrated that response adaptation, the suppression of neural responses due to stimulus repetition, might reflect a computational solution that auditory cortex uses for temporal integration. Adaptation is observed in single-unit measurements as two-tone forward-masking effects and as stimulus-specific adaptation (SSA). In non-invasive observations, the amplitude of the auditory N1m response adapts strongly with stimulus repetition, and it is followed by response recovery (the so-called mismatch response) to rare deviant events. The current computational simulations described the serial core-belt-parabelt structure of auditory cortex and included synaptic adaptation, the short-term, activity-dependent depression of excitatory corticocortical connections. It was found that synaptic adaptation is sufficient for columns to respond selectively to tone pairs and complex tone sequences. These responses were defined as combination-sensitive, and thus as reflecting temporal integration, when a strong response to a stimulus sequence was coupled with weaker responses both to the time-reversed sequence and to the isolated sequence elements. The temporal complexity of the stimulus seemed to be reflected in the proportion of combination-sensitive columns across the different regions of the model. Our results suggest that while synaptic adaptation produces facilitation and suppression effects, including SSA and the modulation of the N1m response, its functional significance may actually lie in its contribution to temporal integration. This integration seems to benefit from the serial structure of auditory cortex.
    Article · Mar 2015 · European Journal of Neuroscience (see the synaptic-adaptation sketch below)
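The simulation summarized in the abstract above builds on short-term, activity-dependent synaptic depression. Its actual equations and parameters are not reproduced on this page, so the following is only a minimal, generic sketch of that kind of adaptation (a single depressing synapse with a resource variable, in the spirit of Tsodyks-Markram dynamics); all names and values are placeholder assumptions, not the authors' model.

```python
# Minimal sketch of short-term synaptic depression, NOT the published model:
# a resource variable r depletes with each presynaptic event and recovers
# exponentially between events. All parameter values are arbitrary.
DT_MS = 1.0          # simulation time step (ms)
TAU_REC_MS = 800.0   # recovery time constant of synaptic resources (ms)
U = 0.5              # fraction of available resources used per event

def synaptic_drive(event_times_ms, t_max_ms=4000):
    """Return the drive (U * r) delivered at each presynaptic event."""
    r = 1.0          # available resources, starts fully recovered
    drives = []
    events = set(event_times_ms)
    for step in range(int(t_max_ms / DT_MS)):
        t = step * DT_MS
        r += DT_MS * (1.0 - r) / TAU_REC_MS   # passive recovery toward 1
        if t in events:                       # event: transmit, then deplete
            drives.append(U * r)
            r -= U * r
    return drives

# A tone repeated every 500 ms followed by a longer gap: the drive (a crude
# stand-in for an N1m-like response) shrinks with repetition and partially
# recovers after the gap, qualitatively echoing stimulus-specific adaptation.
tone_onsets = [0, 500, 1000, 1500, 2000, 3500]
for onset, drive in zip(tone_onsets, synaptic_drive(tone_onsets)):
    print(f"tone at {onset:4d} ms -> relative response {drive:.3f}")
```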
    • "Dotted lines on sagittal views indicate the height of the axial slice. [Doeller et al., 2003; Inui et al., 2006; Opitz et al., 1999; Sch€ onwiesner et al., 2007; Yvert et al., 2001]. Consistent with sensor-level data and previous findings, MMNm was larger on the right hemisphere [Paavilainen et al., 1991; Recasens et al., 2014]. "
    ABSTRACT: Our auditory system is able to encode acoustic regularities of growing levels of complexity in order to model and predict incoming events. Recent evidence suggests that early indices of deviance detection in the time range of the middle-latency responses (MLR) precede the mismatch negativity (MMN), a well-established error response associated with deviance detection. While studies suggest that only the MMN, but not the early deviance-related MLR, reflects complex regularity levels, it is not clear whether these two mechanisms interact during scene analysis by encoding nested levels of acoustic regularity, nor whether the neuronal sources underlying local and global deviations are hierarchically organized. We recorded magnetoencephalographic evoked fields to rapidly presented four-tone local sequences containing a frequency change. Temporally integrated local events, in turn, defined global regularities, which were infrequently violated by a tone repetition. A global magnetic mismatch negativity (MMNm) was obtained at 140-220 ms when the global regularity was broken, but no deviance-related effects were found at early latencies. Conversely, the Nbm (45-55 ms) and Pbm (60-75 ms) deflections of the MLR, and an earlier MMNm response at 120-160 ms, responded to local violations. Distinct neuronal generators in the auditory cortex underlay the processing of local and global regularity violations, suggesting that nested levels of complexity of auditory object representations are encoded in separate cortical areas. Our results suggest that the different processing stages and anatomical areas involved in the encoding of auditory representations, and in the subsequent detection of their violations, are hierarchically organized in the human auditory cortex.
    Article · Nov 2014 · Human Brain Mapping (see the local-global sequence sketch below)
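The stimulus design just described nests two levels of regularity: a four-tone "local" sequence ending in a frequency change serves as the frequent global standard, and a rare tone repetition violates that global rule while being locally regular. The exact frequencies and deviant probability are not given on this page, so the generator below is only a hedged sketch of that kind of local-global design, with placeholder values.

```python
import random

# Placeholder parameters: the study's actual tone frequencies and deviant
# probability are not reported on this page.
STANDARD_HZ = 500
DEVIANT_HZ = 550
P_GLOBAL_DEVIANT = 0.2

def local_sequence(global_deviant: bool):
    """Build one four-tone local sequence.

    The frequent pattern ends in a frequency change (xxxY: a local deviant
    that is globally standard); the rare global deviant repeats the same
    tone four times (xxxx: locally regular, globally deviant).
    """
    if global_deviant:
        return [STANDARD_HZ] * 4
    return [STANDARD_HZ] * 3 + [DEVIANT_HZ]

def block(n_trials=20, seed=0):
    """Generate a block of trials, mostly xxxY with occasional xxxx."""
    rng = random.Random(seed)
    return [local_sequence(rng.random() < P_GLOBAL_DEVIANT) for _ in range(n_trials)]

for trial in block(8):
    label = "GLOBAL DEVIANT (xxxx)" if len(set(trial)) == 1 else "global standard (xxxY)"
    print(trial, label)
```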
