Article

Hemispheric shifts of sound representation in auditory cortex with conceptual listening.

Leibniz-Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany.
Cerebral Cortex (Impact Factor: 8.31). 06/2005; 15(5):578-87. DOI: 10.1093/cercor/bhh159
Source: PubMed

ABSTRACT The weak field specificity and the heterogeneity of neuronal filters found in any given auditory cortex field do not substantiate the view that such fields are merely descriptive maps of sound features; rather, field mechanisms were previously shown to support behaviourally relevant classification of sounds. Here the prediction was tested in human auditory cortex (AC) that the classification task, rather than the stimulus class per se, determines which AC area is recruited. Presenting the same set of frequency modulations, we found that categorization of their pitch direction (rising versus falling) increased functional magnetic resonance imaging activation in right posterior AC compared with mere stimulus exposure, whereas left posterior AC dominated during categorization of their duration (short versus long). Thus, top-down influences appear to select not only the auditory cortex areas but also the hemisphere recruited for specific processing.

ABSTRACT: Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but little is known about their relation. The present study evaluates how the results from different tasks and measures are related, using examples based on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude-modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (f_mod) of the signals. In the subjective task, subjects reported their percept; in the objective task, we measured their sensitivity for detecting time shifts of B signals in an ABA- sequence. As a further measure of the processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the f_mod difference between A and B tones (Δf_mod). The results of the two psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on Δf_mod between the A and B SAM tones. The effect of Δf_mod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of the sequences.
Frontiers in Neuroscience. 01/2014; 8:119.
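The ABA- paradigm with SAM tones described in the abstract above is easy to illustrate in code. The following Python sketch builds an ABA- sequence of sinusoidally amplitude-modulated tones that share a carrier and differ only in modulation frequency (a purely temporal cue); all concrete values (sampling rate, carrier, modulation frequencies, tone and gap durations, modulation depth) are illustrative assumptions, not the parameters used in the study.

```python
# Illustrative sketch (not the study's stimulus code): an ABA- sequence of
# sinusoidally amplitude-modulated (SAM) tones.  All parameter values are
# assumptions chosen only to demonstrate the paradigm.
import numpy as np

FS = 44100  # sampling rate in Hz (assumed)

def sam_tone(f_carrier, f_mod, duration, mod_depth=1.0, fs=FS):
    """SAM tone: carrier multiplied by the envelope 1 + m * sin(2*pi*f_mod*t)."""
    t = np.arange(int(duration * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * f_mod * t)
    return envelope * np.sin(2 * np.pi * f_carrier * t)

def aba_sequence(f_mod_a, f_mod_b, f_carrier=1000.0, tone_dur=0.125,
                 gap_dur=0.125, n_triplets=10, fs=FS):
    """Concatenate ABA- triplets: A tone, B tone, A tone, silent gap ('-')."""
    a = sam_tone(f_carrier, f_mod_a, tone_dur, fs=fs)
    b = sam_tone(f_carrier, f_mod_b, tone_dur, fs=fs)
    silence = np.zeros(int(gap_dur * fs))
    return np.tile(np.concatenate([a, b, a, silence]), n_triplets)

# A small delta f_mod tends toward an integrated (1-stream) percept,
# a large delta f_mod toward a segregated (2-stream) percept.
seq_integrated = aba_sequence(f_mod_a=100.0, f_mod_b=110.0)
seq_segregated = aba_sequence(f_mod_a=100.0, f_mod_b=400.0)
```

Adjusting Δf_mod between the A and B tones, as in the last two lines, is the manipulation the study uses to move the percept between integrated, ambiguous, and segregated.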
ABSTRACT: Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas, yet evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed to assess the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, namely sounds of animals and musical instruments. The rationale was that only brain regions with a statistically proven absence of activation by the control stimuli may be considered candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG, where only voices, but not the control stimuli, elicited an increase of the BOLD response above the resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question of how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing, building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes.
Frontiers in Human Neuroscience (Impact Factor: 2.90). 01/2014; 8:499.
ABSTRACT: The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory systems are well equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and what is irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through the integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations.
Frontiers in Neuroscience. 06/2014; 8:132.
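As a minimal, hedged illustration of the multivariate pattern analysis (MVPA) approach mentioned in the last abstract, the Python sketch below decodes a two-way sound category from simulated voxel patterns with a cross-validated linear classifier. The simulated data, the classifier choice (linear SVM), and the fold count are assumptions for demonstration only and are not taken from any of the studies above.

```python
# MVPA sketch on simulated data: decode a stimulus category from distributed
# voxel patterns using a cross-validated linear classifier (scikit-learn).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated data: 80 trials x 500 voxels, two sound categories (labels 0 and 1).
n_trials, n_voxels = 80, 500
X = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)

# Inject a weak, distributed category difference into a subset of voxels,
# mimicking pattern information without a large change in overall signal level.
X[y == 1, :50] += 0.3

# Standardize each voxel, then decode the category with a linear SVM
# under 5-fold (stratified) cross-validation.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, X, y, cv=5)
print("Mean decoding accuracy:", scores.mean())
```

Above-chance cross-validated accuracy is the usual criterion: it indicates that category information is present in the distributed activation pattern even when the mean signal level barely differs between conditions.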