Primary auditory cortex of cats: Feature detection or something else?

Department of Physiology, Hebrew University - Hadassah Medical School, and the Interdisciplinary Center for Neural Computation, The Hebrew University, P.O. Box 12272, Jerusalem 91120, Israel.
Biological Cybernetics (Impact Factor: 1.71). 12/2003; 89(5):397-406. DOI: 10.1007/s00422-003-0445-3
Source: PubMed


Neurons in sensory cortices are often assumed to be "feature detectors", computing simple and then successively more complex features out of the incoming sensory stream. These features are somehow integrated into percepts. Despite many years of research, a convincing candidate for such a feature in primary auditory cortex has not been found. We argue that feature detection is actually a secondary issue in understanding the role of primary auditory cortex. Instead, the major contribution of primary auditory cortex to auditory perception is in processing previously derived features on a number of different timescales. We hypothesize that, as a result, neurons in primary auditory cortex represent sounds in terms of auditory objects rather than in terms of feature maps. According to this hypothesis, primary auditory cortex has a pivotal role in the auditory system in that it generates the representation of auditory objects to which higher auditory centers assign properties such as spatial location, source identity, and meaning.

    • "…long-term learning effects shape what features are picked up by our auditory system. Further, although our description focuses on the temporal/sequential cues of auditory stream segregation, we also considered stream segregation by spectral/concurrent cues. As for finding sound units within a realistic auditory scene, together with Nelken et al. (2003), we maintain that sound is analyzed on multiple time scales in parallel, thus allowing parallel formation of regularities based on different units. There exist some computational models capable of segmenting continuous sounds (Coath, Brader, Fusi, & Denham, 2005; Kiebel et al., 2009). One exciting future direction w…"
    ABSTRACT: Communication by sounds requires that the communication channels (i.e., speech/speakers and other sound sources) have been established. This allows the listener to separate concurrently active sound sources, to track their identity, to assess the type of message arriving from them, and to decide whether and when to react (e.g., reply to the message). We propose that these functions rely on a common generative model of the auditory environment. This model predicts upcoming sounds on the basis of representations describing temporal/sequential regularities. Predictions help to identify the continuation of previously discovered sound sources, to detect the emergence of new sources, and to register changes in the behavior of known ones. The model produces auditory event representations that provide a full sensory description of the sounds, including their relation to the auditory context and the current goals of the organism. Event representations can be consciously perceived and serve as objects in various cognitive operations.
    Brain and Language 07/2015; 148:1-22. DOI:10.1016/j.bandl.2015.05.003 · 3.22 Impact Factor
    • "This estimation is based on the depth of the included recordings and on the histological reconstruction of the electrode tracks. The auditory latencies were typically 10–20 ms, which are also characteristic of A1 (Malmierca 2003; Nelken et al. 2003; Ojima and Murakami 2002). "
    ABSTRACT: Processing of temporal information is key in auditory processing. In this study we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged versus idle brain states. We found that spike-firing variability, measured with the Fano factor, was markedly reduced in engaged trials, not only during stimulation but also between stimuli. We next explored whether this decrease in variability was associated with increased information encoding. Our information-theoretic analysis revealed increased information content in auditory responses during engagement as compared to idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrated that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task.
    Journal of Neurophysiology 08/2013; 110(9). DOI:10.1152/jn.00381.2013 · 2.89 Impact Factor
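    The Fano factor used in the abstract above is simply the variance-to-mean ratio of spike counts across repeated trials (≈1 for Poisson-like firing, <1 for more regular firing). A minimal sketch; the example counts are invented for illustration, not data from the study:

    ```python
    import numpy as np

    def fano_factor(spike_counts):
        """Variance-to-mean ratio of spike counts across trials.
        Values below 1 indicate firing more regular than a Poisson process."""
        counts = np.asarray(spike_counts, dtype=float)
        mean = counts.mean()
        if mean == 0:
            return np.nan  # undefined when no spikes occur at all
        return counts.var(ddof=1) / mean  # unbiased sample variance

    # Hypothetical spike counts in a fixed window over repeated trials
    idle = [4, 9, 2, 7, 5, 1, 8, 4]     # variable firing
    engaged = [5, 6, 5, 4, 6, 5, 5, 4]  # steadier firing
    assert fano_factor(engaged) < fano_factor(idle)
    ```

    A reduced Fano factor in the engaged condition, as in this toy comparison, is what the study reports both during and between stimuli.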
    • "To establish perceptual invariance, the auditory system faces the dichotomy of either adapting its acoustic representations when perceptual errors call for learning and plasticity, or preserving a stable acoustic representation despite changes of the behavioral context. Much of the low-level (bottom-up) acoustic processing seems to take place at subcortical stages (Nelken et al., 2003; Palmer, 2007), with evidence of higher-order, non-acoustic (top-down) cognitive (Griffiths et al., 2004; Gutschalk et al., 2005; Snyder et al., 2006; Nelken & Bar-Yosef, 2008) and multisensory (Fu et al., 2003; Lakatos et al., 2007; Ghazanfar et al., 2008; Kayser et al., 2008, 2010) processing in the auditory cortex (AC). As the AC occupies a central position within the acoustic and non-acoustic processing pathways (Aertsen et al., 1981; Edeline et al., 2001; Edeline, 2003; Fritz et al., 2003, 2005a,b; Brosch et al., 2005; Schroeder & Foxe, 2005; Polley et al., 2006; Elhilali et al., 2007; King et al., 2007; Riecke et al., 2007; Atiani et al., 2009), it is expected to play a role in stable perception of the auditory environment, but evidence is lacking. "
    ABSTRACT: It is unclear whether top-down processing in the auditory cortex (AC) interferes with its bottom-up analysis of sound. Recent studies indicated non-acoustic modulations of AC responses, and that attention changes a neuron's spectrotemporal tuning. As a result, the AC would seem ill-suited to represent a stable acoustic environment, which is deemed crucial for auditory perception. To assess whether top-down signals influence acoustic tuning in tasks without directed attention, we compared monkey single-unit AC responses to dynamic spectrotemporal sounds under different behavioral conditions. Recordings were mostly made from neurons located in primary fields (primary AC and area R of the AC) that were well tuned to pure tones, with short onset latencies. We demonstrated that responses in the AC were substantially modulated during an auditory detection task and that these modulations were systematically related to top-down processes. Importantly, despite these significant modulations, the spectrotemporal receptive fields of all neurons remained remarkably stable. Our results suggest multiplexed encoding of bottom-up acoustic and top-down task-related signals at single AC neurons. This mechanism preserves a stable representation of the acoustic environment despite strong non-acoustic modulations.
    European Journal of Neuroscience 03/2013; 37:1830-1842. DOI:10.1111/ejn.12180 · 3.18 Impact Factor
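    The stability result above — spectrotemporal receptive fields (STRFs) staying fixed while responses are modulated — can be illustrated with a toy reverse-correlation simulation. Everything here (the threshold model neuron, white-noise "spectrogram", the parameter values) is our illustrative assumption, not the authors' actual method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_FREQ, N_LAG, N_TIME = 8, 5, 20000  # frequency bins, time lags, stimulus bins

    def sim_spikes(stim, true_strf):
        """Toy neuron: spike whenever the STRF-filtered drive is in its top 10%."""
        drive = np.array([np.sum(true_strf * stim[:, t - N_LAG:t])
                          for t in range(N_LAG, N_TIME)])
        spikes = np.zeros(N_TIME, dtype=bool)
        spikes[N_LAG:] = drive > np.quantile(drive, 0.9)
        return spikes

    def strf_sta(stim, spikes):
        """First-order STRF estimate: spike-triggered average of the spectrogram."""
        sta = np.zeros((N_FREQ, N_LAG))
        times = np.flatnonzero(spikes)
        for t in times:
            sta += stim[:, t - N_LAG:t]
        return sta / len(times)

    true_strf = rng.normal(size=(N_FREQ, N_LAG))
    # Two behavioral "conditions": independent white-noise stimuli, same tuning.
    strfs = [strf_sta(s, sim_spikes(s, true_strf))
             for s in (rng.normal(size=(N_FREQ, N_TIME)) for _ in range(2))]
    similarity = np.corrcoef(strfs[0].ravel(), strfs[1].ravel())[0, 1]
    ```

    Because both conditions share the same underlying filter, the two spike-triggered averages estimate the same STRF and their correlation stays high despite independent stimuli — the same kind of cross-condition comparison, in spirit, that the study uses to argue for a stable acoustic representation.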