Article

Primary auditory cortex of cats: Feature detection or something else?

Department of Physiology, Hebrew University - Hadassah Medical School, and the Interdisciplinary Center for Neural Computation, The Hebrew University, 12272, 91120, Jerusalem, Israel.
Biological Cybernetics (Impact Factor: 1.71). 12/2003; 89(5):397-406. DOI: 10.1007/s00422-003-0445-3
Source: PubMed

ABSTRACT

Neurons in sensory cortices are often assumed to be "feature detectors", computing simple and then successively more complex features out of the incoming sensory stream. These features are somehow integrated into percepts. Despite many years of research, a convincing candidate for such a feature in primary auditory cortex has not been found. We argue that feature detection is actually a secondary issue in understanding the role of primary auditory cortex. Instead, the major contribution of primary auditory cortex to auditory perception is in processing previously derived features on a number of different timescales. We hypothesize that, as a result, neurons in primary auditory cortex represent sounds in terms of auditory objects rather than in terms of feature maps. According to this hypothesis, primary auditory cortex has a pivotal role in the auditory system in that it generates the representation of auditory objects to which higher auditory centers assign properties such as spatial location, source identity, and meaning.

    • "This means that the local memory mechanism can be the same in all hierarchical layers and still explain the observations that higher layers bind spectral information over longer time intervals. STP, as suggested by Nelken et al. (2003), is hence also in this regard a possible mechanism for the local memory. We also derived a learning algorithm for a context-dependent MLP network with STP, and we showed that STP in continuous-time ANNs implicitly leads to context-sensitive responses that are robust against variation in the temporal rate at which stimuli are presented."
    ABSTRACT
    ABSTRACT: Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
    Full-text · Article · Dec 2015 · Neural Computation
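The local-memory role of short-term synaptic plasticity described in the citation above can be illustrated with a minimal sketch of synaptic depression (Tsodyks-Markram style). All parameter values and the specific depression variant here are illustrative assumptions, not taken from the cited paper; the point is only that an identical probe input produces a different response depending on the preceding stimulus history, i.e., a context-sensitive response:

```python
import numpy as np

def depressing_synapse_response(spike_input, dt=0.001, tau_rec=0.5, U=0.4):
    """Short-term synaptic depression (illustrative parameters).

    Each input pulse transmits a fraction U of the available synaptic
    resources x, which then recover toward 1 with time constant
    tau_rec.  The transmitted amplitude U*x therefore depends on the
    recent input history: a local memory spanning ~tau_rec seconds.
    """
    x = 1.0                               # resources, fully recovered
    out = []
    for s in spike_input:
        if s:                             # presynaptic event
            out.append(U * x)             # transmitted amplitude
            x -= U * x                    # resources consumed
        else:
            out.append(0.0)
        x += (1.0 - x) * dt / tau_rec     # exponential recovery
    return np.array(out)

dt = 0.001
quiet = np.zeros(1000)                    # 1 s of silence
busy = np.zeros(1000); busy[::100] = 1    # 1 s of a 10 Hz pulse train
probe = np.zeros(50); probe[0] = 1        # identical probe pulse

r_after_quiet = depressing_synapse_response(np.concatenate([quiet, probe]))
r_after_busy = depressing_synapse_response(np.concatenate([busy, probe]))

# Same probe, different context: the response is weaker after the
# busy context because resources have not recovered.
print(r_after_quiet[1000], r_after_busy[1000])
```

Stacking such units in a feedforward hierarchy lets higher layers inherit and combine these history-dependent responses, which is the mechanism the cited study proposes for binding spectral information over longer time intervals.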
    • "…t long-term learning effects shape what features are picked up by our auditory system. Further, although our description focuses on the temporal/sequential cues of auditory stream segregation, we also considered stream segregation by spectral/concurrent cues. As for finding sound units within a realistic auditory scene, together with Nelken et al. (2003), we maintain that sound is analyzed on multiple time scales in parallel, thus allowing parallel formation of regularities based on different units. There exist some computational models capable of segmenting continuous sounds (Coath, Brader, Fusi, & Denham, 2005; Kiebel et al., 2009). One exciting future direction w…"
    ABSTRACT
    ABSTRACT: Communication by sounds requires that the communication channels (i.e., speech/speakers and other sound sources) have been established. This allows one to separate concurrently active sound sources, to track their identity, to assess the type of message arriving from them, and to decide whether and when to react (e.g., reply to the message). We propose that these functions rely on a common generative model of the auditory environment. This model predicts upcoming sounds on the basis of representations describing temporal/sequential regularities. Predictions help to identify the continuation of previously discovered sound sources, to detect the emergence of new sources, and to detect changes in the behavior of known ones. The model produces auditory event representations which provide a full sensory description of the sounds, including their relation to the auditory context and the current goals of the organism. Event representations can be consciously perceived and serve as objects in various cognitive operations.
    No preview · Article · Jul 2015 · Brain and Language
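The idea of a generative model built on temporal/sequential regularities can be sketched with a toy first-order transition model over discrete sound events. This is a deliberately simplified illustration, not the model of the cited paper: it merely shows how learned regularities yield high predicted probability for an expected continuation and low probability for a deviant, the signal that could flag a new source or a change in a known one:

```python
from collections import defaultdict

class RegularityModel:
    """Toy regularity model (illustrative): learns first-order
    transition statistics between discrete sound events and scores
    how predictable each incoming event is given the previous one."""

    def __init__(self):
        # counts[prev][cur] = number of observed prev -> cur transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, cur):
        self.counts[prev][cur] += 1

    def predict_prob(self, prev, cur):
        """Predicted probability of event `cur` following `prev`."""
        total = sum(self.counts[prev].values())
        if total == 0:
            return 0.0
        return self.counts[prev][cur] / total

model = RegularityModel()
sequence = ["A", "B", "A", "B", "A", "B", "A", "B"]   # alternating tones
for prev, cur in zip(sequence, sequence[1:]):
    model.observe(prev, cur)

print(model.predict_prob("B", "A"))   # expected continuation
print(model.predict_prob("B", "C"))   # deviant: unpredicted event
```

A low predicted probability for an incoming event is the kind of prediction failure that, in the cited framework, would signal the emergence of a new source or a change in a known one.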
    • "This estimation is based on the depth of the included recordings and on the histological reconstruction of the electrode tracks. The auditory latencies were typically 10–20 ms, which are also characteristic of A1 (Malmierca 2003; Nelken et al. 2003; Ojima and Murakami 2002)."
    ABSTRACT
    ABSTRACT: Processing of temporal information is key in audition. In this study we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared between engaged and idle brain states. We found that spike-firing variability, measured with the Fano factor, was markedly reduced in engaged trials, not only during stimulation but also between stimuli. We next explored whether this decrease in variability was associated with increased information encoding. Our information-theoretic analysis revealed increased information content in auditory responses during engagement compared to idle states, particularly in responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task.
    Full-text · Article · Aug 2013 · Journal of Neurophysiology
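The Fano factor used in the study above is simple to compute from trial-by-trial spike counts. The sketch below uses synthetic counts (the distributions and rates are illustrative assumptions, not data from the paper): idle trials are drawn Poisson-like (Fano factor near 1), while engaged trials have the same mean rate but quenched trial-to-trial variability:

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(spike_counts):
    """Fano factor: variance of spike counts across trials divided by
    the mean count.  A Poisson process gives 1.0; variability
    quenching shows up as a drop below the idle-state value."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Synthetic example: same mean count (8 spikes/window) in both
# conditions, but reduced variance in the engaged condition.
idle = rng.poisson(lam=8.0, size=200)           # variance ~ 8
engaged = rng.binomial(n=16, p=0.5, size=200)   # variance ~ 4

print(fano_factor(idle), fano_factor(engaged))
```

Because the Fano factor normalizes variance by the mean, it separates genuine variability quenching from a mere change in firing rate, which is why it is a standard measure for comparing engaged versus idle states.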