Differential neural coding of acoustic flutter within primate auditory cortex

Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
Nature Neuroscience 07/2007; 10(6):763-771. DOI: 10.1038/nn1888

A sequence of acoustic events is perceived either as one continuous sound or as a stream of temporally discrete sounds (acoustic flutter), depending on the rate at which the acoustic events repeat. Acoustic flutter is perceived at repetition rates near or below the lower limit for perceiving pitch, and is akin to the discrete percepts of visual flicker and tactile flutter caused by the slow repetition of sensory stimulation. It has been shown that slowly repeating acoustic events are represented explicitly by stimulus-synchronized neuronal firing patterns in primary auditory cortex (AI). Here we show that a second neural code for acoustic flutter exists in the auditory cortex of marmoset monkeys (Callithrix jacchus), in which the firing rate of a neuron is a monotonic function of an acoustic event's repetition rate. Whereas many neurons in AI encode acoustic flutter using a dual temporal/rate representation, we find that neurons in cortical fields rostral to AI predominantly use a monotonic rate code and lack stimulus-synchronized discharges. These findings indicate that the neural representation of acoustic flutter is transformed along the caudal-to-rostral axis of auditory cortex.
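
To make the two codes concrete, here is a minimal Python sketch (not from the paper) that simulates a toy "synchronized" neuron and a toy "monotonic rate" neuron, scoring each with vector strength, the standard measure of stimulus-locked firing used in this literature. The neuron models, jitter value, and rate function are illustrative assumptions, not the paper's data or methods.

```python
# Toy illustration of a temporal (stimulus-synchronized) code vs. a
# monotonic rate code for click-train repetition rate. All parameters
# are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
DURATION = 1.0  # seconds of stimulus per repetition rate

def vector_strength(spike_times, period):
    """Vector strength: 1 = perfectly phase-locked, near 0 = unsynchronized."""
    if len(spike_times) == 0:
        return 0.0
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

def synchronized_neuron(rate_hz):
    """Toy 'temporal code' neuron: one jittered spike per acoustic event."""
    events = np.arange(0.0, DURATION, 1.0 / rate_hz)
    return events + rng.normal(0.0, 0.002, size=events.size)  # 2 ms jitter

def monotonic_rate_neuron(rate_hz):
    """Toy 'rate code' neuron: Poisson firing whose mean rate grows with
    repetition rate, but whose spike times carry no stimulus phase."""
    mean_rate = 5.0 + 2.0 * rate_hz  # assumed monotonic rate function
    n = rng.poisson(mean_rate * DURATION)
    return np.sort(rng.uniform(0.0, DURATION, size=n))

for rep_rate in [4, 8, 16, 32]:  # repetition rates in the flutter range
    period = 1.0 / rep_rate
    sync, rate = synchronized_neuron(rep_rate), monotonic_rate_neuron(rep_rate)
    print(f"{rep_rate:2d} Hz clicks | temporal neuron: "
          f"VS={vector_strength(sync, period):.2f}, {len(sync)/DURATION:.0f} sp/s"
          f" | rate neuron: VS={vector_strength(rate, period):.2f}, "
          f"{len(rate)/DURATION:.0f} sp/s")
```

The temporal neuron shows high vector strength at every rate, while the rate neuron shows near-zero vector strength but a firing rate that increases monotonically with repetition rate, the signature reported here for fields rostral to AI.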

Cited by:
    • "A large proportion of neurons in marmoset A1 showed preferential responses to amplitude- or frequency-modulated tones and, interestingly, some of these neurons could only be driven by temporally modulated tones but not by unmodulated pure tones (Liang et al., 2002). Neurons in marmoset auditory cortex are also found to be responsive to periodic click train stimuli, by either stimulus synchronized or unsynchronized discharges in both A1 (Lu et al., 2001) and rostral fields (Bendor and Wang, 2007, 2008). "
    ABSTRACT: A fundamental structural property of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a prominent feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and other animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given its widespread presence in the hearing environment, it is natural to expect harmonicity to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that respond selectively to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.
    Frontiers in Systems Neuroscience 12/2013; 7:114. DOI: 10.3389/fnsys.2013.00114
    • "These stimuli were band-pass filtered from 2–4kHz, with 1kHz smoothing. The design of these stimuli followed those that elicit a percept of “acoustic flutter” and are used to assess temporal processing in the auditory system distinctly from pitch [51-53]. "
    ABSTRACT: Language and music epitomize the complex representational and computational capacities of the human mind. The two are strikingly similar in their structural and expressive features, and a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct, either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, which conveys pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
    PLoS ONE 08/2013; 8(8):e73372. DOI: 10.1371/journal.pone.0073372
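
The excerpt above describes its flutter-range stimuli only briefly, so the following Python sketch shows one plausible way to generate such a stimulus: a slow pulse train band-pass filtered to 2–4 kHz. The sample rate, pulse shape, filter order, and the interpretation of the cited "1 kHz smoothing" are assumptions, not the study's actual methods.

```python
# Hedged sketch of a flutter-range pulse train, band-pass filtered to
# 2-4 kHz. Pulse duration, filter design, and sample rate are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 44100          # sample rate (Hz), assumed
DUR = 1.0           # stimulus duration (s)
REP_RATE = 8        # repetition rate in the flutter range (Hz)

# Rectangular pulse train: one short pulse per acoustic event.
stim = np.zeros(int(FS * DUR))
period = int(FS / REP_RATE)
pulse_len = int(0.002 * FS)  # 2 ms pulses, assumed
for onset in range(0, len(stim), period):
    stim[onset:onset + pulse_len] = 1.0

# Band-pass 2-4 kHz (4th-order Butterworth, zero-phase filtering).
b, a = butter(4, [2000, 4000], btype="bandpass", fs=FS)
flutter_stim = filtfilt(b, a, stim)
```

Filtering confines the stimulus energy to a fixed 2–4 kHz spectral band, so varying the repetition rate probes temporal processing without changing the spectral cues that support pitch.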
    • "Even higher rates of phase-locked activity have been observed in population responses elicited by click trains in primary auditory cortex of monkeys (Steinschneider et al., 1998) and humans (Brugge et al., 2009; Nourski and Brugge, 2011). Generally, however, the upper limit for phase-locking has been reported to occur at much lower rates (e.g., Lu et al., 2001; Liang et al., 2002; Bendor and Wang, 2007). There are at least two possible reasons for this disparity. "
    ABSTRACT: Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, which are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is specific neither to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing.
    Hearing Research 06/2013; 305(1). DOI: 10.1016/j.heares.2013.05.013