Experience-induced Malleability in Neural Encoding of Pitch, Timbre, and Timing

Auditory Neuroscience Lab, Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
Annals of the New York Academy of Sciences (Impact Factor: 4.38). 07/2009; 1169(1). DOI: 10.1111/j.1749-6632.2009.04549.x
Source: PubMed


Speech and music are highly complex signals that share many acoustic features. Pitch, timbre, and timing can be used as overarching perceptual categories for describing these shared properties. The acoustic cues contributing to these percepts also have distinct subcortical representations that can be selectively enhanced or degraded in different populations. Musically trained subjects show enhanced subcortical representations of pitch, timbre, and timing. The effects of musical experience on subcortical auditory processing are pervasive and extend beyond music to the domains of language and emotion. The neural encoding of pitch, timbre, and timing is malleable, shaped both by lifelong experience and by short-term training. This conceptual framework and supporting data can be applied to consider sensory learning of speech and music through a hearing aid or cochlear implant.
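As a concrete illustration of this framework, the sketch below shows one simple way such subcortical measures are often operationalized from an averaged frequency-following response (FFR): spectral amplitude at the fundamental frequency (F0) as a "pitch" measure, amplitude across the upper harmonics as a "timbre" measure, and a peak latency as a crude "timing" measure. This is a minimal Python/NumPy sketch under assumed parameters (sampling rate, F0, analysis bandwidth, synthetic test signal), not the analysis pipeline used in the paper.

```python
import numpy as np

def ffr_pitch_timbre_timing(ffr, fs, f0=100.0, n_harmonics=5):
    """Illustrative pitch/timbre/timing measures from one averaged FFR waveform."""
    n = len(ffr)
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def band_amplitude(target_hz, half_width_hz=5.0):
        # Mean spectral amplitude within +/- half_width_hz of the target frequency.
        mask = (freqs >= target_hz - half_width_hz) & (freqs <= target_hz + half_width_hz)
        return spectrum[mask].mean()

    pitch_amp = band_amplitude(f0)                                  # "pitch": energy at F0
    timbre_amp = np.mean([band_amplitude(f0 * k)                    # "timbre": upper harmonics
                          for k in range(2, n_harmonics + 1)])
    timing_ms = 1000.0 * np.argmax(np.abs(ffr)) / fs                # crude "timing": largest-peak latency

    return {"pitch_f0_amplitude": pitch_amp,
            "timbre_harmonic_amplitude": timbre_amp,
            "timing_peak_latency_ms": timing_ms}

# Synthetic demo: a 200-ms response with a 100-Hz fundamental, some harmonics, and noise.
fs = 10000
t = np.arange(int(0.2 * fs)) / fs
fake_ffr = (np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
            + 0.1 * np.sin(2 * np.pi * 300 * t) + 0.05 * np.random.randn(t.size))
print(ffr_pitch_timbre_timing(fake_ffr, fs))
```

The same kinds of F0 and harmonic measures appear in the citing-article excerpts below as predictors of speech discrimination and identification.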

Citing articles (excerpts):
    • "used to quantify the magnitude of voice " pitch " encoding captured in brainstem FFRs (Banai et al., 2009; Bidelman et al., 2011b; Bidelman and Krishnan, 2010; Song et al., 2012), while its upper harmonics reflect the encoding of speech " timbre " (Bidelman and Krishnan, 2010; Bidelman et al., 2014b; Kraus et al., 2009; Krishnan, 2002; Skoe and Kraus, 2010). Our previous work has shown that these " pitch- " and " timbre- " related metrics predict an individual's success in discriminating (Bidelman and Krishnan, 2010) and identifying (Bidelman et al., 2014a,b) speech information. "
    ABSTRACT: Simultaneous recording of brainstem and cortical event-related brain potentials (ERPs) may offer a valuable tool for understanding the early neural transcription of behaviorally relevant sounds and the hierarchy of signal processing operating at multiple levels of the auditory system. To date, dual recordings have been challenged by technological and physiological limitations, including the different optimal parameters necessary to elicit each class of ERP (e.g., differential adaptation/habituation effects and the number of trials needed to obtain adequate response signal-to-noise ratio). We investigated a new stimulus paradigm for concurrent recording of the auditory brainstem frequency-following response (FFR) and cortical ERPs. The paradigm is more "optimal" in that it uses a clustered stimulus presentation and variable interstimulus interval (ISI) to (i) achieve the most suitable acquisition parameters for eliciting subcortical and cortical responses, (ii) obtain an adequate number of trials to detect each class of response, and (iii) minimize neural adaptation/habituation effects. Comparison between the clustered and traditional (fixed, slow ISI) stimulus paradigms revealed minimal change in amplitudes or latencies of either the brainstem FFR or the cortical ERP. The clustered paradigm offered over a 3x increase in recording efficiency compared to the conventional (fixed-ISI) presentation and thus a more efficient protocol for obtaining dual brainstem-cortical recordings in individual listeners. We infer that faster recording of subcortical and cortical potentials might allow more complete and sensitive testing of neurophysiological function and aid in the differential assessment of auditory function.
    Journal of Neuroscience Methods 01/2015; 241. DOI:10.1016/j.jneumeth.2014.12.019 · 2.05 Impact Factor
    • "Deficient phase-locking of FFR is reported for elder people [7,8] and for children with autism spectrum disorders [9]. Tone-language speakers [6,10-12] and musicians [13-15] produce stronger FFRs than English speakers or non-musicians. Appropriately, higher FFR pitch-tracking accuracy accompanies improved behavioral performance following training [16,17]. "
    ABSTRACT: Background: The present study investigated whether the frequency-following response (FFR) of the auditory brainstem can represent individual frequency-discrimination ability. Method: We measured behavioral frequency-difference limens (FDLs) in normal-hearing young adults. FFRs were then evoked by two pure tones whose frequency difference was no larger than the behavioral FDL, and the FFRs to the individual frequencies were discriminated to serve as the neural representation of the stimulus frequency difference. Participants were 15 Chinese college students (ages 19–25; 3 males, 12 females) with normal hearing. Results: Based on the discriminability of the neural representations of the individual frequencies, FFRs accurately reflected individual FDLs and detected stimulus-frequency differences smaller than the behavioral threshold (e.g., 75% of the FDL). Conclusions: These results suggest that even when a frequency difference cannot be distinguished behaviorally, it may still be detectable physiologically.
    BioMedical Engineering OnLine 08/2014; 13(1):114. DOI:10.1186/1475-925X-13-114 · 1.43 Impact Factor
    • "Additionally, cross-cultural studies have shown that language learning influences the discriminability of speech sounds, such that phonemes in one particular language are only perceived categorically by speakers of that language and continuously otherwise (Kuhl et al., 1992). Similarly, lifelong (e.g., musical training) as well as short-term experience both affect behavioral processing—and neural encoding (see below)—of relevant speech cues, such as pitch, timber and timing (Kraus et al., 2009). In support of the claim that speech CP can be acquired through training stand experimental learning studies that successfully induced discontinuous perception of a non-native phoneme continuum through elaborate category training (Myers and Swan, 2012). "
    ABSTRACT: The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory systems are well equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and what is irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through the integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing have employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities offered by modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool for revealing the neural underpinnings of perceptually invariant sound representations.
    Frontiers in Neuroscience 06/2014; 8(8):132. DOI:10.3389/fnins.2014.00132 · 3.66 Impact Factor
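The first excerpt above describes a clustered, variable-ISI presentation paradigm for recording brainstem FFRs and cortical ERPs concurrently. The sketch below illustrates how such a presentation schedule could be generated; the cluster size, ISI ranges, and stimulus duration are illustrative assumptions, not the parameters used in that study.

```python
import random

def clustered_schedule(n_clusters=100, stims_per_cluster=10,
                       within_isi_s=(0.010, 0.020), between_isi_s=(1.0, 2.0),
                       stim_dur_s=0.170, seed=0):
    """Return a list of stimulus onset times (s) for a clustered, variable-ISI run.

    Short, jittered gaps within a cluster yield the many rapid sweeps needed for
    the brainstem FFR; longer, jittered gaps between clusters let the slower
    cortical ERP recover from adaptation/habituation.
    """
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(n_clusters):
        for _ in range(stims_per_cluster):
            onsets.append(t)
            t += stim_dur_s + rng.uniform(*within_isi_s)   # within-cluster ISI (short)
        t += rng.uniform(*between_isi_s)                    # between-cluster gap (long)
    return onsets

onsets = clustered_schedule()
print(f"{len(onsets)} stimuli, run length ~{onsets[-1] + 0.170:.0f} s")
```

The efficiency gain reported in that study comes from packing many FFR sweeps into each cluster while still pacing the cortical response with the long between-cluster gaps.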
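The second excerpt above discriminates FFRs evoked by two tones whose frequency difference is at or below the behavioral frequency-difference limen (FDL). Below is a minimal sketch of one way such neural discrimination could be operationalized: assign each single-trial response to whichever stimulus frequency its spectral peak lies closer to. The classification rule, tone frequencies, trial counts, and noise level are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np

def spectral_peak_hz(x, fs):
    """Frequency (Hz) of the largest spectral peak of one response trial."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

def ffr_discrimination_accuracy(trials_a, trials_b, fs, f_a, f_b):
    """Fraction of single-trial responses assigned to the nearer stimulus frequency."""
    correct = 0
    for trial in trials_a:
        peak = spectral_peak_hz(trial, fs)
        correct += abs(peak - f_a) < abs(peak - f_b)
    for trial in trials_b:
        peak = spectral_peak_hz(trial, fs)
        correct += abs(peak - f_b) < abs(peak - f_a)
    return correct / (len(trials_a) + len(trials_b))

# Synthetic demo: two tones a few Hz apart, buried in trial-to-trial noise.
fs, dur, f_a, f_b = 10000, 1.0, 500.0, 504.0
t = np.arange(int(dur * fs)) / fs
rng = np.random.default_rng(0)
trials_a = [np.sin(2 * np.pi * f_a * t) + 0.5 * rng.standard_normal(t.size) for _ in range(50)]
trials_b = [np.sin(2 * np.pi * f_b * t) + 0.5 * rng.standard_normal(t.size) for _ in range(50)]
print(f"neural discrimination accuracy: {ffr_discrimination_accuracy(trials_a, trials_b, fs, f_a, f_b):.2f}")
```

Above-chance accuracy for a frequency difference smaller than the behavioral FDL would correspond to the excerpt's conclusion that a physiologically detectable difference need not be behaviorally distinguishable.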