Article

Different Patterns of Perceptual Learning on Spectral Modulation Detection Between Older Hearing-Impaired and Younger Normal-Hearing Adults.

Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.
Journal of the Association for Research in Otolaryngology 12/2012; 14(2). DOI: 10.1007/s10162-012-0363-y
Source: PubMed

ABSTRACT: Young adults with normal hearing (YNH) can improve their sensitivity to basic acoustic features with practice. However, it is not known to what extent the influence of the same training regimen differs between YNH listeners and older listeners with hearing impairment (OHI), the largest population seeking treatment in audiology clinics. To examine this issue, we trained OHI listeners on a basic auditory task (spectral modulation detection) using a training regimen previously administered to YNH listeners (∼1 h/session for seven sessions on a single condition). For the trained conditions on which pretraining performance was not already at asymptote, the YNH listeners who received training learned more than matched controls who received none, but that learning did not generalize to any untrained spectral modulation frequency. In contrast, the OHI-trained listeners and controls learned similar amounts on the trained condition, implying no effect of the training itself. Surprisingly, however, the OHI-trained listeners improved over the course of the training phase and on an untrained spectral modulation frequency. These population differences suggest that, in OHI compared with YNH listeners, learning consolidated more slowly and training modified an aspect of processing with broader tuning to spectral modulation frequency. More generally, these results demonstrate that conclusions about perceptual learning drawn from one population do not necessarily apply to another.
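Stimuli for spectral modulation detection tasks are typically broadband noises whose log-magnitude spectra vary sinusoidally along a log-frequency axis, with listeners detecting the modulated noise against a flat-spectrum standard. The following is a minimal Python sketch of such a "ripple" noise generator; the function name, default parameter values, and normalization are assumptions for illustration, not the stimulus code used in the study.

```python
import numpy as np

def ripple_noise(fs=44100, dur=0.5, f_lo=400.0, f_hi=3200.0,
                 mod_freq=2.0, depth_db=10.0, phase=0.0, seed=0):
    """Noise with a sinusoidal log-magnitude spectrum along log frequency.

    mod_freq -- spectral modulation frequency in cycles/octave (assumed)
    depth_db -- peak-to-trough spectral modulation depth in dB (assumed)
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Position of each passband component on an octave axis relative to f_lo.
    octaves = np.log2(freqs[band] / f_lo)
    # Sinusoidal ripple in dB across octaves, converted to linear magnitude.
    ripple_db = 0.5 * depth_db * np.sin(2 * np.pi * mod_freq * octaves + phase)
    mag = np.zeros_like(freqs)
    mag[band] = 10.0 ** (ripple_db / 20.0)
    # Random component phases give a noise-like waveform with this envelope.
    spectrum = mag * np.exp(1j * rng.uniform(0.0, 2 * np.pi, freqs.size))
    x = np.fft.irfft(spectrum, n=n)
    return x / np.max(np.abs(x))  # peak-normalize; ramps/level left to caller
```

A detection trial would then compare, for example, ripple_noise(depth_db=d) against ripple_noise(depth_db=0.0), i.e., a flat-spectrum standard.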

  • ABSTRACT: Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning, and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance and of other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, the amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
    Frontiers in Human Neuroscience 07/2014; 8. DOI: 10.3389/fnhum.2014.00628
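Noise-vocoded speech of the kind used above is conventionally produced by splitting speech into frequency bands, extracting each band's amplitude envelope, and using those envelopes to modulate band-limited noise. Below is a minimal sketch of that general procedure, assuming a log-spaced four-band analysis and Hilbert envelopes rather than the study's exact processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, env_cut=30.0):
    """Replace band fine structure with envelope-modulated noise.

    Band edges are log-spaced (an assumption) and must lie below fs / 2.
    """
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    b_env, a_env = butter(2, env_cut / (fs / 2))  # envelope-smoothing low-pass
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)                        # analysis band
        env = filtfilt(b_env, a_env, np.abs(hilbert(band)))  # Hilbert envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(speech)))
        out += np.clip(env, 0.0, None) * carrier             # noise band
    return out / np.max(np.abs(out))
```

Fewer bands degrade the speech more severely; the four-band default here is only a plausible choice, since the abstract does not state the band count used.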
  • ABSTRACT: The ubiquity of social vocalization among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve vocal communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. It is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single-unit studies in mice, bats, and zebra finches reveal shared principles of auditory coding, including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations, and modulation tuning. Additionally, single-neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species.
    Hearing Research 05/2013; DOI: 10.1016/j.heares.2013.05.005
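One common way to quantify how well spike timing separates responses to distinct vocalizations, as in the single-unit studies summarized above, is a spike-train distance such as the van Rossum metric, with pairwise distances feeding a nearest-neighbor classifier. The sketch below illustrates that metric only; it is not the reviewed studies' analysis code, and the default values are assumptions.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.01, dt=0.001, t_max=1.0):
    """van Rossum distance between two spike trains (spike times in seconds).

    Small tau emphasizes precise spike timing; large tau approaches a
    firing-rate comparison.
    """
    t = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        x = np.zeros_like(t)
        for s in spikes:
            m = t >= s
            x[m] += np.exp(-(t[m] - s) / tau)  # causal exponential kernel
        return x

    diff = filtered(np.asarray(train_a)) - filtered(np.asarray(train_b))
    return np.sqrt(np.sum(diff ** 2) * dt / tau)
```

A held-out response can then be assigned to whichever vocalization's responses contain its nearest neighbor under this distance; sweeping tau reveals the timescale at which timing carries discriminative information.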
  • ABSTRACT: Pairs of harmonic complexes with different fundamental frequencies f0 (105 and 189 Hz, or 105 and 136 Hz) but identical bandwidth (0.25-3 kHz) were band-pass filtered with a filter centered at 1 kHz. The filter's center frequency was modulated with a triangular wave at a 5-Hz modulation frequency fmod to obtain a pair of vowel-analog waveforms with dynamically varying single-formant transitions. The target signal S contained a single modulation cycle starting either at a phase of -π/2 (up-down) or π/2 (down-up), whereas the longer distracter N contained several cycles of the modulating triangular wave starting at a random phase. The level at which the target formant's modulation phase could be correctly identified was determined adaptively for several distracter levels and several extents of frequency swing (10-55%) in a group of experienced young normal-hearing listeners and a group of experienced elderly listeners with no more than moderate hearing loss. The most important result was that, for the two f0 differences, all distracter levels, and all frequency-swing extents tested, elderly listeners needed S/N ratios about 20 dB larger than the young. Results also indicate that identification thresholds of both the elderly and the young listeners are 4 to 12 dB higher than similarly determined detection thresholds and that, contrary to detection, identification is not a linear function of distracter level. Since formant transitions are potent cues for speech intelligibility, the large S/N ratios the elderly require for correct discrimination of single-formant transition dynamics may at least partially explain the well-documented loss of intelligibility of speech in babble noise by the elderly.
    Frontiers in Neuroscience 06/2014; 8:144. DOI: 10.3389/fnins.2014.00144
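The single-formant stimuli described above can be approximated by weighting the harmonics of a complex tone with a formant-shaped envelope whose center frequency follows a triangular trajectory. The sketch below takes that shortcut (a Gaussian harmonic weighting rather than the time-varying band-pass filtering the authors describe); parameter defaults follow the abstract where it gives them, and everything else (formant bandwidth, swing interpretation, phase convention) is assumed.

```python
import numpy as np
from scipy.signal import sawtooth

def formant_glide(fs=44100, dur=0.2, f0=105.0, f_lo=250.0, f_hi=3000.0,
                  fc=1000.0, fmod=5.0, swing=0.30, phase=-np.pi / 2,
                  bw=200.0):
    """Vowel-analog with one formant following a triangular trajectory.

    swing is the peak-to-peak excursion relative to fc and bw is the
    Gaussian formant bandwidth; both interpretations are assumptions.
    How phase maps onto "up-down"/"down-up" depends on the triangle-wave
    convention and may differ from the abstract's.
    """
    t = np.arange(int(fs * dur)) / fs
    tri = sawtooth(2 * np.pi * fmod * t + phase, width=0.5)  # triangle wave
    center = fc * (1.0 + 0.5 * swing * tri)  # moving formant center (Hz)
    x = np.zeros_like(t)
    for k in range(int(np.ceil(f_lo / f0)), int(f_hi // f0) + 1):
        fk = k * f0
        # Weight each harmonic by a Gaussian formant around the moving center.
        x += np.exp(-0.5 * ((fk - center) / bw) ** 2) * np.sin(2 * np.pi * fk * t)
    return x / np.max(np.abs(x))
```

With the defaults above, dur=0.2 s spans exactly one cycle of the 5-Hz modulation, matching the single-cycle target S; a distracter would use a longer duration and a randomized phase.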