Article

Different Patterns of Perceptual Learning on Spectral Modulation Detection Between Older Hearing-Impaired and Younger Normal-Hearing Adults.

Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.
Journal of the Association for Research in Otolaryngology (Impact Factor: 2.55). 12/2012; 14(2). DOI: 10.1007/s10162-012-0363-y
Source: PubMed

ABSTRACT Young adults with normal hearing (YNH) can improve their sensitivity to basic acoustic features with practice. However, it is not known to what extent the influence of the same training regimen differs between YNH listeners and older listeners with hearing impairment (OHI), the largest population seeking treatment in audiology clinics. To examine this issue, we trained OHI listeners on a basic auditory task (spectral modulation detection) using a training regimen previously administered to YNH listeners (∼1 h/session for seven sessions on a single condition). For the trained conditions on which pretraining performance was not already at asymptote, the YNH listeners who received training learned more than matched controls who received none, but that learning did not generalize to any untrained spectral modulation frequency. In contrast, the OHI-trained listeners and controls learned similar amounts on the trained condition, implying no effect of the training itself. Surprisingly, however, the OHI-trained listeners improved over the training phase and on an untrained spectral modulation frequency. These population differences suggest that learning consolidated more slowly, and that training modified an aspect of processing that had broader tuning to spectral modulation frequency, in OHI than in YNH listeners. More generally, these results demonstrate that conclusions about perceptual learning drawn from one population do not necessarily apply to another.
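To make the trained task concrete: in spectral modulation detection, listeners distinguish a flat-spectrum noise from a noise whose spectral envelope is sinusoidally modulated in dB along a log-frequency axis at a given spectral modulation frequency (in cycles/octave). The sketch below is illustrative only, not the study's stimulus code; the passband, depth, and other parameter values are assumptions.

```python
# Minimal sketch of a spectral modulation detection stimulus pair (illustrative;
# not the study's code). The target is broadband noise whose dB spectral
# envelope is a sinusoid on a log2-frequency axis; the flat standard is the
# same noise with zero modulation depth.
import numpy as np

def spectrally_modulated_noise(fs=44100, dur=0.5, f_lo=400.0, f_hi=3200.0,
                               cyc_per_oct=2.0, depth_db=10.0, phase=0.0,
                               rng=None):
    """Noise burst with a sinusoidal spectral ripple (in dB) of the given
    density (cycles/octave) and peak-to-valley depth (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    # Random-phase spectral components inside the passband only.
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    spec = np.zeros(freqs.size, dtype=complex)
    spec[in_band] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, in_band.sum()))

    # Sinusoidal ripple in dB along an octave (log2) frequency axis.
    octaves = np.log2(freqs[in_band] / f_lo)
    ripple_db = (depth_db / 2.0) * np.sin(2.0 * np.pi * cyc_per_oct * octaves + phase)
    spec[in_band] *= 10.0 ** (ripple_db / 20.0)

    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))  # peak-normalize

# Example trial pair: flat standard vs. target rippled at 2 cycles/octave.
standard = spectrally_modulated_noise(depth_db=0.0)
target = spectrally_modulated_noise(cyc_per_oct=2.0, depth_db=10.0)
```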

    • "These three metrics have been sepa- 123 rately used to assess listeners' ability to detect changes across 124 a broad range of spectral frequencies. These tasks include the 125 ripple-phase-inversion detection task (e.g., Aronoff and 126 Landsberger, 2013; Drennan et al., 2014; Henry et al., 2005; 127 Sabin et al., 2013; Supin et al., 1994, 1999; Won et al., 128 2007), the ripple-phase-shift detection task (Sheft et al., 129 2012; Nechaev and Supin, 2013), and the ripple modulation 130 depth detection task (Bernstein et al., 2013; Bernstein and 131 Green 1987; Litvak et al., 2007; Sabin et al., 2012a,b; 132 Summers and Leek, 1994; Zhang et al., 2013). In general, 133 these rippled noise tasks measure spectral processing across a 134 broad range of frequencies and thus cannot be postulated to 135 represent individual auditory filter bandwidths at localized 136 regions of the basilar membrane. "
    ABSTRACT: Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
    The Journal of the Acoustical Society of America 07/2015; 138(1):492. DOI:10.1121/1.4922700 · 1.56 Impact Factor
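For readers unfamiliar with the regression approach named in the abstract above, the following is a generic sketch of forward stepwise multiple linear regression on simulated data. The predictor names (SMD_2cpo, SRD, ERB), the sample size, and the 0.05 entry criterion are hypothetical stand-ins, not the published analysis.

```python
# Illustrative forward stepwise multiple linear regression on simulated data
# (hypothetical predictors standing in for SMD at 2.0 cpo, SRD, and ERB).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # hypothetical number of listeners
predictors = {
    "SMD_2cpo": rng.normal(size=n),
    "SRD": rng.normal(size=n),
    "ERB": rng.normal(size=n),
}
# Simulated speech recognition score driven mostly by the SMD measure.
speech_score = (2.0 * predictors["SMD_2cpo"] + 0.5 * predictors["SRD"]
                + rng.normal(scale=1.0, size=n))

selected, remaining = [], list(predictors)
while remaining:
    # p-value each remaining predictor would get if it entered the model now.
    pvals = {}
    for name in remaining:
        X = np.column_stack([predictors[k] for k in selected + [name]])
        fit = sm.OLS(speech_score, sm.add_constant(X)).fit()
        pvals[name] = fit.pvalues[-1]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:  # stop when no candidate meets the entry criterion
        break
    selected.append(best)
    remaining.remove(best)

print("selected predictors:", selected)
```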
    • "However, differences in the amount and pattern of perceptual learning over exposure between younger and older adults also indicate changes in the underlying processes. While younger and older listeners show the same amount of learning in the initial adaptation phase, older listeners' performance plateaus earlier in adapting to unfamiliar speech (Peelle and Wingfield, 2005; Adank and Janse, 2010), older adults show less transfer of learning to similar conditions (Peelle and Wingfield, 2005), and exhibit slower consolidation of learning (Sabin et al., 2013). Such differences illustrate that the interdependency between cognitive functions and implicit learning processes may change as a function of age. "
    ABSTRACT: Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
    Frontiers in Human Neuroscience 07/2014; 8. DOI:10.3389/fnhum.2014.00628 · 2.90 Impact Factor
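One simple way to quantify improvement in understanding noise-vocoded speech over exposure, and to relate it to statistical learning, is to fit each listener's recognition slope across the 60 sentences and correlate those slopes with the artificial-grammar score. The sketch below does this on simulated data; it is illustrative only and not the study's analysis.

```python
# Illustrative analysis on simulated data: per-listener adaptation slopes over
# 60 noise-vocoded sentences, correlated with a statistical-learning score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_listeners, n_sentences = 60, 60
sentence_idx = np.arange(n_sentences)

# Simulated accuracies (%) that improve over exposure, with the improvement
# rate loosely tied to a simulated statistical-learning score.
sl_score = rng.normal(size=n_listeners)
true_slope = 0.2 + 0.05 * sl_score
accuracy = (40.0 + true_slope[:, None] * sentence_idx
            + rng.normal(scale=5.0, size=(n_listeners, n_sentences)))

# Per-listener adaptation slope (percentage points per sentence).
slopes = np.array([np.polyfit(sentence_idx, accuracy[i], 1)[0]
                   for i in range(n_listeners)])

r, p = stats.pearsonr(sl_score, slopes)
print(f"statistical learning vs. adaptation: r = {r:.2f}, p = {p:.3f}")
```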
    • "In addition, the auditory spectrogram's filtering was made 10 percent less sharp, in order to mimic the broadening of cochlear frequency response often associated with age (Sommers and Gehr, 1998). Second, the spread of temporal (Takahashi and Bacon, 1992) and spectral (Sabin et al., 2013) modulation filters in aging was emulated by broadening the modulation filter kernels [the analogs to the Gabor (1946) transform's kernel] in the STRF model. This broadening of the two kernels is illustrated in Figure 5. "
    ABSTRACT: Pairs of harmonic complexes with different fundamental frequencies f0 (105 and 189 Hz or 105 and 136 Hz) but identical bandwidth (0.25-3 kHz) were band-pass filtered using a filter with the same 1-kHz center frequency. The filter's center frequency was modulated using a triangular wave having a 5-Hz modulation frequency fmod to obtain a pair of vowel-analog waveforms with dynamically varying single-formant transitions. The target signal S contained a single modulation cycle starting either at a phase of -π/2 (up-down) or π/2 (down-up), whereas the longer distracter N contained several cycles of the modulating triangular wave starting at a random phase. The level at which the target formant's modulating phase could be correctly identified was adaptively determined for several distracter levels and several extents of frequency swing (10-55%) in a group of experienced young normal-hearing listeners and a group of experienced elderly listeners whose hearing loss did not exceed a moderate degree. The most important result was that, for the two f0 differences, all distracter levels, and all frequency swing extents tested, elderly listeners needed about 20 dB larger S/N ratios than the young. Results also indicate that identification thresholds of both the elderly and the young listeners are between 4 and 12 dB higher than similarly determined detection thresholds and that, contrary to detection, identification is not a linear function of distracter level. Since formant transitions represent potent cues for speech intelligibility, the large S/N ratios required by the elderly for correct discrimination of single-formant transition dynamics may at least partially explain the well-documented intelligibility loss of speech in babble noise by the elderly.
    Frontiers in Neuroscience 06/2014; 8:144. DOI:10.3389/fnins.2014.00144 · 3.70 Impact Factor
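The quoted excerpt above describes emulating aging by broadening the spectro-temporal modulation filters of an STRF model. The sketch below is a toy stand-in rather than the cited model's implementation: a Gaussian-shaped modulation-domain filter centered on a temporal rate (Hz) and a spectral scale (cycles/octave), with a single broadening factor that widens its bandwidth along both modulation axes. All center values and bandwidth constants are assumptions.

```python
# Toy spectro-temporal modulation filter (not the cited model's code): a
# Gaussian in the modulation (rate x scale) domain whose bandwidth is scaled
# by a 'broadening' factor to mimic broader modulation tuning with age.
import numpy as np

def modulation_filter(rate_hz=4.0, scale_cpo=1.0, broadening=1.0,
                      rates=np.linspace(0.0, 32.0, 257),    # temporal modulation (Hz)
                      scales=np.linspace(0.0, 8.0, 129)):   # spectral modulation (cyc/oct)
    """Gaussian filter centered on (rate_hz, scale_cpo); broadening > 1
    widens its bandwidth along both modulation axes."""
    R, S = np.meshgrid(rates, scales, indexing="ij")
    bw_rate = broadening * 0.5 * rate_hz    # illustrative bandwidth choices
    bw_scale = broadening * 0.5 * scale_cpo
    return np.exp(-0.5 * ((R - rate_hz) / bw_rate) ** 2
                  - 0.5 * ((S - scale_cpo) / bw_scale) ** 2)

# 'Younger' vs. 'older' versions of the same filter differ only in bandwidth.
younger_filter = modulation_filter(broadening=1.0)
older_filter = modulation_filter(broadening=1.5)
```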