Ear and Hearing (Ear Hear)

Publisher: American Auditory Society, Lippincott, Williams & Wilkins

Journal description

From the basic science of hearing to auditory electrophysiology to amplification and the psychological factors of hearing loss, Ear and Hearing covers all aspects of auditory disorders. This multidisciplinary journal consolidates the various factors that contribute to identification, remediation, and audiologic rehabilitation. It is the one journal that serves the diverse interests of all members of this professional community: otologists, educators, and those involved in the design, manufacture, and distribution of amplification systems. The original articles published in the journal focus on assessment, diagnosis, and management of auditory disorders.

Current impact factor: 2.84

Impact Factor Rankings

2016 Impact Factor Available summer 2017
2014 / 2015 Impact Factor 2.842
2013 Impact Factor 2.833
2012 Impact Factor 3.262
2011 Impact Factor 2.578
2010 Impact Factor 2.257
2009 Impact Factor 2.091
2008 Impact Factor 2.182
2007 Impact Factor 2.057
2006 Impact Factor 1.858
2005 Impact Factor 2.255
2004 Impact Factor 2.302
2003 Impact Factor 1.45
2002 Impact Factor 1.281
2001 Impact Factor 1.321
2000 Impact Factor 1.506
1999 Impact Factor 1.169
1998 Impact Factor 1.037
1997 Impact Factor 1.591


Additional details

5-year impact 3.11
Cited half-life 8.50
Immediacy index 0.52
Eigenfactor 0.01
Article influence 0.98
Website Ear and Hearing website
Other titles Ear and hearing
ISSN 0196-0202
OCLC 5731857
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Lippincott, Williams & Wilkins

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author cannot archive a post-print version
  • Restrictions
    • 12-month embargo
  • Conditions
    • Some journals have separate policies, please check with each journal directly
    • Pre-print must be removed upon acceptance for publication
    • Post-print may be deposited in personal website or institutional repository
    • Publisher's version/PDF cannot be used
    • Must include statement that it is not the final published version
    • Published source must be acknowledged with full citation
    • Set statement to accompany deposit
    • Must link to publisher version
    • NIH authors will have their accepted manuscripts transmitted to PubMed Central on their behalf after a 12-month embargo (see policy for details)
    • Wellcome Trust and HHMI authors will have their accepted manuscripts transmitted to PubMed Central on their behalf after a 6-month embargo (see policy for details)
    • Publisher last reviewed on 19/03/2015
  • Classification
    yellow

Publications in this journal

  •
    ABSTRACT: Objectives: Hearing screening programs may benefit adults with unacknowledged or unaddressed hearing loss, but there is limited evidence regarding whether such programs are effective at improving health outcomes. The objective was to determine if poorer audiometric hearing thresholds are associated with poorer cognition, social isolation, burden of physical or mental health, inactivity due to poor physical or mental health, depression, and overnight hospitalizations among older American adults with unacknowledged or unaddressed hearing loss. Design: The authors performed a cross-sectional population-based analysis of older American adults with normal hearing or unacknowledged or unaddressed hearing loss. Data were obtained from the 1999 to 2010 cycles of the National Health and Nutrition Examination Survey. Participants with a pure-tone average (PTA; the average of thresholds at 0.5, 1, 2, and 4 kHz in the better hearing ear) > 25 dB HL who self-reported their hearing ability to be "good" or "excellent" were categorized as having "unacknowledged" hearing loss. Those who had a PTA > 25 dB HL and who self-reported hearing problems but had never had a hearing test or worn a hearing aid were categorized as having "unaddressed" hearing loss. Multivariate regression was performed to account for confounding due to demographic and health variables. Results: A 10 dB increase in PTA was associated with a 52% increase in the odds of social isolation among 60- to 69-year-olds in multivariate analyses (p = 0.001). The average Digit Symbol Substitution Test score dropped by 2.14 points per 10 dB increase in PTA (p = 0.03), a magnitude equivalent to the drop expected for 3.9 years of chronological aging. PTA was not significantly associated with falls, hospitalizations, burden of physical or mental health, or depression, nor with social isolation among those aged 70 years or older in these samples.
Conclusion: Unacknowledged or unaddressed hearing loss was associated with a significantly increased risk of social isolation among 60- to 69-year-olds but not those 70 years or older. It was also associated with lower cognitive scores on the Digit Symbol Substitution Test among 60- to 69-year-olds. This study differs from prior studies by focusing specifically on older adults who have unacknowledged or unaddressed hearing loss because they are the most likely to benefit from pure-tone hearing screening. The finding of associations between hearing loss and measures of social isolation and cognition in these specific samples extends previous findings on unrestricted samples of older adults including those who had already acknowledged hearing problems. Future randomized controlled trials measuring the effectiveness of adult hearing screening programs should measure whether interventions have an effect on these measures in those who have unacknowledged or unaddressed pure-tone hearing loss.
    No preview · Article · Jan 2016 · Ear and Hearing
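The categorization described in the abstract above is mechanical: the better-ear pure-tone average (PTA) is the mean of the 0.5, 1, 2, and 4 kHz thresholds, and a PTA above 25 dB HL combines with the two self-report answers to define the groups. A minimal sketch of that logic; the function names and inputs are illustrative, not from the study.

```python
def pta(thresholds_db_hl):
    """Pure-tone average: mean of the better-ear thresholds (dB HL) at 0.5, 1, 2, and 4 kHz."""
    assert len(thresholds_db_hl) == 4
    return sum(thresholds_db_hl) / 4.0

def classify(better_ear_thresholds, self_reports_good_hearing, ever_tested_or_aided):
    """Apply the study's definitions of unacknowledged vs. unaddressed hearing loss."""
    if pta(better_ear_thresholds) <= 25:
        return "no hearing loss by PTA"
    if self_reports_good_hearing:
        return "unacknowledged"        # PTA > 25 dB HL but hearing rated "good"/"excellent"
    if not ever_tested_or_aided:
        return "unaddressed"           # problems acknowledged, but never tested or aided
    return "acknowledged and addressed"

print(classify([30, 35, 40, 45], self_reports_good_hearing=True, ever_tested_or_aided=False))
# → unacknowledged
```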
  •
    ABSTRACT: Objectives: To evaluate whether monothermal caloric screening can reduce the number of caloric irrigations required in the vestibular testing battery while maintaining diagnostic accuracy. Design: Prospective controlled cohort study. Three hundred and ninety patients referred for vestibular testing at this tertiary referral health system over a 1-year period were evaluated; 24 patients met exclusion or failure criteria and 366 patients were included in the study. The population was 35.6% male; the average age was 50.4 years. Each patient underwent caloric testing using either warm or cool water irrigation initially, and these data were used as the monothermal screening data. All patients then completed bithermal binaural caloric testing to obtain the "gold standard" bithermal data for comparison. The sensitivity and specificity of monothermal cool or monothermal warm caloric tests were calculated using a receiver operating characteristic curve analysis. Results: Using a monothermal interear difference threshold of 25%, warm monothermal screening had a sensitivity of 98.0%, a specificity of 91.3%, a false negative rate of 2%, and a false positive rate of 8.7%. Cool monothermal screening also had excellent sensitivity (92.3%) and specificity (95.3%), with a false negative rate of 7.7% and a false positive rate of 4.7%. The diagnosis associated with the single false negative warm monothermal caloric test was compensated vestibular paresis. In the study population, 71.9% had a negative monothermal screen; if the monothermal data were accepted, 2 fewer irrigations would have been performed, resulting in an average saving of $264 (the typical Medicare reimbursement for 2 irrigations) billed per patient screened, as well as shortening the average testing battery by about 15 min. Conclusions: Warm monothermal caloric screening can reduce the time and cost of vestibular testing while nearly matching the diagnostic accuracy of bithermal testing.
    No preview · Article · Jan 2016 · Ear and Hearing
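The screening statistics quoted above follow directly from a 2x2 comparison against the bithermal gold standard: the false negative rate is 1 minus sensitivity, and the false positive rate is 1 minus specificity. A small sketch with hypothetical counts (chosen only to reproduce the warm-screen percentages, not the study's raw data):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and error rates from screening-vs-gold-standard counts."""
    sensitivity = tp / (tp + fn)   # proportion of true abnormals caught by the screen
    specificity = tn / (tn + fp)   # proportion of true normals passed by the screen
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_negative_rate": 1.0 - sensitivity,
        "false_positive_rate": 1.0 - specificity,
    }

# Hypothetical counts that yield the reported warm-screen figures (98.0% / 91.3%):
m = screening_metrics(tp=49, fn=1, tn=210, fp=20)
```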

  •
    ABSTRACT: Objectives: Postlingually deaf cochlear implant users' speech perception improves over several months after implantation due to a learning process which involves integration of the new acoustic information presented by the device. Basic tests of hearing acuity might evaluate sensitivity to the new acoustic information and be less sensitive to learning effects. It was hypothesized that, unlike speech perception, basic spectral and temporal discrimination abilities will not change over the first year of implant use. If there were limited change over time and the test scores were correlated with clinical outcome, the tests might be useful for acute diagnostic assessments of hearing ability and also useful for testing speakers of any language, many of which do not have validated speech tests. Design: Ten newly implanted cochlear implant users were tested for speech understanding in quiet and in noise at 1 and 12 months postactivation. Spectral-ripple discrimination, temporal-modulation detection, and Schroeder-phase discrimination abilities were evaluated at 1, 3, 6, 9, and 12 months postactivation. Results: Speech understanding in quiet improved between 1 and 12 months postactivation (mean 8% improvement). Speech in noise performance showed no statistically significant improvement. Mean spectral-ripple discrimination thresholds and temporal-modulation detection thresholds for modulation frequencies of 100 Hz and above also showed no significant improvement. Spectral-ripple discrimination thresholds were significantly correlated with speech understanding. Low FM detection and Schroeder-phase discrimination abilities improved over the period. Individual learning trends varied, but the majority of listeners followed the same stable pattern as group data. 
Conclusions: Spectral-ripple discrimination ability and temporal-modulation detection at 100-Hz modulation and above might serve as a useful diagnostic tool for early acute assessment of cochlear implant outcome for listeners speaking any native language.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objective: The objective of this study was to investigate the impact of using smaller and larger electric dynamic ranges on speech perception, aided thresholds, and subjective preference in cochlear implant (CI) subjects with the Nucleus® device. Design: Data were collected from 19 adults using the Nucleus CI system. Current levels (CLs) used to set threshold stimulation levels (T-levels) were set above or below the measured hearing thresholds to create smaller or larger electric output dynamic ranges, respectively, whereas the upper stimulation level (C-level) was fixed. The base (unadjusted) condition was compared against two conditions with higher T-levels (compression), by 30% and 60% of the measured hearing dynamic range, and three conditions with lower T-levels (expansion), by 30%, 60%, and 90% of the measured hearing dynamic range. For each subject, the clinical CL units were adjusted on each electrode to achieve these conditions. The slow-acting dynamic acoustic gains of ADRO® and Autosensitivity™ were enabled. Consonant-nucleus-consonant (CNC) word scores were measured in quiet at 50 dB and 60 dB SPL presentation levels. The signal-to-noise ratios (SNRs) for 50% understanding of sentences in noise were measured for sentences presented at 55 dB and 65 dB SPL in 4-talker babble noise. Free-field aided thresholds were measured at octave frequencies using frequency-modulated (warble) tones. Thirteen of the 19 subjects had take-home experience with the base and experimental conditions and provided subjective feedback via a questionnaire. Results: There were no significant effects of 30% expansion and 30% compression of the electric dynamic range on scores for words in quiet and SNRs for sentences in noise, at the two presentation levels. There was a significant decrement in scores for words in quiet for 60% and 90% expansion compared with the base condition at the 50 dB and 60 dB SPL presentation levels. 
The score decrement was much less at 60 dB SPL. For the 50 dB SPL presentation level, the decrements in word scores at 60% and 90% expansion were linearly related to the reduction in CL units required to achieve these experimental conditions, with a greater decrement in scores for a larger CL change. There was a significant increase in SNR for sentences in noise for 60% compression compared with the base condition at the 55 dB and 65 dB SPL presentation levels. There was also a significant increase in SNR for sentences at the 55 dB SPL presentation level for 90% expansion. Aided thresholds were significantly elevated for the three expansion conditions compared with the base condition, although the mean elevation at 30% expansion was only 4 dB. The questionnaire results showed no clear preference for any condition; however, subjects reported a reduced preference for the extreme compression (60%) and expansion (90%) conditions. Conclusions: The results showed that CI subjects using the Nucleus sound processor had no significant change in performance or preference for adjustments in T-levels by ±30% of the hearing dynamic range. In quiet, speech perception scores were reduced for the more marked expansion (60% and 90%) conditions, whereas in noise, performance was poorer for the highest compression (60%) condition. Across subjects, the decrement in scores for words at 50 dB SPL for the 60% and 90% expansion conditions was related to the changes in CL units required for these conditions, with greater decrements for larger changes in levels.
    No preview · Article · Dec 2015 · Ear and Hearing
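The T-level manipulation above can be written as shifting T by a signed fraction of a dynamic range while the C-level stays fixed: positive fractions raise T (compression, a smaller electric output range), negative fractions lower it (expansion, a larger range). A hedged sketch in arbitrary clinical current-level units; note the study defined its fractions relative to the measured hearing dynamic range, whereas here one range stands in for both, purely for illustration, and this is not the Nucleus fitting software's actual API.

```python
def adjusted_t_level(t_base, c_level, fraction):
    """Shift the T-level by a signed fraction of the base dynamic range (C - T).

    fraction > 0 raises T toward C (compression of the electric output range);
    fraction < 0 lowers T below the base level (expansion).
    The C-level is held fixed, as in the study.
    """
    return t_base + fraction * (c_level - t_base)

# Example with T = 100 and C = 200 (arbitrary CL units):
adjusted_t_level(100, 200, +0.30)  # 130.0: the 30% compression condition
adjusted_t_level(100, 200, -0.60)  # 40.0: the 60% expansion condition
```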
  •
    ABSTRACT: Objectives: This study used vocoder simulations with normal-hearing (NH) listeners to (1) measure their ability to integrate speech information from an NH ear and a simulated cochlear implant (CI), and (2) investigate whether binaural integration is disrupted by a mismatch in the delivery of spectral information between the ears arising from a misalignment in the mapping of frequency to place. Design: Eight NH volunteers participated in the study and listened to sentences embedded in background noise via headphones. Stimuli presented to the left ear were unprocessed. Stimuli presented to the right ear (referred to as the CI-simulation ear) were processed using an eight-channel noise vocoder with one of the three processing strategies. An Ideal strategy simulated a frequency-to-place map across all channels that matched the delivery of spectral information between the ears. A Realistic strategy created a misalignment in the mapping of frequency to place in the CI-simulation ear where the size of the mismatch between the ears varied across channels. Finally, a Shifted strategy imposed a similar degree of misalignment in all channels, resulting in consistent mismatch between the ears across frequency. The ability to report key words in sentences was assessed under monaural and binaural listening conditions and at signal to noise ratios (SNRs) established by estimating speech-reception thresholds in each ear alone. The SNRs ensured that the monaural performance of the left ear never exceeded that of the CI-simulation ear. The advantages of binaural integration were calculated by comparing binaural performance with monaural performance using the CI-simulation ear alone. Thus, these advantages reflected the additional use of the experimentally constrained left ear and were not attributable to better-ear listening. Results: Binaural performance was as accurate as, or more accurate than, monaural performance with the CI-simulation ear alone. 
When both ears supported a similar level of monaural performance (50%), binaural integration advantages were found regardless of whether a mismatch was simulated or not. When the CI-simulation ear supported a superior level of monaural performance (71%), evidence of binaural integration was absent when a mismatch was simulated using both the Realistic and the Shifted processing strategies. This absence of integration could not be accounted for by ceiling effects or by changes in SNR. Conclusions: If generalizable to unilaterally deaf CI users, the results of the current simulation study would suggest that benefits to speech perception in noise can be obtained by integrating information from an implanted ear and an NH ear. A mismatch in the delivery of spectral information between the ears due to a misalignment in the mapping of frequency to place may disrupt binaural integration in situations where both ears cannot support a similar level of monaural speech understanding. Previous studies that have measured the speech perception of unilaterally deaf individuals after CI but with nonindividualized frequency-to-electrode allocations may therefore have underestimated the potential benefits of providing binaural hearing. However, it remains unclear whether the size and nature of the potential incremental benefits from individualized allocations are sufficient to justify the time and resources required to derive them based on cochlear imaging or pitch-matching tasks. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objective: This study aimed to (1) characterize temporal response properties of the auditory nerve in implanted children with auditory neuropathy spectrum disorder (ANSD), and (2) compare results recorded in implanted children with ANSD with those measured in implanted children with sensorineural hearing loss (SNHL). Design: Participants included 28 children with ANSD and 29 children with SNHL. All subjects used Cochlear Nucleus devices in their test ears. Both ears were tested in 6 children with ANSD and 3 children with SNHL. For all other subjects, only one ear was tested. The electrically evoked compound action potential (ECAP) was measured in response to each of the 33 pulses in a pulse train (excluding the second pulse) for one apical, one middle-array, and one basal electrode. The pulse train was presented in a monopolar-coupled stimulation mode at 4 pulse rates: 500, 900, 1800, and 2400 pulses per second. Response metrics included the averaged amplitude, the latencies of response components, the response width, the alternating depth, and the amount of neural adaptation. These dependent variables were quantified based on the last six ECAPs or the six ECAPs occurring within a time window centered around 11 to 12 msec. A generalized linear mixed model was used to compare these dependent variables between the 2 subject groups. The slope of the linear fit of the normalized ECAP amplitudes (re. amplitude of the first ECAP response) over the duration of the pulse train was used to quantify the amount of ECAP increment over time for a subgroup of 9 subjects. Results: Pulse train-evoked ECAPs were measured in all but 8 subjects (5 with ANSD and 3 with SNHL). ECAPs measured in children with ANSD had smaller amplitudes, longer averaged P2 latencies, and greater response widths than those of children with SNHL. However, differences between these two groups were only observed for some electrodes.
No differences in averaged N1 latency or in the alternating depth were observed between children with ANSD and children with SNHL. Neural adaptation measured in these 2 subject groups was comparable for relatively short durations of stimulation (i.e., 11 to 12 msec). Children with ANSD showed greater neural adaptation than children with SNHL for a longer duration of stimulation. Amplitudes of ECAP responses rapidly declined within the first few milliseconds of stimulation, followed by a gradual decline up to 64 msec after stimulus onset in the majority of subjects. This decline exhibited an alternating pattern at some pulse rates. Further increases in pulse rate diminished this alternating pattern. In contrast, ECAPs recorded from at least one stimulating electrode in six ears with ANSD and three ears with SNHL showed a clear increase in amplitude over the time course of stimulation. The slope of linear regression functions measured in these subjects was significantly greater than zero. Conclusions: Some but not all aspects of temporal response properties of the auditory nerve measured in this study differ between implanted children with ANSD and implanted children with SNHL. These differences are observed for some but not all electrodes. A new neural response pattern is identified. Further studies investigating its underlying mechanism and clinical relevance are warranted.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objectives: (1) To characterize the influence of type 2 diabetes mellitus (DM) on cortical auditory-evoked potentials (CAEPs) separate from the effects of normal aging, and (2) to determine whether the disease-related effects are modified by insulin dependence. Design: A cross-sectional study was conducted in a large cohort of Veterans to investigate the relationships among type 2 DM, age, and CAEPs in randomly selected participants with (N = 108) and without (N = 114) the disease and who had no more than a moderate hearing loss. Participants with DM were classified as insulin-dependent (IDDM, N = 47) or noninsulin-dependent (NIDDM, N = 61). Other DM measures included concurrent serum glucose, HbA1c, and duration of disease. CAEPs were evoked using a passive homogeneous paradigm (single repeating stimulus) by suprathreshold tones presented to the right ear, left ear, or both ears. Outcome measures were adjusted for the pure-tone threshold average for frequencies of 0.5, 1, and 2 kHz and analyzed for differences in age effects between participant groups using multiple regression. Results: There is little variation across test ear conditions (left, right, binaural) on any CAEP peak in any of the groups. Among no-DM controls, P2 latency increases about 9 msec per decade of life. DM is associated with an additional delay in the P2 latency of 7 and 9 msec for the IDDM and NIDDM groups, respectively. Moreover, the slope of the function relating P2 latency with age is similar across participant groups and thus the DM effect appears constant across age. Effects on N1 latency are considerably weaker, with age effects of less than 4 msec per decade across all groups, and DM effects of only 2 (IDDM) or 3 msec (NIDDM). In the NIDDM group, the slope relating N1 latency to age is steeper relative to that observed for the no-DM group, providing some evidence of accelerated "aging" for this CAEP peak. 
DM does not substantially reduce N1-P2 amplitude and age relationships with N1-P2 amplitude are effectively absent. There is no association between pure-tone average at 0.5, 1, and 2 kHz and any aspect of CAEPs in this cohort. Conclusions: In a large cohort of Veterans, we found that type 2 DM is associated with prolonged N1 and P2 latencies regardless of whether insulin is required to manage the disease and independent of peripheral hearing thresholds. The DM-related effects on CAEP latencies are threefold greater for P2 compared with N1, and there is little support that at the cortical level, IDDM participants had poorer responses compared with NIDDM participants, although their responses were more variable. Overall, these results indicate that DM is associated with slowed preattentive neural conduction. Moreover, the observed 7 to 9 msec P2 latency delay due to DM is substantial compared with normal age changes in P2, which are 9 msec per decade of life in this cohort. Results also suggest that whereas N1 latency changes with age are more pronounced among individuals with DM versus without DM, there was no evidence for more rapid aging of P2 among patients with DM. Thus, the damage responsible for the major DM-related differences may occur early in the DM disease process. These cross-sectional results should be verified using a longitudinal study design.
    No preview · Article · Dec 2015 · Ear and Hearing
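The comparison drawn in the conclusion above, a DM-related P2 delay of 7 to 9 msec set against a normal-aging slope of about 9 msec per decade, is a simple unit conversion. A sketch of that arithmetic; the function name is ours, not the paper's.

```python
def equivalent_aging_years(delay_msec, slope_msec_per_decade=9.0):
    """Convert a latency delay into the years of normal aging that would produce
    the same P2 shift, given the cohort's ~9 msec-per-decade slope."""
    return delay_msec / slope_msec_per_decade * 10.0

equivalent_aging_years(9.0)  # 10.0: a 9 msec DM-related delay matches about a decade of aging
```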
  •
    ABSTRACT: Objectives: The purpose of this study was to improve bimodal benefit in listeners using a cochlear implant (CI) and a hearing aid (HA) in contralateral ears, by matching the time constants and the number of compression channels of the automatic gain control (AGC) of the HA to the CI. Equivalent AGC was hypothesized to support a balanced loudness for dynamically changing signals like speech and to improve bimodal benefit for speech understanding in quiet and with noise presented from the side(s) at 90 degrees. Design: Fifteen subjects participated in the study, all using the same Advanced Bionics Harmony CI processor and HA (Phonak Naida S IX UP). In a 3-visit crossover design with 4 weeks between sessions, performance was measured using a HA with a standard AGC (syllabic multichannel compression with 1 msec attack time and 50 msec release time) or an AGC that was adjusted to match that of the CI processor (dual AGC broadband compression, 3 and 240 msec attack time, 80 and 1500 msec release time). In all devices, the AGC was activated above a threshold of 63 dB SPL. The authors balanced loudness across the devices for soft and loud input sounds in 3 frequency bands (0 to 548, 548 to 1000, and >1000 Hz). Speech understanding was tested in free field in quiet and in noise for three spatial speaker configurations, with target speech always presented from the front. Single-talker noise was presented from either the CI side or the HA side, or uncorrelated stationary speech-weighted noise or single-talker noise was presented from both sides. Questionnaires were administered to assess differences in perception between the two bimodal fittings. Results: Significant bimodal benefit over the CI alone was only found for the AGC-matched HA in the speech tests with single-talker noise. Compared with the standard HA, matched AGC characteristics significantly improved speech understanding in single-talker noise by 1.9 dB when noise was presented from the HA side.
AGC matching increased bimodal benefit nonsignificantly, by 0.6 dB when noise was presented from the CI-implanted side, and by 0.8 dB (single-talker noise) and 1.1 dB (stationary noise) in the more complex configurations with two simultaneous maskers from both sides. In questionnaires, subjects rated the AGC-matched HA higher than the standard HA for understanding one person in quiet and in noise, and for the quality of sounds. When listening to a slightly raised voice, subjects indicated increased listening comfort with matched AGCs. At the end of the study, 9 of 15 subjects preferred to take home the AGC-matched HA, 1 preferred the standard HA, and 5 had no preference. Conclusion: For bimodal listening, the AGC-matched HA outperformed the standard HA in speech understanding in noise tasks using a single competing talker, and it was favored in questionnaires and in a subjective preference test. When noise was presented from the HA side, AGC matching resulted in an additional benefit of 1.9 dB SNR, even though the HA was on the least favorable SNR side in this speaker configuration. Our results possibly suggest better binaural processing with matched AGCs.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objectives: Formant rise time (FRT) and amplitude rise time (ART) are acoustic cues that inform phonetic identity. FRT represents the rate of transition of the formant(s) to a steady state, while ART represents the rate at which the sound reaches its peak amplitude. Normal-hearing (NH) native English speakers weight FRT more than ART during the perceptual labeling of the /ba/-/wa/ contrast. This weighting strategy is reflected neurophysiologically in the magnitude of the mismatch negativity (MMN); the MMN is larger during the FRT than the ART distinction. The present study examined the neurophysiological basis of acoustic cue weighting in adult cochlear implant (CI) listeners using the MMN design. It was hypothesized that individuals with CIs who weight ART more in behavioral labeling (ART users) would show larger MMNs during the ART than the FRT contrast, and the opposite would be seen for FRT users. Design: Electroencephalography was recorded while 20 adults with CIs listened passively to combinations of 3 synthetic speech stimuli: a /ba/ with /ba/-like FRT and ART; a /wa/ with /wa/-like FRT and ART; and a /ba/ stimulus with /ba/-like FRT and /wa/-like ART. The MMN response was elicited during the FRT contrast by having participants passively listen to a train of /wa/ stimuli interrupted occasionally by /ba/ stimuli, and vice versa. For the ART contrast, the same procedure was implemented using the two /ba/ stimuli, which differed only in ART. Results: Both ART and FRT users with CIs elicited MMNs that were equal in magnitude during FRT and ART contrasts, with the exception that FRT users exhibited MMNs for ART and FRT contrasts that were temporally segregated. That is, their MMNs occurred significantly earlier during the ART contrast (~100 msec following sound onset) than during the FRT contrast (~200 msec). In contrast, the MMNs of ART users for both contrasts occurred later and were not significantly separable in time (~230 msec).
Interestingly, this temporal segregation observed in FRT users is consistent with the MMN behavior of NH listeners. Conclusions: The results suggest that listeners with CIs who learn to classify phonemes based on formant dynamics, as NH listeners do, develop a strategy similar to that of NH listeners, in which the amplitude and spectral representations of phonemes in auditory memory are temporally segregated.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objectives: The purpose of this study is to assess the use of prosodic and contextual cues to focus by prelingually deaf adolescent users of cochlear implants (CIs) when identifying target phonemes. We predict that CI users will have slower reaction times to target phonemes compared with a group of normal-hearing (NH) peers. We also predict that reaction times will be faster when both prosodic and contextual (semantic) cues are provided. Design: Eight prelingually deaf adolescent users of CIs and 8 NH adolescents completed 2 phoneme-monitoring experiments. Participants were aged between 13 and 18 years. The mean age at implantation for the CI group was 1.8 years (SD: 1.0). In the prosodic condition, reaction times to a target phoneme in a linguistically focused (i.e., stressed) word were compared between the two groups. The semantic condition compared reaction times to target phonemes when contextual cues to focus were provided in addition to prosodic cues. Results: Reaction times of the CI group were slower than those of the NH group in both the prosodic and semantic conditions. A linear mixed model was used to compare reaction times using Group as a fixed factor and Phoneme and Subject as random factors. When only prosodic cues (prosodic condition) to focus location were provided, the mean reaction time of the CI group was 512 msec compared with 317 msec for the NH group, and this difference was significant (p < 0.001). The provision of contextual cues speeded reaction times for both groups (semantic condition), indicating that top-down processing aided both groups in their search for a focused item. However, even with both prosodic and contextual cues, the CI users' processing times remained slower compared with the NH group, with mean reaction times of 385 msec for the CI users but 232 msec for the NH listeners (p < 0.001).
Conclusions: Prelingually deaf CI users' processing of prosodic cues is less efficient than that of their NH peers, as evidenced by slower reaction times to targets in phoneme monitoring. The provision of contextual cues speeded reaction times for both NH and CI groups, although the CI users were slower in responding than the NH group. These findings contribute to our understanding of how CI users employ/integrate prosodic and semantic cues in speech processing.
    No preview · Article · Dec 2015 · Ear and Hearing
  •
    ABSTRACT: Objectives: Shifting the mean fundamental frequency (F0) of target speech down in frequency may be a way to provide the benefits of electric-acoustic stimulation (EAS) to cochlear implant (CI) users whose limited residual hearing typically precludes a benefit, even with amplification. However, a previous study showed a decline in the amount of benefit at the greatest downward frequency shifts, and the authors hypothesized that this might be related to F0 variation. Thus, in the present study, the authors sought to determine the relationship between mean F0, F0 variation, and the benefits of combining electric stimulation from a CI with low-frequency residual acoustic hearing. Design: The authors measured speech intelligibility in normal-hearing listeners using an EAS simulation consisting of a sine vocoder combined either with speech low-pass filtered at 500 Hz, or with a pure tone representing the target F0. The authors used extracted target voice pitch information to modulate the tone, and manipulated both the frequency of the carrier (mean F0) and the standard deviation of the voice pitch information (F0 variation). Results: A decline in EAS benefit was observed at the lowest mean F0 tested, but this decline disappeared when F0 variation was reduced in proportion to the amount of the shift in frequency (i.e., when F0 was shifted logarithmically instead of linearly). Conclusion: Lowering mean F0 by shifting the frequency of a pure tone carrying target voice pitch information can provide as much EAS benefit as an unshifted tone, at least in the current simulation of EAS. These results may have implications for CI users with extremely limited residual acoustic hearing.
    No preview · Article · Nov 2015 · Ear and Hearing
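The linear-versus-logarithmic distinction in the abstract above can be made concrete: a linear (subtractive) shift preserves the F0 spread in Hz, so variation grows relative to the lowered mean, while a logarithmic (multiplicative) shift scales the spread down with the mean and keeps it proportional. A minimal sketch with illustrative numbers, not the study's stimuli:

```python
def shift_f0_linear(f0_hz, mean_f0, target_mean):
    """Subtract a fixed offset: the Hz spread around the mean is unchanged,
    so F0 variation becomes proportionally larger at the lower mean."""
    return f0_hz - (mean_f0 - target_mean)

def shift_f0_log(f0_hz, mean_f0, target_mean):
    """Scale by the ratio of means: the spread shrinks with the shift,
    keeping F0 variation proportional to the new mean."""
    return f0_hz * (target_mean / mean_f0)

# A mean F0 of 100 Hz shifted down to 50 Hz; a pitch excursion to 110 Hz maps to:
shift_f0_linear(110.0, 100.0, 50.0)  # 60.0 -> now 20% above the new mean
shift_f0_log(110.0, 100.0, 50.0)     # 55.0 -> still 10% above, as at the original mean
```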