Laryngoscope Investigative Otolaryngology
© 2016 The Authors. Laryngoscope Investigative Otolaryngology published by Wiley Periodicals, Inc. on behalf of The Triological Society
Non-auditory Neurocognitive Skills Contribute to Speech Recognition in Adults With Cochlear Implants
Aaron C. Moberly, MD; Derek M. Houston, PhD; Irina Castellanos, PhD
Objective: Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to determine whether non-auditory neurocognitive skills could explain speech recognition variability exhibited by adult CI users.
Study Design: Thirty postlingually deafened adults with CIs and thirty age-matched normal-hearing (NH) controls were enrolled.
Methods: Participants were assessed for recognition of words in sentences in noise and several non-auditory measures of neurocognitive function. These non-auditory tasks assessed global intelligence (problem-solving), controlled fluency, working memory, and inhibition-concentration abilities.
Results: For CI users, faster response times during a non-auditory task of inhibition-concentration predicted better recognition of sentences in noise; however, similar effects were not evident for NH listeners.
Conclusions: Findings from this study suggest that inhibition-concentration skills play a role in speech recognition for CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel target for intervention.
Key Words: cochlear implants, sensorineural hearing loss, speech perception.
INTRODUCTION
Although cochlear implants (CIs) are effective in restoring access to auditory input for adults with acquired hearing loss, the benefits to speech recognition are not consistent across patients. Average speech recognition after implantation is approximately 70% correct words in sentences in quiet, with generally poorer performance in noise. Some patients experience minimal speech recognition benefit after implantation, while others achieve scores near 100% in quiet.1–4 This variability in outcomes presents a challenge for healthcare providers. Identifying factors that explain outcome variability, along with factors that can be used to prognosticate postoperative outcomes, may help us to better counsel patients as well as to identify novel targets for clinical intervention for poorly performing patients.
Most research on postlingually deaf adults with CIs has focused on improving "bottom-up" auditory sensitivity to the spectral and temporal properties of speech signals through better CI hardware, processing, and stimulation parameters.3,5–7 However, there is increasing evidence that "top-down" neurocognitive mechanisms, broadly defined here as the use of language knowledge and executive control during intentional and goal-directed behavior, contribute to speech recognition outcomes.8–10 During spoken language recognition, the listener must use neurocognitive skills to make sense of the incoming speech signal, relating it to linguistic representations in long-term memory.11,12 These neurocognitive processes appear to be especially important when the bottom-up sensory input is degraded (e.g., in noise, when using a hearing aid, or when listening to the degraded signals transmitted by a CI); degraded input leads to greater ambiguity in how the information within that input should be organized perceptually. Under these degraded listening conditions, sufficient neurocognitive resources are required for successful speech recognition.
A number of neurocognitive skills have been examined previously for their effects on speech recognition in adults with lesser degrees of hearing loss. Some listeners may be better able to make sense of degraded speech by being able to more effectively store and integrate new information with information presented earlier, or by being able to do so more rapidly. In general, measures of verbal working memory, a limited-capacity temporary storage mechanism for holding and processing information, have been found to be successful predictors of speech recognition under degraded or challenging listening conditions.8 On the other hand, general scholastic abilities (e.g., standardized test scores or grade point average), tests of IQ, and measures of simple reaction time have typically failed to demonstrate significant associations with speech recognition performance.13–15
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
From the Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
Editor's Note: This Manuscript was accepted for publication 8 October 2016.
Data from this study were presented at the 2016 Triological Society annual meeting of the Combined Otolaryngology Spring Meetings (COSM), May 20-21, 2016, in Chicago, IL.
Financial Disclosures: Research reported in this publication was supported by the Triological Society Career Development Award and the American Speech-Language-Hearing Foundation Speech Science Award to Aaron Moberly. Normal-hearing participants were recruited through ResearchMatch, which is funded by the NIH Clinical and Translational Science Award (CTSA) program, grants UL1TR000445 and 1U54RR032646-01.
Conflicts of Interest: None
Correspondence: Aaron C. Moberly, Division of Otology, Neurotology, & Cranial Base Surgery, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, OH 43212. E-mail: Aaron.Moberly@osumc.edu
DOI: 10.1002/lio2.38
Laryngoscope Investigative Otolaryngology 00: Month 2016 Moberly et al.: Neurocognitive skills and speech recognition
When it comes to CI users, much less is known regarding the role of neurocognitive processes during speech recognition. In an early study of predictors of speech recognition performance in 29 adults with early-generation multichannel CIs, scores on a Visual Monitoring Task (requiring a rapid response to digits displayed on a computer screen when a specified pattern was produced) and a visual Sequence Learning Task (a written task of rapid detection and completion of a sequence of characters) accounted for 10 to 31% of variance in speech recognition measures.16 Follow-up studies in a larger group of 48 adults receiving CIs developed a preoperative predictive index using multivariate regression modeling, incorporating duration of deafness, speech-reading ability, residual hearing function, measures of compliance with treatment, and cognitive ability.17,18 In their combined multivariate regression analysis, scores on the Visual Monitoring Task predicted approximately 5 to 20% of the variance in speech recognition outcomes. Interestingly, a more recent study demonstrated that Visual Monitoring Task scores significantly predicted accuracy in music recognition, suggesting similar neurocognitive demands in tasks of speech recognition and music perception.19
These early studies of predictors of speech recognition performance in CI users supported a role for rapid processing of sequentially presented stimuli. The first goal of the current study was to examine several other neurocognitive abilities in a group of adult CI users. The study was designed to test the hypothesis that non-auditory neurocognitive skills contribute to sentence recognition scores in postlingually deafened adult CI users. Several neurocognitive skills likely come into play when performing a task of sentence recognition under degraded listening conditions. In particular, these skills include sustaining controlled attention to the task,20 exerting controlled fluency (the ability to process stimuli rapidly under concentration demands),21 and exerting inhibition-concentration (the ability to concentrate on information relevant to the task while suppressing prepotent or automatic responses not relevant to the task).22 Support for the role of inhibitory control comes from studies demonstrating that reductions in older adults' abilities to ignore task-irrelevant information are an important contributor to their difficulty recognizing words in noise.23–26 Inhibitory processes may also facilitate the identification of correct lexical items and inhibit incorrect responses.27
In CI users, neurocognitive abilities may play an even greater role in speech recognition than they do for individuals with normal hearing (NH) listening under degraded conditions (e.g., in noise), because CI users face even greater degradation of the spectro-temporal details of speech delivered by their implants. The second goal of this study was to examine whether the relations among non-auditory measures of neurocognitive skills and sentence recognition differed between CI users and age-matched NH peers listening to sentences in noise.
To address these goals, a group of postlingually deafened, experienced adult CI users and a group of age-matched NH peers were tested using several measures of recognition of words in sentences, along with non-auditory measures of neurocognitive function, including global fluid intelligence (problem-solving), working memory, controlled fluency, and inhibition-concentration abilities. Neurocognitive scores were analyzed for their relationships with sentence recognition.
Addressing these two goals should have clinical ramifications: identifying neurocognitive factors that contribute to speech recognition outcomes, and that can be tested in a non-auditory fashion, could suggest novel diagnostic predictors of outcomes for patients considering cochlear implantation. Moreover, findings could suggest potential neurocognitive intervention targets for poorly performing patients.
MATERIALS AND METHODS
Participants
Sixty adults were enrolled. Thirty were experienced CI users between the ages of 50 and 82 years, recruited from the Otolaryngology department at The Ohio State University. Implant users had varying etiologies of hearing loss and ages at implantation; however, all CI users had progressive declines in hearing during adulthood, and all received their implants at the age of 35 years or later. Participants had CI-aided thresholds better than 35 dB HL at 0.25, 0.5, 1, and 2 kHz, as measured by clinical audiologists within the year before study enrollment. All patients had used their CIs for at least 9 months. All used Cochlear devices and an Advanced Combination Encoder (ACE) processing strategy. Thirteen CI users had a right CI, nine used a left device, and eight had bilateral CIs. A contralateral hearing aid was worn by 13 patients. During testing, participants wore their devices in everyday mode, including use of hearing aids, and kept the same settings during the entire testing session. Residual hearing in each ear was assessed immediately before testing.
Thirty normal-hearing (NH) controls were also tested, matched as closely as possible to the chronological ages of the CI users. Controls were evaluated for NH immediately before testing; NH was defined as a four-tone (0.5, 1, 2, and 4 kHz) pure-tone average (PTA) better than 25 dB HL in the better ear. This criterion was relaxed to a 30 dB HL PTA for participants over age 60 years, and only three had a PTA poorer than 25 dB HL. NH control participants were recruited from patients in the Otolaryngology department with non-otologic complaints, or by using ResearchMatch, a research recruitment database.
All participants underwent screening to ensure no evidence of cognitive impairment, using the Mini-Mental State Examination (MMSE), a validated assessment tool for verbal working memory, attention, and the ability to follow instructions.28 Raw scores were converted to T scores using age and education, with a T score less than 29 considered concerning for cognitive impairment. All participants had T scores greater than 29 on the MMSE.
Participants were also assessed for basic word-reading ability using the Word Reading subtest of the Wide Range Achievement Test, 4th edition (WRAT),29 which served as a metric of general language proficiency. All participants demonstrated a standard score of at least 85; that is, no participant scored poorer than one standard deviation below the mean. Because some tasks required looking at a computer monitor or paper forms, a final screening test of near vision was performed; all participants had corrected near vision of 20/30 or better, the criterion for passing vision screens in educational settings.
Participants in both the CI and NH groups were adults with spoken American English as their first language. All had a high school diploma, except for one CI user with a GED. A measure of socioeconomic status (SES) was obtained, because SES may predict access to vocabulary and language. SES was quantified using a metric defined by Nittrouer and Burton,30 based on occupational and educational levels, each rated on a scale from 1 to 8 (with 8 the highest level possible). The two ratings were multiplied, yielding composite scores between 1 and 64.
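As a concrete illustration, the composite metric described above can be computed as follows. This is a minimal sketch: the function name is ours, and the example ratings are purely illustrative, not assignments from the published scale.

```python
def ses_score(occupational_level: int, educational_level: int) -> int:
    """Composite SES: the product of two 1-8 ratings, yielding 1-64."""
    for level in (occupational_level, educational_level):
        if not 1 <= level <= 8:
            raise ValueError("each level must be rated on a 1-8 scale")
    return occupational_level * educational_level

# Illustrative only: an occupational rating of 5 combined with an
# educational rating of 7 yields a composite score of 35.
print(ses_score(5, 7))  # -> 35
```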
No significant differences were found for age or SES, but CI participants scored significantly more poorly on the reading and cognitive screening tasks. Demographic and audiologic data for the CI users are shown in Table 1. Mean demographic measures for the CI and NH groups are shown in Table 2.
Equipment
Audiometry was performed using a Welch Allyn TN262 audiometer with TDH-39 headphones. For the MMSE and WRAT screening tasks, as well as the tasks of sentence recognition and the neurocognitive tasks, participant responses were video- and audio-recorded. Participants wore vests holding FM transmitters that sent signals to receivers, which provided input directly into the video camera. Responses for these tasks were live-scored but could also be scored later, so that two staff members could independently score responses to check reliability. Participants were tested while using their usual devices (one CI, two CIs, or CI plus contralateral hearing aid) or no devices (for NH controls), and devices were checked at the beginning of testing by having the tester confirm sound detection by the participant. Speech samples for the sentence recognition measures were collected from a male talker directly onto the computer hard drive, via an AKG C535 EB microphone, a Shure M268 amplifier, and a Creative Laboratories Soundblaster soundcard.
Stimuli-specific Procedures
All tasks were performed in a soundproof booth or a sound-treated testing room.
Sentence Recognition. Three measures examining the recognition of words in sentences were included: long, syntactically complex sentences ("long, complex" sentences); short, highly constrained, meaningful sentences ("short, meaningful" sentences); and short strings of words that were syntactically correct but semantically anomalous ("nonsense" sentences). To avoid ceiling and floor effects, participants were tested in different amounts of speech-shaped noise based on pilot testing of three NH and three CI listeners, with signal and noise presented at 68 dB SPL. For CI participants, the signal-to-noise ratio (SNR) was +3 dB for long, complex and short, meaningful sentences, and CI users were tested in quiet for nonsense sentences; NH listeners were tested at −3 dB SNR for all sentence recognition tasks. The percentage of words correctly repeated for each sentence type was used as the measure of interest.
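For readers unfamiliar with SNR manipulation, the noise-scaling step implied above can be sketched in pure Python. This is an illustrative sketch only: the function names and the stand-in signals are ours, and the actual stimuli used calibrated speech-shaped noise at 68 dB SPL rather than the white noise shown here.

```python
import math
import random

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def scale_noise_for_snr(speech, noise, snr_db):
    """Scale noise so that 20*log10(rms(speech)/rms(noise)) equals snr_db;
    adding the scaled noise to the speech yields the desired mixture."""
    target_noise_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    return [v * target_noise_rms / rms(noise) for v in noise]

# Stand-in signals: a 440-Hz tone as "speech" and white noise as masker.
speech = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(16000)]
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(16000)]
scaled = scale_noise_for_snr(speech, noise, 3.0)  # the +3 dB CI condition
mixture = [s + n for s, n in zip(speech, scaled)]
```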
Recognition of Words in Long, Complex Sentences. These were long, syntactically complex sentences originally designed to assess comprehension of complex syntax in children with dyslexia (e.g., "The stars that the sailor saw came out at midnight"). The set contained a mix of three syntax types: compound clauses, subject-object, and object-subject.
Recognition of Words in Short, Meaningful Sentences. Fifty-four of the 72 five-word sentences (four for practice, 50 for testing) used by Nittrouer and Lowenstein were used.31 These sentences are semantically predictable and syntactically correct, and they follow a subject-predicate structure (e.g., "Flowers grow in the garden").
Recognition of Words in Nonsense Sentences. These sentences were four words in length, syntactically correct, but semantically anomalous (e.g., "Soft rocks taste red"), as used by Nittrouer and colleagues.32
Non-auditory Measures of Neurocognitive Functioning. Non-auditory tasks from the Leiter International Performance Scale, Third Edition (Leiter-3) were used to assess global intelligence ("Figure Ground," "Form Completion," and "Visual Patterns"), controlled fluency ("Attention Sustained"), and working memory ("Forward/Reverse Memory").33 A non-auditory computerized measure of inhibition-concentration (Stroop) was also collected.
Leiter-3. The Leiter-3 is a standardized battery designed to assess neurocognitive functions in children and adults, with age norms up to 75+ years of age. Because all measures are non-auditory in nature, the Leiter-3 can be used with patients with hearing loss; all instructions are given to the participant through pantomime and gesturing. The following measures from the Leiter-3 were included. The first three, Figure Ground, Form Completion, and Visual Patterns, were used as measures of global intellectual ability related to fluid reasoning, and were collected to ensure that these global intellectual skills were equivalent between the CI and NH groups; it was predicted that these measures would not demonstrate relations with speech recognition abilities. The other Leiter-3 tasks were Attention Sustained (considered a task of controlled fluency in this paper) and Forward and Reverse Memory (non-auditory measures of working memory). All tasks were presented as described in the Leiter-3 manual. Raw scores were converted into standard scores, which were used in analyses.
Global Intellectual Skills. During the Figure Ground task, participants pointed to where figures depicted on cards were located within a larger picture. As the task proceeded, the pictures and figures became more detailed and abstract images were included, increasing the difficulty of the task. During the Form Completion task, three blocks with fragments of a complete picture were placed on a table in front of the participant, who was required to put the blocks in the corresponding slots of an easel to complete the target form. During the Visual Patterns task, participants selected blocks in an appropriate sequence to complete a visual pattern. For each of these tasks, correct responses were counted.
Controlled Fluency. During the Attention Sustained subtest, participants were given 30 or 60 seconds to cross out as many figures as possible on a piece of paper that matched a target figure shown at the top of the page. Correct responses were counted, and errors were subtracted.
Working Memory. During the Forward Memory and Reverse Memory subtests, an easel was shown with several pictures of animals in squares. The tester pointed to a sequence of pictures, and participants were required to point to the corresponding pictures in the same order or in the reverse order. Correct responses were counted.
TABLE 1.
Cochlear implant participant demographics. Sentence recognition tasks were performed at a +3 dB SNR for long, complex sentences and for short, meaningful sentences, and in quiet for nonsense sentences.
Columns: Participant; Gender; Age (years); Implantation Age (years); SES; Side of Implant; Hearing Aid (Y/N); Etiology of Hearing Loss; Better-ear PTA (dB HL); Sentence Recognition - Long, Complex (% correct words); Sentence Recognition - Short, Meaningful (% correct words); Sentence Recognition - Nonsense (% correct words).
1 F 64 54 24 B N Genetic 120.0 70.7 96.0 93.0
2 F 66 62 35 R Y Genetic, progressive, adult onset 78.8 32.6 59.2 86.0
3 M 66 61 18 L N Noise, Meniere's 82.5 44.3 66.4 91.0
4 F 66 58 12 R Y Genetic, progressive, adult onset 98.8 62.2 92.0 83.0
6 M 69 65 24 R N Genetic, progressive, adult onset 88.8 20.4 76.0 84.0
7 M 58 52 36 B N Rubella, progressive 115.0 6.5 25.6 40.0
8 F 56 48 25 R Y Genetic, progressive 82.5 51.4 84.0 77.0
9 M 79 67 49 L N Genetic 120.0 0.7 0.0 46.0
10 M 79 76 36 R Y Progressive, adult onset, noise 70.0 34.0 73.6 71.0
12 F 68 56 12 B N Otosclerosis 112.5 12.7 25.6 92.0
13 M 54 50 24 B N Progressive, adult onset 120.0 58.2 84.8 90.0
16 F 62 59 35 R N Progressive, adult onset 115.0 7.9 17.6 69.0
19 F 75 67 36 L N Progressive, adult onset, autoimmune 120.0 1.9 1.6 48.0
20 M 78 74 15 L N Ear infections 108.8 4.2 0.0 57.0
21 M 82 58 42 L Y Meniere's 71.3 29.4 55.2 72.0
23 F 80 73 30 R N Progressive, adult onset 87.5 26.2 35.2 75.0
25 M 58 57 24 R Y Autoimmune, sudden 120.0 7.2 3.2 72.0
28 M 77 72 12 B N Progressive, adult onset 120.0 0.9 0.8 41.0
31 F 67 62 25 L Y Progressive as child 102.5 8.6 16.8 68.0
34 M 60 54 42 L Y Noise, Meniere's, sudden 98.8 7.5 1.6 83.0
35 M 68 62 42 B N Genetic, progressive, adult onset 120.0 31.3 68.8 74.0
37 F 50 35 35 B N Progressive as child 120.0 76.8 97.6 92.0
38 M 75 74 35 L Y Ototoxicity 96.3 1.4 3.2 31.0
39 F 63 61 30 R N Progressive, adult onset 107.5 16.0 16.0 82.0
40 F 66 59 15 B N Genetic, Meniere's 120.0 31.5 73.6 89.0
41 F 59 56 15 R Y Sudden HL 87.5 37.1 60.8 80.0
42 M 82 76 42 R Y Progressive, adult onset, noise 68.8 38.9 61.6 74.0
44 F 72 66 25 R N Progressive, adult onset 98.8 10.6 7.2 77.0
46 M 75 74 42 L Y Progressive, adult onset 87.5 0.0 0.0 27.0
48 F 78 48 15 R Y Progressive, adult onset 110.0 7.6 12.0 53.0
Notes: SES: socioeconomic status; PTA: pure-tone average; HL: hearing level.
Inhibition-Concentration. A non-auditory computerized version of a verbal Stroop task, which is publicly available (http://www.millisecond.com), was used. Participants were presented with color words one at a time on a computer monitor and were asked to name the color of the text of the word shown. Scoring was done automatically at the time of testing: the participant entered responses directly into a computer by pressing buttons corresponding to the colors. Response times were computed for correct responses to congruent words (permitting automatic word reading; e.g., the word "Red" shown in red ink) and to incongruent words (requiring participants to inhibit word reading and concentrate on the ink color; e.g., the word "Red" shown in blue ink).
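The response-time scoring just described can be sketched as follows. This is illustrative only: the trial-record fields and values are hypothetical and are not taken from the Millisecond script.

```python
def mean_rt(trials, condition):
    """Mean response time (s) over correct trials of one Stroop condition."""
    rts = [t["rt"] for t in trials
           if t["condition"] == condition and t["correct"]]
    return sum(rts) / len(rts)

trials = [  # hypothetical trial records
    {"condition": "congruent", "correct": True, "rt": 1.0},
    {"condition": "congruent", "correct": True, "rt": 1.5},
    {"condition": "incongruent", "correct": True, "rt": 1.5},
    {"condition": "incongruent", "correct": False, "rt": 2.5},  # error: excluded
    {"condition": "incongruent", "correct": True, "rt": 2.0},
]
print(mean_rt(trials, "congruent"))    # -> 1.25
print(mean_rt(trials, "incongruent"))  # -> 1.75
```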
General Procedures
All procedures were approved by The Ohio State University Institutional Review Board. Participants were tested in one two-hour session. First, hearing thresholds and screening measures were obtained. Participants then completed sentence recognition testing, with the different sentence materials presented in blocks and the order of sentences randomized. Lastly, participants completed the neurocognitive testing, with task order randomized.
Data Analyses
Independent-samples t-tests were performed to identify differences in neurocognitive scores between the CI and NH groups. Pearson product-moment correlation analyses were performed among the neurocognitive and sentence recognition measures.
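For reference, the Pearson product-moment coefficient used in these analyses can be computed as in this minimal sketch (a statistical package would be used in practice; the function name is ours):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear increasing relation yields r = 1 (up to rounding).
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```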
RESULTS
For the CI group, side of implantation (right, left, or bilateral) did not influence any of the neurocognitive or sentence recognition scores (p > .50). Additionally, no differences in performance were found between CI users who wore only CIs and those who wore a CI plus hearing aid (p > .50). Therefore, all CI users were analyzed together in subsequent analyses.
On screening measures, CI users performed significantly more poorly than NH peers on word reading (WRAT) and cognitive functioning (MMSE), though all participants were within the normal range. Item analyses of the MMSE revealed that 74% of the errors in CI users' responses occurred during questions requiring verbal working memory processes (e.g., recalling a 3-word list). The CI and NH groups did not differ on global nonverbal intelligence (Figure Ground, Form Completion, and Visual Patterns), nor did they differ on controlled fluency (Attention Sustained), reverse working memory (Reverse Memory), or inhibition-concentration (Verbal Stroop; see Table 3). CI users scored more poorly than NH participants on forward working memory (Forward Memory), although their scores remained within the normal range. Scores for the sentence recognition assessments were not normally distributed; therefore, arcsine transformations were computed and used for all subsequent analyses. Sentence recognition scores were not directly compared between the CI and NH groups, because the groups were tested at different SNRs, but mean scores are shown in Table 3.
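The article does not spell out the transform's exact formula; the conventional variance-stabilizing arcsine transform for proportion scores, which we assume here, is 2·arcsin(√p):

```python
import math

def arcsine_transform(p):
    """Variance-stabilizing transform for a proportion p in [0, 1].

    Note: the form 2*arcsin(sqrt(p)) is the conventional arcsine transform
    and is an assumption; the article does not state its exact formula.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a proportion between 0 and 1")
    return 2.0 * math.asin(math.sqrt(p))

# A 50%-correct score maps to pi/2; 0% and 100% map to 0 and pi.
print(arcsine_transform(0.5))
```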
TABLE 2.
Participant demographics.
Measure | Normal Hearing (N = 30), Mean (SD) | Cochlear Implant (N = 30), Mean (SD) | t value | p value
Age (years) | 68.3 (9.4) | 68.4 (8.9) | 0.03 | .98
Reading (standard score) | 107 (12.5) | 100.5 (11.1) | 2.13 | .04
MMSE (T score) | 55.8 (10.7) | 49.8 (9.4) | 2.29 | .03
SES | 34 (13.9) | 28.2 (11.3) | 1.74 | .09
TABLE 3.
Group mean neurocognitive and sentence recognition scores and results of independent-samples t-tests. Sentence recognition scores were not compared between groups, because the signal-to-noise ratio (SNR) differed between groups. For CI users, sentences were presented at +3 dB SNR for long, complex and short, meaningful sentences and in quiet for nonsense sentences. For NH listeners, all sentence recognition tasks were presented at −3 dB SNR.
Measure | NH N | NH Mean (SD) | CI N | CI Mean (SD) | t value | p value
Figure Ground (scaled score) | 30 | 11.6 (5.2) | 30 | 11.2 (3.2) | .36 | .72
Form Completion (scaled score) | 30 | 10.9 (2.4) | 30 | 11.0 (2.9) | .10 | .92
Visual Patterns (scaled score) | 30 | 12.4 (2.6) | 30 | 11.8 (2.5) | .89 | .38
Attention Sustained (scaled score) | 30 | 10.2 (1.9) | 30 | 9.6 (2.0) | 1.20 | .24
Forward Memory (scaled score) | 30 | 13.0 (2.3) | 30 | 11.8 (2.3) | 2.08 | .04
Reverse Memory (scaled score) | 30 | 13.5 (2.4) | 30 | 12.7 (2.2) | 1.44 | .16
Verbal Stroop - Congruent (response time in seconds) | 30 | 1.22 (.30) | 28 | 1.34 (.47) | 1.15 | .26
Verbal Stroop - Incongruent (response time in seconds) | 30 | 1.57 (.47) | 28 | 1.72 (.48) | 1.16 | .25
Sentence Recognition - Long, complex (% words correct) | 30 | 66.7 (14.4) | 30 | 24.6 (22.4) | |
Sentence Recognition - Short, meaningful (% words correct) | 30 | 81.7 (9.3) | 30 | 40.5 (35.0) | |
Sentence Recognition - Nonsense (% words correct) | 30 | 38.8 (11.7) | 30 | 70.6 (19.0) | |
The first goal of this study was to examine whether neurocognitive skills, assessed using non-auditory tasks, were associated with sentence recognition performance. Correlations between neurocognitive scores and sentence recognition scores are shown in Table 4. For CI users, only one of the neurocognitive domains, inhibition-concentration, was significantly associated with all three sentence recognition scores (p = .02–.03 across sentence measures). Specifically, response times from the "incongruent" condition correlated with sentence recognition scores (see Figure 1), but response times from the "congruent" condition did not. This finding suggests that speed of inhibitory control, but not general response speed, was associated with sentence recognition in CI users. For NH controls, none of the neurocognitive scores were associated with sentence recognition. Because word reading (WRAT) and cognitive functioning (MMSE) scores were poorer for CI users than NH peers, these were also examined for correlations with sentence recognition scores; no significant correlations were identified.
The second goal of the study was to determine whether the relations among neurocognitive skills and sentence recognition differed between the CI and NH groups. It was predicted that different correlations would be identified among neurocognitive skills and sentence recognition scores for CI users than for NH peers, because of the greater degree of spectro-temporal degradation experienced by CI listeners relative to NH listeners. As demonstrated in Table 4, no correlations were found between neurocognitive scores and sentence recognition for the NH participants. Thus, it can be concluded that inhibition-concentration skills contributed to sentence recognition in CI users, but not in NH peers.
DISCUSSION
This study was designed to examine whether the neurocognitive abilities of postlingually deafened adults with contemporary CIs, as assessed using non-auditory measures, would be associated with the ability to recognize words in sentences. Moreover, the study aimed to examine whether relationships among neurocognitive measures and sentence recognition differed between CI and NH listeners.
Results of this study demonstrated that neurocognitive functions were generally similar for CI users as compared with their NH age-matched peers. Scores were poorer for CI users than for our sample of NH peers on Forward Memory and the MMSE (primarily as a result of relative deficits on MMSE items requiring verbal working memory). However, CI users' scores for both Forward Memory and the MMSE were within the normal range. Reading scores were also poorer for the CI group than for NH peers. However, we cannot necessarily attribute these differences to hearing loss or use of a CI. Recent studies have suggested that neurocognitive functions decline with worsening hearing loss, and some even suggest that cochlear implantation may reverse these declines.34 Future studies are required to examine these effects in detail.
Turning to relations among neurocognitive functions
and speech recognition, support for our first hypothesis
was demonstrated: inhibition-concentration skills of CI
TABLE 4.
r values from correlation analyses with recognition of words in sentences. CI users were tested at +3 dB SNR for long, complex and highly meaningful sentences, and in quiet for nonsense sentences. NH listeners were tested at −3 dB SNR for all sentence materials.

                                              NH group                            CI group
                                     Long,    Highly      Nonsense     Long,    Highly      Nonsense
                                     complex  meaningful  sentences    complex  meaningful  sentences
Figure Ground (scaled score)          .05      .02         .09          .15      .13        -.03
Form Completion (scaled score)        .13     -.11         .01         -.09     -.16        -.17
Visual Patterns (scaled score)        .24     -.03         .32          .33      .26         .23
Attention Sustained (scaled score)    .14      .07        -.08          .14      .19         .29
Forward Memory (scaled score)        -.10     -.35         .17          .23      .23         .14
Reverse Memory (scaled score)         .06     -.11         .08          .20      .20         .04
Verbal Stroop - Congruent
  (response time)                    -.04      .20         .07         -.28     -.29        -.36
Verbal Stroop - Incongruent
  (response time)                    -.14     -.05        -.03         -.41*    -.43*       -.43*

*p < 0.05; **p < 0.01
Laryngoscope Investigative Otolaryngology 00: Month 2016 Moberly et al.: Neurocognitive skills and speech recognition
6
users were significantly correlated with recognizing words
in all three types of sentence materials, with faster
inhibition responses associated with better sentence
recognition. Although inhibition-concentration skills have
not been previously examined in adult CI users, the results
are consistent with findings by Sommers and Danielson, who
identified individual differences in inhibitory control as
contributing to sentence recognition performance in older
adults with NH.27 We speculate that inhibition-concentration
abilities may be particularly important for CI users during
speech recognition, because they must ignore irrelevant
stimuli (noise) and/or inhibit perception of incorrect
lexical items. This explanation is consistent with models of
speech perception that emphasize the role working memory
plays in inhibiting interference from irrelevant
information, or in inhibiting prepotent but incorrect
responses.35 For example, in the Ease of Language
Understanding (ELU) model, under degraded listening
conditions, successful speech perception requires a shift
from rapid automatic processing to more effortful,
controlled processing, which is heavily dependent on working
memory capacity.36 The relations among
inhibition-concentration, working memory capacity, and
speech recognition deserve further exploration.
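To make the inhibition-concentration measure concrete: a Stroop-style interference score is conventionally derived as the extra response time on incongruent relative to congruent trials. The sketch below uses invented per-trial response times and is not the study's actual scoring procedure.

```python
# Hypothetical illustration of a Stroop-style interference score.
# The trial data are invented; this is not the study's scoring code.
def median(xs):
    """Median of a non-empty list (robust to stray slow trials)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Per-trial response times (ms) for one hypothetical participant.
congruent_rt = [820, 790, 805, 840, 815]      # word matches ink color
incongruent_rt = [980, 1010, 955, 990, 1025]  # word conflicts with ink color

# Interference: the extra time needed to inhibit the prepotent but
# incorrect response on incongruent trials.
interference = median(incongruent_rt) - median(congruent_rt)
print(f"Stroop interference: {interference:.0f} ms")  # 990 - 815 = 175 ms
```

Note that the correlations in Table 4 used the raw incongruent response times rather than a difference score; the difference score simply isolates the inhibitory component from overall processing speed.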
In contrast to inhibition-concentration, controlled fluency
and non-auditory working memory skills were not associated
with speech recognition scores. At least two possible
conclusions may be drawn from these findings: first,
exerting executive control over linguistic representations,
as opposed to visual representations, may relate most
strongly to speech recognition skills. However, our results
are not consistent with those of Knutson and colleagues,
who demonstrated relations between speech recognition
measures and visually presented sequential processing
tasks.16–18 Alternatively, the results may suggest that our
measures of neurocognitive functioning from the Leiter-3
are not necessarily sensitive to the neurocognitive
abilities that underlie spoken
Fig. 1. Correlations between sentence recognition scores and inhibition-concentration response times for cochlear implant users. Participants were tested at +3 dB SNR for long, complex sentences and short, meaningful sentences and in quiet for nonsense sentences.
language recognition, or that our sample sizes were not
large enough to detect significant relations. Further
research is necessary to disentangle these possibilities.
The second hypothesis tested was that relations
among neurocognitive skills and sentence recognition
would differ between CI users and NH listeners. This
hypothesis was supported: faster inhibition was associat-
ed with better sentence recognition only for CI users.
Several possibilities may explain the lack of significant
correlations between neurocognitive functions and speech
recognition for NH listeners. One such explanation is that
NH listeners' ranges of performance on the speech
recognition tasks were much narrower than those of the CI
users; this restricted variance in speech recognition
scores across NH listeners may have contributed to the
observed weak relationships with neurocognitive scores. A
second explanation is that there are genuinely differential
relations between neurocognitive functioning and speech
recognition for CI and NH listeners. This differential
relation is consistent with recent findings: Füllgrabe and
Rosen have demonstrated that neurocognitive skills
(particularly working memory capacity) contribute little to
NH listeners' performance on tasks of speech recognition in
noise,37 in contrast with several studies in adults with
hearing loss.8–10
Third, it could be that testing listeners under
noise conditions that provide greater informational
masking (e.g., multi-talker babble), rather than the ener-
getic masking provided by speech-shaped noise here,
would allow us to better observe top-down processing
contributions to speech recognition. Finally, although our
primary analyses correlated sentence recognition with
non-auditory neurocognitive skills, we also correlated five
additional measures obtained from testing (Global
Intellectual Skills: Figure Ground, Form Completion, Visual
Patterns; Reading Skills: WRAT; and Cognitive Impairment
Screen: MMSE) with the neurocognitive assessments, thereby
providing clinicians with more comprehensive information
about functioning following hearing loss and cochlear
implantation. However, conducting these additional
correlations increased our risk of experiment-wise error,
so they should be interpreted with caution. Additional
studies will be required to better understand the
differential relations between NH listeners and patients
with hearing loss, including those with CIs.
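The restricted-variance explanation above can be demonstrated with a quick simulation: even when a genuine cognition-speech relation is present, sampling only a narrow (near-ceiling) band of speech scores, as with the NH group, attenuates the observable correlation. The simulation below is purely illustrative; the draws are random numbers, not study data.

```python
# Simulation of correlation attenuation under range restriction.
# Illustrative only; the data are random draws, not study data.
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
cognition = [random.gauss(0, 1) for _ in range(5000)]
speech = [c + random.gauss(0, 1) for c in cognition]  # a genuine relation

full_r = pearson_r(cognition, speech)

# "NH-like" subsample: keep only near-ceiling speech scores, mimicking the
# compressed range of NH listeners' performance on these tasks.
kept = [(c, s) for c, s in zip(cognition, speech) if s > 1.5]
restricted_r = pearson_r([c for c, _ in kept], [s for _, s in kept])

print(f"full-range r = {full_r:.2f}, restricted r = {restricted_r:.2f}")
```

The restricted subsample yields a markedly smaller r than the full range, even though the underlying relation is identical in both cases.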
CONCLUSION
Our findings indicate that inhibition-concentration
skills contribute to CI users’ abilities to recognize words
in sentences, while the other neurocognitive tests employed
in this study did not predict word recognition ability.
These findings provide further evidence for the role of
neurocognitive processing in CI users and imply potential
benefits of developing clinical aural rehabilitation pro-
grams that target inhibition-concentration skills.
Acknowledgments
Research reported in this publication was supported by the
Triological Society Career Development Award and the
American Speech-Language-Hearing Foundation/Acousti-
cal Society of America Speech Science Award to Aaron
Moberly. Normal-hearing participants were recruited
through ResearchMatch, which is funded by the NIH
Clinical and Translational Science Award (CTSA) pro-
gram, grants UL1TR000445 and 1U54RR032646-01. The
authors would like to acknowledge Susan Nittrouer and
Joanna Lowenstein for their development of sentence rec-
ognition materials used, and Lauren Boyce and Taylor
Wucinich for assistance in data collection and scoring.
The authors declare no conflicts of interest.
BIBLIOGRAPHY
1. Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented
at soft to loud levels by adult cochlear implant recipients of three cochle-
ar implant systems. Ear Hear 2004;25:375–387.
2. Gifford RH, Shallop JK, Peterson AM. Speech recognition materials and
ceiling effects: Considerations for cochlear implant programs. Audiol
Neurotol 2008;13:193–205.
3. Holden LK, Finley CC, Firszt JB, et al. Factors affecting open-set word
recognition in adults with cochlear implants. Ear Hear 2013;34:342–360.
4. Moberly AC, Houston DM, Castellanos I, Boyce L, Nittrouer S. Linguistic
knowledge and working memory in adults with cochlear implants.
Under review.
5. Holden LK, Reeder RM, Firszt JB, Finley CC. Optimizing the perception
of soft speech and speech in noise with the advanced bionics cochlear
implant system. Int J Audiol 2011;50:255–269.
6. Kenway B, Tam YC, Vanat Z, Harris F, Gray R, Birchall J, et al. Pitch discrimination: an independent factor in cochlear implant performance outcomes. Otol Neurotol 2015;36:1472–1479.
7. Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving
speech perception in noise with current focusing in cochlear implant
users. Hear Res 2013;299:29–36.
8. Akeroyd MA. Are individual differences in speech reception related to indi-
vidual differences in cognitive ability? A survey of twenty experimental
studies with normal and hearing-impaired adults. Int J Audiol 2008;47:
53–71.
9. Arehart KH, Souza P, Baca R, Kates J. Working memory, age and hearing
loss: Susceptibility to hearing aid distortion. Ear Hear 2013;34:251–260.
10. Rönnberg J, Lunner T, Zekveld A, et al. The ease of language understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
11. Heald SLM, Nusbaum HC. Speech perception as an active cognitive pro-
cess. Front Syst Neurosci 2014;8:1–15.
12. Pisoni DB, Cleary M. Measures of working memory span and verbal
rehearsal speed in deaf children after cochlear implantation. Ear Hear
2003;24(Suppl. 1):106S–120S.
13. Jerger J, Jerger S, Pirozzolo F. Correlational analysis of speech audiomet-
ric scores, hearing loss, age, and cognitive abilities in the elderly. Ear
Hear 1991;12:103–109.
14. Kidd GR, Watson CS, Gygi B. Individual differences in auditory abilities.
J Acoust Soc Am 2007;122:418–435.
15. Surprenant AM, Watson CS. Individual differences in the processing of
speech and nonspeech sounds by normal-hearing listeners. J Acoust Soc
Am 2001;110:2085–2095.
16. Knutson JF, Hinrichs JV, Tyler RS, Gantz BJ, Schartz HA, Woodworth G.
Psychological predictors of audiological outcomes of multichannel cochle-
ar implants: Preliminary findings. Ann Otol Rhinol Laryngol 1991;100:
817–822.
17. Gantz BJ, Woodworth GG, Knutson JF, Abbas PJ, Tyler RS. Multivariate predictors of success with cochlear implants. Adv Otorhinolaryngol 1993;48:153–167.
18. Gantz BJ, Woodworth G, Abbas P, Knutson JF, Tyler RS. Multivariate pre-
dictors of audiological success with cochlear implants. Ann Otol Rhinol
Laryngol 1993;102:909–916.
19. Gfeller K, Oleson J, Knutson JF, Breheny P, Driscoll V, Olszewski C. Mul-
tivariate predictors of music perception and appraisal by adult cochlear
implant users. J Am Acad Audiol 2008;19:120–134.
20. Amitay S. Forward and reverse hierarchies in auditory perceptual learn-
ing. Learn Percept 2009;1:59–68.
21. Humes LE, Floyd SS. Measures of working memory, sequence learning,
and speech recognition in the elderly. J Speech Lang Hear Res 2005;48:
224–235.
22. Cahana-Amitay D, Spiro III A, Sayers JT, et al. How older adults use cog-
nition in sentence-final word recognition. Neuropsychol Dev Cogn B
Aging Neuropsychol Cogn 2015;16:1–27.
23. Janse E. A non-auditory measure of interference predicts distraction by
competing speech in older adults. Neuropsychol Dev Cogn B Aging Neu-
ropsychol Cogn 2012;19:741–758.
24. Pichora-Fuller MK. Processing speed and timing in aging adults: Psycho-
acoustics, speech perception, and comprehension. Int J Audiol 2003;42:
S59–S67.
25. Tun PA, McCoy S, Wingfield A. Aging, hearing acuity, and the attentional
costs of effortful listening. Psychol Aging 2009;24:761–766.
26. Wingfield A, Tun PA. Cognitive supports and cognitive constraints on com-
prehension of spoken language. J Am Acad Audiol 2007;18:548–558.
27. Sommers MS, Danielson SM. Inhibitory processes and spoken word recog-
nition in young and older adults: The interaction of lexical competition
and semantic context. Psychol Aging 1999;14:458–472.
28. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975;12:189–198.
29. Wilkinson GS, Robertson GJ. Wide Range Achievement Test. 4th ed. Lutz,
FL: Psychological Assessment Resources; 2006.
30. Nittrouer S, Burton LT. The role of early language experience in the devel-
opment of speech perception and phonological processing abilities: evi-
dence from 5-year-olds with histories of otitis media with effusion and
low socioeconomic status. J Commun Dis 2005;38:29–63.
31. Nittrouer S, Lowenstein JH. Learning to perceptually organize speech sig-
nals in native fashion. J Acoust Soc Am 2010;127:1624–1635.
32. Nittrouer S, Tarr E, Bolster V, Caldwell-Tarr A, Moberly AC, Lowenstein
JH. Low-frequency signals support perceptual organization of implant-
simulated speech for adults and children. Int J Audiol 2014;53:270–284.
33. Roid GH, Miller LJ, Pomplun M, Koch C. Leiter international performance
scale, (Leiter-3). Los Angeles: Western Psychological Services; 2013.
34. Cosetti MK, Pinkston JB, Flores JM, et al. Neurocognitive testing and
cochlear implantation: Insights into performance in older adults. Clin
Interv Aging 2016;11:603–613.
35. Wingfield A. Evolution of models of working memory and cognitive resour-
ces. Ear Hear 2016;37:35S–43S.
36. Rönnberg J, Lunner T, Zekveld A, et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
37. Füllgrabe C, Rosen S. Investigating the role of working memory in speech-in-noise identification for listeners with normal hearing. In: Dijk PV, ed. Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing. Advances in Experimental Medicine and Biology 2016;894:29–36.
... For the papers included in this review, the "Leiter-3 sustained attention task, " "Woodcock-Johnson IV (WJ-IV) letter and number pattern matching task and the pair cancelation task", and the "ALAcog M3 attentional task" are used (see Table 5b for an overview). These tests involve targets like figures, letters, numbers or repeated patterns on paper among a set of distractors (Moberly et al., 2016b(Moberly et al., , 2017bHillyer et al., 2019;Völter et al., 2021). Of these tasks, only performance on the ALAcog attentional task was significantly different between better and poorer performers on a word test in quiet (Cohen's d = 1.12, p = 0.003) (Völter et al., 2021). ...
... Of these tasks, only performance on the ALAcog attentional task was significantly different between better and poorer performers on a word test in quiet (Cohen's d = 1.12, p = 0.003) (Völter et al., 2021). The other tests showed no significant relationship with sentences in quiet or noise (Moberly et al., 2016b(Moberly et al., , 2017bHillyer et al., 2019). ...
... Lastly, in another paper they found that there was only a predictive value of Ravens score with anomalous sentences and not meaningful sentences (p = 0.008, df = 32) (Moberly and Reed, 2019). The other tasks used to assess non-verbal intelligence did not show any significant results when related to speech perception performance (r = -0.16 to 0.33) (Collison et al., 2004;Holden et al., 2013;Moberly et al., 2016b). ...
Article
Full-text available
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation that is currently only partly explained by biological and audiological factors. Studies investigating this, use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to which extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and noise were included as speech perception outcome metrics and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (Protocol in open science registries: 10.17605/OSF.IO/Z3G7W of searches in June 2020, April 2022). Results From 5,668 retrieved articles, 54 articles were included and grouped into three categories using different measures to relate to speech perception outcomes: (1) Nineteen studies investigating brain activation, (2) Thirty-one investigating performance on cognitive tests, and (3) Eighteen investigating linguistic skills. Conclusion The use of cognitive functions, recruiting the frontal cortex, the use of visual cues, recruiting the occipital cortex, and the temporal cortex still available for language processing, are beneficial for adult CI users. 
Cognitive assessments indicate that performance on non-verbal intelligence tasks positively correlated with speech perception outcomes. Performance on auditory or visual working memory, learning, memory and vocabulary tasks were unrelated to speech perception outcomes and performance on the Stroop task not to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers and more comprehensive studies are needed e.g., including different assessment times, or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W .
... Many factors arguably interact to generate the apparent heterogeneity in rehabilitation outcomes. They include compliance and satisfaction in hearing devices (Solheim et al., 2018;Solheim & Hickson, 2017), limited progression of performance in critical functions such as auditory attention, speech processing, and comprehension, especially in noisy environments (Moberly, Houston, et al., 2016;Nittrouer et al., 2016). It is, therefore, essential to better characterize this heterogeneity in the population to understand how central processing factors and rehabilitation interact. ...
Article
Full-text available
Age-related hearing loss, presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and dementia. It is generally considered a natural consequence of the inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant users (CI) and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a pejorative effect at 24 months post implantation. Furthermore, older subjects (>67 years old) were significantly more likely to degrade their performances after 2 years of CI use than the younger patients for each year increase in age. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation to account for these disparities: Awakening, reversal of deafness-specific changes; Counteracting, stabilization of additional cognitive impairments; or Decline, independent pejorative processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.
... In this study, when comparing CI users to NH controls, results show that, in the working memory tests, when verbal stimuli is presented in the visual modality, both groups perform similarly on recall however, when presented auditorily in both quiet and in noise, their performance is significantly worse. The similar performance in visual, verbal working memory is documented in other studies where CI users were shown to perform on par with NH controls (Lyxell et al., 2003;Moberly et al., 2016;Moberly, Houston, et al., 2017;Moberly, Pisoni, et al., 2017;O'Neill et al., 2019;Prince et al., 2021) suggesting that the phonological representations of words are intact and accessible in CI users. Results then show a decline in working memory recall performance once verbal stimuli are presented auditorily in quiet and only in CI users suggesting that, due to listening through a CI, working memory recall becomes affected Moberly, Houston, et al., 2017); whether the decline is due to working memory ability or perception of the word presented is unclear in this study. ...
Preprint
Full-text available
A common concern in individuals with cochlear implants (CIs) is difficulty following conversations in noisy environments and social settings. The ability to accomplish these listening tasks relies on the individual working memory abilities and draws upon limited cognitive resources to accomplish successful listening. For some individuals, allocating too much, can result deficits in speech perception and in long term detriments of quality of life. For this study, 31 CI users and NH controls completed a series of online behavioural tests and quality of life surveys, in order to investigate the relationship between visual and auditory working memory, clinical and behavioural measures of speech perception and quality of life and hearing. Results showed NH individuals were superior on auditory working memory and survey outcomes. In CI users, recall performance on the three working memory span tests declined from visual reading span to auditory listening in quiet and then listening in noise and speech perception was predictably worse when presented with noise maskers. Bilateral users performed better on each task compared to unilateral/HA and unilateral only users and reported better survey outcomes. Correlation analysis revealed that memory recall and speech perception ability were significantly correlated with sections of CIQOL and SSQ surveys along with clinical speech perception scores in CI users. These results confirm that hearing condition can predict working memory and speech perception and that working memory ability and speech perception, in turn, predict quality of life. Importantly, we demonstrate that online testing can be used as a tool to assess hearing, cognition, and quality of life in CI users.
... While found no relationship between perceptual restoration and cognitive skills (defined as scores on a composite assessment that included measures of working memory and processing speed) among NH listeners, it is possible that such a relationship could be observed among CI users. Such findings have occurred in other contexts; for example, Moberly et al. (2016) found a relationship between inhibition/concentration and sentence recognition in noise in CI users, but not in agematched NH listeners. CI users with better working memory, processing speed, and inhibitory control may demonstrate higher performance on the perceptual restoration task, as they may be more successful at storing and processing incoming speech and inhibiting irrelevant input. ...
Article
Cochlear-implant (CI) users have previously demonstrated perceptual restoration, or successful repair of noise-interrupted speech, using the interrupted sentences paradigm [Bhargava, Gaudrain, and Başkent (2014). "Top-down restoration of speech in cochlear-implant users," Hear. Res. 309, 113-123]. The perceptual restoration effect was defined experimentally as higher speech understanding scores with noise-burst interrupted sentences compared to silent-gap interrupted sentences. For the perceptual restoration illusion to occur, it is often necessary for the masking or interrupting noise bursts to have a higher intensity than the adjacent speech signal to be perceived as a plausible masker. Thus, signal processing factors like noise reduction algorithms and automatic gain control could have a negative impact on speech repair in this population. Surprisingly, evidence that participants with cochlear implants experienced the perceptual restoration illusion was not observed across the two planned experiments. A separate experiment, which aimed to provide a close replication of previous work on perceptual restoration in CI users, also found no consistent evidence of perceptual restoration, contrasting the original study's previously reported findings. Typical speech repair of interrupted sentences was not observed in the present work's sample of CI users, and signal-processing factors did not appear to affect speech repair.
Article
Objective: Hearing loss has a detrimental impact on cognitive function. However, there is a lack of consensus on the impact of cochlear implants on cognition. This review systematically evaluates whether cochlear implants in adult patients lead to cognitive improvements and investigates the relations of cognition with speech recognition outcomes. Data sources: A literature review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies evaluating cognition and cochlear implant outcomes in postlingual, adult patients from January 1996 to December 2021 were included. Of 2510 total references, 52 studies were included in qualitative analysis and 11 in meta-analyses. Review methods: Proportions were extracted from studies of (1) the significant impacts of cochlear implantation on 6 cognitive domains and (2) associations between cognition and speech recognition outcomes. Meta-analyses were performed using random effects models on mean differences between pre- and postoperative performance on 4 cognitive assessments. Results: Only half of the outcomes reported suggested cochlear implantation had a significant impact on cognition (50.8%), with the highest proportion in assessments of memory & learning and inhibition-concentration. Meta-analyses revealed significant improvements in global cognition and inhibition-concentration. Finally, 40.4% of associations between cognition and speech recognition outcomes were significant. Conclusion: Findings relating to cochlear implantation and cognition vary depending on the cognitive domain assessed and the study goal. Nonetheless, assessments of memory & learning, global cognition, and inhibition-concentration may represent tools to assess cognitive benefit after implantation and help explain variability in speech recognition outcomes. Enhanced selectivity in assessments of cognition is needed for clinical applicability.
Article
Importance: Many cochlear implant centers screen patients for cognitive impairment as part of the evaluation process, but the utility of these scores in predicting cochlear implant outcomes is unknown. Objective: To determine whether there is an association between cognitive impairment screening scores and cochlear implant outcomes. Design, setting, and participants: Retrospective case series of adult cochlear implant recipients who underwent preoperative cognitive impairment screening with the Montreal Cognitive Assessment (MoCA) from 2018 to 2020 with 1-year follow-up at a single tertiary cochlear implant center. Data analysis was performed on data from January 2018 through December 2021. Exposures: Cochlear implantation. Main outcomes and measures: Preoperative MoCA scores and mean (SD) improvement (aided preoperative to 12-month postoperative) in Consonant-Nucleus-Consonant phonemes (CNCp) and words (CNCw), AzBio sentences in quiet (AzBio Quiet), and Cochlear Implant Quality of Life-35 (CIQOL-35) Profile domain and global scores. Results: A total of 52 patients were included, 27 (52%) of whom were male and 46 (88%) were White; mean (SD) age at implantation was 68.2 (13.3) years. Twenty-three (44%) had MoCA scores suggesting mild and 1 (2%) had scores suggesting moderate cognitive impairment. None had been previously diagnosed with cognitive impairment. There were small to medium effects of the association between 12-month postoperative improvement in speech recognition measures and screening positive or not for cognitive impairment (CNCw mean [SD]: 48.4 [21.9] vs 38.5 [26.6] [d = -0.43 (95% CI, -1.02 to 0.16)]; AzBio Quiet mean [SD]: 47.5 [34.3] vs 44.7 [33.1] [d = -0.08 (95% CI, -0.64 to 0.47)]). 
Similarly, small to large effects of the associations between 12-month postoperative change in CIQOL-35 scores and screening positive or not for cognitive impairment were found (global: d = 0.32 [95% CI, -0.59 to 1.23]; communication: d = 0.62 [95% CI, -0.31 to 1.54]; emotional: d = 0.26 [95% CI, -0.66 to 1.16]; entertainment: d = -0.005 [95% CI, -0.91 to 0.9]; environmental: d = -0.92 [95% CI, -1.86 to 0.46]; listening effort: d = -0.79 [95% CI, -1.65 to 0.22]; social: d = -0.51 [95% CI, -1.43 to 0.42]). Conclusions and relevance: In this case series, screening scores were not associated with the degree of improvement of speech recognition or patient-reported outcome measures after cochlear implantation. Given the prevalence of screening positive for cognitive impairment before cochlear implantation, preoperative screening can be useful for early identification of potential cognitive decline. These findings support that screening scores may have a limited role in preoperative counseling of outcomes and should not be used to limit candidacy.
Article
The cochlear implant (CI) is widely considered to be one of the most innovative and successful neuroprosthetic treatments developed to date. Although outcomes vary, CIs are able to effectively improve hearing in nearly all recipients and can substantially improve speech understanding and quality of life for patients with significant hearing loss. A wealth of research has focused on underlying factors that contribute to success with a CI, and recent evidence suggests that the overall health of the cochlea could potentially play a larger role than previously recognized. This article defines and reviews attributes of cochlear health and describes procedures to evaluate cochlear health in humans and animal models in order to examine the effects of cochlear health on performance with a CI. Lastly, we describe how future biologic approaches can be used to preserve and/or enhance cochlear health in order to maximize performance for individual CI recipients.
Article
Hypotheses: 1) Scores of reading efficiency (the Test of Word Reading Efficiency, second edition) obtained in adults before cochlear implant surgery will be predictive of speech recognition outcomes 6 months after surgery; and 2) Cochlear implantation will lead to improvements in language processing as measured through reading efficiency from preimplantation to postimplantation. Background: Adult cochlear implant (CI) users display remarkable variability in speech recognition outcomes. "Top-down" processing-the use of cognitive resources to make sense of degraded speech-contributes to speech recognition abilities in CI users. One area that has received little attention is the efficiency of lexical and phonological processing. In this study, a visual measure of word and nonword reading efficiency-relying on lexical and phonological processing, respectively-was investigated for its ability to predict CI speech recognition outcomes, as well as to identify any improvements after implantation. Methods: Twenty-four postlingually deaf adult CI candidates were tested on the Test of Word Reading Efficiency, Second Edition preoperatively and again 6 months post-CI. Six-month post-CI speech recognition measures were also assessed across a battery of word and sentence recognition. Results: Preoperative nonword reading scores were moderately predictive of sentence recognition outcomes, but real word reading scores were not; word recognition scores were not predicted by either. No 6-month post-CI improvement was demonstrated in either word or nonword reading efficiency. Conclusion: Phonological processing as measured by the Test of Word Reading Efficiency, Second Edition nonword reading predicts to a moderate degree 6-month sentence recognition outcomes in adult CI users. Reading efficiency did not improve after implantation, although this could be because of the relatively short duration of CI use.
Article
The ability to understand speech varies significantly among cochlear implant (CI) listeners, as it depends on a variety of individual and environmental factors. Especially in adverse listening situations, good cognitive and linguistic ("top-down") skills and the use of signal pre-processing algorithms are considered beneficial. However, not much is known about the interactions between these top-down and perceptive bottom-up processes in CI listeners. To shed light on the relation between the spectral representation of speech sounds and the ability of CI listeners to decode these signals, two methods for the simplification of speech signals through spectral sparsification were developed and evaluated in listening tests with postlingually deaf adult CI listeners. Speech signals were separated into transient and harmonic parts. After sparsification of the harmonic spectrum by principal component analysis (PCA) or by an individualized spectral peak picking approach, the transient and the sparsified harmonic parts were remixed. Furthermore, cognitive parameters of the subjects were assessed via a neurocognitive test battery (ALACog), and their correlation with recognition scores was evaluated. The PCA-based sparsification method showed a significant benefit in speech recognition relative to the unprocessed signal. Furthermore, subjects with better performance in working memory and mental flexibility showed larger improvements.
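The PCA-based sparsification described above can be illustrated in miniature: project the harmonic magnitude spectrum onto its leading principal components and reconstruct from only the strongest ones. This is a generic sketch under stated assumptions (the function name, component count, and frames-by-bins spectrogram layout are illustrative, not the authors' implementation):

```python
import numpy as np

def sparsify_spectrum_pca(spectrogram, n_components=8):
    """Approximate a (frames x bins) magnitude spectrogram by
    projecting its frames onto their leading principal components
    and reconstructing -- a simplified stand-in for PCA-based
    spectral sparsification of the harmonic part."""
    mean = spectrogram.mean(axis=0, keepdims=True)
    centered = spectrogram - mean
    # SVD of the centered frames yields the principal spectral axes
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    k = min(n_components, len(s))
    # Reconstruct using only the k strongest components
    return (U[:, :k] * s[:k]) @ Vt[:k, :] + mean
```

Keeping fewer components yields a sparser, smoother spectral envelope at the cost of reconstruction error; the paper additionally handles the transient part separately before remixing.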
Article
Introduction Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how discrimination ability relates to speech recognition outcomes in adult CI users are still unknown. The current study investigated talker discrimination ability in adult CI users, and the contributions of age, auditory sensitivity, and neurocognitive skills. In addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored. Methods Fourteen post-lingually deaf adult CI users (3 female, 11 male) with ≥1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated if the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition. Results CI users showed poor same-gender talker discrimination, but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination. Same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy. Conclusion Adult CI users demonstrate overall poor talker discrimination ability. 
Individual differences in mixed-gender discrimination ability were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
Article
Full-text available
Objective: The aim of this case series was to assess the impact of auditory rehabilitation with cochlear implantation on the cognitive function of elderly patients over time. Design: This is a longitudinal case series of prospective data assessing neurocognitive function and speech perception in an elderly cohort pre- and post-implantation. Setting: University cochlear implant center. Participants: The participants were seven post-lingually deafened elderly women (mean age, 73.6 years; SD, 5.82; range, 67-81 years) who received cochlear implants. Measurements: A neurocognitive battery of 20 tests assessing intellectual function, learning, short- and long-term memory, verbal fluency, attention, mental flexibility, and processing speed was performed prior to and 2-4.1 years (mean, 3.7) after cochlear implantation (CI). Speech perception testing using Consonant-Nucleus-Consonant words was performed prior to implantation and at regular intervals postoperatively. Individual and aggregate differences in cognitive function pre- and post-CI were estimated. Logistic regression with cluster adjustment was used to estimate the association (% improvement or % decline) between speech understanding and years from implantation at 1 year, 2 years, and 3 years post-CI. Results: Improvements after CI were observed in 14 (70%) of all subtests administered. Declines occurred in five (25%) subtests. In 55 individual tests (43%), post-CI performance improved compared to a patient's own performance before implantation. Of these, nine (45%) showed moderate or pronounced improvement. Overall, improvements were largest in the verbal and memory domains. Logistic regression demonstrated a significant relationship between speech perception and cognitive function over time. Five neurocognitive tests were predictive of improved speech perception following implantation. 
Conclusion: Comprehensive neurocognitive testing of elderly women demonstrated areas of improvement in cognitive function and auditory perception following cochlear implantation. Multiple neurocognitive tests were strongly associated with current speech perception measures. While these data shed light on the complex relationship between hearing and cognition by showing that CI may slow the expected age-related cognitive decline, further research is needed to examine the impact of hearing rehabilitation on cognitive decline.
Chapter
Full-text available
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18–91 years), testing for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and the two correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.
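The "controlling for age and audibility" step above amounts to a partial correlation: regress both variables on the covariates and correlate the residuals. A minimal sketch of that generic technique (not the survey's actual analysis code; names and shapes are assumptions):

```python
import numpy as np

def partial_correlation(x, y, covariates):
    """Correlation between x and y after regressing out the
    column-stacked covariates (e.g. age and audibility) from both
    via ordinary least squares."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    # Residualize x and y on the covariates
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

If a raw correlation between two measures is driven entirely by a shared covariate such as age, the partial correlation computed this way falls toward zero, which is why the age-group analysis in the abstract is informative.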
Article
Full-text available
This study examined the effects of executive control and working memory on older adults' sentence-final word recognition. We asked how important executive functions are to this process and how their contribution is modulated by the predictability of the speech material. To this end, we tested 173 neurologically intact adult native English speakers aged 55-84 years. Participants completed a sentence-final word recognition test in which sentential context was manipulated and sentences were presented at different levels of babble, along with multiple tests of executive functioning (assessing inhibition, shifting, and efficient access to long-term memory) and working memory. Using a generalized linear mixed model, we found that better inhibition was associated with higher accuracy in word recognition, while increased age and greater hearing loss were associated with poorer performance. Findings are discussed in the framework of semantic control and are interpreted as supporting a theoretical view of executive control which emphasizes functional diversity among executive components.
Article
Full-text available
Objective: To assess differences in pitch-ranking ability across a range of speech understanding performance levels and as a function of electrode position. Study design: An observational study of a cross-section of cochlear implantees. Setting: Tertiary referral center for cochlear implantation. Patients: A total of 22 patients were recruited. All three manufacturers' devices were included (MED-EL, Innsbruck, Austria, n = 10; Advanced Bionics, California, USA, n = 8; and Cochlear, Sydney, Australia, n = 4) and all patients were long-term users (more than 18 months). Twelve of these were poor performers (scores on BKB sentence lists <60%) and 10 were excellent performers (BKB >90%). Intervention: After measurement of threshold and comfort levels, and loudness balancing across the array, all patients underwent thorough pitch-ranking assessments at 80% of comfort levels. Main outcome measure: Ability to discriminate pitch across the electrode array, measured by consistency in discrimination of adjacent pairs of electrodes, as well as an assessment of the pitch order across the array using the midpoint comparison task. Results: Within the poor performing group there was wide variability in ability to pitch rank, ranging from no errors to a complete inability to reliably and consistently differentiate pitch change across the electrode array. Good performers were, overall, significantly more accurate at pitch ranking (p = 0.026). Consistent pitch ranking was found to be a significant independent predictor of BKB score, even after adjusting for age. Users of the MED-EL implant experienced significantly more pitch confusions at the apex than at more basal parts of the electrode array. Conclusions: Many cochlear implant users struggle to discriminate pitch effectively. Accurate pitch ranking appears to be an independent predictor of overall outcome. 
Future work will concentrate on manipulating maps based upon pitch discrimination findings in an attempt to improve speech understanding.
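The midpoint comparison task mentioned above can be thought of as a binary-insertion sort driven by pairwise pitch judgements: each new electrode is compared against the midpoint of the already-ordered list, halving the candidate range with every response. A hedged sketch of that logic, where the comparison callback stands in for the listener's "which sounds lower?" judgement (this illustrates the task's structure, not the published protocol):

```python
def midpoint_comparison_order(electrodes, lower_pitched):
    """Order electrodes by perceived pitch via binary insertion.
    `lower_pitched(a, b)` returns True if electrode a is judged
    lower in pitch than electrode b (a stand-in for the listener)."""
    ordered = []
    for e in electrodes:
        lo, hi = 0, len(ordered)
        while lo < hi:
            mid = (lo + hi) // 2
            if lower_pitched(e, ordered[mid]):
                hi = mid  # e sits below the midpoint electrode
            else:
                lo = mid + 1  # e sits above it
        ordered.insert(lo, e)
    return ordered
```

Because each placement needs only about log2(n) comparisons, the task stays tractable even for long arrays, which is one reason midpoint-style ordering is preferred over exhaustive pairwise ranking.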
Article
The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today. Linear flow models of information processing common in the 1960s and 1970s centered on the transfer of verbal information from a limited-capacity short-term memory store to long-term memory through rehearsal. Current conceptions see working memory as a dynamic system that includes both maintaining and manipulating information through a series of interactive components that include executive control and attentional resources. These models also reflect the evolution from an almost exclusive concentration on working memory for verbal materials to inclusion of a visual working memory component. Although differing in postulated mechanisms and emphasis, these evolving viewpoints all share the recognition that human information processing is a limited-capacity system with limits on the amount of information that can be attended to, remain activated in memory, and utilized at one time. These limitations take on special importance in spoken language comprehension, especially when the stimuli have complex linguistic structures or listening effort is increased by poor acoustic quality or reduced hearing acuity.
Conference Paper
Age-related differences are observed on many measures of both perceptual and cognitive processing. Indeed, strong correlations of basic hearing and vision measures with age-related variation in intelligence have highlighted the powerful links between perception and cognition. In this paper, links between age-related differences in auditory temporal processing and slowing in cognitive processing are explored in an effort to illuminate how older adults listen to language spoken in challenging everyday conditions. Experiments in which the signal-to-noise condition is varied to equate listening difficulty for younger and older adults, and experiments that simulate auditory aging in younger listeners, provide evidence that at least some of the apparent age-related differences in cognitive performance during spoken language comprehension may be secondary to auditory temporal processing differences.