Laryngoscope Investigative Otolaryngology
© 2016 The Authors. Laryngoscope Investigative Otolaryngology
published by Wiley Periodicals, Inc. on behalf of The Triological Society
Non-auditory Neurocognitive Skills Contribute to Speech
Recognition in Adults With Cochlear Implants
Aaron C. Moberly, MD; Derek M. Houston, PhD; Irina Castellanos, PhD
Objective: Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear
implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient
factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to deter-
mine whether non-auditory neurocognitive skills could explain speech recognition variability exhibited by adult CI users.
Study Design: Thirty postlingually deafened adults with CIs and thirty age-matched normal-hearing (NH) controls were enrolled.
Methods: Participants were assessed for recognition of words in sentences in noise and on several non-auditory measures
of neurocognitive function. These non-auditory tasks assessed global intelligence (problem-solving), controlled fluency, work-
ing memory, and inhibition-concentration abilities.
Results: For CI users, faster response times during a non-auditory task of inhibition-concentration predicted better rec-
ognition of sentences in noise; however, similar effects were not evident for NH listeners.
Conclusions: Findings from this study suggest that inhibition-concentration skills play a role in speech recognition for
CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel tar-
get for intervention.
Key Words: cochlear implants, sensorineural hearing loss, speech perception.
Although cochlear implants (CIs) are effective in
restoring access to auditory input for adults with
acquired hearing loss, the benefits to speech recognition
are not consistent across patients. Average speech recog-
nition after implantation is approximately 70% correct
words in sentences in quiet, with generally poorer per-
formance in noise. Some patients experience minimal
speech recognition benefit after implantation, while
others achieve scores near 100% in quiet. This variability in outcomes presents a challenge for healthcare
providers. Identifying factors that explain outcome
variability, along with factors that can be used to prog-
nosticate postoperative outcomes, may help us to better
counsel patients as well as to identify novel targets for
clinical intervention for poorly performing patients.
Most research on postlingually deaf adults with CIs
has focused on the “bottom-up” auditory sensitivity to
the spectral and temporal properties of speech signals by
improving CI hardware, processing, and stimulation strategies. However, there is increasing evidence
that “top-down” neurocognitive mechanisms, here broadly defined as the use of language knowledge and executive control during intentional and goal-directed behavior, contribute to speech recognition outcomes.
During spoken language recognition, the listener must
use neurocognitive skills to make sense of the incoming
speech signal, relating it to linguistic representations in long-term memory. These neurocognitive processes
appear to be especially important when the bottom-up
sensory input is degraded (e.g., in noise, when using a
hearing aid, or when listening to the degraded signals
transmitted by a CI); degraded input leads to greater
ambiguity in how the information within that input
should be organized perceptually. Under these degraded
listening conditions, sufficient neurocognitive resources
are required to result in successful speech recognition.
A number of neurocognitive skills have been exam-
ined previously for their effects on speech recognition in
adults with lesser degrees of hearing loss. Some listeners
may be better able to make sense of degraded speech by
being able to more effectively store and integrate new
information with information presented earlier, or by
being able to do so more rapidly.

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial, and no modifications or adaptations are made.

From the Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA

Editor’s Note: This Manuscript was accepted for publication 8 October 2016.

Data from this study were presented at the 2016 Triological Society annual meeting of the Combined Otolaryngology Spring Meetings (COSM), May 20-21, 2016, in Chicago, IL.

Financial Disclosures: Research reported in this publication was supported by the Triological Society Career Development Award and the American Speech-Language-Hearing Foundation Speech Science Award to Aaron Moberly. Normal-hearing participants were recruited through ResearchMatch, which is funded by the NIH Clinical and Translational Science Award (CTSA) program, grants UL1TR000445 and 1U54RR032646-01.

Conflicts of Interest: None

Correspondence: Aaron C. Moberly, Division of Otology, Neurotology, & Cranial Base Surgery, The Ohio State University, 915 Olentangy River Road, Suite 4000, Columbus, OH 43212. E-mail: Aaron.Moberly@

Laryngoscope Investigative Otolaryngology 00: Month 2016 Moberly et al.: Neurocognitive skills and speech recognition

In general, measures of
verbal working memory, a limited-capacity temporary
storage mechanism for holding and processing informa-
tion, have been found to be successful predictors of
speech recognition under degraded or challenging listening conditions. On the other hand, general scholastic
abilities (e.g., standardized test scores or grade point
average), tests of IQ, and measures of simple reaction
time have typically failed to demonstrate significant
associations with speech recognition performance.
When it comes to CI users, much less is known
regarding the role of neurocognitive processes during
speech recognition. In an early study of predictors of
speech recognition performance in 29 adults with early-
generation multichannel CIs, scores on a Visual Monitor-
ing Task, requiring a rapid response to digits displayed
on a computer screen when a specified pattern was pro-
duced, and a visual Sequence Learning Task (a written
task of rapid detection and completion of a sequence of
characters) accounted for 10 to 31% of variance in
speech recognition measures.
Follow-up studies in a
larger group of 48 adults receiving CIs led to the development of a preoperative predictive index using multivariate regression modeling, which included duration of deafness, speech-reading ability, residual hearing function, measures of compliance with treatment, and cognitive ability.
In their combined multivariate
regression analysis, scores on the Visual Monitoring
Task predicted approximately 5 to 20% of the variance
in speech recognition outcomes. Interestingly, a more
recent study demonstrated Visual Monitoring Task
scores as significant predictors of accuracy in music rec-
ognition, suggesting similar neurocognitive demands in
tasks of speech recognition and music perception.
These early studies of predictors of speech recogni-
tion performance in CI users demonstrated support for
the role of rapid processing of sequentially presented
stimuli. The first goal of the current study was to exam-
ine several other neurocognitive abilities in a group of
adult CI users. The study was designed to test the
hypothesis that non-auditory neurocognitive skills con-
tribute to sentence recognition scores in postlingually
deafened adult CI users. Several neurocognitive skills
likely come into play when performing a task of sentence
recognition under degraded listening conditions. In particular, these skills include sustaining controlled attention to the task; exerting controlled fluency, the ability to process stimuli rapidly under concentration demands; and exerting inhibition-concentration, the ability to concentrate on information relevant to the task while suppressing prepotent or automatic responses that are not. Support for the role of inhibitory control comes
from studies demonstrating that reductions in older
adults’ abilities to ignore task-irrelevant information are
an important contributor to their difficulty recognizing
words in noise.
Inhibitory processes may also facilitate the identification of correct lexical items and inhibit incorrect lexical candidates.
In CI users, it is possible that neurocognitive abilities
play an even greater role in speech recognition than for
individuals with normal hearing (NH) listening under
degraded conditions (e.g., noise), because CI users face
even greater degrees of degradation of the spectro-
temporal details of speech delivered by their implants.
The second goal of this study was to examine whether the
relations among non-auditory measures of neurocognitive
skills and sentence recognition were different between CI
users and NH age-matched peers listening to sentences in noise.

To address the above goals, a group of postlingually
deafened adult experienced CI users, alongside a group
of age-matched peers with NH, were tested using several
measures of recognition of words in sentences, along
with non-auditory measures of neurocognitive function,
including global fluid intelligence (problem-solving),
working memory, controlled fluency, and inhibition-
concentration abilities. Neurocognitive scores were ana-
lyzed for their relationships with sentence recognition.
Addressing these two goals should have clinical
ramifications: identifying neurocognitive factors that
contribute to speech recognition outcomes, which can be
tested in a non-auditory fashion, could suggest novel
diagnostic predictors of outcomes for patients consider-
ing cochlear implantation. Moreover, findings could
suggest potential neurocognitive intervention targets for
poorly performing patients.
MATERIALS AND METHODS
Sixty adults were enrolled. Thirty were experienced CI
users, between ages 50 and 82 years, recruited from the Otolar-
yngology department at The Ohio State University. Implant
users had varying etiologies of hearing loss and ages at implan-
tation; however, all CI users had progressive declines in hearing
during adulthood. All patients received their implants at the
age of 35 years or later. Participants had CI-aided thresholds
better than 35 dB HL at 0.25, 0.5, 1, and 2 kHz, as measured by
clinical audiologists within the year before study enrollment.
All patients had used their CIs for at least 9 months. All used
Cochlear devices and an Advanced Combination Encoder (ACE) processing strategy. Thirteen CI users had a right CI, nine used a left device, and eight had bilateral CIs. A contralateral hearing aid was
worn by 13 patients. During testing, participants wore devices
in their everyday mode, including use of hearing aids, and kept
the same settings during the entire testing session. Residual
hearing in each ear was assessed immediately before testing.
Thirty normal-hearing (NH) controls were also tested, matched as closely as possible to the chronological ages of the CI users. Controls were evaluated for NH immediately
before testing; NH was defined as four-tone (0.5, 1, 2, and 4 kHz)
pure-tone average (PTA) better than 25 dB HL in the better ear.
This criterion was relaxed to 30 dB HL PTA for participants over
age 60 years, and only three had a PTA of poorer than 25 dB HL.
NH control participants were recruited from patients in the Oto-
laryngology department with non-otologic complaints, or by using
ResearchMatch, a research recruitment database.
All participants underwent screening to ensure no evi-
dence of cognitive impairment. The Mini-Mental State Exami-
nation (MMSE) was used, which is a validated assessment tool
for verbal working memory, attention, and the ability to follow commands. Raw scores were converted to T scores, using age and education, with a T score less than 29 being concerning for cognitive impairment. All participants had T scores greater than 29 on the MMSE.
Participants were also assessed for basic word-reading
ability, using the Word Reading subtest of the Wide Range
Achievement Test, Fourth Edition (WRAT-4), serving as a metric of general language proficiency. All participants demonstrated a standard score of at least 85, with no participant scoring poorer than
one standard deviation below the mean. Because some tasks
required looking at a computer monitor or paper forms, a final
screening test of near-vision was performed; all participants
had corrected near-vision of better than or equal to 20/30, the
criterion for passing vision screens in educational settings.
Participants of both CI and NH groups were adults with
spoken American English as their first language. All had a high
school diploma, except for one CI user with a GED. A measure
of socioeconomic status (SES) was obtained, because SES may
predict access to vocabulary and language. SES was quantified
using a metric defined by Nittrouer and Burton, based on occupational and educational levels, using two scales between 1
and 8. Scores of 8 were the highest levels possible. The two
scores were multiplied, resulting in scores between 1 and 64.
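As a minimal sketch of this computation (the function and example levels below are hypothetical; the published scales assign specific occupational and educational categories not reproduced here):

```python
def ses_score(occupation_level: int, education_level: int) -> int:
    """Compute the Nittrouer and Burton SES metric: the product of
    occupational and educational levels, each rated on a 1-8 scale,
    yielding a composite between 1 and 64."""
    for level in (occupation_level, education_level):
        if not 1 <= level <= 8:
            raise ValueError("Each level must be between 1 and 8")
    return occupation_level * education_level

# Example: occupational level 5 and educational level 7 -> composite of 35
composite = ses_score(5, 7)
```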
No significant differences were found for age or SES, but CI
participants scored significantly more poorly on the reading and
cognitive screening tasks. Demographic and audiologic data for
the CI users are shown in Table 1. Mean demographic measures
for the CI and NH groups are shown in Table 2.
Audiometry was performed using a Welch Allyn TN262
audiometer with TDH-39 headphones. For the MMSE and
WRAT screening tasks, as well as the tasks of sentence recogni-
tion and the neurocognitive tasks, participant responses were
video- and audio-recorded. Participants wore vests holding FM
transmitters that sent signals to receivers, which provided
input directly into the video camera. Responses for these tasks
were live-scored but then could also be scored later; two staff
members could independently score responses to check reliabili-
ty. Participants were tested while using their usual devices (one
CI, two CIs, or CI plus contralateral hearing aid) or no devices
(for NH controls), and devices were checked at the beginning of
testing by having the tester confirm sound detection by the par-
ticipant. Speech samples for the sentence recognition measures
were collected from a male talker directly onto the computer
hard drive, via an AKG C535 EB microphone, a Shure M268
amplifier, and a Creative Laboratories Soundblaster soundcard.
All tasks were performed in a soundproof booth or a
sound-treated testing room.
Sentence Recognition. Three measures examining the
recognition of words in sentences were included: long, syntacti-
cally complex sentences (“long, complex” sentences); short, high-
ly constrained, meaningful sentences (“short, meaningful”
sentences); and short strings of nonwords that were syntactical-
ly correct but semantically anomalous (“nonsense” sentences).
To avoid ceiling and floor effects, participants were tested in dif-
ferent amounts of speech-shaped noise based on pilot testing of
three NH and three CI listeners, with the presentation of signal and noise at 68 dB SPL. For CI participants, the signal-to-noise ratio (SNR) was +3 dB for long, complex and short, meaningful sentences, and CI users were tested in quiet for nonsense sentences; NH listeners were tested at −3 dB SNR for all sentence
recognition tasks. Percentages of correct words repeated for
each sentence type were used as the measures of interest.
Recognition of Words in Long, Complex Sentences.
These were long, syntactically complex sentences designed to assess comprehension of complex syntax in children with dyslexia (e.g., “The stars that the sailor saw came out at midnight”). These sentences contained a mix of three syntax types, including compound clauses and subject-object constructions.
Recognition of Words in Short, Meaningful Senten-
ces. Fifty-four of the 72 five-word sentences (four for practice,
50 for testing) used by Nittrouer and Lowenstein were included. These sentences are semantically predictable and syn-
tactically correct, and they follow a subject-predicate structure
(e.g., “Flowers grow in the garden”).
Recognition of Words in Nonsense Sentences. These
sentences were four words in length, syntactically correct, but
semantically anomalous (e.g., “Soft rocks taste red”), used by
Nittrouer and colleagues.
Non-auditory Measures of Neurocognitive Function-
ing. Non-auditory tasks from the Leiter International Performance Scale, Third Edition (Leiter-3), were used to assess global intelligence (“Figure
Ground,” “Form Completion,” and “Visual Patterns”), controlled
fluency (“Attention Sustained”), and working memory (“Forward Memory” and “Reverse Memory”). A non-auditory computerized
measure of inhibition-concentration (Stroop) was also collected.
Leiter-3. The Leiter-3 is a standardized neurocognitive
assessment battery designed to assess neurocognitive functions
in children and adults, with age norms up to 75+ years of age.
Because all measures are non-auditory in nature, the Leiter-3
can be used with patients with hearing loss. All instructions are
given to the participant through pantomime and gesturing. The
following measures from the Leiter-3 were included. The first
three, Figure Ground, Form Completion, and Visual Patterns
were used as measures of global intellectual ability related to
fluid reasoning, and were collected to ensure that these global
intellectual skills were equivalent between CI and NH groups.
Moreover, it was predicted that these measures would not
demonstrate relations with speech recognition abilities. The
other tasks included from the Leiter-3 were Attention Sustained
(considered a task of controlled fluency in this paper) and For-
ward and Reverse Memory (non-auditory measures of working
memory). All tasks were presented as discussed in the Leiter-3
manual. Raw scores were converted into standard scores, which
were used in analyses.
Global Intellectual Skills. During the Figure Ground
task, participants pointed to where figures depicted on cards
were located on a larger picture. As the task proceeded, the
pictures and figures became more detailed, and abstract images
were included, increasing the difficulty of the task. During the
Form Completion task, three blocks with fragments of a com-
plete picture were placed on a table in front of the participant.
Participants were required to put the blocks in the correspond-
ing slots of an easel to complete the target form. During the
Visual Patterns task, participants selected blocks in an appro-
priate sequence to complete a visual pattern. For each of these
tasks, correct responses were counted.
Controlled Fluency. During the Attention Sustained
subtest, participants were given 30 or 60 seconds to cross out as many figures as possible on a piece of paper
that matched a target figure shown at the top of the page.
Correct responses were counted, and errors were subtracted.
Working Memory. During the Forward Memory and
Reverse Memory subtests, an easel was shown with several pic-
tures of animals in squares. The tester pointed to a sequence of
pictures, and participants were required to point to the corre-
sponding pictures in the same order or in the reverse order.
Correct responses were counted.
Table 1. Cochlear implant participant demographics. Sentence recognition tasks were performed at +3 dB SNR for long, complex sentences and for short, meaningful sentences, and in quiet for nonsense sentences.
ID Sex Age (years) Age at CI (years) SES Ear Aid Etiology of Hearing Loss Better-ear PTA (dB HL) Long, complex (%) Short, meaningful (%) Nonsense (%)
1 F 64 54 24 B N Genetic 120.0 70.7 96.0 93.0
2 F 66 62 35 R Y Genetic, progressive, adult onset 78.8 32.6 59.2 86.0
3 M 66 61 18 L N Noise, Meniere’s 82.5 44.3 66.4 91.0
4 F 66 58 12 R Y Genetic, progressive, adult onset 98.8 62.2 92.0 83.0
6 M 69 65 24 R N Genetic, progressive, adult onset 88.8 20.4 76.0 84.0
7 M 58 52 36 B N Rubella, progressive 115.0 6.5 25.6 40.0
8 F 56 48 25 R Y Genetic, progressive 82.5 51.4 84.0 77.0
9 M 79 67 49 L N Genetic 120.0 0.7 0.0 46.0
10 M 79 76 36 R Y Progressive, adult onset, noise 70.0 34.0 73.6 71.0
12 F 68 56 12 B N Otosclerosis 112.5 12.7 25.6 92.0
13 M 54 50 24 B N Progressive, adult onset 120.0 58.2 84.8 90.0
16 F 62 59 35 R N Progressive, adult onset 115.0 7.9 17.6 69.0
19 F 75 67 36 L N Progressive, adult onset, autoimmune 120.0 1.9 1.6 48.0
20 M 78 74 15 L N Ear infections 108.8 4.2 0.0 57.0
21 M 82 58 42 L Y Meniere’s 71.3 29.4 55.2 72.0
23 F 80 73 30 R N Progressive, adult onset 87.5 26.2 35.2 75.0
25 M 58 57 24 R Y Autoimmune, sudden 120.0 7.2 3.2 72.0
28 M 77 72 12 B N Progressive, adult onset 120.0 0.9 0.8 41.0
31 F 67 62 25 L Y Progressive as child 102.5 8.6 16.8 68.0
34 M 60 54 42 L Y Noise, Meniere’s, sudden 98.8 7.5 1.6 83.0
35 M 68 62 42 B N Genetic, progressive, adult onset 120.0 31.3 68.8 74.0
37 F 50 35 35 B N Progressive as child 120.0 76.8 97.6 92.0
38 M 75 74 35 L Y Ototoxicity 96.3 1.4 3.2 31.0
39 F 63 61 30 R N Progressive, adult onset 107.5 16.0 16.0 82.0
40 F 66 59 15 B N Genetic, Meniere’s 120.0 31.5 73.6 89.0
41 F 59 56 15 R Y Sudden HL 87.5 37.1 60.8 80.0
42 M 82 76 42 R Y Progressive, adult onset, noise 68.8 38.9 61.6 74.0
44 F 72 66 25 R N Progressive, adult onset 98.8 10.6 7.2 77.0
46 M 75 74 42 L Y Progressive, adult onset 87.5 0 0.0 27.0
48 F 78 48 15 R Y Progressive, adult onset 110.0 7.6 12.0 53.0
Notes: SES: socioeconomic status; PTA: pure-tone average; HL: hearing level

Inhibition-Concentration. A non-auditory computerized version of a verbal Stroop task was used, which is publicly
available (http://www.millisecond.com). Participants were pre-
sented with color words one at a time on a computer monitor
and were asked to give a response naming the color of the text
of the word shown. Scoring was done automatically at the time
of testing when the participant directly entered responses into a
computer by pressing buttons corresponding to the colors.
Response times were computed for correct responses to congru-
ent words (automatic word reading; e.g., the word “Red” was
shown in red ink) and to incongruent words (requires partici-
pants to inhibit word reading and concentrate on the ink color;
e.g., the word “Red” was shown in blue ink).
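A sketch of this scoring follows; the trial records and helper below are hypothetical illustrations (the Millisecond script’s actual data format is not reproduced here), and the interference difference score is shown only as a common derived measure, since this study analyzed congruent and incongruent response times separately:

```python
# Each hypothetical trial: (condition, response was correct?, response time in s)
trials = [
    ("congruent", True, 1.10), ("congruent", True, 1.30),
    ("congruent", False, 2.00),            # error trials are excluded from RTs
    ("incongruent", True, 1.60), ("incongruent", True, 1.80),
]

def mean_rt(trials, condition):
    """Mean response time over correct trials of one condition."""
    rts = [rt for cond, correct, rt in trials if cond == condition and correct]
    return sum(rts) / len(rts)

congruent_rt = mean_rt(trials, "congruent")      # automatic word reading
incongruent_rt = mean_rt(trials, "incongruent")  # requires inhibiting reading
# Slower incongruent responses reflect the cost of suppressing word reading
interference = incongruent_rt - congruent_rt
```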
All procedures were approved by The Ohio State Universi-
ty Institutional Review Board. Participants were tested in one
session over two hours. First, hearing thresholds and screening
measures were obtained. Participants then completed sentence
recognition testing, with different sentence materials presented
in blocks and order of sentences randomized. Lastly, participants completed the neurocognitive testing, with task order randomized across participants.
Independent-samples t-tests were performed to identify
differences in neurocognitive scores between CI and NH groups.
Pearson product-moment correlation analyses were performed among
neurocognitive and sentence recognition measures.
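These analyses can be sketched with made-up numbers as follows (the authors’ statistical software is not stated; `t_independent`, `pearson_r`, and all data values are illustrative assumptions):

```python
import math
from statistics import mean, stdev

# Hypothetical example scores for the two groups on one neurocognitive measure
nh_scores = [13.0, 11.5, 12.8, 14.1, 12.2]
ci_scores = [11.8, 10.9, 12.0, 11.2, 12.5]

def t_independent(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

# Group comparison on the neurocognitive measure
t_value = t_independent(nh_scores, ci_scores)

# Correlation of inhibition-concentration response times with sentence
# recognition within one group (faster responses pair with higher scores here)
stroop_rt = [1.2, 1.5, 1.8, 1.4, 2.0]          # response times in seconds
sentence_pct = [80.0, 65.0, 40.0, 70.0, 35.0]  # percent words correct
r_value = pearson_r(stroop_rt, sentence_pct)
```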
For the CI group, side of implantation (right, left,
or bilateral) did not influence any of the neurocognitive or sentence recognition performance scores (p > .50). Additionally, no differences in performance were found for CI users who wore only CIs versus a CI plus a hearing aid (p > .50). Therefore, all CI users were included
together in subsequent analyses.
On screening measures, CI users performed signif-
icantly more poorly than NH peers on word reading
(WRAT) and cognitive functioning (MMSE), though all
participants were within the normal range. Item anal-
yses of the MMSE revealed that 74% of the errors in
CI users’ responses occurred during questions requir-
ing verbal working memory processes (e.g., recall of a 3-word list). CI and NH groups did not differ on global
nonverbal intelligence (Figure Ground, Form Comple-
tion, and Visual Patterns), nor did they differ on con-
trolled fluency (Attention Sustained), reverse working
memory (Reverse Memory), or inhibition-concentration
(Verbal Stroop; see Table 3). CI users scored more poor-
ly than NH participants on forward working memory
(Forward Memory). However, CI users displayed for-
ward working memory scores within the normal range.
Scores for the sentence recognition assessments were
not normally distributed; therefore, arcsine transfor-
mations were computed and used for all subsequent
analyses. Sentence recognition scores were not directly
compared between CI and NH groups, because they
were tested at different SNRs, but mean scores are
shown in Table 3.
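The exact transform is not specified beyond “arcsine”; a common choice for proportion scores is the arcsine-square-root transform, sketched here as an assumption:

```python
import math

def arcsine_transform(percent_correct: float) -> float:
    """Arcsine-square-root transform of a percent-correct score,
    commonly used to stabilize the variance of proportions near
    the floor (0%) and ceiling (100%)."""
    proportion = percent_correct / 100.0
    if not 0.0 <= proportion <= 1.0:
        raise ValueError("percent_correct must lie in [0, 100]")
    return math.asin(math.sqrt(proportion))

# 0%, 50%, and 100% map to 0, pi/4, and pi/2, respectively
transformed = arcsine_transform(50.0)
```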
Table 2. Mean demographic and screening measures for the NH and CI groups, with results of independent-samples t-tests.
Measure NH Mean (SD) CI Mean (SD) t value p value
Age (years) 68.3 (9.4) 68.4 (8.9) 0.03 .98
WRAT Word Reading (standard score) 107 (12.5) 100.5 (11.1) 2.13 .04
MMSE (T score) 55.8 (10.7) 49.8 (9.4) 2.29 .03
SES 34 (13.9) 28.2 (11.3) 1.74 .09
Table 3. Group mean neurocognitive and sentence recognition scores and results of independent-samples t-tests. Sentence recognition scores were not compared between groups, because the signal-to-noise ratio (SNR) differed between groups. For CI users, sentences were presented at +3 dB SNR for long, complex and short, meaningful sentences and in quiet for nonsense sentences. For NH listeners, all sentence recognition tasks were presented at −3 dB SNR.
NH (N = 30) CI (N = 30)
Measure N Mean (SD) N Mean (SD) t value p value
Figure Ground (scaled score) 30 11.6 (5.2) 30 11.2 (3.2) .36 .72
Form Completion (scaled score) 30 10.9 (2.4) 30 11.0 (2.9) .10 .92
Visual Patterns (scaled score) 30 12.4 (2.6) 30 11.8 (2.5) .89 .38
Attention Sustained (scaled score) 30 10.2 (1.9) 30 9.6 (2.0) 1.20 .24
Forward Memory (scaled score) 30 13.0 (2.3) 30 11.8 (2.3) 2.08 .04
Reverse Memory (scaled score) 30 13.5 (2.4) 30 12.7 (2.2) 1.44 .16
Verbal Stroop - Congruent (response time in seconds) 30 1.22 (.30) 28 1.34 (.47) 1.15 .26
Verbal Stroop - Incongruent (response time in seconds) 30 1.57 (.47) 28 1.72 (.48) 1.16 .25
Sentence Recognition - Long, complex (% words correct) 30 66.7 (14.4) 30 24.6 (22.4)
Sentence Recognition - Short, meaningful (% words correct) 30 81.7 (9.3) 30 40.5 (35.0)
Sentence Recognition - Nonsense (% words correct) 30 38.8 (11.7) 30 70.6 (19.0)
The first goal of this study was to examine whether
neurocognitive skills, assessed using non-auditory tasks,
were associated with sentence recognition performance.
Correlations between neurocognitive scores and sentence
recognition scores are shown in Table 4. For CI users,
only one of the neurocognitive domains, inhibition-
concentration, was significantly associated with all three
sentence recognition scores (p = .02–.03 across sentence
measures). Specifically, the response times from the
“incongruent” condition correlated with sentence recog-
nition scores (see Figure 1), but response times from the
“congruent” condition did not. This finding suggests that
speed of inhibitory control, but not general response
speed, was associated with sentence recognition in CI
users. For NH controls, none of the neurocognitive
scores were associated with sentence recognition.
Because word reading (WRAT) and cognitive functioning
(MMSE) scores were poorer for CI users than NH peers,
these were also examined for correlations with sentence
recognition scores; no significant correlations were found.

The second goal of the study was to determine if
the relations among neurocognitive skills and sentence
recognition would differ between CI and NH groups. It
was predicted that different correlations would be
identified among neurocognitive skills and sentence
recognition scores for CI users than NH peers, because
of the greater degree of spectro-temporal degradation
experienced by CI listeners relative to NH listeners. As
demonstrated in Table 4, no correlations were demon-
strated between neurocognitive scores and sentence
recognition for the NH participants. Thus, it can be
concluded that inhibition-concentration skills contrib-
uted to sentence recognition in CI users, but not in NH listeners.

This study was designed to examine whether the
neurocognitive abilities of postlingually deafened adults
with contemporary CIs, as assessed using non-auditory
measures, would be associated with the ability to recog-
nize words in sentences. Moreover, the study aimed to
examine whether relationships among neurocognitive
measures and sentence recognition differed between CI
and NH listeners.
Results of this study demonstrated that neurocogni-
tive functions were generally similar for CI users as com-
pared with their NH age-matched peers. Scores were
poorer for CI users than for our sample of NH peers on
Forward Memory and MMSE (primarily as a result of rela-
tive deficits on the MMSE on items requiring verbal work-
ing memory). However, CI users’ scores for both Forward
Memory and MMSE were within the normal range. Read-
ing scores were also poorer for the CI group than NH
peers. However, we cannot necessarily attribute these dif-
ferences to hearing loss or use of a CI. Recent studies have
suggested that neurocognitive functions decline with wors-
ening hearing loss, and some even suggest that cochlear
implantation may reverse these declines. Further studies are required to examine these effects in detail.
Table 4. r values from correlation analyses with recognition of words in sentences. CI users were tested at +3 dB SNR for long, complex and short, meaningful sentences, and in quiet for nonsense sentences. NH listeners were tested at −3 dB SNR for all sentence materials.
Measure NH: Long, complex NH: Short, meaningful NH: Nonsense CI: Long, complex CI: Short, meaningful CI: Nonsense
Figure Ground .05 .02 .09 .15 .13 -.03
Form Completion .13 -.11 .01 -.09 -.16 -.17
Visual Patterns .24 -.03 .32 .33 .26 .23
Attention Sustained .14 .07 -.08 .14 .19 .29
Forward Memory -.10 -.35 .17 .23 .23 .14
Reverse Memory .06 -.11 .08 .20 .20 .04
Verbal Stroop - Congruent -.04 .20 .07 -.28 -.29 -.36
Verbal Stroop - Incongruent -.14 -.05 -.03 -.41* -.43* -.43*
*p < .05

Turning to relations among neurocognitive functions and speech recognition, support for our first hypothesis was demonstrated: inhibition-concentration skills of CI
users were significantly correlated with recognizing
words in all three types of sentence materials, with faster
inhibition responses associated with better sentence rec-
ognition. Although inhibition-concentration skills have
not been previously examined in adult CI users, results
are consistent with findings by Sommers and Danielson,
who identified individual differences in inhibitory control
as contributing to sentence recognition performance in
older adults with NH.
We speculate that inhibition-
concentration abilities may be particularly important for
CI users during speech recognition, in which they must
ignore irrelevant stimuli (noise) and/or inhibit perceiving
incorrect lexical items. This explanation is consistent
with models of speech perception that emphasize the role
that working memory plays in inhibiting interference for
irrelevant information, or for inhibiting prepotent but incorrect responses. For example, in the Ease of Lan-
guage Understanding (ELU) model, under degraded lis-
tening conditions, successful speech perception requires a
shift from rapid automatic processing to more effortful,
controlled processing, which is heavily dependent on
working memory capacity.
The relations of inhibition-
concentration, working memory capacity, and speech rec-
ognition processes deserve further exploration.
In contrast to inhibition-concentration, controlled
fluency and non-auditory working memory skills were
not associated with speech recognition scores. At least
two possible conclusions may be drawn from these find-
ings: first, exerting executive control on linguistic repre-
sentations, versus visual representations, may relate
most strongly to speech recognition skills. However, our
results are not consistent with those of Knutson and col-
leagues, who demonstrated relations between speech rec-
ognition measures and visually presented sequential stimuli. Alternatively, results may suggest
that our measures of neurocognitive functioning from the
Leiter-3 are not necessarily sensitive measures tapping into the neurocognitive abilities that underlie spoken language recognition, or that our sample sizes were not large enough to detect significant relations. Further research is necessary to delineate these findings.

Fig. 1. Correlations between sentence recognition scores and inhibition-concentration response times for cochlear implant users. Participants were tested at +3 dB SNR for long, complex sentences and short, meaningful sentences and in quiet for nonsense sentences.
The second hypothesis tested was that the relations
between neurocognitive skills and sentence recognition
would differ between CI users and NH listeners. This
hypothesis was supported: faster inhibition was associat-
ed with better sentence recognition only for CI users.
Several possibilities may explain the lack of signifi-
cant correlations between neurocognitive functions and
speech recognition for NH listeners. One such explana-
tion is that NH listeners’ ranges of performance on the
speech recognition tasks were much narrower than those
of the CI users; this restricted variance in speech recog-
nition scores across NH listeners may have contributed
to the observed weak relationships with neurocognitive
scores. A second explanation is that there are differen-
tial relations between neurocognitive functioning and
speech recognition for CI and NH listeners. This differ-
ential relation between CI and NH listeners is consistent
with recent findings. Füllgrabe and Rosen have demonstrated
that neurocognitive skills (particularly working
memory capacity) contribute little to NH listeners' performance
on tasks of speech recognition in noise, in
contrast with several studies in adults with hearing
loss. Third, it could be that testing listeners under
noise conditions that provide greater informational
masking (e.g., multi-talker babble), rather than the ener-
getic masking provided by speech-shaped noise here,
would allow us to better observe top-down processing
contributions to speech recognition. Finally, although
our primary analyses correlated sentence recognition
with non-auditory neurocognitive skills, we also correlat-
ed five additional measures obtained from testing (Glob-
al Intellectual Skills: Figure Ground, Form Completion,
Visual Patterns; Reading Skills: WRAT; and Cognitive
Impairment Screen: MMSE) with the neurocognitive
assessments, thereby providing clinicians with more
comprehensive information about functioning following
hearing loss and cochlear implantation. However, by
conducting these additional correlations we increased
our risk of experiment-wise error and these additional
correlations should be interpreted with caution. Addi-
tional studies will be required to better understand the
differential relations between NH listeners and patients
with hearing loss, including those with CIs.
Our findings indicate that inhibition-concentration
skills contribute to CI users’ abilities to recognize words
in sentences, whereas the other neurocognitive tests employed
in this study did not predict word recognition ability.
Findings provide further evidence for the role of neuro-
cognitive processing by CI users and imply potential
benefits of developing clinical aural rehabilitation pro-
grams that target inhibition-concentration skills.
Research reported in this publication was supported by the
Triological Society Career Development Award and the
American Speech-Language-Hearing Foundation/Acousti-
cal Society of America Speech Science Award to Aaron
Moberly. Normal-hearing participants were recruited
through ResearchMatch, which is funded by the NIH
Clinical and Translational Science Award (CTSA) pro-
gram, grants UL1TR000445 and 1U54RR032646-01. The
authors would like to acknowledge Susan Nittrouer and
Joanna Lowenstein for their development of sentence rec-
ognition materials used, and Lauren Boyce and Taylor
Wucinich for assistance in data collection and scoring.
The authors declare no conflicts of interest.
1. Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented
at soft to loud levels by adult cochlear implant recipients of three cochle-
ar implant systems. Ear Hear 2004;25:375–387.
2. Gifford RH, Shallop JK, Peterson AM. Speech recognition materials and
ceiling effects: Considerations for cochlear implant programs. Audiol
3. Holden LK, Finley CC, Firszt JB, et al. Factors affecting open-set word
recognition in adults with cochlear implants. Ear Hear 2013;34:342–360.
4. Moberly AC, Houston DM, Castellanos I, Boyce L, Nittrouer S. Linguistic
knowledge and working memory in adults with cochlear implants.
5. Holden LK, Reeder RM, Firszt JB, Finley CC. Optimizing the perception
of soft speech and speech in noise with the advanced bionics cochlear
implant system. Int J Audiol 2011;50:255–269.
6. Kenway B, Tam YC, Vanat Z, Harris F, Gray R, Birchall J, et al. Pitch discrimination—an independent factor in cochlear implant performance outcomes. Otol Neurotol 2015;36:1472–1479.
7. Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving
speech perception in noise with current focusing in cochlear implant
users. Hear Res 2013;299:29–36.
8. Akeroyd MA. Are individual differences in speech reception related to indi-
vidual differences in cognitive ability? A survey of twenty experimental
studies with normal and hearing-impaired adults. Int J Audiol 2008;47:
9. Arehart KH, Souza P, Baca R, Kates J. Working memory, age and hearing
loss: Susceptibility to hearing aid distortion. Ear Hear 2013;34:251–260.
10. Rönnberg J, Lunner T, Zekveld A, et al. The ease of language understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
11. Heald SLM, Nusbaum HC. Speech perception as an active cognitive pro-
cess. Front Syst Neurosci 2014;8:1–15.
12. Pisoni DB, Cleary M. Measures of working memory span and verbal
rehearsal speed in deaf children after cochlear implantation. Ear Hear
13. Jerger J, Jerger S, Pirozzolo F. Correlational analysis of speech audiomet-
ric scores, hearing loss, age, and cognitive abilities in the elderly. Ear
14. Kidd GR, Watson CS, Gygi B. Individual differences in auditory abilities.
J Acoust Soc Am 2007;122:418–435.
15. Surprenant AM, Watson CS. Individual differences in the processing of
speech and nonspeech sounds by normal-hearing listeners. J Acoust Soc
16. Knutson JF, Hinrichs JV, Tyler RS, Gantz BJ, Schartz HA, Woodworth G.
Psychological predictors of audiological outcomes of multichannel cochle-
ar implants: Preliminary findings. Ann Otol Rhinol Laryngol 1991;100:
17. Gantz BJ, Woodworth GG, Knutson JF, Abbas PJ, Tyler RS. Multivariate
predictors of success with cochlear implants. Adv Oto Rhino Laryngol
18. Gantz BJ, Woodworth G, Abbas P, Knutson JF, Tyler RS. Multivariate pre-
dictors of audiological success with cochlear implants. Ann Otol Rhinol
19. Gfeller K, Oleson J, Knutson JF, Breheny P, Driscoll V, Olszewski C. Mul-
tivariate predictors of music perception and appraisal by adult cochlear
implant users. J Am Acad Audiol 2008;19:120–134.
20. Amitay S. Forward and reverse hierarchies in auditory perceptual learn-
ing. Learn Percept 2009;1:59–68.
21. Humes LE, Floyd SS. Measures of working memory, sequence learning,
and speech recognition in the elderly. J Speech Lang Hear Res 2005;48:
22. Cahana-Amitay D, Spiro III A, Sayers JT, et al. How older adults use cog-
nition in sentence-final word recognition. Neuropsychol Dev Cogn B
Aging Neuropsychol Cogn 2015;16:1–27.
23. Janse E. A non-auditory measure of interference predicts distraction by
competing speech in older adults. Neuropsychol Dev Cogn B Aging Neu-
ropsychol Cogn 2012;19:741–758.
24. Pichora-Fuller MK. Processing speed and timing in aging adults: Psycho-
acoustics, speech perception, and comprehension. Int J Audiol 2003;42:
25. Tun PA, McCoy S, Wingfield A. Aging, hearing acuity, and the attentional
costs of effortful listening. Psychol Aging 2009;24:761–766.
26. Wingfield A, Tun PA. Cognitive supports and cognitive constraints on com-
prehension of spoken language. J Am Acad Audiol 2007;18:548–558.
27. Sommers MS, Danielson SM. Inhibitory processes and spoken word recog-
nition in young and older adults: The interaction of lexical competition
and semantic context. Psychol Aging 1999;14:458–472.
28. Folstein MF, Folstein SE, McHugh PR. Mini-mental state – practical
method for grading cognitive state of patients for clinician. J Psychiatr
29. Wilkinson GS, Robertson GJ. Wide Range Achievement Test. 4th ed. Lutz,
FL: Psychological Assessment Resources; 2006.
30. Nittrouer S, Burton LT. The role of early language experience in the devel-
opment of speech perception and phonological processing abilities: evi-
dence from 5-year-olds with histories of otitis media with effusion and
low socioeconomic status. J Commun Dis 2005;38:29–63.
31. Nittrouer S, Lowenstein JH. Learning to perceptually organize speech sig-
nals in native fashion. J Acoust Soc Am 2010;127:1624–1635.
32. Nittrouer S, Tarr E, Bolster V, Caldwell-Tarr A, Moberly AC, Lowenstein
JH. Low-frequency signals support perceptual organization of implant-
simulated speech for adults and children. Int J Audiol 2014;53:270–284.
33. Roid GH, Miller LJ, Pomplun M, Koch C. Leiter international performance
scale, (Leiter-3). Los Angeles: Western Psychological Services; 2013.
34. Cosetti MK, Pinkston JB, Flores JM, et al. Neurocognitive testing and
cochlear implantation: Insights into performance in older adults. Clin
Interv Aging 2016;11:603–613.
35. Wingfield A. Evolution of models of working memory and cognitive resour-
ces. Ear Hear 2016;37:35S–43S.
36. Rönnberg J, Lunner T, Zekveld A, et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
37. Füllgrabe C, Rosen S. Investigating the role of working memory in speech-in-noise identification for listeners with normal hearing. In Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing, Advances in Experimental Medicine and Biology, Dijk PV (ed). 2016;894: