Article

Abstract

Hypothesis: Significant variability in speech recognition persists among postlingually deafened adults with cochlear implants (CIs). We hypothesize that scores of nonverbal reasoning predict sentence recognition in adult CI users. Background: Cognitive functions contribute to speech recognition outcomes in adults with hearing loss. These functions may be particularly important for CI users who must interpret highly degraded speech signals through their devices. This study used a visual measure of reasoning (the ability to solve novel problems), the Raven's Progressive Matrices (RPM), to predict sentence recognition in CI users. Methods: Participants were 39 postlingually deafened adults with CIs and 43 age-matched normal-hearing (NH) controls. CI users were assessed for recognition of words in sentences in quiet, and NH controls listened to eight-channel vocoded versions to simulate the degraded signal delivered by a CI. A computerized visual task of the RPM, requiring participants to identify the correct missing piece in a 3×3 matrix of geometric designs, was also performed. Particular items from the RPM were examined for their associations with sentence recognition abilities, and a subset of items on the RPM was tested for the ability to predict degraded sentence recognition in the NH controls. Results: The overall number of items answered correctly on the 48-item RPM significantly correlated with sentence recognition in CI users (r = 0.35–0.47) and NH controls (r = 0.36–0.57). An abbreviated 12-item version of the RPM was created and performance also correlated with sentence recognition in CI users (r = 0.40–0.48) and NH controls (r = 0.49–0.56). Conclusions: Nonverbal reasoning skills correlated with sentence recognition in both CI and NH subjects. Our findings provide further converging evidence that cognitive factors contribute to speech processing by adult CI users and can help explain variability in outcomes. Our abbreviated version of the RPM may serve as a clinically meaningful assessment for predicting sentence recognition outcomes in CI users.
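The NH controls in this study listened to eight-channel noise-vocoded sentences as a simulation of CI processing. As an illustration of what that kind of processing involves, below is a minimal eight-channel noise-vocoder sketch in Python (numpy/scipy). The band edges, envelope cutoff, and function names are illustrative assumptions, not the authors' exact signal chain.

```python
# Minimal 8-channel noise vocoder sketch (illustrative; not the authors' exact processing).
# Assumptions: log-spaced analysis bands between 100 and 8000 Hz, envelope extraction by
# half-wave rectification plus 160-Hz low-pass filtering.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def lowpass(signal, cutoff, fs, order=4):
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi, fs)
        env = lowpass(np.maximum(band, 0.0), 160.0, fs)      # temporal envelope of the band
        carrier = bandpass(rng.standard_normal(len(speech)), lo, hi, fs)
        out += env * carrier                                 # modulate band-limited noise
    return out / (np.max(np.abs(out)) + 1e-12)               # normalize peak amplitude

# Example usage: vocoded = noise_vocode(clean_speech, fs=16000)
```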


... In this review, the Raven's task is used most frequently to measure non-verbal intelligence (Moberly et al., 2017c, 2018a; Mattingly et al., 2018; Pisoni et al., 2018; Moberly and Reed, 2019; O'Neill et al., 2019; Skidmore et al., 2020; Zhan et al., 2020). The task is to pick the piece that fits within the pattern of a visual geometric matrix. ...
... The "visual digit span task," "Leiter-3 forward and reversed memory test, letters, and symbols," "ALAcog 2-back test," and "Operation Span" (OSPAN) are used to assess visual working memory (Moberly et al., 2016b, 2017c, 2018c; Mattingly et al., 2018; Hillyer et al., 2019; Moberly and Reed, 2019; Skidmore et al., 2020; Tamati et al., 2020; Zhan et al., 2020; Völter et al., 2021; Luo et al., 2022). ...
... First, non-verbal intelligence, assessed using the Raven's Matrices task, was positively related to word or sentence perception in quiet in most studies (9 out of 13) (Moberly et al., 2017c, 2018c; Mattingly et al., 2018; Pisoni et al., 2018; Moberly and Reed, 2019; O'Neill et al., 2019; Skidmore et al., 2020; Zhan et al., 2020). The Raven's task is thought, amongst other things, to involve the ability to induce abstract relations as well as working memory (Carpenter et al., 1990). ...
Article
Full-text available
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to what extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (protocol in open science registries: 10.17605/OSF.IO/Z3G7W; searches in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 articles were included and grouped into three categories according to the measures related to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions recruiting the frontal cortex, the use of visual cues recruiting the occipital cortex, and the continued availability of the temporal cortex for language processing are beneficial for adult CI users. Cognitive assessments indicate that performance on non-verbal intelligence tasks positively correlated with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration: https://doi.org/10.17605/OSF.IO/Z3G7W.
... Variability in basic auditory sensitivity (e.g., spectral and temporal resolution) may help predict individual differences in speech recognition in postlingually deafened adult CI users (e.g., Moberly et al.30). CI users rely on a signal that is highly degraded in spectrotemporal detail because of the limitations of the electrode-nerve interface and the relatively broad electrical stimulation of the auditory nerve. ...
... Working memory (Lyxell et al.29; Tao et al.52) as well as inhibitory control (Moberly et al.31), verbal learning and memory (Pisoni et al.25), and processing speed (Tinnemore et al.53) have been linked to individual differences in speech recognition among adult CI users. In addition, although a strong relation has not been established, nonverbal reasoning skills have recently been found to be associated with individual performance among postlingually deafened adult CI users, independently of age (Mattingly et al.30). ...
... The PRESTO materials have been shown to be more challenging to recognize than sentence materials with lower talker variability, such as Hearing in Noise Test (HINT; Nilsson et al.37) and AzBio (Spahr et al.46) sentences, for NH listeners and hearing-impaired listeners with CIs (Gilbert et al.50). In addition, PRESTO materials yield large individual differences in performance, which have been found to be related to several neurocognitive skills (Tamati et al.50; Moberly et al.30). Taken together, these earlier studies suggest that high-variability speech recognition cannot be attributed to peripheral hearing acuity or audibility alone and reflects complex interactions of auditory sensitivity and neurocognitive processes. ...
Article
Background: Postlingually deafened adult cochlear implant (CI) users routinely display large individual differences in the ability to recognize and understand speech, especially in adverse listening conditions. Although individual differences have been linked to several sensory ("bottom-up") and cognitive ("top-down") factors, little is currently known about the relative contributions of these factors in high- and low-performing CI users. Purpose: The aim of the study was to investigate differences in sensory functioning and neurocognitive functioning between high- and low-performing CI users on the Perceptually Robust English Sentence Test Open-set (PRESTO), a high-variability sentence recognition test containing sentence materials produced by multiple male and female talkers with diverse regional accents. Research design: CI users with accuracy scores in the upper (HiPRESTO) or lower quartiles (LoPRESTO) on PRESTO in quiet completed a battery of behavioral tasks designed to assess spectral resolution and neurocognitive functioning. Study sample: Twenty-one postlingually deafened adult CI users, with 11 HiPRESTO and 10 LoPRESTO participants. Data collection and analysis: A discriminant analysis was carried out to determine the extent to which measures of spectral resolution and neurocognitive functioning discriminate HiPRESTO and LoPRESTO CI users. Auditory spectral resolution was measured using the Spectral-Temporally Modulated Ripple Test (SMRT). Neurocognitive functioning was assessed with visual measures of working memory (digit span), inhibitory control (Stroop), speed of lexical/phonological access (Test of Word Reading Efficiency), and nonverbal reasoning (Raven's Progressive Matrices). Results: HiPRESTO and LoPRESTO CI users were discriminated primarily by performance on the SMRT and secondarily by the Raven's test. No other neurocognitive measures contributed substantially to the discriminant function. Conclusions: High- and low-performing CI users differed by spectral resolution and, to a lesser extent, nonverbal reasoning. These findings suggest that the extreme groups are determined by global factors of richness of sensory information and domain-general, nonverbal intelligence, rather than specific neurocognitive processing operations related to speech perception and spoken word recognition. Thus, although both bottom-up and top-down information contribute to speech recognition performance, low-performing CI users may not be sufficiently able to rely on neurocognitive skills specific to speech recognition to enhance processing of spectrally degraded input in adverse conditions involving high talker variability.
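The study's key analysis is a discriminant analysis asking which of the spectral-resolution and neurocognitive measures separate HiPRESTO from LoPRESTO users. Below is a minimal sketch of that style of analysis with scikit-learn; the data file, column names, and exact predictor set are assumptions for illustration, not the authors' pipeline.

```python
# Sketch of a linear discriminant analysis separating HiPRESTO vs. LoPRESTO CI users.
# Column names (smrt, digit_span, stroop, towre, ravens, group) are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ci_extreme_groups.csv")            # hypothetical data file
predictors = ["smrt", "digit_span", "stroop", "towre", "ravens"]
X = StandardScaler().fit_transform(df[predictors])   # z-score so coefficients are comparable
y = df["group"]                                       # "HiPRESTO" or "LoPRESTO"

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Standardized coefficients indicate each measure's contribution to the discriminant function.
for name, coef in zip(predictors, lda.coef_[0]):
    print(f"{name}: {coef:+.2f}")
print("classification accuracy:", lda.score(X, y))
```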
... More recently, Moberly et al. [2016b, 2017] examined several neurocognitive skills in CI users using visual tasks and found that inhibition-concentration and modality-specific (listening span rather than reading span) WM capacity played an important role in speech recognition abilities in CI users, but not in normal-hearing (NH) controls listening to sentences in speech-shaped noise. In addition to the aforementioned studies, performance on a nonauditory test of nonverbal reasoning, the visual Raven's Progressive Matrices (RPM), was recently found to correlate with sentence recognition performance in quiet in CI users, as well as in NH controls listening to 8-channel noise-vocoded speech [Mattingly et al., 2018]. The RPM is a classic nonverbal reasoning task originally developed in the 1930s as a way to assess generalized intelligence [Raven, 1938; Carpenter et al., 1990]. ...
... Additionally, it has a visual and nonverbal format, making it ideal for populations who exhibit difficulty processing auditory input and language, such as clinical populations with hearing loss [Carpenter et al., 1990]. In the recent study by Mattingly et al. [2018], it was predicted that fluid intelligence would be important with regard to the ability to extract a meaningful form from degraded auditory information. Indeed, scores from the RPM were found to be a moderate cognitive correlate of sentence recognition scores (r values between 0.35 and 0.57) in both CI listeners and NH participants, and performance on the RPM was also found to partially mediate the effects of advancing age on poorer speech recognition outcomes in older adult CI users [Moberly et al., 2018b]. ...
... Indeed, scores from the RPM were found to be a moderate cognitive correlate of sentence recognition scores (r values between 0.35 and 0.57) in both CI listeners and NH participants, and performance on the RPM was also found to partially mediate the effects of advancing age on poorer speech recognition outcomes in older adult CI users [Moberly et al., 2018b]. Analyses in the Mattingly et al. [2018] study were limited to examining the relationship between RPM performance and sentence recognition abilities in CI users and NH controls listening to noise-vocoded speech. However, it is known that most tasks of fluid intelligence tap into more basic underlying neurocognitive functions, like WM capacity, information processing speed, and inhibition-concentration abilities [Carpenter et al., 1990; Dillon et al., 1981; Salthouse, 1993]. ...
Article
Background: Previous research has demonstrated an association of scores on a visual test of nonverbal reasoning, Raven's Progressive Matrices (RPM), with scores on open-set sentence recognition in quiet for adult cochlear implant (CI) users as well as for adults with normal hearing (NH) listening to noise-vocoded sentence materials. Moreover, in that study, CI users demonstrated poorer nonverbal reasoning when compared with NH peers. However, it remains unclear what underlying neurocognitive processes contributed to the association of nonverbal reasoning scores with sentence recognition, and to the poorer scores demonstrated by CI users. Objectives: Three hypotheses were tested: (1) nonverbal reasoning abilities of adult CI users and normal-hearing (NH) age-matched peers would be predicted by performance on more basic neurocognitive measures of working memory capacity, information-processing speed, inhibitory control, and concentration; (2) nonverbal reasoning would mediate the effects of more basic neurocognitive functions on sentence recognition in both groups; and (3) group differences in more basic neurocognitive functions would explain the group differences previously demonstrated in nonverbal reasoning. Method: Eighty-three participants (40 CI and 43 NH) underwent testing of sentence recognition using two sets of sentence materials: sentences produced by a single male talker (Harvard sentences) and high-variability sentences produced by multiple talkers (Perceptually Robust English Sentence Test Open-set, PRESTO). Participants also completed testing of nonverbal reasoning using a visual computerized RPM test, and additional neurocognitive assessments were collected using a visual Digit Span test and a Stroop Color-Word task. Multivariate regression analyses were performed to test our hypotheses while treating age as a covariate. Results: In the CI group, information processing speed on the Stroop task predicted RPM performance, and RPM scores mediated the effects of information processing speed on sentence recognition abilities for both Harvard and PRESTO sentences. In contrast, for the NH group, Stroop inhibitory control predicted RPM performance, and a trend was seen towards RPM scores mediating the effects of inhibitory control on sentence recognition, but only for PRESTO sentences. Poorer RPM performance in CI users than NH controls could be partially attributed to slower information processing speed. Conclusions: Neurocognitive functions contributed differentially to nonverbal reasoning performance in CI users as compared with NH peers, and nonverbal reasoning appeared to partially mediate the effects of these different neurocognitive functions on sentence recognition in both groups, at least for PRESTO sentences. Slower information processing speed accounted for poorer nonverbal reasoning scores in CI users. Thus, it may be that prolonged auditory deprivation contributes to cognitive decline through slower information processing.
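The central claim here is a mediation effect (e.g., Stroop processing speed → RPM → sentence recognition, with age as a covariate). A hedged sketch of a regression-based mediation analysis with a bootstrapped indirect effect is below; the file and variable names are hypothetical, and this is a simplified illustration rather than the authors' exact multivariate model.

```python
# Sketch of a simple mediation analysis: does RPM mediate the effect of Stroop
# processing speed on sentence recognition? Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ci_neurocog.csv")   # hypothetical file: stroop_speed, rpm, sentence_score, age

# Path a: predictor -> mediator; paths b and c': mediator + predictor -> outcome (age as covariate)
a = smf.ols("rpm ~ stroop_speed + age", data=df).fit().params["stroop_speed"]
model_b = smf.ols("sentence_score ~ rpm + stroop_speed + age", data=df).fit()
b, c_prime = model_b.params["rpm"], model_b.params["stroop_speed"]
print("indirect effect a*b =", a * b, " direct effect c' =", c_prime)

# Percentile bootstrap for the indirect effect
rng = np.random.default_rng(1)
boot = []
for _ in range(2000):
    s = df.sample(len(df), replace=True, random_state=int(rng.integers(1_000_000_000)))
    a_s = smf.ols("rpm ~ stroop_speed + age", data=s).fit().params["stroop_speed"]
    b_s = smf.ols("sentence_score ~ rpm + stroop_speed + age", data=s).fit().params["rpm"]
    boot.append(a_s * b_s)
print("95% CI for indirect effect:", np.percentile(boot, [2.5, 97.5]))
```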
... Holden et al. (2013) found a correlation between a composite cognitive score (including verbal memory, vocabulary, similarities, and matrix reasoning) and word recognition outcomes in adult CI users; however, it was unclear in that study which component of the cognitive measure drove this relationship. Finally, a recent study by Mattingly, Castellanos, and Moberly (2018) demonstrated a relation between scores on the Raven's Matrices task and recognition scores for different meaningful sentence types (rs = .35-.47) in adult CI users. ...
... Similarly, nonverbal reasoning independently predicted Anomalous Sentence recognition. This finding is consistent with findings by Mattingly et al. (2018), in which Raven's scores were found to predict recognition scores for high-talker-variability Perceptually Robust English Sentence Test Open-Set sentences by adult CI users. The current study expands those findings by identifying an association between nonverbal reasoning and Anomalous Sentence recognition. ...
Article
Purpose Speech recognition relies upon a listener's successful pairing of the acoustic–phonetic details from the bottom-up input with top-down linguistic processing of the incoming speech stream. When the speech is spectrally degraded, such as through a cochlear implant (CI), this role of top-down processing is poorly understood. This study explored the interactions of top-down processing, specifically the use of semantic context during sentence recognition, and the relative contributions of different neurocognitive functions during speech recognition in adult CI users. Method Data from 41 experienced adult CI users were collected and used in analyses. Participants were tested for recognition and immediate repetition of speech materials in the clear. They were asked to repeat 2 sets of sentence materials, 1 that was semantically meaningful and 1 that was syntactically appropriate but semantically anomalous. Participants also were tested on 4 visual measures of neurocognitive functioning to assess working memory capacity (Digit Span; Wechsler, 2004), speed of lexical access (Test of Word Reading Efficiency; Torgeson, Wagner, & Rashotte, 1999), inhibitory control (Stroop; Stroop, 1935), and nonverbal fluid reasoning (Raven's Progressive Matrices; Raven, 2000). Results Individual listeners' inhibitory control predicted recognition of meaningful sentences when controlling for performance on anomalous sentences, our proxy for the quality of the bottom-up input. Additionally, speed of lexical access and nonverbal reasoning predicted recognition of anomalous sentences. Conclusions Findings from this study identified inhibitory control as a potential mechanism at work when listeners make use of semantic context during sentence recognition. Moreover, speed of lexical access and nonverbal reasoning were associated with recognition of sentences that lacked semantic context. These results motivate the development of improved comprehensive rehabilitative approaches for adult patients with CIs to optimize use of top-down processing and underlying core neurocognitive functions.
... Cognitive functions also exert a top-down influence on auditory perception and on speech processing itself, so that part of speech understanding is shaped by these functions [66,67]. Even in postlingually deafened patients, cognitive abilities explain part of the interindividual variability in cochlear implantation outcomes, which is why some authors propose testing these functions clinically [68]. ...
Article
Full-text available
Summary The brain continues to develop after birth. This extensive development is impaired by hearing disorders in childhood. The development of cortical synapses in the auditory system is then delayed, and their subsequent pruning is increased. Recent work shows that the synapses most affected are those responsible for corticocortical processing of stimuli. This manifests as deficits in auditory processing. Other sensory systems are indirectly affected, above all in multisensory cooperation. Because of the extensive interconnection of the auditory system with the rest of the brain, cognitive functions are altered by hearing disorders in ways that differ between individuals. These effects call for an individualized approach to the treatment of deafness.
... So far, only very few studies have focused on the subgroup of extremely poor- or high-performing CI users, analyzing some nonauditory skills [Hillyer et al., 2019; Tamati et al., 2020]. Most studies correlated speech perception and cognitive or linguistic parameters in CI users in general or in comparison to normal-hearing subjects [Heydebrand et al., 2007; Holden et al., 2013; Cosetti et al., 2016; Moberly et al., 2016b; Moberly et al., 2017; Mattingly et al., 2018; Zhan et al., 2020]. Due to the small number of poor-performing subjects, however, individual strategies in speech processing might be overlooked in studies dealing with the total CI population, and a focus on extreme cases might better reveal even slight individual differences [Başkent et al., 2016; Nagels et al., 2019]. ...
Article
Full-text available
Introduction: Several factors are known to influence speech perception in cochlear implant (CI) users. To date, the underlying mechanisms have not yet been fully clarified. Although many CI users achieve a high level of speech perception, a small percentage of patients do not benefit from the CI, or benefit only slightly (poor performers, PP). In a previous study, PP showed significantly poorer results on nonauditory-based cognitive and linguistic tests than CI users with a very high level of speech understanding (star performers, SP). We now investigate whether PP also differ from CI users with average performance (average performers, AP) in cognitive and linguistic performance. Methods: Seventeen adult postlingually deafened CI users with speech perception scores in quiet of 55 (9.32)% (AP) on the German Freiburg monosyllabic speech test at 65 dB underwent neurocognitive (attention, working memory, short- and long-term memory, verbal fluency, inhibition) and linguistic testing (word retrieval, lexical decision, phonological input lexicon). The results were compared to the performance of 15 PP (speech perception score of 15 [11.80]%) and 19 SP (speech perception score of 80 [4.85]%). For statistical analysis, U tests and discriminant analysis were performed. Results: Significant differences between PP and AP were observed on linguistic tests: Rapid Automatized Naming (RAN: p = 0.0026), lexical decision (LexDec: p = 0.026), phonological input lexicon (LEMO: p = 0.0085), and understanding of incomplete words (TRT: p = 0.0024). AP also had significantly better neurocognitive results than PP in the domains of attention (M3: p = 0.009) and working memory (OSPAN: p = 0.041; RST: p = 0.015), but not in delayed recall (p = 0.22), verbal fluency (p = 0.084), or inhibition (Flanker: p = 0.35). In contrast, no differences were found between AP and SP. Based on the TRT and the RAN, AP and PP could be separated with 100% accuracy. Discussion: The results indicate that PP constitute a distinct entity of CI users that differs even in nonauditory abilities from CI users with average speech perception, especially with regard to rapid word retrieval, either due to reduced phonological abilities or limited storage. Further studies should investigate whether improved word retrieval through increased phonological and semantic training results in better speech perception in these CI users.
... Less is known about the influence of cognitive factors on CI-mediated speech recognition. However, recent work has shown associations of speech recognition in CI users and in NH listeners presented with spectrally degraded (i.e., noise-vocoded) speech with WMC (Kaandorp et al., 2017), non-verbal reasoning (Mattingly et al., 2018; Moberly et al., 2018), inhibition control (Zhan et al., 2020), and processing speed as well as executive functions (Rosemann et al., 2017; Völter et al., 2021). ...
Article
Full-text available
The outcome of cochlear implantation is typically assessed by speech recognition tests in quiet and in noise. Many cochlear implant recipients reveal satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech might be increased. In this respect, measures of listening effort could give important extra information regarding the outcome of cochlear implantation. In order to shed light on this topic and to gain knowledge for clinical applications, we compared speech recognition and listening effort in cochlear implant (CI) recipients and age-matched normal-hearing (NH) listeners while considering potential influential factors, such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at a similar performance level. Therefore, a subjective listening effort test (adaptive scaling, “ACALES”) as well as an objective test (dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an approximately 4 dB better signal-to-noise ratio (SNR) to reach the same performance level of 50% as NH listeners, and an even 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50 and 80%, respectively, CI users and normal-hearing listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimation. Outcomes for subjective and objective listening effort were not correlated with each other, nor with the age or cognitive abilities of the listeners. This study did not give evidence that CI users and NH listeners differ in terms of listening effort – at least when the same performance level is considered. In contrast, both listener groups showed large inter-individual differences in effort determined with the subjective scaling and the objective dual-task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
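Comparing listening effort "at the same performance level" requires estimating each listener's SNR at 50% and 80% intelligibility from a psychometric function. The sketch below fits a logistic psychometric function to per-SNR scores and reads off those SNRs; the data points and parameter values are invented for illustration and do not come from the study.

```python
# Sketch: fit a logistic psychometric function to speech scores vs. SNR and
# estimate the SNRs giving 50% and 80% intelligibility. Data values are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope):
    """Proportion correct as a function of SNR; srt = SNR at 50% correct."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

snr_db = np.array([-9.0, -6.0, -3.0, 0.0, 3.0, 6.0])        # tested SNRs (illustrative)
p_correct = np.array([0.05, 0.18, 0.42, 0.66, 0.85, 0.95])   # observed proportions (illustrative)

(srt, slope), _ = curve_fit(logistic, snr_db, p_correct, p0=[0.0, 0.5])

def snr_at(p, srt, slope):
    return srt + np.log(p / (1.0 - p)) / slope   # inverse of the logistic function

print(f"SNR at 50%: {snr_at(0.5, srt, slope):.1f} dB")
print(f"SNR at 80%: {snr_at(0.8, srt, slope):.1f} dB")
```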
... Similar effects of nonverbal IQ on word learning have been observed in prior studies utilizing paired-associates learning paradigms (e.g., de Jong, Seveke, & van Veen, 2000; Krishnan, Watkins, & Bishop, 2017), as well as on pattern-based learning of orthographic wordforms (Hung, 2012; Ricketts, Bishop, Pimperton, & Nation, 2011) and grammatical categories (Brooks et al., 2006, 2017; Kempe et al., 2010). There is additionally evidence that nonverbal reasoning among children with language impairments is a strong predictor of language development (Botting, 2005; Stevens et al., 2000; Stothard, Snowling, Bishop, Chipchase, & Kaplan, 1998; Tomblin, Freese, & Records, 1992) and that better nonverbal reasoning among adult cochlear implant users is associated with superior word and sentence recognition (e.g., Knutson et al., 1991; Mattingly, Castellanos, & Moberly, 2018; Moberly & Reed, 2019). Though the dynamic relationship between language abilities and nonverbal IQ is not yet fully understood, it has been proposed that individuals with higher nonverbal IQ may be better able to compensate for language difficulties (e.g., Snowling, Bishop, & Stothard, 2000; Stanovich, 1993; Stevens et al., 2000). ...
Article
Many languages use the same letters to represent different sounds (e.g., the letter P represents /p/ in English but /r/ in Russian). We report two experiments that examine how native language experience impacts the acquisition and processing of words with conflicting letter-to-sound mappings. Experiment 1 revealed that individual differences in nonverbal intelligence predicted word learning and that novel words with conflicting orthography-to-phonology mappings were harder to learn when their spelling was more typical of the native language than less typical (due to increased competition from the native language). Notably, Experiment 2 used eye tracking to reveal, for the first time, that hearing non-native spoken words activates native language orthography and both native and non-native letter-to-sound mappings. These findings evince high interactivity in the language system, illustrate the role of orthography in phonological learning and processing, and demonstrate that experience with written form changes the linguistic mind.
... Hearing loss exacerbates declines in overall cognition (Deal et al., 2015; Loughrey et al., 2018; Yuan et al., 2018) and processing ability (Rönnberg et al., 2011) with age, which could compound the difficulty in using working memory to support speech recognition. In support of this idea, tests of fluid intelligence also predict speech outcomes in individuals with cochlear implants (Mattingly et al., 2018; Moberly & Reed, 2019), and fluid intelligence partially mediates the relationship between aging and speech recognition independent of auditory sensitivity. Fluid intelligence is a closely related construct to processing in working memory (Wilhelm et al., 2013), so additional work is needed to distinguish how the role of these mechanisms in speech recognition changes with both aging and hearing loss. ...
Article
Full-text available
Purpose In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
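The key analysis above tests whether the digit span–speech recognition correlation survives after controlling for spectral and temporal resolution. One common way to compute such a partial correlation is to residualize both variables on the control measures and correlate the residuals, as sketched below; the file and column names are hypothetical.

```python
# Sketch: partial correlation between forward digit span and sentence recognition,
# controlling for spectral and temporal modulation detection. Column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def residualize(y, X):
    """Residuals of y after ordinary least-squares regression on X (with intercept)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

df = pd.read_csv("ci_span_speech.csv")               # hypothetical data file
controls = df[["spectral_mod", "temporal_mod"]].to_numpy()

r_raw, p_raw = pearsonr(df["digit_span"], df["presto_score"])
r_partial, p_partial = pearsonr(residualize(df["digit_span"].to_numpy(), controls),
                                residualize(df["presto_score"].to_numpy(), controls))
print(f"zero-order r = {r_raw:.2f} (p = {p_raw:.3f})")
print(f"partial r    = {r_partial:.2f} (p = {p_partial:.3f})")
```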
... Moreover, many studies have now linked hearing loss to cognitive decline (e.g., Jayakody et al., 2018; F. R. Lin et al., 2011), and some have demonstrated improvement in cognitive performance following cochlear implantation (Cosetti et al., 2016; Jayakody et al., 2017; Mosnier et al., 2015; Völter et al., 2018), indicating that adult CI patients are likely to exhibit some cognitive-linguistic changes or impairments as a result of their hearing loss. Other studies have additionally demonstrated the contributions of cognitive-linguistic skills to both speech recognition performance (e.g., Mattingly et al., 2018; Moberly et al., 2016) and CI-related QOL (Moberly, Harris, et al., 2018). Given these collective findings, an understanding of cognitive-linguistic skills can assist AR clinicians in predicting and explaining outcomes; counseling for expectations; and appropriately choosing perceptual training material based on its linguistic content, cognitive processing demands, and an individual's associated strengths and weaknesses. ...
Article
Full-text available
Purpose This clinical focus article provides an overview of clinical models currently being used for the provision of comprehensive aural rehabilitation (AR) for adults with cochlear implants (CIs) in the United States. Method Clinical AR models utilized by hearing health care providers from nine clinics across the United States were discussed with regard to interprofessional AR practice patterns in the adult CI population. The clinical models were presented in the context of existing knowledge and gaps in the literature. Future directions were proposed for optimizing the provision of AR for the adult CI patient population. Findings/Conclusions There is a general agreement that AR is an integral part of hearing health care for adults with CIs. While the provision of AR is feasible in different clinical practice settings, service delivery models are variable across hearing health care professionals and settings. AR may include interprofessional collaboration among surgeons, audiologists, and speech-language pathologists with varying roles based on the characteristics of a particular setting. Despite various existing barriers, the clinical practice patterns identified here provide a starting point toward a more standard approach to comprehensive AR for adults with CIs.
... Working memory (Lyxell et al., 1998; Tao et al., 2014) as well as inhibitory control (Moberly et al., 2016b), verbal learning and memory, and processing speed (Tinnemore et al., 2018) have been linked to individual differences in speech recognition among adult CI users. In addition, although a strong relation has not been established, nonverbal reasoning skills have recently been found to be associated with individual performance among postlingually deafened adult CI users, independently of age (Mattingly et al., 2018). ...
Article
Objective: Hearing loss has a detrimental impact on cognitive function. However, there is a lack of consensus on the impact of cochlear implants on cognition. This review systematically evaluates whether cochlear implants in adult patients lead to cognitive improvements and investigates the relations of cognition with speech recognition outcomes. Data sources: A literature review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies evaluating cognition and cochlear implant outcomes in postlingual, adult patients from January 1996 to December 2021 were included. Of 2510 total references, 52 studies were included in qualitative analysis and 11 in meta-analyses. Review methods: Proportions were extracted from studies of (1) the significant impacts of cochlear implantation on 6 cognitive domains and (2) associations between cognition and speech recognition outcomes. Meta-analyses were performed using random effects models on mean differences between pre- and postoperative performance on 4 cognitive assessments. Results: Only half of the outcomes reported suggested cochlear implantation had a significant impact on cognition (50.8%), with the highest proportion in assessments of memory & learning and inhibition-concentration. Meta-analyses revealed significant improvements in global cognition and inhibition-concentration. Finally, 40.4% of associations between cognition and speech recognition outcomes were significant. Conclusion: Findings relating to cochlear implantation and cognition vary depending on the cognitive domain assessed and the study goal. Nonetheless, assessments of memory & learning, global cognition, and inhibition-concentration may represent tools to assess cognitive benefit after implantation and help explain variability in speech recognition outcomes. Enhanced selectivity in assessments of cognition is needed for clinical applicability.
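The meta-analyses above pool pre- to postoperative mean differences with random-effects models. Below is a small DerSimonian-Laird random-effects sketch in Python; the per-study effect sizes and variances are placeholders, not values extracted from the review.

```python
# Sketch of a DerSimonian-Laird random-effects meta-analysis of pre/post mean differences.
# The effect sizes and sampling variances below are placeholders, not values from the review.
import numpy as np

effects = np.array([1.2, 0.8, 1.5, 0.3, 0.9])          # per-study mean differences (placeholder)
variances = np.array([0.20, 0.15, 0.30, 0.10, 0.25])   # per-study sampling variances (placeholder)

w_fixed = 1.0 / variances
fixed_mean = np.sum(w_fixed * effects) / np.sum(w_fixed)
q = np.sum(w_fixed * (effects - fixed_mean) ** 2)       # Cochran's Q heterogeneity statistic
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                           # between-study variance estimate

w_random = 1.0 / (variances + tau2)
pooled = np.sum(w_random * effects) / np.sum(w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled mean difference = {pooled:.2f}, 95% CI [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
print(f"tau^2 = {tau2:.3f}, Q = {q:.2f}")
```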
Article
Importance: Many cochlear implant centers screen patients for cognitive impairment as part of the evaluation process, but the utility of these scores in predicting cochlear implant outcomes is unknown. Objective: To determine whether there is an association between cognitive impairment screening scores and cochlear implant outcomes. Design, setting, and participants: Retrospective case series of adult cochlear implant recipients who underwent preoperative cognitive impairment screening with the Montreal Cognitive Assessment (MoCA) from 2018 to 2020 with 1-year follow-up at a single tertiary cochlear implant center. Data analysis was performed on data from January 2018 through December 2021. Exposures: Cochlear implantation. Main outcomes and measures: Preoperative MoCA scores and mean (SD) improvement (aided preoperative to 12-month postoperative) in Consonant-Nucleus-Consonant phonemes (CNCp) and words (CNCw), AzBio sentences in quiet (AzBio Quiet), and Cochlear Implant Quality of Life-35 (CIQOL-35) Profile domain and global scores. Results: A total of 52 patients were included, 27 (52%) of whom were male and 46 (88%) were White; mean (SD) age at implantation was 68.2 (13.3) years. Twenty-three (44%) had MoCA scores suggesting mild and 1 (2%) had scores suggesting moderate cognitive impairment. None had been previously diagnosed with cognitive impairment. There were small to medium effects of the association between 12-month postoperative improvement in speech recognition measures and screening positive or not for cognitive impairment (CNCw mean [SD]: 48.4 [21.9] vs 38.5 [26.6] [d = -0.43 (95% CI, -1.02 to 0.16)]; AzBio Quiet mean [SD]: 47.5 [34.3] vs 44.7 [33.1] [d = -0.08 (95% CI, -0.64 to 0.47)]). Similarly, small to large effects of the associations between 12-month postoperative change in CIQOL-35 scores and screening positive or not for cognitive impairment were found (global: d = 0.32 [95% CI, -0.59 to 1.23]; communication: d = 0.62 [95% CI, -0.31 to 1.54]; emotional: d = 0.26 [95% CI, -0.66 to 1.16]; entertainment: d = -0.005 [95% CI, -0.91 to 0.9]; environmental: d = -0.92 [95% CI, -1.86 to 0.46]; listening effort: d = -0.79 [95% CI, -1.65 to 0.22]; social: d = -0.51 [95% CI, -1.43 to 0.42]). Conclusions and relevance: In this case series, screening scores were not associated with the degree of improvement of speech recognition or patient-reported outcome measures after cochlear implantation. Given the prevalence of screening positive for cognitive impairment before cochlear implantation, preoperative screening can be useful for early identification of potential cognitive decline. These findings support that screening scores may have a limited role in preoperative counseling of outcomes and should not be used to limit candidacy.
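The comparison above rests on Cohen's d for the difference in 12-month improvement between listeners who did and did not screen positive on the MoCA. The sketch below computes d with a normal-approximation confidence interval. The means and SDs plugged in are the CNC word values reported in the abstract; the group sizes (24 vs. 28) are inferred from the reported counts, and which group is which in the abstract's "vs" is not explicit, so the assignment here is an assumption.

```python
# Sketch: Cohen's d (pooled SD) with an approximate 95% CI for the difference in 12-month
# CNC word improvement between MoCA screen-positive and screen-negative groups.
# Means/SDs are from the abstract; group sizes (24 vs. 28) and group assignment are assumptions.
import numpy as np

def cohens_d(m1, s1, n1, m2, s2, n2):
    sd_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approximate SE of d
    return d, (d - 1.96 * se, d + 1.96 * se)

# Assumed: screen-positive improvement 38.5 (26.6), n=24; screen-negative 48.4 (21.9), n=28
d, ci = cohens_d(38.5, 26.6, 24, 48.4, 21.9, 28)
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```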
Article
Purpose When listening to speech under adverse conditions, older adults, even with “age-normal” hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using “top-down” cognitive–linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. Method Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. Results The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. Conclusions Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
Article
Hypotheses: 1) Scores of reading efficiency (the Test of Word Reading Efficiency, second edition) obtained in adults before cochlear implant surgery will be predictive of speech recognition outcomes 6 months after surgery; and 2) Cochlear implantation will lead to improvements in language processing as measured through reading efficiency from preimplantation to postimplantation. Background: Adult cochlear implant (CI) users display remarkable variability in speech recognition outcomes. "Top-down" processing-the use of cognitive resources to make sense of degraded speech-contributes to speech recognition abilities in CI users. One area that has received little attention is the efficiency of lexical and phonological processing. In this study, a visual measure of word and nonword reading efficiency-relying on lexical and phonological processing, respectively-was investigated for its ability to predict CI speech recognition outcomes, as well as to identify any improvements after implantation. Methods: Twenty-four postlingually deaf adult CI candidates were tested on the Test of Word Reading Efficiency, Second Edition preoperatively and again 6 months post-CI. Six-month post-CI speech recognition measures were also assessed across a battery of word and sentence recognition. Results: Preoperative nonword reading scores were moderately predictive of sentence recognition outcomes, but real word reading scores were not; word recognition scores were not predicted by either. No 6-month post-CI improvement was demonstrated in either word or nonword reading efficiency. Conclusion: Phonological processing as measured by the Test of Word Reading Efficiency, Second Edition nonword reading predicts to a moderate degree 6-month sentence recognition outcomes in adult CI users. Reading efficiency did not improve after implantation, although this could be because of the relatively short duration of CI use.
Article
Introduction Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how discrimination ability relates to speech recognition outcomes in adult CI users are still unknown. The current study investigated talker discrimination ability in adult CI users, and the contributions of age, auditory sensitivity, and neurocognitive skills. In addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored. Methods Fourteen post-lingually deaf adult CI users (3 female, 11 male) with ≥1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated if the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition. Results CI users showed poor same-gender talker discrimination, but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination. Same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy. Conclusion Adult CI users demonstrate overall poor talker discrimination ability. Individual differences in mixed-gender discrimination ability were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
Article
Objective Post-implant rehabilitation is limited for adult cochlear implant (CI) recipients. The objective of this research was to capture the perspectives of CI users and their coaches regarding their experiences with auditory-verbal intervention as an example of post-implant rehabilitation and their views on perceived benefits and challenges related to the intervention. Design This qualitative study involved semi-structured focus group interviews with adult CI users and their coaches who accompanied them in a 24-week auditory-verbal intervention program. Study sample A total of 17 participants (eight CI users and nine coaches) contributed to the interviews. Results Three key topic areas emerged from the interviews capturing CI users’ and coaches’ experiences related to the intervention program: (1) benefits of the intervention, (2) factors affecting experiences, and (3) challenges and barriers. Benefits included increased confidence in hearing, communication, social participation, and new knowledge about technology and hearing. Factors affecting the experience were participants’ motivation and the therapist’s skills. The primary challenge was the time commitment for weekly therapy. Conclusions Both CI users and coaches perceived a focussed auditory-verbal intervention to be beneficial in improving speech understanding, confidence in using hearing, social interaction, and knowledge about technology. Participants recommended reducing the intensity of intervention to facilitate participation.
Article
Hypotheses: Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability. However, the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of bottom-up input. Background: Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills relate to outcomes, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning. Methods: Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" groups, based on SMRT scores. Separate correlation analyses were performed for each subgroup between a composite score of cognitive-linguistic processing and speech recognition. Results: Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). Conclusions: Top-down processes contribute differentially to speech recognition in CI users based on the quality of bottom-up input. Findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
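The analysis above splits CI users into low-, intermediate-, and high-SMRT groups and correlates a top-down composite with speech recognition within each subgroup. A minimal sketch of that subgroup correlation analysis follows, assuming hypothetical column names and a tertile split; the authors' exact grouping procedure may differ.

```python
# Sketch: Spearman correlations between a cognitive-linguistic composite and sentence
# recognition, computed separately within SMRT-defined subgroups. Column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ci_smrt_cognition.csv")    # hypothetical data file

# Split into low / intermediate / high spectro-temporal resolution groups by SMRT tertiles
df["smrt_group"] = pd.qcut(df["smrt"], q=3, labels=["low", "intermediate", "high"])

for group, sub in df.groupby("smrt_group", observed=True):
    rho, p = spearmanr(sub["topdown_composite"], sub["sentence_score"])
    print(f"{group}-SMRT group (n={len(sub)}): rho = {rho:.2f}, p = {p:.3f}")
```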
Article
Introduction: Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker ("talker adaptation"), particularly for lexically hard words, (2) individual performance is determined by auditory sensitivity and neurocognitive skills, and (3) individual performance relates to real-world functioning. Methods: Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically "easy" and "hard." Recognition accuracy was assessed "early" and "late" (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). Results: CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. Conclusion: Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
Article
Objectives/Hypothesis Speech recognition with a cochlear implant (CI) tends to be better for younger adults than older adults. However, older adults may take longer to reach asymptotic performance than younger adults. The present study aimed to characterize speech recognition as a function of age at implantation and listening experience for adult CI users. Study Design Retrospective review. Methods A retrospective review identified 352 adult CI recipients (387 ears) with at least 5 years of device listening experience. Speech recognition, as measured with consonant-nucleus-consonant (CNC) words in quiet and AzBio sentences in a 10-talker noise masker (10 dB signal-to-noise ratio), was reviewed at 1, 5, and 10 years postactivation. Results Speech recognition was better in younger listeners, and performance was stable or continued to improve through 10 years of CI listening experience. There was no indication of differences in acclimatization as a function of age at implantation. For the better performing CI recipients, an effect of age at implantation was more apparent for sentence recognition in noise than for word recognition in quiet. Conclusions Adult CI recipients across the age range examined here experience speech recognition benefit with a CI. However, older adults perform more poorly than young adults for speech recognition in quiet and noise, with similar age effects through 5 to 10 years of listening experience. Level of Evidence 3 Laryngoscope, 2021
Article
Objectives: This study aimed to investigate the effects of aging and duration of deafness on the sensitivity of the auditory nerve (AN) to amplitude modulation (AM) cues delivered using trains of biphasic pulses in adult cochlear implant (CI) users. Design: Twenty-one postlingually deaf adult CI users participated in this study. All study participants used a Cochlear Nucleus device with a full electrode array insertion in the test ear. The stimulus was a 200-ms pulse train with a pulse rate of 2000 pulses per second. This carrier pulse train was sinusoidally amplitude modulated at four modulation rates (20, 40, 100, 200 Hz). The peak amplitude of the modulated pulse train was the maximum comfortable level (i.e., C level) measured for the carrier pulse train. The electrically evoked compound action potentials (eCAPs) to each of the 20 pulses selected over the last two AM cycles were measured. In addition, eCAPs to single pulses were measured with probe levels corresponding to the levels of the 20 selected pulses from each AM pulse train. Seven electrodes across the array were evaluated in 16 subjects (i.e., electrodes 3 or 4, 6, 9, 12, 15, 18, and 21). For the remaining five subjects, 4 to 5 electrodes were tested due to impedance issues or time constraints. The modulated response amplitude ratio (MRAR) was calculated as the ratio of the difference between the maximum and the minimum eCAP amplitude measured for the AM pulse train to that measured for the single pulse, and served as the dependent variable. Age at time of testing and duration of deafness, measured/defined using three criteria, served as the independent variables. Linear mixed models were used to assess the effects of age at testing and duration of deafness on the MRAR. Results: Age at testing had a strong, negative effect on the MRAR. For each subject, the duration of deafness varied substantially depending on how it was defined/measured, which demonstrates the difficulty of accurately measuring the duration of deafness in adult CI users. There was no clear or reliable trend showing a relationship between the MRAR measured at any AM rate and duration of deafness defined by any criterion. After controlling for the effect of age at testing, MRARs measured at 200 Hz and at basal electrode locations (i.e., electrodes 3 and 6) were larger than those measured at any other AM rate and at apical electrode locations (i.e., electrodes 18 and 21). Conclusions: The AN sensitivity to AM cues implemented in pulse-train stimulation declines significantly with advanced age. Accurately measuring duration of deafness in adult CI users is challenging, which, at least partially, might have accounted for the inconclusive findings on the relationship between the duration of deafness and the AN sensitivity to AM cues in this study.
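The dependent measure here is the modulated response amplitude ratio (MRAR): the eCAP amplitude range (maximum minus minimum) across the 20 AM-train pulses divided by the range across the level-matched single pulses, analyzed with linear mixed models across electrodes within subjects. Below is a hedged sketch of that computation and model; the file layout, column names, and fixed-effect structure are assumptions, not the authors' exact model.

```python
# Sketch: compute the modulated response amplitude ratio (MRAR) and fit a linear mixed
# model with subject as a random effect. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ecap_am.csv")   # hypothetical: one row per subject x electrode x AM rate

# MRAR = (max - min eCAP amplitude across the AM pulse train)
#        / (max - min eCAP amplitude across the level-matched single pulses)
df["mrar"] = ((df["am_ecap_max"] - df["am_ecap_min"])
              / (df["single_ecap_max"] - df["single_ecap_min"]))

# Linear mixed model: fixed effects of age, AM rate, and electrode; random intercept per subject
model = smf.mixedlm("mrar ~ age + C(am_rate) + C(electrode)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```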
Article
Full-text available
Objective(s) Enormous variability in speech recognition outcomes persists in adults who receive cochlear implants (CIs), which leads to a barrier to progress in predicting outcomes before surgery, explaining “poor” outcomes, and determining how to provide tailored rehabilitation therapy for individual CI users. The primary goal of my research program over the past 9 years has been to extend our understanding of the contributions of “top‐down” cognitive‐linguistic skills to CI outcomes in adults, acknowledging that “bottom‐up” sensory processes also contribute substantially. The main objective of this invited narrative review is to provide an overview of this work. A secondary objective is to provide career “guidance points” to budding surgeon‐scientists in Otolaryngology. Methods A narrative, chronological review covers work done by our group to explore top‐down and bottom‐up processing in adult CI outcomes. A set of ten guidance points is also provided to assist junior Otolaryngology surgeon‐scientists. Results Work in our lab has identified substantial contributions of cognitive skills (working memory, inhibition‐concentration, speed of lexical access, nonverbal reasoning, verbal learning and memory) as well as linguistic abilities (acoustic cue‐weighting, phonological sensitivity) to speech recognition outcomes in adults with CIs. These top‐down skills interact with the quality of the bottom‐up input. Conclusion Although progress has been made in understanding speech recognition variability in adult CI users, future work is needed to predict CI outcomes before surgery, to identify particular patients' strengths and weaknesses, and to tailor rehabilitation approaches for individual CI users. Level of Evidence 4
Article
Hypotheses: Adult cochlear implant (CI) outcomes depend on demographic, sensory, and cognitive factors. However, these factors have not been examined together comprehensively for relations to different outcome types, such as speech recognition versus quality of life (QOL). Three hypotheses were tested: 1) speech recognition will be explained most strongly by sensory factors, whereas QOL will be explained more strongly by cognitive factors. 2) Different speech recognition outcome domains (sentences versus words) and different QOL domains (physical versus social versus psychological functioning) will be explained differentially by demographic, sensory, and cognitive factors. 3) Including cognitive factors as predictors will provide more power to explain outcomes than demographic and sensory predictors alone. Background: A better understanding of the contributors to CI outcomes is needed to prognosticate outcomes before surgery, explain outcomes after surgery, and tailor rehabilitation efforts. Methods: Forty-one adult postlingual experienced CI users were assessed for sentence and word recognition, as well as hearing-related QOL, along with a broad collection of predictors. Partial least squares regression was used to identify factors that were most predictive of outcome measures. Results: Supporting our hypotheses, speech recognition abilities were most strongly dependent on sensory skills, while QOL outcomes required a combination of cognitive, sensory, and demographic predictors. The inclusion of cognitive measures increased the ability to explain outcomes, mainly for QOL. Conclusions: Explaining variability in adult CI outcomes requires a broad assessment approach. Identifying the most important predictors depends on the particular outcome domain and even the particular measure of interest.
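As a rough sketch of how the partial least squares approach described above can be used to rank predictors of an outcome, the following Python fragment fits a PLS model and prints its coefficients. The predictor names, sample data, and two-component choice are placeholder assumptions of my own; this illustrates the technique, not the authors' analysis.

    # Illustrative sketch only: hypothetical predictors and outcome, not the study's data or code.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n = 41                                   # number of CI users reported in the abstract
    predictor_names = ["age", "duration_of_deafness", "spectral_resolution",
                       "working_memory", "nonverbal_reasoning"]   # hypothetical labels
    X = rng.normal(size=(n, len(predictor_names)))    # placeholder predictor matrix
    y = rng.normal(size=n)                            # placeholder outcome (e.g., sentence score)

    # PLSRegression centers and scales the variables by default (scale=True),
    # so the fitted coefficients are roughly comparable across predictors.
    pls = PLSRegression(n_components=2).fit(X, y)

    # Larger absolute coefficients indicate predictors that carry more weight
    # in explaining the outcome through the latent PLS components.
    for name, coef in zip(predictor_names, np.ravel(pls.coef_)):
        print(f"{name:22s} {coef:+.3f}")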
Article
Full-text available
Objectives Neurocognitive functions contribute to speech recognition in postlingual adults with cochlear implants (CIs). In particular, better verbal working memory (WM) on modality‐specific (auditory) WM tasks predicts better speech recognition. It remains unclear, however, whether this association can be attributed to basic underlying modality‐general neurocognitive functions, or whether it is solely a result of the degraded nature of auditory signals delivered by the CI. Three hypotheses were tested: 1) Both modality‐specific and modality‐general tasks of verbal WM would predict scores of sentence recognition in speech‐shaped noise; 2) Basic modality‐general neurocognitive functions of controlled fluency and inhibition‐concentration would predict both modality‐specific and modality‐general verbal WM; and 3) Scores on both tasks of verbal WM would mediate the effects of more basic neurocognitive functions on sentence recognition. Study Design Cross‐sectional study of 30 postlingual adults with CIs and thirty age‐matched normal‐hearing (NH) controls. Materials and Methods Participants were tested for sentence recognition in speech‐shaped noise, along with verbal WM using a modality‐general task (Reading Span) and an auditory modality‐specific task (Listening Span). Participants were also assessed for controlled fluency and inhibition‐concentration abilities. Results For CI users only, Listening Span scores predicted sentence recognition, and Listening Span scores mediated the effects of inhibition‐concentration on speech recognition. Scores on Reading Span were not related to sentence recognition for either group. Conclusion Inhibition‐concentration skills play an important role in CI users' sentence recognition skills, with effects mediated by modality‐specific verbal WM. Further studies will examine inhibition‐concentration and WM skills as novel targets for clinical intervention. Level of Evidence 4.
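The mediation result described above (inhibition-concentration acting on sentence recognition through modality-specific verbal working memory) follows the standard indirect-effect logic: path a (predictor to mediator) multiplied by path b (mediator to outcome, controlling for the predictor). A minimal Python sketch with invented data, purely to illustrate that computation and not the study's analysis, is shown below.

    # Minimal mediation sketch with hypothetical data (not the study's analysis).
    # Path a: predictor -> mediator; path b: mediator -> outcome controlling for
    # the predictor; the indirect (mediated) effect is a * b.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30
    inhibition = rng.normal(size=n)                            # X: inhibition-concentration
    listening_span = 0.6 * inhibition + rng.normal(size=n)     # M: auditory verbal working memory
    sentence_rec = 0.5 * listening_span + rng.normal(size=n)   # Y: sentence recognition

    # Path a: regress the mediator on the predictor (slope of a simple fit).
    a = np.polyfit(inhibition, listening_span, 1)[0]

    # Path b: regress the outcome on mediator and predictor together;
    # the least-squares solution gives the partial coefficient for the mediator.
    design = np.column_stack([np.ones(n), listening_span, inhibition])
    coefs, *_ = np.linalg.lstsq(design, sentence_rec, rcond=None)
    b = coefs[1]

    print(f"indirect (mediated) effect a*b = {a * b:.3f}")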
Article
Full-text available
Objective Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous clinical and research barrier to progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to determine whether non‐auditory neurocognitive skills could explain speech recognition variability exhibited by adult CI users. Study Design Thirty postlingually deafened adults with CIs and thirty age‐matched normal‐hearing (NH) controls were enrolled. Methods Participants were assessed for recognition of words in sentences in noise and several non‐auditory measures of neurocognitive function. These non‐auditory tasks assessed global intelligence (problem‐solving), controlled fluency, working memory, and inhibition‐concentration abilities. Results For CI users, faster response times during a non‐auditory task of inhibition‐concentration predicted better recognition of sentences in noise; however, similar effects were not evident for NH listeners. Conclusions Findings from this study suggest that inhibition‐concentration skills play a role in speech recognition for CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel target for intervention.
Article
Full-text available
Objectives/Hypothesis Aural rehabilitation is not standardized for adults after cochlear implantation. Most cochlear implant (CI) centers in the United States do not routinely enroll adult CI users in focused postoperative rehabilitation programs due to poor reimbursement and lack of data supporting (or refuting) the efficacy of any one specific approach. Consequently, patients generally assume a self‐driven approach toward rehabilitation. This exploratory pilot study examined rehabilitation strategies pursued by adults with CIs and associated these strategies with speech recognition and CI‐specific quality of life (QOL). Study Design Cross‐sectional study of 23 postlingually deafened adults with CIs. Methods Participants responded to an open‐ended questionnaire regarding rehabilitation strategies. A subset underwent in‐depth interviews. Thematic content analysis was applied to the questionnaires and interview transcripts. Participants also underwent word recognition testing and completed a CI‐related QOL measure. Participants were classified as having good or poor performance (upper or lower quartile for speech recognition) and high or low QOL (upper or lower quartile for QOL). Rehabilitation themes were compared and contrasted among groups. Results Five rehabilitation themes were identified: 1) Preimplant expectations of postoperative performance, 2) personal motivation, 3) social support, 4) specific rehabilitation strategies, and 5) patient‐perceived role of the audiologist. Patients with good speech recognition and high QOL tended to pursue more active rehabilitation and had greater social support. Patient expectations and motivation played significant roles in postoperative QOL. Conclusion Postoperative patient‐driven rehabilitation strategies are highly variable but appear to relate to outcomes. Larger‐scale extensions of this pilot study are needed.
Article
Full-text available
Conclusion: The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses. Objectives: To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. Methods: Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. Results: Both listeners showed remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. The alternative frequency-to-electrode map that was tested resulted in substantial increases in speech perception for one subject in the short insertion ear.
Article
Full-text available
For patients having residual hearing in one ear and a cochlear implant (CI) in the opposite ear, interaural place-pitch mismatches might be partly responsible for the large variability in individual benefit. Behavioral pitch-matching between the two ears has been suggested as a way to individualize the fitting of the frequency-to-electrode map but is rather tedious and unreliable. Here, an alternative method using two-formant vowels was developed and tested. The interaural spectral shift was inferred by comparing vowel spaces, measured by presenting the first formant (F1) to the nonimplanted ear and the second (F2) on either side. The method was first evaluated with eight normal-hearing listeners and vocoder simulations, before being tested with 11 CI users. Average vowel distributions across subjects showed a similar pattern when presenting F2 on either side, suggesting acclimatization to the frequency map. However, individual vowel spaces with F2 presented to the implant did not allow a reliable estimation of the interaural mismatch. These results suggest that interaural frequency-place mismatches can be derived from such vowel spaces. However, the method remains limited by difficulties in bimodal fusion of the two formants. © The Author(s) 2014.
Article
Full-text available
Previous studies have found a significant correlation between spectral-ripple discrimination and speech and music perception in cochlear implant (CI) users. This relationship could be of use to clinicians and scientists who are interested in using spectral-ripple stimuli in the assessment and habilitation of CI users. However, previous psychoacoustic tasks used to assess spectral discrimination are not suitable for all populations, and it would be beneficial to develop methods that could be used to test all age ranges, including pediatric implant users. Additionally, it is important to understand how ripple stimuli are processed in the central auditory system and how their neural representation contributes to behavioral performance. For this reason, we developed a single-interval, yes/no paradigm that could potentially be used both behaviorally and electrophysiologically to estimate spectral-ripple threshold. In experiment 1, behavioral thresholds obtained using the single-interval method were compared to thresholds obtained using a previously established three-alternative forced-choice method. A significant correlation was found (r = 0.84, p = 0.0002) in 14 adult CI users. The spectral-ripple threshold obtained using the new method also correlated with speech perception in quiet and noise. In experiment 2, the effect of the number of vocoder-processing channels on the behavioral and physiological threshold in normal-hearing listeners was determined. Behavioral thresholds, using the new single-interval method, as well as cortical P1-N1-P2 responses changed as a function of the number of channels. Better behavioral and physiological performance (i.e., better discrimination ability at higher ripple densities) was observed as more channels were added. In experiment 3, the relationship between behavioral and physiological data was examined. Amplitudes of the P1-N1-P2 "change" responses were significantly correlated with d' values from the single-interval behavioral procedure. Results suggest that the single-interval procedure with spectral-ripple phase inversion in ongoing stimuli is a valid approach for measuring behavioral or physiological spectral resolution.
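Sensitivity in a single-interval yes/no task such as the one described above is conventionally summarized with the signal-detection index d' = z(hit rate) - z(false-alarm rate). The short Python sketch below illustrates that computation with made-up trial counts; it is not the authors' scoring code.

    # d' from a single-interval yes/no task: z(hit rate) - z(false-alarm rate).
    # Illustrative only; the trial counts below are made up.
    from scipy.stats import norm

    def dprime(hits, misses, false_alarms, correct_rejections, correction=0.5):
        """d' with a simple correction to keep rates away from exactly 0 or 1."""
        hit_rate = (hits + correction) / (hits + misses + 2 * correction)
        fa_rate = (false_alarms + correction) / (false_alarms + correct_rejections + 2 * correction)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Example: 18 hits / 2 misses on ripple-inversion trials,
    # 5 false alarms / 15 correct rejections on standard trials.
    print(round(dprime(18, 2, 5, 15), 2))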
Article
Full-text available
The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.
Article
Full-text available
This study examined spoken word recognition in adults with cochlear implants (CIs) to determine the extent to which linguistic and cognitive abilities predict variability in speech-perception performance. Both a traditional consonant-vowel-consonant (CVC)-repetition measure and a gated-word recognition measure (F. Grosjean, 1996) were used. Stimuli in the gated-word-recognition task varied in neighborhood density. Adults with CIs repeated CVC words less accurately than did age-matched adults with normal hearing sensitivity (NH). In addition, adults with CIs required more acoustic information to recognize gated words than did adults with NH. Neighborhood density had a smaller influence on gated-word recognition by adults with CIs than on recognition by adults with NH. With the exception of 1 outlying participant, standardized, norm-referenced measures of cognitive and linguistic abilities were not correlated with word-recognition measures. Taken together, these results do not support the hypothesis that cognitive and linguistic abilities predict variability in speech-perception performance in a heterogeneous group of adults with CIs. Findings are discussed in light of the potential role of auditory perception in mediating relations among cognitive and linguistic skill and spoken word recognition.
Article
Full-text available
This study investigated whether cognitive measures obtained prior to cochlear implant surgery could predict improvements in spoken word recognition in adult cochlear implant recipients 6 months after activation. In addition to noncognitive factors identified by previous studies (i.e., younger age, shorter duration of hearing loss), the present results indicated that improvement in spoken word recognition was associated with higher verbal learning scores and better verbal working memory. Contrary to expectation, neither general cognitive ability nor processing speed was significantly correlated with outcome at 6 months. Multiple regression analyses revealed that a combination of verbal learning scores and lip-reading skill accounted for nearly 72% of the individual differences in improvement in spoken word recognition (i.e., the variance in spoken word recognition scores at 6 months that remained unexplained after controlling for baseline spoken word recognition scores). These findings have relevance for research on auditory processing with cochlear implants as well as implications for clinical interventions.
Article
Full-text available
Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.
Article
Hypotheses: 1) When controlling for age in postlingual adult cochlear implant (CI) users, information-processing functions, as assessed using "process" measures of working memory capacity, inhibitory control, information-processing speed, and fluid reasoning, will predict traditional "product" outcome measures of speech recognition. 2) Demographic/audiologic factors, particularly duration of deafness, duration of CI use, degree of residual hearing, and socioeconomic status, will impact performance on underlying information-processing functions, as assessed using process measures. Background: Clinicians and researchers rely heavily on endpoint product measures of accuracy in speech recognition to gauge patient outcomes postoperatively. However, these measures are primarily descriptive and were not designed to assess the underlying core information-processing operations that are used during speech recognition. In contrast, process measures reflect the integrity of elementary core subprocesses that are operative during behavioral tests using complex speech signals. Methods: Forty-two experienced adult CI users were tested using three product measures of speech recognition, along with four process measures of working memory capacity, inhibitory control, speed of lexical/phonological access, and nonverbal fluid reasoning. Demographic and audiologic factors were also assessed. Results: Scores on product measures were associated with core process measures of speed of lexical/phonological access and nonverbal fluid reasoning. After controlling for participant age, demographic and audiologic factors did not correlate with process measure scores. Conclusion: Findings provide support for the important foundational roles of information processing operations in speech recognition outcomes of postlingually deaf patients who have received CIs.
Article
Objective: Considerable unexplained variability and large individual differences exist in speech recognition outcomes for postlingually deaf adults who use cochlear implants (CIs), and a sizeable fraction of CI users can be considered "poor performers." This article summarizes our current knowledge of poor CI performance, and provides suggestions to clinicians managing these patients. Method: Studies are reviewed pertaining to speech recognition variability in adults with hearing loss. Findings are augmented by recent studies in our laboratories examining outcomes in postlingually deaf adults with CIs. Results: In addition to conventional clinical predictors of CI performance (e.g., amount of residual hearing, duration of deafness), factors pertaining to both "bottom-up" auditory sensitivity to the spectro-temporal details of speech, and "top-down" linguistic knowledge and neurocognitive functions contribute to CI outcomes. Conclusions: The broad array of factors that contribute to speech recognition performance in adult CI users suggests the potential both for novel diagnostic assessment batteries to explain poor performance, and also new rehabilitation strategies for patients who exhibit poor outcomes. Moreover, this broad array of factors determining outcome performance suggests the need to treat individual CI patients using a personalized rehabilitation approach.
Article
Electrocochleography (ECoG) to acoustic stimuli can differentiate relative degrees of cochlear responsiveness across the population of cochlear implant recipients. The magnitude of the ongoing portion of the ECoG, which includes both hair cell and neural contributions, will correlate with speech outcomes as measured by results on CNC word score tests. Postoperative speech outcomes with cochlear implants vary from almost no benefit to near normal comprehension. A factor expected to have a high predictive value is the degree of neural survival. However, speech performance with the implant does not correlate with the number and distribution of surviving ganglion cells when measured postmortem. We will investigate whether ECoG can provide an estimate of cochlear function that helps predict postoperative speech outcomes. An electrode was placed at the round window of the ear about to be implanted during implant surgery. Tone bursts were delivered through an insert earphone. Subjects included children (n = 52, 1-18 yr) and postlingually hearing impaired adults (n = 32). Word scores at 6 months were available from 21 adult subjects. Significant responses to sound were recorded from almost all subjects (80/84 or 95%). The ECoG magnitudes spanned more than 50 dB in both children and adults. The distributions of ECoG magnitudes and frequencies were similar between children and adults. The correlation between the ECoG magnitude and word score accounted for 47% of the variance. ECoGs with high signal-to-noise ratios can be recorded from almost all implant candidates, including both adult and pediatric populations. In postlingual adults, the ECoG magnitude is more predictive of implant outcomes than other nonsurgical variables such as duration of deafness or degree of residual hearing.
Article
Objective: A great deal of variability exists in the speech-recognition abilities of postlingually deaf adult cochlear implant (CI) recipients. A number of previous studies have shown that duration of deafness is a primary factor affecting CI outcomes; however, there is little agreement regarding other factors that may affect performance. The objective of the present study was to determine the source of variability in CI outcomes by examining three main factors: biographic/audiologic information, electrode position within the cochlea, and cognitive abilities in a group of newly implanted CI recipients. Design: Participants were 114 postlingually deaf adults with either the Cochlear or Advanced Bionics CI systems. Biographic/audiologic information, aided sentence-recognition scores, a high-resolution temporal bone CT scan, and cognitive measures were obtained before implantation. Monosyllabic word recognition scores were obtained during numerous test intervals from 2 weeks to 2 years after initial activation of the CI. Electrode position within the cochlea was determined by three-dimensional reconstruction of pre- and postimplant CT scans. Participants' word scores over 2 years were fit with a logistic curve to predict word score as a function of time and to highlight four word-recognition metrics (CNC initial score, CNC final score, rise time to 90% of CNC final score, and CNC difference score). Results: Participants were divided into six outcome groups based on the percentile ranking of their CNC final score; that is, participants in the bottom 10% were in group 1, and those in the top 10% were in group 6. Across outcome groups, significant relationships from low to high performance were identified. The biographic/audiologic factors of age at implantation, duration of hearing loss, duration of hearing aid use, and duration of severe-to-profound hearing loss were significantly and inversely related to performance, as were frequency-modulated-tone sound-field threshold levels obtained with the CI. That is, the higher-performing outcome groups were younger at the time of implantation, had shorter durations of severe-to-profound hearing loss, and had lower CI sound-field threshold levels. Significant inverse relationships across outcome groups were also observed for electrode position, specifically the percentage of electrodes in scala vestibuli as opposed to scala tympani and the depth of insertion of the electrode array. In addition, positioning of electrode arrays closer to the modiolar wall was positively correlated with outcome. Cognitive ability was significantly and positively related to outcome; however, age at implantation and cognition were highly correlated. After controlling for age, cognition was no longer a factor affecting outcomes. Conclusion: There are a number of factors that limit CI outcomes. They can act singularly or collectively to restrict an individual's performance and to varying degrees. The highest-performing CI recipients are those with the fewest limiting factors. Knowledge of when and how these factors affect performance can favorably influence counseling, device fitting, and rehabilitation for individual patients and can contribute to improved device design and application.
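The logistic fit described above can be sketched as follows: word scores across test intervals are fit with a logistic function, and the initial score, final score, and rise time to 90% of the final score are read off the fitted curve. The data, parameterization, and starting values in this Python fragment are illustrative assumptions of mine, not the authors' fitting procedure.

    # Sketch: fit CNC word scores over time with a logistic curve (hypothetical data).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, initial, final, t_half, rate):
        """Word score rising from 'initial' toward 'final'; t_half is the midpoint (months)."""
        return initial + (final - initial) / (1.0 + np.exp(-rate * (t - t_half)))

    months = np.array([0.5, 1, 2, 3, 6, 9, 12, 18, 24])        # test intervals (months)
    scores = np.array([12, 20, 33, 42, 55, 60, 63, 65, 66])    # hypothetical CNC % correct

    params, _ = curve_fit(logistic, months, scores, p0=[10, 65, 3, 1])
    initial, final, t_half, rate = params

    # Rise time: first time at which the fitted curve reaches 90% of the final score.
    t_grid = np.linspace(0, 24, 2401)
    rise_time = t_grid[np.argmax(logistic(t_grid, *params) >= 0.9 * final)]
    print(f"initial={initial:.1f}%, final={final:.1f}%, rise time to 90% final={rise_time:.1f} months")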
Article
Objectives: Hearing aids use complex processing intended to improve speech recognition. Although many listeners benefit from such processing, it can also introduce distortion that offsets or cancels intended benefits for some individuals. The purpose of the present study was to determine the effects of cognitive ability (working memory) on individual listeners' responses to distortion caused by frequency compression applied to noisy speech. Design: The present study analyzed a large data set of intelligibility scores for frequency-compressed speech presented in quiet and at a range of signal-to-babble ratios. The intelligibility data set was based on scores from 26 adults with hearing loss with ages ranging from 62 to 92 years. The listeners were grouped based on working memory ability. The amount of signal modification (distortion) caused by frequency compression and noise was measured using a sound quality metric. Analysis of variance and hierarchical linear modeling were used to identify meaningful differences between subject groups as a function of signal distortion caused by frequency compression and noise. Results: Working memory was a significant factor in listeners' intelligibility of sentences presented in babble noise and processed with frequency compression based on sinusoidal modeling. At maximum signal modification (caused by both frequency compression and babble noise), the factor of working memory (when controlling for age and hearing loss) accounted for 29.3% of the variance in intelligibility scores. Combining working memory, age, and hearing loss accounted for a total of 47.5% of the variability in intelligibility scores. Furthermore, as the total amount of signal distortion increased, listeners with higher working memory performed better on the intelligibility task than listeners with lower working memory did. Conclusions: Working memory is a significant factor in listeners' responses to total signal distortion caused by cumulative effects of babble noise and frequency compression implemented with sinusoidal modeling. These results, together with other studies focused on wide-dynamic range compression, suggest that older listeners with hearing loss and poor working memory are more susceptible to distortions caused by at least some types of hearing aid signal-processing algorithms and by noise, and that this increased susceptibility should be considered in the hearing aid fitting process.
Article
Background: There is a pressing need for new clinically feasible speech recognition tests that are theoretically motivated, sensitive to individual differences, and access the core perceptual and neurocognitive processes used in speech perception. PRESTO (Perceptually Robust English Sentence Test Open-set) is a new high-variability sentence test designed to reflect current theories of exemplar-based learning, attention, and perception, including lexical organization and automatic encoding of indexical attributes. Using sentences selected from the TIMIT (Texas Instruments/Massachusetts Institute of Technology) speech corpus, PRESTO was developed to include talker and dialect variability. The test consists of lists balanced for talker gender, keywords, frequency, and familiarity. Purpose: To investigate the performance, reliability, and validity of PRESTO. Research design: In Phase I, PRESTO sentences were presented in multitalker babble at four signal-to-noise ratios (SNRs) to obtain a distribution of performance. In Phase II, participants returned and were tested on new PRESTO sentences and on HINT (Hearing In Noise Test) sentences presented in multitalker babble. Study sample: Young, normal-hearing adults (N = 121) were recruited from the Indiana University community for Phase I. Participants who scored within the upper and lower quartiles of performance in Phase I were asked to return for Phase II (N = 40). Data collection and analysis: In both Phase I and Phase II, participants listened to sentences presented diotically through headphones while seated in enclosed carrels at the Speech Research Laboratory at Indiana University. They were instructed to type in the sentence that they heard using keyboards interfaced to a computer. Scoring for keywords was completed offline following data collection. Phase I data were analyzed by determining the distribution of performance on PRESTO at each SNR and at the average performance across all SNRs. PRESTO reliability was analyzed by a correlational analysis of participant performance at test (Phase I) and retest (Phase II). PRESTO validity was analyzed by a correlational analysis of participant performance on PRESTO and HINT sentences tested in Phase II, and by an analysis of variance of within-subject factors of sentence test and SNR, and a between-subjects factor of group, based on level of Phase I performance. Results: A wide range of performance on PRESTO was observed; averaged across all SNRs, keyword accuracy ranged from 40.26 to 76.18% correct. PRESTO accuracy at retest (Phase II) was highly correlated with Phase I accuracy (r = 0.92, p < 0.001). PRESTO scores were also correlated with scores on HINT sentences (r = 0.52, p < 0.001). Phase II results showed an interaction between sentence test type and SNR [F(3, 114) = 121.36, p < 0.001], with better performance on HINT sentences at more favorable SNRs and better performance on PRESTO sentences at poorer SNRs. Conclusions: PRESTO demonstrated excellent test/retest reliability. Although a moderate correlation was observed between PRESTO and HINT sentences, a different pattern of results occurred with the two types of sentences depending on the level of the competition, suggesting the use of different processing strategies. Findings from this study demonstrate the importance of high-variability materials for assessing and understanding individual differences in speech perception.
Article
The four studies reported in this article, involving a total of 401 adults ranging between 18 and 80 years of age, were designed to investigate how working memory might mediate adult age differences in matrix reasoning tasks such as the Raven's Progressive Matrices Test. Evidence of this mediation is available in the finding that statistical control of an index of working memory reduces the age-related variance in matrix reasoning performance by approximately 70 per cent. Because the age differences were nearly constant across items of varying difficulty, it was concluded that the factors responsible for variation in item difficulty were distinct from those responsible for the age differences. However, young adults were found to be more accurate than older adults at recognizing information presented earlier in the matrix reasoning trial, thereby supporting the interpretation that working memory exerts its influence by contributing to the preservation of information during subsequent processing.
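One common way to operationalize the statistical control described above is to partial the working-memory index out of the matrix-reasoning scores and compare the age-related variance before and after. The simulation below uses fabricated data chosen only to make the computation concrete, not to reproduce the reported 70% figure.

    # Sketch: how statistical control of working memory can reduce age-related
    # variance in matrix reasoning. Data are simulated for illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    age = rng.uniform(18, 80, n)
    working_memory = -0.04 * age + rng.normal(scale=0.8, size=n)                   # WM declines with age
    reasoning = 1.5 * working_memory - 0.01 * age + rng.normal(scale=0.8, size=n)  # reasoning score

    def r_squared(x, y):
        """Proportion of variance in y explained by x (simple regression)."""
        return float(np.corrcoef(x, y)[0, 1] ** 2)

    # Age-related variance in the raw reasoning scores.
    r2_raw = r_squared(age, reasoning)

    # Partial working memory out of reasoning, then ask how much age-related variance remains.
    slope, intercept = np.polyfit(working_memory, reasoning, 1)
    residual = reasoning - (slope * working_memory + intercept)
    r2_controlled = r_squared(age, residual)

    reduction = 100 * (1 - r2_controlled / r2_raw)
    print(f"age-related variance reduced by {reduction:.0f}% after controlling for WM")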
Article
This paper summarizes twenty studies, published since 1989, that have experimentally measured the relationship between speech recognition in noise and some aspect of cognition, using statistical techniques such as correlation or factor analysis. The results demonstrate that there is a link, but it is secondary to the predictive effects of hearing loss, and it is somewhat mixed across studies. No one cognitive test always gave a significant result, but measures of working memory (especially reading span) were mostly effective, whereas measures of general ability, such as IQ, were mostly ineffective. Some of the studies included aided listening, and two reported the benefits from aided listening: again, mixed results were found, and in some circumstances cognition was a useful predictor of hearing-aid benefit.
Article
The purpose of this research was to determine whether psychological variables were associated with the variability that characterizes the audiological performance of recipients of multichannel cochlear implants. Twenty-nine consecutive recipients of multichannel implants participated in a preoperative psychological assessment and audiological follow-up assessments after 18 months of implant use. Experimental cognitive measures that assess an ability to rapidly detect and respond to features imbedded in sequentially arrayed information accounted for up to 30% of the variance in implant outcome, suggesting the importance of cognitive abilities in implant outcome. Standardized measures of intellectual ability, however, were not predictive of outcome. The Health Opinion Survey, a measure of participatory engagement, was also a significant predictor of audiological outcome. Overall, the results implicated the importance of several specific psychological factors in the audiological outcome of cochlear implants in postlingually deafened adult recipients.
Article
A battery of speech audiometric measures and a battery of neuropsychological measures were administered to 200 elderly individuals with varying degrees of pure-tone sensitivity loss. Results were analyzed from the standpoint of the extent to which variation in speech audiometric scores could be predicted by knowledge of pure-tone hearing level, age, and cognitive status. For the four monotic test procedures (PB, SPIN-Low, SPIN-High, and SSI) degree of hearing loss bore the strongest relation to speech recognition score. Cognitive status accounted for little of the variance in any of these four speech audiometric scores. In the case of the single dichotic test procedure (DSI), both degree of hearing loss and speed of mental processing, as measured by the Digit Symbol subtest of the WAIS-R, accounted for significant variance. Finally, age accounted for significant unique variance only in the SSI score.
Article
A model of auditory performance and a model of ganglion cell survival in postlinguistically deafened adult cochlear implant users are suggested to describe the effects of aetiology, duration of deafness, age at implantation, age at onset of deafness, and duration of implant use. The models were compared with published data and a composite data set including 808 implant users. Qualitative agreement with the model of auditory performance was found. Duration of deafness had a strong negative effect on performance. Age at implantation had a slight negative effect on performance, increasing after age 60 years. Age at onset of deafness had little effect on performance up to age 60. Duration of implant use had a positive effect on performance. Aetiology had a relatively weak effect on performance.
Article
Over the past few years, there has been increased interest in studying some of the cognitive factors that affect speech perception performance of cochlear implant patients. In this paper, I provide a brief theoretical overview of the fundamental assumptions of the information-processing approach to cognition and discuss the role of perception, learning, and memory in speech perception and spoken language processing. The information-processing framework provides researchers and clinicians with a new way to understand the time-course of perceptual and cognitive development and the relations between perception and production of spoken language. Directions for future research using this approach are discussed including the study of individual differences, predicting success with a cochlear implant from a set of cognitive measures of performance and developing new intervention strategies.
Article
This study tested the hypothesis that early language experience facilitates the development of language-specific perceptual weighting strategies believed to be critical for accessing phonetic structure. In turn, that structure allows for efficient storage and retrieval of words in verbal working memory, which is necessary for sentence comprehension. Participants were forty-nine 5-year-olds, evenly distributed among four groups: those with chronic otitis media with effusion (OME), low socio-economic status (low-SES), both conditions (both), or neither condition (control). All children participated in tasks of speech perception and phonological awareness. Children in the control and OME groups participated in additional tasks examining verbal working memory, sentence comprehension, and temporal processing. The temporal-processing task tested the hypothesis that any deficits observed on the language-related tasks could be explained by temporal-processing deficits. Children in the three experimental groups demonstrated similar results to each other, but different from the control group, for speech perception and phonological awareness. Children in the OME group differed from those in the control group on tasks involving verbal working memory and sentence comprehension, but not temporal processing. Overall, these results supported the major hypothesis explored, but failed to support the hypothesis that language problems are explained to any extent by temporal-processing problems. Learning outcomes: As a result of this activity, the participant will be able to (1) explain the relation between language experience and the development of mature speech perception strategies, phonological awareness, verbal working memory, and syntactic comprehension; (2) name at least three populations of individuals who exhibit delays in the development of mature speech perception strategies, phonological awareness, verbal working memory, and syntactic comprehension, and explain why these delays exist for each group; (3) point out why perceptual strategies for speech are different for different languages; and (4) describe Baddeley's model [A.D. Baddeley, The development of the concept of working memory: implications and contributions of neuropsychology, in: G. Vallar, T. Shallice (Eds.), Neuropsychological Impairments of Short-term Memory, Cambridge University Press, New York, 1990, p. 54] of verbal working memory.
Article
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
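The Bayesian scoring principle attributed to Shortlist B above, posterior(word) proportional to prior(word) times the likelihood of the phoneme evidence, can be illustrated with a toy example. The three-segment lexicon, priors, and per-slice phoneme probabilities below are invented, and this sketch is in no way the Shortlist B implementation.

    # Toy illustration of the Bayesian idea behind Shortlist B:
    # posterior(word) ∝ prior(word) * product over segments of P(observed segment | word).
    # Lexicon, word priors, and phoneme probabilities are invented for illustration.

    # Per-slice phoneme probabilities for a three-segment input (e.g., from a gating study).
    input_probs = [
        {"k": 0.7, "g": 0.3},
        {"ae": 0.8, "eh": 0.2},
        {"t": 0.6, "p": 0.4},
    ]

    # Hypothetical lexicon with word-frequency-based priors.
    lexicon = {
        ("k", "ae", "t"): 0.5,   # "cat"
        ("k", "ae", "p"): 0.3,   # "cap"
        ("g", "eh", "t"): 0.2,   # "get"
    }

    posteriors = {}
    for word, prior in lexicon.items():
        likelihood = 1.0
        for segment, slice_probs in zip(word, input_probs):
            likelihood *= slice_probs.get(segment, 1e-6)   # small floor for unseen phonemes
        posteriors[word] = prior * likelihood

    total = sum(posteriors.values())
    for word, score in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print("-".join(word), round(score / total, 3))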
Wilkinson G, Robertson G. Wide Range Achievement Test. 4th ed. Lutz, FL: Psychological Assessment Resources; 2006.
IEEE. IEEE recommended practice for speech quality measurements: IEEE Report, 1969.