D B Pisoni

Indiana University Bloomington, Bloomington, Indiana, United States

Publications (113) · 254.9 Total Impact

  • ABSTRACT: To investigate the ability of a cochlear implant user to categorize talkers by region of origin and to examine the influence of prior linguistic experience on the perception of regional dialect variation. A postlingually deafened adult cochlear implant user from the Southern region of the United States completed a six-alternative forced-choice dialect categorization task. The cochlear implant user was most accurate at categorizing unfamiliar talkers from his own region and another familiar dialect region, and least accurate at categorizing talkers from less familiar regions. Although the dialect-specific information made available by a cochlear implant may be degraded compared with the information available to normal-hearing listeners, this experienced cochlear implant user was able to reliably categorize unfamiliar talkers by region of origin. Despite an early hearing loss, the participant made use of dialect-specific acoustic-phonetic information in the speech signal and of knowledge of regional dialect differences stored from exposure before implantation.
    Ear and Hearing 02/2014. · 2.06 Impact Factor
  • Kathleen F Faulkner, David B Pisoni
    ABSTRACT: At the present time, cochlear implantation is the only available medical intervention for patients with profound hearing loss and is considered the "standard of care" for both prelingually deaf infants and postlingually deaf adults. It has been suggested recently that cochlear implants are one of the greatest accomplishments of auditory neuroscience. Despite the enormous success of cochlear implantation for the treatment of profound deafness, especially in young prelingually deaf children, several pressing unresolved clinical issues have emerged that are at the forefront of current research efforts in the field. In this commentary we briefly review how a cochlear implant works and then discuss five of the most critical clinical and basic research issues: (1) individual differences in outcome and benefit, (2) speech perception in noise, (3) music perception, (4) neuroplasticity and perceptual learning, and (5) binaural hearing.
    Neuroscience Discovery 10/2013; 1(9):1-10.
  • ABSTRACT: Background: There is a pressing need for new clinically feasible speech recognition tests that are theoretically motivated, sensitive to individual differences, and able to access the core perceptual and neurocognitive processes used in speech perception. PRESTO (Perceptually Robust English Sentence Test Open-set) is a new high-variability sentence test designed to reflect current theories of exemplar-based learning, attention, and perception, including lexical organization and automatic encoding of indexical attributes. Using sentences selected from the TIMIT (Texas Instruments/Massachusetts Institute of Technology) speech corpus, PRESTO was developed to include talker and dialect variability. The test consists of lists balanced for talker gender, keywords, frequency, and familiarity. Purpose: To investigate the performance, reliability, and validity of PRESTO. Research Design: In Phase I, PRESTO sentences were presented in multitalker babble at four signal-to-noise ratios (SNRs) to obtain a distribution of performance. In Phase II, participants returned and were tested on new PRESTO sentences and on HINT (Hearing In Noise Test) sentences presented in multitalker babble. Study Sample: Young, normal-hearing adults (N = 121) were recruited from the Indiana University community for Phase I. Participants who scored within the upper and lower quartiles of performance in Phase I were asked to return for Phase II (N = 40). Data Collection and Analysis: In both Phase I and Phase II, participants listened to sentences presented diotically through headphones while seated in enclosed carrels at the Speech Research Laboratory at Indiana University. They were instructed to type in the sentence that they heard using keyboards interfaced to a computer. Scoring for keywords was completed offline following data collection. Phase I data were analyzed by determining the distribution of performance on PRESTO at each SNR and for performance averaged across all SNRs. PRESTO reliability was analyzed by a correlational analysis of participant performance at test (Phase I) and retest (Phase II). PRESTO validity was analyzed by a correlational analysis of participant performance on PRESTO and HINT sentences tested in Phase II, and by an analysis of variance with within-subject factors of sentence test and SNR and a between-subjects factor of group, based on level of Phase I performance. Results: A wide range of performance on PRESTO was observed; averaged across all SNRs, keyword accuracy ranged from 40.26 to 76.18% correct. PRESTO accuracy at retest (Phase II) was highly correlated with Phase I accuracy (r = 0.92, p < 0.001). PRESTO scores were also correlated with scores on HINT sentences (r = 0.52, p < 0.001). Phase II results showed an interaction between sentence test type and SNR [F(3, 114) = 121.36, p < 0.001], with better performance on HINT sentences at more favorable SNRs and better performance on PRESTO sentences at poorer SNRs. Conclusions: PRESTO demonstrated excellent test/retest reliability. Although a moderate correlation was observed between PRESTO and HINT sentences, a different pattern of results occurred with the two types of sentences depending on the level of the competition, suggesting the use of different processing strategies. Findings from this study demonstrate the importance of high-variability materials for assessing and understanding individual differences in speech perception. (A minimal sketch of the test-retest correlation follows the citation below.)
    Journal of the American Academy of Audiology 01/2013; 24(1):26-36. · 1.63 Impact Factor
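    As a concrete illustration of the reliability analysis described above, the sketch below computes a Pearson test-retest correlation from per-listener keyword-accuracy scores. It is a minimal sketch; the listener scores are hypothetical placeholders, not data from the study.

    ```python
    # Minimal sketch of the test-retest reliability analysis: a Pearson
    # correlation between Phase I and Phase II keyword-accuracy scores.
    # The per-listener scores below are hypothetical placeholders.
    from math import sqrt

    def pearson_r(x, y):
        """Pearson product-moment correlation between two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    phase1 = [42.0, 55.5, 61.0, 70.2, 75.8]  # percent keywords correct, test
    phase2 = [44.1, 57.0, 59.5, 72.0, 76.3]  # percent keywords correct, retest

    print(f"test-retest r = {pearson_r(phase1, phase2):.2f}")
    ```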
  • ABSTRACT: The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed-set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a greater degree, post-test speech performance were significantly correlated with environmental sound identification. For both groups, environmental sounds characterized as having more salient temporal information were identified more often than environmental sounds characterized as having more salient spectral information. Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not have an effect on the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting either that explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone. (A minimal noise-vocoder sketch follows the citation below.)
    Ear and Hearing 09/2009; 30(6):662-74. · 2.06 Impact Factor
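    The training stimuli above were produced with an eight-channel noise vocoder. The sketch below is a minimal generic vocoder of that family (band-pass analysis, envelope extraction, noise-carrier resynthesis); the filter orders, corner frequencies, and 30 Hz envelope cutoff are illustrative assumptions, not the study's exact parameters.

    ```python
    # Minimal sketch of a generic N-channel noise vocoder: band-pass the
    # speech, extract each band's amplitude envelope, modulate band-limited
    # noise with it, and sum the bands. All parameters are illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=5000.0):
        x = np.asarray(x, dtype=float)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
        env_b, env_a = butter(2, 30.0, btype="low", fs=fs)  # envelope smoother
        out = np.zeros_like(x)
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(3, [lo, hi], btype="band", fs=fs)
            band = filtfilt(b, a, x)                           # analysis band
            env = filtfilt(env_b, env_a, np.abs(band))         # rectified + low-passed
            carrier = filtfilt(b, a, np.random.randn(len(x)))  # band-limited noise
            out += np.clip(env, 0.0, None) * carrier
        return out / (np.max(np.abs(out)) + 1e-12)             # normalize

    # Usage: vocoded = noise_vocode(signal, fs=16000, n_channels=8)
    ```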
  • Susannah Levi, Stephen Winters, David B Pisoni
    ABSTRACT: Previous research has shown that familiar talkers are more intelligible than unfamiliar talkers. In the current study, we tested the source of this familiar talker advantage by manipulating the type of talker information available to listeners. Two groups of native English listeners were familiarized with the voices of five German-English bilingual talkers; one group learned the voices from German stimuli and the other from English stimuli. Thus, English-trained listeners had access to both language-independent and English-specific talker information, while German-trained listeners had access to language-independent and German-specific talker information. After three days of voice learning, all listeners performed a word recognition task in English. Consistent with previous findings, English-trained listeners found the speech of familiar talkers to be more intelligible than unfamiliar talkers, as measured by whole words and phonemes correct. In contrast, German-trained listeners showed no familiar talker advantage, suggesting that listeners must have knowledge of talker-specific, linguistically relevant information to elicit the familiar talker advantage and that knowledge of language-independent talker information (such as the size and shape of the vocal tract) does not facilitate speech perception.
    The Journal of the Acoustical Society of America 06/2008; 123(5):3331. · 1.65 Impact Factor
  • David B. Pisoni
    ABSTRACT: Although a cochlear implant (CI) restores access to sound and speech for profoundly deaf children, there is substantial inter-individual variation in outcomes, and many children with a CI continue to be delayed in their spoken language development. This suggests that they may benefit from alternative modes of communication such as sign language. However, the role of signed input in the education of children with a CI is much debated. The aim of the present thesis was twofold: to explore underlying processes in speech perception that may help to explain inter-individual variation in outcomes, and to obtain insight into the effects of signed input on spoken language abilities. To that end, this thesis investigates speech and sign perception in 5- to 6-year-old children with a CI. More specifically, it examines and interrelates the use of acoustic and visual cues in phonetic categorization and the representation of phonetic contrasts in novel words and signs. Additionally, it investigates the effects of bimodal (i.e., simultaneously spoken and signed) input on speech perception. The analyses show that children with a CI have fuzzy boundaries between sound categories and have difficulty representing phonetic detail in novel words. Weakly specified auditory phonological-lexical representations likely negatively impact speech processing. Importantly, signing experience did not negatively affect their speech perception, and bimodal input even seemed to facilitate spoken word recognition. Together, these findings form an argument for bilingualism in a spoken and a signed language as the ultimate goal in the rehabilitation and education of children with a CI.
    01/2008: pages 494-523; ISBN: 9780470757024
  • ABSTRACT: To determine the effects of length of cochlear implant use and other demographic factors on the development of sustained visual attention in prelingually deaf children, and to examine the relations between performance on a test of sustained visual attention and audiological outcome measures in this population. A retrospective analysis of data collected before cochlear implantation and over several years after implantation. Two groups of prelingually deaf children, one >6 years old (N = 41) and one <6 years old (N = 47) at testing, were given an age-appropriate Continuous Performance Task (CPT). In both groups, children monitored visually presented numbers for several minutes and responded whenever a designated number appeared. Hit rate, false alarm rate, and signal detection parameters were the dependent measures of sustained visual attention. We tested for effects of a number of patient variables on CPT performance. Multiple regression analyses were conducted to determine whether CPT scores were related to performance on several audiological outcome measures. In both groups of children, mean CPT performance was low compared with published norms for normal-hearing children, and performance improved as a function of length of cochlear implant use and chronological age. The improvement in performance was manifested as an increase in hit rate and perceptual sensitivity over time. In the younger age group, a greater number of active electrodes predicted better CPT performance. Results from regression analyses indicated a relationship between CPT response criterion and receptive language in the younger age group. However, we failed to uncover any other relations between CPT performance and speech and language outcome measures. Our findings suggest that cochlear implantation in prelingually deaf children leads to improved performance on a test of sustained visual processing of numbers over 2 or more years of cochlear implant use. In preschool-age children who use cochlear implants, individuals who are more conservative responders on the CPT show higher receptive language scores than do individuals with more impulsive response patterns. Theoretical accounts of these findings are discussed, including cross-modal reorganization of visual attention and enhanced phonological encoding of visually presented numbers. (A sketch of the standard signal-detection indices follows the citation below.)
    Ear and Hearing 08/2005; 26(4):389-408. · 3.26 Impact Factor
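    The abstract reports hit rate, false alarm rate, and signal detection parameters as dependent measures but does not spell out the computation. A common Gaussian equal-variance formulation is sketched below; the log-linear correction and the example counts are assumptions for illustration, not the study's exact parameterization.

    ```python
    # Common signal-detection indices for a CPT: sensitivity d' and
    # response criterion c, computed from hit and false-alarm rates.
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # inverse standard-normal CDF

    def detection_indices(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps both rates strictly inside (0, 1).
        h = (hits + 0.5) / (hits + misses + 1)
        f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        d_prime = z(h) - z(f)             # perceptual sensitivity
        criterion = -0.5 * (z(h) + z(f))  # positive = conservative responder
        return d_prime, criterion

    # Hypothetical counts for one child: 38 hits, 7 misses, 5 false alarms,
    # 130 correct rejections.
    print(detection_indices(38, 7, 5, 130))
    ```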
  • ABSTRACT: Individual speech and language outcomes of deaf children with cochlear implants (CIs) are quite varied. Individual differences in underlying cognitive functions may explain some of this variance. The current study investigated whether behavioral inhibition skills of deaf children were related to performance on a range of audiologic outcome measures. Retrospective analysis of longitudinal data collected from prelingually and profoundly deaf children who used CIs. Behavioral inhibition skills were measured using a visual response delay task that did not require hearing. Speech and language measures were obtained from behavioral tests administered at 1-year intervals of CI use. Female subjects showed higher response delay scores than males. Performance increased with length of CI use. Younger children showed greater improvement in performance as a function of device use than older children. No other subject variable had a significant effect on response delay score. A series of multiple regression analyses revealed several significant relations between delay task performance and open set word recognition, vocabulary, receptive language, and expressive language scores. The present results suggest that CI experience affects visual information processing skills of prelingually deaf children. Furthermore, the observed pattern of relations suggests that speech and language processing skills are closely related to the development of response delay skills in prelingually deaf children with CIs. These relations may reflect underlying verbal encoding skills, subvocal rehearsal skills, and verbally mediated self-regulatory skills. Clinically, visual response delay tasks may be useful in assessing behavioral and cognitive development in deaf children after implantation.
    The Laryngoscope 05/2005; 115(4):595-600. · 1.98 Impact Factor
  • R Burkholder, D Pisoni
    ABSTRACT: The errors made by 37 pediatric cochlear implant users and age-matched normal-hearing children during forward and backward digit span recall were analyzed. All children were between 8 and 10 years old. The children who used implants had at least 4.5 years of experience with their device. Errors were classified into four categories: item, order, omission, and combination errors. Recall of a digit not presented on a given trial was classified as an item error; recall of all correct digits in an incorrect order was classified as an order error. Results from a univariate ANOVA revealed main effects of error type, recall condition, and hearing ability. In addition, the error type by recall condition interaction revealed that order errors increased more in backward digit span recall than any other type of error for both normal-hearing children and children with cochlear implants. The present results are consistent with previous studies, suggesting that the shorter digit spans of children using cochlear implants are not primarily related to perceptual difficulties but appear to reflect memory processing problems related to slower subvocal verbal rehearsal and serial scanning of items in short-term memory. (A toy version of the error classification follows the citation below.)
    International Congress Series 11/2004; 1273:312-315.
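    A toy version of the four-way error classification described above. The decision rules for mixed cases are assumptions; the paper does not give its exact boundary conditions.

    ```python
    # Toy classifier for recall errors: item (digit not presented),
    # order (correct digits, wrong sequence), omission (digits left out),
    # or combination. Boundary rules are assumptions for illustration.
    def classify_recall(presented, recalled):
        if recalled == presented:
            return "correct"
        has_intrusion = any(d not in presented for d in recalled)  # item error?
        has_omission = len(recalled) < len(presented)              # digits left out?
        if has_intrusion and has_omission:
            return "combination"
        if has_intrusion:
            return "item"
        if has_omission:
            return "omission"
        return "order"  # same digits, same length, different arrangement

    print(classify_recall([3, 1, 4, 5], [3, 4, 1, 5]))  # -> order
    print(classify_recall([3, 1, 4, 5], [3, 1, 5]))     # -> omission
    print(classify_recall([3, 1, 4, 5], [3, 1, 7, 5]))  # -> item
    ```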
  • ABSTRACT: We investigated source misattributions in the DRM false memory paradigm (Deese, 1959; Roediger & McDermott, 1995). Subjects studied words in one of two voices, manipulated between lists (pure-voice lists) or within lists (mixed-voice lists), and were subsequently given a recognition test with voice-attribution judgements. Experiments 1 and 2 used visual tests. With pure-voice lists (Experiment 1), subjects frequently attributed related lures to the corresponding study voice, despite having the option to not respond. Further, these erroneous attributions remained high with mixed-voice lists (Experiment 2). Thus, even when their related lists were not associated with a particular voice, subjects misattributed the lures to one of the voices. Attributions for studied items were fairly accurate in both cases. Experiments 3 and 4 used auditory tests. With pure-voice lists (Experiment 3), subjects frequently attributed related lures and studied items to the corresponding study voice, regardless of the test voice. In contrast, with mixed-voice lists (Experiment 4), subjects frequently attributed related lures and studied items to the corresponding test voice, regardless of the study voice. These findings indicate that source attributions can be sensitive to voice information provided either at study or at test, even though this information is irrelevant for related lures.
    Memory 10/2004; 12(5):586-602. · 2.09 Impact Factor
  • R Burkholder, D Pisoni, M Svirsky
    ABSTRACT: This study examined the effects of perceptual learning on the nonword repetition performance of normal-hearing listeners who were exposed to severely degraded auditory conditions designed to simulate the auditory input of a cochlear implant. Twenty normal-hearing adult listeners completed a nonword repetition task using an eight-band, frequency-shifted cochlear implant simulation strategy both before and after training on open- and closed-set word recognition tasks. Feedback was provided during training. The nonword responses obtained from each participant were digitally recorded and played back to normal-hearing listeners, who rated the nonword repetition accuracy in comparison to the original unprocessed target stimuli using a seven-point scale. The mean nonword accuracy ratings were significantly higher for the nonwords repeated after training than for nonwords repeated prior to training. These results suggest that the word recognition training tasks encouraged auditory perceptual learning that generalized to novel, nonword auditory stimuli. The present findings also suggest that adaptation and learning from the degraded auditory stimuli produced by a cochlear implant simulation can be achieved even in a difficult perceptual-motor task such as nonword repetition, which involves both perception and production of an auditory stimulus that lacks any lexical or semantic representation. (A sketch of a frequency-shift mapping of the kind used in such simulations follows the citation below.)
    International Congress Series 01/2004; 1273(10):208-211.
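    The "frequency-shifted" simulation mimics a basalward shift of the electrode array. One common way to derive such a shift is Greenwood's place-frequency map; the sketch below uses Greenwood's human constants (A = 165.4, a = 0.06), while the 6.5 mm shift is an illustrative assumption, not the study's parameter.

    ```python
    # Basalward frequency shift via Greenwood's place-frequency map:
    # analysis bands keep their normal frequencies while carriers are
    # moved toward the cochlear base, mimicking a shallow insertion.
    from math import log10

    def greenwood_freq(x_mm, A=165.4, a=0.06):
        """Characteristic frequency (Hz) at x_mm from the cochlear apex."""
        return A * (10 ** (a * x_mm) - 1)

    def greenwood_place(f_hz, A=165.4, a=0.06):
        """Inverse map: place (mm from apex) for frequency f_hz."""
        return log10(f_hz / A + 1) / a

    def shifted_carrier(f_hz, shift_mm=6.5):
        """Carrier frequency after a basalward shift of shift_mm (assumed)."""
        return greenwood_freq(greenwood_place(f_hz) + shift_mm)

    for f in (250, 500, 1000, 2000, 4000):
        print(f"analysis {f:>4} Hz -> carrier {shifted_carrier(f):7.0f} Hz")
    ```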
  • M Cleary, D B Pisoni, A E Geers
    ABSTRACT: The purpose of this study was to examine working memory for sequences of auditory and visual stimuli in prelingually deafened pediatric cochlear implant users with at least 4 yr of device experience. Two groups of 8- and 9-yr-old children, 45 normal-hearing and 45 hearing-impaired users of cochlear implants, completed a novel working memory task requiring memory for sequences of either visual-spatial cues or visual-spatial cues paired with auditory signals. In each sequence, colored response buttons were illuminated either with or without simultaneous auditory presentation of verbal labels (color-names or digit-names). The child was required to reproduce each sequence by pressing the appropriate buttons on the response box. Sequence length was varied, and a measure of memory span corresponding to the longest list length correctly reproduced under each set of presentation conditions was recorded. Additional children completed a modified task that eliminated the visual-spatial light cues but still required reproduction of auditory color-name sequences using the same response box. Data from 37 pediatric cochlear implant users were collected using this modified task. The cochlear implant group obtained shorter span scores on average than the normal-hearing group, regardless of presentation format. The normal-hearing children also demonstrated a larger "redundancy gain" than children in the cochlear implant group; that is, the normal-hearing group displayed better memory for auditory-plus-lights sequences than for the lights-only sequences. Although the children with cochlear implants did not use the auditory signals as effectively as normal-hearing children when visual-spatial cues were also available, their performance on the modified memory task using only auditory cues showed that some of the children were capable of encoding auditory-only sequences at a level comparable with normal-hearing children. The finding of smaller redundancy gains from the addition of auditory cues to visual-spatial sequences in the cochlear implant group as compared with the normal-hearing group demonstrates differences in encoding or rehearsal strategies between these two groups of children. Differences in memory span between the two groups, even on a visual-spatial memory task, suggest that atypical working memory development, irrespective of input modality, may be present in this clinical population. (A minimal span-scoring sketch follows the citation below.)
    Ear and Hearing 11/2001; 22(5):395-411. · 3.26 Impact Factor
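    A minimal sketch of the span measure described above: the longest list length reproduced correctly under a given presentation condition. The trial format is an assumption for illustration.

    ```python
    # Span score = longest list length correctly reproduced.
    def span_score(trials):
        """trials: list of (presented_sequence, reproduced_sequence) pairs."""
        longest = 0
        for presented, reproduced in trials:
            if list(presented) == list(reproduced):
                longest = max(longest, len(presented))
        return longest

    trials = [
        ([1, 2], [1, 2]),
        ([3, 1, 4], [3, 1, 4]),
        ([2, 5, 3, 1], [2, 3, 5, 1]),  # sequence not reproduced correctly
    ]
    print(span_score(trials))  # -> 3
    ```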
  • Richard N. Aslin, David B. Pisoni
    Infancy 11/2001; 2(4):415-417. · 1.73 Impact Factor
  • W D Goh, D B Pisoni, K I Kirk, R E Remez
    ABSTRACT: The purpose of this case study was to investigate multimodal perceptual coherence in speech perception in an exceptionally good postlingually deafened cochlear implant user. His ability to perceive sinewave replicas of spoken sentences and the extent to which he integrated sensory information from multimodal sources were compared with those of a group of adult normal-hearing listeners to determine the contribution of natural auditory quality to the use of electrocochlear stimulation. The patient, "Mr. S," transcribed sinewave sentences of natural speech under audio-only (AO), visual-only (VO), and audio-visual (A+V) conditions. His performance was compared with data collected from 25 normal-hearing adults. Although the normal-hearing participants performed better than Mr. S for AO sentences (65% versus 53% syllables correct), Mr. S was superior for VO sentences (43% versus 18%). For A+V sentences, Mr. S's performance was comparable with that of the normal-hearing group (90% versus 86%). An estimate of the amount of visual enhancement, R, obtained from seeing the talker's face showed that Mr. S derived a larger gain from the additional visual information than the normal-hearing controls (78% versus 59%). The findings from this case study of an exceptionally good cochlear implant user suggest that he perceives the sinewave sentences on the basis of coherent variation from multimodal sensory inputs, and not on the basis of lipreading ability alone. Electrocochlear stimulation is evidently useful in multimodal contexts because it preserves dynamic speech-like variation, despite the absence of speech-like auditory qualities. (A sketch of the enhancement computation follows the citation below.)
    Ear and Hearing 11/2001; 22(5):412-9. · 3.26 Impact Factor
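    The abstract does not give the formula for the visual-enhancement measure R. The sketch below assumes the standard Sumby-Pollack normalization, R = (AV - A)/(100 - A), i.e., the audiovisual gain expressed as a proportion of the headroom above audio-alone performance; with the rounded scores reported above, it reproduces the reported gains to within about a point.

    ```python
    # Assumed visual-enhancement measure: R = (AV - A) / (100 - A).
    def visual_enhancement(audio_only, audiovisual):
        return 100 * (audiovisual - audio_only) / (100 - audio_only)

    # Rounding of the underlying scores accounts for the small gaps.
    print(f"Mr. S:          R = {visual_enhancement(53, 90):.0f}%")  # ~79 (reported 78)
    print(f"normal-hearing: R = {visual_enhancement(65, 86):.0f}%")  # ~60 (reported 59)
    ```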
  • L Lachs, D B Pisoni, K I Kirk
    ABSTRACT: Although there has been a great deal of recent empirical work and new theoretical interest in audiovisual speech perception in both normal-hearing and hearing-impaired adults, relatively little is known about the development of these abilities and skills in deaf children with cochlear implants. This study examined how prelingually deafened children combine visual information available in the talker's face with auditory speech cues provided by their cochlear implants to enhance spoken language comprehension. Twenty-seven hearing-impaired children who use cochlear implants identified spoken sentences presented under auditory-alone and audiovisual conditions. Five additional measures of spoken word recognition performance were used to assess auditory-alone speech perception skills. A measure of speech intelligibility was also obtained to assess the speech production abilities of these children. A measure of audiovisual gain, "Ra," was computed using sentence recognition scores in auditory-alone and audiovisual conditions. Another measure of audiovisual gain, "Rv," was computed using scores in visual-alone and audiovisual conditions. The results indicated that children who were better at recognizing isolated spoken words through listening alone were also better at combining the complementary sensory information about speech articulation available under audiovisual stimulation. In addition, we found that children who received more benefit from audiovisual presentation also produced more intelligible speech, suggesting a close link between speech perception and production and a common underlying linguistic basis for audiovisual enhancement effects. Finally, an examination of the distribution of children enrolled in Oral Communication (OC) and Total Communication (TC) programs indicated that OC children tended to score higher on measures of audiovisual gain, spoken word recognition, and speech intelligibility. The relationships observed between auditory-alone speech perception, audiovisual benefit, and speech intelligibility indicate that these abilities are not based on independent language skills, but instead reflect a common source of linguistic knowledge, used in both perception and production, that is based on the dynamic, articulatory motions of the vocal tract. The effects of communication mode demonstrate the important contribution of early sensory experience to perceptual development, specifically, language acquisition and the use of phonological processing skills. Intervention and treatment programs that aim to increase receptive and productive spoken language skills, therefore, may wish to emphasize the inherent cross-correlations that exist between auditory and visual sources of information in speech perception. (The assumed normalized-gain forms of Ra and Rv are given after the citation below.)
    Ear and Hearing 07/2001; 22(3):236-51. · 3.26 Impact Factor
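    The abstract defines Ra and Rv only in words. Under the normalized-gain convention assumed for the measure R in the preceding entry (an assumption; the paper may parameterize the gains differently), they would take the form

    $$
    R_a = \frac{AV - A}{100 - A}, \qquad R_v = \frac{AV - V}{100 - V}
    $$

    where A, V, and AV are percent-correct sentence scores in the auditory-alone, visual-alone, and audiovisual conditions, respectively.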
  • ABSTRACT: Cochlear implant (CI) users differ in their ability to perceive and recognize speech sounds. Two possible reasons for such individual differences may lie in their ability to discriminate formant frequencies or to adapt to the spectrally shifted information presented by cochlear implants, a basalward shift related to the implant's depth of insertion in the cochlea. In the present study, we examined these two alternatives using a method-of-adjustment (MOA) procedure with 330 synthetic vowel stimuli varying in F1 and F2, arranged in a two-dimensional grid. Subjects were asked to label the synthetic stimuli that matched ten monophthongal vowels in visually presented words and then provided goodness ratings for the stimuli they had chosen. The subjects' responses to all ten vowels were used to construct individual perceptual "vowel spaces." If CI users fail to adapt completely to the basalward spectral shift, the formant frequencies of their vowel categories should be shifted lower in both F1 and F2. However, with one exception, no systematic shifts were observed in the vowel spaces of CI users. Instead, the vowel spaces differed from one another in the relative size of their vowel categories. The results suggest that differences in formant frequency discrimination may account for the individual differences in vowel perception observed in cochlear implant users. (A sketch summarizing such vowel spaces follows the citation below.)
    The Journal of the Acoustical Society of America 06/2001; 109(5 Pt 1):2135-45. · 1.65 Impact Factor
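    A minimal sketch of how a listener's perceptual vowel space from the MOA task can be summarized: each category is the set of (F1, F2) grid points labeled with a given vowel, from which a centroid and a relative category size follow. The grid points below are hypothetical, not the study's 330-stimulus set.

    ```python
    # Summarize a perceptual vowel space: per-category centroid and
    # relative size. Grid values are hypothetical placeholders.
    def summarize_vowel_space(labels):
        """labels: dict mapping vowel -> list of (F1, F2) points chosen for it."""
        total = sum(len(pts) for pts in labels.values())
        summary = {}
        for vowel, pts in labels.items():
            f1 = sum(p[0] for p in pts) / len(pts)
            f2 = sum(p[1] for p in pts) / len(pts)
            summary[vowel] = {"centroid_hz": (round(f1), round(f2)),
                              "relative_size": round(len(pts) / total, 2)}
        return summary

    space = {"i": [(280, 2250), (300, 2300), (320, 2200)],
             "a": [(700, 1200), (750, 1150)]}
    print(summarize_vowel_space(space))
    ```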
  • S A Frisch, D B Pisoni
    ABSTRACT: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. The simulations also investigated the interrelations of commonly used measures of closed-set and open-set tests of speech perception. A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In the first, early phoneme decisions are used in a lexical search to find the best matching candidate; in the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and the Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. (A toy sketch of the first, early-decision model follows the citation below.)
    Ear and Hearing 01/2001; 21(6):578-89. · 3.26 Impact Factor
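    A toy sketch of the first simulation variant described above: phoneme decisions are made early (sampled here from a small made-up confusion table), then a lexical search returns the best-matching word. The lexicon, confusion table, and position-wise match score are all illustrative assumptions; the study's models were driven by feature identification scores.

    ```python
    # Early-decision lexical access, toy version: sample phoneme decisions
    # from a confusion model, then search the lexicon for the best match.
    import random

    random.seed(0)  # reproducible toy run

    CONFUSIONS = {"b": ["b", "b", "b", "p"],  # /b/ heard as /p/ 25% of the time
                  "p": ["p", "p", "p", "b"]}

    def identify(phoneme):
        """Sample a (possibly erroneous) early phoneme decision."""
        return random.choice(CONFUSIONS.get(phoneme, [phoneme]))

    def recognize(target, lexicon):
        decided = [identify(p) for p in target]  # early phoneme decisions
        def match(word):                         # position-wise overlap score
            return sum(a == b for a, b in zip(decided, word)) - abs(len(decided) - len(word))
        return max(lexicon, key=match)

    lexicon = [("b", "a", "t"), ("p", "a", "t"), ("b", "a", "d")]
    print(recognize(("b", "a", "t"), lexicon))  # usually ('b', 'a', 't')
    ```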
  • The Annals of Otology, Rhinology & Laryngology, Supplement 01/2001; 185:114-6.
  • The Annals of Otology, Rhinology & Laryngology, Supplement 01/2001; 185:60-2.
  • ABSTRACT: On the basis of the good predictions for phonemes correct, we conclude that closed-set feature identification may successfully predict phoneme identification in an open-set word recognition task. For word recognition, however, the PCM model underpredicted observed performance, and the addition of a mental lexicon (i.e., the SPAMR model) was needed for a good match to data averaged across 7 adults with CIs. The predictions for words correct improved with the addition of a lexicon, providing support for the hypothesis that lexical information is used in open-set spoken word recognition by CI users. The perception of words more complex than CNCs is also likely to require lexical knowledge (Frisch et al., this supplement, pp 60-62). In the future, we will use the performance of individual CI users on psychophysical tasks to generate predicted vowel and consonant confusion matrices to be used to predict open-set spoken word recognition.
    The Annals of Otology, Rhinology & Laryngology, Supplement 01/2001; 185:68-70.

Publication Stats

5k Citations
254.90 Total Impact Points

Institutions

  • 1974–2013
    • Indiana University Bloomington
      • Department of Psychological and Brain Sciences
      Bloomington, Indiana, United States
  • 2001–2008
    • University of Michigan
      • Department of Linguistics
      Ann Arbor, Michigan, United States
  • 1997–2005
    • Indiana University-Purdue University Indianapolis
      • Department of Otolaryngology-Head and Neck Surgery
      Indianapolis, Indiana, United States
    • Washington University in St. Louis
      • Department of Psychology
      Saint Louis, Missouri, United States
  • 1999–2004
    • Indiana University East
      Indiana, United States
    • Northwestern University
      • Department of Linguistics
      Evanston, Illinois, United States
    • Indiana University-Purdue University School of Medicine
      • Department of Otolaryngology
      Indianapolis, Indiana, United States