Karen Iler Kirk

University of Illinois, Urbana-Champaign, Urbana, Illinois, United States

Publications (91) · 97.62 Total impact

  • Haihong Liu · Sha Liu · Karen Iler Kirk · Jie Zhang · Wentong Ge · Jun Zheng · Zhicheng Liu · Xin Ni
    ABSTRACT: The objective of the present study was to investigate longitudinal open-set word perception in Mandarin-speaking children with cochlear implants (CIs). Prospective cohort study. One hundred and five prelingually deaf children implanted with CIs participated in the study. The Standard-Chinese versions of the Monosyllabic Lexical Neighborhood Test (LNT) and Multisyllabic Lexical Neighborhood Test (MLNT) were used as open-set word perception measures. Evaluations were administered at 6, 12, 24, 36, 48, 60, 72, and 84 months after CI stimulation. (1) Spoken word perception of congenitally deaf children with CIs improved significantly over time. (2) The fastest improvement occurred in the first 36 months after initial activation; improvement then slowed, and a peak score of 81.7% correct was reached at 72 months after initial activation. (3) Children implanted earlier exhibited better longitudinal performance. (4) Lexical factors consistently affected performance in each evaluation session. For lexically harder words, such as monosyllabic hard words, there was substantial room for improvement even after long-term CI use. (1) CIs continuously provided significant word-perception benefits to children with severe-to-profound sensorineural hearing loss. (2) Age at implantation and Mandarin lexical factors significantly affected longitudinal performance.
    No preview · Article · Jul 2015 · International journal of pediatric otorhinolaryngology
  • Nancy M Young · Karen Iler Kirk

    No preview · Article · Feb 2013 · Otology & neurotology: official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
  • Source
    ABSTRACT: Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.
    Full-text · Article · Jun 2012 · Journal of the American Academy of Audiology
  • Source
    Vidya Krull · Xin Luo · Karen Iler Kirk
    ABSTRACT: Understanding speech in background noise, talker identification, and vocal emotion recognition are challenging for cochlear implant (CI) users due to poor spectral resolution and limited pitch cues with the CI. Recent studies have shown that bimodal CI users, that is, those CI users who wear a hearing aid (HA) in their non-implanted ear, receive benefit for understanding speech both in quiet and in noise. This study compared the efficacy of talker-identification training in two groups of young normal-hearing adults, listening to either acoustic simulations of unilateral CI or bimodal (CI+HA) hearing. Training resulted in improved identification of talkers for both groups, with better overall performance for simulated bimodal hearing. Generalization of learning to sentence and emotion recognition also was assessed in both subject groups. Sentence recognition in quiet and in noise improved for both groups, regardless of whether the talkers had been heard during training. Generalization to improvements in emotion recognition for two unfamiliar talkers also was noted for both groups, with the simulated bimodal-hearing group showing better overall emotion-recognition performance. Improvements in sentence recognition were retained a month after training in both groups. These results have potential implications for aural rehabilitation of conventional and bimodal CI users.
    Full-text · Article · Apr 2012 · The Journal of the Acoustical Society of America
  • Source
    ABSTRACT: To examine multimodal spoken word-in-sentence recognition in children. Two experiments were undertaken. In Experiment 1, the youngest age at which the multimodal sentence recognition materials could be used was evaluated. In Experiment 2, lexical difficulty and presentation modality effects were examined, along with test-retest reliability and validity in normal-hearing children and those with cochlear implants. Normal-hearing children as young as 3.25 years, and children with cochlear implants just under 4 years of age who had used their device for at least 1 year, were able to complete the multimodal sentence testing. Both groups identified lexically easy words in sentences more accurately than lexically hard words across modalities, although the largest effects occurred in the auditory-only modality. Both groups displayed audiovisual integration, with the highest scores achieved in the audiovisual modality, followed sequentially by the auditory-only and visual-only modalities. Recognition of words in sentences was correlated with recognition of words in isolation. Preliminary results suggest fair-to-good test-retest reliability. The results suggest that children's audiovisual word-in-sentence recognition can be assessed using the materials developed for this investigation. With further development, the materials hold promise for becoming a test of multimodal sentence recognition for children with hearing loss.
    Full-text · Article · Apr 2011 · Journal of Speech Language and Hearing Research
  • Nan Mai Wang · Che-Ming Wu · Karen Iler Kirk
    ABSTRACT: This investigation aimed to examine the effects of word frequency and lexical neighborhood density on spoken word recognition of monosyllables and disyllables in Mandarin by normal-hearing children and children with cochlear implants. The lexical characteristics were drawn from the Neighborhood Activation Model (NAM), which suggests that words in the mental lexicon are organized into similarity neighborhoods. The difficulty of a listener's task is affected by the frequency of the target word and the density of the lexical neighbors from which that word must be identified. The Monosyllabic Lexical Neighborhood Test and the Disyllabic Lexical Neighborhood Test in Mandarin Chinese (Mandarin LNT and MLNT) were developed to take into account the effects of these linguistic and cognitive demands on speech perception performance. The investigation was conducted in three stages. In the first stage, Mandarin monosyllabic and disyllabic words were selected and their lexical properties were calculated from the CHILDES database. Four lexically "easy" and four lexically "hard" word lists in the Mandarin LNT, as well as two word lists across lexical properties among disyllables, were determined based on their relative word frequencies and neighborhood densities. In the second stage, word stimuli were verified by 30 children in the normal-hearing (NH) group and 36 children in the cochlear implant (CI) group. In the third stage, the inter-list equivalency and test-retest reliability of word lists across lexical properties were determined, and the correlations of the Mandarin LNT and MLNT with other measures, as well as inter-rater reliability, were also investigated. Word recognition scores were higher for disyllables than for monosyllables. Lexically "easy" disyllabic words were better recognized than their "hard" counterparts and than the monosyllables by both groups of children. However, no lexical effects on word recognition of Mandarin monosyllables were observed for either group.
No significant differences were found among word lists in each combination of syllable structure and lexical property. Inter-rater reliability, inter-list equivalency, and test-retest reliability were demonstrated. The Mandarin LNT and MLNT were found to be highly reliable measures of spoken word recognition (r = 0.84; p < 0.01), with acceptable equivalency between lists (r = 0.638-0.876). Lexical effects on Mandarin word recognition were demonstrated only among disyllabic words for both the NH and CI children; the many homophones among Mandarin monosyllables were suggested as a contributing factor. Lexical effects on spoken word recognition in Mandarin are not demonstrated as substantially as in English, but the Mandarin LNT and MLNT provided reliable information on the spoken word recognition of pediatric CI users both in the initial stage after implantation and over the course of rehabilitation.
    No preview · Article · Aug 2010 · International journal of pediatric otorhinolaryngology
  • Source
    ABSTRACT: This study is the first in a series designed to develop and norm new theoretically motivated sentence tests for children. The purpose was to examine the independent contributions of word frequency (i.e., how often words occur in language) and lexical density (the number of similar-sounding words, or "neighbors," to a target word) to the perception of key words in the new sentence set. Twenty-four children with normal hearing aged 5 to 12 years served as participants; they were divided into four equal age-matched groups. The stimuli consisted of 100 semantically neutral sentences that were 5 to 7 words in length. Each sentence contained 3 key words that were controlled for word frequency and lexical density. Words with few neighbors come from sparse neighborhoods, whereas words with many neighbors come from dense neighborhoods. The key words within a sentence belonged to one of four lexical categories: (1) high-frequency sparse, (2) low-frequency dense, (3) high-frequency dense, and (4) low-frequency sparse. Participants were administered the sentence list and the 300 key words in isolation at 65 dB SPL. Each participant group was tested in spectrally matched noise at one of four signal-to-noise ratios (SNRs of -2, 0, 2, and 4 dB). The percentage of words correctly identified was calculated as a function of SNR, key word context (sentences vs. words), and key word lexical category. SNR had a significant effect on the recognition of key words in sentences and in isolation; performance improved at higher SNRs. There were significant main effects of word frequency and lexical density as well as a significant interaction between the two lexical factors. In isolation, high-frequency words were recognized more accurately than low-frequency words. In both word and sentence contexts, sparse words yielded greater accuracy than dense words, irrespective of word frequency.
There was a modest but significant negative correlation between lexical density and the recognition of words in isolation and in sentences. Word frequency and lexical density seem to influence word recognition independently in children with normal hearing. This is similar to earlier results in adults with normal hearing. In addition, there seems to be an interaction between the two factors, with lexical density being more heavily weighted than word frequency. These results give us further insight into the way children organize and access words from long-term lexical memory in a relational way. Our results showed that lexical effects were most evident at poorer SNRs. This may have important implications for assessing spoken-word recognition performance in children with sensory aids because they typically receive a degraded auditory signal.
    Full-text · Article · Sep 2009 · Ear and hearing
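Several of the abstracts above rely on the Neighborhood Activation Model's notion of lexical neighborhood density, where a word's neighbors are conventionally defined as lexicon entries that differ from it by a single phoneme substitution, deletion, or addition. As a rough illustration only (the toy lexicon and its transcriptions below are hypothetical, not drawn from any of the studies listed), density can be computed with a one-edit check over phoneme strings:

```python
# Sketch: lexical neighborhood density under the one-phoneme-edit definition.
# A "neighbor" of a word is any lexicon entry reachable by a single phoneme
# substitution, deletion, or addition (i.e., edit distance 1).

def levenshtein(a, b):
    """Classic dynamic-programming edit distance over phoneme strings."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def neighborhood_density(word, lexicon):
    """Count lexicon entries exactly one phoneme edit away from `word`."""
    return sum(1 for w in lexicon if w != word and levenshtein(word, w) == 1)

# Toy lexicon in a made-up broad phonemic transcription (illustrative only).
lexicon = ["kat", "bat", "hat", "kab", "kats", "at", "dog"]
print(neighborhood_density("kat", lexicon))  # 5 neighbors: bat, hat, kab, kats, at
```

On this metric, a "dense" word like "kat" above competes with many similar-sounding entries, whereas a word with few or no one-edit neighbors comes from a sparse neighborhood; the tests described in these abstracts additionally weight neighbors by word frequency.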
  • ABSTRACT: The acquisition of speech perception and speech production skills emerges over a protracted time course in congenitally deaf children with multichannel cochlear implants (CIs). Only through comprehensive, longitudinal studies can the full impact of cochlear implantation be assessed. In this study, the performance of CI users was examined longitudinally on a battery of speech perception measures and compared with that of subjects with profound hearing loss who used conventional hearing aids (HAs). The average performance of the multichannel cochlear implant users gradually increased over time and continued to improve even after 5 years of CI use. Speech intelligibility was assessed from recordings of the subjects' elicited speech played to panels of listeners. Intelligibility was scored as the percentage of words correctly understood. The average scores for subjects who had used their CI for 4 years or more exceeded 40%.
    No preview · Article · Jul 2009 · Acta Oto-Laryngologica
  • ABSTRACT: There is evidence that familiarity with the talker facilitates speech recognition (see Nygaard, 2005, for a review). Identifying talker voices from an auditory speech signal is difficult for cochlear implant users because of the lack of spectral detail conveyed by their processors. Recently, Sheffert and Olson (2004) demonstrated facilitation in voice learning when listeners with normal hearing were trained using audiovisual speech. In the present study, normal-hearing listeners were presented with speech processed through a 12-channel noise-band cochlear implant simulator and trained to recognize voices in an auditory-only or audiovisual format. Listeners in the audiovisual format required fewer sessions to reach criterion (80% accuracy) than those in the auditory-only format. Voice learning varied as a function of the talkers included in the training set. For the talkers that were more difficult to identify, audiovisual training yielded greater improvements in voice identification than did auditory-only training. After training, participants completed a word recognition task using processed sentences produced by training talkers and novel talkers. Talker familiarity did not influence word recognition performance for either training group. This may be due to ceiling effects with the 12-channel noise-band processor. Additional testing in noise is underway.
    No preview · Article · May 2009 · The Journal of the Acoustical Society of America
  • Source
    ABSTRACT: The increased access to sound that cochlear implants have provided to profoundly deaf children has allowed them to develop English speech and language skills more successfully than with hearing aids alone. The purpose of this study was to determine how well early postimplant language skills predicted later language ability. Thirty children who received a cochlear implant between 1991 and 2000 participated. The Reynell Developmental Language Scales (RDLS) and the Clinical Evaluation of Language Fundamentals (CELF) were used as language measures. Results revealed that early receptive language skills as measured with the RDLS were good predictors of later core language ability assessed by the CELF. In contrast, early expressive language skills were not good predictors of later language performance. The age at which a child received an implant had a significant impact on the early language measures, but not on the later language measure or on the ability of the RDLS to predict performance on the CELF.
    Full-text · Article · Aug 2008 · Audiology and Neurotology
  • Source
    ABSTRACT: This study demonstrated that children who receive a cochlear implant before the age of 2 years obtain higher mean receptive and expressive language scores than children implanted after the age of 2 years. The purpose of this study was to compare the receptive and expressive language skills of children who received a cochlear implant before 1 year of age with those of children who received an implant between 1 and 3 years of age. Standardized language measures, the Reynell Developmental Language Scales (RDLS) and the Preschool Language Scale (PLS), were used to assess the receptive and expressive language skills of 91 children who received an implant before their third birthday. The mean receptive and expressive language scores on the RDLS and the PLS were slightly higher for the children implanted before the age of 2 years than for those implanted after the age of 2 years. For the PLS, both the receptive and expressive mean standard scores decreased with increasing age at implantation.
    Full-text · Article · May 2008 · Acta Oto-Laryngologica
  • Source
    ABSTRACT: This study examined how prelingually deafened children with cochlear implants combine visual information from lipreading with auditory cues in an open-set speech perception task. A secondary aim was to examine lexical effects on the recognition of words in isolation and in sentences. Fifteen children with cochlear implants served as participants in this study. Participants were administered two tests of spoken word recognition. The Lexical Neighborhood Test (LNT) assessed isolated word recognition in an auditory-only format. The Audiovisual Lexical Neighborhood Sentence Test (AV-LNST) assessed recognition of key words in sentences in visual-only, auditory-only, and audiovisual presentation formats. On each test, lexical characteristics of the stimulus items were controlled to assess the effects of lexical competition. The children also were administered a test of receptive vocabulary knowledge. The results revealed that recognition of key words was significantly influenced by presentation format. Audiovisual speech perception was best, followed by auditory-only and visual-only presentation, respectively. Lexical effects on spoken word recognition were evident for isolated words, but not when words were presented in sentences. Finally, there was a significant relationship between auditory-only and audiovisual word recognition and language knowledge. The results demonstrate that children with cochlear implants obtain significant benefit from audiovisual speech integration, and suggest that such tests should be included in test batteries intended to evaluate cochlear implant outcomes.
    Full-text · Article · Dec 2007 · Audiological Medicine
  • Nan Mai Wang · Tsun Sheng Huang · Che-Ming Wu · Karen Iler Kirk
    ABSTRACT: Cochlear implantation is an established method of auditory rehabilitation for severely and profoundly hearing-impaired individuals. Although numerous studies have examined communication outcomes in pediatric cochlear implant (CI) recipients, data concerning the benefits of cochlear implantation in children who speak Mandarin Chinese are lacking. This study examined communication outcomes in 29 Mandarin-speaking children implanted at Chang Gung Memorial Hospital. A prospective between-groups design was used to compare communication outcomes as a function of age at implantation. Children in the Younger group were implanted before 3 years of age, whereas children in the Older group were implanted after 3 years of age. Outcome measures assessed auditory thresholds, speech perception, speech intelligibility, receptive and expressive language skills, communication barriers, and communication mode. Correlation analysis was used to examine the relationship between communication outcomes and age at implantation. Children in the Younger group performed significantly better than children in the Older group on Mandarin vowels, consonants, tones, and open-set speech perception. Between-group differences also were shown on receptive and expressive language skills. However, no significant differences were noted in speech intelligibility or in self-ratings of communication barriers. A larger proportion of children in the Younger group used oral communication and were educated in mainstream classrooms. The change in communication mode among the Younger group reached significance after cochlear implantation. Speech perception performance was negatively correlated with age at implantation as well as with chronological age. Mandarin-speaking children can obtain substantial communication benefits from cochlear implantation, with earlier implantation yielding superior results.
    No preview · Article · Dec 2007 · International Journal of Pediatric Otorhinolaryngology
  • Source
    Marcia J Hay-McCutcheon · David B Pisoni · Karen Iler Kirk
    ABSTRACT: This study examined the speech perception skills of a younger and an older group of cochlear implant recipients to determine the benefit that auditory and visual information provides for speech understanding. Retrospective review. Pre- and postimplantation speech perception scores from the Consonant-Nucleus-Consonant (CNC) test, the Hearing In Noise Test (HINT), and the City University of New York (CUNY) sentence test were analyzed for 34 postlingually deafened adult cochlear implant recipients. Half were elderly (i.e., >65 years old) and the other half were middle aged (i.e., 39-53 years old). The CNC and HINT tests were administered using auditory-only presentation; the CUNY test was administered using auditory-only, vision-only, and audiovisual presentation conditions. No differences were observed between the two age groups on the CNC and HINT tests. For a subset of individuals tested with the CUNY sentences, we found that the preimplantation speechreading scores of the younger group correlated negatively with auditory-only postimplant performance. Additionally, older individuals demonstrated a greater reliance on the integration of auditory and visual information to understand sentences than did the younger group. On average, the auditory-only speech perception performance of older cochlear implant recipients was similar to that of younger adults. However, variability in speech perception abilities was observed within and between both age groups. Differences in speechreading skills between the younger and older individuals suggest that visual speech information is processed differently by elderly individuals than by younger adult cochlear implant recipients.
    Full-text · Article · Oct 2005 · The Laryngoscope
  • Source
    ABSTRACT: With broadening candidacy criteria for cochlear implantation, a greater number of pediatric candidates have usable residual hearing in their nonimplanted ears. This population potentially stands to benefit from continued use of conventional amplification in their nonimplanted ears. The purposes of this investigation were to evaluate whether children with residual hearing in their nonimplanted ears benefit from bilateral use of cochlear implants and hearing aids and to investigate the time course of adaptation to combined use of the devices together. Pediatric cochlear implant recipients with severe sensorineural hearing loss in their nonimplanted ears served as participants. Ten children continued to use hearing aids in their nonimplanted ears after cochlear implantation; 12 children used their cochlear implants exclusively. Participants were tested longitudinally on spoken word recognition measures at 6-month intervals. The children who continued wearing hearing aids were tested in three sensory aid conditions: cochlear implants alone, hearing aids alone, and cochlear implants in conjunction with hearing aids. The children who did not continue hearing aid use were tested after surgery in their only aided condition, cochlear implant alone. The results suggest that children with severe hearing loss who continued using hearing aids in their nonimplanted ears benefited from combining the acoustic input received from a hearing aid with the input received from a cochlear implant, particularly in background noise. However, this benefit emerged with experience. Our findings suggest that it is appropriate to encourage pediatric cochlear implant recipients with severe hearing loss to continue wearing an appropriately fitted hearing aid in the nonimplanted ear to maximally benefit from bilateral stimulation.
    Full-text · Article · Sep 2005 · Ear and Hearing
  • ABSTRACT: The Audiovisual Lexical Neighborhood Sentence Test (AV-LNST), a new recorded speech recognition test for children with sensory aids, was administered in multiple presentation modalities to children with normal hearing and vision. Each sentence consists of three key words whose lexical difficulty is controlled according to the Neighborhood Activation Model (NAM) of spoken word recognition. According to NAM, the recognition of spoken words is influenced by two lexical factors: the frequency of occurrence of individual words in a language, and how phonemically similar the target word is to other words in the listener's lexicon. These predictions are based on auditory similarity only and thus do not take into account how visual information can influence the perception of speech. Data from the AV-LNST, together with those from recorded audiovisual versions of isolated word recognition measures, the Lexical Neighborhood Test and the Multisyllabic Lexical Neighborhood Test, were used to examine the influence of visual information on speech perception in children. Further, the influence of top-down processing on speech recognition was examined by evaluating performance on the recognition of words in isolation versus words in sentences. [Work supported by the American Speech-Language-Hearing Foundation, the American Hearing Research Foundation, and the NIDCD, T32 DC00012 to Indiana University.]
    No preview · Article · Sep 2005 · The Journal of the Acoustical Society of America
  • ABSTRACT: Profound sensorineural hearing loss secondary to cochlear dysplasia presents a number of surgical challenges during cochlear implantation. The standard transmastoid-facial recess approach can be performed in the majority of cases. In cases of common cavity deformity, the transmastoid labyrinthotomy approach has a number of advantages. A high incidence of CSF gushers occurs in this population but can be managed by creating a small cochleostomy and sealing it tightly with connective tissue. Acceptable postoperative speech perception results can be expected.
    No preview · Article · Jun 2005 · Operative Techniques in Otolaryngology-Head and Neck Surgery
  • Source
    Rachael Frush Holt · Karen Iler Kirk
    ABSTRACT: The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. 
However, their performance may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.
    Full-text · Article · May 2005 · Ear and Hearing
  • Source
    Miranda Cleary · David B Pisoni · Karen Iler Kirk
    ABSTRACT: The perception of voice similarity was examined in 5-year-old children with normal hearing sensitivity and in pediatric cochlear implant users, 5-12 years of age. Recorded sentences were manipulated to form a continuum of similar-sounding voices. An adaptive procedure was then used to determine how acoustically different, in terms of average fundamental and formant frequencies, 2 sentences needed to be for a child to categorize the sentences as spoken by 2 different talkers. The average spectral characteristics of 2 utterances (including their fundamental frequencies) needed to differ by at least 11%-16% (2-2.5 semitones) for normal-hearing children to perceive the voices as belonging to different talkers. Introducing differences in the linguistic content of the 2 sentences to be compared did not change performance. Although several children with cochlear implants performed similarly to normal-hearing children, most found the task very difficult. Pediatric cochlear implant users who scored above the group mean of 64% of words correct on a monosyllabic open-set word identification task categorized the voices more like children with normal hearing sensitivity.
    Preview · Article · Mar 2005 · Journal of Speech Language and Hearing Research
  • Source
    ABSTRACT: An experimental procedure was developed to investigate the word-learning skills of children who use cochlear implants (CIs). Using interactive play scenarios, 2- to 5-year-olds were presented with sets of objects (Beanie Baby stuffed animals) and words for their names that corresponded to salient perceptual attributes (e.g., "horns" for a goat). Their knowledge of the word-object associations was measured immediately after exposure and then following a 2-hour delay. Children who use cochlear implants performed more poorly than age-matched children with typical hearing, both receptively and expressively. Both groups of children showed retention of the word-object associations in the delayed testing conditions for words that were previously known. Our findings suggest that although pediatric CI users may have impaired phonological processing skills, their long-term memory for familiar words may be similar to that of children with typical hearing. Further, the methods developed in this study should be useful for investigating other aspects of word learning in children who use CIs.
    Full-text · Article · Jan 2005 · The Volta review

Publication Stats

3k Citations
97.62 Total Impact Points


  • 2015
    • University of Illinois, Urbana-Champaign
      • Department of Speech and Hearing Science
      Urbana, Illinois, United States
  • 2012-2013
    • University of Iowa
      • Department of Communication Sciences and Disorders
      Iowa City, Iowa, United States
  • 2007-2011
    • Purdue University
      • Department of Speech, Language and Hearing Sciences
      West Lafayette, Indiana, United States
  • 1995-2010
    • Indiana University-Purdue University Indianapolis
      • Department of Otolaryngology-Head and Neck Surgery
      Indianapolis, Indiana, United States
  • 1996-2005
    • Indiana University School of Medicine
      • Otolaryngology–Head & Neck Surgery
      Indianapolis, Indiana, United States
  • 1995-2003
    • Indiana University Bloomington
      • Department of Psychological and Brain Sciences
      Bloomington, Indiana, United States
  • 2002
    • University of Indianapolis
      Indianapolis, Indiana, United States