Estimating the Intelligibility of Speakers with Dysarthria

Department of Communicative Disorders and Waisman Center, University of Wisconsin-Madison, Madison, WI 53706, USA.
Folia Phoniatrica et Logopaedica (Impact Factor: 0.59). 02/2006; 58(3):217-28. DOI: 10.1159/000091735
Source: PubMed


Many speakers with dysarthria have reduced intelligibility, and improving intelligibility is often a primary intervention objective. Consequently, measurement of intelligibility provides important information for clinical decision-making. The present study compared two measures of intelligibility obtained in audio-only and audio-visual modalities for 4 speakers with dysarthria (2 with mild-moderate dysarthria; 2 with severe dysarthria) secondary to cerebral palsy. A total of 80 college-aged listeners provided word-by-word orthographic transcriptions and made percent estimates of intelligibility, which served as the dependent variables. Group results showed that transcription measures were higher than percent estimates of intelligibility overall. There was also an interaction between speakers and measures of intelligibility, indicating that the difference between transcription scores and percent estimates varied among individual speakers. Results revealed a significant main effect for presentation modality, with the audio-visual modality yielding slightly higher scores than the audio-only modality; however, presentation modality did not interact with speakers or with measures of intelligibility. Results suggest that standard clinical measurement of intelligibility using orthographic transcription may be more consistent than the use of more subjective percent estimates.
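
The transcription measure referred to above is typically scored as the percentage of target words a listener reproduces correctly, whereas a percent estimate is a single subjective rating per passage. A minimal sketch of the word-level scoring in Python, with made-up sentences and a simple position-by-position match rather than the study's exact scoring rules:

    # Sketch: percent words correct from an orthographic transcription.
    # The sentences are invented; real scoring conventions (homophones, misspellings,
    # word-order slips) are more forgiving and are not modeled here.
    def percent_words_correct(target: str, transcription: str) -> float:
        target_words = target.lower().split()
        heard_words = transcription.lower().split()
        correct = sum(1 for t, h in zip(target_words, heard_words) if t == h)
        return 100.0 * correct / len(target_words)

    target = "the boy walked to the store"
    heard = "the boy talked to a store"
    print(f"{percent_words_correct(target, heard):.1f}% words correct")  # 66.7%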

    • "Since our data include full sentence prompts from the TIMIT database, single-word tests of intelligibility were not applicable. Hustad (2006) suggested that orthographic transcriptions provide a more accurate predictor of intelligibility of dysarthric speech than the more subjective estimates used in clinical settings, e.g., Enderby (1983). That study had 80 listeners who transcribed audio and showed that intelligibility (as measured by the proportion of correct words identified in transcription according to Yorkston and Beukelman (1981)) increased from 61.9% given only acoustic stimuli to 66.75% given audiovisual stimuli on the transcription task in normal speech. "
    ABSTRACT: This paper presents a system that transforms the speech signals of speakers with physical speech disabilities into a more intelligible form that can be more easily understood by listeners. These transformations are based on the correction of pronunciation errors by the removal of repeated sounds, the insertion of deleted sounds, the devoicing of phonemes that should be unvoiced, the adjustment of the tempo of speech by phase vocoding, and the adjustment of the frequency characteristics of speech by anchor-based morphing of the spectrum. The transformations are motivated by observations of disordered articulation, including improper glottal voicing, lessened tongue movement, and lessened energy produced by the lungs. This system is a substantial step towards fully automated speech transformation without the need for expert or clinical intervention. Among human listeners, recognition rates rose to as much as 191% of those for the original speech (from 21.6% to 41.2%) when the module that corrects pronunciation errors was used. Several types of modified dysarthric speech signals were also supplied to a standard automatic speech recognition system; there, the proportion of words correctly recognized rose to as much as 121% of the original figure (from 72.7% to 87.9%) across various parameterizations of the recognizer. This represents a significant advance towards human-to-human assistive communication software and human–computer interaction.
    Computer Speech & Language 09/2013; 27(6):1163–1177. DOI: 10.1016/j.csl.2012.11.001 · 1.75 Impact Factor
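
    The tempo adjustment by phase vocoding mentioned in the abstract above can be illustrated with off-the-shelf tools. A minimal sketch using librosa; the file name and stretch rate are placeholders, and this is not the cited system's actual pipeline:

        # Sketch: change speaking tempo via a phase vocoder without shifting pitch.
        # "utterance.wav" and rate=1.25 are illustrative placeholders only.
        import librosa
        import soundfile as sf

        y, sr = librosa.load("utterance.wav", sr=None)        # keep native sample rate
        y_fast = librosa.effects.time_stretch(y, rate=1.25)   # >1 speeds up, <1 slows down
        sf.write("utterance_retimed.wav", y_fast, sr)

    Phase vocoding is used here because it modifies timing in the frequency domain, so the speaker's pitch and spectral envelope are largely preserved while the speaking rate changes.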

    ABSTRACT: Background: Reduced speech intelligibility is considered one of the main characteristics of individuals with speech disorders and is an important issue for clinical and research investigation. Despite its relevance, the literature offers no consensus on how to measure speech intelligibility. Beyond the diversity of existing methods, another important issue is the influence of certain variables on these measurements and, consequently, on the interpretation of their results. Aim: To investigate evidence of agreement between speech intelligibility measurements obtained through the different methods used in the assessment of speech disorders, and to identify the effects of variables related to the assessment procedure or to the listener. A critical review was carried out of articles indexed in the Medline, Web of Science, Lilacs and SciELO databases up to October 2007, using the keyword speech intelligibility. Conclusion: The reviewed literature provided no evidence of agreement between speech intelligibility measurements obtained through different methods, which limits the comparison of clinical and research results on the speech intelligibility of individuals with speech disorders. In addition, several variables were found to influence these measurements, such as the type of task and speech stimulus, the presentation modality of the signal, the type of response required, and the listener's experience with the speaker. These factors must be considered when interpreting the results of speech intelligibility tests.
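
    One way to quantify the agreement (or lack of it) between two intelligibility measures discussed in this review is to compare paired scores from the two methods for the same speakers. A minimal sketch in Python with invented scores, not data from any of the studies cited here:

        # Sketch: agreement between two intelligibility measures on the same speakers.
        # The paired scores are fabricated for illustration only.
        import numpy as np
        from scipy.stats import pearsonr

        transcription = np.array([62.0, 48.5, 91.0, 23.0, 70.5])   # % words correct
        estimates     = np.array([55.0, 40.0, 88.0, 30.0, 60.0])   # listener percent estimates

        r, p = pearsonr(transcription, estimates)
        bias = np.mean(transcription - estimates)   # mean difference (systematic offset)
        print(f"Pearson r = {r:.2f} (p = {p:.3f}); mean difference = {bias:.1f} points")

    A high correlation alone does not demonstrate agreement: the two methods can rank speakers identically while one yields systematically higher scores, which is why the mean difference (or a full Bland-Altman analysis) is also worth reporting.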
