ABSTRACT: This paper describes a database of dysarthric speech produced by 19 speakers with cerebral palsy. Speech materials consist of 765 isolated words per speaker: 300 distinct uncommon words and 3 repetitions of digits, computer commands, the radio alphabet, and common words. Data are recorded through an 8-microphone array and one digital video camera. Our database provides a fundamental resource for automatic speech recognition development for people with neuromotor disability. Research on articulation errors in dysarthria will benefit clinical treatments and contribute to our knowledge of neuromotor mechanisms in speech production. Data files are available via secure ftp upon request.
INTERSPEECH 2008, 9th Annual Conference of the International Speech Communication Association, Brisbane, Australia, September 22-26, 2008
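The per-speaker word counts above can be checked with simple arithmetic; a minimal sketch in Python (the 155-item figure is derived from the stated totals, since the abstract does not give a per-category breakdown):

```python
# Per-speaker word counts for the database described above.
# 765 total = 300 distinct uncommon words + 3 repetitions of the
# remaining categories; the 155 items per repetition is derived,
# not stated in the abstract.
distinct_uncommon = 300
repetitions = 3
total_words = 765

repeated_items = (total_words - distinct_uncommon) // repetitions
print(repeated_items)  # 155
```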
ABSTRACT: Web browsers play an important role in the accessibility of the Web for people with disabilities. The features that Web browsers provide to control the rendering of content, to navigate and orient to document structure, and to control automatic behaviors determine the types of accessibility techniques Web authors can use to make their resources more accessible, and the level of usability available to people with disabilities in accessing Web resources. Web browsers therefore play a critical role in making Web applications more accessible.
ABSTRACT: This paper describes the results of our first experiments in small- and medium-vocabulary dysarthric speech recognition, using the database being recorded by our group under the Universal Access initiative. We develop and test speaker-dependent, word- and phone-level speech recognizers based on the Hidden Markov Model architecture; the models are trained exclusively on dysarthric speech produced by individuals diagnosed with cerebral palsy. The experiments indicate that (a) different system configurations (word- vs. phone-based modeling, number of states per HMM, number of Gaussian components per state-specific observation probability density, etc.) give useful performance (in terms of recognition accuracy) for different speakers and different task vocabularies, and (b) for subjects with very low intelligibility, automatic speech recognition outperforms human listeners in recognizing dysarthric speech.
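The configuration search described in (a) can be sketched as a grid over the modeling choices the abstract names; the specific values below are illustrative assumptions, not the settings used in the paper:

```python
from itertools import product

# Hypothetical configuration grid for the speaker-dependent recognizers
# described above. The choice sets (3/5/8 states, 1/2/4 mixtures) are
# illustrative assumptions, not values from the paper.
model_units = ["word", "phone"]
states_per_hmm = [3, 5, 8]
gaussians_per_state = [1, 2, 4]

configs = list(product(model_units, states_per_hmm, gaussians_per_state))
# One recognizer would be trained and scored per (unit, states, mixtures)
# combination, separately for each speaker and task vocabulary, and the
# best-scoring configuration kept per speaker.
print(len(configs))  # 18
```

The point of the grid is the paper's finding (a): no single configuration wins across speakers, so the sweep is run per speaker rather than once globally.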
ABSTRACT: Automatic dictation software with reasonably high word recognition accuracy is now widely available to the general public. Many people with gross motor impairment, including some people with cerebral palsy and closed head injuries, have not enjoyed the benefit of these advances, because their general motor impairment includes a component of dysarthria: reduced speech intelligibility caused by neuromotor impairment. These motor impairments often preclude normal use of a keyboard. For this reason, case studies have shown that some dysarthric users may find it easier, instead of a keyboard, to use a small-vocabulary automatic speech recognition system, with code words representing letters and formatting commands, and with acoustic speech recognition models carefully adapted to the speech of the individual user. Development of each individualized speech recognition system remains extremely labor-intensive, because so little is understood about the general characteristics of dysarthric speech. We propose to study the general audio and visual characteristics of articulation errors in dysarthric speech, and to apply the results of our scientific study to the development of speaker-independent large-vocabulary and small-vocabulary audio and audiovisual dysarthric speech recognition systems.

Scientific Merit: This project will research word-based, phone-based, and phonologic-feature-based audio and audiovisual speech recognition models for both small-vocabulary and large-vocabulary speech recognizers, designed to be used for unrestricted text entry on a personal computer. The models will be based on audio and video analysis of phonetically balanced speech samples from a group of speakers with dysarthria.
Analysis will include speakers with reduced intelligibility caused by dysarthria, categorized into the following groups: very low intelligibility (0-25% intelligibility, as rated by human listeners), low intelligibility (25-50%), moderate intelligibility (50-75%), and high intelligibility (75-100%). Interactive phonetic analysis will seek to describe the talker-dependent characteristics of articulation error in dysarthria; based on analysis of preliminary data, we hypothesize that manner of articulation errors, place of articulation errors, and voicing errors are approximately independent events. Preliminary experiments also suggest that different dysarthric users will require dramatically different speech recognition architectures, because the symptoms of dysarthria vary so much from subject to subject. We propose to develop and test at least three categories of audio-only and audiovisual speech recognition algorithms for dysarthric users: phone-based and whole-word recognizers using hidden Markov models (HMMs), phonologic-feature-based and whole-word recognizers using support vector machines (SVMs), and hybrid SVM-HMM recognizers. The models will be evaluated to determine, first, the overall recognition accuracy of each algorithm; second, changes in accuracy due to learning; third, group differences in accuracy due to severity of dysarthria; and fourth, dependence of accuracy on vocabulary size. The results of this research will contribute to scientific and technological knowledge about the acoustic and visual properties of dysarthric speech.

Broader Impacts: This research will provide the foundation for constructing a speech recognition tool for practical use by computer users with neuromotor disabilities. Tools and data developed in this research will all be released open-source, and will be designed so that, if successful, the technology developed for this proposal may be easily ported to an open-source audiovisual speech recognition system for dysarthric users.
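The four intelligibility groups used in the study design above can be expressed as a small classification helper; a minimal sketch, where the handling of the 25/50/75 boundary values is an assumption, since the proposal gives overlapping range labels:

```python
def intelligibility_group(pct: float) -> str:
    """Map a listener-rated intelligibility percentage (0-100) to one of
    the four severity groups from the study design. Assigning the exact
    boundary values 25/50/75 to the higher group is an assumption; the
    proposal's ranges (0-25, 25-50, 50-75, 75-100) overlap at boundaries."""
    if pct < 25:
        return "very low"
    if pct < 50:
        return "low"
    if pct < 75:
        return "moderate"
    return "high"

print(intelligibility_group(60))  # moderate
```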