Jon Gunderson

University of Illinois, Urbana-Champaign, Urbana, Illinois, United States

Publications (10) · 5.16 Total Impact

  • Source
    ABSTRACT: This paper describes the results of our first experiments in small and medium vocabulary dysarthric speech recognition, using the database being recorded by our group under the Universal Access initiative. We develop and test speaker-dependent, word- and phone-level speech recognizers utilizing the Hidden Markov Model architecture; the models are trained exclusively on dysarthric speech produced by individuals diagnosed with cerebral palsy. The experiments indicate that (a) different system configurations (word- vs. phone-based units, number of states per HMM, number of Gaussian components per state-specific observation probability density, etc.) give useful performance (in terms of recognition accuracy) for different speakers and different task vocabularies, and (b) for subjects with very low intelligibility, automatic speech recognition outperforms human listeners on recognizing dysarthric speech.
    01/2009;
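    The configuration sweep described in the abstract above (word- vs. phone-level units, states per HMM, Gaussian components per state) can be sketched as a simple grid search. The function names and the specific parameter values below are illustrative assumptions, not the paper's actual settings.

```python
from itertools import product

# Hypothetical grid over the HMM design choices the paper compares:
# word- vs. phone-level units, states per HMM, Gaussians per state.
UNITS = ["word", "phone"]
N_STATES = [3, 5, 8]
N_GAUSSIANS = [1, 2, 4]

def hmm_configurations():
    """Enumerate (unit, n_states, n_gaussians) triples to evaluate
    per speaker; the paper reports that the best setting differs
    across speakers and task vocabularies."""
    return list(product(UNITS, N_STATES, N_GAUSSIANS))

def best_config(accuracies):
    """Pick the configuration with the highest recognition accuracy.
    `accuracies` maps a configuration triple to a per-speaker score."""
    return max(accuracies, key=accuracies.get)
```

    In practice each triple would parameterize the training of a separate recognizer per speaker, and `best_config` would be applied to held-out accuracy rather than training accuracy.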
  • Jon Gunderson
    ABSTRACT: The problem with many automated web accessibility testing tools is that they assume a repair-oriented approach to web accessibility. The functional web accessibility approach, in contrast, is grounded in design best practices for creating web resources. These best practices build on web standards to increase developer acceptance: developers benefit from the design efficiencies of web standards while building highly accessible websites using best-practice coding techniques. Automated testing tools can then look for these coding patterns to verify accessibility. The best practices are, in essence, effective techniques for implementing web accessibility standards such as Section 508 or guidelines such as the W3C Web Content Accessibility Guidelines.
    Universal Access in Human-Computer Interaction. Addressing Diversity, 5th International Conference, UAHCI 2009, Held as Part of HCI International 2009, San Diego, CA, USA, July 19-24, 2009. Proceedings, Part I; 01/2009
  • Source
    ABSTRACT: This paper describes a database of dysarthric speech produced by 19 speakers with cerebral palsy. Speech materials consist of 765 isolated words per speaker: 300 distinct uncommon words and 3 repetitions of digits, computer commands, the radio alphabet, and common words. Data are recorded through an 8-microphone array and one digital video camera. Our database provides a fundamental resource for automatic speech recognition development for people with neuromotor disability. Research on articulation errors in dysarthria will benefit clinical treatments and contribute to our knowledge of neuromotor mechanisms in speech production. Data files are available via secure ftp upon request.
    INTERSPEECH 2008, 9th Annual Conference of the International Speech Communication Association, Brisbane, Australia, September 22-26, 2008; 01/2008
  • Jon Gunderson
    ABSTRACT: Web browsers play an important role in the accessibility of the Web by people with disabilities. The features that Web browsers provide to control the rendering of content, to navigate and orient to document structure, and to control automatic behaviors determine both the accessibility techniques Web authors can use to make their resources more accessible and the level of usability available to people with disabilities in accessing Web resources. Browsers are also critical to the accessibility of Web 2.0 widgets built from HTML, CSS, and JavaScript, through their support for new W3C technologies that make Web applications more accessible.
    12/2007: pages 163-193;
  • Source
    ABSTRACT: This paper studies the speech of three talkers with spastic dysarthria caused by cerebral palsy. All three subjects share the symptom of low intelligibility, but the causes differ. First, all subjects tend to reduce or delete word-initial consonants; one subject deletes all consonants. Second, one subject exhibits a painstaking stutter. Two algorithms were used to develop automatic isolated digit recognition systems for these subjects. HMM-based recognition was successful for two subjects, but failed for the subject who deletes all consonants. Conversely, digit recognition experiments assuming a fixed word length (using SVMs) were successful for two subjects, but failed for the subject with the stutter.
    Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on; 06/2006
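    The fixed-word-length SVM approach mentioned in the abstract above reduces each utterance to a constant-dimension feature vector and classifies it with a support vector machine. The following toy Pegasos-style linear-SVM trainer on 2-D vectors is an illustrative stand-in only; the paper's actual acoustic front end, feature dimension, and kernel choices are not specified here.

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200):
    """Minimal Pegasos-style sub-gradient training of a linear SVM.
    xs: list of fixed-length feature vectors; ys: labels in {-1, +1}.
    Fixed-length inputs mirror the paper's setup, where each digit
    utterance maps to a vector of constant dimension."""
    dim = len(xs[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    rng = random.Random(0)
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            margin = ys[i] * (sum(wj * xj for wj, xj in zip(w, xs[i])) + b)
            # Regularization shrink, then a step toward violated margins.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
                b += eta * ys[i]
    return w, b

def predict(w, b, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

    For ten digits, a standard extension would be ten one-vs-rest classifiers of this form, choosing the digit with the largest decision value.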
  • Source
    Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS 2006, Portland, Oregon, USA, October 23-25, 2006; 01/2006
  • Source
    Jon Gunderson
    The Internet Encyclopedia, 04/2004; ISBN: 9780471482963
  • Jon Gunderson
    ABSTRACT: Web browsers and multimedia players play a critical role in making Web content accessible to people with disabilities. Access to Web content requires that Web browsers give users final control over the styling of rendered content, the type of content rendered, and the execution of automated behaviors. The features available in Web browsers determine the extent to which users can orient themselves and navigate the structure of Web resources. The World Wide Web Consortium (W3C) User Agent Accessibility Guidelines are part of the W3C Web Accessibility Initiative; the guidelines provide a comprehensive resource for Web browser and multimedia developers on the features needed to render Web content more accessibly to people with disabilities. UAAG 1.0 was developed over a period of four years and included extensive reviews to demonstrate that the proposed requirements can be implemented.
    Universal Access in the Information Society 03/2004; 3:38-47. · 0.53 Impact Factor
  • Source
    ABSTRACT: Automatic dictation software with reasonably high word recognition accuracy is now widely available to the general public. Many people with gross motor impairment, including some people with cerebral palsy and closed head injuries, have not enjoyed the benefit of these advances, because their general motor impairment includes a component of dysarthria: reduced speech intelligibility caused by neuromotor impairment. These motor impairments often preclude normal use of a keyboard. For this reason, case studies have shown that some dysarthric users may find it easier, instead of a keyboard, to use a small-vocabulary automatic speech recognition system, with code words representing letters and formatting commands, and with acoustic speech recognition models carefully adapted to the speech of the individual user. Development of each individualized speech recognition system remains extremely labor-intensive, because so little is understood about the general characteristics of dysarthric speech. We propose to study the general audio and visual characteristics of articulation errors in dysarthric speech, and to apply the results of our scientific study to the development of speaker-independent large-vocabulary and small-vocabulary audio and audiovisual dysarthric speech recognition systems.
    Scientific Merit: This project will research word-based, phone-based, and phonologic-feature-based audio and audiovisual speech recognition models for both small-vocabulary and large-vocabulary speech recognizers, designed to be used for unrestricted text entry on a personal computer. The models will be based on audio and video analysis of phonetically balanced speech samples from a group of speakers with dysarthria. Analysis will include speakers with reduced intelligibility caused by dysarthria, categorized into the following groups: very low intelligibility (0-25% intelligibility, as rated by human listeners), low intelligibility (25-50%), moderate intelligibility (50-75%), and high intelligibility (75-100%). Interactive phonetic analysis will seek to describe the talker-dependent characteristics of articulation error in dysarthria; based on analysis of preliminary data, we hypothesize that manner of articulation errors, place of articulation errors, and voicing errors are approximately independent events. Preliminary experiments also suggest that different dysarthric users will require dramatically different speech recognition architectures, because the symptoms of dysarthria vary so much from subject to subject. We propose to develop and test at least three categories of audio-only and audiovisual speech recognition algorithms for dysarthric users: phone-based and whole-word recognizers using hidden Markov models (HMMs), phonologic-feature-based and whole-word recognizers using support vector machines (SVMs), and hybrid SVM-HMM recognizers. The models will be evaluated to determine, first, overall recognition accuracy of each algorithm; second, changes in accuracy due to learning; third, group differences in accuracy due to severity of dysarthria; and fourth, dependence of accuracy on vocabulary size. The results of this research will contribute to scientific and technological knowledge about the acoustic and visual properties of dysarthric speech.
    Broader Impacts: This research will provide the foundation for constructing a speech recognition tool for practical use by computer users with neuromotor disabilities. Tools and data developed in this research will all be released open-source, and will be designed so that, if successful, the technology developed for this proposal may be easily ported to an open-source audiovisual speech recognition system for dysarthric users.
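    The four intelligibility bands in the proposal above can be captured as a small helper. The text's ranges overlap at their edges (0-25, 25-50, 50-75, 75-100), so assigning each boundary value to the upper band is an assumption made here for illustration.

```python
def intelligibility_group(score):
    """Map a human-rated intelligibility percentage to the four
    severity bands used in the proposal: very low (0-25), low
    (25-50), moderate (50-75), and high (75-100)."""
    if not 0 <= score <= 100:
        raise ValueError("intelligibility must be between 0 and 100")
    if score < 25:
        return "very low"
    if score < 50:
        return "low"
    if score < 75:
        return "moderate"
    return "high"
```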
  • Source