A Voice-Input Voice-Output Communication Aid for People With Severe Speech Impairment

ABSTRACT A new form of augmentative and alternative communication (AAC) device for people with severe speech impairment, the voice-input voice-output communication aid (VIVOCA), is described. The VIVOCA recognizes the disordered speech of the user and builds messages, which are converted into synthetic speech. System development was carried out using user-centered design and development methods, which identified and refined key requirements for the device. A novel methodology for building small-vocabulary, speaker-dependent automatic speech recognizers from reduced amounts of training data was applied. Experiments showed that this method achieves good recognition performance (mean accuracy 96%) on highly disordered speech, even when recognition perplexity is increased. The selected message-building technique trades off various factors, including speed of message construction and the range of available message outputs. The VIVOCA was evaluated in a field trial by individuals with moderate to severe dysarthria, which confirmed that they can use the device to produce intelligible speech output from disordered speech input. The trial also highlighted issues that limit the performance and usability of the device in real usage situations, where mean recognition accuracy fell to 67%. These limitations will be addressed in future work.

  • ABSTRACT: Over the past decade, several speech-based electronic assistive technologies (EATs) have been developed that target users with dysarthric speech. These EATs include vocal command-and-control systems as well as voice-input voice-output communication aids (VIVOCAs). In these systems, the vocal interfaces are based on automatic speech recognition (ASR) systems, but this approach requires large amounts of training data and detailed annotation. In this work we evaluate an alternative approach, which mines utterance-based representations of speech for recurrent acoustic patterns, with the goal of achieving usable recognition accuracies with less speaker-specific training data. Comparisons with a conventional ASR system on dysarthric speech databases show that the proposed approach offers a substantial reduction in the amount of training data needed to achieve the same recognition accuracies. Index Terms: vocal user interface, dysarthric speech, non-negative matrix factorisation
    IEEE Spoken Language Technology Workshop; 12/2014
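    The pattern-mining approach above rests on non-negative matrix factorisation. Below is a minimal numpy sketch of multiplicative-update NMF (Lee-Seung, Frobenius objective), showing only the factorisation step, not the full vocal interface; the matrix sizes and synthetic data are illustrative assumptions:

    ```python
    import numpy as np

    def nmf(V, k, iters=300, eps=1e-9):
        """Factorise non-negative V (features x utterances) as V ~ W @ H
        using multiplicative updates on the Frobenius objective."""
        rng = np.random.default_rng(0)
        n, m = V.shape
        W = rng.random((n, k)) + eps   # k candidate recurrent acoustic patterns
        H = rng.random((k, m)) + eps   # pattern activations per utterance
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Synthetic check: a matrix built from 3 latent non-negative patterns
    # should be recovered with low reconstruction error.
    rng = np.random.default_rng(1)
    V = rng.random((40, 3)) @ rng.random((3, 60))
    W, H = nmf(V, k=3)
    rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    ```

    Because the updates are multiplicative, W and H stay non-negative throughout, which is what lets the learned columns of W be read as additive acoustic patterns.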
  • ABSTRACT: Dysarthria is a frequently occurring motor speech disorder which can be caused by neurological trauma, cerebral palsy, or degenerative neurological diseases. Because dysarthria affects phonation, articulation, and prosody, the spoken communication of dysarthric speakers is seriously restricted, affecting their quality of life and confidence. Assistive technology has led to the development of speech applications to improve the spoken communication of dysarthric speakers. In this field, this paper presents an approach to improve the accuracy of HMM-based speech recognition systems. Because phonatory dysfunction is a main characteristic of dysarthric speech, the phonemes of a dysarthric speaker are affected at different levels. Thus, the approach consists of finding the most suitable type of HMM topology (Bakis, Ergodic) for each phoneme in the speaker's phonetic repertoire. The topology is further refined with a suitable number of states and Gaussian mixture components for acoustic modelling. This differs from studies where a single topology is assumed for all phonemes. Finding the suitable parameters (topology and mixture components) is performed with a Genetic Algorithm (GA). Experiments with a well-known dysarthric speech database showed statistically significant improvements of the proposed approach over the single-topology approach, even for speakers with severe dysarthria.
    Computational and Mathematical Methods in Medicine 10/2013; 2013:297860. DOI:10.1155/2013/297860 · 1.02 Impact Factor
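    The per-phoneme search can be illustrated with a small genetic algorithm over (topology, number of states, number of mixture components) triples. In this sketch the fitness function is a synthetic stand-in where the paper would use held-out recognition accuracy from actual HMM training; the configuration space, scoring, and GA parameters are all illustrative assumptions:

    ```python
    import random

    # Candidate genes for one phoneme's acoustic model (illustrative ranges).
    CONFIG_SPACE = {
        "topology": ["bakis", "ergodic"],
        "n_states": [3, 4, 5, 6, 7],
        "n_mix": [1, 2, 4, 8],
    }

    def fitness(cfg):
        # Stand-in for held-out recognition accuracy after HMM training:
        # a synthetic score that peaks at one known configuration.
        score = 1.0 if cfg["topology"] == "bakis" else 0.0
        score -= abs(cfg["n_states"] - 5) * 0.2
        score -= abs(cfg["n_mix"] - 4) * 0.1
        return score

    def random_cfg(rng):
        return {k: rng.choice(v) for k, v in CONFIG_SPACE.items()}

    def ga(rng, pop_size=20, generations=30, mut_rate=0.2):
        pop = [random_cfg(rng) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]        # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                child = {k: rng.choice([a[k], b[k]]) for k in a}  # crossover
                if rng.random() < mut_rate:                        # mutation
                    gene = rng.choice(list(CONFIG_SPACE))
                    child[gene] = rng.choice(CONFIG_SPACE[gene])
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = ga(random.Random(42))
    ```

    Keeping the top half of each generation (elitism) guarantees the best configuration found so far is never lost, which matters when each fitness evaluation is as expensive as training and testing an HMM.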
  • ABSTRACT: Power cepstrum-based parameters for steady-state visually evoked potentials (SSVEPs) are proposed. To precisely represent the characteristics of the frequency responses of a visually stimulated electroencephalography (EEG) signal, power cepstrum analysis is adopted to estimate the parameters in a low-dimensional space. To represent the frequency responses of the SSVEP, the log-magnitude spectrum of an EEG signal is estimated by fast Fourier transform. Subsequently, the discrete cosine transform is applied to linearly transform the log-magnitude spectrum into the cepstrum domain and generate a set of coefficients. Finally, a Bayesian decision model with a Gaussian mixture model is adopted to classify the SSVEP responses. The experimental results demonstrated that the proposed approach improved performance compared with previous approaches and is suitable for use in brain-computer interface applications.
    Electronics Letters 05/2014; 50(10):735-737. DOI:10.1049/el.2014.0173 · 1.07 Impact Factor
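    The feature pipeline described above (FFT log-magnitude spectrum, DCT into the cepstrum domain, Bayesian classification) can be sketched in numpy. For brevity this sketch uses a single diagonal Gaussian per class where the paper uses a Gaussian mixture model, and the synthetic 10 Hz / 15 Hz "EEG" trials, sampling rate, and coefficient count are illustrative assumptions:

    ```python
    import numpy as np

    def cepstral_features(signal, n_coeffs=12):
        """Log-magnitude spectrum via FFT, then DCT-II into the cepstrum domain."""
        spec = np.log(np.abs(np.fft.rfft(signal)) + 1e-12)
        N = len(spec)
        n = np.arange(N)
        return np.array([np.sum(spec * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                         for k in range(n_coeffs)])

    class GaussianBayes:
        """Bayesian decision rule with one diagonal Gaussian per class."""
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.mu_ = {c: X[y == c].mean(0) for c in self.classes_}
            self.var_ = {c: X[y == c].var(0) + 1e-6 for c in self.classes_}
            self.prior_ = {c: np.mean(y == c) for c in self.classes_}
            return self
        def predict(self, X):
            def logpost(x, c):
                return (np.log(self.prior_[c])
                        - 0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                                       + (x - self.mu_[c]) ** 2 / self.var_[c]))
            return np.array([max(self.classes_, key=lambda c: logpost(x, c))
                             for x in X])

    # Synthetic SSVEP-like trials: sinusoids at two stimulation frequencies
    # plus noise, 1 s at 250 Hz.
    fs = 250
    t = np.arange(fs) / fs
    rng = np.random.default_rng(0)
    def make_trial(f_hz):
        return np.sin(2 * np.pi * f_hz * t) + 0.3 * rng.standard_normal(fs)

    X = np.array([cepstral_features(make_trial(f))
                  for f in [10.0] * 30 + [15.0] * 30])
    y = np.array([0] * 30 + [1] * 30)
    tr, te = np.r_[0:20, 30:50], np.r_[20:30, 50:60]
    acc = np.mean(GaussianBayes().fit(X[tr], y[tr]).predict(X[te]) == y[te])
    ```

    The DCT step compacts the smooth shape of the log spectrum into a handful of coefficients, which is what keeps the classifier's input low-dimensional.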