Conference Paper

Parallel sequential running neural network and its application to automatic speech recognition


Abstract

A novel parallel sequential running neural network (PSRNN) is developed. It consists of subnets of identical construction, each trained sequentially on different tokens, and it performs recognition by consulting the subnets in the order in which they were trained. PSRNN performs better than the multilayer perceptron (MLP), can learn adaptively, and can be expanded easily. The authors applied PSRNN to speaker-independent isolated-word recognition: the system was trained on speech from 45 speakers to recognize the ten Chinese digits, and achieved 97% accuracy when tested on another 10 speakers.
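The full text is unavailable here, so the paper's exact subnet topology, training rule, and decision test are unknown. The sketch below is one plausible reading of the abstract, assuming each subnet is a small one-hidden-layer MLP and that a subnet decides when its top class posterior clears a confidence threshold; every name and hyperparameter is an illustrative assumption, not the authors' configuration.

```python
# Sketch of the PSRNN flow described in the abstract: identical subnets
# trained sequentially on different token sets, then consulted in
# training order at recognition time. Assumptions are noted in comments.
import numpy as np

rng = np.random.default_rng(0)

class Subnet:
    """One PSRNN subnet; all subnets share this same construction
    (a one-hidden-layer MLP is an assumed, not confirmed, topology)."""
    def __init__(self, n_in, n_hidden, n_classes):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))

    def posteriors(self, x):
        h = np.tanh(x @ self.W1)
        z = h @ self.W2
        e = np.exp(z - z.max())
        return e / e.sum()                     # softmax class posteriors

    def train(self, X, Y, lr=0.5, epochs=300):
        """Per-sample gradient descent on cross-entropy (illustrative)."""
        for _ in range(epochs):
            for x, y in zip(X, Y):             # y is a one-hot vector
                h = np.tanh(x @ self.W1)
                z = h @ self.W2
                e = np.exp(z - z.max())
                p = e / e.sum()
                dz = p - y                     # softmax/cross-entropy grad
                dh = (self.W2 @ dz) * (1.0 - h ** 2)
                self.W2 -= lr * np.outer(h, dz)
                self.W1 -= lr * np.outer(x, dh)

class PSRNN:
    """Subnets are trained one after another on different tokens and
    queried in the same order; the threshold test is an assumption."""
    def __init__(self, n_in, n_hidden, n_classes, threshold=0.9):
        self.shape = (n_in, n_hidden, n_classes)
        self.threshold = threshold
        self.subnets = []

    def add_subnet(self, X, Y):
        """Adaptive expansion: fit a new subnet on new tokens only,
        leaving previously trained subnets untouched."""
        net = Subnet(*self.shape)
        net.train(X, Y)
        self.subnets.append(net)

    def recognize(self, x):
        best_label, best_conf = None, -1.0
        for net in self.subnets:               # order of training
            p = net.posteriors(x)
            if p.max() >= self.threshold:      # first confident subnet wins
                return int(p.argmax())
            if p.max() > best_conf:
                best_label, best_conf = int(p.argmax()), float(p.max())
        return best_label                      # fall back: most confident
```

Under this reading, the "learn adaptively and expand easily" claim corresponds to add_subnet: new speakers or vocabulary are absorbed by appending and training one more subnet, without retraining the existing chain.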


References
Article
We develop a technique to test the hypothesis that multilayered, feed-forward networks with few units on the first hidden layer generalize better than networks with many units in the first layer. Large networks are trained to perform a classification task and the redundant units are removed (“pruning”) to produce the smallest network capable of performing the task. A technique for inserting layers where pruning has introduced linear inseparability is also described. Two tests of ability to generalize are used—the ability to classify training inputs corrupted by noise and the ability to classify new patterns from each class. The hypothesis is found to be false for networks trained with noisy inputs. Pruning to the minimum number of units in the first layer produces networks which correctly classify the training set but generalize poorly compared with larger networks.
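As a rough illustration of the pruning step this abstract describes, the sketch below greedily deletes first-hidden-layer units as long as the shrunken network still classifies the entire training set correctly. The trial-deletion rule and the network form are assumptions, not the article's actual redundancy criterion.

```python
# Illustrative unit pruning for a one-hidden-layer classifier: keep
# deleting hidden units while training accuracy stays perfect.
import numpy as np

def predict(W1, W2, X):
    """Forward pass; X is (n_samples, n_in), returns class indices."""
    return np.argmax(np.tanh(X @ W1) @ W2, axis=1)

def prune_units(W1, W2, X, y):
    """Greedy trial deletion (an assumed stand-in for the article's
    redundancy test): remove any unit whose removal costs no training
    errors, and repeat until no such unit remains."""
    removed = True
    while removed and W1.shape[1] > 1:
        removed = False
        for j in range(W1.shape[1]):
            W1_try = np.delete(W1, j, axis=1)   # drop unit j's inputs
            W2_try = np.delete(W2, j, axis=0)   # and its outgoing row
            if np.all(predict(W1_try, W2_try, X) == y):
                W1, W2 = W1_try, W2_try         # unit j was redundant
                removed = True
                break
    return W1, W2
```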
Article
A new neural-network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN) is presented. The new architecture involves a number of stages in which each stage can be a particular neural network (SNN). At the end of each stage, error detection is carried out, and a number of input vectors are rejected. Between two stages there is a nonlinear transformation of input vectors rejected by the previous stage. The new architecture has many desirable properties, such as optimized system complexity (in the sense of minimized self-organizing number of stages), high classification accuracy, minimized learning and recall times, and truly parallel architectures in which all stages operate simultaneously without waiting for data from other stages during testing. The experiments performed indicated the superiority of the new architecture over multilayered networks with back-propagation training.
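The staged reject-and-transform control flow described here can be sketched as follows; the per-stage classifier interface, the confidence-based error-detection test, and the sine transform are all illustrative assumptions rather than the paper's own rejection scheme.

```python
# Sketch of the PSHNN test-time flow in this abstract: each stage
# classifies, error detection rejects unconvincing vectors, and rejected
# vectors are nonlinearly transformed before the next stage sees them.
import numpy as np

def pshnn_classify(x, stages, transform=np.sin, threshold=0.8):
    """stages: callables mapping a feature vector to class posteriors.
    The threshold test and np.sin transform are assumptions."""
    label = None
    for stage in stages:                  # stage networks (SNNs) in order
        p = stage(x)
        label = int(np.argmax(p))
        if p.max() >= threshold:          # error detection: accept
            return label
        x = transform(x)                  # nonlinear transform of reject
    return label                          # every stage rejected
```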
A real-time speaker independent isolated word recognition system
  • Yu Tie-Cheng
  • Bi Ning
  • Rong Mei-Ling