January 1992
Various algorithms based on “neural network” (NN) ideas have been proposed as alternatives to hidden Markov models (HMMs) for automatic speech recognition. We first consider the conceptual differences and relative strengths of NN and HMM approaches, then examine a recurrent computation, motivated by HMMs, that can be regarded as a new kind of neural network especially suitable for dealing with patterns with sequential structure. This “alphanet” exposes interesting relationships between NNs and discriminative training of HMMs, and suggests methods for properly integrating the training of non-linear feed-forward data transformations with the rest of an HMM-style speech recognition system. We conclude that NNs and HMMs are not distinct, so there is no simple choice of one or the other. However, there are many detailed choices to be made, and many experiments to be done.
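The recurrent computation the abstract refers to is the HMM forward (“alpha”) recursion, whose per-frame update can be read as one step of a recurrent network. The sketch below illustrates that recursion in the log domain; the function name, the log-domain formulation, and the array layout are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def forward_alphas(log_b, log_A, log_pi):
    """HMM forward ("alpha") recursion in the log domain.

    log_b : (T, N) log output probabilities, frame t x state j
    log_A : (N, N) log transition matrix, log_A[i, j] = log P(j | i)
    log_pi: (N,)   log initial state probabilities
    Returns a (T, N) array of log alpha values; summing (in the
    probability domain) over the last row gives the sequence likelihood.
    """
    T, N = log_b.shape
    log_alpha = np.empty((T, N))
    log_alpha[0] = log_pi + log_b[0]
    for t in range(1, T):
        # The recurrent step: combine previous alphas with transitions
        # via a numerically stable log-sum-exp over predecessor states.
        prev = log_alpha[t - 1][:, None] + log_A
        m = prev.max(axis=0)
        log_alpha[t] = m + np.log(np.exp(prev - m).sum(axis=0)) + log_b[t]
    return log_alpha
```

Running one such recursion per word model and normalising the final alphas across models yields class posteriors, which is the softmax-like output that connects this computation to discriminative training.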