
ABSTRACT: The aim of this work is to develop a flexible and efficient
approach to classifying the ratio of voiced to unvoiced excitation
sources in continuous speech. To achieve this aim we adopt a
probabilistic neural network approach: a multilayer perceptron
classifier trained by steepest descent minimization of the least
relative entropy (LRE) cost function. By using the LRE cost function we
can directly output the ratio, as a probability, of excitation source,
voiced to unvoiced, for a given speech segment. These output
probabilities can then be used directly in other applications, such as
low bit rate coders.
Conference Paper · Feb 1999
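The training scheme the abstract describes can be sketched with a single logistic unit standing in for the multilayer perceptron; the feature (frame energy) and the toy data below are hypothetical, not from the paper. The point is that steepest descent on the relative-entropy (cross-entropy) cost lets the network output be read directly as a probability of voiced excitation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_lre(segments, labels, epochs=500, lr=0.5):
    """Steepest descent on the least-relative-entropy (cross-entropy) cost.

    segments: list of feature vectors; labels: 1 = voiced, 0 = unvoiced.
    A single logistic unit stands in for the MLP of the abstract.
    Returns the weight vector, with the bias as the last entry.
    """
    dim = len(segments[0])
    w = [0.0] * (dim + 1)
    for _ in range(epochs):
        for x, d in zip(segments, labels):
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            # For a sigmoid unit, the cross-entropy gradient reduces to (y - d) * x.
            err = y - d
            for i in range(dim):
                w[i] -= lr * err * x[i]
            w[-1] -= lr * err
    return w

def p_voiced(w, x):
    """Network output, read directly as P(voiced | segment)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])

# Toy data: one hypothetical feature per segment (e.g. normalized energy).
segs = [[0.9], [0.8], [0.85], [0.1], [0.2], [0.15]]
labs = [1, 1, 1, 0, 0, 0]
w = train_lre(segs, labs)
```

A downstream application, such as the low bit rate coder mentioned above, would consume `p_voiced(w, x)` directly rather than a hard voiced/unvoiced decision.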

ABSTRACT: This paper presents a time delay neural network (TDNN) model designed for the prediction of nitrogen oxides (NOx) and carbon monoxide (CO) emissions from a fossil fuel power plant. NOx and CO emissions of the plant are determined as a function of other related time series, such as air flow rates and oxygen levels, that are measured during system operation. Correlation analysis is performed on the data to determine the location and spread of the cross-correlation between pairs of variables, and this information is used to form a variable tapped delay line at the input of the network. We also introduce a neural network based preprocessor which employs an iterative regularization scheme to recover missing portions of the CO data that are censored due to saturation of the measuring device. Prediction after training with the restored data set is observed to be significantly more accurate.
Keywords: Time delay neural networks, environmental application, missing data prediction, NOx and CO pred...
Article · Aug 1998 · Integrated Computer Aided Engineering
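The variable tapped delay line described above can be sketched as follows; the variable names, series values, and selected lags are hypothetical stand-ins for the lags a cross-correlation analysis would pick out.

```python
def tapped_delay_input(series, lags):
    """Build TDNN input vectors from a dict of time series.

    `lags` maps each variable name to the delays (in samples) selected
    for it, e.g. from its cross-correlation with the target, as in the
    abstract. Returns one input vector per usable time step.
    """
    max_lag = max(l for ls in lags.values() for l in ls)
    n = min(len(s) for s in series.values())
    inputs = []
    for t in range(max_lag, n):
        vec = []
        for name, ls in lags.items():
            vec.extend(series[name][t - l] for l in ls)
        inputs.append(vec)
    return inputs

# Hypothetical plant measurements: air flow and oxygen level.
series = {"air_flow": [1, 2, 3, 4, 5, 6], "o2": [9, 8, 7, 6, 5, 4]}
lags = {"air_flow": [0, 1], "o2": [2]}
X = tapped_delay_input(series, lags)
# First vector is for t = 2: air_flow[2], air_flow[1], o2[0] -> [3, 2, 9]
```

Because each variable gets its own set of lags, the delay line is "variable" per input rather than a single fixed window shared by all series.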

Article · Jun 1997 · Neurocomputing

ABSTRACT: We introduce a unified statistical framework for real-time
signal processing with neural networks by using a recent extension of
maximum likelihood (ML) estimation, partial likelihood (PL) estimation
theory, which allows for (i) dependent observations, and (ii) processing
of data using only the information that is available at the time of
processing. For a general neural network conditional distribution model,
we establish a fundamental information-theoretic relationship for PL
estimation, and obtain large sample properties of PL for the general
case of dependent observations. We consider applications of PL to
prediction and channel equalization.
Conference Paper · Jun 1996
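The defining constraint of PL estimation, that each update uses only the information available at the time of processing, can be illustrated with a minimal online predictor; the logistic model, sequence, and step size below are hypothetical choices, not the paper's setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def online_pl_logistic(bits, order=2, lr=0.5):
    """One-step-ahead prediction of a binary sequence, updated online.

    At each time t the model conditions only on past samples (the
    information available at the time of processing) and takes one
    stochastic gradient step on the negative log of the partial
    likelihood term for bits[t]. Returns the per-step predicted
    probabilities and the final weights.
    """
    w = [0.0] * (order + 1)          # weights + bias
    probs = []
    for t in range(order, len(bits)):
        x = bits[t - order:t]        # past observations only
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
        probs.append(y)
        d = bits[t]
        err = y - d                  # gradient of -log PL term (sigmoid unit)
        for i in range(order):
            w[i] -= lr * err * x[i]
        w[-1] -= lr * err
    return probs, w

# Alternating sequence: the model should learn to predict the flip.
bits = [0, 1] * 50
probs, w = online_pl_logistic(bits)
```

Dependent observations are handled naturally here: each factor of the partial likelihood conditions on the realized past, so no independence assumption on `bits` is needed.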

ABSTRACT: We formulate adaptive channel equalization as a conditional probability distribution learning problem. The conditional probability density function of the transmitted signal given the received signal is parametrized by a sigmoidal perceptron. In this framework, we use the relative entropy (Kullback-Leibler distance) between the true and the estimated distributions as the cost function to be minimized. The true probabilities are approximated by their stochastic estimators, resulting in a stochastic relative entropy cost function. This function is well-formed in the sense of Wittner and Denker, therefore gradient descent on this cost function is guaranteed to find a solution. The consistency and asymptotic normality of this learning scheme are shown via maximum partial likelihood estimation of logistic models. As a practical example, we demonstrate that the resulting algorithm successfully equalizes multipath channels.
Article · Jun 1995

ABSTRACT: We present the general formulation for adaptive equalization
by distribution learning introduced by Adali (see Proc. IEEE Int. Conf.
Acoust., Speech, Signal Processing, vol. 3, p. 297-300, April 1994). In
this framework, adaptive equalization can be viewed as a parametrized
conditional distribution estimation problem where the parameter
estimation is achieved by learning on a multilayer perceptron (MLP).
Depending on the definition of the conditioning event set, either
supervised or unsupervised (blind) algorithms result, in either
recurrent or feedforward networks. We derive the least relative entropy
(LRE) algorithm for binary data communications and analyze its
statistical and dynamical properties. In particular, we show that LRE
learning is consistent and asymptotically normal by working in the
partial likelihood estimation framework, and that the algorithm can
always recover from convergence at the wrong extreme, as opposed to
MSE-based MLPs, by working within an extension of the well-formed cost
functions framework of Wittner and Denker (1988). We present simulation
examples to demonstrate this fact.
Conference Paper · Jun 1995
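The recovery-from-the-wrong-extreme claim has a simple mechanistic reading for a single sigmoidal unit (an assumption; the paper works with MLPs): the MSE gradient carries an extra factor y(1 - y) that vanishes when the output saturates at the wrong extreme, while the LRE gradient does not.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lre_grad(y, d, x):
    """Gradient of the relative-entropy cost w.r.t. a weight, sigmoid unit:
    the sigmoid derivative cancels, leaving (y - d) * x."""
    return (y - d) * x

def mse_grad(y, d, x):
    """Gradient of the squared-error cost: the extra y * (1 - y) factor
    vanishes wherever the output saturates, including the WRONG extreme."""
    return (y - d) * y * (1.0 - y) * x

# Wrong extreme: target is 0, but the unit is saturated near 1.
y = sigmoid(10.0)            # about 0.99995
d, x = 0.0, 1.0
g_lre = lre_grad(y, d, x)    # stays near 1: learning continues
g_mse = mse_grad(y, d, x)    # about 4.5e-5: learning stalls
```

This is the sense in which the LRE cost is well-formed: its gradient magnitude stays bounded away from zero at a wrong extreme, so gradient descent cannot get trapped there, whereas the MSE gradient becomes vanishingly small.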

ABSTRACT: Presents the general formulation for adaptive equalization by
distribution learning, in which the conditional probability mass
function (PMF) of the transmitted signal given the received signal is
parametrized by a general neural network structure. The parameters of
the PMF are computed by minimization of the accumulated relative entropy
(ARE) cost function. The equivalence of ARE minimization to maximum
partial log-likelihood (MPLL) estimation is established under certain
regularity conditions, which enables the authors to bypass the
requirement that the true conditionals be known. The large sample
properties of the MPLL estimator are obtained under further regularity
conditions, and the binary case with a sigmoidal perceptron as the
conditional PMF model is shown to be a special case of the new
framework. Results are presented which show that the multilayer
perceptron (MLP) equalizer based on ARE minimization can always recover
from convergence at the wrong extreme, whereas the mean square error
(MSE) based MLP cannot.
Conference Paper · Jan 1995

ABSTRACT: We formulate adaptive channel equalization as a conditional probability distribution learning problem. The conditional probability density function of the transmitted signal given the received signal is parametrized by a sigmoidal perceptron. In this framework, we use the relative entropy (Kullback-Leibler distance) between the true and the estimated distributions as the cost function to be minimized. The true probabilities are approximated by their stochastic estimators, resulting in a stochastic relative entropy cost function. This function is well-formed in the sense of Wittner and Denker (1988), therefore gradient descent on this cost function is guaranteed to find a solution. The consistency and asymptotic normality of this learning scheme are shown via maximum partial likelihood estimation of logistic models. As a practical example, we demonstrate that the resulting algorithm successfully equalizes multipath channels.
Conference Paper · May 1994