Kevin Wagner

About

41 Publications
644 Reads
221 Citations
Publications (41)
Conference Paper
We consider distributed prediction of signals modeled as autoregressive processes. A signal from this class is assumed to be received in different network nodes with different delays and attenuations. Each node can be affected by additive noise of different strength. Additionally, some nodes are affected by multipath effects and interference. Nodes...
Article
The complex colored water-filling algorithm for gain allocation has been shown to provide improved mean square error convergence performance, relative to standard complex proportionate-type normalized least mean square algorithms. This algorithm requires sorting operations and matrix multiplication on the order of the size of the impulse response a...
Conference Paper
Diffusion strategies for learning across networks which minimize the transient regime mean-square deviation across all nodes are presented. The problem of choosing combination coefficients which minimize the mean-square deviation at all given time instances results in a quadratic program with linear constraints. The implementation of the optimal pr...
Conference Paper
An extension of complex proportionate-type normalized least mean square algorithms is proposed and derived. This new algorithm called the complex proportionate-type affine projection algorithm helps the estimation of unknown impulse responses when the input signal is colored. The derivation of the complex proportionate-type affine projection algori...
Chapter
This chapter presents a systematic approach to calculating the computational complexity for an arbitrary PtNLMS algorithm. It also discusses the computational complexity for a variety of algorithms. The computational complexity of the NLMS, PNLMS, SPNLMS, ASPNLMS, MPNLMS, EPNLMS, AMPNLMS, AEPNLMS, z²-proportionate, WF, CWF suboptimal gain allocation...
Chapter
This chapter presents several new PtNLMS algorithms that attempt to find an optimal gain such that a user-defined criterion is minimized. While each of these algorithms uses different assumptions and has a different form, the commonality between these new algorithms is the approach that was taken in their derivation. First, the authors assume white inp...
Chapter
This chapter presents a review of LMS analysis techniques used to predict and measure the performance of the LMS algorithm. Two analysis techniques are employed to analyze the LMS algorithm. The first LMS analysis technique relies on the small adaptation step-size assumption. This assumption results in a recursion for the weight deviation vector, w...
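The small step-size analysis described above starts from the LMS weight update itself. A minimal, illustrative LMS step is sketched below; the step size `mu`, the filter length, and the data model are assumptions for illustration, not the book's notation.

```python
import numpy as np

def lms_update(w, x, d, mu=0.05):
    """One LMS step: gradient descent on the instantaneous squared error."""
    e = d - w @ x          # a priori output error
    return w + mu * e * x, e
```

Under the small step-size assumption, the weight deviation (the gap between the true and estimated coefficients) obeys an approximately linear recursion, which is what the first analysis technique exploits.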
Chapter
This chapter presents an adaptation of the μ-law for compression of weight estimates using the output square error. Additionally, it explains a simplification of the adaptive μ-law algorithm, which approximates the logarithmic function with a piecewise linear function. A section presents the MSE versus iteration of the ASPNLMS, AMPNLMS and AEPNLMS...
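The μ-law compression and its piecewise-linear simplification mentioned above can be sketched as follows. This is an illustrative sketch: the constant `MU = 1000` and the knee at 0.005 are common textbook choices, not necessarily the chapter's values.

```python
import numpy as np

MU = 1000.0  # mu-law compression constant (illustrative choice)

def mu_law_gain(w_abs):
    """MPNLMS-style mu-law compression of weight-magnitude estimates."""
    return np.log1p(MU * w_abs) / np.log1p(MU)

def segmented_gain(w_abs, knee=0.005):
    """SPNLMS-style piecewise-linear approximation of the mu-law curve."""
    return np.where(w_abs < knee, w_abs / knee, 1.0)
```

The adaptive variants (ASPNLMS, AMPNLMS, AEPNLMS) then tune the compression using the output square error, as the chapter describes.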
Chapter
This chapter extends the PtNLMS analysis techniques used to analyze the LMS algorithm to the PtNLMS algorithm. It analyzes the transient and steady-state properties of two PtNLMS algorithms for the white input. One initial goal of research is to analyze the PNLMS algorithm in both the steady-state and transient regimes. In the process of analyzing...
Chapter
This chapter introduces the weight deviation (WD) recursion for the PtLMS algorithm. It presents a derivation of the conditional PDF of WD for the PtLMS algorithm. In this section, the conditional PDF of the current WDs given the preceding WDs is generated. The conditional PDF is derived for colored input signals when noise is present as well as when...
Chapter
This concluding chapter of Proportionate-type Normalized Least Mean Square Algorithms summarizes the work presented in the book. It was shown that minimizing the mean square weight deviation minimizes an upper bound for the mean square error. Minimizing the mean square weight deviation resulted in gains that depended only on corresponding weight d...
Chapter
This chapter presents the PtNLMS algorithm, which is extended from real-valued signals to complex-valued signals. The resulting algorithm is named the complex PtNLMS (cPtNLMS) [WAG 12b] algorithm. The cPtNLMS algorithm is derived as a special case of the complex proportionate-type affine projection (cPtAP) algorithm. The chapter proposes several si...
Chapter
This chapter introduces proportionate-type normalized least mean square (PtNLMS) algorithms in preparation for performing analysis of these algorithms. It begins by presenting applications for PtNLMS algorithms as the motivation for why analysis and design of PtNLMS algorithms is a worthwhile cause. The chapter outlines a historical review of relev...
Conference Paper
The concept of self-orthogonalizing adaptation is extended from the least mean square algorithm to the general case of complex proportionate type normalized least mean square algorithms. The derived algorithm requires knowledge of the input signal's covariance matrix. Implementation of the algorithm using a fixed transform such as the discrete cosi...
Article
Dynamic programming equations are derived which characterize the optimal value functions for a partially observed constrained Markov decision process problem with both total cost and probabilistic criteria. More specifically, the goal is to minimize an expected total cost subject to a constraint on the probability that another total cost exceeds a...
Book
The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast converge...
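The proportionate idea the book's abstract describes, per-tap gains proportionate to the magnitudes of the current coefficient estimates, can be illustrated with a minimal PNLMS-style update. This is a sketch under assumed parameter values (`beta`, `rho`, `delta`, `eps`), not the book's exact formulation.

```python
import numpy as np

def pnlms_update(w, x, d, beta=0.5, rho=0.01, delta=0.01, eps=1e-4):
    """One PNLMS-style step: per-tap gains proportionate to |w|."""
    e = d - w @ x                                # a priori output error
    floor = rho * max(delta, np.max(np.abs(w)))  # keep inactive taps adapting
    g = np.maximum(floor, np.abs(w))
    g /= g.sum()                                 # normalized gain vector
    return w + beta * g * x * e / (x @ (g * x) + eps), e
```

The sparser the true impulse response, the more this gain assignment accelerates convergence relative to plain NLMS, which is the property that motivates the book.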
Conference Paper
A complex colored water-filling algorithm is derived for gain allocation in proportionate-type NLMS filtering under the assumption that the input signal is Gaussian and the covariance and pseudo-covariance are known. The algorithm is derived by minimizing the mean square weight deviation at every time instance, where the weight deviation is defined...
Conference Paper
A complex proportionate-type normalized least mean square algorithm is derived by minimizing the second norm of the weighted difference between the current estimate of the impulse response and the estimate at the next time step under the constraint that the adaptive filter a posteriori output is equal to the measured output. The weighting function...
Conference Paper
Learning curves of the complex proportionate-type normalized least mean square (CPtNLMS), simplified CPtNLMS, and one-gain CPtNLMS algorithms are compared for different complex input signals and complex impulse responses. Cases where the algorithms show different behaviors are presented. In addition, the water-filling algorithm for optimal adaptation
Article
In this work, the conditional probability density function of the current weight deviations given the preceding weight deviations is generated for a wide array of proportionate type least mean square algorithms. The conditional probability density function is derived for colored input signals when noise is present as well as when noise is absent. A...
Conference Paper
In this paper, the conditional probability density function of the current weight deviations given the preceding weight deviations is generated for a wide array of proportionate type least mean square algorithms. Additionally, the application of using the conditional probability density function to calculate the steady-state joint conditional proba...
Article
In the past, ad hoc methods have been used to choose gains in proportionate-type normalized least mean-square algorithms without strong theoretical underpinnings. In this correspondence, a theoretical framework and motivation for adaptively choosing gains is presented, such that the mean-square error will be minimized at any given time. As a resul...
Conference Paper
In previous work, a water-filling algorithm was proposed which sought to minimize the mean square error (MSE) at any given time by optimally choosing the gains (i.e. step-sizes) at each time instance. This work relied on the assumption that the input signal was white. In this paper, an algorithm is derived which operates when the input signal is colo...
Conference Paper
In this work, two suboptimal algorithms were introduced for gain allocation in the colored input case. Each algorithm offers a reduction in computational complexity by removing the sorting function needed in the original algorithm. For stationary colored input signals, the suboptimal algorithms offer improved MSE convergence performance relative to...
Conference Paper
Using the proportionate-type steepest descent algorithm we represent the current weight deviations in terms of initial weight deviations. Then we attempt to minimize the mean square output error with respect to the gains at a given instant. The corresponding optimal average gains are found using a water-filling procedure. The stochastic counterpart...
Conference Paper
In this paper, we present a proportionate-type normalized least mean square algorithm which operates by choosing adaptive gains at each time step in a manner designed to maximize the joint conditional probability that the next-step coefficient estimates reach their optimal values. We compare and show that the performance of the joint maximum condit...
Conference Paper
Recently, we have proposed three schemes for gain allocation in proportionate-type NLMS algorithms for fast decay at all times. The gain allocation schemes are based on: (1) maximization of one-step decay of the mean square output error, (2) maximization of one-step conditional probability density for true weight values, and (3) adaptation of μ-law...
Conference Paper
In this paper, we propose three new proportionate-type NLMS algorithms: the water filling algorithm, the feasible water filling algorithm, and the adaptive mu-law proportionate NLMS (MPNLMS) algorithm. The water filling algorithm attempts to choose the optimal gains at each time step. The optimal gains are found by minimizing the mean square error...
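The "water filling" name comes from the classic allocation procedure in which a fixed budget is poured over a set of levels, filling the lowest first. A generic textbook water-filling allocator (an illustrative sketch, not the paper's MSE-optimal gain rule) looks like:

```python
import numpy as np

def water_fill(levels, budget):
    """Pour `budget` over `levels`, filling the lowest levels first."""
    order = np.argsort(levels)
    lv = levels[order]
    alloc = np.zeros_like(lv)
    # Find the water level lam such that sum(max(0, lam - lv)) == budget.
    for k in range(1, len(lv) + 1):
        lam = (budget + lv[:k].sum()) / k
        if k == len(lv) or lam <= lv[k]:
            alloc[:k] = lam - lv[:k]
            break
    out = np.zeros_like(alloc)
    out[order] = alloc           # undo the sort
    return out
```

For example, `water_fill(np.array([1.0, 2.0, 10.0]), 2.0)` yields the allocation `[1.5, 0.5, 0.0]`: the budget fills the two lowest levels to a common water level of 2.5 and leaves the highest level dry.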
Conference Paper
In this paper, we present a proportionate-type normalized least mean square algorithm which operates by choosing adaptive gains at each time step in a manner designed to maximize the conditional probability that the next-step coefficient estimates reach their optimal values. We compare and show that the performance of the maximum conditional probab...
Conference Paper
Using reasonable approximations, we analytically examine the properties of the proportionate normalized least mean square algorithm (PNLMS) in the case of a white and stationary input signal. This work extends ideas applied to the simplified-PNLMS algorithm to the PNLMS algorithm. In particular, the analysis incorporates the max function employed by...
Conference Paper
To date no theoretical results have been developed to predict the performance of the proportionate normalized least mean square (PNLMS) algorithm or any of its cousin algorithms such as the mu-law PNLMS (MPNLMS) and the ε-law PNLMS (EPNLMS). In this paper we develop an analytic approach to predicting the performance of the simplified PNLMS algorit...
Conference Paper
The dynamic programming approach is applied to a partially observed constrained Markov decision process problem with both total cost and probabilistic criteria. The Markov decision process is partially observed, but it is assumed that the constraint costs are available to the controller, i.e., they are fully observed. The problem is motivated by an...
Conference Paper
In this paper, a unified framework for representing proportionate type algorithms is presented. This novel representation enables a systematic approach to the problem of design and analysis of proportionate type algorithms. Within this unified framework, the feasibility of predicting the performance of a stochastic proportionate algorithm by analyz...
Article
Importance sampling is a technique used to reduce the number of Monte Carlo trials needed to attain a given estimation error. Recently, a technique called Chernoff importance sampling was developed which enabled a greater reduction in the number of Monte Carlo trials via a Chernoff-like bound. In this work we extend the results of Chernoff importance sampling to the estim...
Conference Paper
Recently, it has been shown that proportionate-type LMS adaptive filters converge for sufficiently small adaptation step-size parameters and white input. In addition to this, a theoretical connection between proportionate-type steepest descent algorithms and proportionate-type stochastic algorithms for a constant gain matrix has been r...
