Conference Paper

An improved proportionate NLMS algorithm based on the l0 norm

Telecommunications Dept., University Politehnica of Bucharest, Bucharest, Romania
DOI: 10.1109/ICASSP.2010.5495903 Conference: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Source: IEEE Xplore

ABSTRACT The proportionate normalized least-mean-square (PNLMS) algorithm was developed in the context of network echo cancellation. It has been proven to be efficient when the echo path is sparse, which is not always the case in real-world echo cancellation. The improved PNLMS (IPNLMS) algorithm is less sensitive to the sparseness character of the echo path. This algorithm uses the l1 norm to exploit sparseness of the impulse response that needs to be identified. In this paper, we propose an IPNLMS algorithm based on the l0 norm, which represents a better measure of sparseness than the l1 norm. Simulation results prove that the proposed algorithm outperforms the original IPNLMS algorithm.
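The abstract does not reproduce the paper's update equations, so the sketch below is only an illustration of the general idea under stated assumptions: an IPNLMS-style proportionate update in which the per-tap gains are computed from the common exponential approximation of the l0 norm, f(h_i) = 1 - exp(-beta|h_i|), rather than from the l1 norm |h_i|. The function name, parameter values, and the exact gain formula are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def ipnlms_l0_update(h, x, d, mu=0.5, alpha=0.0, beta=100.0, delta=1e-2, eps=1e-8):
    """One adaptation step of an IPNLMS-style filter whose proportionate
    gains use an exponential approximation of the l0 norm (assumed form),
    f(h_i) = 1 - exp(-beta * |h_i|), instead of the l1 norm |h_i|.
    h: current filter estimate (length L), x: input regressor (length L),
    d: desired (echo) sample. Returns the updated filter and the error."""
    L = len(h)
    e = d - np.dot(h, x)                         # a priori error
    f = 1.0 - np.exp(-beta * np.abs(h))          # elementwise l0 approximation
    # IPNLMS-style mix of a uniform gain and a sparseness-proportionate gain
    g = (1.0 - alpha) / (2 * L) + (1.0 + alpha) * f / (2.0 * f.sum() + eps)
    h = h + mu * g * x * e / (np.dot(g * x, x) + delta)
    return h, e

# Identify a sparse echo path from noiseless input/output samples
rng = np.random.default_rng(0)
L = 16
w_true = np.zeros(L)
w_true[3], w_true[7] = 0.8, -0.5                 # sparse "echo path"
h = np.zeros(L)
x_sig = rng.standard_normal(5000)
for n in range(L, len(x_sig)):
    x = x_sig[n - L:n][::-1]                     # regressor [x(n-1) ... x(n-L)]
    d = np.dot(w_true, x)
    h, e = ipnlms_l0_update(h, x, d)
```

Here alpha in [-1, 1] balances the NLMS-like uniform term against the proportionate term (alpha = -1 reduces the update to plain NLMS), while beta controls how sharply the exponential term approximates the l0 norm; these roles mirror the standard IPNLMS parameterization, but the specific values above are arbitrary.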


Available from: Silviu Ciochina, May 28, 2015
  • ABSTRACT: This paper provides an analysis of the steady-state behavior of two biased adaptive algorithms recently introduced for listening room compensation: the biased filtered-x normalized least mean squares (Fx-BNLMS) and the biased filtered-x improved proportionate NLMS (Fx-BIPNLMS). Theoretical results show that the biased algorithms can outperform their unbiased counterparts in terms of mean square error, especially in low signal-to-noise ratio (SNR) scenarios. Moreover, for highly sparse impulse responses, the improved proportionate algorithms achieve faster convergence than the standard NLMS. The advantages of the Fx-BIPNLMS algorithm are thereby justified theoretically in terms of the excess mean square error. Simulation results show a relatively good match between theory and practice, especially for low values of the step size μ.
    ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
  • ABSTRACT: Modern signal processing (SP) methods rely heavily on probability and statistics to solve challenging SP problems. Expectations and demands are constantly rising, and SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have recently been applied successfully to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This paper presents a tutorial on stochastic simulation and optimization methods in signal and image processing and points to some interesting research problems. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation, and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Finally, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization, are discussed.
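As a deliberately minimal illustration of the stochastic simulation methods this tutorial surveys, the sketch below implements a textbook random-walk Metropolis sampler targeting a standard normal density. The function name and parameter values are illustrative assumptions, not taken from the paper, which covers far more sophisticated high-dimensional variants.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_samples=20000, step=2.4, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, target(x') / target(x)). Returns the chain."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance test
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Target: standard normal, whose log-density is -x^2/2 up to a constant
chain = random_walk_metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

Only the unnormalized log-density is needed, since the normalizing constant cancels in the acceptance ratio; this is precisely what makes such samplers useful for the analytically intractable models the abstract mentions.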