Conference Paper

An improved proportionate NLMS algorithm based on the l0 norm

Telecommun. Dept., Univ. Politeh. of Bucharest, Bucharest, Romania
DOI: 10.1109/ICASSP.2010.5495903 Conference: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Source: IEEE Xplore


The proportionate normalized least-mean-square (PNLMS) algorithm was developed in the context of network echo cancellation. It has been proven efficient when the echo path is sparse, which is not always the case in real-world echo cancellation. The improved PNLMS (IPNLMS) algorithm is less sensitive to the sparseness of the echo path. It uses the l1 norm to exploit the sparseness of the impulse response to be identified. In this paper, we propose an IPNLMS algorithm based on the l0 norm, which is a better measure of sparseness than the l1 norm. Simulation results show that the proposed algorithm outperforms the original IPNLMS algorithm.
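The proportionate idea in the abstract can be illustrated concretely. Below is a minimal NumPy sketch, not the paper's exact algorithm: classical IPNLMS builds each tap's gain from |w_l| and the l1 norm of the filter, while an l0-flavoured variant replaces both with the smooth approximation 1 − exp(−β|w|) often used to approximate the l0 norm. The function names and the parameters alpha, beta, eps are assumptions for illustration.

```python
import numpy as np

def ipnlms_gains(w, alpha=0.0, eps=1e-8):
    """Classical IPNLMS proportionate gains, built from the l1 norm."""
    L = len(w)
    return (1 - alpha) / (2 * L) + \
        (1 + alpha) * np.abs(w) / (2 * np.sum(np.abs(w)) + eps)

def ipnlms_l0_gains(w, alpha=0.0, beta=50.0, eps=1e-8):
    """l0-flavoured variant (a sketch): replace |w_l| and the l1 norm with
    the smooth l0 approximation f(w) = 1 - exp(-beta*|w|); beta is an
    assumed tuning knob, not a value from the paper."""
    L = len(w)
    f = 1.0 - np.exp(-beta * np.abs(w))
    return (1 - alpha) / (2 * L) + (1 + alpha) * f / (2 * np.sum(f) + eps)
```

In both versions the gains sum to (approximately) one; the l0-style measure gives small-but-active taps a relatively larger share of the adaptation energy than the l1-based rule does.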



Available from: Silviu Ciochina, Oct 09, 2015
    • "Furthermore, the l0 norm family algorithms have recently become popular for sparse system identification. A new PNLMS algorithm based on the l0 norm was proposed to represent a better measure of sparseness than the l1 norm in a PNLMS-type algorithm [6]. Benesty demonstrated that PNLMS could be deduced from a basis pursuit perspective [7]. "
    ABSTRACT: In this paper, a new family of proportionate normalized least mean square (PNLMS) adaptive algorithms that improve the performance of identifying block-sparse systems is proposed. The main proposed algorithm, called block-sparse PNLMS (BS-PNLMS), is based on the optimization of a mixed l2,1 norm of the adaptive filter coefficients. It is demonstrated that both the NLMS and the traditional PNLMS are special cases of BS-PNLMS. Meanwhile, a block-sparse improved PNLMS (BS-IPNLMS) is also derived for both sparse and dispersive impulse responses. Simulation results demonstrate that the proposed BS-PNLMS and BS-IPNLMS algorithms outperform the NLMS, PNLMS and IPNLMS algorithms with only a modest increase in computational complexity.
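As a rough illustration of the mixed l2,1 norm that this abstract optimizes, here is a hedged NumPy sketch (the even-block partition and the function name are assumptions): the norm is the sum, over blocks, of each block's l2 norm.

```python
import numpy as np

def mixed_l21_norm(w, block_size):
    """Mixed l2,1 norm: sum over blocks of each block's l2 norm.
    Assumes block_size evenly divides len(w), for simplicity."""
    blocks = w.reshape(-1, block_size)          # one row per block
    return np.sum(np.linalg.norm(blocks, axis=1))
```

Note the two limiting cases: block_size = 1 reduces it to the l1 norm, and a single full-length block reduces it to the l2 norm, which mirrors the abstract's claim that NLMS and PNLMS arise as special cases of BS-PNLMS.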
    • "where µ > 0 is the step size and ε > 0 is a correction to prevent stability issues or to compensate for the presence of noise in the input x_n [15]. Despite the increasing awareness [8]–[11] captured by these modern adaptive rules, the methodology employed in their derivation (plain gradient analysis [14]) is not the most appropriate for non-linear and non-convex cost functions, such as the ℓp-norm (in the range 0 < p ≤ 1). "
    ABSTRACT: This letter presents the exact normalized least-mean-square (NLMS) algorithm for the lp-norm-regularized square error, a popular choice for the identification of sparse systems corrupted by additive noise. The resulting exact lp-NLMS algorithm exhibits differences from the original one, such as an independent update for each weight, a new sparsity-promoting compensated update, and the guarantee of stable convergence for any configuration (regardless of the choice of lp norm and sparsity-tradeoff constant). Simulation results show that the exact lp-NLMS is stable and outperforms the original one, thus validating the optimality of the proposed methodology.
    IEEE Signal Processing Letters, 01/2014. DOI: 10.1109/LSP.2014.2360889
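For context, the plain-gradient baseline that this letter argues against can be sketched as a naive zero-attracting, lp-regularized NLMS step. This is explicitly not the exact lp-NLMS of the letter, and the names rho, p, delta are assumed knobs introduced here for illustration.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-6, rho=0.0, p=0.5, delta=1e-3):
    """One naive plain-gradient step: the quoted NLMS update, optionally
    followed by a zero-attracting term from the (sub)gradient of
    sum |w_i|^p (delta regularizes the singularity at zero)."""
    e = d - x @ w                                   # a-priori error
    w = w + mu * e * x / (x @ x + eps)              # normalized LMS step
    if rho > 0:                                     # lp sparsity attractor
        w = w - rho * p * np.sign(w) * (np.abs(w) + delta) ** (p - 1)
    return w
```

With rho = 0 this is exactly the quoted NLMS recursion; with rho > 0 it illustrates the kind of gradient-derived sparse update whose stability, per the abstract, is not guaranteed for 0 < p ≤ 1.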
    • "Other possible choices for the matrix Q_k are found in recent studies (see, e.g., [4], [6]). "
    ABSTRACT: In this paper, we propose a novel adaptive filtering algorithm based on an iterative use of (i) the proximity operator and (ii) the parallel variable-metric projection. Our time-varying cost function is a weighted sum of squared distances (in a variable-metric sense) plus a possibly nonsmooth penalty term, and the proposed algorithm is derived along the idea of proximal forward-backward splitting in convex analysis. For application to sparse-system identification problems, we employ the (weighted) ℓ1 norm as the penalty term, leading to a time-varying soft-thresholding operator. As a simple example of the proposed algorithm, we present the variable-metric affine projection algorithm composed with the time-varying soft-thresholding operator. Numerical examples demonstrate that the proposed algorithms notably outperform their counterparts without soft-thresholding in both convergence speed and steady-state mismatch, while the extra computational complexity due to the additional soft-thresholding is negligibly low.
    2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 04/2010
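The soft-thresholding operator mentioned in the abstract is, for an ℓ1 penalty, the familiar shrinkage map (the proximity operator of τ‖·‖1). A minimal sketch, using a fixed threshold tau rather than the paper's time-varying, weighted version:

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximity operator of tau * ||.||_1: shrink every coefficient
    toward zero by tau and set coefficients smaller than tau to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)
```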