Article

Convex regularized recursive maximum correntropy algorithm


Abstract

In this brief, a robust and sparse recursive adaptive filtering algorithm, called convex regularized recursive maximum correntropy (CR-RMC), is derived by adding a general convex regularization penalty term to the maximum correntropy criterion (MCC). An approximate expression for automatically selecting the regularization parameter is also introduced. Simulation results show that the CR-RMC can significantly outperform the original recursive maximum correntropy (RMC) algorithm especially when the underlying system is very sparse. Compared with the convex regularized recursive least squares (CR-RLS) algorithm, the new algorithm also shows strong robustness against impulsive noise. The CR-RMC also performs much better than other LMS-type sparse adaptive filtering algorithms based on MCC.
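As a rough illustration of the idea behind CR-RMC, the cost combines an empirical correntropy term with a general convex penalty. The sketch below is illustrative only: the function names, the fixed kernel width `sigma`, and the fixed regularization weight `gamma` are assumptions (the paper selects the regularization parameter automatically), and the l1 norm stands in as one admissible convex regularizer.

```python
import numpy as np

def correntropy_cost(e, sigma=1.0):
    """Empirical correntropy of an error signal under a Gaussian kernel.
    MCC maximizes this quantity; large (impulsive) errors are exponentially
    down-weighted, which is the source of the robustness."""
    e = np.asarray(e, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

def cr_objective(w, X, d, sigma=1.0, gamma=0.1,
                 penalty=lambda w: np.sum(np.abs(w))):
    """Convex-regularized MCC objective: correntropy of the residual minus
    a convex penalty f(w). With penalty = l1 norm this promotes sparsity,
    in the spirit of CR-RMC."""
    e = np.asarray(d) - np.asarray(X) @ np.asarray(w)
    return correntropy_cost(e, sigma) - gamma * penalty(np.asarray(w))
```

Note that a zero error vector gives correntropy exactly 1 (the kernel's maximum), and any single large outlier barely moves the cost, unlike a squared-error term.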


... Moreover, the standard RLS algorithm still suffers from an instability problem because of the matrix inversion operation [4]. In [5]-[9], robust RLS algorithms insensitive to noise have been considered, and in [10] the RLS algorithm is combined with the maximum correntropy criterion (MCC), resulting in the recursive MCC (RMCC) algorithm with enhanced performance. ...
... These merits allow the algorithm to further reduce its complexity. Therefore, the DCD-ASE algorithm has lower computational complexity than the algorithms in [10], [16]. Table II lists the computational complexities of the existing algorithms and the proposed algorithm. ...
... Next, a comparison of the algorithms is shown in Fig. 3, where the RMCC [10], IWF [11], and DCD-RMCC [16] algorithms serve as benchmarks. With a similar convergence rate, the proposed algorithms achieve smaller misalignment in impulsive noise. ...
Preprint
Full-text available
The Andrew's sine function is a robust estimator that has been used in outlier rejection and robust statistics. However, this estimator has not received attention in the field of adaptive filtering. Two Andrew's sine estimator (ASE)-based robust adaptive filtering algorithms are proposed in this brief. Specifically, to achieve improved performance and reduced computational complexity, the iterative Wiener filter (IWF) is an attractive choice. A novel IWF based on the ASE (IWF-ASE) is proposed for impulsive noise. To further reduce the computational complexity, the leading dichotomous coordinate descent (DCD) algorithm is combined with the ASE, yielding the DCD-ASE algorithm. Simulations on system identification demonstrate that the proposed algorithms achieve smaller misalignment than the conventional IWF, recursive maximum correntropy criterion (RMCC), and DCD-RMCC algorithms in impulsive noise. Furthermore, the proposed algorithms exhibit improved performance in partial discharge (PD) denoising.
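Andrew's sine estimator is a redescending M-estimator, which is what makes it attractive for outlier rejection: beyond a cutoff, an observation's influence drops to exactly zero. A minimal sketch of its influence function under the standard textbook form with tuning constant `a` (the name and default are illustrative):

```python
import numpy as np

def andrews_sine_psi(e, a=1.0):
    """Andrew's sine influence function: sin(e/a) for |e| <= a*pi, else 0.
    Bounded and redescending, so gross outliers contribute exactly zero
    to any update that weights errors by this function."""
    e = np.asarray(e, dtype=float)
    psi = np.sin(e / a)
    psi[np.abs(e) > a * np.pi] = 0.0
    return psi
```

For small errors sin(e/a) is approximately e/a, so the estimator behaves like least squares near zero while rejecting impulses outright.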
... Recently, designing adaptive filters for strictly sparse systems has attracted much attention. Inspired by compressed sensing, the zero-attracting (ZA) principle has been introduced into sparse adaptive algorithms [13][14][15][16][17][18][19][20][21], which achieve improved performance by accelerating the vanishing of inactive taps in, for example, wireless and underwater acoustic channels [22][23][24]. In [13], the ZA least mean squares (ZA-LMS) algorithm was derived by incorporating an l1-norm regularization term into the quadratic cost function of the standard LMS algorithm. ...
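The ZA-LMS update mentioned above can be sketched in a few lines; the function name and parameter values below are illustrative, assuming the usual form in which the sign term arising from the l1-norm gradient is subtracted at each step:

```python
import numpy as np

def za_lms_step(w, x, d, mu=0.01, rho=1e-4):
    """One zero-attracting LMS update: the standard LMS correction
    mu*e*x plus a small zero-attractor -rho*sign(w) from the l1 penalty,
    which shrinks inactive taps toward zero."""
    e = d - w @ x                          # a priori error
    w_new = w + mu * e * x - rho * np.sign(w)
    return w_new, e
```

When the filter has converged (zero error), the only remaining motion is the uniform shrinkage of the taps, which is why ZA-LMS biases large active taps slightly; the reweighted variants cited later reduce this bias.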
... In [15], the correntropy induced metric (CIM) MCC (CIMMCC) algorithm was designed by exploiting the CIM to approximate the l0 norm. In parallel to the ZA-type gradient descent algorithms, ZA-based RLS/RMCC algorithms have also been investigated [16][17][18][19][20][21]. In [16], the SPARLS algorithm was obtained by solving the l1-norm regularized least squares problem via expectation-maximization (EM). ...
... An l1 norm has been introduced into the RLS/RMCC cost function, leading to ZA-RLS and ZA-RMCC in [17,18]. In [19,20], convex regularized RLS/RMCC (CR-RLS/CR-RMCC) algorithms were obtained by adopting a general regularization function instead of an l1 norm. Considering that the steady-state performance of the MCC degrades in non-zero-mean noise situations, a sparse recursive GMCC with variable center (RGMCC-VC) has been proposed [21]. ...
Preprint
The sparse adaptive algorithms under the maximum correntropy criterion (MCC) have been developed and are available for practical use due to their robustness against outliers (or impulsive noise). In particular, the proportionate updating (PU) mechanism has been widely adopted in MCC algorithms to exploit system sparsity. In this paper, we propose a proportionate recursive MCC (PRMCC) algorithm for sparse system identification, in which an independent weight update is assigned to each tap according to the magnitude of that estimated filter coefficient. Its stability condition and mean-square performance have been analyzed via a Taylor expansion approach. Based on the analysis, theoretically optimal values of the parameters are obtained in nonstationary environments. Considering that the trace controller in the proportionate matrix implies a trade-off between convergence speed and final misadjustment, we further propose an adaptive convex combination of two PRMCC filters (CPRMCC). Analytical expressions are derived showing that the method performs at least as well as the best component filter in steady state. Simulation results in a system identification setting show the validity of the proposed algorithms and theoretical analysis.
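The proportionate updating mechanism assigns each tap a step-size gain tied to its estimated magnitude. A minimal PNLMS-style sketch of such a gain computation (the floor parameter `delta` and the trace normalization below are one common choice, not necessarily the exact rule analyzed for PRMCC):

```python
import numpy as np

def proportionate_gains(w, delta=0.01, eps=1e-8):
    """Per-tap step-size gains proportional to coefficient magnitude.
    Larger taps adapt faster, which exploits sparsity; delta floors the
    gain so that currently inactive taps can still adapt."""
    g = delta + np.abs(w)
    g = g / (np.sum(g) + eps)   # normalize so the gains sum to 1
    return g * len(w)           # rescale: average gain stays at 1
```

The normalization keeps the total adaptation energy fixed, which is exactly the trace control that trades convergence speed against final misadjustment.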
... These algorithms exhibit strong suppression of impulsive noise. For scenarios involving sparse systems, the convex regularized recursive maximum correntropy (CR-RMC) algorithm has also been proposed, as mentioned in [21]. Inspired by the previous discussions, this paper introduces sparsity enhancements to the RMEE algorithm by incorporating a more general convex function as a sparse penalty term in the cost function. ...
... Then, (21) can be rewritten as ...
Article
Full-text available
It is well known that the recursive least squares (RLS) algorithm is renowned for its rapid convergence and excellent tracking capability. However, its performance is significantly compromised when the system is sparse or when the input signals are contaminated by impulse noise. Therefore, in this paper, the minimum error entropy (MEE) criterion is introduced into the cost function of the RLS algorithm with the aim of counteracting interference from impulse noise. To address the sparse characteristics of the system, we employ a universally applicable convex function to regularize the cost function. The resulting algorithm is named the convex regularization recursive minimum error entropy (CR-RMEE) algorithm. Simulation results indicate that the performance of the CR-RMEE algorithm surpasses that of other similar algorithms; the new algorithm excels not only in scenarios with sparse systems but also demonstrates strong robustness against impulse noise.
... However, regular adaptive algorithms have no significant advantage in sparse system identification because they make no use of the sparse characteristic. Thus, many sparse adaptive algorithms have been derived in recent decades by utilizing a priori knowledge of the sparsity (Chen et al. 2009, Eksioglu 2011, Kalouptsidis 2011, Gu et al. 2013, Zhang et al. 2016, Shi and Zhao 2018, Shi et al. 2019). A sparse least mean square (LMS) algorithm was derived by adding a convex approximation of the l0-norm penalty to the original cost function (Gu et al. 2013). ...
... In addition, a sparse recursive least squares (RLS) algorithm was developed by adding a weighted l1-norm penalty to the RLS cost function (Eksioglu 2011). More recently, a robust and sparse convex regularized recursive maximum correntropy (CR-RMC) algorithm was derived by using a general convex function to regularize the maximum correntropy criterion (MCC), which shows strong robustness in non-Gaussian noise environments (Zhang et al. 2016). ...
Article
Recently, a robust maximum total correntropy (MTC) adaptive filtering algorithm has been applied to the errors-in-variables (EIV) model, in which both input and output data are contaminated with noise. As an extension of the maximum correntropy criterion (MCC), the MTC algorithm shows desirable performance in non-Gaussian noise environments. However, the MTC algorithm may suffer from performance deterioration for sparse systems. To overcome this drawback, a robust and sparse adaptive filtering algorithm, called zero attracting maximum total correntropy (ZA-MTC), is derived in this brief by adding an l1-norm penalty term to the maximum total correntropy algorithm. In addition, in the reweighted version, a log-sum function replaces the l1-norm penalty term. Simulation results demonstrate the advantages of the proposed algorithms under sparsity assumptions on the unknown parameter vector.
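The difference between the l1 and reweighted log-sum penalties shows up in their (sub)gradients, i.e., the zero-attractor term added to the filter update. A small sketch, assuming the standard log-sum gradient form with a small parameter `eps` (names illustrative):

```python
import numpy as np

def zero_attractor(w, reweighted=True, eps=0.05):
    """Sparsity-promoting gradient term. The plain l1 penalty gives a
    uniform pull sign(w); the log-sum (reweighted) penalty
    sum(log(1 + |w_i|/eps)) gives sign(w)/(eps + |w|), so near-zero taps
    are attracted strongly while large active taps are left nearly alone."""
    w = np.asarray(w, dtype=float)
    if reweighted:
        return np.sign(w) / (eps + np.abs(w))
    return np.sign(w)
```

This selectivity is why the reweighted version typically reduces the bias on large coefficients compared with the plain zero attractor.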
... For the input vectors we assume that E[u_n u_t^T] = 0 when n ≠ t, and {u_n} are generated from a Gaussian process with covariance matrix R_u = I_M. For the random-walk model (9), Θ = σ_θ² I_M. Three different distributions for the measurement noise v_n are considered, including pN(0, 10), i.e., an impulsive noise condition with white Gaussian background noise and impulse probability of occurrence p = 0.1. ...
Article
Full-text available
Recently, maximum Versoria criterion-based adaptive algorithms have been introduced as a new solution for robust adaptive filtering. This paper studies the steady-state tracking analysis of an adaptive filter with the maximum Versoria criterion (MVC) in a non-stationary (Markov time-varying) system. Our analysis relies on the energy conservation method. Both Gaussian and general non-Gaussian noise are considered, and for both cases the closed-form expression for the steady-state excess mean square error (EMSE) is derived. Regardless of noise type, and unlike in the stationary environment, the EMSE curves are not increasing functions of the step-size parameter. The validity of the theoretical results is justified via simulation.
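For orientation, one common form of the Versoria function used as a robust cost kernel is a bounded rational function of the error. The sketch below assumes the simple normalized form with shape parameter `tau` (illustrative; not necessarily the exact parameterization analyzed in the paper):

```python
import numpy as np

def versoria(e, tau=1.0):
    """Versoria function, one common normalized form used in the maximum
    Versoria criterion (MVC): bounded like the Gaussian kernel of MCC, but
    with a heavier polynomial (rather than exponential) tail, and cheaper
    to evaluate since it needs no exponential."""
    return 1.0 / (1.0 + tau * np.asarray(e, dtype=float) ** 2)
```

The polynomial tail means moderate outliers retain a little influence (faster re-convergence after a change), while gross impulses are still heavily down-weighted.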
... In indoor localization, the maximum correntropy criterion can be employed to estimate the uncertainty of the location and incorporate it into the design of the filter. Reference [22] introduced a method for automatically selecting regularization parameters in the maximum correntropy criterion (MCC) by incorporating a general convex regularization penalty term in the situation where pre-existing knowledge, such as state constraints, is available. In [23], the authors proposed a novel approach by combining the maximum correntropy criterion Kalman filter (MCC-KF) with the estimation projection method. ...
Article
Full-text available
Indoor positioning is a key technology in today's intelligent environments, and it plays a crucial role in many application areas. This paper proposes an unscented Kalman filter (UKF) based on the maximum correntropy criterion (MCC) instead of the minimum mean square error (MMSE) criterion. This approach is applied to the loose coupling of the Inertial Navigation System (INS) and Ultra-Wideband (UWB). By introducing the maximum correntropy criterion, the MCCUKF algorithm dynamically adjusts the covariance matrices of the system noise and the measurement noise, thus enhancing its adaptability to diverse environmental localization requirements. Particularly in the presence of non-Gaussian noise, especially heavy-tailed noise, the MCCUKF exhibits superior accuracy and robustness compared to the traditional UKF. The method first generates an estimate of the predicted state and covariance matrix through the unscented transform (UT) and then recharacterizes the measurement information using a nonlinear regression method with the MCC as the cost. Subsequently, the state and covariance matrices of the filter are updated by employing the unscented transformation on the measurement equations. Moreover, to mitigate the influence of non-line-of-sight (NLOS) errors on positioning accuracy, this paper proposes a k-medoid clustering algorithm based on bisecting k-means (Bikmeans). This algorithm preprocesses the UWB distance measurements to yield a more precise position estimation. Simulation results demonstrate that the MCCUKF is robust to the uncertainty of UWB and realizes stable integration of the INS and UWB systems.
... The least mean fourth (LMF) [15] and least mean p-power [16] algorithms outperform the LMS algorithm in sub-Gaussian noise environments. To address super-Gaussian noises (e.g., heavy-tailed impulse noise, Laplace, α-stable, etc.), typical cost functions such as mixed-norm [17], [18], M-estimate cost [19], [20], and correntropy [21]-[24] are utilized. Error entropy, a widely known criterion [25], takes higher-order moments into account. ...
Preprint
Error entropy is an important nonlinear similarity measure, and it has received increasing attention in many practical applications. The default kernel function of the error entropy criterion is the Gaussian kernel, which, however, is not always the best choice. In our study, a novel concept called generalized error entropy, which utilizes the generalized Gaussian density (GGD) function as the kernel function, is proposed. We further derive the generalized minimum error entropy (GMEE) criterion, and a novel adaptive filtering algorithm, called the GMEE algorithm, is obtained by utilizing the GMEE criterion. The stability, steady-state performance, and computational complexity of the proposed algorithm are investigated. Simulations indicate that the GMEE algorithm performs well in Gaussian, sub-Gaussian, and super-Gaussian noise environments. Finally, the GMEE algorithm is applied to acoustic echo cancellation and performs well.
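The GGD kernel generalizes the Gaussian kernel through a shape parameter. A minimal sketch of its standard density form (alpha = 2 recovers the Gaussian shape, alpha < 2 gives heavier tails, alpha > 2 lighter tails; names illustrative):

```python
import numpy as np
from math import gamma

def ggd_kernel(e, alpha=2.0, beta=1.0):
    """Generalized Gaussian density:
    (alpha / (2*beta*Gamma(1/alpha))) * exp(-|e/beta|^alpha),
    with shape parameter alpha and scale parameter beta."""
    coeff = alpha / (2.0 * beta * gamma(1.0 / alpha))
    return coeff * np.exp(-np.abs(np.asarray(e, dtype=float) / beta) ** alpha)
```

With alpha = 2 and beta = sqrt(2) this reduces exactly to the standard normal density, which is a handy sanity check when swapping the GGD in for the usual Gaussian kernel.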
... To overcome this problem, the conventional PCA has been proposed to reduce the dimensions of the feature set. The results have proved that the conventional PCA is quite simple in terms of computational complexity and could be easily manipulated with the feature vector [6,32,33,64,66]. However, this method has shown less accuracy when the feature vector follows a Gaussian distribution [21,36]. ...
Article
Full-text available
Analyzing a large feature set for the classification process entails cost and complexity. To reduce this burden, dimensionality reduction is applied to the extracted set of features as a preprocessing step. Among dimensionality reduction algorithms, many methods fail to handle high-dimensional data, increase information loss, and are sensitive to outliers. Therefore, this research proposes a new supervised dimensionality reduction method developed using an improved formulation of linear discriminant analysis with diagonal eigenvalues (LDA-DE) that simultaneously preserves the information and addresses the issues of the classification process. The proposed framework focuses on reducing the dimension of the extracted feature set by computing the scatter matrices from the class labels and the diagonal eigenvalue matrix. Methods to eliminate duplicate rows and columns, to avoid feature overwriting, and to remove outliers are included in the newly developed LDA-DE method. The new LDA-DE method, implemented with a fuzzy random forest classifier, is tested on two datasets (MIO-TCD and BIT-Vehicle) to classify moving vehicles. The performance of our LDA-DE method is compared with five state-of-the-art dimensionality reduction methods. The experimental confusion matrix results show that the LDA-DE method reduces the feature vector of the objects to a maximum extent. Further, the newly developed LDA-DE method achieves the best reduction results with optimal performance parameter values (lowest mean and standard deviation, highest f-measure and accuracy) and lower data processing time than the state-of-the-art methods, promising its application for fast and effective dimensionality reduction for moving vehicle classification.
... Although MCC and MEE can be equivalent under certain conditions, the computational complexity of MCC is lower [22]. MCC is therefore more popular than MEE and has been successfully applied to kernel adaptive filtering, e.g., the kernel maximum correntropy (KMC) algorithm, derived by applying MCC to the kernel least mean square (KLMS) algorithm [23]; the recursive maximum correntropy (RMC) algorithm, which is the RLS algorithm based on MCC [24]; and the kernel version of RMC, named kernel recursive maximum correntropy (KRMC) [25]. One can easily see that the only parameter in the MCC is the kernel width. ...
Article
Full-text available
With the rapid development of information theoretic learning, the maximum correntropy criterion (MCC) has been widely used in the time series prediction area. In particular, the kernel recursive least squares (KRLS) algorithm based on MCC has been studied recently due to its online recursive form and its robustness against noise in non-Gaussian environments. However, using the correntropy computed with the default Gaussian kernel function to describe the local similarity between variables is not always optimal. Besides, the computational burden of MCC-based KRLS grows as the data size increases, causing difficulties in accommodating time-varying environments. Therefore, this paper proposes a quantized generalized MCC (QGMCC) to solve the above problems. Specifically, a generalized MCC (GMCC) is utilized to enhance the accuracy and flexibility in calculating the correntropy. To address the computational complexity, QGMCC quantizes the input space and upper-bounds the network size by the vector quantization (VQ) method. Furthermore, QGMCC is applied to KRLS, forming a computationally efficient and precisely predictive algorithm. The resulting algorithm, named quantized kernel recursive generalized maximum correntropy (QKRGMC), is set up and its derivation is given. Experimental results on one benchmark dataset and two real-world datasets are presented to verify the effectiveness of the online prediction algorithm.
... To tackle presence of non-Gaussian noise in the environment and also benefit from advantages of recursive algorithms, an RLS-type algorithm based on MCC (RMCC) was proposed in [6]. Later on, this algorithm was regularized in [7] by adding a general convex function to MCC in order to deal with the problem of sparse system identification (sparse systems have many near-zero coefficients). In addition, a kernel recursive adaptive filtering based on MCC was proposed in [8] for tackling both system non-linearity and presence of non-Gaussian noise in the environment. ...
... One useful way to extract similarities among objectives is to formulate optimization problems based on information theoretic learning cost functions [18]-[30]. The maximum correntropy criterion (MCC) is one useful measure of similarity that has been considered in learning problems recently [25]. In this paper, we investigate multitask learning over adaptive networks under situations that differ from those of the former works. ...
Article
Full-text available
Adaptive networks solve distributed optimization problems in which all agents of the network collaborate with their neighbors to learn a similar task. Collaboration is useful when all agents seek a similar task. However, in many applications, agents may belong to different clusters that seek dissimilar tasks. In this case, nonselective collaboration leads to damaging results that are worse than the noncooperative solution. In this paper, we address problems in which several clusters of interconnected agents are interested in learning multiple tasks. To address the multitask learning problem, we consider an information theoretic criterion called correntropy in a distributed manner, providing a novel adaptive combination policy that allows agents to learn which neighbors they should cooperate with and which they should reject. In doing so, the proposed algorithm enables agents to recognize their clusters and to achieve improved learning performance compared with the noncooperative strategy. Stability analysis in the mean sense and a closed-form relation determining the steady-state mean-square-deviation of the network are derived. Simulation results illustrate the theoretical findings and match well with theory.
Article
In wireless communications and vehicle communications, it is useful to use adaptive filtering techniques for channel estimation, beamforming, and echo cancellation. In this paper, we propose a general constrained adaptive filtering (GCAF) algorithm for single channel estimation and beamforming, obtained by integrating a general and adaptive loss function into the constrained adaptive filtering (CAF) framework. By selecting the parameter in the GCAF, it can approximate several popular CAF algorithms. The convergence, stability boundary, and stability analysis of the mean squared-deviation are presented in detail. Additionally, the complexity of the GCAF is presented and compared with existing algorithms. The proposed GCAF is used for single-input single-output (SISO) channel estimation and beamforming under different noises, and the tracking performance of the GCAF is also analyzed. The simulation results demonstrate that the GCAF algorithm outperforms typical adaptive filtering algorithms and can effectively approximate similar algorithms under heavy-tailed noises, which makes the proposed GCAF more robust and general.
Article
In practice, when complex multi-agent networks are used for parameter estimation and tracking, we often face the issue of spatial anisotropy of observing conditions, e.g., different (heterogeneous) noise distributions at different nodes. In this setting, existing cost functions may excel at specialized nodes, and struggle with others, leading to an overall deteriorating performance, sometimes even inferior to the mean square error (MSE) criterion. The aim of the present paper is to propose a robust network-based adaptive filtering algorithm capable of accommodating such intricate environments. Leveraging the inherent versatility of Gaussian mixture model (GMM) to fit any probability distribution, we model the additive noise at each node accordingly. Then a diffusion algorithm founded on recursive maximum log-likelihood function (RMLF) is put forward, denoted as DRMLF. Thanks to the universal adaptability of GMM, the DRMLF can consistently deliver excellent performance across multiple nodes with diverse noise profiles within the complex networks. Simulations undoubtedly demonstrate that the DRMLF outperforms the other commonly used diffusion RLS-type algorithms over complex networks in intricate environments. A thorough analysis of both mean and mean square convergence is also conducted in detail correspondingly.
Article
Full-text available
Based on the criterion of minimum error entropy, this paper proposes a novel conjugate gradient algorithm, called MEE-CG. This algorithm has robust performance under non-Gaussian interference. Theoretical analysis and experimental results demonstrate that the proposed algorithm displays more robust performance than the conventional conjugate gradient methods on the basis of the mean square error and the maximum correntropy criterion. Compared with the stochastic gradient minimum error entropy algorithm and the recursive minimum error entropy algorithm, the proposed algorithm provides a trade-off between computational complexity and convergence speed.
Chapter
The communication channel estimation between unmanned systems has always been a concern of researchers, especially the channel estimation of broadband wireless communication and underwater acoustic communication. Due to the sparsity of such channels, more and more researchers use adaptive filtering algorithms combined with sparse constraints for sparse channel estimation. In this paper, a sparse adaptive filtering algorithm based on a correntropy induced metric (CIM) penalty is proposed, and the maximum multi-kernel correntropy criterion (MMKCC) is used to replace the minimum mean square error criterion and the maximum correntropy criterion for robust channel estimation. Specifically, the MMKCC is used to suppress complex impulse noise, and the CIM is used to effectively utilize channel sparsity. The effectiveness of the proposed method is confirmed by computer simulation.
Preprint
Full-text available
A General Constrained Adaptive Filtering (GCAF) algorithm is proposed by constructing a general and adaptive loss function to find a solution of a constrained optimization problem. By selecting appropriate parameters, the GCAF algorithm can be utilized to approximate different adaptive algorithms and achieves higher performance than the approximated algorithms. The steady-state mean squared-deviation of the GCAF algorithm is derived and analyzed, and its complexity is presented. Simulation results demonstrate that the devised GCAF method can outperform most typical adaptive algorithms when optimal parameters are chosen.
Article
Error entropy is a well-known learning criterion in information theoretic learning (ITL), and it has been successfully applied in robust signal processing and machine learning. To date, many robust learning algorithms have been devised based on the minimum error entropy (MEE) criterion, and the Gaussian kernel function is always utilized as the default kernel function in these algorithms, which is not always the best option. To further improve learning performance, two concepts using a mixture of two Gaussian functions as kernel functions, called mixture error entropy and mixture quantized error entropy, are proposed in this paper. We further propose two new recursive least-squares algorithms based on mixture minimum error entropy (MMEE) and mixture quantized minimum error entropy (MQMEE) optimization criteria. The convergence analysis, steady-state mean-square performance, and computational complexity of the two proposed algorithms are investigated. In addition, the reason why the mixture mechanism (mixture correntropy and mixture error entropy) can improve the performance of adaptive filtering algorithms is explained. Simulation results show that the proposed new recursive least-squares algorithms outperform other RLS-type algorithms, and the practicality of the proposed algorithms is verified by the electro-encephalography application.
Article
Recently, constrained adaptive filtering algorithms with strong robustness to non-Gaussian noise have been widely studied. Among them, the robust constrained least mean M-estimate (CLMM) algorithm performs well under impulse noise, based on the strong anti-impulse-noise characteristic of the M-estimate function. However, there is an irreconcilable contradiction between the steady-state error and the convergence rate of CLMM. To resolve this contradiction, this paper introduces the modified Huber function (MHF) into the constrained recursive least squares (CRLS) algorithm and develops the constrained recursive least M-estimate (CRLM) algorithm, which combines the superior convergence performance of CRLS with the anti-impulse-noise characteristic of the MHF. We furthermore propose an enhanced version of the CRLM, namely robust CRLM (RCRLM), which is robust to round-off error. We then analyze the mean and mean-square stability as well as the transient and steady-state NMSD of the proposed RCRLM algorithm under some simplifying assumptions. Simulation results in various non-Gaussian noise scenarios show that the proposed RCRLM is superior to some existing constrained algorithms in terms of convergence speed, steady-state error, and tracking ability, and the theoretical analysis is also verified.
Article
In recent years, the distributed estimation problem based on the diffusion strategy has received more and more attention, where each node filter cooperates with other in-network node filters to estimate the unknown parameter vector. In this paper, we propose a Diffusion Bias Compensated Recursive Maximum Correntropy Criterion (DBCRMCC) algorithm based on the idea of distributed estimation for adaptive filtering in networks containing input noise and non-Gaussian output noise. The Recursive Maximum Correntropy Criterion (RMCC) is an adaptive filtering algorithm with the Maximum Correntropy Criterion (MCC) as the cost function, which is robust to large outliers. Considering that the estimates directly obtained by the RMCC algorithm are biased when the input is disturbed by noise, this paper proposes the DBCRMCC algorithm under some reasonable assumptions by using the principle of unbiased estimation combined with a diffusion strategy. Through bias compensation of the biased estimates and cooperation among node filters, asymptotically unbiased estimates of the unknown parameters can be obtained. The simulation results show that the proposed DBCRMCC algorithm has acceptable convergence speed and estimation accuracy in environments where the input signal is noisy and the output noise is non-Gaussian.
Article
We propose a constrained least lncosh (CLL) adaptive filtering algorithm which, as we show, provides better performance than other algorithms in impulsive noise environments. The proposed CLL algorithm is derived by incorporating the lncosh function into a constrained optimization problem under a non-Gaussian noise environment. The lncosh cost function is the natural logarithm of the hyperbolic cosine function, and it can be considered a combination of the mean-square error and mean-absolute error criteria. The theoretical analysis of the convergence and steady-state mean-squared-deviation of the CLL algorithm in identification scenarios is presented. The theoretical analysis agrees well with simulation results, and these results verify that the CLL algorithm possesses superior performance and higher robustness than other CAF algorithms under various non-Gaussian impulsive noises.
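The small/large-error behavior of the lncosh cost, described above as a blend of the MSE and MAE criteria, is easy to check numerically. The sketch below assumes the common scaled form ln(cosh(λe))/λ (the parameter name is illustrative):

```python
import numpy as np

def lncosh(e, lam=1.0):
    """lncosh cost ln(cosh(lam*e))/lam: behaves like lam*e^2/2 for small
    errors (MSE-like) and like |e| minus a constant for large errors
    (MAE-like), which is what gives robustness to impulses."""
    return np.log(np.cosh(lam * np.asarray(e, dtype=float))) / lam
```

Because the large-error slope saturates at 1, a single impulse contributes a bounded gradient to the update instead of the unbounded one a squared-error cost would produce.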
Article
We propose a recursive constrained least lncosh (RCLL) adaptive algorithm to combat impulsive noise. The lncosh function is used to develop a new algorithm within the context of constrained adaptive filtering by solving a linearly constrained optimization problem, where the lncosh function is the natural logarithm of the hyperbolic cosine function, which can be regarded as a combination of the mean-square-error (MSE) and mean-absolute-error (MAE) criteria. Compared with other typical recursive methods, the proposed RCLL algorithm obtains superior steady-state behavior and better robustness against impulsive noise. Besides, the mean-square convergence condition and the theoretical transient mean-square-deviation of the RCLL algorithm are presented. Simulation results verify the theoretical analysis in non-Gaussian noises and show the superior performance of the proposed RCLL algorithm.
Article
In this paper, a robust adaptive algorithm, called recursive minimum error entropy, is derived under the minimum error entropy criterion. The new algorithm performs robustly under impulsive noise. Theoretical analyses and numerical simulations reveal that it outperforms the recursive least squares and recursive maximum correntropy algorithms.
Article
The recursive-least-squares (RLS) algorithm is one of the most representative adaptive filtering algorithms. The ℓ1-norm full-recursive RLS has also been successfully applied to various sparsity-related areas. However, computing the autocorrelation matrix inverse in the ℓ1-norm full-recursive RLS generates numerical instability that results in divergence. In addition, the regularization coefficient calculation for the ℓ1 norm often requires actual channel information or relies on empirical methods. The iterative Wiener filter (IWF) has similar performance to the RLS algorithm and does not require the inverse of the autocorrelation matrix; the IWF can therefore be used as a numerically stable RLS. This paper proposes an ℓ1-norm IWF for sparse channel estimation using the IWF and the ℓ1 norm. The proposed algorithm includes a realistic regularization coefficient calculation that does not require actual channel information. Simulations show that the sparse channel estimation performance of the proposed algorithm is similar to that of the conventional ℓ1-norm full-recursive RLS using real channel information, while being superior in terms of numerical stability.
Article
Full-text available
In this paper, we propose a new sparse channel estimator robust to impulsive noise environments. For this kind of estimator, the convex regularized recursive maximum correntropy (CR-RMC) algorithm has been proposed. However, this method requires information about the true sparse channel to find the regularization coefficient for the convex regularization penalty term. In addition, the CR-RMC has a numerical instability in the finite-precision cases that is linked to the inversion of the auto-covariance matrix. We propose a new method for sparse channel estimation robust to impulsive noise environments using an iterative Wiener filter. The proposed algorithm does not need information about the true sparse channel to obtain the regularization coefficient for the convex regularization penalty term. It is also numerically more robust, because it does not require the inverse of the auto-covariance matrix.
Article
In this paper, a robust correntropy-based adaptive learning algorithm, called the adaptive kernel recursive maximum correntropy criterion (AK-RMCC), is proposed by considering an adaptive kernel size based on Kullback-Leibler divergence minimization. Simulation results confirm that the proposed algorithm can perform better than other adaptive algorithms, especially when the environment is disturbed by non-Gaussian noises. The proposed algorithm is beneficial for use in helicopters, since the pilot's speech signal is contaminated by acoustic impulsive noises originating from the rotor blades.
Article
In this paper, we investigate the transient performance of the proposed distributed multitask learning algorithm, which is developed based on the maximum correntropy criterion. In the first stage, we derive the proposed multitask learning algorithm, in which the correntropy-based combination matrix determines which sensors should collaborate and which should stop collaborating. In the second stage, according to the variance relation of the error vector, we derive a closed-form relation that characterizes the transient mean-square-deviation learning performance. We also find the lower and upper bounds of the step size that ensure the stability of the multitask learning algorithm. The theoretical transient performance is shown to match the simulation results well.
Article
Constrained adaptive filtering algorithms, including constrained least mean square (CLMS), constrained affine projection (CAP), and constrained recursive least squares (CRLS), have been extensively studied in many applications. Most existing constrained adaptive filtering algorithms are developed under the mean square error (MSE) criterion, an ideal optimality criterion under Gaussian noise. This assumption, however, fails to model the behavior of the non-Gaussian noises found in practice. Motivated by the robustness and simplicity of the maximum correntropy criterion (MCC) under non-Gaussian impulsive noises, this paper proposes a new adaptive filtering algorithm called constrained maximum correntropy criterion (CMCC). Specifically, CMCC incorporates a linear constraint into an MCC filter to solve a constrained optimization problem explicitly. The proposed adaptive filtering algorithm is easy to implement, has low computational complexity, and, in terms of convergence accuracy (i.e., lower mean square deviation) and stability, can significantly outperform MSE-based constrained adaptive algorithms in the presence of heavy-tailed impulsive noises. Additionally, the mean square convergence behavior is studied under the energy conservation relation, and a sufficient condition ensuring mean square convergence and the steady-state mean square deviation (MSD) of the proposed algorithm are obtained. Simulation results confirm the theoretical predictions under both Gaussian and non-Gaussian noises and demonstrate the excellent performance of the novel algorithm in comparison with other conventional methods.
Article
Full-text available
The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC criterion to develop a robust Hammerstein adaptive filter. Compared with the traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance especially in the presence of impulsive non-Gaussian (e.g., α-stable) noises. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
Article
Full-text available
As a robust nonlinear similarity measure in kernel space, correntropy has received increasing attention in domains of machine learning and signal processing. In particular, the maximum correntropy criterion (MCC) has recently been successfully applied in robust regression and filtering. The default kernel function in correntropy is the Gaussian kernel, which is, of course, not always the best choice. In this work, we propose a generalized correntropy that adopts the generalized Gaussian density (GGD) function as the kernel (not necessarily a Mercer kernel), and present some important properties. We further propose the generalized maximum correntropy criterion (GMCC), and apply it to adaptive filtering. An adaptive algorithm, called the GMCC algorithm, is derived, and the mean square convergence performance is studied. We show that the proposed algorithm is very stable and can achieve zero probability of divergence (POD). Simulation results confirm the theoretical expectations and demonstrate the desirable performance of the new algorithm.
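As a rough illustration of the generalized correntropy idea above, the generalized Gaussian density kernel can be evaluated in a few lines. The normalization below follows the standard GGD form (shape `alpha`, bandwidth `beta`), which may differ from the paper's exact convention:

```python
import math

def ggd_kernel(e, alpha=2.0, beta=1.0):
    # generalized Gaussian density used as the correntropy kernel;
    # alpha controls tail heaviness (alpha=2 recovers the Gaussian,
    # alpha=1 the Laplacian), beta sets the bandwidth
    norm = alpha / (2.0 * beta * math.gamma(1.0 / alpha))
    return norm * math.exp(-abs(e / beta) ** alpha)
```

Choosing `alpha` other than 2 changes how sharply large errors are discounted, which is the extra flexibility the GMCC exploits.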
Article
Full-text available
The thermodynamic properties of crystals can be routinely calculated by density functional theory calculations combined with the quasi-harmonic approximation. Based on the method developed recently by Wu and Wentzcovitch (Phys Rev B 79:104304, 2009) and Wu (Phys Rev B 81:172301, 2010), we are able to further include the anharmonic effect on the thermodynamic properties of crystals ab initio with one additional canonical-ensemble (NVT; fixed number of particles, volume, and temperature) molecular dynamics simulation. Our study indicates that phonon-phonon interaction causes the renormalized phonon frequencies of wadsleyite to decrease with temperature, consistent with Raman experimental observations. The anharmonic free energy of wadsleyite is negative, and its heat capacity at constant pressure can exceed the Dulong-Petit limit at high temperature. Anharmonicity still significantly affects the thermodynamic properties of wadsleyite at pressure and temperature conditions corresponding to the transition zone.
Article
Full-text available
In order to improve the performance of least mean square (LMS) based system identification of sparse systems, a new adaptive algorithm is proposed which exploits the sparsity of such systems. A general approach for approximating the ℓ0 norm, a natural metric of system sparsity, is proposed and integrated into the cost function of the LMS algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence rate of the small coefficients that dominate a sparse system can be effectively improved. Moreover, using a partial-updating method, the computational complexity is reduced. Simulations demonstrate that the proposed algorithm can effectively improve the performance of LMS-based identification algorithms on sparse systems.
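The zero-attractor idea can be sketched as a single LMS iteration. This assumes the common exponential approximation of the ℓ0 norm, and the parameters `mu`, `rho`, and `beta` are illustrative only, not values from the paper:

```python
import numpy as np

def l0_lms_step(w, x, d, mu=0.01, rho=1e-4, beta=5.0):
    # one l0-LMS iteration under the approximation
    # ||w||_0 ~ sum_i (1 - exp(-beta*|w_i|)); its (sub)gradient acts as a
    # zero attractor that is strongest on near-zero coefficients
    e = d - w @ x                                    # a-priori error
    attractor = beta * np.sign(w) * np.exp(-beta * np.abs(w))
    return w + mu * e * x - rho * attractor
```

Large taps feel almost no attraction (the exponential vanishes), so only the small coefficients that dominate a sparse system are pulled toward zero.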
Article
Full-text available
The authors propose a new approach for the adaptive identification of sparse systems. This approach improves on the recursive least squares (RLS) algorithm by adding a sparsity inducing weighted ℓ1 norm penalty to the RLS cost function. Subgradient analysis is utilised to develop the recursive update equations for the calculation of the optimum system estimate, which minimises the regularised cost function. Two new algorithms are introduced by considering two different weighting scenarios for the ℓ1 norm penalty. These new ℓ1 relaxation-based RLS algorithms emphasise sparsity during the adaptive filtering process, and they allow for faster convergence than standard RLS when the system under consideration is sparse. The authors test the performance of the novel algorithms and compare it with standard RLS and other adaptive algorithms for sparse system identification.
Conference Paper
Full-text available
We propose a new approach to adaptive system identification when the system model is sparse. The approach applies ℓ1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the zero-attracting LMS (ZA-LMS) and the reweighted zero-attracting LMS (RZA-LMS). The ZA-LMS is derived by combining an ℓ1-norm penalty on the coefficients with the quadratic LMS cost function, which generates a zero attractor in the LMS iteration. The zero attractor promotes sparsity in the taps during the filtering process and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve a lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is numerically superior to that of the ZA-LMS. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust as the number of non-zero taps increases.
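A minimal sketch of the two updates; the step sizes `mu`, `rho` and the reweighting constant `eps` are illustrative, not the paper's values:

```python
import numpy as np

def za_lms_step(w, x, d, mu=0.01, rho=1e-4):
    # ZA-LMS: the l1 penalty adds a uniform pull of every tap toward zero
    e = d - w @ x
    return w + mu * e * x - rho * np.sign(w)

def rza_lms_step(w, x, d, mu=0.01, rho=1e-4, eps=0.05):
    # RZA-LMS: reweighting weakens the pull on large taps, so dominant
    # coefficients are barely shrunk while near-zero taps are still attracted
    e = d - w @ x
    return w + mu * e * x - rho * np.sign(w) / (1.0 + np.abs(w) / eps)
```

The reweighted attractor is what makes RZA-LMS tolerant of systems with more non-zero taps: the bias it introduces on the large coefficients is much smaller than under ZA-LMS.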
Article
Full-text available
In this letter, the RLS adaptive algorithm is considered in the system identification setting. The RLS algorithm is regularized using a general convex function of the system impulse response estimate. The normal equations corresponding to the convex regularized cost function are derived, and a recursive algorithm for the update of the tap estimates is established. We also introduce a closed-form expression for selecting the regularization parameter. With this selection of the regularization parameter, we show that the convex regularized RLS algorithm performs as well as, and possibly better than, the regular RLS when there is a constraint on the value of the convex function evaluated at the true weight vector. Simulations demonstrate the superiority of the convex regularized RLS with automatic parameter selection over regular RLS in the sparse system identification setting. Index Terms—Adaptive filter, convex regularization, ℓ1 norm, ℓ0 norm, RLS, sparsity.
Article
Full-text available
Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the ℓ1 norm when outliers occur.
Article
Full-text available
Algorithms for the estimation of a channel whose impulse response is characterized by a large number of zero tap coefficients are developed and compared. Estimation is conducted in a two-stage fashion, where an estimate of the non-zero taps is followed by channel estimation. Tap detection is transformed into an equivalent on-off keying detection problem. Several tap detection algorithms are investigated which trade off complexity against performance. The proposed methods are compared to an unstructured least squares channel estimate as well as a structured approach based on matching pursuit. Three schemes in particular are developed: a sphere-decoder-based scheme, a Viterbi-algorithm-based method, and a simpler iterative approach; the latter offers a better tradeoff between estimation accuracy and computational cost. A joint estimation and zero-tap detection scheme is also considered. All solutions exhibit a significant gain in terms of mean-squared error and bit error rate over conventional schemes that do not exploit the sparse nature of the channel, as well as over the matching pursuit approach, which does endeavor to exploit the sparsity.
Article
Full-text available
The optimality of second-order statistics depends heavily on the assumption of Gaussianity. In this paper, we elucidate further the probabilistic and geometric meaning of the recently defined correntropy function as a localized similarity measure. A close relationship between correntropy and M-estimation is established. Connections and differences between correntropy and kernel methods are presented. As such correntropy has vastly different properties compared with second-order statistics that can be very useful in non-Gaussian signal processing, especially in the impulsive noise environment. Examples are presented to illustrate the technique.
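The sample estimator of correntropy described above is simple enough to state directly. This sketch uses a Gaussian kernel with an illustrative width `sigma`:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    # sample correntropy: mean Gaussian-kernel similarity of paired samples;
    # an outlier contributes ~0 instead of dominating as it would in MSE
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2))))
```

Identical signals score 1, and a single impulsive outlier only lowers the score by roughly one sample's share, which is the "localized similarity" property the paper analyzes.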
Article
To remain competitive, enterprises must build and manage product/service customisation and dynamic collaborative networks to respond to market needs in a flexible manner with competitive price and high product/service quality. This study deals with the emerging research problem of optimally dynamic combined decisions: retail price, stock depletion time/service level (SL) and replenishment schedule/quantity in a decentralised two-echelon perishable product collaborative network comprising a single supplier and a single retailer (buyer) over a finite multi-period planning horizon. Analytical solutions are derived using calculus with dynamic programming under two different trading policies, namely retailer-managed inventory with price-only (RMIPO) mode and vendor-managed inventory with consignment contract (VMICC) mode. In both models, shortages are allowed and are fully backlogged. The validity of the proposed approach is demonstrated using the case of a simple supply chain comprising a regional seafood supplier and a local branch store of a national retail chain. The results show that the VMICC policy yields lower retail price, larger replenishment quantity, higher SL and greater channel-wide profit than the RMIPO policy and achieves a win-win situation for both parties in the supply chain. Additionally, consumers benefit from lower retail price and higher SL in the VMICC policy.
Article
In this letter, a robust kernel adaptive algorithm, called the kernel recursive maximum correntropy (KRMC), is derived in kernel space under the maximum correntropy criterion (MCC). The proposed algorithm is particularly useful for nonlinear and non-Gaussian signal processing, especially when data contain large outliers or are disturbed by impulsive noise. The superior performance of KRMC is confirmed by simulation results on short-term chaotic time series prediction in alpha-stable noise environments.
Article
The maximum correntropy criterion (MCC) has received increasing attention in signal processing and machine learning due to its robustness against outliers (or impulsive noises). Some gradient based adaptive filtering algorithms under MCC have been developed and available for practical use. The fixed-point algorithms under MCC are, however, seldom studied. In particular, too little attention has been paid to the convergence issue of the fixed-point MCC algorithms. In this letter, we will study this problem and give a sufficient condition to guarantee the convergence of a fixed-point MCC algorithm.
Article
A new ent-kaurane diterpenoid, 6α,16α-dihydroxy-ent-kaurane (1), was isolated from the stems of Ixora amplexicaulis, together with (24R)-6β-hydroxy-24-ethyl-cholest-4-en-3-one (2), 7β-hydroxysitosterol (3), maslinic acid (4), 3,3'-bis(3,4-dihydro-4-hydroxy-6-methoxy-2H-1-benzopyran) (5) and protocatechuic acid (6). Their structures were established by extensive spectroscopic analysis, including 2D NMR techniques. Compounds 2-5 were isolated from the genus Ixora for the first time, and 6 was obtained from I. amplexicaulis for the first time.
Article
In this paper, we describe two related algorithms that provide both rigid and non-rigid point set registration with different computational complexity and accuracy. The first algorithm utilizes a nonlinear similarity measure known as correntropy. The measure combines second and high order moments in its decision statistic showing improvements especially in the presence of impulsive noise. The algorithm assumes that the correspondence between the point sets is known, which is determined with the surprise metric. The second algorithm mitigates the need to establish a correspondence by representing the point sets as probability density functions (PDF). The registration problem is then treated as a distribution alignment. The method utilizes the Cauchy-Schwarz divergence to measure the similarity/distance between the point sets and recover the spatial transformation function needed to register them. Both algorithms utilize information theoretic descriptors; however, correntropy works at the realizations level, whereas Cauchy-Schwarz divergence works at the PDF level. This allows correntropy to be less computationally expensive, and for correct correspondence, more accurate. The two algorithms are robust against noise and outliers and perform well under varying levels of distortion. They outperform several well-known and state-of-the-art methods for point set registration.
Article
The sparse adaptive channel estimation problem is one of the most important topics in broadband wireless communication systems due to its simplicity and robustness. So far, many sparsity-aware channel estimation algorithms have been developed based on the well-known minimum mean square error (MMSE) criterion, such as the zero-attracting least mean square (ZALMS), which are robust under the Gaussian assumption. In non-Gaussian environments, however, these methods are often no longer robust, especially when systems are disturbed by random impulsive noises. To address this problem, we propose in this work a robust sparse adaptive filtering algorithm using the correntropy induced metric (CIM) penalized maximum correntropy criterion (MCC), rather than the conventional MMSE criterion, for robust channel estimation. Specifically, the MCC is utilized to mitigate the impulsive noise, while the CIM is adopted to exploit the channel sparsity efficiently. Both theoretical analysis and computer simulations are provided to corroborate the proposed methods.
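The CIM penalty mentioned above can be illustrated with a small sketch (the kernel width `sigma` is illustrative). With a small width, the squared CIM between a weight vector and zero acts as a smooth surrogate for the ℓ0 norm:

```python
import numpy as np

def cim_squared(w, sigma=0.05):
    # squared correntropy induced metric between w and the zero vector;
    # as sigma shrinks, each nonzero entry contributes ~1/N, so the value
    # approaches the fraction of nonzero coefficients (a scaled l0 norm)
    w = np.asarray(w, float)
    return float(np.mean(1.0 - np.exp(-w ** 2 / (2.0 * sigma ** 2))))
```

A sparse vector therefore incurs a much smaller penalty than a dense one of comparable energy, which is what steers the adaptation toward sparse channel estimates.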
Article
The steady-state excess mean square error (EMSE) of the adaptive filtering under the maximum correntropy criterion (MCC) has been studied. For Gaussian noise case, we establish a fixed-point equation to solve the exact value of the steady-state EMSE, while for non-Gaussian noise case, we derive an approximate analytical expression for the steady-state EMSE, based on a Taylor expansion approach. Simulation results agree with the theoretical calculations quite well.
Conference Paper
Correntropy, a similarity function with attractive properties, has been applied in many settings. Its standard estimator computes correntropy as a plain mean, which is a drawback when tracking time-varying parameters. We therefore first study an estimator of correntropy with a forgetting factor (Correntropy-FF), which computes correntropy as a weighted average and can thus emphasize the factors that matter for the estimation result. A recursive learning algorithm is derived using Correntropy-FF as the cost function. In a time-varying system the current result is most affected by recent samples, and the kernel function plays the same role in Correntropy-FF as in correntropy. Simulation results on traffic network prediction indicate the effectiveness and availability of the proposed algorithm.
Article
In a recent paper, we developed a novel quantized kernel least mean square algorithm, in which the input space is quantized (partitioned into smaller regions) and the network size is upper bounded by the quantization codebook size (number of the regions). In this paper, we propose the quantized kernel least squares regression, and derive the optimal solution. By incorporating a simple online vector quantization method, we derive a recursive algorithm to update the solution, namely the quantized kernel recursive least squares algorithm. The good performance of the new algorithm is demonstrated by Monte Carlo simulations.
Article
Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to both error correction and error detection. More specifically, using the additive form of HQ, we propose an ℓ1-regularized error correction method that iteratively recovers corrupted data from errors incurred by noises and outliers; using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method that learns from uncorrupted data iteratively. We also show that the ℓ1 regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
Article
This paper exemplifies that the use of multiple kernels leads to efficient adaptive filtering for nonlinear systems. Two types of multikernel adaptive filtering algorithms are proposed. One is a simple generalization of the kernel normalized least mean square (KNLMS) algorithm [2], adopting a coherence criterion for dictionary design. The other is derived by applying the adaptive proximal forward-backward splitting method to a certain squared distance function plus a weighted block ℓ1-norm penalty, encouraging sparsity of the adaptive filter at the block level for efficiency. The proposed multikernel approach enjoys a higher degree of freedom than approaches that design a kernel as a convex combination of multiple kernels. Numerical examples show that the proposed approach achieves significant gains, particularly for nonstationary data, as well as insensitivity to the choice of some design parameters.
Article
Low-rank matrix recovery algorithms aim to recover a corrupted low-rank matrix with sparse errors. However, corrupted errors may not be sparse in real-world problems and the relationship between L1 regularizer on noise and robust M-estimators is still unknown. This paper proposes a general robust framework for low-rank matrix recovery via implicit regularizers of robust M-estimators, which are derived from convex conjugacy and can be used to model arbitrarily corrupted errors. Based on the additive form of half-quadratic optimization, proximity operators of implicit regularizers are developed such that both low-rank structure and corrupted errors can be alternately recovered. In particular, the dual relationship between the absolute function in L1 regularizer and Huber M-estimator is studied, which establishes a relationship between robust low-rank matrix recovery methods and M-estimators based robust principal component analysis methods. Extensive experiments on synthetic and real-world datasets corroborate our claims and verify the robustness of the proposed framework.
Article
Kernel adaptive filters have drawn increasing attention due to their advantages, such as universal nonlinear approximation with universal kernels and linearity and convexity in a reproducing kernel Hilbert space (RKHS). Among them, the kernel least mean square (KLMS) algorithm deserves particular attention because of its simplicity and sequential learning approach. Similar to most conventional adaptive filtering algorithms, the KLMS adopts the mean square error (MSE) as the adaptation cost. However, mere second-order statistics are often not suitable for nonlinear and non-Gaussian situations. Therefore, various non-MSE criteria, which involve higher-order statistics, have received increasing interest. Recently, correntropy, as an alternative to MSE, has been successfully used in nonlinear and non-Gaussian signal processing and machine learning domains. This fact motivates us in this paper to develop a new kernel adaptive algorithm, called the kernel maximum correntropy (KMC), which combines the advantages of the KLMS and the maximum correntropy criterion (MCC). We also study its convergence and self-regularization properties using the energy conservation relation. The superior performance of the new algorithm is demonstrated by simulation experiments on the noisy frequency doubling problem.
Article
As a new measure of similarity, the correntropy can be used as an objective function for many applications. In this letter, we study Bayesian estimation under maximum correntropy (MC) criterion. We show that the MC estimation is, in essence, a smoothed maximum a posteriori (MAP) estimation, including the MAP and the minimum mean square error (MMSE) estimation as the extreme cases. We also prove that under a certain condition, when the kernel size in correntropy is larger than some value, the MC estimation will have a unique optimal solution lying in a strictly concave region of the smoothed posterior distribution.
Article
In this paper, we investigate the holographic descriptions of two kinds of black rings, the neutral doubly rotating black ring and the dipole charged black ring. For generic nonextremal black rings, the information of the holographic CFT duals, including the central charges and the left- and right-moving temperatures, can be read from the thermodynamics at the outer and inner horizons, as suggested in arXiv:1206.2015. To confirm these pictures, we study the extremal black rings in the well-established formalism. We compute the central charges of the dual CFTs by performing an asymptotic symmetry group analysis in the stretched horizon formalism, and find exact agreement. Moreover, we study the superradiant scattering of a scalar field off the near-extremal black rings and obtain the scattering amplitudes, which are in good agreement with the CFT predictions.
Conference Paper
Correntropy has recently been defined as a localized similarity measure between two random variables that exploits higher-order moments of the data. This paper presents the use of correntropy as a cost function for minimizing the error between the desired signal and the output of an adaptive filter, in order to train the filter weights. We show that this cost function has the computational simplicity of the popular LMS algorithm, along with the robustness obtained by using higher-order moments for error minimization. We apply this technique to system identification and noise cancellation configurations. The results demonstrate the advantages of the proposed cost function compared with the LMS algorithm and the recently proposed minimum error entropy (MEE) cost function.
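A minimal sketch of such an MCC-based stochastic-gradient update. The step size `mu` and kernel width `sigma` are illustrative, and this is the generic MCC-LMS form rather than any one paper's exact algorithm:

```python
import numpy as np

def mcc_lms_step(w, x, d, mu=0.1, sigma=1.0):
    # stochastic gradient ascent on correntropy: the plain LMS step scaled
    # by a Gaussian weight of the error, so impulsive errors barely adapt w
    e = d - w @ x
    return w + mu * np.exp(-e ** 2 / (2.0 * sigma ** 2)) * e * x
```

The per-sample cost is one extra exponential over LMS, which is the "computational simplicity" claimed above, while the Gaussian weight supplies the robustness.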
Article
In the context of acoustic echo cancellation (AEC), it is shown that the level of sparseness in acoustic impulse responses can vary greatly in a mobile environment. When the response is strongly sparse, convergence of conventional approaches is poor. Drawing on techniques originally developed for network echo cancellation (NEC), we propose a class of AEC algorithms that can not only work well in both sparse and dispersive circumstances, but also adapt dynamically to the level of sparseness using a new sparseness-controlled approach. Simulation results, using white Gaussian noise (WGN) and speech input signals, show improved performance over existing methods. The proposed algorithms achieve these improvements with only a modest increase in computational complexity.
Article
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the “redundant” data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
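A toy sketch of the quantization idea described above (the codebook threshold `eps_q`, step size `mu`, and kernel width `sigma` are illustrative, not the paper's settings): a new input either merges its update into the coefficient of the nearest existing center or becomes a new center.

```python
import numpy as np

def qklms_predict(x, centers, coeffs, sigma=1.0):
    # filter output: Gaussian-kernel expansion over the quantization codebook
    return sum(a * np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))
               for c, a in zip(centers, coeffs))

def qklms_step(x, d, centers, coeffs, mu=0.5, sigma=1.0, eps_q=0.1):
    # QKLMS sketch: if x lies within eps_q of an existing center, the update
    # is merged into that center's coefficient instead of growing the network
    e = d - qklms_predict(x, centers, coeffs, sigma)
    if centers:
        dists = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps_q:
            coeffs[j] += mu * e
            return e
    centers.append(np.asarray(x, float).copy())
    coeffs.append(mu * e)
    return e
```

Because repeated or nearby inputs reuse existing centers, the network size is bounded by the codebook size rather than by the number of training samples.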