Fast Variational Sparse Bayesian Learning With Automatic Relevance Determination for Superimposed Signals
ABSTRACT In this work, a new fast variational sparse Bayesian learning (SBL) approach with automatic relevance determination (ARD) is proposed. The sparse Bayesian modeling, exemplified by the relevance vector machine (RVM), allows a sparse regression or classification function to be constructed as a linear combination of a few basis functions. It is demonstrated that, by computing the stationary points of the variational update expressions with noninformative (ARD) hyperpriors, a fast version of variational SBL can be constructed. Analysis of the computed stationary points indicates that SBL with Gaussian sparsity priors and noninformative hyperpriors corresponds to removing components with signal-to-noise ratio below a 0 dB threshold; this threshold can also be adjusted to significantly improve the convergence rate and sparsity of SBL. It is demonstrated that the pruning conditions derived for fast variational SBL coincide with those obtained for fast marginal likelihood maximization; moreover, the parameters that maximize the variational lower bound also maximize the marginal likelihood function. The effectiveness of fast variational SBL is demonstrated with synthetic as well as with real data.
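The pruning rule described above — keeping a component only when its estimated signal-to-noise ratio clears a threshold — is closely related to the add/re-estimate/prune test of fast marginal likelihood maximization for SBL. The sketch below is an illustrative, simplified implementation: it recomputes the marginal covariance from scratch at each step (a fast implementation would update it incrementally), and the function name and parameters are assumptions, not the paper's code.

```python
import numpy as np

def fast_sbl(Phi, t, sigma2, n_iter=10):
    """Illustrative fast SBL via marginal-likelihood maximization.
    Phi: (N, M) dictionary of basis functions; t: (N,) observations;
    sigma2: (assumed known) noise variance."""
    N, M = Phi.shape
    alpha = np.full(M, np.inf)              # alpha_i = inf  =>  basis i pruned
    for _ in range(n_iter):
        for i in range(M):
            active = np.isfinite(alpha)
            # marginal covariance C = sigma2*I + Phi_a diag(1/alpha_a) Phi_a^T
            C = sigma2 * np.eye(N)
            if active.any():
                C += (Phi[:, active] / alpha[active]) @ Phi[:, active].T
            Cinv = np.linalg.inv(C)
            phi = Phi[:, i]
            S = phi @ Cinv @ phi            # "sparsity" factor
            Q = phi @ Cinv @ t              # "quality" factor
            if np.isfinite(alpha[i]):       # leave-component-i-out correction
                s = alpha[i] * S / (alpha[i] - S)
                q = alpha[i] * Q / (alpha[i] - S)
            else:
                s, q = S, Q
            # keep the component only if q^2 > s, i.e. its estimated
            # component SNR exceeds the 0 dB threshold; otherwise prune
            alpha[i] = s**2 / (q**2 - s) if q**2 > s else np.inf
    active = np.isfinite(alpha)
    # posterior mean of the surviving weights
    Sigma = np.linalg.inv(np.diag(alpha[active])
                          + Phi[:, active].T @ Phi[:, active] / sigma2)
    mu = Sigma @ Phi[:, active].T @ t / sigma2
    return active, mu
```

Raising the threshold in the `q**2 > s` test (e.g. requiring `q**2 > c * s` for `c > 1`) is one way to realize the adjustable-threshold idea mentioned in the abstract: a stricter test prunes more aggressively and tends to converge in fewer sweeps.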
- Source available from: Y. S. Yeun
ABSTRACT: This paper addresses binary classification with the probit model via the expectation–maximization (EM) algorithm. In the Bayesian treatment of the probit model, latent variables are usually introduced to handle the otherwise intractable posterior, with one latent variable per training sample. However, the EM algorithm then requires matrix inversions, which become computationally expensive when the number of training samples is large. To overcome this problem, we employ the group latent-variable approach, in which each training sample is associated with multiple latent variables instead of just one. The major advantage of this approach, which originates from Bayesian backfitting, is that no matrix inversions are required in the EM algorithm for the probit model. To obtain sparse classifiers, a Laplacian prior is employed, and a method to control the degree of sparseness is presented. Although the sparsity of the classifier is not determined fully automatically, it can be controlled by specifying a single parameter; in other words, we are free to choose the degree of sparseness. The proposed method is compared with the support vector machine, the relevance vector machine, and the generalized LASSO. Index Terms— Bayesian backfitting, binary classification, expectation–maximization (EM) algorithm, Laplacian prior, latent variable, probit model, sparseness.
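For context, the inversion bottleneck this abstract targets appears in the standard EM formulation of Bayesian probit regression: in kernel/RVM-style models the design matrix has one basis column per training sample, so the M-step involves an N×N solve. The sketch below shows this conventional formulation (not the proposed backfitting method); the function name and the regularization parameter `lam` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_probit(X, y, lam=1.0, n_iter=200):
    """Standard EM for probit regression with a Gaussian prior.
    y must take values in {-1, +1}. Illustrates the M-step matrix
    solve that the group latent-variable approach avoids."""
    N, D = X.shape
    w = np.zeros(D)
    # precompute (X^T X + lam I)^{-1} X^T, the costly step when D (or,
    # in kernel models, N) is large
    M_step = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T)
    for _ in range(n_iter):
        m = X @ w
        # E-step: posterior mean of each truncated-Gaussian latent variable
        z = m + y * norm.pdf(m) / norm.cdf(y * m)
        # M-step: ridge-type solve for the weights
        w = M_step @ z
    return w
```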
- Conference Paper: Adaptive variational sparse Bayesian estimation
ABSTRACT: This paper presents an online version of the widely used sparse Bayesian learning (SBL) algorithm. Exploiting the variational Bayes framework, an efficient online SBL algorithm is constructed that acts as a fully automatic learning method for the adaptive estimation of sparse time-varying signals. The new method is based on second-order statistics and comprises a simple, automated sparsity-imposing mechanism different from that of other known schemes. The effectiveness of the proposed online Bayesian algorithm is illustrated using experimental results on synthetic data. These results show that the proposed scheme achieves faster initial convergence and superior estimation performance compared with other related state-of-the-art schemes.
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
- Journal Article (IEEE Transactions on Signal Processing)
ABSTRACT: In this paper we address the problem of sparse signal reconstruction. We propose a new algorithm that determines the signal support by applying a statistical threshold to accept the active components of the model. This adaptive decision test is integrated into the sparse Bayesian learning method, improving its accuracy and reducing convergence time. Moreover, we extend the formulation to accept multiple measurement sequences of a signal contaminated by structured noise in addition to white noise. We also develop analytical expressions to evaluate the algorithm's estimation error as a function of the problem's sparsity and indeterminacy. In simulations, we compare the performance of the proposed algorithm with that of other existing methods. We also show a practical application, processing real polarimetric radar data to separate the target signal from the clutter.
IEEE Transactions on Signal Processing 11/2013; 61(21):5430-5443. DOI:10.1109/TSP.2013.2278811