Fast Variational Sparse Bayesian Learning With Automatic Relevance Determination for Superimposed Signals

Dept. of Electr. Eng., Princeton Univ., Princeton, NJ, USA
IEEE Transactions on Signal Processing (Impact Factor: 2.79). 01/2012; 59(12):6257 - 6261. DOI: 10.1109/TSP.2011.2168217
Source: IEEE Xplore


In this work, a new fast variational sparse Bayesian learning (SBL) approach with automatic relevance determination (ARD) is proposed. The sparse Bayesian modeling, exemplified by the relevance vector machine (RVM), allows a sparse regression or classification function to be constructed as a linear combination of a few basis functions. It is demonstrated that, by computing the stationary points of the variational update expressions with noninformative (ARD) hyperpriors, a fast version of variational SBL can be constructed. Analysis of the computed stationary points indicates that SBL with Gaussian sparsity priors and noninformative hyperpriors corresponds to removing components with signal-to-noise ratio below a 0 dB threshold; this threshold can also be adjusted to significantly improve the convergence rate and sparsity of SBL. It is demonstrated that the pruning conditions derived for fast variational SBL coincide with those obtained for fast marginal likelihood maximization; moreover, the parameters that maximize the variational lower bound also maximize the marginal likelihood function. The effectiveness of fast variational SBL is demonstrated with synthetic as well as with real data.
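The pruning rule described in the abstract can be illustrated in code. Below is a minimal sketch in the spirit of fast marginal likelihood maximization, with which the paper shows the variational pruning conditions coincide: a basis function i is kept only when its quality and sparsity factors satisfy q_i^2 > tau * s_i, where tau = 1 (0 dB) recovers the classical test and larger values prune more aggressively. The function name, the cyclic update schedule, and the known fixed noise variance are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fast_vsbl(Phi, y, sigma2, snr_db=0.0, n_iter=200):
    """Sketch of fast SBL basis selection with an adjustable SNR pruning
    threshold (illustrative simplification, not the paper's implementation)."""
    N, M = Phi.shape
    tau = 10.0 ** (snr_db / 10.0)       # 0 dB -> tau = 1, classical q^2 > s test
    alpha = np.full(M, np.inf)          # inf marks a pruned basis function
    # start from the basis function best aligned with the observations
    i0 = int(np.argmax((Phi.T @ y) ** 2 / np.sum(Phi ** 2, axis=0)))
    s0 = np.sum(Phi[:, i0] ** 2) / sigma2
    q0 = (Phi[:, i0] @ y) / sigma2
    alpha[i0] = s0 ** 2 / max(q0 ** 2 - s0, 1e-12)
    for it in range(n_iter):
        act = np.isfinite(alpha)
        # C = sigma^2 I + Phi_act diag(1/alpha_act) Phi_act^T
        C = sigma2 * np.eye(N) + Phi[:, act] * (1.0 / alpha[act]) @ Phi[:, act].T
        Cinv = np.linalg.inv(C)
        S = np.einsum('nm,nk,km->m', Phi, Cinv, Phi)   # S_i = phi_i^T C^-1 phi_i
        Q = Phi.T @ (Cinv @ y)                          # Q_i = phi_i^T C^-1 y
        s, q = S.copy(), Q.copy()
        s[act] = alpha[act] * S[act] / (alpha[act] - S[act])
        q[act] = alpha[act] * Q[act] / (alpha[act] - S[act])
        i = it % M                       # cyclic sweep over all candidates
        if q[i] ** 2 > tau * s[i]:       # SNR above threshold: add / re-estimate
            alpha[i] = s[i] ** 2 / max(q[i] ** 2 - s[i], 1e-12)
        elif act[i] and act.sum() > 1:   # SNR below threshold: prune
            alpha[i] = np.inf
    act = np.isfinite(alpha)
    # posterior mean over the surviving basis functions
    Sigma = np.linalg.inv(np.diag(alpha[act]) + Phi[:, act].T @ Phi[:, act] / sigma2)
    mu = np.zeros(M)
    mu[act] = Sigma @ Phi[:, act].T @ y / sigma2
    return mu, act
```

Raising `snr_db` above 0 mimics the adjusted threshold the abstract mentions: components whose estimated SNR falls below the threshold are removed earlier, trading a sparser model for faster convergence.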

    • "The sparsity-inducing prior we will use was originally used for compressed sensing reconstruction via sparse Bayesian learning [23], [24], [28], [33]–[35]. The channel tap coefficients are modelled as i.i.d. "
    ABSTRACT: Recent progress in wireless receiver design has been towards iterative processing, where channel estimation and decoding are treated as a joint optimization problem. Sparse channel estimation is another recent advancement, which exploits the inherent structure of wireless channels composed of a small number of multipath components. In this work we design iterative receivers which incorporate sparse channel estimation. State-of-the-art sparse channel estimators simplify the estimation problem to a finite basis selection problem by restricting the multipath delays to the discrete domain (i.e., to a grid). Our main contribution is a receiver without such a restriction; the delays are estimated directly as continuous values. As a result, our receiver does not suffer from the leakage effect which destroys sparsity when the delays are restricted to the discrete domain. We discuss certain connections between continuous- and discrete-domain sparse estimation methods. Our receivers outperform state-of-the-art sparse channel estimation iterative receivers in terms of bit error rate.
    Preview · Article · Jul 2015
    • "Thus, estimating the parameters is crucial to the algorithm. There are many optimization methods which can be used to estimate these parameters, such as bound-optimization methods [15], fast marginal likelihood maximization [25], and variational methods [26]. In this work, we use the expectation maximization (EM) method to estimate them, since we find that the resulting algorithm provides better recovery performance in our application. "
    ABSTRACT: Energy consumption is an important issue in continuous wireless telemonitoring of physiological signals. Compressed sensing (CS) is a promising framework to address it, due to its energy-efficient data compression procedure. However, most CS algorithms have difficulty in data recovery because many physiological signals are not sparse. Block sparse Bayesian learning (BSBL) is an effective approach to recover such signals with satisfactory quality. However, it is time-consuming when recovering multichannel signals, since its computational load increases almost linearly with the number of channels. This work proposes a spatiotemporal sparse Bayesian learning algorithm to recover multichannel signals simultaneously. It exploits not only the temporal correlation within each channel signal but also the inter-channel correlation among different channel signals. Furthermore, its computational load is not significantly affected by the number of channels. The proposed algorithm was applied to brain-computer interface (BCI) and EEG-based driver's drowsiness estimation. Results showed that the algorithm had both better recovery performance and much higher speed than BSBL. In particular, the proposed algorithm ensured that the BCI classification and the drowsiness estimation suffered little degradation even when data were compressed by 80%, making it very suitable for continuous wireless telemonitoring of multichannel signals.
    Full-text · Article · Nov 2014 · IEEE Transactions on Neural Systems and Rehabilitation Engineering
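The EM alternative mentioned in the excerpt above can be illustrated with the classical RVM-style EM updates for the SBL hyperparameters; this is a generic sketch under standard assumptions, not the cited paper's block-structured spatiotemporal algorithm. The E-step computes the Gaussian posterior over the weights, and the M-step re-estimates each precision alpha_i and the noise variance.

```python
import numpy as np

def em_sbl(Phi, y, n_iter=200):
    """Generic EM updates for SBL hyperparameters (RVM-style sketch)."""
    N, M = Phi.shape
    alpha = np.ones(M)                  # weight precisions
    sigma2 = 0.1 * np.var(y)            # rough initial noise variance
    for _ in range(n_iter):
        # E-step: Gaussian posterior over the weights
        Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / sigma2)
        mu = Sigma @ Phi.T @ y / sigma2
        # M-step: EM re-estimates of the hyperparameters
        gamma = 1.0 - alpha * np.diag(Sigma)   # per-basis well-determinedness
        sigma2 = (np.sum((y - Phi @ mu) ** 2) + sigma2 * np.sum(gamma)) / N
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))
    return mu, alpha, sigma2
```

Precisions alpha_i of irrelevant basis functions grow without bound under these updates, which is exactly the ARD mechanism that drives their weights to zero; the fast methods compared in the excerpt accelerate this by pruning such components explicitly.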
    • ", µ_N]^T is the posterior mean of w, µ_¬i results from µ after the exclusion of its i-th element, and the expectation ⟨·⟩ is taken w.r.t. the variational distributions q(·) of the parameters appearing within each pair of brackets. Notice that each element w_i of w is treated separately, and thus q(w_i) constitutes an individual factor on the right-hand side of (13), as opposed to having a single factor q(w) for the whole vector w, as in [27]; this is beneficial for the development of the adaptive schemes that will be presented in the next section. Working in a similar manner for the noise precision β, we get that q(β) is a Gamma distribution expressed as "
    ABSTRACT: Recently, a number of mostly $\ell_1$-norm regularized least squares type deterministic algorithms have been proposed to address the problem of \emph{sparse} adaptive signal estimation and system identification. From a Bayesian perspective, this task is equivalent to maximum a posteriori probability estimation under a sparsity promoting heavy-tailed prior for the parameters of interest. Following a different approach, this paper develops a unifying framework of sparse \emph{variational Bayes} algorithms that employ heavy-tailed priors in conjugate hierarchical form to facilitate posterior inference. The resulting fully automated variational schemes are first presented in a batch iterative form. Then it is shown that by properly exploiting the structure of the batch estimation task, new sparse adaptive variational Bayes algorithms can be derived, which have the ability to impose and track sparsity during real-time processing in a time-varying environment. The most important feature of the proposed algorithms is that they completely eliminate the need for computationally costly parameter fine-tuning, a necessary ingredient of sparse adaptive deterministic algorithms. Extensive simulation results are provided to demonstrate the effectiveness of the new sparse variational Bayes algorithms against state-of-the-art deterministic techniques for adaptive channel estimation. The results show that the proposed algorithms are numerically robust and exhibit in general superior estimation performance compared to their deterministic counterparts.
    Full-text · Article · Jan 2014 · IEEE Transactions on Signal Processing
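The fully factorized variational scheme quoted above, with an individual Gaussian factor q(w_i) per coefficient and a Gamma factor q(β) for the noise precision, can be sketched as batch coordinate-ascent updates for a Gaussian likelihood with Gamma hyperpriors. The hyperparameter values and the update schedule below are generic illustrative choices, not the paper's adaptive algorithm.

```python
import numpy as np

def vb_ard(Phi, y, a=1e-6, b=1e-6, c=1e-6, d=1e-6, n_iter=200):
    """Batch mean-field VB for y = Phi w + n with per-coefficient
    Gamma(a, b) precisions and Gamma(c, d) noise precision (sketch)."""
    N, M = Phi.shape
    col2 = np.sum(Phi ** 2, axis=0)     # squared column norms
    mu, var = np.zeros(M), np.ones(M)   # moments of each q(w_i)
    Ea, Eb = np.ones(M), 1.0            # E[alpha_i], E[beta]
    for _ in range(n_iter):
        r = y - Phi @ mu                # residual, maintained incrementally
        for i in range(M):
            r += Phi[:, i] * mu[i]      # residual with phi_i excluded
            var[i] = 1.0 / (Eb * col2[i] + Ea[i])
            mu[i] = var[i] * Eb * (Phi[:, i] @ r)
            r -= Phi[:, i] * mu[i]
        Ew2 = mu ** 2 + var             # E[w_i^2] under q(w_i)
        Ea = (a + 0.5) / (b + 0.5 * Ew2)            # q(alpha_i) Gamma updates
        Eres = r @ r + np.sum(var * col2)           # E||y - Phi w||^2
        Eb = (c + 0.5 * N) / (d + 0.5 * Eres)       # q(beta) Gamma update
    return mu, Ea, Eb
```

The per-element factorization is what makes each q(w_i) update depend only on the residual with phi_i removed; this is the property the excerpt highlights as enabling the adaptive (per-sample) variants.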