Conference Paper

Fast adaptive variational sparse Bayesian learning with automatic relevance determination.

DOI: 10.1109/ICASSP.2011.5946760 Conference: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic
Source: DBLP

ABSTRACT In this work a new adaptive fast variational sparse Bayesian learning (V-SBL) algorithm is proposed as a variational counterpart of the fast marginal likelihood maximization approach to SBL. It allows one to adaptively construct a sparse regression or classification function as a linear combination of a few basis functions by minimizing the variational free energy. In the case of non-informative hyperpriors, also referred to as automatic relevance determination, the minimization of the free energy can be efficiently realized by computing the fixed points of the update expressions for the variational distribution of the sparsity parameters. The criteria that establish convergence to these fixed points, termed pruning conditions, allow an efficient addition or removal of basis functions; they also have a simple and intuitive interpretation in terms of a component’s signal-to-noise ratio. It is demonstrated that this interpretation allows a simple empirical adjustment of the pruning conditions, which in turn improves the sparsity of the SBL solution and drastically accelerates the convergence rate of the algorithm. The experimental evidence collected with synthetic data demonstrates the effectiveness of the proposed learning scheme.
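The pruning conditions described in the abstract parallel those of fast marginal likelihood maximization, where a basis function is retained only when its "quality" factor exceeds its "sparsity" factor. A minimal sketch of that mechanism follows (this is the classical fast-SBL scheme, not the exact V-SBL updates of the paper; the variable names and the fixed, known noise variance are illustrative assumptions):

```python
import numpy as np

def fast_sbl(Phi, t, noise_var=0.01, n_iter=50):
    """Sketch of fast marginal-likelihood SBL with ARD-style pruning.

    A component i is kept/added when q_i^2 > s_i (its "quality" exceeds
    its "sparsity") and pruned otherwise -- the condition that admits a
    signal-to-noise-ratio interpretation.  The noise variance is assumed
    known here for simplicity."""
    N, M = Phi.shape
    beta = 1.0 / noise_var          # noise precision (assumed known)
    alpha = np.full(M, np.inf)      # ARD precisions; inf == pruned
    active = []                     # indices of current basis functions

    for _ in range(n_iter):
        for i in range(M):
            phi = Phi[:, i]
            # C^{-1} via the Woodbury identity over the active set
            if active:
                Pa = Phi[:, active]
                Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Pa.T @ Pa)
                B = beta * np.eye(N) - beta**2 * Pa @ Sigma @ Pa.T
            else:
                B = beta * np.eye(N)
            S, Q = phi @ B @ phi, phi @ B @ t
            if i in active:         # remove component i's own contribution
                s = alpha[i] * S / (alpha[i] - S)
                q = alpha[i] * Q / (alpha[i] - S)
            else:
                s, q = S, Q
            if q**2 > s:            # pruning condition: keep / add i
                alpha[i] = s**2 / (q**2 - s)
                if i not in active:
                    active.append(i)
            elif i in active:       # prune component i
                alpha[i] = np.inf
                active.remove(i)

    Pa = Phi[:, active]
    Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Pa.T @ Pa)
    mu = beta * Sigma @ Pa.T @ t    # posterior mean of the active weights
    return active, mu
```

Initially all components are pruned (alpha_i = infinity); each sweep then re-estimates, adds, or removes components one at a time, which is what makes this family of schemes fast and adaptive.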

    ABSTRACT: In this paper a novel algorithm for the estimation and tracking of multipath components for range estimation with low-bandwidth signals is discussed. In multipath-rich environments, ranging with low-bandwidth signals becomes a challenging problem: unless multipath interference is resolved, large ranging errors are typical. In this work the estimation and tracking of individual multipath components is studied. The new technique combines sparse Bayesian learning and variational Bayesian parameter estimation with Kalman filtering. While the former are used to detect and estimate the individual components, Kalman filtering is used to track the estimated signals. Two assumptions are compared: independence of the multipath components, typical for classical multipath estimation schemes, versus correlation between the propagation paths. The latter has been found to improve component tracking and estimation at the cost of increased computational complexity. The performance of the algorithm is investigated using synthetic as well as real measurement data collected during flight trials. Significantly improved ranging performance can be obtained compared to standard correlation-based ranging.
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy; 05/2014
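The combination of per-snapshot sparse estimation with tracking can be illustrated by a generic linear-Gaussian Kalman filter applied to one component's delay and delay rate. The constant-velocity state model and the noise levels below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kalman_track(observations, F, H, Q, R, x0, P0):
    """Minimal linear-Gaussian Kalman filter, here used to smooth the
    per-snapshot delay estimates of a single multipath component."""
    x, P = x0, P0
    track = []
    for z in observations:
        x = F @ x                       # predict state
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (z - H @ x)         # update with the new measurement
        P = (np.eye(len(x)) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```

For a single path one might use a constant-velocity model on the delay, F = [[1, 1], [0, 1]], with H = [[1, 0]] picking out the observed delay; per-path filters of this kind are then fed by the component estimates of the sparse estimator.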
    ABSTRACT: In sparse Bayesian learning (SBL), approximate Bayesian inference is applied to find sparse estimates from observations corrupted by additive noise. The current literature only vaguely considers the case where the noise level is unknown a priori. We show that for most state-of-the-art reconstruction algorithms based on the fast inference scheme, noise precision estimation results in increased computational complexity and reconstruction error. We propose a three-layer hierarchical prior model that allows the derivation of a fast inference algorithm estimating the noise precision with no increase in complexity. Numerical results show that it matches or surpasses other algorithms in terms of reconstruction error.
    2013 Asilomar Conference on Signals, Systems and Computers; 11/2013
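For the noise-precision estimation discussed above, the standard variational Bayesian treatment of a linear model places a Gamma posterior on the precision. The sketch below shows this textbook two-layer update (not the three-layer hierarchy proposed in the cited paper); the broad hyperparameters a0, b0 are assumptions:

```python
import numpy as np

def update_noise_precision(t, Phi, mu, Sigma, a0=1e-6, b0=1e-6):
    """Variational Gamma update for the noise precision in t = Phi w + n,
    given the current Gaussian posterior q(w) = N(mu, Sigma)."""
    N = len(t)
    resid = t - Phi @ mu
    # E_q[||t - Phi w||^2] = ||t - Phi mu||^2 + tr(Sigma Phi^T Phi)
    # (cyclic trace property keeps the trace term cheap to evaluate)
    e2 = resid @ resid + np.trace(Sigma @ (Phi.T @ Phi))
    a = a0 + N / 2.0
    b = b0 + e2 / 2.0
    return a / b   # posterior mean E_q[beta]
```

Iterating this update together with the Gaussian update for q(w) is what couples noise estimation to the weight posterior; the cited paper's point is that in the fast inference scheme this coupling can be obtained without extra complexity.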
    ABSTRACT: In this paper we provide an algorithm for solving the variational Bayesian problem as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm to the metric space of probability densities involved in the Bayesian methodology. The main advantage of this methodology is that it allows large-dimensional inverse problems to be addressed by unsupervised algorithms. The interest of our algorithm is enhanced by its application to large-dimensional linear inverse problems involving sparse objects. Finally, we provide simulation results. First, we show the good numerical performance of our method by comparing it with classical ones on a small tomographic problem. Second, we treat a large-dimensional dictionary-learning problem and compare our method with a wavelet-based one. Keywords: ill-posed inverse problems, variational Bayesian methodology, sparse signal reconstruction, infinite-dimensional convex optimization. 1. Introduction. The recent development of information technologies has increased the prevalence of inverse problems over very large datasets. Whereas the 1990s saw the introduction of image reconstruction problems, current interest centers on 3D sequences (3D + T), and thus on large-dimensional datasets. There is therefore significant growth in the number of measurements in the involved problems; one frequently has to reconstruct more than one million data points. At the same time, signal processing techniques have helped to overcome the limitations of measuring instruments by enabling the design of systems involving indirect measurements. These new instruments introduced in exchange novel signal processing challenges, such as super-resolution deconvolution, source separation, and tomographic reconstruction. All these problems are ill-posed: the information contained in the data and in the acquisition model alone is not sufficient to obtain a good estimate of the unknown objects.
    SIAM Journal on Imaging Sciences 10/2014; 7(4). DOI:10.1137/140966575
