Conference Paper

Fast adaptive variational sparse Bayesian learning with automatic relevance determination.

DOI: 10.1109/ICASSP.2011.5946760 Conference: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic
Source: DBLP

ABSTRACT

In this work a new adaptive fast variational sparse Bayesian learning (V-SBL) algorithm is proposed that is a variational counterpart of the fast marginal likelihood maximization approach to SBL. It allows one to adaptively construct a sparse regression or classification function as a linear combination of a few basis functions by minimizing the variational free energy. In the case of non-informative hyperpriors, also referred to as automatic relevance determination, the minimization of the free energy can be efficiently realized by computing the fixed points of the update expressions for the variational distribution of the sparsity parameters. The criteria that establish convergence to these fixed points, termed pruning conditions, allow an efficient addition or removal of basis functions; they also have a simple and intuitive interpretation in terms of a component’s signal-to-noise ratio. It has been demonstrated that this interpretation allows a simple empirical adjustment of the pruning conditions, which in turn improves sparsity of SBL and drastically accelerates the convergence rate of the algorithm. The experimental evidence collected with synthetic data demonstrates the effectiveness of the proposed learning scheme.
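The add/remove mechanism described in the abstract can be illustrated with a short sketch. The code below follows the fast marginal-likelihood maximization scheme that the abstract names as the starting point (the sparsity/quality factors of Tipping and Faul), with the keep/prune test rephrased as an adjustable SNR threshold in the spirit of the paper. It is a minimal sketch under assumed names and conventions, not the authors' variational updates.

```python
import numpy as np

def fast_sbl(Phi, t, noise_var, snr_db=0.0, n_iter=50):
    """Minimal sketch: incremental SBL with an SNR-style pruning test.

    Follows the fast marginal-likelihood scheme the paper builds on,
    not the paper's exact variational updates. `snr_db = 0` recovers
    the standard keep/prune condition; raising it enforces extra
    sparsity, as the abstract suggests. All names are assumptions.
    """
    N, L = Phi.shape
    alpha = np.full(L, np.inf)        # alpha_l = inf  <=>  basis l pruned
    beta = 1.0 / noise_var            # noise precision (assumed known)
    thr = 10.0 ** (snr_db / 10.0)     # thr = 1  <=>  0 dB threshold
    for _ in range(n_iter):
        active = np.isfinite(alpha)
        Phi_a = Phi[:, active]
        # Gaussian posterior over the currently active weights
        Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * (Phi_a.T @ Phi_a))
        for l in range(L):
            phi = Phi[:, l]
            proj = Phi_a.T @ phi
            # "sparsity" (S) and "quality" (Q) factors of fast SBL
            S = beta * (phi @ phi) - beta**2 * (proj @ Sigma @ proj)
            Q = beta * (phi @ t) - beta**2 * (proj @ Sigma @ (Phi_a.T @ t))
            if active[l]:             # leave-one-out correction
                S, Q = S / (1.0 - S / alpha[l]), Q / (1.0 - S / alpha[l])
            # keep/add the component only if it clears the threshold,
            # otherwise prune it by sending alpha_l to infinity
            alpha[l] = S**2 / (Q**2 - S) if Q**2 > thr * S else np.inf
    # final refit on the surviving basis functions
    active = np.isfinite(alpha)
    Phi_a = Phi[:, active]
    Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * (Phi_a.T @ Phi_a))
    mu = beta * (Sigma @ (Phi_a.T @ t))
    return alpha, mu, active
```

With `snr_db > 0` fewer components survive each sweep; this is the empirical adjustment of the pruning conditions that the abstract credits with improved sparsity and faster convergence.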

  • Source
    • "This can be seen by substitutingˆα l = ∞ into (5) and (6) which leads to a zero variance and mean for the lth model weight and removes the influence of the corresponding basis function. The terms ς l and ρ 2 l in (10) can be efficiently computed without explicitly computing the matrix inversion in (9) for each l, which is given in [7]. "
    ABSTRACT: In this work a new online learning algorithm that uses automatic relevance determination (ARD) is proposed for fast adaptive nonlinear filtering. A sequential decision rule for the inclusion or deletion of basis functions is obtained by applying a recently proposed fast variational sparse Bayesian learning (SBL) method. The proposed scheme uses a sliding-window estimator to process the data in an online fashion, and the noise variance can be estimated implicitly by the algorithm. It is shown that the described method achieves better mean-square-error (MSE) performance than a state-of-the-art kernel recursive least squares (Kernel-RLS) algorithm when using the same number of basis functions.
    Full-text · Conference Paper · Jun 2011
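The sliding-window processing described in the abstract above can be sketched as a thin driver around the `fast_sbl` sketch shown earlier. The wrapper, the Gaussian kernel dictionary, and all names here are assumptions for illustration, not the authors' code.

```python
from collections import deque
import numpy as np

def online_sbl_filter(stream, W=100, noise_var=0.1):
    """Hypothetical sliding-window driver: keep the W newest samples
    and re-fit the sparse kernel model as each pair (x, t) arrives."""
    window = deque(maxlen=W)
    for x, t in stream:
        window.append((x, t))
        X = np.array([p[0] for p in window])
        T = np.array([p[1] for p in window])
        # Gaussian kernel dictionary built on the window (an assumption)
        Phi = np.exp(-(X[:, None] - X[None, :]) ** 2)
        alpha, mu, active = fast_sbl(Phi, T, noise_var)
        yield int(active.sum()), mu    # model size and current weights
```

A production version would update the posterior incrementally rather than re-fitting from scratch at every sample, but the windowed re-fit conveys the idea.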
  • Source
    • "depend exclusively on the measurement t and the matrixˆS that essentially determines how well a basis function " aligns " or correlates with the other basis functions in the active dictionary. Notice that the ratio ω 2 l /ς l can be interpreted as an estimate of the component signal-to-noise ratio 3 SNR l = ω 2 l /ς l [10]. Thus, SBL prunes a component if its estimated SNR is below 0dB. "
    Full-text · Conference Paper · Jan 2011
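Read literally, the pruning rule quoted above is a one-line check. Under the names used in the excerpt (assumed here), it amounts to:

```python
import numpy as np

def keep_component(omega_l, varsigma_l, threshold_db=0.0):
    """Component survives only if SNR_l = omega_l^2 / varsigma_l lies
    above the threshold (0 dB in standard SBL, per the excerpt)."""
    return 10.0 * np.log10(omega_l**2 / varsigma_l) > threshold_db
```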
  • Source
    ABSTRACT: In this paper we provide an algorithm adapted to variational Bayesian approximation. The main contribution is to transpose a classical iterative optimization algorithm to the metric space of measures involved in the Bayesian methodology. After establishing the convergence properties of this algorithm, we consider its application to large-dimensional inverse problems, especially unsupervised reconstruction. The interest of our method is enhanced by its application to large-dimensional linear inverse problems involving sparse objects. Finally, we provide simulation results: first we show the good numerical performance of our method compared with classical ones on a small example, and then we address a large-dimensional dictionary-learning problem.
    Full-text · Article · Oct 2014 · SIAM Journal on Imaging Sciences
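The abstract above stays at the level of optimization over measures. As a concrete, heavily simplified illustration of a gradient-type update on a variational family, the sketch below descends the free energy of a factorized Gaussian approximation for a linear inverse problem; the model, names, and step size are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def variational_gradient(A, t, sigma2=0.1, tau=1.0, lr=None, n_iter=200):
    """Sketch: gradient steps on the free energy of q(x) = N(m, diag(v))
    for the assumed model t = A x + noise, with Gaussian prior N(0, tau)
    on each coordinate of x and noise variance sigma2."""
    n, p = A.shape
    m = np.zeros(p)
    col2 = (A ** 2).sum(axis=0)                  # ||a_j||^2 per column
    # for this fully factorized model the optimal variances are closed-form
    v = 1.0 / (col2 / sigma2 + 1.0 / tau)
    # step size from the Lipschitz constant of the gradient (assumed choice)
    lr = lr or 1.0 / (np.linalg.norm(A, 2) ** 2 / sigma2 + 1.0 / tau)
    for _ in range(n_iter):
        grad_m = A.T @ (A @ m - t) / sigma2 + m / tau   # dF/dm
        m -= lr * grad_m                         # gradient step on the means
    return m, v
```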