Conference Paper

Fast adaptive variational sparse Bayesian learning with automatic relevance determination.

DOI: 10.1109/ICASSP.2011.5946760 Conference: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic
Source: DBLP

ABSTRACT In this work, a new adaptive fast variational sparse Bayesian learning (V-SBL) algorithm is proposed as a variational counterpart of the fast marginal likelihood maximization approach to SBL. It adaptively constructs a sparse regression or classification function as a linear combination of a few basis functions by minimizing the variational free energy. In the case of non-informative hyperpriors, also referred to as automatic relevance determination (ARD), the minimization of the free energy can be realized efficiently by computing the fixed points of the update expressions for the variational distribution of the sparsity parameters. The criteria that establish convergence to these fixed points, termed pruning conditions, allow an efficient addition or removal of basis functions; they also have a simple and intuitive interpretation in terms of a component's signal-to-noise ratio (SNR). This interpretation permits a simple empirical adjustment of the pruning conditions, which in turn improves the sparsity of SBL and drastically accelerates the convergence of the algorithm. Experimental evidence collected with synthetic data demonstrates the effectiveness of the proposed learning scheme.
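The pruning condition lends itself to a compact implementation. The sketch below is a minimal NumPy rendition of the fast SBL add/delete/re-estimate loop in the spirit of the abstract; it follows the fast marginal likelihood maximization scheme that the paper takes as its starting point (Tipping & Faul, 2003), not the paper's variational updates themselves, and the `snr_threshold` parameter is a hypothetical knob standing in for the empirically adjusted pruning condition described above.

```python
import numpy as np

def fast_sbl(Phi, t, beta, n_sweeps=50, snr_threshold=1.0):
    """Sparse regression t ~ Phi @ w with known noise precision beta.

    Sketch only: revisits one candidate basis function at a time and
    applies the SNR-style pruning test. snr_threshold should be >= 1.
    Returns the active basis indices and their posterior mean weights.
    """
    N, M = Phi.shape
    alpha = np.full(M, np.inf)              # ARD precisions; inf == pruned
    for _ in range(n_sweeps):
        for i in range(M):
            active = np.flatnonzero(np.isfinite(alpha))
            phi = Phi[:, i]
            if active.size:
                Phi_a = Phi[:, active]
                # Posterior covariance of the currently active weights.
                Sigma = np.linalg.inv(np.diag(alpha[active])
                                      + beta * Phi_a.T @ Phi_a)
                proj = beta * Phi_a.T @ phi
                # Sparsity (S) and quality (Q) factors of candidate i.
                S = beta * phi @ phi - proj @ Sigma @ proj
                Q = beta * phi @ t - proj @ Sigma @ (beta * Phi_a.T @ t)
            else:
                S = beta * phi @ phi
                Q = beta * phi @ t
            if np.isfinite(alpha[i]):       # i is in the model: deflate
                s = alpha[i] * S / (alpha[i] - S)
                q = alpha[i] * Q / (alpha[i] - S)
            else:
                s, q = S, Q
            # "Component SNR" test: keep/add basis i only while q^2 / s
            # exceeds the threshold; 1.0 recovers the standard q^2 > s test.
            if q * q > snr_threshold * s:
                alpha[i] = s * s / (q * q - s)   # fixed point of the update
            else:
                alpha[i] = np.inf                # prune basis i
    active = np.flatnonzero(np.isfinite(alpha))
    Phi_a = Phi[:, active]
    Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Phi_a.T @ Phi_a)
    mu = beta * Sigma @ Phi_a.T @ t
    return active, mu
```

With snr_threshold = 1.0 the test reduces to the standard relevance criterion q² > s; raising the threshold mimics the tightened pruning condition that the abstract reports as improving sparsity and accelerating convergence.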

Related publications:
  • ABSTRACT: In this work, a new online learning algorithm that uses automatic relevance determination (ARD) is proposed for fast adaptive nonlinear filtering. A sequential decision rule for the inclusion or deletion of basis functions is obtained by applying a recently proposed fast variational sparse Bayesian learning (SBL) method. The proposed scheme uses a sliding-window estimator to process the data in an online fashion. The noise variance can be implicitly estimated by the algorithm. It is shown that the described method achieves better mean square error (MSE) performance than a state-of-the-art kernel recursive least squares (Kernel-RLS) algorithm when using the same number of basis functions. (A schematic of this sliding-window scheme is sketched after this list.)
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 2011.
  • ABSTRACT: In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm builds on classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former method realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for "explaining" the desired signal, the latter method provides a basis for joint estimation of D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition.
    Neural Computation, 24(4):967-995, December 2011.
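As a rough illustration of the sliding-window scheme summarized in the first entry above, the sketch below buffers the last `window` samples and refits the `fast_sbl` routine from the earlier sketch on every new observation. The class name, the kernel-centred basis construction, and the full refit per step are illustrative assumptions: the paper instead derives a sequential inclusion/deletion rule and can estimate the noise variance implicitly, whereas here `beta` is held fixed to keep the sketch short.

```python
from collections import deque
import numpy as np

class SlidingWindowSBL:
    """Hypothetical online wrapper around fast_sbl (defined above)."""

    def __init__(self, kernel, window=100, beta=100.0):
        self.kernel = kernel          # e.g. a Gaussian kernel k(x, y)
        self.window = window          # sliding-window length W
        self.beta = beta              # noise precision, held fixed here
        self.xs = deque(maxlen=window)
        self.ts = deque(maxlen=window)
        self.centres = np.empty(0)
        self.mu = np.empty(0)

    def update(self, x, t):
        """Absorb one sample, then refit on the current window."""
        self.xs.append(x)
        self.ts.append(t)
        X = np.array(self.xs)
        t_win = np.array(self.ts)
        # Candidate basis functions: kernels centred on buffered inputs.
        Phi = np.array([[self.kernel(a, b) for b in X] for a in X])
        active, self.mu = fast_sbl(Phi, t_win, self.beta)
        self.centres = X[active]      # retained basis centres

    def predict(self, x):
        k = np.array([self.kernel(x, c) for c in self.centres])
        return float(k @ self.mu)
```

For scalar inputs a simple choice of kernel is, for instance, `lambda a, b: np.exp(-(a - b) ** 2)`; only the retained centres enter each prediction, which is what keeps the filter sparse.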
