Article

An approximate inference with Gaussian process to latent functions from uncertain data.

DAMAS Laboratory, Computer Science and Software Engineering Department, Laval University, Canada
Neurocomputing (Impact Factor: 2.01). 05/2011; 74:1945-1955. DOI: 10.1016/j.neucom.2010.09.024
Source: DBLP

ABSTRACT: Most formulations of supervised learning are based on the assumption that only the output data are uncertain. However, this assumption may be too strong for some learning tasks. This paper investigates the use of Gaussian processes to infer latent functions from a set of uncertain input–output examples. By assuming Gaussian distributions with known variances over the inputs and outputs, and by using the expectation of the covariance function, it is possible to compute analytically the expected covariance matrix of the data and thereby obtain a posterior distribution over functions. The method is evaluated on a synthetic problem and on a more realistic one, which consists of learning the dynamics of a cart–pole balancing task. The results indicate an improvement in the mean squared error and the likelihood of the posterior Gaussian process when the data uncertainty is significant.
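As a concrete illustration of the idea in the abstract, the sketch below builds an expected covariance matrix for a squared-exponential kernel when the training inputs are Gaussian-distributed with known means and covariances, using the standard closed form for the expectation of that kernel under independent Gaussian inputs. The kernel choice, the function names (expected_se_kernel, expected_gram, gp_predict_mean), and the hyperparameter handling are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def expected_se_kernel(mu_i, S_i, mu_j, S_j, lengthscales, sigma_f2):
        # Expectation of the squared-exponential kernel k(x_i, x_j) when
        # x_i ~ N(mu_i, S_i) and x_j ~ N(mu_j, S_j) are independent:
        #   E[k] = sigma_f2 * |I + Lambda^{-1}(S_i + S_j)|^{-1/2}
        #          * exp(-1/2 (mu_i - mu_j)^T (Lambda + S_i + S_j)^{-1} (mu_i - mu_j)),
        # with Lambda = diag(lengthscales**2).
        Lam = np.diag(np.asarray(lengthscales, dtype=float) ** 2)
        S = S_i + S_j
        d = np.asarray(mu_i, dtype=float) - np.asarray(mu_j, dtype=float)
        det_term = np.linalg.det(np.eye(len(d)) + np.linalg.solve(Lam, S)) ** -0.5
        quad = d @ np.linalg.solve(Lam + S, d)
        return sigma_f2 * det_term * np.exp(-0.5 * quad)

    def expected_gram(mus, Sigmas, lengthscales, sigma_f2, noise_var):
        # Expected covariance matrix of the uncertain training inputs.
        # Diagonal entries equal sigma_f2 exactly, since k(x, x) is constant
        # for the squared-exponential kernel; off-diagonal entries assume the
        # two inputs are independent Gaussians.
        n = len(mus)
        K = np.full((n, n), sigma_f2)
        for i in range(n):
            for j in range(i + 1, n):
                K[i, j] = K[j, i] = expected_se_kernel(
                    mus[i], Sigmas[i], mus[j], Sigmas[j], lengthscales, sigma_f2)
        return K + noise_var * np.eye(n)

    def gp_predict_mean(x_star, mus, Sigmas, y, lengthscales, sigma_f2, noise_var):
        # Posterior mean at a deterministic test input x_star, using the
        # expected Gram matrix in place of the usual noise-free-input Gram matrix.
        K = expected_gram(mus, Sigmas, lengthscales, sigma_f2, noise_var)
        zero = np.zeros_like(Sigmas[0])
        k_star = np.array([expected_se_kernel(x_star, zero, mus[j], Sigmas[j],
                                              lengthscales, sigma_f2)
                           for j in range(len(mus))])
        return k_star @ np.linalg.solve(K, np.asarray(y, dtype=float))

With all input covariances set to zero matrices, this reduces to ordinary GP regression with a squared-exponential kernel, which makes the effect of the input uncertainty on the posterior easy to inspect.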

Full-text available from: Patrick Dallaire, Apr 08, 2014
  • ABSTRACT: Spatial wireless channel prediction is important for future wireless networks, in particular for proactive resource allocation at different layers of the protocol stack. Various sources of uncertainty must be accounted for during modeling in order to provide robust predictions. We investigate two frameworks, classical Gaussian processes (cGP) and uncertain Gaussian processes (uGP), and analyze the impact of location uncertainty during both the learning/training and prediction/testing phases. We observe that cGP generally fails, both in learning the channel parameters and in predicting the channel, in the presence of location uncertainty. In contrast, uGP explicitly accounts for the location uncertainty and is able to learn and predict the wireless channel.
  • ABSTRACT: The particle filter is one of the most widely applied stochastic sampling tools for state estimation problems in practice. However, the proposal distribution in the traditional particle filter is the transition probability given by the state equation, which can severely degrade estimation performance because samples are drawn blindly, without considering the current observation. Additionally, the fixed particle number in the typical particle filter leads to wasted computation, especially when the posterior distribution varies greatly over time. In this paper, an advanced adaptive nonparametric particle filter is proposed by incorporating a Gaussian-process-based proposal distribution into the KLD-Sampling particle filter framework, so that high-quality particles, in a quantity adapted via the KLD bound, are drawn at each time step from a proposal learned with the observation information, improving approximation accuracy and efficiency (a minimal sketch of the KLD-based sample-size bound appears after this list). Our state estimation experiments on a univariate nonstationary growth model and a two-link robot arm show that the adaptive nonparametric particle filter outperforms existing approaches with a smaller number of particles.
    Proceedings - IEEE International Conference on Robotics and Automation 01/2012; DOI: 10.1109/ICRA.2012.6224840
  • ABSTRACT: In this work we propose a heteroscedastic generalization of the RVM, a fast Bayesian framework for regression, based on recent similar works. We use variational approximation and expectation propagation to tackle the problem. The work is still in progress and we are examining the results and comparing them with previous works.
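For the second item above, the cited work adapts the number of particles with KLD-Sampling while drawing them from a Gaussian-process-learned proposal. The following sketch only illustrates the KLD-Sampling part: the standard bound (due to Fox) on how many particles are needed once k histogram bins of the state space are occupied. The function names and the bin-counting loop are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.stats import norm

    def kld_sample_bound(k, epsilon=0.05, delta=0.01):
        # Number of particles needed so that, with probability 1 - delta, the
        # KL divergence between the particle approximation and the true
        # posterior is below epsilon, given k occupied bins (Wilson-Hilferty
        # approximation of the chi-square quantile, as in KLD-Sampling).
        if k <= 1:
            return 1
        z = norm.ppf(1.0 - delta)          # standard-normal upper quantile
        a = 2.0 / (9.0 * (k - 1))
        return int(np.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3))

    def adaptive_sampling(draw_particle, bin_index, epsilon=0.05, delta=0.01,
                          n_min=20, max_particles=100000):
        # Draw particles one at a time (here from a user-supplied proposal; in
        # the cited work the proposal is learned with a Gaussian process and
        # conditions on the current observation), updating the required sample
        # size as new bins of the state space become occupied.
        particles, occupied = [], set()
        required = n_min
        while len(particles) < required and len(particles) < max_particles:
            x = draw_particle()
            particles.append(x)
            occupied.add(bin_index(x))
            required = max(n_min, kld_sample_bound(len(occupied), epsilon, delta))
        return particles

The particle count therefore grows only when the sampled states spread over more bins, i.e. when the posterior is diffuse, and stays small when the posterior is concentrated.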