An approximate inference with Gaussian process to latent functions from uncertain data.

DAMAS Laboratory, Computer Science and Software Engineering Department, Laval University, Canada
Neurocomputing (Impact Factor: 2.08). 05/2011; 74:1945-1955. DOI: 10.1016/j.neucom.2010.09.024
Source: DBLP


Most formulations of supervised learning are based on the assumption that only the output data are uncertain. However, this assumption might be too strong for some learning tasks. This paper investigates the use of Gaussian processes to infer latent functions from a set of uncertain input–output examples. By assuming Gaussian distributions with known variances over the inputs and outputs and using the expectation of the covariance function, it is possible to analytically compute the expected covariance matrix of the data and thereby obtain a posterior distribution over functions. The method is evaluated on a synthetic problem and on a more realistic one, which consists of learning the dynamics of a cart–pole balancing task. The results indicate an improvement in the mean squared error and the likelihood of the posterior Gaussian process when the data uncertainty is significant.
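The expected covariance mentioned in the abstract has a well-known closed form for the squared exponential kernel under independent Gaussian input noise. The sketch below is a hypothetical illustration (not the authors' code) that checks the 1-D closed form against a Monte Carlo estimate; the function name and parameter names are assumptions.

```python
import numpy as np

def expected_se_kernel(mu1, mu2, s1, s2, ell=1.0, sf2=1.0):
    """Analytic E[k(x, x')] for the 1-D squared exponential kernel
    when x ~ N(mu1, s1^2) and x' ~ N(mu2, s2^2) are independent."""
    v = ell**2 + s1**2 + s2**2
    return sf2 * ell / np.sqrt(v) * np.exp(-0.5 * (mu1 - mu2)**2 / v)

# Monte Carlo check of the closed form (sf2 = 1 here).
rng = np.random.default_rng(0)
mu1, mu2, s1, s2, ell = 0.3, 1.1, 0.4, 0.2, 0.7
x = rng.normal(mu1, s1, 200_000)
xp = rng.normal(mu2, s2, 200_000)
mc = np.mean(np.exp(-0.5 * (x - xp)**2 / ell**2))
print(expected_se_kernel(mu1, mu2, s1, s2, ell), mc)  # the two values agree closely
```

Evaluating this expectation for every pair of training inputs yields the expected covariance matrix the abstract refers to.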



Available from: Patrick Dallaire, Apr 08, 2014
    • "In this paper, we build on and adapt the framework from [13], [14] to CQM prediction in wireless networks. Our main contributions are as follows: • We show that not considering location uncertainty leads to poor learning of the channel parameters and poor prediction of CQM values at other locations, especially when location uncertainties are heterogeneous; • We relate and unify existing GP methods that account for uncertainty during both learning and prediction, by operating directly on an input set of distributions, rather than an input set of locations; • We describe and delimit proper choices for mean functions and covariance functions in this unified framework, so as to incorporate location uncertainty in both learning and prediction; and • We demonstrate the use of the proposed framework for a spatial resource allocation application. "
    ABSTRACT: Spatial wireless channel prediction is important for future wireless networks, and in particular for proactive resource allocation at different layers of the protocol stack. Various sources of uncertainty must be accounted for during modeling and to provide robust predictions. We investigate two frameworks, classical Gaussian processes (cGP) and uncertain Gaussian processes (uGP), and analyze the impact of location uncertainty during both learning/training and prediction/testing phases. We observe that cGP generally fails both in terms of learning the channel parameters and in predicting the channel in the presence of location uncertainties. In contrast, uGP explicitly considers the location uncertainty and is able to learn and predict the wireless channel.
    IEEE Transactions on Wireless Communications 01/2015; DOI:10.1109/TWC.2015.2481879 · 2.50 Impact Factor
    • "A first approach to tackling input noise within GP modeling is to use a Taylor approximation of the GP posterior. Based on the second-order expansion, Girard and Murray-Smith [6] approximated the posterior moments and provided analytical expressions for linear and squared exponential kernels; this method has been extended to other kernel functions by Dallaire et al. [4]. Assuming a known input noise, Girard and Murray-Smith [5] proposed different approximations, such as approximate moments and exact moments with linear and squared exponential kernels, and they also used a Monte Carlo integration of the noise. "
    ABSTRACT: Input noise is common when data comes from unreliable sensors or when previous outputs are used as current inputs. Nevertheless, most regression algorithms do not model input noise, thus inducing bias in the regression. We present a method that corrects this bias by repeated regression estimations. In simulation extrapolation, we perturb the inputs with additional input noise and, by observing the effect of this addition on the result, estimate what the prediction would be without the input noise. We extend the examination to a non-parametric probabilistic regression: inference using Gaussian processes. We conducted experiments on both synthetic data and in robotics, i.e., learning the transition dynamics of a dynamical system, showing significant improvements in the accuracy of the prediction.
    Symbolic and Numeric Algorithms for Scientific Computing, Timisoara, Romania; 01/2014
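The simulation-extrapolation (SIMEX) idea described in the abstract above can be illustrated on a toy errors-in-variables linear regression, where input noise attenuates the estimated slope. Everything below (noise levels, quadratic extrapolant, sample sizes) is an illustrative assumption, not the cited paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope, sigma_u = 2000, 2.0, 0.8
x = rng.normal(0, 1, n)                     # latent noise-free inputs
w = x + rng.normal(0, sigma_u, n)           # observed noisy inputs
y = true_slope * x + rng.normal(0, 0.1, n)  # outputs

def slope(inputs, outputs):
    return np.polyfit(inputs, outputs, 1)[0]

naive = slope(w, y)  # attenuated towards zero by the input noise

# SIMEX: add extra input noise at levels lam, observe how the estimate
# degrades, then extrapolate the trend back to lam = -1 (no noise).
lams = np.array([0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(0, np.sqrt(l) * sigma_u, n), y)
                for _ in range(50)]) for l in lams]
coeffs = np.polyfit(lams, est, 2)           # quadratic extrapolant in lam
simex = np.polyval(coeffs, -1.0)
print(naive, simex)  # the SIMEX estimate is closer to true_slope = 2.0
```

The extrapolation reduces, but does not fully remove, the attenuation bias; the cited paper extends this scheme to Gaussian process regression.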
    • "It is a nonparametric method which represents a Gaussian distribution over functions. A regression problem can be solved by the Gaussian process as follows [14] [19]: suppose the training set with m data instances is Dataset = <X_training, y_training>, where the input matrix X_training = [x_1, x_2, ..., x_m] consists of n-feature input instances x_i (i = 1, 2, ..., m), and y_training = [y_1, y_2, ..., y_m] is the output vector generated by y_i = g(x_i) + ε (8), where g is a nonlinear function and ε ~ N(0, σ_ε²). Since the joint distribution of the output vector y is a multivariate Gaussian distribution [17], given a test input x_*, a predictive density over the target output y_* is specified as a conditional Gaussian distribution according to the training set: p(y_*|x_*, Dataset) = N(y_*; μ(x_*, "
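The conditional Gaussian predictive density quoted in the snippet above is the standard GP regression posterior. A minimal sketch with a squared exponential kernel follows; the function and hyperparameter names (ell, sf2, sn2) are assumptions, not the cited paper's code.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf2=1.0, sn2=0.1):
    """Posterior mean mu(x*, Dataset) and variance Sigma(x*, Dataset)
    of a GP with squared exponential kernel at 1-D test inputs Xs."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sf2 * np.exp(-0.5 * d**2 / ell**2)
    K = k(X, X) + sn2 * np.eye(len(X))      # noisy training covariance
    Ks = k(Xs, X)                           # test/train cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = sf2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.linspace(0, 5, 30)
y = np.sin(X) + 0.05 * np.random.default_rng(2).normal(size=30)
mu, var = gp_predict(X, y, np.array([2.5]))
print(mu, var)  # posterior mean close to sin(2.5), small variance
```

A Cholesky factorization of K would be preferred over `solve` in production, but the direct solve keeps the sketch short.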
    ABSTRACT: The particle filter is one of the most widely applied stochastic sampling tools for state estimation problems in practice. However, the proposal distribution in the traditional particle filter is the transition probability based on the state equation, which can heavily degrade estimation performance because the samples are drawn blindly, without considering the current observation. Additionally, the fixed particle number in the typical particle filter leads to wasteful computation, especially when the posterior distribution varies greatly over time. In this paper, an advanced adaptive nonparametric particle filter is proposed by incorporating a Gaussian process based proposal distribution into the KLD-Sampling particle filter framework, so that high-quality particles, in a quantity adapted via the KLD criterion, are drawn at each time step from a proposal learned with observation information, improving approximation accuracy and efficiency. Our state estimation experiments on a univariate nonstationary growth model and a two-link robot arm show that the adaptive nonparametric particle filter outperforms existing approaches with a smaller number of particles.
    Proceedings - IEEE International Conference on Robotics and Automation 01/2012; DOI:10.1109/ICRA.2012.6224840
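The KLD-Sampling framework mentioned in the abstract above adapts the particle count using Fox's chi-square bound on the Kullback–Leibler divergence. The sketch below computes that bound, assuming k occupied histogram bins, error bound eps, and standard normal quantile z; the function and parameter names are my own.

```python
import math

def kld_particle_bound(k, eps=0.05, z=1.96):
    """Fox's KLD-sampling bound: the number of particles needed so that,
    with probability 1 - delta (z = z_{1-delta}), the KL divergence between
    the particle approximation and the true posterior stays below eps,
    given k occupied histogram bins."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * eps) * (1.0 - a + math.sqrt(a) * z) ** 3)

# A more spread-out posterior (more occupied bins) demands more particles.
print(kld_particle_bound(2), kld_particle_bound(50))
```

In the filter loop, k is counted as particles are drawn, so sampling stops as soon as the bound is met.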