Latent Force Models.

Journal of Machine Learning Research - Proceedings Track 01/2009; 5:9-16.
Source: DBLP

ABSTRACT Purely data driven approaches for machine learning present difficulties when data is scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data driven modelling with a physical model of the system. We show how different, physically-inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from computational biology, motion capture and geostatistics.
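A minimal numerical sketch of the idea described in the abstract, under illustrative assumptions (the decay `D`, sensitivity `S`, and lengthscale `ell` below are made-up parameters; the paper derives the induced covariance of the output analytically rather than simulating): a latent force drawn from an RBF-kernel Gaussian process drives an output through a first-order differential equation.

```python
import numpy as np

# latent force u(t): one draw from a GP with an RBF kernel
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
ell = 1.0
Kuu = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
L = np.linalg.cholesky(Kuu + 1e-6 * np.eye(len(t)))  # jitter for stability
u = L @ rng.standard_normal(len(t))

# mechanistic layer: first-order ODE  dx/dt = -D x + S u(t)
D, S = 0.8, 1.0          # hypothetical decay and sensitivity
x = np.zeros_like(t)
h = t[1] - t[0]
for k in range(1, len(t)):   # forward-Euler integration
    x[k] = x[k - 1] + h * (-D * x[k - 1] + S * u[k - 1])
```

Because `x` is a linear functional of the Gaussian `u`, it is itself a Gaussian process; the paper's contribution is computing its kernel in closed form from such mechanistic assumptions instead of simulating.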

  •
    ABSTRACT: In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any nonlinear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g., 'constant-velocity'). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently. When the prior is based on a linear, time-varying stochastic differential equation and the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times); when a nonlinearity is present, we iterate over the whole trajectory to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
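The exact sparsity claimed in the abstract can be checked numerically for a toy 'constant-velocity' prior (a discrete-time linear Gauss-Markov model; the step size, noise scale, and horizon below are arbitrary choices for illustration): the inverse of the joint state covariance comes out block-tridiagonal.

```python
import numpy as np

dt = 0.5
A = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity transition
Q = 0.1 * np.array([[dt**3 / 3, dt**2 / 2],
                    [dt**2 / 2, dt]])                 # white-noise-on-acceleration
P0 = np.eye(2)
N = 6                                                 # number of time steps

# marginal covariances P_k of the Gauss-Markov chain z_{k+1} = A z_k + w_k
P = [P0]
for _ in range(N - 1):
    P.append(A @ P[-1] @ A.T + Q)

# joint covariance K of the stacked states: cov(z_i, z_j) = A^{i-j} P_j for i >= j
K = np.zeros((2 * N, 2 * N))
for i in range(N):
    for j in range(N):
        if i >= j:
            Phi = np.linalg.matrix_power(A, i - j)
            K[2*i:2*i+2, 2*j:2*j+2] = Phi @ P[j]
        else:
            Phi = np.linalg.matrix_power(A, j - i)
            K[2*i:2*i+2, 2*j:2*j+2] = (Phi @ P[i]).T

Kinv = np.linalg.inv(K)
# blocks more than one step off the diagonal vanish (block-tridiagonal precision)
off = max(np.abs(Kinv[2*i:2*i+2, 2*j:2*j+2]).max()
          for i in range(N) for j in range(N) if abs(i - j) > 1)
print(off)  # ~0 (numerical noise only)
```

This is the structural fact the paper exploits: the precision matrix of a Markovian prior is sparse even though the kernel matrix itself is dense, so GP smoothing can be done in time linear in the number of measurements.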
  •
    ABSTRACT: Latent variable models provide valuable compact representations for learning and inference in many computer vision tasks. However, most existing models cannot directly encode prior knowledge about the specific problem at hand. In this paper, we introduce a constrained latent variable model whose generated output inherently accounts for such knowledge. To this end, we propose an approach that explicitly imposes equality and inequality constraints on the model's output during learning, thus avoiding the computational burden of having to account for these constraints at inference. Our learning mechanism can exploit non-linear kernels, while only involving sequential closed-form updates of the model parameters. We demonstrate the effectiveness of our constrained latent variable model on the problem of non-rigid 3D reconstruction from monocular images, and show that it yields qualitative and quantitative improvements over several baselines.
    International Conference on Computer Vision and Pattern Recognition (CVPR); 06/2012
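As a toy stand-in for the idea of enforcing constraints in closed form (a hypothetical linear model, not the paper's kernel-based learner; `W`, `C`, and `d` are invented for illustration): an equality constraint on the output of a least-squares latent estimate can be imposed exactly by solving the KKT system rather than checking the constraint at inference time.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 3))   # hypothetical linear "decoder"
y = rng.standard_normal(8)        # observation to reconstruct
C = np.array([[1.0, 1.0, 1.0]])   # equality constraint: latent coords sum to 1
d = np.array([1.0])

# KKT system for  min ||W z - y||^2  subject to  C z = d
n, m = W.shape[1], C.shape[0]
KKT = np.block([[W.T @ W, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([W.T @ y, d])
z = np.linalg.solve(KKT, rhs)[:n]
print(C @ z)  # ≈ [1.]  (constraint satisfied exactly, up to numerics)
```

The design point mirrors the abstract: baking the constraint into the (closed-form) solve means no extra constraint-handling machinery is needed when the model is later used.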
