Latent Force Models.

Journal of Machine Learning Research - Proceedings Track 01/2009; 5:9-16.
Source: DBLP


Purely data driven approaches for machine learning present difficulties when data is scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data driven modelling with a physical model of the system. We show how different, physically-inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from computational biology, motion capture and geostatistics.
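As a minimal sketch of the idea of a physically-inspired kernel (not code from the paper itself), the example below derives a Gaussian process covariance from a simple mechanistic assumption: the stationary covariance of a first-order ODE dx/dt = -x/l + w(t) driven by white noise is the Ornstein-Uhlenbeck kernel k(t, t') = v·exp(-|t - t'|/l). The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def ou_kernel(t1, t2, variance=1.0, lengthscale=1.0):
    """Ornstein-Uhlenbeck covariance k(t, t') = v * exp(-|t - t'| / l).

    This kernel arises mechanistically: it is the stationary covariance
    of the first-order linear ODE dx/dt = -x/l + w(t) driven by white
    noise w(t), illustrating how a simple physical assumption induces a
    Gaussian process prior (the spirit of latent force models).
    """
    diff = np.abs(t1[:, None] - t2[None, :])
    return variance * np.exp(-diff / lengthscale)

# Draw a sample path from the resulting GP prior over a time grid.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
K = ou_kernel(t, t, variance=1.0, lengthscale=0.5)
# Small jitter on the diagonal keeps the Cholesky/sampling stable.
sample = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))
```

In the paper's full construction the latent force u(t) is itself a GP and is pushed through richer differential equations (e.g. second-order dynamics for motion capture), yielding closed-form cross-covariances between latent forces and outputs; the sketch above only shows the simplest first-order case.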


    • "However, most approaches are restricted to encoding this knowledge in the latent space. For instance, sparse coding [13] [32] encourages sparsity of the latent representation; the Gaussian Process Dynamical Model (GPDM) [30] [28] makes it possible to model dynamics in the latent space, although the dynamics are learned rather than explicitly encoded in the model; [29] introduces topological constraints to the GPLVM, but once more acting directly on the latent space; [1] imposes physics-based constraints on the latent space in the form of differential equations. However, imposing constraints on the latent variables does not guarantee that equivalent ones are satisfied in the output space."
    ABSTRACT: Latent variable models provide valuable compact representations for learning and inference in many computer vision tasks. However, most existing models cannot directly encode prior knowledge about the specific problem at hand. In this paper, we introduce a constrained latent variable model whose generated output inherently accounts for such knowledge. To this end, we propose an approach that explicitly imposes equality and inequality constraints on the model's output during learning, thus avoiding the computational burden of having to account for these constraints at inference. Our learning mechanism can exploit non-linear kernels, while only involving sequential closed-form updates of the model parameters. We demonstrate the effectiveness of our constrained latent variable model on the problem of non-rigid 3D reconstruction from monocular images, and show that it yields qualitative and quantitative improvements over several baselines.
    International Conference on Computer Vision and Pattern Recognition (CVPR); 06/2012
    Proceedings of the International Conference on Artificial Intelligence and Statistics;