Efficient Multioutput Gaussian Processes through Variational Inducing Kernels.

Journal of Machine Learning Research - Proceedings Track 01/2010; 9:25-32.
Source: DBLP

ABSTRACT Interest in multioutput kernel methods is increasing, whether under the guise of multitask learning, multisensor networks or structured output data. From the Gaussian process perspective a multioutput Mercer kernel is a covariance function over correlated output functions. One way of constructing such kernels is based on convolution processes (CP). A key problem for this approach is efficient inference. Álvarez and Lawrence recently presented a sparse approximation for CPs that enabled efficient inference. In this paper, we extend this work in two directions: we introduce the concept of variational inducing functions to handle potential non-smooth functions involved in the kernel CP construction, and we consider an alternative approach to approximate inference based on variational methods, extending the work by Titsias (2009) to the multiple output case. We demonstrate our approaches on prediction of school marks, compiler performance and financial time series.
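The CP construction mentioned in the abstract can be made concrete in its simplest case: each output is the same latent RBF Gaussian process blurred by an output-specific Gaussian smoothing kernel, for which the output cross-covariances have closed form. Below is a minimal numpy sketch of that idea; the function name, the two-output setup and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cp_cross_cov(x, xp, S_d, sig2_d, S_dp, sig2_dp, ell2):
    """Closed-form cross-covariance between outputs f_d and f_d' of a
    convolution-process GP: each output f_d is the latent RBF-GP u blurred
    by a Gaussian smoothing kernel G_d(tau) = S_d * N(tau; 0, sig2_d).
    Integrating out the Gaussians yields another Gaussian in (x - x') whose
    variance is the sum sig2_d + sig2_dp + ell2."""
    v = sig2_d + sig2_dp + ell2                      # combined variance
    diff = x[:, None] - xp[None, :]
    return (S_d * S_dp * np.sqrt(2 * np.pi * ell2)
            * np.exp(-0.5 * diff**2 / v) / np.sqrt(2 * np.pi * v))

# Joint covariance over two outputs observed at the same inputs.
x = np.linspace(0, 5, 20)
params = [(1.0, 0.1), (0.7, 0.4)]                    # (S_d, sigma_d^2) per output
ell2 = 0.5                                           # latent lengthscale^2
K = np.block([[cp_cross_cov(x, x, *params[i], *params[j], ell2)
               for j in range(2)] for i in range(2)])
```

Because every output is a linear functional of the same latent process, the stacked matrix K is positive semi-definite by construction, i.e. a valid multioutput covariance.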

    ABSTRACT: A central task of Bayesian machine learning is to infer the posterior distribution of hidden random variables given observations and to calculate expectations with respect to this distribution. However, this is often computationally intractable, so approximation schemes must be sought. Deterministic approximate inference techniques are an alternative to stochastic approximate inference methods based on numerical sampling, namely Monte Carlo techniques, and many advances have been made in this field during the last 15 years. This paper reviews typical deterministic approximate inference techniques, some of which are very recent and need further exploration. With the aim of promoting research in deterministic approximate inference, we also attempt to identify open problems that may be helpful for future investigations in this field.
    Neural Computing and Applications
    ABSTRACT: In this paper we discuss an extension to Gaussian process (GP) regression models in which the measurements are modeled as linear functionals of the underlying GP and the estimation objective is a general linear operator of the process. We show how this framework can be used for modeling the physical processes involved in measurement of the GP and for encoding physical prior information into regression models in the form of stochastic partial differential equations (SPDEs). We also illustrate the practical applicability of the theory in a simulated application.
    Artificial Neural Networks and Machine Learning - ICANN 2011 - 21st International Conference on Artificial Neural Networks, Espoo, Finland, June 14-17, 2011, Proceedings, Part II; 01/2011
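The linear-functional measurement model described in this abstract can be illustrated on a grid: discretize the GP, express each observation as a weighted sum of function values, and condition with the usual GP formulas where k(x, ·) is replaced by K Hᵀ. A minimal numpy sketch, assuming interval-average observations of a sine function; the operator H, grid, lengthscale and noise level are all illustrative choices, not taken from the paper.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential covariance on a 1-D grid."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * d**2 / ell**2)

# Fine grid discretising the latent GP f ~ N(0, K).
grid = np.linspace(0, 1, 100)
K = rbf(grid, grid)

# Measurement operator H: each row averages f over one fifth of [0, 1],
# i.e. y_i is a discretised interval average -- a linear functional of f.
H = np.zeros((5, grid.size))
for i in range(5):
    H[i, 20 * i:20 * (i + 1)] = 1.0 / 20

rng = np.random.default_rng(0)
f_true = np.sin(2 * np.pi * grid)
noise = 1e-3
y = H @ f_true + rng.normal(0.0, np.sqrt(noise), 5)

# GP conditioning on linear-functional observations: the cross-covariance
# between f and y is K H^T, and the Gram matrix of y is H K H^T.
G = H @ K @ H.T + noise * np.eye(5)
mean = K @ H.T @ np.linalg.solve(G, y)
```

The posterior mean reconstructs the full function from only five averaged measurements, which is the essence of observing a GP through a linear operator.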

Full-text (2 sources) available from Dec 3, 2012.