Article

Functional learning through kernels

10/2009 · Source: arXiv

ABSTRACT This paper reviews the functional aspects of statistical learning theory. The main point under consideration is the nature of the hypothesis set when no prior information is available other than the data. Within this framework we first discuss the hypothesis set: it is a vector space, it is a set of pointwise defined functions, and the evaluation functional on this set is a continuous mapping. Based on these principles, an original theory is developed that generalizes the notion of reproducing kernel Hilbert space to non-Hilbertian sets. It is then shown that the hypothesis set of any learning machine has to be a generalized reproducing set. Therefore, thanks to a general "representer theorem", the solution of the learning problem is still a linear combination of kernel evaluations. Furthermore, a way to design these kernels is given. To illustrate this framework, some examples of such reproducing sets and kernels are presented.
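For concreteness, the representer-theorem conclusion can be illustrated in its familiar Hilbertian special case: in a reproducing kernel Hilbert space with kernel k, the regularized empirical-risk minimizer takes the form f(x) = sum_i alpha_i k(x_i, x). Below is a minimal sketch using kernel ridge regression with a Gaussian kernel; the kernel choice, the regularization, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + lam * n * I) alpha = y. The representer theorem
    guarantees the minimizer has the form f(x) = sum_i alpha_i k(x_i, x)."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x) at new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage: fit a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_kernel_ridge(X, y)
y_hat = predict(X, alpha, X)
```

The point of the theorem is that, although the hypothesis space is infinite-dimensional, the optimization collapses to solving for the n coefficients alpha.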

Full text available from: Xavier Mary, Aug 15, 2014
Related publications:

  • ABSTRACT: We demonstrate that a reproducing kernel Hilbert space of functions on a separable absolute Borel space, or on an analytic subset of a Polish space, is separable if it possesses a Borel measurable feature map. (A notational sketch of the feature-map setting follows this entry.)
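As background for this statement (standard RKHS material, not quoted from the entry), a feature map Phi : X -> H determines the space as follows:

```latex
k(x, y) = \langle \Phi(x), \Phi(y) \rangle_H, \qquad
f(x) = \langle f, \Phi(x) \rangle_H \quad (f \in H), \qquad
H = \overline{\operatorname{span}}\,\{ \Phi(x) : x \in X \}.
```

Since H is the closed linear span of the image Phi(X), separability of H reduces to separability of Phi(X) in H; the Borel measurability hypothesis, together with the assumptions on X, is what controls that image.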
  • ABSTRACT: We propose a definition of generalized semi-inner products (g.s.i.p.). By relating them to duality mappings from a normed vector space to its dual space, we obtain a characterization of all g.s.i.p. satisfying this definition. We then study the Riesz representation of continuous linear functionals via g.s.i.p. As applications, we establish a representer theorem and a characterization equation for the minimizer of regularized learning from finite or infinite samples in Banach spaces of functions. (The classical semi-inner product that this notion generalizes is recalled after this entry.)
    Journal of Mathematical Analysis and Applications 12/2010; 372(1):181-196. DOI:10.1016/j.jmaa.2010.04.075
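For orientation, the classical (Lumer-Giles) semi-inner product that the g.s.i.p. notion generalizes is a map [.,.] on a normed space (V, ||.||) satisfying (again standard background, not a definition from the paper):

```latex
[x + y, z] = [x, z] + [y, z], \qquad
[\lambda x, y] = \lambda\,[x, y], \qquad
[x, x] = \|x\|^2, \qquad
|[x, y]|^2 \le [x, x]\,[y, y].
```

The connection to duality mappings mentioned in the abstract is that, for fixed y, the functional x -> [x, y] is linear, has norm ||y||, and attains ||y||^2 at y, so it selects an element of the normalized duality mapping from V to its dual V*.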
  • ABSTRACT: We introduce a technique to make iterative kernel principal component analysis (KPCA) robust to outliers caused by undesirable artifacts such as noise, alignment errors, or occlusion. The proposed iterative robust KPCA (rKPCA) combines iterative updating with robust estimation of the principal directions. It inherits the advantages of both ideas, reducing time complexity, space complexity, and the influence of outliers on the estimated principal directions. In an asymptotic stability analysis, we also show that our iterative rKPCA converges to the weighted kernel principal components of batch rKPCA. Experimental results confirm that our iterative rKPCA achieves better robustness and time savings than batch KPCA. (A generic code illustration of the robust-KPCA idea follows this entry.)
    Neurocomputing 11/2011; 74:3921-3930. DOI:10.1016/j.neucom.2011.08.008
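The entry gives no update equations, so the following is only a generic batch illustration of the robust-KPCA idea (weighted kernel PCA with iterative down-weighting of high-error samples), not the authors' iterative rKPCA; the kernel, the weighting rule, and all names are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def weighted_kpca(K, p, n_components):
    """Weighted kernel PCA: eigendecompose D^{1/2} Kc D^{1/2}, where Kc is
    the kernel matrix centered at the weighted mean and D = diag(p) holds
    the normalized sample weights."""
    n = len(K)
    C = np.eye(n) - np.outer(np.ones(n), p)        # weighted centering map
    Kc = C @ K @ C.T
    M = np.sqrt(p)[:, None] * Kc * np.sqrt(p)[None, :]
    vals, vecs = np.linalg.eigh(M)                 # ascending eigenvalues
    vals = vals[::-1][:n_components]
    vecs = vecs[:, ::-1][:, :n_components]
    # Projections of each centered sample onto the principal directions.
    proj = Kc @ (np.sqrt(p)[:, None] * vecs) / np.sqrt(np.maximum(vals, 1e-12))
    return proj, np.diag(Kc)

def robust_kpca(X, n_components=2, gamma=1.0, n_iter=5):
    """Iteratively reweighted batch KPCA: samples with large feature-space
    reconstruction error are down-weighted on the next pass."""
    K = rbf_kernel(X, X, gamma)
    w = np.ones(len(X))
    for _ in range(n_iter):
        p = w / w.sum()
        proj, diag_kc = weighted_kpca(K, p, n_components)
        # ||Phi_c(x_i)||^2 - ||projection||^2, clamped for numerics.
        err = np.maximum(diag_kc - (proj ** 2).sum(axis=1), 0.0)
        w = 1.0 / (1.0 + err / (np.median(err) + 1e-12))
    return proj, w
```

On data containing a few gross outliers, the returned weights should drop well below 1 on those points, so they contribute little to the estimated principal directions on later passes.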