Article

Functional learning through kernels

10/2009
Source: arXiv

ABSTRACT This paper reviews the functional aspects of statistical learning theory. The main point under consideration is the nature of the hypothesis set when no prior information is available except the data. Within this framework we first discuss the hypothesis set: it is a vector space, it is a set of pointwise defined functions, and the evaluation functional on this set is a continuous mapping. Based on these principles, an original theory is developed that generalizes the notion of reproducing kernel Hilbert space to non-Hilbertian sets. It is then shown that the hypothesis set of any learning machine has to be a generalized reproducing set. Therefore, thanks to a general "representer theorem", the solution of the learning problem is still a linear combination of a kernel. Furthermore, a way to design these kernels is given. To illustrate this framework, some examples of such reproducing sets and kernels are given.
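To make the representer theorem concrete: in the classical Hilbert-space setting, the minimizer of a regularized empirical risk can be written as f(x) = sum_i alpha_i k(x_i, x), so learning reduces to computing the coefficients alpha. Below is a minimal kernel ridge regression sketch in Python illustrating this; it is not taken from the paper, and the Gaussian kernel, the regularization parameter lam, and the toy data are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(A, B, gamma=1.0):
        # Gaussian (RBF) kernel matrix between row-sample arrays A and B.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)

    def fit_kernel_ridge(X, y, lam=1e-2, gamma=1.0):
        # Solve (K + lam*I) alpha = y; by the representer theorem the
        # optimum is the kernel expansion f(x) = sum_i alpha_i k(x_i, x).
        K = gaussian_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X_train, alpha, X_new, gamma=1.0):
        # Evaluate the kernel expansion at new points.
        return gaussian_kernel(X_new, X_train, gamma) @ alpha

    # Toy usage: regress y = sin(x) from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(40, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    alpha = fit_kernel_ridge(X, y)
    print(predict(X, alpha, np.array([[0.0], [1.5]])))

Whatever the loss, the representer property pins the solution to the span of the kernel functions at the data points; only the coefficient computation changes.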

Cited by:
  • ABSTRACT: We introduce a technique that makes iterative kernel principal component analysis (KPCA) robust to outliers caused by undesirable artifacts such as noise, alignment errors, or occlusion. The proposed iterative robust KPCA (rKPCA) links iterative updating with robust estimation of the principal directions, inheriting the good properties of both ideas to reduce the time complexity, the space complexity, and the influence of outliers on the estimated principal directions. In the asymptotic stability analysis, we also show that our iterative rKPCA converges to the weighted kernel principal components obtained from batch rKPCA. Experimental results confirm that our iterative rKPCA achieves better robustness and time savings than batch KPCA. (A simple robust-KPCA baseline is sketched after this list.)
    Neurocomputing 01/2011; 74:3921–3930.
  • ABSTRACT: Human experts constitute pattern classes of natural objects based on their observed appearance. Automatic systems for pattern recognition may be designed on a structural description derived from sensor observations. Alternatively, training sets of examples can be used in statistical learning procedures, which are most powerful for vectorial object representations. Unfortunately, structural descriptions do not match well with vectorial representations; consequently, it is difficult to combine the structural and statistical approaches to pattern recognition. Structural descriptions may be used to compare objects, which leads to a set of pairwise dissimilarities from which vectors can be derived for the purpose of statistical learning. The resulting dissimilarity representation thereby bridges the structural and statistical approaches. The dissimilarity space is one of the possible spaces resulting from this representation; it is very general and easy to implement. This paper gives a historical review and discusses the properties of the dissimilarity space approach, illustrated by a set of examples on real-world datasets. (A minimal dissimilarity-space sketch follows this list.)
    Pattern Recognition Letters 05/2012; 33(7):826–832.
  • ABSTRACT: Functional and longitudinal data are becoming more and more common in practice. This paper focuses on sparse and irregular longitudinal data with a multicategory response. The predictor consists of sparse and irregular observations on the predictor trajectory, potentially contaminated with measurement errors. To deal with this type of complicated predictor, we borrow the strength of large-margin classifiers in statistical learning for the classification of sparse and irregular longitudinal data. In particular, we propose functional robust truncated-hinge-loss support vector machines to perform multicategory classification with the aid of functional principal component analysis. (An FPCA-plus-classifier sketch follows this list.)
    Journal of Computational and Graphical Statistics 04/2013; 22(2):379–395.
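
The three sketches below are rough Python illustrations of the techniques named in the citing abstracts above; none reproduces the cited papers' algorithms.

First, robust KPCA. The cited paper couples iterative updating with robust estimation; as a much cruder baseline, this sketch trims the most atypical samples (measured by feature-space distance to the mean) and then runs plain batch KPCA. The Gaussian kernel, the trim fraction, and the trimming rule are all illustrative assumptions.

    import numpy as np

    def gaussian_kernel(X, Y, gamma=0.5):
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * sq)

    def trimmed_kpca(X, n_components=2, trim=0.1, gamma=0.5):
        # Crude robustness baseline: drop the most atypical samples,
        # then run ordinary batch KPCA on the rest.
        K = gaussian_kernel(X, X, gamma)
        # Squared feature-space distance of each sample to the mean.
        d2 = np.diag(K) - 2 * K.mean(1) + K.mean()
        keep = np.argsort(d2)[: int(len(X) * (1 - trim))]
        K = gaussian_kernel(X[keep], X[keep], gamma)
        n = len(keep)
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        Kc = J @ K @ J
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]
        alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
        return Kc @ alphas, keep                 # component scores, kept indices

    # Toy usage: a Gaussian blob plus a few far-away outliers.
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(8, 1, (5, 3))])
    scores, keep = trimmed_kpca(X)
    print(scores.shape, len(keep))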
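Second, the dissimilarity space: represent each object by its vector of dissimilarities to a fixed prototype set and train any vector-space classifier on those rows. Euclidean distance and a nearest-mean classifier are placeholder choices here; the point of the representation is that any pairwise dissimilarity, including one computed from structural descriptions, fits the same pipeline.

    import numpy as np

    def dissimilarity_space(X, prototypes):
        # One row per object: its dissimilarities to the prototypes.
        return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

    def nearest_mean_fit(D, y):
        # Class means in the dissimilarity space.
        return {c: D[y == c].mean(0) for c in np.unique(y)}

    def nearest_mean_predict(D, means):
        classes = list(means)
        dists = np.stack([np.linalg.norm(D - means[c], axis=1) for c in classes], 1)
        return np.array(classes)[dists.argmin(1)]

    # Toy usage: two Gaussian blobs, five prototypes per class.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
    y = np.repeat([0, 1], 30)
    D = dissimilarity_space(X, np.vstack([X[:5], X[30:35]]))
    means = nearest_mean_fit(D, y)
    print((nearest_mean_predict(D, means) == y).mean())  # training accuracy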
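Third, FPCA followed by a large-margin classifier. This sketch makes strong simplifying assumptions: curves observed on a dense common grid (the cited paper targets sparse, irregular observations, which require conditional-expectation estimates of the scores instead), FPCA reduced to an SVD of the centered data matrix, and scikit-learn's ordinary hinge-loss LinearSVC standing in for the robust truncated-hinge-loss machine.

    import numpy as np
    from sklearn.svm import LinearSVC

    def fpca_scores(curves, n_components=3):
        # FPCA for densely observed curves via SVD of the centered
        # data matrix; rows are curves sampled on a shared grid.
        mean = curves.mean(0)
        U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
        return (curves - mean) @ Vt[:n_components].T

    # Toy usage: two classes of noisy curves on a common grid.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 50)
    class0 = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((40, 50))
    class1 = np.cos(2 * np.pi * t) + 0.3 * rng.standard_normal((40, 50))
    X = np.vstack([class0, class1])
    y = np.repeat([0, 1], 40)

    scores = fpca_scores(X, n_components=3)
    clf = LinearSVC().fit(scores, y)   # hinge loss, not the truncated hinge
    print(clf.score(scores, y))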
