Functional Learning Through Kernel

Source: arXiv


This paper reviews the functional aspects of statistical learning theory. The main point under consideration is the nature of the hypothesis set when no prior information is available except the data. Within this framework we first discuss the hypothesis set: it is a vector space, it is a set of pointwise defined functions, and the evaluation functional on this set is a continuous mapping. Based on these principles an original theory is developed, generalizing the notion of reproducing kernel Hilbert space to non-Hilbertian sets. It is then shown that the hypothesis set of any learning machine has to be a generalized reproducing set. Therefore, thanks to a general "representer theorem", the solution of the learning problem is still a linear combination of a kernel. Furthermore, a way to design these kernels is given. To illustrate this framework, some examples of such reproducing sets and kernels are given.



Available from: Xavier Mary, Aug 15, 2014
  • Source
    • "Let T : L^2(X) → H = Im(S) ⊂ R^X, g ↦ f = Tg, and let H be dense in L^2(X). We have (see, e.g., [7]) "
    ABSTRACT: Minimum measure sets (MMSs) summarize the information of a (single-class) dataset. In many situations, they can be preferred to estimated probability density functions (pdfs): they are strongly related to pdf level sets while being much easier to estimate in large dimensions. The main contribution of this paper is a theoretical connection between MMSs and one-class support vector machines. This justifies the use of one-class SVMs in the following applications: novelty detection (for which we give an explicit convergence rate) and change detection.
    Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on; 06/2006
  • Source
    ABSTRACT: For many years now, there has been growing interest in the ROC curve for characterizing machine learning performance. This is particularly due to the fact that in real-world problems misclassification costs are not known; thus, the ROC curve and related metrics such as the Area Under the ROC Curve (AUC) can be more meaningful performance measures. In this paper, we propose an SVM-based algorithm for AUC maximization and show that under certain conditions this algorithm is related to 2-norm soft margin Support Vector Machines. We present experiments that compare SVM performance to that of other AUC maximization algorithms and provide an empirical analysis of SVM behavior with regard to ROC-based metrics. Our main conclusion is that SVMs can maximize both AUC and accuracy.
  • Source
    ABSTRACT: Kernel methods are closely related to the notion of reproducing kernel Hilbert space (RKHS). A kernel machine is based on the minimization of an empirical cost and a stabilizer (usually the norm in the RKHS). In this paper we propose to use Besov spaces as alternative hypothesis spaces. We study the statistical performance of penalized empirical risk minimization for classification where the stabilizer is a Besov norm. More precisely, we state fast rates of convergence to the Bayes rule. These rates are adaptive with respect to the regularity of the Bayes rule.
    Electronic Journal of Statistics 01/2009; 3. DOI:10.1214/08-EJS316 · 0.96 Impact Factor
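The first citing work above rests on the link between minimum measure sets and pdf level sets. A minimal NumPy sketch of that level-set idea applied to novelty detection follows; the Gaussian sample, the kernel bandwidth, and the 5% quantile threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kde(points, x, h=0.3):
    # Simple Gaussian kernel density estimate at x
    return np.mean(np.exp(-((points - x) ** 2) / (2 * h ** 2))) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# Estimated density at each sample point
dens = np.array([kde(sample, x) for x in sample])

# Level-set threshold: keep the 95% highest-density points;
# the region {x : p(x) >= t} approximates a minimum measure set
t = np.quantile(dens, 0.05)

def is_novel(x):
    # Points of low estimated density fall outside the level set
    return kde(sample, x) < t

print(is_novel(0.0), is_novel(5.0))  # False True
```

A one-class SVM reaches a similar region without estimating the density itself, which is what makes the MMS connection useful in high dimensions.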