The Bounds on the Rate of Uniform Convergence for Learning Machine

Conference Paper in Lecture Notes in Computer Science 3496:538-545 · May 2005
Impact Factor: 0.51 · DOI: 10.1007/11427391_86 · Source: DBLP
Conference: Advances in Neural Networks - ISNN 2005, Second International Symposium on Neural Networks, Chongqing, China, May 30 - June 1, 2005, Proceedings, Part I
Abstract

Generalization performance is an important property of learning machines, and a desirable learning machine should be stable with respect to its training samples. We consider empirical risk minimization over sets of admissible functions from which noise has been eliminated. By applying Kutin's inequality, we establish bounds on the rate of uniform convergence of the empirical risks to the expected risks of learning machines and compare these bounds with known results.
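For orientation, the object being bounded is the worst-case deviation between empirical and expected risk over the admissible function set. The following is a minimal sketch of the generic form such a uniform-convergence bound takes, using standard notation rather than the paper's own; the capacity factor and constants produced by Kutin's inequality may differ. Let Q(z, f) be the loss of f on a sample point z, R(f) = \mathbb{E}[Q(z, f)] the expected risk, and R_{\mathrm{emp}}(f) = \frac{1}{n} \sum_{i=1}^{n} Q(z_i, f) the empirical risk over i.i.d. samples z_1, \dots, z_n. A bound on the rate of uniform convergence then reads

    P\Big\{ \sup_{f \in \mathcal{F}} \big| R(f) - R_{\mathrm{emp}}(f) \big| > \varepsilon \Big\} \le \mathcal{C}(\mathcal{F}, n)\, \exp\!\big( -c\, n\, \varepsilon^{2} \big),

where \mathcal{F} is the admissible (noise-eliminated) function set, \mathcal{C}(\mathcal{F}, n) is a capacity term such as a covering number, and c > 0 is a constant.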

    • "Rademacher average. Zou et al. [29] established the bounds on the rate of uniform convergence of learning machines for independent and identically distribution sequences on the set of admissible functions which are eliminated noisy. On the other hand, the notion of algorithmic stability has also been used effectively to derive tight generalization bounds in some literature by now. "
    Abstract: In many practical applications, the performance of a learning algorithm is not determined by a single factor such as the complexity of the hypothesis space, the stability of the algorithm, or the quality of the data. This paper addresses the performance of the regularization algorithm associated with Gaussian kernels. Its main purpose is to provide a framework for evaluating the generalization performance of the algorithm jointly in terms of hypothesis-space complexity, algorithmic stability, and data quality. New bounds on the generalization error of the algorithm, measured by the regularization error and the sample error, are established. It is shown that the regularization error decays polynomially under certain conditions, and that the new bounds rest on the uniform stability of the algorithm, the covering number of the hypothesis space, and the data information simultaneously. As an application, the results are applied to several special regularization algorithms, for which new results are deduced. (An illustrative sketch of the underlying error decomposition follows this entry.)
    Article · Apr 2014 · Neural Processing Letters
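    The error decomposition referenced in the entry above can be sketched as follows, using standard learning-theory notation assumed here rather than taken from the article: for a regularized estimator f_{z,\lambda} learned in the reproducing kernel Hilbert space \mathcal{H}_K of a Gaussian kernel,

        E(f_{z,\lambda}) - E(f_\rho) \le D(\lambda) + S(z, \lambda),

    where E denotes the expected risk, f_\rho the regression function, D(\lambda) = \min_{f \in \mathcal{H}_K} \{ E(f) - E(f_\rho) + \lambda \|f\|_K^2 \} the regularization error, and S(z, \lambda) the sample-error terms that stability and covering-number arguments control. The polynomial decay mentioned in the abstract corresponds to an estimate of the form D(\lambda) \le C_\beta \lambda^{\beta} for some \beta \in (0, 1] under a regularity assumption on f_\rho.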
    • "Bousquet [2] derived a generalization of Vapnik and Chervonenkis' relative error inequality by using a new measure of the size of function classes, the local Rademacher average. Zou [7] established the bounds on the rate of uniform convergence of learning machines for i.i.d. sequences on the set of admissible functions which are eliminated noisy. "
    Abstract: Generalization performance is a central concern of machine learning theory. Vapnik, Cucker, and Smale showed that, for an i.i.d. sample, the empirical risks of learning machines converge uniformly to their expected risks as the number of samples approaches infinity. To study the generalization performance of learning machines when the inputs are dependent, this paper extends those results to the case where the i.i.d. sequence is replaced by an exponentially strongly mixing sequence. We obtain a bound on the rate of uniform convergence for learning machines by using Bernstein's inequality for exponentially strongly mixing sequences, and we establish a bound on the rate of relative uniform convergence for learning machines based on such sequences. Finally, we compare these bounds with previous results. (An illustrative sketch of a Bernstein-type bound for mixing sequences follows this entry.)
    Article · Apr 2007 · Computers & Mathematics with Applications
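    To make the mixing-sequence result above concrete, a typical Bernstein-type deviation bound for an exponentially strongly mixing sequence replaces the sample size n by an effective sample size. This is a sketch under standard blocking assumptions, not the cited article's exact statement, and its constants and exponents may differ:

        P\Big\{ \sup_{f \in \mathcal{F}} \big| R(f) - R_{\mathrm{emp}}(f) \big| > \varepsilon \Big\} \le \mathcal{C}(\mathcal{F}, n)\, \exp\!\Big( -\frac{c\, n_{\mathrm{eff}}\, \varepsilon^{2}}{\sigma^{2} + M \varepsilon} \Big),

    where the mixing coefficients satisfy \alpha(k) \le \alpha_0 \exp(-c_0 k^{r}), n_{\mathrm{eff}} is of the order n^{r/(r+1)} (the number of nearly independent blocks obtained from the sequence), \sigma^{2} bounds the variance of the loss, and M bounds its range. As the mixing becomes faster (larger r), n_{\mathrm{eff}} approaches n and the i.i.d. rate is recovered.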