Conference Paper

The Bounds on the Rate of Uniform Convergence for Learning Machine.

DOI: 10.1007/11427391_86
Conference: Advances in Neural Networks - ISNN 2005, Second International Symposium on Neural Networks, Chongqing, China, May 30 - June 1, 2005, Proceedings, Part I
Source: DBLP

ABSTRACT: Generalization performance is an important property of learning machines. Desirable learning machines should be stable with respect to the training samples. We consider empirical risk minimization over function sets from which noise has been eliminated. By applying Kutin's inequality, we establish bounds on the rate of uniform convergence of the empirical risks to their expected risks for learning machines and compare these bounds with known results.
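For orientation, the quantity that bounds of this type control is the uniform deviation of the empirical risk from the expected risk. The formulation below is the standard one from statistical learning theory and is given only as an illustrative sketch; the loss L, sample size \ell and function class \mathcal{F} are generic notation, not the paper's own:

    R(f) = \int L\big(y, f(x)\big)\, dP(x, y), \qquad
    R_{\mathrm{emp}}(f) = \frac{1}{\ell} \sum_{i=1}^{\ell} L\big(y_i, f(x_i)\big),

    P\Big\{ \sup_{f \in \mathcal{F}} \big| R(f) - R_{\mathrm{emp}}(f) \big| > \varepsilon \Big\} \le \delta(\varepsilon, \ell).

The form of \delta(\varepsilon, \ell) depends on the capacity of \mathcal{F} and on the concentration inequality employed; here the paper applies Kutin's inequality, an extension of McDiarmid's bounded-difference inequality.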

  • ABSTRACT: Generalization performance is the central concern of theoretical research in machine learning. It has been shown previously by Vapnik, Cucker and Smale that, for learning machines, the empirical risks based on an i.i.d. sequence converge uniformly to their expected risks as the number of samples approaches infinity. In order to study the generalization performance of learning machines with dependent input sequences, this paper extends these results to the case where the i.i.d. sequence is replaced by an exponentially strongly mixing sequence. We obtain the bound on the rate of uniform convergence for learning machines by using Bernstein's inequality for exponentially strongly mixing sequences, and we establish the bound on the rate of relative uniform convergence for learning machines based on an exponentially strongly mixing sequence. Finally, we compare these bounds with previous results. (An illustrative sketch of the mixing condition appears after this list.)
    Computers & Mathematics with Applications 01/2007; 53:1050-1058. · 2.07 Impact Factor
  • ABSTRACT: In many practical applications, the performance of a learning algorithm is not affected by a single factor alone, such as the complexity of the hypothesis space, the stability of the algorithm or the quality of the data. This paper addresses the performance of the regularization algorithm associated with Gaussian kernels. The main purpose is to provide a framework for evaluating the generalization performance of the algorithm jointly in terms of hypothesis space complexity, algorithmic stability and data quality. New bounds on the generalization error of the algorithm, measured by the regularization error and the sample error, are established. It is shown that the regularization error decays polynomially under some conditions, and the new bounds rest simultaneously on the uniform stability of the algorithm, the covering number of the hypothesis space and the data information. As an application, the obtained results are applied to several special regularization algorithms, and some new results for these algorithms are deduced. (A sketch of the regularization scheme and the standard error decomposition appears after this list.)
    Neural Processing Letters 04/2014; · 1.24 Impact Factor
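For the first paper listed above, "exponentially strongly mixing" is usually taken to mean that the strong (α-)mixing coefficients of the input sequence decay exponentially. The formulation below is the common one and is given only as an illustrative sketch, not necessarily that paper's exact condition:

    \alpha(k) = \sup_{n \ge 1}\ \sup_{A \in \sigma_1^{n},\ B \in \sigma_{n+k}^{\infty}} \big| P(A \cap B) - P(A)\, P(B) \big| \le \bar{\alpha}\, e^{-c k^{r}}, \qquad \bar{\alpha}, c, r > 0,

where \sigma_1^{n} and \sigma_{n+k}^{\infty} denote the \sigma-algebras generated by the observations up to time n and from time n + k onward. Bernstein-type inequalities for such sequences then replace the classical Bernstein inequality, with the sample size effectively reduced to account for the dependence.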
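For the second paper, the regularization algorithm with a Gaussian kernel is typically the Tikhonov scheme in the reproducing kernel Hilbert space \mathcal{H}_K, and the split into regularization error and sample error is the standard error decomposition. Both are sketched below under generic notation (m samples, regularization parameter \lambda, kernel width \sigma), not necessarily in that paper's exact setting:

    f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \|f\|_K^2 \Big\}, \qquad K(x, x') = \exp\Big( -\frac{\|x - x'\|^2}{2\sigma^2} \Big),

    \mathcal{E}(f_{z,\lambda}) - \mathcal{E}(f_\rho) \le \underbrace{\big[ \mathcal{E}(f_{z,\lambda}) - \mathcal{E}_z(f_{z,\lambda}) \big] + \big[ \mathcal{E}_z(f_\lambda) - \mathcal{E}(f_\lambda) \big]}_{\text{sample error}} + \underbrace{\mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho) + \lambda \|f_\lambda\|_K^2}_{\text{regularization error}},

where \mathcal{E} and \mathcal{E}_z are the expected and empirical risks, f_\rho is the regression function, and f_\lambda minimizes \mathcal{E}(f) + \lambda \|f\|_K^2 over \mathcal{H}_K. The sample error is the part controlled by stability and covering-number arguments, while the regularization error is the term the paper shows to decay polynomially under suitable conditions.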