The Bounds on the Rate of Uniform Convergence for Learning Machine
Generalization performance is an important property of learning machines: a desirable learning machine should be stable with respect to the training samples. We consider empirical risk minimization on function sets from which noisy samples have been eliminated. By applying Kutin's inequality we establish bounds on the rate of uniform convergence of the empirical risks to the expected risks of learning machines, and compare these bounds with known results.
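For context, the classical baseline that bounds of this kind are compared against is the Hoeffding-plus-union-bound rate for a finite class of loss functions taking values in [0, 1]. The following is a standard sketch of that baseline, not the Kutin-based bound of the abstract:

```latex
% Classical uniform convergence bound (Hoeffding + union bound),
% for a finite class \mathcal{F} and losses with values in [0,1]:
P\Big\{\, \sup_{f \in \mathcal{F}} \big| R_{\mathrm{emp}}(f) - R(f) \big| > \varepsilon \,\Big\}
  \;\le\; 2\,|\mathcal{F}|\,\exp\!\big(-2 n \varepsilon^{2}\big),
\qquad
R_{\mathrm{emp}}(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f, z_i),
\quad
R(f) = \mathbb{E}\,\ell(f, z).
```

Bounds of the type discussed in the abstract refine this picture by exploiting Kutin's inequality (an extension of McDiarmid's bounded-difference inequality) together with the noise-eliminated structure of the admissible function set.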
ABSTRACT: In many practical applications, the performance of a learning algorithm is affected not by a single factor alone, such as the complexity of the hypothesis space, the stability of the algorithm, or the quality of the data. This paper addresses the performance of the regularization algorithm associated with Gaussian kernels. The main purpose is to provide a framework for evaluating the generalization performance of the algorithm jointly in terms of hypothesis-space complexity, algorithmic stability, and data quality. New bounds on the generalization error of the algorithm, measured by the regularization error and the sample error, are established. It is shown that the regularization error decays polynomially under certain conditions, and that the new bounds draw on the uniform stability of the algorithm, the covering number of the hypothesis space, and the data information simultaneously. As an application, the obtained results are applied to several special regularization algorithms, and some new results for these algorithms are deduced.
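A regularization algorithm with a Gaussian kernel can be illustrated by kernel ridge regression, i.e., regularized least squares in the reproducing kernel Hilbert space of the Gaussian kernel. The sketch below assumes the square loss; the function names and parameter choices are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix:
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-sq / (2.0 * sigma**2))

def fit_krr(X, y, lam=0.1, sigma=1.0):
    # Regularized empirical risk minimization with square loss:
    # minimize (1/n) * ||y - K @ alpha||^2 + lam * alpha.T @ K @ alpha,
    # whose solution is alpha = (K + lam * n * I)^{-1} y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict_krr(X_train, alpha, X_new, sigma=1.0):
    # Predictions are kernel expansions over the training points.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Illustrative usage: fit noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit_krr(X, y, lam=1e-3, sigma=0.5)
y_hat = predict_krr(X, alpha, X, sigma=0.5)
```

In the terminology of the abstract, the regularization parameter `lam` mediates the trade-off between the regularization error (how well the hypothesis space approximates the target) and the sample error (how far the empirical solution is from the best in-class function).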
- "Rademacher average. Zou et al.  established the bounds on the rate of uniform convergence of learning machines for independent and identically distributed sequences on the set of admissible functions from which noise has been eliminated. On the other hand, the notion of algorithmic stability has also been used effectively in the literature to derive tight generalization bounds. "
ABSTRACT: Generalization performance is a central concern of theoretical research in machine learning. It was shown previously by Vapnik, Cucker and Smale that, for learning machines, the empirical risks based on an i.i.d. sequence converge uniformly to the expected risks as the number of samples approaches infinity. In order to study the generalization performance of learning machines with dependent input sequences, this paper extends these results to the case where the i.i.d. sequence is replaced by an exponentially strongly mixing sequence. We obtain a bound on the rate of uniform convergence for learning machines by using Bernstein's inequality for exponentially strongly mixing sequences, and establish a bound on the rate of relative uniform convergence for learning machines based on an exponentially strongly mixing sequence. Finally, we compare these bounds with previous results.
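For reference, the i.i.d. form of Bernstein's inequality that such arguments start from is, for a fixed function f with bounded, centered losses:

```latex
% Bernstein's inequality for i.i.d. samples: for a fixed f with
% |\ell(f, Z_i) - R(f)| \le M and \operatorname{Var} \ell(f, Z_i) \le \sigma^{2},
P\big\{\, \big| R_{\mathrm{emp}}(f) - R(f) \big| > \varepsilon \,\big\}
  \;\le\; 2 \exp\!\left( - \frac{n\,\varepsilon^{2}}{2\sigma^{2} + \tfrac{2}{3} M \varepsilon} \right).
```

In extensions to exponentially strongly mixing sequences of the kind described in the abstract, the sample size n is effectively replaced by a smaller quantity determined by the mixing rate; the exact form of this effective sample size depends on the mixing coefficients and is specified in the paper.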
- "Bousquet  derived a generalization of Vapnik and Chervonenkis' relative error inequality by using a new measure of the size of function classes, the local Rademacher average. Zou  established the bounds on the rate of uniform convergence of learning machines for i.i.d. sequences on the set of admissible functions from which noise has been eliminated. "