Generalization bounds of ERM algorithm with Markov chain samples
ABSTRACT One of the main goals of machine learning is to study the generalization performance of learning algorithms. Most previous results describing the generalization ability of learning algorithms are based on independent and identically distributed (i.i.d.) samples. However, independence is a very restrictive assumption in both theory and real-world applications.
In this paper we go beyond this classical framework by establishing bounds on the rate of relative uniform convergence for the Empirical Risk Minimization (ERM) algorithm with uniformly ergodic Markov chain samples. We not only obtain generalization bounds for the ERM algorithm, but also show that the ERM algorithm with uniformly ergodic Markov chain samples is consistent. The established theory underlies the application of ERM-type learning algorithms.
Keywords: generalization bounds · ERM algorithm · relative uniform convergence · uniformly ergodic Markov chain · learning theory
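As a toy illustration of the setting (a hypothetical sketch, not code from the paper): the sample sequence below is drawn from a two-state Markov chain, which is uniformly ergodic (finite state space, all transition probabilities positive), and ERM then selects the hypothesis with the smallest empirical risk on these dependent samples. All function names and parameter values here are illustrative assumptions.

```python
import random

def markov_samples(n, p_stay=0.7, seed=0):
    """Generate n dependent samples x_t in {0, 1} from a two-state Markov chain
    that stays in its current state with probability p_stay (uniformly ergodic)."""
    rng = random.Random(seed)
    x, out = 0, []
    for _ in range(n):
        x = x if rng.random() < p_stay else 1 - x
        out.append(x)
    return out

def empirical_risk(h, xs, ys):
    """Average 0-1 loss of hypothesis h on the (dependent) sample."""
    return sum(h(x) != y for x, y in zip(xs, ys)) / len(xs)

# Labels: y equals x, flipped with 10% noise, so the best rule is h(x) = x.
rng = random.Random(1)
xs = markov_samples(2000)
ys = [x if rng.random() < 0.9 else 1 - x for x in xs]

# Tiny hypothesis class: the four functions from {0, 1} to {0, 1}.
H = [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]
erm = min(H, key=lambda h: empirical_risk(h, xs, ys))

print(empirical_risk(erm, xs, ys))  # empirical risk near the 10% noise rate
```

The paper's results concern how fast such empirical risks converge, relatively uniformly over the hypothesis class, to the expected risk under the chain's stationary distribution, despite the dependence between consecutive samples.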