# Vladimir N. Vapnik's research while affiliated with Ecole normale supérieure de Lyon and other places

**What is this page?**

This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.

If you're a ResearchGate member, you can follow this page to keep up with this author's work.

If you are this author, and you don't want us to display this page anymore, please let us know.


## Publications (12)

We examine the so-called rigorous support vector machine (RSVM) approach proposed by Vapnik (1998). The formulation of RSVM is derived by explicitly implementing the structural risk minimization principle with a parameter H used to directly control the VC dimension of the set of separating hyperplanes. By optimizing the dual problem, RSVM finds the...

We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, used in previous bounds (...

We study the use of support vector machines (SVM) in classifying e-mail as spam or nonspam by comparing it to three other classification algorithms: Ripper, Rocchio, and boosting decision trees. These four algorithms were tested on two different data sets: one data set where the number of features was constrained to the 1000 best features and anot...

It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs to have some provisions for complexity control. Existing implementations of complexity control include penalization (or regularization), we...

Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that support vector machines (SVM's) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the...

Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical l...

In this paper, we studied the problem of classifying spam vs. nonspam e-mail under the general framework of text categorization. First we applied the regular support vector machine to this problem and tuned its parameters. We observed that the classification accuracy and generalization ability of the SVM classifier can be controlled, via...

A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the pr...

The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high gene...
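
The idea described above — a linear decision surface built in a kernel-induced feature space — can be sketched in a few lines. This is only an illustration of the kernel-expansion form of the decision function, f(x) = Σᵢ αᵢ·yᵢ·K(xᵢ, x) + b; the coefficients below are hand-picked for a toy XOR problem, whereas a real support-vector machine obtains them by solving the dual quadratic program.

```python
import math

def rbf(x, z, gamma=1.0):
    # RBF kernel: an inner product in an implicit, very high-dimensional feature space.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def decision(x, support, alphas, labels, b=0.0, gamma=1.0):
    # Linear decision surface in feature space, written via the kernel expansion:
    #   f(x) = sum_i alpha_i * y_i * K(x_i, x) + b
    return sum(a * y * rbf(s, x, gamma) for a, y, s in zip(alphas, labels, support)) + b

# Toy XOR-style data: not linearly separable in the input space.
X = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
y = [1, 1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0]  # hand-picked for illustration, not learned

preds = [1 if decision(xi, X, alphas, y, gamma=2.0) >= 0 else -1 for xi in X]
print(preds)  # all four points fall on the correct side of the surface
```

Although the surface is linear in feature space, its preimage in the input space is non-linear, which is what lets it separate the XOR pattern.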

This paper compares the performance of several classifier algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also training time, recognition time, and memory requirements. When available, we report measurements of the fraction of patterns that must be rejected so that the remaining patterns have misclass...

## Citations

... Finally, the predicted RSSI value was evaluated as the average RSSI value of the training-set points associated with the same cluster. (6) Support vector machine (SVM)-based method [26]: tries to fit the best line within a predefined error threshold. In our study, we used the radial basis function (RBF) kernel with gamma = scale, and the default values of the other parameters. ...
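
As a side note on the snippet above: in scikit-learn, gamma = 'scale' for an RBF kernel is documented to mean gamma = 1 / (n_features · Var(X)), with the variance taken over all entries of the training matrix. A minimal sketch of that heuristic (a re-implementation for illustration, not scikit-learn's own code):

```python
def scale_gamma(X):
    # scikit-learn's gamma='scale' heuristic for RBF kernels:
    #   gamma = 1 / (n_features * Var(X)),
    # where Var(X) is the population variance over all entries of X.
    d = len(X[0])
    flat = [v for row in X for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return 1.0 / (d * var)

print(scale_gamma([[0.0, 0.0], [2.0, 2.0]]))  # -> 0.5 (2 features, variance 1)
```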

... In the framework of this theory, the method of Structural Risk Minimization for model selection was proposed. In the case studied here, this yields the following multiplicative factor (Cherkassky, Mulier, & Vapnik, 1997), derived from Uniform Convergence Bounds (UCB): ...

... The average error is computed and used to evaluate the predictor. It is known (Vapnik and Chapelle, 1999; Chapelle et al., 2002) that the leave-one-out procedure gives an almost unbiased estimate of the expected generalization error. The results are shown in table 5.4. ...
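
The leave-one-out procedure mentioned in the snippet is easy to state as code: hold each point out in turn, train on the rest, test on the held-out point, and average. The classifier below is a hypothetical stand-in (1-nearest-neighbour); any learner with the same fit-and-predict interface, including an SVM, could be plugged in.

```python
def loo_error(X, y, fit_predict):
    # Leave-one-out: hold each point out in turn, train on the remaining
    # points, and count how often the held-out point is misclassified.
    errors = 0
    for i in range(len(X)):
        X_train, y_train = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        if fit_predict(X_train, y_train, X[i]) != y[i]:
            errors += 1
    return errors / len(X)

def one_nn(X_train, y_train, x):
    # Hypothetical stand-in learner: 1-nearest-neighbour by squared distance.
    dists = [sum((a - b) ** 2 for a, b in zip(xi, x)) for xi in X_train]
    return y_train[dists.index(min(dists))]

X = [(0.0,), (0.1,), (1.0,), (1.1,)]
y = [0, 0, 1, 1]
print(loo_error(X, y, one_nn))  # -> 0.0 on this well-separated toy set
```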

... This is a logical approach because Ivanov regularization frameworks directly handle the two main forces guiding SRM-based learning: on one hand, minimization of the empirical risk; on the other, control of the hypothesis space. The guiding forces seek to minimize the risk for the chosen hypothesis space (Bi and Vapnik, 2003). For the sake of brevity, we will refer to this formulation as the I-SVM when addressing the Ivanov-based SVM. ...

... Here, y_i ∈ {−1, 1} is the corresponding class label of the i-th sample. According to [29], the leave-one-out (LOO) error is an unbiased estimate of the expected generalization error of SVM and upper bounded by the square of the radius-margin ratio. To be specific, the following holds ...
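
The inequality itself is truncated in the snippet above and cannot be recovered from it. For reference, the standard radius–margin bound that it appears to refer to (for a hard-margin SVM trained on ℓ points) is usually written as below; this reconstruction is an assumption based on the Vapnik–Chapelle bound, not text recovered from the citing paper.

```latex
% L     = number of leave-one-out errors,
% \ell  = sample size,
% R     = radius of the smallest sphere enclosing the training points,
% \gamma = 1/\lVert w \rVert = margin of the separating hyperplane.
\frac{L}{\ell} \;\le\; \frac{R^2}{\ell\,\gamma^2} \;=\; \frac{R^2\,\lVert w \rVert^2}{\ell}
```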

... This study uses statistical downscaling methods to downscale the data of seven GCMs in CMIP6 in time and space. It uses Support Vector Regression (SVR) [16] to regress and analyze the simulated data of multiple climate models to obtain a relatively reasonable prediction model. In this paper, nine commonly used extreme temperature indices were selected to first simulate and evaluate the extreme temperature indices in the historical period of the North China Plain and then predict the spatial and temporal variability of the extreme temperature indices in the future climate scenarios of the North China Plain, to provide a scientific basis for addressing the disaster risks associated with extreme temperature events. ...

... For supervised learning methods, SVM is one of the most robust methods used for classification purposes. Recently, SVM has been used to excellent effect for pattern recognition problems (Vapnik 1999), machine learning (Gammerman et al. 2016), and medical diagnosis (Dobrowolski, Wierzbowski, and Tomczykiewicz 2012; Subasi 2013). Moreover, SVM is used in a variety of applications such as recognition and detection, text recognition, content-based image retrieval, biometrics, speech recognition, etc. SVMs construct a hyperplane or set of hyperplanes in an infinite- or high-dimensional space using the kernel trick to separate non-linear data with a larger margin. ...

... A support vector machine (SVM) is a linear binary classifier [18]. Because of the multivariate data set, a one-vs-all multiclass implementation was used to transform this binary classifier into a multiple class discrimination model [19,20]. In brief, a binary SVM can be extended to distinguish between multiple classes by training an SVM model that all samples in a particular class y are labeled as positive, i.e., c = +1. ...
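
The one-vs-all construction described in this snippet can be sketched generically: train one binary scorer per class on +1/−1 relabelled data, then predict the class whose scorer responds most strongly. The centroid scorer below is a deliberately simple, hypothetical stand-in for the binary SVM of the snippet; any binary learner returning a real-valued score fits the same interface.

```python
def train_one_vs_rest(X, y, classes, train_binary):
    # One-vs-all: for each class c, relabel the data as +1 (y == c) vs
    # -1 (y != c) and train an independent binary scorer on it.
    return {c: train_binary(X, [1 if yi == c else -1 for yi in y]) for c in classes}

def predict_one_vs_rest(models, x):
    # Predict the class whose binary scorer assigns x the highest score.
    return max(models, key=lambda c: models[c](x))

def centroid_scorer(X, y_pm):
    # Hypothetical stand-in for a binary SVM: score by negative squared
    # distance to the centroid of the positive examples.
    pos = [xi for xi, yi in zip(X, y_pm) if yi == 1]
    c = [sum(col) / len(pos) for col in zip(*pos)]
    return lambda x: -sum((a - b) ** 2 for a, b in zip(x, c))

X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0), (9.0, 0.0), (9.0, 1.0)]
y = ["a", "a", "b", "b", "c", "c"]
models = train_one_vs_rest(X, y, ["a", "b", "c"], centroid_scorer)
print(predict_one_vs_rest(models, (0.0, 0.5)))  # -> a
```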

... Recently, thanks to its abilities of self-learning and adaptability, the adaptive dynamic programming-based method built on reinforcement learning (RL) has demonstrated the capability to find the optimal control policy and solve the Bellman equation in a practical way [13][14][15][16]. These methods can obtain near-accurate estimates to some extent, in the sense of asymptotic convergence [17,18] and minimum risk [19,20]. Some work has also obtained estimation error bounds under strict assumptions [8][9][10]. ...

... This is a mild degree of imbalance. SVM models tend to be less sensitive to class imbalance because they define a hyperplane that separates examples belonging to each class in a high-dimensional space, thereby achieving higher accuracy compared to other models, which strive to minimise the error rate [30]. ...