Article

Controlling the Sensitivity of Support Vector Machines

06/1999
Source: CiteSeer

ABSTRACT: For many applications it is important to accurately distinguish false negative results from false positives. This is particularly true for medical diagnosis, where the correct balance between sensitivity and specificity plays an important role in evaluating the performance of a classifier. In this paper we discuss two schemes for adjusting the sensitivity and specificity of Support Vector Machines and describe their performance using receiver operating characteristic (ROC) curves. We then illustrate their use on real-life medical diagnostic tasks.

1 Introduction. Since their introduction by Vapnik and coworkers [Vapnik, 1995; Cortes and Vapnik, 1995], Support Vector Machines (SVMs) have been successfully applied to a number of real-world problems such as handwritten character and digit recognition [Schölkopf, 1997; Cortes, 1995; LeCun et al., 1995; Vapnik, 1995], face detection [Osuna et al., 1997] and speaker identification [Schmidt, 1996]. They e...
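As a concrete illustration of the kind of adjustment the abstract describes, the sketch below trains an SVM with asymmetric per-class penalties and then traces an ROC curve by sweeping a threshold over the decision values (which amounts to shifting the bias of the trained machine). It assumes scikit-learn, which postdates the paper; the dataset, weights, and parameter values are illustrative only, and whether these two adjustments match the paper's exact schemes is an assumption here, not a claim.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import roc_curve, auc

    # An imbalanced toy diagnostic task: roughly 10% "positive" cases.
    X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Asymmetric penalties: class_weight rescales C per class, so errors
    # on the rare positive class cost five times more than the others.
    clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 5.0}).fit(X_tr, y_tr)

    # Sweep a threshold over the decision values to trace the ROC curve,
    # i.e. move along the sensitivity/specificity trade-off.
    fpr, tpr, thresholds = roc_curve(y_te, clf.decision_function(X_te))
    print("AUC = %.3f" % auc(fpr, tpr))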

  • ABSTRACT: This paper proposes an improved SMO algorithm for solving the quadratic optimization problem in class-imbalanced learning. SMO is well suited to the optimization problem of a support vector machine that assigns different regularization values to the two classes, and the proposed SMO learning algorithm iterates learning steps that find the current optimal solution over two Lagrange variables selected per class. The proposed algorithm is tested on UCI benchmark problems and compared against the standard SMO algorithm using the g-mean measure, which accounts for the imbalanced class distribution when assessing generalization performance. Compared to standard SMO, the proposed algorithm improves the prediction rate on the minority class and can shorten training time.
    Journal of the Korea Society of Computer and Information. 01/2010; 15(7).
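    As a sketch of the two ingredients this abstract combines, the snippet below (assuming scikit-learn and NumPy; all names and values are illustrative, not the paper's implementation) pairs per-class regularization values with the g-mean measure used for evaluation. The improved SMO solver itself is not reproduced; class_weight simply rescales C per class in the underlying solver.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC
        from sklearn.metrics import confusion_matrix

        def g_mean(y_true, y_pred):
            # Geometric mean of sensitivity and specificity: it collapses
            # to zero if either class is ignored, unlike plain accuracy.
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            return np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))

        # An imbalanced toy problem: about 5% minority class.
        X, y = make_classification(n_samples=400, weights=[0.95, 0.05], random_state=1)
        # Give the minority class a larger regularization value, mirroring
        # the per-class C values the SMO variant optimizes over.
        clf = SVC(C=1.0, class_weight={0: 1.0, 1: 20.0}).fit(X, y)
        print("g-mean = %.3f" % g_mean(y, clf.predict(X)))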
  • ABSTRACT: Asymmetric error is a kind of error that trades off between the false positive rate and the false negative rate of a binary classifier. It has recently been used in the imbalanced binary classification problem, where the probability of one class far outweighs that of the other. Classical bounds on the generalization error are not directly applicable to a binary classifier learned with an asymmetric error, because in this context different penalties are associated with the false positive rate and the false negative rate. In this paper, we present margin-based bounds on the asymmetric error of a binary classifier in terms of its empirical asymmetric error. We focus our study on convex combinations of classifiers, such as classifiers learned by boosting and neural networks. Our bounds suggest that machine learning methods that learn with an asymmetric error should produce a classifier with different margins for examples coming from different classes, rather than treating the margins equally as current methods do.
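    For concreteness, here is a minimal sketch of an empirical asymmetric error as a convex combination of the false negative and false positive rates; the weighting alpha and the function name are assumptions for illustration, and the paper's exact formulation may differ.

        import numpy as np

        def asymmetric_error(y_true, y_pred, alpha=0.7):
            # Convex combination alpha*FNR + (1 - alpha)*FPR: alpha encodes
            # how much more a missed positive costs than a false alarm.
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            pos, neg = y_true == 1, y_true == 0
            fnr = np.mean(y_pred[pos] != 1) if pos.any() else 0.0
            fpr = np.mean(y_pred[neg] != 0) if neg.any() else 0.0
            return alpha * fnr + (1 - alpha) * fpr

        # FNR = 1/2, FPR = 1/3, so the error is 0.7*0.5 + 0.3*(1/3) = 0.45.
        print(asymmetric_error([1, 1, 0, 0, 0], [1, 0, 0, 0, 1]))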
  • ABSTRACT: Imbalanced datasets often become an obstacle in supervised learning. Imbalance is the case in which the examples belonging to one class heavily outnumber the examples in the other class; applying a classifier to such a dataset typically causes it to fail to learn the minority class. The Synthetic Minority Oversampling Technique (SMOTE) is a well-known oversampling method that tackles imbalance at the data level by creating synthetic examples between two close minority vectors. Our study considers three improvements of SMOTE, which we call SMOTE-Out, SMOTE-Cosine, and Selected-SMOTE, in order to cover cases not already handled by SMOTE. To investigate the proposed methods, we conducted experiments on eighteen different datasets. The results show that our proposed SMOTE variants give improvements in B-ACC and F1-Score.
    The 6th International Conference on Advanced Computer Science and Information Systems (ICACSIS), Jakarta, Indonesia; 10/2014
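    Since the three variants modify vanilla SMOTE, a minimal NumPy sketch of the baseline technique may help as a reference point (the function name and parameters are illustrative): each synthetic example is an interpolation between a minority sample and one of its k nearest minority neighbours, and the proposed variants change how the neighbour and the interpolation direction are chosen.

        import numpy as np

        def smote(X_min, n_new, k=5, seed=0):
            # Vanilla SMOTE: each synthetic point lies on the segment between
            # a minority sample and one of its k nearest minority neighbours.
            rng = np.random.default_rng(seed)
            d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
            nn = np.argsort(d, axis=1)[:, 1:k + 1]   # column 0 is the point itself
            synthetic = []
            for _ in range(n_new):
                i = rng.integers(len(X_min))         # a random minority sample
                j = rng.choice(nn[i])                # one of its k neighbours
                synthetic.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
            return np.array(synthetic)

        X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                          [1.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
        print(smote(X_min, n_new=4, k=3))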