Article
Combined SVM-Based Feature Selection and Classification.
Machine Learning (Impact Factor: 1.69). 01/2005; 61:129-150. DOI: 10.1007/s10994-005-1505-9
Source: DBLP

Conference Paper: Feature selection by block addition and block deletion
ABSTRACT: In our previous work, we developed methods for selecting input variables for function approximation based on block addition and block deletion. In this paper, we extend these methods to feature selection. To avoid random tie-breaking for a small-sample-size problem with a large number of features, we introduce the weighted sum of the recognition error rate and the average of margin errors as the feature selection and feature ranking criteria. In our methods, starting from the empty set of features, we add several features at a time until a stopping condition is satisfied. Then we search for deletable features by block deletion. To further speed up feature selection, we use a linear programming support vector machine (LP SVM) as a preselector. Through computer experiments on benchmark data sets, we show that adding the average of margin errors is effective for small-sample-size problems with large numbers of features in realizing high generalization ability.
Proceedings of the 5th INNS IAPR TC 3 GIRPR conference on Artificial Neural Networks in Pattern Recognition; 09/2012
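The block addition / block deletion loop described in this abstract can be sketched as follows. The paper's actual criterion is the weighted sum of the recognition error rate and the average of margin errors computed from a trained SVM; here a toy `error()` function stands in for it, and the block size and stopping rule are illustrative, not taken from the paper.

```python
# Minimal sketch of block addition followed by block deletion.
# error() is a toy stand-in for the paper's SVM-based criterion:
# it treats features {0, 2} as the informative ones and adds a
# small penalty per superfluous feature.

def error(features):
    informative = {0, 2}
    miss = len(informative - set(features))
    return miss + 0.01 * len(set(features) - informative)

def block_addition(all_features, block=2, tol=0.0):
    """Starting from the empty set, add a block of features at a time."""
    selected = []
    remaining = list(all_features)
    best = error(selected)
    while remaining:
        # Rank remaining features by how much each improves the criterion.
        ranked = sorted(remaining, key=lambda f: error(selected + [f]))
        candidate = selected + ranked[:block]
        if error(candidate) < best - tol:
            best = error(candidate)
            selected = candidate
            remaining = [f for f in remaining if f not in selected]
        else:
            break  # stopping condition: no block improves the criterion
    return selected

def block_deletion(selected):
    """Delete features whose removal does not worsen the criterion."""
    current = list(selected)
    for f in list(current):
        trial = [g for g in current if g != f]
        if error(trial) <= error(current):
            current = trial
    return current

selected = block_deletion(block_addition(range(6)))
print(sorted(selected))  # the two informative toy features remain
```

Adding features in blocks rather than one at a time is what avoids the random tie-breaking the abstract mentions: when many single features score identically, a block-level comparison can still separate them.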
Conference Paper: Robust feature selection for SVMs under uncertain data
ABSTRACT: In this paper, we consider the problem of feature selection and classification under uncertain data, which is inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose a robust scheme to handle data with ellipsoidal model uncertainty. The difficulty of treating the zero-norm ℓ0 in the feature selection problem is overcome by using an appropriate approximation together with DC (Difference of Convex functions) programming and DCA (DC Algorithm). The computational results show that the proposed robust optimization approach outperforms a traditional approach in immunizing against perturbations of the data.
Proceedings of the 13th international conference on Advances in Data Mining: applications and theoretical aspects; 07/2013
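The ℓ0 difficulty mentioned in this abstract is that the zero-norm (the count of nonzero weights) is discontinuous and nonconvex. A common remedy, in the spirit of the approximation the abstract alludes to, is a smooth concave surrogate such as sum(1 - exp(-alpha * |w_i|)), which DC programming can then handle; the surrogate and the value of `alpha` below are illustrative, not the paper's exact choices.

```python
# Illustrative comparison of the zero-norm with a smooth concave
# surrogate of the kind used in DC-programming approaches.

import math

def l0(w):
    # True zero-norm: number of nonzero components.
    return sum(1 for wi in w if wi != 0)

def l0_surrogate(w, alpha=5.0):
    # Exponential surrogate: each term rises from 0 toward 1 as |w_i|
    # grows, so the sum approximates the count of nonzero weights.
    return sum(1 - math.exp(-alpha * abs(wi)) for wi in w)

w = [0.0, 0.8, 0.0, 1.5]
print(l0(w), round(l0_surrogate(w), 3))
```

For weights that are clearly zero or clearly large, the surrogate is close to the true count, while remaining differentiable everywhere, which is what makes gradient-based and DCA-style optimization applicable.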
Conference Paper: Improved UTA Feature Selection using Ant Colony Optimization
ABSTRACT: Feature selection is the problem of choosing a minimal subset of the set of all features that is necessary and sufficient for the classifier. UTA [1] is a simple algorithm that operates on a trained artificial neural network: it evaluates features by removing them one by one and measuring their effect on accuracy, classifying them into three categories: relevant, irrelevant, and redundant. UTA can guarantee that all of the features it marks relevant are useful, but its disadvantage is that correlated features are all determined to be irrelevant/redundant, because they are evaluated one at a time; in fact, some of them may be relevant. Ant colony optimization (ACO) is widely used for feature selection and performs very well, but it requires considerable running time. In this paper, the UTA algorithm is performed first, and then ACO is used to find the useful features that UTA could not. The proposed algorithm (called UTA-ACO) efficiently improves the performance of UTA and reduces the computational time of ACO. The obtained results indicate the robustness of UTA-ACO.
2011 3rd International Conference on Machine Learning and Computing (ICMLC 2011), Singapore; 02/2011
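The weakness of one-at-a-time evaluation described in this abstract is easy to demonstrate. Below, a toy `accuracy()` function stands in for the trained network, with two features that carry the same signal; the function and its numbers are purely illustrative, not from the paper.

```python
# Toy sketch of the UTA idea: score each feature by the accuracy drop
# when it is removed individually. accuracy() is an illustrative
# stand-in for querying a trained neural network.

def accuracy(active):
    # Pretend features 0 and 1 are correlated copies of one signal:
    # accuracy stays high as long as either one is active.
    base = 0.60
    if 0 in active or 1 in active:
        base += 0.30
    if 2 in active:
        base += 0.05
    return base

def uta_scores(n_features):
    full = set(range(n_features))
    baseline = accuracy(full)
    # Accuracy drop when each feature is removed on its own.
    return {f: baseline - accuracy(full - {f}) for f in full}

scores = uta_scores(3)
# Removing feature 0 or 1 alone causes no drop, so a one-at-a-time
# criterion marks both as redundant, even though at least one of
# them is genuinely relevant. A subset search such as ACO can
# recover this by evaluating groups of features together.
print(scores)
```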