Conference Paper

Optimizing a Class of Feature Selection Measures

Conference: NIPS 2009 Workshop on Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra (DISCML)

ABSTRACT

Feature selection is an important processing step in machine learning and the design of pattern-recognition systems. A major challenge is the selection of relevant features from high-dimensional data sets. To tackle the computational complexity, heuristic, sequential, or random search strategies are frequently applied. These methods, however, often yield feature sets that are only locally optimal and may be globally sub-optimal. The aim of our research is to derive a new, efficient approach that guarantees globally optimal feature sets. We focus on the so-called filter methods. We show that a number of feature-selection measures, e.g., the correlation-feature-selection measure, the minimal-redundancy-maximal-relevance measure, and others, can be fused and generalized. We formulate the feature-selection problem as a polynomial mixed 0-1 fractional programming (PM01FP) problem. To solve it, we transform the PM01FP problem into a mixed 0-1 linear programming (M01LP) problem by applying an improved version of Chang's method of grouping additional variables. The globally optimal solution to the M01LP problem can then be obtained with the branch-and-bound algorithm. Experimental results obtained on the UCI data sets show that our globally optimal method outperforms other heuristic search procedures, removing up to 10% more redundant or confusing features from the original data set while keeping or yielding even better accuracy.
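
To make the transformation concrete, below is a minimal sketch, not the authors' code, of how a linear-fractional instance of such a measure can be linearized into an M01LP and handed to a branch-and-bound solver. It assumes the measure takes the simple form (a_0 + Σ_i a_i x_i) / (b_0 + Σ_i b_i x_i) with binary x_i and a strictly positive denominator; the coefficients are invented, and the PuLP modelling library is our choice. The paper's PM01FP formulation is more general (polynomial terms, improved grouping of the additional variables).

```python
# Sketch: linearize GeFS(x) = (a0 + sum a_i x_i) / (b0 + sum b_i x_i), x_i binary.
# The substitution t = 1/(b0 + sum b_i x_i) and v_i = x_i * t removes the
# fraction; the products are then enforced by linear (Glover-style) constraints.
import pulp

a0, a = 0.0, [0.8, 0.6, 0.5, 0.1]   # hypothetical "relevance" coefficients
b0, b = 1.0, [0.2, 0.9, 0.3, 0.4]   # hypothetical "redundancy" coefficients
n = len(a)
M = 1.0 / b0                        # valid upper bound on t, since all b_i >= 0

prob = pulp.LpProblem("GeFS_as_M01LP", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
t = pulp.LpVariable("t", lowBound=0)                           # t = 1/denominator
v = [pulp.LpVariable(f"v{i}", lowBound=0) for i in range(n)]   # v_i = x_i * t

# Objective: numerator * t, with each product x_i * t replaced by v_i.
prob += a0 * t + pulp.lpSum(a[i] * v[i] for i in range(n))

# Enforce t = 1/denominator: (b0 + sum b_i x_i) * t == 1, linearized via v_i.
prob += b0 * t + pulp.lpSum(b[i] * v[i] for i in range(n)) == 1

# Force v_i = x_i * t: v_i = 0 when x_i = 0, and v_i = t when x_i = 1.
for i in range(n):
    prob += v[i] <= M * x[i]
    prob += v[i] <= t
    prob += v[i] >= t - M * (1 - x[i])

prob.solve()   # PuLP's default CBC backend uses branch-and-bound
print([int(xi.value()) for xi in x], pulp.value(prob.objective))
```

Because the linearization is exact, the binary vector returned by the solver is a globally optimal feature subset for this fractional measure, which is what distinguishes the approach from the local search heuristics mentioned above.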

Related publications

  • ABSTRACT: Feature extraction is the heart of an object-recognition system. In a recognition problem, features are used to distinguish one class of objects from another. The original data are usually high-dimensional. The objective of feature extraction is to support the classification of objects and, further, to reduce the dimensionality of the measurement space to one suitable for applying object-classification techniques. In the feature-extraction process, only the salient features necessary for recognition are retained, so that classification can be carried out on a vastly reduced feature set. In this paper we discuss the features as well as the classification techniques used in neural networks.
    Article
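
As a minimal, self-contained illustration of the "retain only the salient features" step described above, here is a hypothetical scikit-learn snippet on synthetic data; the data set, the choice of k, and the mutual-information scoring are our assumptions, not the setup of the cited article.

```python
# Keep only the k most informative features before training a classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
selector = SelectKBest(mutual_info_classif, k=5)   # score features against y
X_reduced = selector.fit_transform(X, y)           # (200, 50) -> (200, 5)
print(X_reduced.shape)
```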

  • ABSTRACT: Feature selection is an important pre-processing step in intrusion detection. Reducing the number of traffic features without negatively affecting classification accuracy greatly improves the overall effectiveness of an intrusion detection system. A major challenge is to choose feature-selection methods that can precisely determine the relevance of features to the intrusion-detection task and the redundancy between features. Two new feature-selection measures suitable for the intrusion-detection task have been proposed recently [11,12]: the correlation-feature-selection (CFS) measure and the minimal-redundancy-maximal-relevance (mRMR) measure. In this paper, we validate these feature-selection measures by comparing them with various previously known automatic feature-selection algorithms for intrusion detection. The algorithms involved in this comparison are the previously known SVM-wrapper, Markov-blanket, and Classification & Regression Trees (CART) algorithms, as well as the recently proposed generic-feature-selection (GeFS) method with two instances applicable to intrusion detection: the correlation-feature-selection (GeFS_CFS) and the minimal-redundancy-maximal-relevance (GeFS_mRMR) measures. Experimental results obtained on the KDD CUP'99 data set show that the generic-feature-selection (GeFS) method for intrusion detection outperforms the existing approaches, removing more than 30% of the redundant features from the original data set while keeping or yielding even better classification accuracy.
    Conference Paper · Sep 2010
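
For contrast with the globally optimal GeFS_mRMR instance, the following is a rough sketch of the classic greedy mRMR heuristic it is measured against. The mutual-information inputs are assumed to be precomputed, and the function name and structure are ours, not taken from the paper.

```python
import numpy as np

def greedy_mrmr(relevance, redundancy, k):
    """Greedy mRMR: relevance[i] = I(f_i; class), redundancy[i][j] = I(f_i; f_j).

    Repeatedly adds the feature with maximal relevance minus mean redundancy
    with the already selected set; this is the locally optimal search that a
    global M01LP formulation avoids.
    """
    n = len(relevance)
    selected = [int(np.argmax(relevance))]          # seed: most relevant feature
    while len(selected) < min(k, n):
        best, best_score = None, -np.inf
        for i in set(range(n)) - set(selected):
            score = relevance[i] - np.mean([redundancy[i][j] for j in selected])
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```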