A new artificial neural network ensemble based on feature selection and class recoding

Neural Computing and Applications (Impact Factor: 1.76). 01/2012; DOI: 10.1007/s00521-010-0458-5

ABSTRACT Many studies in supervised learning have focused on solving multiclass problems. A standard technique is to decompose
the original multiclass problem into multiple binary problems. In this paper, we propose a new learning model applicable to
multiclass domains in which the examples are described by a large number of features. The proposed model is an artificial
neural network ensemble in which each base learner is composed of the union of a binary classifier and a multiclass classifier.
To analyze the viability and quality of this system, it is validated in two real domains: traffic sign recognition and
handwritten digit recognition. Experimental results show that our model is at least as accurate as other methods reported in
the literature, while offering a considerable advantage in size, computational complexity, and running time.

Keywords: Classifier ensemble · Multiclass learning · Neural networks · Feature selection · Class recoding
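The abstract only outlines how a binary and a multiclass classifier are joined in each base learner, so the sketch below is a hypothetical interpretation: each base learner uses a binary "gate" over a recoded two-group split of the classes, then a per-group multiclass classifier resolves the final label. A simple nearest-centroid model stands in for the neural networks; the class names (`NearestCentroid`, `BinaryPlusMulticlass`) and the grouping scheme are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for a trained neural network classifier."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Distance from every sample to every class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

class BinaryPlusMulticlass:
    """One base learner: binary group gate + per-group multiclass head."""
    def __init__(self, group_a):
        self.group_a = set(group_a)  # recoded split: classes in group_a vs the rest
    def fit(self, X, y):
        recoded = np.array([0 if c in self.group_a else 1 for c in y])
        self.gate = NearestCentroid().fit(X, recoded)       # binary classifier
        self.heads = {g: NearestCentroid().fit(X[recoded == g], y[recoded == g])
                      for g in (0, 1)}                      # multiclass classifiers
        return self
    def predict(self, X):
        g = self.gate.predict(X)
        out = np.empty(len(X), dtype=int)
        for grp in (0, 1):
            m = g == grp
            if m.any():
                out[m] = self.heads[grp].predict(X[m])
        return out
```

An ensemble in this spirit would train several such base learners, each with a different class recoding (and, per the abstract, a different feature subset), and combine their votes.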

    ABSTRACT: Negative Correlation Learning (NCL) is a popular combining method that employs a special error function for the simultaneous training of base neural network (NN) experts. In this article, we propose an improved version of the NCL method in which the gating network, the combining component of the Mixture of Experts method, is used to combine the base NNs in the NCL ensemble. The special error function of the NCL method encourages each NN expert to learn different parts or aspects of the training data, so the local competence of the experts should be considered when combining them. The gating network provides this functionality, and the proposed method is therefore called Gated NCL. The improved ensemble method is compared with approaches previously used for combining NCL experts, including the winner-take-all (WTA) and average (AVG) combining techniques, on several classification problems from the UCI machine learning repository. The experimental results show that our proposed ensemble method significantly improves performance over the previous combining approaches.
    Neural Computing and Applications 01/2011; 22(1). · 1.76 Impact Factor
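The abstract above refers to NCL's special error function without stating it. As a point of reference, the standard NCL per-expert error adds a correlation penalty to the squared error: e_i = (F_i − d)² + λ·p_i with p_i = (F_i − F̄)·Σ_{j≠i}(F_j − F̄), which simplifies to p_i = −(F_i − F̄)² since the deviations from the ensemble mean sum to zero. A minimal sketch (the function name and `lam` parameter are assumptions for illustration):

```python
import numpy as np

def ncl_errors(outputs, target, lam=0.5):
    """Per-expert NCL error for one training example.

    outputs: array of shape (n_experts,), each expert's prediction F_i.
    target:  scalar desired output d.
    lam:     penalty strength lambda; lam=0 recovers independent training.
    """
    outputs = np.asarray(outputs, dtype=float)
    f_bar = outputs.mean()                    # ensemble (average) output
    penalty = -(outputs - f_bar) ** 2         # simplified correlation penalty
    return (outputs - target) ** 2 + lam * penalty
```

The penalty rewards experts for deviating from the ensemble mean, which is what pushes each expert toward different parts of the training data; the gating network proposed in the article then weights these specialized experts instead of the plain WTA or AVG combination.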
