Article

A new artificial neural network ensemble based on feature selection and class recoding

Neural Computing and Applications 06/2012; 21(4):1-13. DOI: 10.1007/s00521-010-0458-5

ABSTRACT: Many studies of supervised learning have focused on the resolution of multiclass problems. A standard technique for solving these problems is to decompose the original multiclass problem into multiple binary problems. In this paper, we propose a new learning model applicable to multiclass domains in which the examples are described by a large number of features. The proposed model is an artificial neural network ensemble in which each base learner is composed of the union of a binary classifier and a multiclass classifier. To analyze the viability and quality of this system, it is validated in two real domains: traffic sign recognition and handwritten digit recognition. Experimental results show that our model is at least as accurate as other methods reported in the literature, while offering a considerable advantage with respect to size, computational complexity, and running time.

Keywords: Classifier ensemble · Multiclass learning · Neural networks · Feature selection · Class recoding
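
The paper's code is not reproduced here, but the model the abstract describes (base learners that pair a binary classifier with a multiclass classifier, trained on selected feature subsets via class recoding) can be sketched briefly. The sketch below is an illustrative reading under stated assumptions, not the authors' implementation: scikit-learn MLPs stand in for the ANNs, ANOVA feature selection stands in for the paper's feature-selection step, the recoding is a plain "class k vs. rest" split, and predictions are combined by majority vote; the paper's actual coupling, recoding, and combination rules may differ.

    # Illustrative sketch only (not the authors' code): each base learner
    # couples a binary "class k vs. rest" recoding with a multiclass expert,
    # each trained on its own selected feature subset.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)  # stand-in for the digit domain
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    learners = []
    for k in np.unique(y):
        # Class recoding: binary target "is this example class k?"
        binary = make_pipeline(SelectKBest(f_classif, k=20),
                               MLPClassifier(hidden_layer_sizes=(32,),
                                             max_iter=500, random_state=0))
        binary.fit(X_tr, (y_tr == k).astype(int))
        # Multiclass expert trained on the remaining classes.
        rest = y_tr != k
        multi = make_pipeline(SelectKBest(f_classif, k=20),
                              MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=500, random_state=0))
        multi.fit(X_tr[rest], y_tr[rest])
        learners.append((k, binary, multi))

    def predict_all(X):
        votes = []
        for k, binary, multi in learners:
            is_k = binary.predict(X).astype(bool)
            # Vote k where the binary stage fires, else the expert's label.
            votes.append(np.where(is_k, k, multi.predict(X)))
        votes = np.stack(votes)  # shape: (n_learners, n_samples)
        return np.array([np.bincount(col).argmax() for col in votes.T])

    print("accuracy:", (predict_all(X_te) == y_te).mean())

The binary/multiclass coupling, the recoding scheme, and the feature-selection method are the paper's specific contributions; this sketch only mirrors their overall shape.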

Citations
  • ABSTRACT: Over the last two decades, the machine learning and related communities have conducted numerous studies to improve the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative examples of algorithms for generating homogeneous ensembles of classifiers. Stacking, however, has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization in 1992. Studies that have addressed Stacking show that the choice of base learning algorithms, their learning parameters, and the learning algorithm used to generate the meta-classifier are critical issues. Most studies on this topic select the appropriate combination of base learning algorithms and their learning parameters manually, although some methods determine good Stacking configurations automatically rather than starting from these strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains. (A minimal code sketch of stacked generalization appears after this list.)
    Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 01/2015; 5(1):21-34. DOI: 10.1002/widm.1143
  • ABSTRACT: In this study, a comprehensive methodology for overcoming the design problem of the Fuzzy ARTMAP neural network is proposed. The issues addressed are the sequence of training data for supervised learning and optimum tuning of parameters such as the baseline vigilance. A genetic algorithm search heuristic was chosen to solve this multi-objective optimization problem. To further augment ARTMAP's pattern classification ability, multiple ARTMAPs were optimized via the genetic algorithm and assembled into a classifier ensemble. An optimal ensemble was realized through the inter-classifier diversity of its constituents, achieved by mitigating convergence in the genetic algorithms with a hierarchical parallel architecture. The best-performing classifiers were then combined into an ensemble using probabilistic voting for decision combination (a soft-voting sketch appears after this list). The study also integrates these methods into a single framework, yielding a novel method for creating an optimal classifier ensemble configuration with minimal user intervention. The methodology was benchmarked using popular data sets from the UCI Machine Learning Repository.
    Neural Computing and Applications 02/2014; 26(2):263-276. DOI: 10.1007/s00521-014-1632-y
  • ABSTRACT: In this work, we formalise and evaluate an ensemble of classifiers designed for the resolution of multi-class problems. To achieve a good accuracy rate, the base learners are built with pairwise coupled binary and multi-class classifiers. Moreover, to reduce the computational cost of the ensemble and to improve its performance, these classifiers are trained on specific attribute subsets. This proposal captures the advantages provided by binary decomposition methods, by attribute partitioning methods, and by the cooperative characteristics of a combination of redundant base learners. To analyse the quality of this architecture, its performance has been tested on different domains and compared to other well-known classification methods. The experimental evaluation indicates that our model is, in most cases, as accurate as these methods but much more efficient.
    Information Fusion 07/2015; 24. DOI: 10.1016/j.inffus.2014.09.002
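
Since the first citation above surveys Stacking, a minimal runnable sketch of stacked generalization may help: level-0 base classifiers are fit on the data, and a level-1 meta-classifier is fit on their cross-validated predictions. The estimator and dataset choices below are illustrative assumptions, not taken from the review.

    # Minimal sketch of stacked generalization (Wolpert, 1992); the
    # heterogeneous base learners and the meta-classifier are illustrative.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[("knn", KNeighborsClassifier()),
                    ("tree", DecisionTreeClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5)  # meta-features come from 5-fold cross-validated predictions
    print("stacking accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))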
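
The second citation combines its GA-optimized ARTMAPs by probabilistic voting. One common reading of probabilistic voting is soft voting: average the members' class-probability estimates and predict the argmax. The numbers below are invented for illustration.

    # Soft voting over three ensemble members' class-probability estimates
    # for a single sample (three classes); values are made up.
    import numpy as np

    probs = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.5, 0.4, 0.1]])
    print("predicted class:", np.argmax(probs.mean(axis=0)))  # -> class 0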
