Conference Paper

Discriminant Parallel Perceptrons

DOI: 10.1007/11550907_3
Conference: Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, 15th International Conference, Warsaw, Poland, September 11-15, 2005, Proceedings, Part II
Source: DBLP


Parallel perceptrons (PPs), a novel approach to committee machine training that requires minimal communication between outputs and hidden units, allow the construction of efficient and stable nonlinear classifiers. In this work we explore how to improve their performance by allowing their output weights to take real values, computed by applying Fisher's linear discriminant analysis to the committee machine's perceptron outputs. We shall see that the final performance of the resulting classifiers is comparable to that of the more complex and costlier-to-train multilayer perceptrons.
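
As a minimal sketch of the idea described in the abstract, the real-valued output weights can be obtained by running Fisher's linear discriminant on the outputs of the already-trained perceptrons. The function name, the ridge regularizer and the midpoint threshold below are our own illustrative choices, not necessarily the authors' exact procedure.

    import numpy as np

    def fisher_output_weights(H, y, ridge=1e-6):
        # H: (n_samples, n_perceptrons) matrix of the +/-1 outputs of the
        #    already-trained perceptrons; y: (n_samples,) labels in {0, 1}.
        H0, H1 = H[y == 0], H[y == 1]
        m0, m1 = H0.mean(axis=0), H1.mean(axis=0)
        # Within-class scatter matrix; the small ridge keeps it invertible.
        Sw = (np.cov(H0, rowvar=False) * (len(H0) - 1)
              + np.cov(H1, rowvar=False) * (len(H1) - 1)
              + ridge * np.eye(H.shape[1]))
        w = np.linalg.solve(Sw, m1 - m0)   # Fisher direction S_W^{-1} (m1 - m0)
        b = -0.5 * w @ (m0 + m1)           # threshold midway between projected means
        return w, b                        # classify as class 1 when H @ w + b > 0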

  • "Because of this, we have studied their applicability to imbalanced classification problems, where we try to remove possibly label-noisy patterns, either directly [15] or through a boosting procedure [13], [16]. Moreover, while the output weights of standard PPs are fixed to 1 (i.e., they simply add the outputs of their individual perceptrons), in [18] it is shown how to improve their performance by allowing their output weights to take real values, computed by applying Fisher's linear discriminant analysis to them. This improves the final performance of the resulting classifiers over that of standard PPs, making it comparable to that of the more complex and costlier-to-train multilayer perceptrons."
    ABSTRACT: A common feature of many hard pattern recognition problems is that the object of interest is statistically overwhelmed by others. The overall aim of the "Learning, Evolution and Extreme Statistics" project (AE3 being its Spanish acronym) is to study such problems in the following concrete areas: 1. Natural image statistics and applications. 2. New classification techniques in extreme sample problems. 3. Evolutionary machine learning. 4. Machine learning and evolutionary computing in finance. AE3 is a coordinated project between a research group at the Instituto de Ingeniería del Conocimiento (IIC) and another at the Escuela Politécnica Superior (EPS), both at the Universidad Autónoma de Madrid (UAM).
  • ABSTRACT: Rectangular Basis Function Networks (RecBFNs) derive from RBF networks and are composed of a set of fuzzy points that describe the network. In this paper, a set of characteristics of the RecBF is proposed for use with imbalanced datasets, especially the order of the training patterns. We demonstrate that this order is an important factor in improving the generalization of the solution, which is the main problem with imbalanced datasets. Finally, this solution is compared with other important methods for imbalanced datasets, showing that our method works well with this type of dataset and that an understandable set of rules can be extracted.
    Artificial Neural Networks - ICANN 2007, 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I; 01/2007
  • ABSTRACT: The Parallel Perceptron (PP) is a simple neural network that has been shown to be a universal approximator, and it can be trained using the Parallel Delta (P-Delta) rule. This rule tries to maximize the distance between the perceptron activations and their decision hyperplanes in order to increase generalization ability, following the principles of statistical learning theory. In this paper we propose a closed-form analytical expression to calculate, without iterations, the PP weights for classification tasks. The calculated weights globally optimize a cost function that simultaneously takes into account the training error and the perceptron margin, similarly to the P-Delta rule. Our approach, called the Direct Parallel Perceptron (DPP), has linear computational complexity in the number of inputs, making it very attractive for high-dimensional problems. DPP is competitive with SVMs and other approaches (including P-Delta) for two-class classification problems but, unlike most of them, its tunable parameters do not greatly influence the results. Moreover, the absence of an iterative training stage gives DPP the ability to learn on-line.
    Neural Networks (IJCNN), The 2010 International Joint Conference on; 08/2010
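
The P-Delta rule summarized in this last abstract can be rendered schematically as follows: wrong-side perceptrons are corrected toward the target, correctly voting but small-margin activations are pushed away from the decision hyperplane, and weights are renormalized. This is a sketch of one training epoch based on the rule's published description; the function name and the constants eta, gamma and mu are illustrative choices, not taken from the paper.

    import numpy as np

    def pdelta_epoch(W, X, y, eta=0.01, gamma=0.05, mu=1.0):
        # W: (n_perceptrons, n_features), one weight row per perceptron.
        # X: (n_samples, n_features); y: (n_samples,) targets in {-1, +1}.
        for x, t in zip(X, y):
            a = W @ x                                    # perceptron activations
            out = 1 if np.sum(np.sign(a)) >= 0 else -1   # committee vote
            for i in range(len(W)):
                if out > t and a[i] >= 0:          # committee too high: demote +1 voters
                    W[i] -= eta * x
                elif out < t and a[i] < 0:         # committee too low: promote -1 voters
                    W[i] += eta * x
                elif out == t and 0 <= a[i] < gamma:    # correct but margin too small:
                    W[i] += eta * mu * x                # push activation away from 0
                elif out == t and -gamma < a[i] < 0:
                    W[i] -= eta * mu * x
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep unit-norm rows
        return W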