Conference Paper

Discriminant Parallel Perceptrons.

DOI: 10.1007/11550907_3 Conference: Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, 15th International Conference, Warsaw, Poland, September 11-15, 2005, Proceedings, Part II
Source: DBLP

ABSTRACT Parallel perceptrons (PPs), a novel approach to committee machine training that requires minimal communication between outputs and hidden units, allow the construction of efficient and stable nonlinear classifiers. In this work we shall explore how to improve their performance by allowing their output weights to take real values, computed by applying Fisher's linear discriminant analysis to the committee machine's perceptron outputs. We shall see that the final performance of the resulting classifiers is comparable to that of multilayer perceptrons, which are more complex and costlier to train.
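
To make the construction concrete, the following is a minimal Python/NumPy sketch of the idea described in the abstract: the committee's perceptrons produce ±1 outputs, and the real-valued output weights are obtained from Fisher's linear discriminant computed on those outputs. This is an illustration under stated assumptions, not the paper's implementation; all function and variable names are our own, and the sketch assumes a two-class problem with already-trained perceptron weight vectors W.

```python
import numpy as np

def perceptron_outputs(W, X):
    """+/-1 outputs of the committee's perceptrons (threshold activations)."""
    return np.where(X @ W.T >= 0.0, 1.0, -1.0)

def fisher_output_weights(H, y):
    """Fisher discriminant direction over the hidden outputs H, labels y in {0, 1}."""
    H0, H1 = H[y == 0], H[y == 1]
    m0, m1 = H0.mean(axis=0), H1.mean(axis=0)
    # within-class scatter matrix; a small ridge keeps the solve well conditioned
    Sw = (len(H0) - 1) * np.cov(H0, rowvar=False) \
       + (len(H1) - 1) * np.cov(H1, rowvar=False) \
       + 1e-6 * np.eye(H.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * w @ (m0 + m1)   # threshold midway between the projected class means
    return w, b

def classify(W, w, b, X):
    """Discriminant PP decision: Fisher projection of the perceptron outputs."""
    return (perceptron_outputs(W, X) @ w + b >= 0.0).astype(int)
```

Replacing the usual majority vote by this Fisher projection of the hidden outputs is what distinguishes the discriminant PP from the standard PP scheme sketched at the end of this page.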

Full-text available from: José R. Dorronsoro, Jun 05, 2015
  • ABSTRACT: A common feature in many hard pattern recognition problems is the fact that the object of interest is statistically overwhelmed by others. The overall aim of the "Learning, Evolution and Extreme Statistics" (AE3 being its Spanish acronym) project is to study those problems in the following concrete areas: 1. Natural image statistics and applications. 2. New classification techniques in extreme sample problems. 3. Evolutionary machine learning. 4. Machine learning and evolutionary computing in finance. AE3 is a coordinated project between a research group at the Instituto de Ingeniería del Conocimiento (IIC) and another at the Escuela Politécnica Superior (EPS), both in the Universidad Autónoma de Madrid (UAM).
  • ABSTRACT: Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, and a majority voting decision scheme), which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm which, following the ideas of statistical learning theory used by the support vector machine (SVM), raises its generalization ability by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates (without iterations) the weights using the training patterns and their desired outputs, without any search or numeric function optimization. The calculated weights globally minimize an error function which simultaneously takes into account the training error and the classification margin. Given their analytical and noniterative nature, DPPs are computationally much more efficient than other related approaches (P-Delta and SVM), and their computational complexity is linear in the input dimensionality. Therefore, DPPs are very appealing in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches, but they also allow online learning and, as opposed to most of them, have no tunable parameters.
    IEEE Transactions on Neural Networks, 12/2011; DOI: 10.1109/TNN.2011.2169086
  • ABSTRACT: The Parallel Perceptron (PP) is a simple neural network which has been shown to be a universal approximator, and it can be trained using the Parallel Delta (P-Delta) rule. This rule tries to maximize the distance between the perceptron activations and their decision hyperplanes in order to increase the PP's generalization ability, following the principles of statistical learning theory. In this paper we propose a closed-form analytical expression to calculate, without iterations, the PP weights for classification tasks. The calculated weights globally optimize a cost function which simultaneously takes into account the training error and the perceptron margin, similarly to the P-Delta rule. Our approach, called Direct Parallel Perceptron (DPP), has a linear computational complexity in the number of inputs, which is very interesting for high-dimensional problems. DPP is competitive with SVM and other approaches (including P-Delta) for two-class classification problems but, as opposed to most of them, the tunable parameters of DPP do not influence the results very much. Moreover, the absence of an iterative training stage gives DPP the ability to learn online. (A minimal sketch of the PP decision scheme described here appears after this list.)
    The 2010 International Joint Conference on Neural Networks (IJCNN); 08/2010
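
As referenced above, here is a hedged Python sketch of the basic parallel perceptron decision scheme shared by the items in this list: a single layer of perceptrons with threshold activations and binary outputs, combined by a majority vote. The class and variable names are our own illustration; the P-Delta and DPP weight computations themselves are not reproduced here.

```python
import numpy as np

class ParallelPerceptron:
    """Committee of perceptrons with threshold activations and a majority vote."""

    def __init__(self, W):
        self.W = np.asarray(W)              # one weight vector per perceptron

    def hidden(self, X):
        # each perceptron casts a +1/-1 vote via its threshold activation
        return np.where(X @ self.W.T >= 0.0, 1.0, -1.0)

    def predict(self, X):
        # the majority vote of the committee decides the class (two-class case)
        return (self.hidden(X).sum(axis=1) >= 0.0).astype(int)
```

The discriminant PP of the main paper keeps the same hidden layer but replaces this majority vote by real-valued output weights computed with Fisher's linear discriminant, as sketched after the abstract above.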