Conference Paper

Discriminant Parallel Perceptrons.

DOI: 10.1007/11550907_3 Conference: Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, 15th International Conference, Warsaw, Poland, September 11-15, 2005, Proceedings, Part II
Source: DBLP

ABSTRACT: Parallel perceptrons (PPs), a novel approach to committee machine training that requires minimal communication between outputs and hidden units, allow the construction of efficient and stable nonlinear classifiers. In this work we shall explore how to improve their performance by allowing their output weights to take real values, computed by applying Fisher's linear discriminant analysis to the committee machine's perceptron outputs. We shall see that the final performance of the resulting classifiers is comparable to that of the more complex and costlier-to-train multilayer perceptrons.
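The scheme described in the abstract can be illustrated with a minimal sketch: treat the committee's binary perceptron outputs as features and let Fisher's linear discriminant supply real-valued output weights in place of a majority vote. This is a toy illustration under assumed details (random fixed hidden perceptrons, synthetic data, a small ridge term for numerical stability), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: 200 samples, 5 features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A committee of threshold perceptrons. The hidden weights are random
# and fixed here purely for illustration; in practice they would be
# trained (e.g. with the P-Delta rule).
n_units = 7
W = rng.normal(size=(n_units, 5))
H = (X @ W.T > 0).astype(float)   # binary hidden outputs, shape (200, n_units)

# Fisher's linear discriminant on the hidden outputs:
# w ∝ S_w^{-1} (m1 - m0), with S_w the within-class scatter.
m0, m1 = H[y == 0].mean(axis=0), H[y == 1].mean(axis=0)
Sw = np.cov(H[y == 0], rowvar=False) + np.cov(H[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(n_units), m1 - m0)  # small ridge term
b = -0.5 * w @ (m0 + m1)          # threshold halfway between projected class means

pred = (H @ w + b > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```

The real-valued weights `w` replace the uniform ±1 votes of a plain majority-vote committee, which is the change the paper investigates.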

  • ABSTRACT: Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, and a majority voting decision scheme), which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm which, following the ideas of statistical learning theory used by the support vector machine (SVM), raises their generalization ability by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates the weights (without iterations) using the training patterns and their desired outputs, without any search or numeric function optimization. The calculated weights globally minimize an error function which simultaneously takes into account the training error and the classification margin. Given their analytical and noniterative nature, DPPs are computationally much more efficient than other related approaches (P-Delta and SVM), and their computational complexity is linear in the input dimensionality. Therefore, DPPs are very appealing in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches, but they also allow online learning and, as opposed to most of them, have no tunable parameters.
    IEEE Transactions on Neural Networks 12/2011; · 2.95 Impact Factor
  • ABSTRACT: Rectangular Basis Function Networks (RecBFN) derive from RBF networks and are composed of a set of fuzzy points which describe the network. In this paper, a set of characteristics of the RecBF is proposed for use with imbalanced datasets, especially the order of the training patterns. We demonstrate that this order is an important factor in improving the generalization of the solution, which is the main problem with imbalanced datasets. Finally, this solution is compared with other important methods for imbalanced datasets, showing that our method works well with this type of dataset and that an understandable set of rules can be extracted.
    Artificial Neural Networks - ICANN 2007, 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I; 01/2007
  • ABSTRACT: A common feature of many hard pattern recognition problems is that the object of interest is statistically overwhelmed by others. The overall aim of the "Learning, Evolution and Extreme Statistics" (AE3, from its Spanish acronym) project is to study such problems in the following concrete areas: 1. Natural image statistics and applications. 2. New classification techniques in extreme sample problems. 3. Evolutionary machine learning. 4. Machine learning and evolutionary computing in finance. AE3 is a coordinated project between a research group at the Instituto de Ingeniería del Conocimiento (IIC) and another at the Escuela Politécnica Superior (EPS), both at the Universidad Autónoma de Madrid (UAM).
