Conference Paper
Discriminant Parallel Perceptrons.
DOI: 10.1007/11550907_3 Conference: Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, 15th International Conference, Warsaw, Poland, September 11-15, 2005, Proceedings, Part II
Source: DBLP

Conference Paper: A Parallel Perceptron network for classification with direct calculation of the weights optimizing error and margin
ABSTRACT: The Parallel Perceptron (PP) is a simple neural network which has been shown to be a universal approximator, and it can be trained using the Parallel Delta (P-Delta) rule. This rule tries to maximize the distance between the perceptron activations and their decision hyperplanes in order to increase the network's generalization ability, following the principles of Statistical Learning Theory. In this paper we propose a closed-form analytical expression to calculate, without iterations, the PP weights for classification tasks. The calculated weights globally optimize a cost function which simultaneously takes into account the training error and the perceptron margin, similarly to the P-Delta rule. Our approach, called Direct Parallel Perceptron (DPP), has a linear computational complexity in the number of inputs, which makes it very interesting for high-dimensional problems. DPP is competitive with SVM and other approaches (including P-Delta) for two-class classification problems but, as opposed to most of them, the tunable parameters of DPP do not influence the results very much. Besides, the absence of an iterative training stage gives DPP the ability of online learning.
Neural Networks (IJCNN), The 2010 International Joint Conference on; 08/2010
ABSTRACT: Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, and a majority voting decision scheme), which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm which, following the ideas of statistical learning theory used by the support vector machine (SVM), raises its generalization ability by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates (without iterations) the weights using the training patterns and their desired outputs, without any search or numeric function optimization. The calculated weights globally minimize an error function which simultaneously takes into account the training error and the classification margin. Given its analytical and non-iterative nature, DPPs are computationally much more efficient than other related approaches (P-Delta and SVM), and their computational complexity is linear in the input dimensionality. Therefore, DPPs are very appealing in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches, but they also allow online learning and, as opposed to most of them, have no tunable parameters.
IEEE Transactions on Neural Networks, 12/2011; Impact Factor 2.95
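The decision scheme described in the abstract above (a single layer of threshold perceptrons combined by majority vote) can be sketched as follows. This is a minimal illustrative example, not the DPP closed-form weight calculation; the weight values are arbitrary placeholders.

```python
import numpy as np

def pp_predict(W, x):
    """Majority vote of a layer of threshold perceptrons.

    W : (n_perceptrons, dim) weight matrix, one row per perceptron
    x : (dim,) input pattern
    Returns +1 or -1 (the committee's class decision).
    """
    activations = W @ x                         # per-perceptron pre-activations
    votes = np.where(activations >= 0, 1, -1)   # threshold (binary) outputs
    return 1 if votes.sum() >= 0 else -1        # majority voting decision

# Toy committee of 3 perceptrons in 2-D (illustrative weights only).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(pp_predict(W, np.array([2.0, -0.5])))     # two of three perceptrons vote +1
```

In the papers listed here, the rows of `W` would be obtained either iteratively (P-Delta) or analytically (DPP); only the voting scheme is fixed by the architecture.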
Conference Paper: Rectangular Basis Functions Applied to Imbalanced Datasets.
ABSTRACT: Rectangular Basis Function Networks (RecBFN) derive from RBF networks and are composed of a set of Fuzzy Points which describe the network. In this paper, a set of characteristics of the RecBF is proposed for use with imbalanced datasets, especially the order of the training patterns. We demonstrate that this ordering is an important factor in improving the generalization of the solution, which is the main problem in imbalanced datasets. Finally, this solution is compared with other important methods for imbalanced datasets, showing that our method works well with this type of dataset and that an understandable set of rules can be extracted.
Artificial Neural Networks - ICANN 2007, 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I; 01/2007