Fig 1 - uploaded by Christoph M Friedrich
Six steps to develop a 2-2-1 network with mixed activation functions


Source publication
Conference Paper
Full-text available
An Evolutionary Algorithm is used to optimize the architecture and activation functions of Artificial Neural Networks (ANNs). It will be shown that it is possible, with the help of a graph-database and Genetic Engineering, to find modular structures for these networks. Some new graph-rewritings are used to construct families of architectures from...

Citations

... Attempts to use genetic programming to evolve the architecture and the weights simultaneously [4,5] have been marred by the fact that parse trees are not suitable for representing oriented graphs. Alternatively, GP has been used to evolve rules for constructing neural networks [6,7,8]. However, this approach imposes constraints on the weights of the neural network. ...
... Columns 3, 4, 5 and 6 show the minimum, average, maximum and standard deviation for the number of neurons of the networks evolved, respectively. Columns 7, 8, 9 and 10 give the minimum, average, maximum and standard deviation for the number of connections of the networks evolved, respectively. Column 11 shows the minimal computational effort, i.e. the minimal number of fitness evaluations necessary to obtain a solution with 99% probability in repeated runs [4]. ...
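The "minimal computational effort" referenced here is Koza's measure: the smallest number of individuals that must be processed to find a solution with a given probability, minimized over generations. A minimal sketch, assuming cumulative success probabilities estimated from repeated runs (the function name and interface are illustrative, not from the paper):

```python
import math

def computational_effort(p_success, pop_size, gens, z=0.99):
    """Koza-style minimal computational effort.

    p_success[i] -- cumulative probability (from repeated runs) that a
                    run has found a solution by generation i.
    Returns the minimum over generations of pop_size * (i+1) * R(z, i),
    where R is the number of independent runs needed to reach
    success probability z.
    """
    best = None
    for i in range(gens + 1):
        p = p_success[i]
        if p <= 0.0:
            continue  # no run has succeeded yet at this generation
        if p >= 1.0:
            runs = 1
        else:
            runs = math.ceil(math.log(1 - z) / math.log(1 - p))
        effort = pop_size * (i + 1) * runs
        if best is None or effort < best:
            best = effort
    return best

# Example: population of 50, success probabilities over 4 generations.
effort = computational_effort([0.0, 0.2, 0.5, 0.9], 50, 3)
```

With these numbers the minimum is reached at the last generation (p = 0.9 needs only 2 runs for 99% confidence), giving 50 * 4 * 2 = 400 evaluations.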
Article
Full-text available
The application of genetic programming to the evolution of neural networks has been hindered by the inadequacy of parse trees to represent oriented graphs, and by the lack of a good mechanism for encoding the weights. In this work, a hybrid method is introduced, where genetic programming evolves a mapping function to adapt the weights, whereas a genetic algorithm-based approach evolves the architecture. Results on the application of the new method to the evolution of feedforward and recurrent neural networks are reported.

1 Introduction

The training of artificial neural networks for a particular task can be seen as a mapping of the initial set of random weights into a new set of adapted weights which solves the problem. That means, the process of training defines a function to map the initial set of random weights into the correct ones. For example, the backpropagation training algorithm [1] is an attempt to construct such a mapping function iteratively. All training algorithms ...
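The abstract's view of training as a function mapping initial weights to adapted weights can be illustrated with a toy example: gradient descent on a single linear neuron, applied to two different starting points, maps both near the same solution. All names and data here are illustrative, not from the paper:

```python
import numpy as np

def train_map(w0, X, y, lr=0.5, steps=200):
    """Gradient descent on a linear neuron with squared error,
    viewed as a function mapping the initial weights w0 to the
    adapted weights that solve the task."""
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Two different starting points are mapped near the same
# least-squares solution: the training algorithm IS the mapping.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
wa = train_map([0.0, 0.0], X, y)
wb = train_map([5.0, -5.0], X, y)
```

For this data the least-squares solution is w = (1, 2), and both runs converge to it.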
... Figure 1 shows the principal working structure of this algorithm. For detailed information about the algorithm used, the papers [4,5] are suggested. ...
... The Level-0 classifiers are ANNs created by an Evolutionary Algorithm that optimizes the architecture of ANNs. During the evolutionary cycle some of them are evaluated several times for different initial weight settings and stored in the graph-database used (see [4,5] for details). The Level-1 classifier combines the outputs of some selected Level-0 classifiers with the same architecture and feeds the outputs to the Level-2 classifier. ...
... The first test investigated whether Ensembles of the same network architecture, trained with the same training patterns but with different weight initializations, are better than the best member of the Ensemble. The graph-database obtained in the experiments undertaken in [4,5] was used, and Ensembles were created by exhaustive search from about 500 networks of this database. In all cases the best Ensemble showed better or equal performance than the mean of all networks, and even outperformed the best network. ...
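The exhaustive ensemble search described above can be sketched as follows. This is a minimal stand-in, assuming an averaging combiner with a 0.5 decision threshold (the paper's actual combination and selection methods may differ), searched over all subsets up to a fixed size:

```python
from itertools import combinations

import numpy as np

def best_ensemble(outputs, targets, max_size=3):
    """Exhaustive search for the best averaging ensemble.

    outputs -- (n_networks, n_patterns) array: outputs of networks
               with the same architecture but different weight inits.
    targets -- (n_patterns,) array of 0/1 class labels.
    Returns (indices, accuracy) of the best ensemble found.
    """
    n = len(outputs)
    best_idx, best_acc = None, -1.0
    for size in range(1, max_size + 1):
        for idx in combinations(range(n), size):
            combined = np.mean(outputs[list(idx)], axis=0)
            acc = np.mean((combined > 0.5) == targets)
            if acc > best_acc:
                best_idx, best_acc = idx, acc
    return best_idx, best_acc

# Three imperfect networks; averaging the first two classifies
# every pattern correctly even though neither does alone.
outputs = np.array([[0.9, 0.6, 0.9, 0.1],
                    [0.4, 0.1, 0.9, 0.2],
                    [0.6, 0.9, 0.2, 0.1]])
targets = np.array([1, 0, 1, 0])
idx, acc = best_ensemble(outputs, targets)
```

Note the search is exponential in the ensemble size; with ~500 stored networks, as in the excerpt, limiting the subset size is what keeps it feasible.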
Article
Full-text available
Recent developments have shown the usefulness of combinations of classifiers. This paper presents a 3-Level Stacked Generalizer consisting of evolutionarily created Artificial Neural Networks with different architectures and weight settings. Several combination and new selection methods are discussed and their usefulness is shown in practical examples. It can be shown that it is more efficient to use information obtained from the complete population rather than taking the best individual.

Keywords: Ensembles, Evolutionary Algorithms, Artificial Neural Networks, Stacked Generalization

1 Introduction

One way to improve the quality of a classifier is to optimize the underlying algorithm or to search for new algorithms. A simpler approach is to combine the results of several different classifiers by a combination algorithm. Many scientists have presented methods to obtain useful Ensembles of classifiers. The main objective is to create Ensembles of highly accurate members, which ar...
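The stacking idea in this abstract can be sketched minimally: Level-0 classifiers produce outputs, and a higher-level combiner is trained on those outputs rather than on the raw inputs. Here a least-squares combiner and synthetic data stand in for the paper's evolved ANNs and its Level-1/Level-2 networks; everything below is an assumption for illustration:

```python
import numpy as np

# Level-0: outputs of 5 classifiers on 100 patterns (synthetic
# stand-ins: noisy copies of the targets, clipped to [0, 1]).
rng = np.random.default_rng(0)
targets = rng.integers(0, 2, size=100).astype(float)
level0 = np.clip(targets[None, :] + rng.normal(0, 0.3, (5, 100)), 0, 1)

# Higher level: learn combination weights by least squares on the
# Level-0 outputs (a simple stand-in for a combiner network).
X = level0.T                                   # (patterns, classifiers)
w, *_ = np.linalg.lstsq(X, targets, rcond=None)
stacked = (X @ w > 0.5).astype(float)
accuracy = np.mean(stacked == targets)
```

In a proper setup the combiner would be fitted on held-out predictions, not on the same patterns the Level-0 classifiers were trained on; that detail is omitted here for brevity.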
Article
Full-text available
This chapter is an introduction to evolutionary algorithms, a commonly used method for solving problems that might otherwise be intractable. One such method is the genetic algorithm. Sometimes, evolutionary algorithms are based on what are called heuristics, or rules of thumb: guidelines for solutions that work. There are no mathematical proofs of their effectiveness; they just work well. Consequently, methods incorporating heuristics are deemed to be weak. The word is unfortunate, for it conveys a sense of inaccuracy or approximation, but such methods are, in fact, responsible for some excellent solutions. These weaker methods use less domain knowledge and are not oriented toward specific targets. In law enforcement analytics, the existence of such methods has been shown to be very advantageous. © 2013 Springer Science+Business Media Dordrecht. All rights reserved.
Article
This paper describes an evolutionary algorithm, GenD, conceived by Buscema in 1998 at the Centro Ricerche di Scienze della Comunicazione-Semeion in Rome, where it is still successfully used and has been further developed. Unlike classic genetic algorithms, the GenD system maintains an inner instability during evolution, presenting a continuous evolution of the evolution and a natural increase in biodiversity during the progress of the algorithm. The theory which leads to defining the GenD system is outlined. Specific characteristics of GenD, such as the definition of a species-health-aware evolutionary law, the use of genetic operators and the adoption of a structured organization of individuals (tribes), are described. In order to measure GenD's capabilities, we also investigated different problems, such as the travelling salesperson problem, which belongs to the class of NP-complete problems. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This document presents a review and critique of research on cellular encoding. It begins with a review of the work conducted by Gruau in his PhD thesis, and then describes a cleaned-up version of cellular encoding by Gruau as well as the work of Friedrich & Moraga, Luke & Spector and Kodjabachian & Meyer on variations to cellular encoding. Note: This document was written primarily to formalise my ideas about cellular encoding and is not fully polished.

1. Cellular Encoding by Gruau
1.1 Basics of Cellular Encoding
1.1.1 A Cellular Instance
1.1.2 Basics of Program Symbols
1.1.2 Sample Graph Development
1.1.3 Discussion of Program Symbols
1.2 Properties of Cellular Encoding Representation
1.2.1 Basic Properties
1.2.2 Modularity
1.2.3 Criticism ...