Article

On synchronized evolution of the network of automata


Abstract

One of the tasks in machine learning is to build a device that predicts each next input symbol of a sequence as it takes one input symbol from the sequence. We studied new approaches to this task. We suggest that deterministic finite automata (DFAs) are good building blocks for this device, together with genetic algorithms (GAs), which let these automata "evolve" to predict each next input symbol of the sequence. Moreover, we study how to combine these highly fit automata so that a network of them compensates for each other's weaknesses and predicts better than any single automaton. We studied the simplest approach to combining automata: building trees of automata with special-purpose automata, which may be called switchboards. These switchboard automata are located at the internal nodes of the tree, take an input symbol from the input sequence just as other automata do, and predict which subtree will make a correct prediction on each next input symbol. GAs again play a crucial role in searching for switchboard automata. We studied various ways of growing trees of automata and tested them on sample input sequences, mainly note pitches, note durations, and up/down notes of Bach's Fugue IX. The test results show that DFAs together with GAs seem to be very effective for this type of pattern-learning task.
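The setup the abstract describes can be sketched in a few lines: a candidate DFA carries a transition table and a prediction table, fitness is the fraction of next symbols predicted correctly, and a GA evolves a population of such automata. The alphabet, state count, population size, and truncation-selection loop below are all illustrative assumptions, not the authors' exact parameters:

```python
import random

random.seed(0)

ALPHABET = [0, 1]     # toy binary alphabet (assumption)
N_STATES = 4          # DFA size (assumption)

def random_dfa():
    """A candidate is a pair of tables indexed by (state, input symbol):
    which state to go to next, and which symbol to predict next."""
    trans = {(s, a): random.randrange(N_STATES)
             for s in range(N_STATES) for a in ALPHABET}
    pred = {(s, a): random.choice(ALPHABET)
            for s in range(N_STATES) for a in ALPHABET}
    return trans, pred

def fitness(dfa, seq):
    """Fraction of next symbols predicted correctly while scanning seq."""
    trans, pred = dfa
    state, correct = 0, 0
    for cur, nxt in zip(seq, seq[1:]):
        correct += (pred[(state, cur)] == nxt)
        state = trans[(state, cur)]
    return correct / (len(seq) - 1)

def mutate(dfa):
    """Change one random table entry (transition or prediction)."""
    trans, pred = dict(dfa[0]), dict(dfa[1])
    key = random.choice(list(trans))
    if random.random() < 0.5:
        trans[key] = random.randrange(N_STATES)
    else:
        pred[key] = random.choice(ALPHABET)
    return trans, pred

def evolve(seq, pop_size=30, generations=200):
    """Truncation-selection GA: keep the fitter half, refill with mutants."""
    pop = [random_dfa() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda d: fitness(d, seq), reverse=True)
        half = pop[:pop_size // 2]
        pop = half + [mutate(random.choice(half)) for _ in half]
    return max(pop, key=lambda d: fitness(d, seq))

seq = [0, 1] * 50                  # a trivially periodic stand-in sequence
best = evolve(seq)
```

On a sequence this regular the evolved predictor should reach near-perfect fitness; the paper's musical sequences are of course far less regular, and its switchboard trees combine many such automata.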


... It is necessary to understand the significance of modeling switching mechanisms as a control device for any electronic system. In 2002, according to Inagaki [10], genetic algorithms (GAs), an evolutionary computation method, were used to generate more complex deterministic finite automata (DFAs) through the use of a switching device that makes correct predictions on the next input symbol. Within the context of a design pattern, Ramnath and Dathan [33] studied switchboard behavior, which is similar to a mediator in a finite state machine (FSM), and also highlighted that FSM events allow anyone to design and modify the two subsystems independently. ...
Article
Full-text available
A finite switchboard state machine is a specialized finite state machine, built by combining the concepts of switching state machines and commutative state machines. The main purpose of this paper is to give a specific algorithm for the fuzzy finite switchboard state machine and to investigate the concepts of switching relations, coverings, restricted cascade products, and wreath products of fuzzy finite switchboard state machines. More precisely, we show that the direct product/Cartesian composition of two such fuzzy finite switchboard state machines is again a fuzzy finite switchboard state machine. In addition, we introduce the perfect switchboard machine and establish its Cartesian composition. The relations among the products are also examined. Finally, we introduce the asynchronous fuzzy finite switchboard state machine and study the switching homomorphic image of asynchronous fuzzy finite switchboard state machines. We illustrate the definition of a restricted product of fuzzy finite switchboard state machines with a single-pattern example.
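For intuition about the direct-product construction this abstract studies, here is the crisp (non-fuzzy) version: the product machine's states are pairs, and one input symbol drives both components in lockstep. The toy state sets and transition tables are made-up data; the fuzzy version would additionally carry membership values on transitions:

```python
from itertools import product

def direct_product(delta1, states1, delta2, states2, alphabet):
    """Transition table of the direct product of two deterministic machines:
    states are pairs (q1, q2); each input symbol advances both components."""
    return {((q1, q2), a): (delta1[(q1, a)], delta2[(q2, a)])
            for q1, q2 in product(states1, states2)
            for a in alphabet}

# Toy components: a mod-2 counter and a mod-3 counter over the symbol 'a'.
delta1 = {(q, 'a'): (q + 1) % 2 for q in range(2)}
delta2 = {(q, 'a'): (q + 1) % 3 for q in range(3)}
prod = direct_product(delta1, range(2), delta2, range(3), ['a'])
```

The product has 2 × 3 = 6 states; the paper's result is that the analogous fuzzy construction preserves the switchboard property.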
... Early work [9] evolved finite state machines to predict symbol sequences. Others have extended these techniques to build modular systems that incorporate independent FSMs to solve maze and grid exploration problems [5] or to predict note sequences in musical compositions [18]. In software engineering, genetic algorithms have been applied to fixing software bugs [10] and software optimization [6]. ...
Conference Paper
Full-text available
In this paper, we investigate an approach to program synthesis that is based on crowd-sourcing. With the help of crowd-sourcing, we aim to capture the "wisdom of the crowds" to find good if not perfect solutions to inherently tricky programming tasks, which elude even expert developers and lack an easy-to-formalize specification. We propose an approach we call program boosting, which involves crowd-sourcing imperfect solutions to a difficult programming problem from developers and then blending these programs together in a way that improves their correctness. We implement this approach in a system called CROWDBOOST and show in our experiments that interesting and highly non-trivial tasks such as writing regular expressions for URLs or email addresses can be effectively crowd-sourced. We demonstrate that carefully blending the crowd-sourced results together consistently produces a boost, yielding results that are better than any of the starting programs. Our experiments on 465 program pairs show consistent boosts in accuracy and demonstrate that program boosting can be performed at a relatively modest monetary cost.
... Each chromosome contains thousands of genes, which are functional blocks of DNA; each gene belonging to homologous chromosomes can have various expressions, called alleles, which correspond to specific values of the gene. [121] The genome likewise denotes the entire genetic structure. In the present case the chromosome contains a string of bits over a binary alphabet, and an allele is encoded as 0 or 1. Each chromosome can be regarded as a point in the search space of possible solutions of the general decomposition problem. ...
... Costelloe tried to reduce the human burden in [69]. Algorithmic approaches are also possible [64,162]. ...
Article
Full-text available
1 This paper introduces genetic programming (GP) – a set of evolutionary computation techniques for getting computers to automatically solve problems without having to tell them explicitly how to do it. Since its inception, GP has been used to solve many practical problems, producing a number of human competitive results and even patentable new inventions. We start with a gentle introduction to the basic representation, initialisation and operators used in GP, complemented by a step by step description of their use for the solution of an illustrative problem. We then progress to discuss a variety of alternative representations for programs and more advance specialisations of GP. A multiplicity of real-world applications of GP are then presented to illustrate the scope of the technique. For the benefits of more advanced readers, this is followed by a series of recommendations and suggestions to obtain the most from a GP system. Although the paper has been written with beginners and practitioners in mind, for completeness we also provide an overview of the theoretical
... Costelloe tried to reduce the human burden in [69]. Algorithmic approaches are also possible [64,162]. ...
Chapter
Full-text available
The goal of having computers automatically solve problems is central to artificial intelligence, machine learning, and the broad area encompassed by what Turing called ‘machine intelligence’ [384]. Machine learning pioneer Arthur Samuel, in his 1983 talk entitled ‘AI: Where It Has Been and Where It Is Going’ [337], stated that the main goal of the fields of machine learning and artificial intelligence is: “to get machines to exhibit behavior, which if done by humans, would be assumed to involve the use of intelligence.”
... Specifically, a change between two states is represented as a transition that may have a trigger (or event) that causes the transition to occur, a guard that must be met for the transition to take place, and/or one or more actions. Although numerous techniques for generating finite state machines (FSMs) have been proposed within software engineering (e.g., [6, 9, 10, 14, 25, 26, 27, 28] ) and evolutionary computation (e.g., [2, 4, 8, 15, 16, 24]), a more limited number [6, 9, 10, 14, 25, 26, 27, 28] address the generation of state diagrams that represent the behavior of software. These techniques synthesize one or more state diagrams from properties (e.g., [9, 10, 27]), scenarios (e.g., [6, 14, 26, 28]), or both (e.g., [25] ). ...
Conference Paper
Full-text available
Increasingly, high-assurance applications rely on autonomic systems to respond to changes in their environment. The inherent uncertainty present in the environment of autonomic systems makes it difficult for developers to identify and model resilient autonomic behavior prior to deployment. In this paper, we propose Avida-MDE, a digital evolution approach to the generation of behavioral models (i.e., a set of interacting finite state machines) that capture autonomic system behavior that is potentially resilient to a variety of environmental conditions. We use an evolving population of digital organisms to generate behavioral models, where the organisms are subjected to natural selection and are rewarded for generating behavioral models that meet developer requirements. To illustrate this approach, we successfully applied it to the generation of behavioral models describing the navigation behavior of an autonomous robot.
... Inagaki [13], who evolved trees of deterministic finite automata for predicting symbol sequences. ...
Article
Full-text available
Finite-state transducers (FSTs) are finite-state machines (FSMs) that map strings in a source domain into strings in a target domain. While there are many reports in the literature of evolving FSMs, there has been much less work on evolving FSTs. In particular, the fitness functions required for evolving FSTs are generally different from those used for FSMs. In this paper, three string distance-based fitness functions are evaluated, in order of increasing computational complexity: string equality, Hamming distance, and edit distance. The fitness-distance correlation (FDC) and evolutionary performance of each fitness function is analyzed when used within a random mutation hill-climber (RMHC). Edit distance has the strongest FDC and also provides the best evolutionary performance, in that it is more likely to find the target FST within a given number of fitness function evaluations. Edit distance is also the most expensive to compute, but in most cases this extra computation is more than justified by its performance. The RMHC was compared with the best known heuristic method for learning FSTs, the onward subsequential transducer inference algorithm (OSTIA). On noise-free data, the RMHC performs best on problems with sparse training sets and small target machines. The RMHC and OSTIA offer similar performance for large target machines and denser data sets. When noise-corrupted data is used for training, the RMHC still performs well, while OSTIA performs poorly given even small amounts of noise. The RMHC is also shown to outperform a genetic algorithm. Hence, for certain classes of FST induction problem, the RMHC presented in this paper offers the best performance of any known algorithm.
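The three string-distance fitness functions this abstract compares are easy to state concretely. In the sketch below, the function names and the distance-to-fitness transform 1/(1 + d) are my own choices rather than necessarily the paper's; the three distances themselves are the standard definitions:

```python
def equality_fitness(out, target):
    """String equality: all-or-nothing credit (cheapest, least informative)."""
    return 1.0 if out == target else 0.0

def hamming_fitness(out, target):
    """Hamming distance on the overlapping prefix, plus a penalty for any
    length mismatch; mapped to (0, 1] via 1/(1 + d)."""
    d = sum(a != b for a, b in zip(out, target)) + abs(len(out) - len(target))
    return 1.0 / (1.0 + d)

def edit_fitness(out, target):
    """Levenshtein edit distance via row-rolling dynamic programming,
    O(len(out) * len(target)) time (most expensive, strongest FDC)."""
    m, n = len(out), len(target)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (out[i - 1] != target[j - 1]))   # substitution
            prev = cur
    return 1.0 / (1.0 + dp[n])
```

Note how the gradations differ: equality gives a hill-climber no gradient at all, Hamming rewards per-position matches, and edit distance also credits insertions and deletions, which matches the paper's finding that it guides search best.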
Article
The construction of finite switchboard state automata is known to be an extension of finite automata in view of commutative and switching state machines. This research incorporates the idea of a switchboard into general fuzzy automata to introduce general fuzzy finite switchboard automata. The results reveal that a strongly connected general fuzzy finite switchboard automaton is equivalent to a retrievable general fuzzy automaton. Further, the notions of the switchboard subsystem and the strong switchboard subsystem of general fuzzy finite switchboard automata are examined. Finally, the concept of a fuzzy topology on general fuzzy finite switchboard automata is formulated in terms of these characterisations.
Article
A main research direction in the field of evolutionary machine learning is to develop a scalable classifier system to solve high-dimensional problems. Recently, work has begun on autonomously reusing learnt building blocks of knowledge to scale from low-dimensional problems to high-dimensional ones. An XCS-based classifier system, known as XCSCFC, has been shown to be scalable, through the addition of expression-tree-like code fragments, to a limit beyond standard learning classifier systems. XCSCFC is especially beneficial if the target problem can be divided into a hierarchy of subproblems, each of which is solvable in a bottom-up fashion. However, if the hierarchy of subproblems is too deep, then XCSCFC becomes impractical, due to the computational time needed, and thus eventually hits a limit in problem size. A limitation of this technique is the lack of a cyclic representation, which is inherent in finite state machines (FSMs). However, the evolution of FSMs is a hard task due to the combinatorially large number of possible states, connections, and interactions. Usually this requires supervised learning to minimize inappropriate FSMs, which for high-dimensional problems necessitates subsampling and/or incremental testing. To avoid these constraints, this work introduces a state-machine-based encoding scheme into XCS for the first time, termed XCSSMA. The proposed system has been tested on six complex Boolean problem domains: multiplexer, majority-on, carry, even-parity, count-ones, and digital design verification problems. The proposed approach outperforms XCSCFA (an XCS that computes actions) and XCSF (an XCS that computes predictions) in three of the six problem domains, while the performance in the others is similar. In addition, XCSSMA evolved, for the first time, compact and human-readable general classifiers (i.e., solving any n-bit problem) for the even-parity and carry problem domains, demonstrating its ability to produce scalable solutions using a cyclic representation.
Article
Full-text available
This article illustrates an artificial developmental system that is a computationally efficient technique for the automatic generation of complex artificial neural networks (ANNs). The artificial developmental system can develop a graph grammar into a modular ANN made of a combination of simpler subnetworks. A genetic algorithm is used to evolve coded grammars that generate ANNs for controlling six-legged robot locomotion. A mechanism for the automatic definition of neural subnetworks is incorporated. Using this mechanism, the genetic algorithm can automatically decompose a problem into subproblems, generate a subANN for solving the subproblem, and instantiate copies of this subANN to build a higher-level ANN that solves the problem. We report some simulation results showing that the same problem cannot be solved if the mechanism for the automatic definition of subnetworks is suppressed. We support our argument with pictures that describe the steps of development, how ANN structures are evolved, and how the ANNs compute.
Article
Full-text available
This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory.
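Stacked generalization is concrete enough to sketch. Below, two assumed level-0 generalizers (a mean predictor and a least-squares line) produce leave-one-out guesses on the learning set, and a level-1 combiner learns a single blending weight from those held-out guesses. The specific models and the scalar-weight combiner are illustrative simplifications of the paper's general scheme:

```python
# Level-0 generalizers (illustrative choices, not the paper's).
def fit_mean(xs, ys):
    """Constant predictor: always guess the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Least-squares line through the training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return lambda x: a + b * x

def stack(xs, ys):
    """Stacked generalization, minimal form: collect leave-one-out guesses
    g1, g2 from the level-0 models, then fit a level-1 blending weight w
    minimising sum((w*g1 + (1-w)*g2 - y)**2) in closed form."""
    g1, g2, target = [], [], []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        g1.append(fit_mean(tx, ty)(xs[i]))
        g2.append(fit_linear(tx, ty)(xs[i]))
        target.append(ys[i])
    num = sum((a - b) * (y - b) for a, b, y in zip(g1, g2, target))
    den = sum((a - b) ** 2 for a, b in zip(g1, g2)) or 1e-12
    w = num / den
    m1, m2 = fit_mean(xs, ys), fit_linear(xs, ys)  # retrain on all data
    return lambda x: w * m1(x) + (1 - w) * m2(x)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.1, 5.9, 8.1]      # roughly y = 2x
model = stack(xs, ys)
```

On this near-linear data the held-out guesses tell the level-1 combiner that the linear model's biases are small, so almost all weight shifts to it; this is the sense in which stacking "deduces the biases of the generalizers" rather than crudely picking a winner.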
Conference Paper
Full-text available
One of the main problems in applying evolutionary optimisation methods is the choice of operators and parameter values. This paper proposes a competitive evolution method, in which several subpopulations are allowed to compete for computer time. The population with the fittest members, and the one with the highest improvement rate in the recent past, are rewarded. When using identical strategies in the subpopulations, this competitive strategy provides insurance against unlucky runs while extracting only an insignificant cost in terms of extra function evaluations. When using different strategies in the subpopulations, it ensures that the best strategies are used, and again the extra cost is not great. Competitive evolution is at its best when an operator, or the lack of it, may have a very detrimental effect which is not known in advance. Occasional mixing of the best-performing subpopulations leads to further improvement.
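The competition-for-computer-time idea can be sketched directly. In the version below, the subpopulation holding the fittest member and the one with the largest recent improvement breed twice as many offspring per round; the exact doubling rule and the OneMax demo problem are my own illustrative choices, not the paper's experimental setup:

```python
import random

def flip_one(bits, rng):
    """Mutation: flip a single random bit."""
    i = rng.randrange(len(bits))
    return bits[:i] + [bits[i] ^ 1] + bits[i + 1:]

def competitive_evolution(fitness, init, mutate, subpops=4, size=10,
                          rounds=50, rng=random):
    """Several subpopulations compete for evaluation budget: each round,
    the leader (best member) and the fastest recent improver are rewarded
    with extra offspring."""
    pops = [[init(rng) for _ in range(size)] for _ in range(subpops)]
    best = [max(fitness(x) for x in p) for p in pops]
    prev = best[:]
    for _ in range(rounds):
        leader = max(range(subpops), key=lambda i: best[i])
        improver = max(range(subpops), key=lambda i: best[i] - prev[i])
        prev = best[:]
        for i, p in enumerate(pops):
            budget = size * (2 if i in (leader, improver) else 1)
            parents = list(p)
            p.extend(mutate(rng.choice(parents), rng) for _ in range(budget))
            p.sort(key=fitness, reverse=True)
            del p[size:]                    # elitist truncation back to `size`
            best[i] = fitness(p[0])
    return max((x for p in pops for x in p), key=fitness)

# Demo on OneMax: maximise the number of 1-bits in a 10-bit chromosome.
random.seed(1)
winner = competitive_evolution(fitness=sum,
                               init=lambda rng: [rng.randint(0, 1)
                                                 for _ in range(10)],
                               mutate=flip_one)
```

Running several identical subpopulations this way is the paper's "insurance against unlucky runs": a stalled subpopulation simply stops winning extra budget.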
Conference Paper
Full-text available
Selection methods in Evolutionary Algorithms, including Genetic Algorithms, Evolution Strategies (ES), and Evolutionary Programming (EP), are compared by observing the rate of convergence on three idealised problems. The first considers selection only, the second introduces mutation as a source of variation, and the third also adds evaluation noise. Fitness-proportionate selection suffers from scaling problems: a number of techniques to reduce these are illustrated. The sampling errors caused by roulette-wheel and tournament selection are demonstrated. The EP selection model is shown to be almost equivalent to an ES model in one form, and surprisingly similar to fitness-proportionate selection in another. Generational models are shown to be remarkably immune to evaluation noise; models that retain parents are much less so.
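The two sampling schemes whose errors the paper demonstrates each fit in a few lines (a generic sketch, not the paper's experimental code):

```python
import random

def roulette(pop, fitnesses, rng=random):
    """Fitness-proportionate (roulette-wheel) selection. Its behaviour is
    sensitive to fitness scaling: adding a large constant to every fitness
    flattens the selection pressure."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for ind, f in zip(pop, fitnesses):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]                 # guard against floating-point round-off

def tournament(pop, fitnesses, k=2, rng=random):
    """Tournament selection: draw k distinct individuals at random, keep the
    fittest. Pressure is set by k and is unaffected by fitness scaling."""
    contenders = rng.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: fitnesses[i])]
```

With fitnesses 1, 2, 3, 4, roulette picks the best individual with probability 4/10 = 0.4, while binary tournament picks it with probability 0.5 regardless of the absolute fitness values; the fluctuation of the observed counts around those probabilities is exactly the sampling error the paper analyses.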
Article
Full-text available
In the typical genetic algorithm experiment, the fitness function is constructed to be independent of the contents of the population to provide a consistent objective measure. Such objectivity entails significant knowledge about the environment which suggests either the problem has previously been solved or other non-evolutionary techniques may be more efficient. Furthermore, for many complex tasks an independent fitness function is either impractical or impossible to provide. In this paper, we demonstrate that competitive fitness functions, i.e. fitness functions that are dependent on the constituents of the population, can provide a more robust training environment than independent fitness functions. We describe three differing methods for competitive fitness, and discuss their respective advantages. 1 INTRODUCTION Competitive learning is a long standing topic in machine learning (Samuel, 1959; Tesauro, 1992). Interest for using competition in machine learning tasks stems from a de...
Book
This book presents a unified view of evolutionary algorithms: the exciting new probabilistic search tools inspired by biological models that have immense potential as practical problem-solvers in a wide variety of settings, academic, commercial, and industrial. In this work, the author compares the three most prominent representatives of evolutionary algorithms: genetic algorithms, evolution strategies, and evolutionary programming. The algorithms are presented within a unified framework, thereby clarifying the similarities and differences of these methods. The author also presents new results regarding the role of mutation and selection in genetic algorithms, showing how mutation seems to be much more important for the performance of genetic algorithms than usually assumed. The interaction of selection and mutation, and the impact of the binary code are further topics of interest. Some of the theoretical results are also confirmed by performing an experiment in meta-evolution on a parallel computer. The meta-algorithm used in this experiment combines components from evolution strategies and genetic algorithms to yield a hybrid capable of handling mixed integer optimization problems. As a detailed description of the algorithms, with practical guidelines for usage and implementation, this work will interest a wide range of researchers in computer science and engineering disciplines, as well as graduate students in these fields.
Chapter
A randomized algorithm is one that makes random choices during its execution. The behavior of such an algorithm may thus be random even on a fixed input. The design and analysis of a randomized algorithm focus on establishing that it is likely to behave well on every input; the likelihood in such a statement depends only on the probabilistic choices made by the algorithm during execution and not on any assumptions about the input. It is especially important to distinguish a randomized algorithm from the average-case analysis of algorithms, where one analyzes an algorithm assuming that its input is drawn from a fixed probability distribution. With a randomized algorithm, in contrast, no assumption is made about the input.
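A standard concrete instance of the distinction drawn above is randomized quickselect: its expected running time is linear on every input because the randomness comes from the algorithm's pivot choice, not from any assumed input distribution. This is a textbook sketch, not code from the book:

```python
import random

def quickselect(xs, k, rng=random):
    """Return the k-th smallest element of xs (0-indexed). The random pivot
    gives expected O(n) time on EVERY input; no assumption is made about
    the order or distribution of xs."""
    xs = list(xs)
    while True:
        pivot = rng.choice(xs)
        lo = [x for x in xs if x < pivot]
        eq_count = xs.count(pivot)
        if k < len(lo):
            xs = lo                          # answer lies among the smaller
        elif k < len(lo) + eq_count:
            return pivot                     # answer is the pivot itself
        else:
            k -= len(lo) + eq_count
            xs = [x for x in xs if x > pivot]  # answer among the larger
```

An adversary who knows the code but not the coin flips cannot construct a worst-case input, which is precisely why this analysis differs from average-case analysis of a deterministic algorithm.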
Conference Paper
Techniques to investigate chaotic data require long noise-free series. Genetic programming allows fitting of arbitrary functions to short noisy datasets. Conventional genetic programming was used to fit Lisp S-expressions to a known chaotic series (the Mackey-Glass equation, discretized to a map) with added noise. Embedding was performed by including previous values in time in the terminal set. Prediction intervals were 20–1065 steps into the future, based upon near-minimal 35 'training' points from the series. The fittest S-expressions yielded useful structural information. Semilogarithmic plots of normalised root mean squared error of the fittest forecasts against the length of forecast showed two dominant slopes. Noise led to a small exponential increase in this error. Genetic programming appears useful, as it compares favourably with established techniques, is robust to noise, and easily avoids overfitting.
Book
Genetic algorithms are playing an increasingly important role in studies of complex adaptive systems, ranging from adaptive agents in economic theory to the use of machine learning techniques in the design of complex devices such as aircraft turbines and integrated circuits. Adaptation in Natural and Artificial Systems is the book that initiated this field of study, presenting the theoretical foundations and exploring applications. In its most familiar form, adaptation is a biological process, whereby organisms evolve by rearranging genetic material to survive in environments confronting them. In this now classic work, Holland presents a mathematical model that allows for the nonlinearity of such complex interactions. He demonstrates the model's universality by applying it to economics, physiological psychology, game theory, and artificial intelligence, and then outlines the way in which this approach modifies the traditional views of mathematical genetics. Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways. Along the way he accounts for major effects of coadaptation and coevolution: the emergence of building blocks, or schemata, that are recombined and passed on to succeeding generations to provide innovations and improvements. Bradford Books imprint.
Conference Paper
This paper employs ideas from genetics to study the evolution of strategies in games. In complex environments, individuals are not fully able to analyze the situation and calculate their optimal strategy. Instead they can be expected to adapt their strategy over time based upon what has been effective and what has not. The genetic algorithm is demonstrated in the context of a rich social setting, the environment formed by the strategies submitted to a prisoner’s dilemma computer tournament. The results of the evolutionary process show that the genetic algorithm has a remarkable ability to evolve sophisticated and effective strategies in a complex environment.
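The strategy representation used in such experiments can be made concrete: a memory-one strategy is a lookup table from the previous (own move, opponent move) pair to the next move, plus an opening move, and the payoff matrix is the standard prisoner's dilemma. This encoding is illustrative and in the spirit of Axelrod's experiment, not his exact genome (his strategies conditioned on the previous three rounds):

```python
# 'C' = cooperate, 'D' = defect; payoffs are (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
KEYS = [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]

def play(s1, s2, rounds=50):
    """Score two memory-one strategies against each other. A strategy is a
    dict mapping 'first' and each (my_last, their_last) pair to a move."""
    m1, m2 = s1['first'], s2['first']
    t1 = t2 = 0
    for _ in range(rounds):
        p1, p2 = PAYOFF[(m1, m2)]
        t1, t2 = t1 + p1, t2 + p2
        # Each player looks the situation up from its own point of view.
        m1, m2 = s1[(m1, m2)], s2[(m2, m1)]
    return t1, t2

# Two classic strategies expressed in this encoding.
tit_for_tat = {'first': 'C', ('C', 'C'): 'C', ('C', 'D'): 'D',
               ('D', 'C'): 'C', ('D', 'D'): 'D'}
always_defect = {'first': 'D', **{k: 'D' for k in KEYS}}
```

A GA in this setting would treat the five table entries as a five-symbol chromosome and use each strategy's round-robin tournament score as its fitness, exactly the "rich social setting" the abstract describes.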
Conference Paper
Evolutionary programming was proposed more than thirty-five years ago for generating artificial intelligence. The original experiments consisted of evolving populations of finite state machines (FSMs) for prediction, identification, and control. Since then, all of the studies with FSMs and evolutionary programming have been limited to the evolution of strictly non-modular FSMs. In this study, a modular FSM architecture is proposed and an evolutionary programming procedure for evolving such structures is presented. Preliminary results indicate that the proposed procedure is indeed capable of successfully evolving modular FSMs and that such modularity can result in a statistically significantly increased rate of optimization.
Article
This paper shows how the performance of a genetic programming system can be improved through the addition of mechanisms for nongenetic transmission of information between individuals (culture). Teller has previously shown how genetic programming systems can be enhanced through the addition of memory mechanisms for individual programs [Teller 1994]; in this paper we show how Teller's memory mechanism can be changed to allow for communication between individuals within and across generations. We show the effects of indexed memory and culture on the performance of a genetic programming system on a symbolic regression problem, on Koza's Lawnmower problem, and on Wumpus world agent problems. We show that culture can reduce the computational effort required to solve all of these problems. We conclude with a discussion of possible improvements to the technique. 1 Culture and Evolution In most evolutionary computation systems, individuals are assessed for fitness independe...
L. Choksy, The Kodály Context. Englewood Cliffs, NJ: Prentice-Hall, 1981.
L. Choksy, The Kodály Method I: Comprehensive Music Education, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1999.
R. Motwani and P. Raghavan, Randomized Algorithms. Cambridge, U.K.: Cambridge Univ. Press, 1995.