Article

Neural Morphogenesis, Synaptic Plasticity, and Evolution

Abstract

Morphology plays an important role in the computational properties of neural systems, affecting both their functionality and the way in which this functionality is developed during life. In computer-based models of neural networks, artificial evolution is often used as a method to explore the space of suitable morphologies. In this paper we critically review the most common methods used to evolve neural morphologies and argue that a more effective, and possibly biologically plausible, method consists of genetically encoding rules of synaptic plasticity along with rules of neural morphogenesis. Some preliminary experiments with autonomous robots are described in order to show the feasibility and advantages of the approach.
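The paper's central proposal, genetically encoding rules of synaptic plasticity, can be illustrated with a minimal sketch. The rule menu, formulas, and learning rate below are illustrative stand-ins, not the authors' actual encoding:

```python
def plastic_step(w, pre, post, rule, eta=0.1):
    """One synaptic update under a genetically selected rule.
    Weights are kept in [0, 1] so the plain Hebb rule saturates
    rather than diverging. Rule names are hypothetical."""
    if rule == "hebb":          # grow with correlated activity, saturating at 1
        dw = (1.0 - w) * pre * post
    elif rule == "decay":       # anti-Hebbian decay toward 0
        dw = -w * pre * post
    elif rule == "covariance":  # sign depends on deviation from mean activity
        dw = (pre - 0.5) * (post - 0.5)
    else:
        raise ValueError(rule)
    return min(1.0, max(0.0, w + eta * dw))
```

In such a scheme the genome stores, per synapse, a rule index and a rate rather than a weight; the weight itself emerges during the agent's lifetime.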


... We include work on online adaptation in the sections on synaptic plasticity, scaling/competition and neuronal excitability regulation. Finally, it should be noted that "adaptive network" is not synonymous with "plastic network": a network may be adaptive without having plastic synapses [11,18,25]. ...
... Direct encoding schemes employ a one-to-one mapping from elements in the genome to components in the phenotype, and include bit-string representations [3,4,10], vector-of-values representations [8,9,14,20,25,26,30], and graph-based schemes [21,27,29], where vertices in the graph correspond to neurons and edges to connections between them. Indirect encoding schemes include a cellular encoding/grammar tree system [13], a matrix rewriting system [11], a "neural map" scheme in which a vertex can become either a single neuron or a set of neurons, and an edge may describe a one-to-one or one-to-all connection depending on the vertex type [29], HyperNEAT [22] and Evolvable-Substrate HyperNEAT (ES-HyperNEAT) [23]. In HyperNEAT neurons exist in a geometric space, and a function, which is the genome, maps from the coordinates of a pair of neurons to the parameters of the synapse between them (including whether or not it exists). ...
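The HyperNEAT-style mapping just described can be sketched as follows. The function standing in for the evolved genome (a CPPN in HyperNEAT) is a fixed, invented composition of primitives here, purely for illustration, as is the expression threshold:

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved genome function; symmetric, smooth primitives
    # are typical CPPN building blocks, but this particular choice is made up.
    return math.sin(x1 * x2) * math.exp(-((y1 - y2) ** 2))

def substrate_weights(coords, threshold=0.2):
    """Query the genome for every ordered neuron pair; a synapse exists
    only where the returned magnitude exceeds the expression threshold."""
    weights = {}
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            if i == j:
                continue  # no self-connections in this sketch
            w = cppn(*a, *b)
            if abs(w) > threshold:
                weights[(i, j)] = w
    return weights
```

Because the genome is a function of coordinates rather than a list of weights, the same compact genome can be queried at any substrate resolution.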
Conference Paper
Full-text available
Recent years have seen a resurgence of interest in evolving plastic neural networks for online learning. These approaches have an intrinsic appeal: to date, the only working example of general intelligence is the human brain, which developed through evolution and exhibits a great capacity to adapt to unfamiliar environments. In this paper we review prior work in this area, including problem domains and tasks, fitness functions, synaptic plasticity models, and neural network encoding schemes. We conclude with a discussion of current findings and promising future directions, including the incorporation of functional properties observed in biological neural networks that appear to play a role in learning processes, and addressing the “general” in general intelligence by introducing previously unseen tasks during the evolution process.
... The robot is controlled by a single (i.e., not modularized) neural network, based on the same low-level building blocks used by Yamauchi and Beer (1994a). Thus, the controller is provided with no explicit learning mechanism, such as automatic weight-changing algorithms (e.g., Floreano & Urzelai, 2001). Instead, evolution is left to shape the structure of the neural network that best ensures a correspondence between the agent's behavioral skills and the requirements of its environment. ...
... The results of our research support the claim that ER models have the potential to complement existing analytic and modeling approaches. Different low-level building blocks are currently being investigated as basic components for plastic neural structures (Floreano & Urzelai, 2001). In isolation, our experiment gives us little indication of the relative merits of CTRNNs when compared to other approaches. ...
Article
Full-text available
We are interested in the construction of ecological models of the evolution of learning behavior using methodological tools developed in the field of evolutionary robotics. In this article, we explore the applicability of integrated (i.e., nonmodular) neural networks with fixed connection weights and simple "leaky-integrator" neurons as controllers for autonomous learning robots. In contrast to Yamauchi and Beer (1994a), we show that such a control system is capable of integrating reactive and learned behaviour without explicitly needing hand-designed modules, dedicated to a particular behavior, or an externally introduced reinforcement signal. In our model, evolutionary and ecological contingencies structure the controller and the behavioral responses of the robot. This allows us to concentrate on examining the conditions under which learning behavior evolves.
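The "leaky-integrator" neurons mentioned in this abstract are typically governed by the standard CTRNN equation, tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + theta_j) + I_i. A forward-Euler step of that equation can be sketched directly; the parameter values used are illustrative, not those of any particular study:

```python
import math

def sigma(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One forward-Euler step of the leaky-integrator (CTRNN) equations.
    y: membrane states, W[i][j]: weight from neuron j to neuron i,
    tau: time constants, theta: biases, I: external inputs."""
    n = len(y)
    out = [sigma(y[j] + theta[j]) for j in range(n)]
    return [y[i] + dt / tau[i] * (-y[i] + sum(W[i][j] * out[j] for j in range(n)) + I[i])
            for i in range(n)]
```

With fixed weights, as in the article above, all adaptation has to emerge from these internal state dynamics rather than from weight changes.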
... Floreano [5] has studied the evolution of morphogenetic plastic networks. In his method, the genetic string encodes both the rules by which a neural network develops in space and time and the rules by which the synaptic connections vary their strength while the organism interacts with the environment. ...
... This suggests that Hebbian dynamics, in some form, is an emergent property of the SENMP. The result also gives support to adaptation strategies, such as [5], utilizing Hebbian dynamics. ...
Conference Paper
Mimicking the growth and adaptation of a biological neural circuit in an artificial medium is a challenging task. In this paper, we propose a phenomenological developmental model based on a stochastic evolutionary neuron migration process (SENMP). Employing a spatial encoding scheme with lateral interaction of neurons for artificial neural networks representing candidate solutions within a neural ensemble, neurons of the ensemble form problem-specific geometrical structures as they migrate under selective pressure. The approach is applied to gain new insights into development, adaptation and plasticity in artificial neural networks and to evolve purposeful behavior for autonomous robots. We demonstrate the feasibility and advantages of the approach by using a simulator to evolve a robust navigation behavior for a mobile robot and by verifying the results in a real office environment. We also present some preliminary results regarding the behavior of the adapting neural ensemble and, in particular, a phenomenon exhibiting Hebbian dynamics.
... There are several approaches to the challenge of controller transferal, including adding noise to the simulated robot's sensors [32]; adding generic safety margins to the simulated objects comprising the physical system [26]; evolving directly on the physical system ( [23], [47] and [59]); evolving first in simulation followed by further adaptation on the physical robot ( [47], [51]); or implementing some neural plasticity that allows the physical robot to adapt during its lifetime to novel environments ( [19], [24], [60]). ...
... As mentioned in the previous section, several types of plastic neural network controllers have been proposed that allow for rapid, lifetime adaptation to external perturbation (e.g., [19], [24] and [60]). Furthermore, Keymeulen et al. [36] have formulated an algorithm that continuously updates an internal model of sensor input/world response data obtained from a wheeled robot, and uses this model to evolve and download controllers to the robot during task execution. ...
Article
We present a coevolutionary algorithm for inferring the topology and parameters of a wide range of hidden nonlinear systems with a minimum of experimentation on the target system. The algorithm synthesizes an explicit model directly from the observed data produced by intelligently generated tests. The algorithm is composed of two coevolving populations. One population evolves candidate models that estimate the structure of the hidden system. The second population evolves informative tests that either extract new information from the hidden system or elicit desirable behavior from it. The fitness of candidate models is their ability to explain behavior of the target system observed in response to all tests carried out so far; the fitness of candidate tests is their ability to make the models disagree in their predictions. We demonstrate the generality of this estimation-exploration algorithm by applying it to four different problems—grammar induction, gene network inference, evolutionary robotics, and robot damage recovery—and discuss how it overcomes several of the pathologies commonly found in other coevolutionary algorithms. We show that the algorithm is able to successfully infer and/or manipulate highly nonlinear hidden systems using very few tests, and that the benefit of this approach increases as the hidden systems possess more degrees of freedom, or become more biased or unobservable. The algorithm provides a systematic method for posing synthesis or analysis tasks to a coevolutionary system.
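The two-population loop described in this abstract can be caricatured on a toy target, a hidden linear gain. Population sizes, the mutation scale, and the grid initialization below are arbitrary choices for illustration, not the algorithm's actual settings:

```python
import random

def estimation_exploration(hidden, cycles=10, seed=0):
    """Toy estimation-exploration loop inferring a hidden scalar gain.
    Model fitness: error against all observations gathered so far.
    Test fitness: disagreement among the current candidate models."""
    rng = random.Random(seed)
    data = []                                    # observations from the target system
    models = [m / 2.0 for m in range(-20, 21)]   # candidate gains on a coarse grid
    for _ in range(cycles):
        # exploration: choose the test input on which the models disagree most
        candidates = [rng.uniform(-1.0, 1.0) for _ in range(20)]
        x = max(candidates,
                key=lambda t: max(m * t for m in models) - min(m * t for m in models))
        data.append((x, hidden(x)))              # one experiment on the target
        # estimation: rank models by how well they explain all data, then mutate
        models.sort(key=lambda m: sum((m * xi - yi) ** 2 for xi, yi in data))
        models = models[:20] + [m + rng.gauss(0.0, 0.3) for m in models[:20]]
    return models[0]                             # best-ranked model
```

The key property the sketch preserves is that experiments on the target are chosen to be informative (maximally disputed), so very few of them are needed.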
... Because many applications of EC to learning systems use ANNs, another component that can be evolved is the network topology, as was done early on for supervised learning [24], but not until much later for RL (and then in conjunction with learning rule selection) [30]. In the age of deep learning, the evolution of topology increases in complexity and importance (see Subsection VI-A). ...
... More recently, new biologically-inspired computing approaches have been presented, such as [25,26], and many applications using biologically-inspired computing have been developed, such as [27,28,29,30]. We argue that although most applications of biologically-inspired computing are implemented as software systems, software technology itself does not benefit from the ideas of biologically-inspired computing, owing to the software paradigm's incongruity with the structure and mechanisms of biology (particularly nervous systems), such as the absence of a synapse-like communication mechanism in the Object-Oriented paradigm. ...
Conference Paper
Full-text available
We observe that the nervous systems of biology handle "non-orthogonal concerns" more effectively than the Object-Oriented paradigm does. This natural phenomenon inspires a programming paradigm for handling "cross-cutting". It is argued that the Aspect-Oriented paradigm is a candidate for such a biologically-inspired programming paradigm. To support this point, the Aspect-Oriented paradigm is used to implement a simple Artificial Neural Network (ANN), and the preliminary experiment shows good results. In addition, we propose a biologically-inspired framework of evolvable software which could be implemented using the Aspect-Oriented paradigm combined with neurocomputing and genetic computing.
... Although ER makes no specific assumptions about neural systems or plasticity (Smith, 2002), robotics experiments suggested that neural control structures evolved with fixed weights perform less well than those evolved with plastic weights (Nolfi and Parisi, 1996; Floreano and Urzelai, 2001b). Floreano and Urzelai (2001a) reported that networks evolved faster when synaptic plasticity and neural architectures were evolved simultaneously. In particular, plastic networks were shown to adapt better in the transition from simulation to real robots. ...
... However, the precise nature and magnitude of the changes that plasticity can cope with are not always easy to quantify in robotic settings. Another study reported that networks evolved faster, i.e., had better evolvability, when synaptic plasticity and neural architectures were evolved simultaneously (Floreano and Urzelai, 2001b). ...
Article
Full-text available
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence, but the complexity of the whole system of interactions is an obstacle to the understanding of the key factors at play. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks, artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main computational methods and results are reviewed. Finally, new opportunities and developments are presented.
... Autonomous robotics is one prominent area of application, where neural network controllers are grown from compact genotypes (Sect. 4), and modification to the actual controller may occur during the robot's lifetime, whether the objective is long-term adaptation to the environment [151,218], memorizing events [88], or learning new capabilities [262,276]. ...
Article
Full-text available
In this introduction and review—like in the book which follows—we explore the hypothesis that adaptive growth is a means of producing brain-like machines. The emulation of neural development can incorporate desirable characteristics of natural neural systems into engineered designs. The introduction begins with a review of neural development and neural models. Next, artificial development—the use of a developmentally-inspired stage in engineering design—is introduced. Several strategies for performing this “meta-design” for artificial neural systems are reviewed. This work is divided into three main categories: bio-inspired representations; developmental systems; and epigenetic simulations. Several specific network biases and their benefits to neural network design are identified in these contexts. In particular, several recent studies show a strong synergy, sometimes interchangeability, between developmental and epigenetic processes—a topic that has remained largely under-explored in the literature.
... We showed that the response-threshold models (Bonabeau et al. 1996; Page Jr and Mitchell 1998) and the extended models proposed in this chapter could be formulated as artificial neural networks. The neuronal formalism introduced here will be useful for further extensions of the models, such as changing the threshold values with age or integrating adaptive learning, where the connection weights of the neural network are updated using experience-based learning rules (Floreano and Urzelai 2001; Floreano et al. 2008). Furthermore, one could use neural networks with recurrent connections (Mandic and Chambers 2001) to equip the workers with a memory. ...
... Most of these applications developed simple ANN architectures capable of temporal processing. Typical examples are Discrete-Time Recurrent Neural Networks (TRNN) with two variants: the Plastic Neural Networks (PNN) used in [16,17], and a variant of Feed-Forward networks (FFNN) described in [18,19]. These controllers are capable of behaving properly, remembering acquired abilities, and passing them on to the next generations. ...
... As with other aspects of evolved systems, it may be inappropriate in general to attempt to identify behavioral strategies from a human perspective. This technique was used to exclude the initial synaptic change that occurs when the agent is first placed in the environment. ... deviation greater than 0.05 and 10.8% have a standard deviation over 0.1 (Table 5.5). ...
Article
The development of autonomous control systems is the focus of a great deal of research in the aerospace and defense industries. Autonomous agents need to be able to adapt to new environments and to changes in their physical systems (such as hardware malfunctions). They must be reliable and able to operate effectively without human intervention. While traditional control theory may be applied, it has the disadvantage that developers typically use their external perception of the agent's environment to develop the control logic. This research addresses these issues by applying a method of artificial evolution to the development of embodied autonomous agents. Populations of initially random agents are evolved in a simulated environment where laws of natural selection guide their development. This technique allows controllers to be developed using the agent's internal perspective of its environment and, as a result, to exploit invariant relationships between the agent and its environment. After evolution, individuals are transferred to a physical robotics platform to be assessed. The results and subsequent analysis presented in this thesis indicate that simulation-based evolutionary strategies can be successfully applied to the development of
... This is the main approach adopted in this work. ... [16,17], and a variant of Feed-Forward (FFNN) described in [18,19]. These controllers are capable of behaving properly, remembering acquired abilities, and passing them on to the next generations. ...
... In further experiments where the genetic code for each synapse of the network included one gene whose value caused its remaining genes to be interpreted as connection strengths or as learning rules and rates, 80% of the synapses "made the choice" of using learning, reinforcing the fact that this genetic strategy has comparatively stronger adaptive power [34]. This methodology could also be used to evolve the morphology of neural controllers where synapses are created at runtime and therefore their strengths cannot be genetically specified [35]. Recently, the adaptive properties of this type of adaptive genetic encoding were confirmed also in the context of evolutionary spiking neurons for robot control [24]. ...
... In this domain, efficient learning methods are rare due to the difficulties inherent to learning in recurrent neural networks. Although some approaches have been reported (Soltoggio and Stanley, 2012;Pitonakova, 2012;Dürr et al., 2008;Hoinville and Hénaff, 2004;Floreano and Urzelai, 2001), the learning usually takes place in very small networks or with network topologies very specifically adapted to the task. One problem is to provide a proper feedback signal, preferably generated by the network itself, to guide the learning towards the desired behavior. ...
Conference Paper
Full-text available
Learning recurrent neural networks as behavior controllers for robots requires measures to guide the learning towards a desired behavior. Organisms in nature solve this problem with feedback signals to assess their behavior and to refine their actions. In line with this, a neural framework is developed where synaptic learning is controlled by artificial neuromodulators that are produced in response to (undesired) sensory signals. To test this framework and to establish a baseline for evaluating further approaches, we perform five classical benchmark experiments with a simple random plasticity method. We show that even with this simple plasticity method, behaviors can already be found for all experiments, even for comparably large networks with over 90 plastic synapses. The performance depends strongly on the complexity of the task and less on the chosen network topology. This suggests that controlling learning with neuromodulators is a viable approach that is promising to work also with more sophisticated plasticity methods in the future.
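The neuromodulator-gated random plasticity idea can be sketched as follows. The acceptance test is a simplification added here for stability of the sketch, and the error signal standing in for sensory feedback is hypothetical; neither is a claim about the paper's exact method:

```python
import random

def modulated_plasticity(evaluate, n_weights, steps=500, seed=0):
    """Neuromodulator-gated random plasticity (sketch): an error-like
    modulator signal, produced by undesired sensory input, permits random
    synaptic change; once behavior is acceptable the modulator vanishes
    and the weights freeze."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]
    for _ in range(steps):
        modulator = evaluate(w)            # 0 when behavior is already desired
        if modulator == 0.0:
            break                          # no modulator, no plasticity
        trial = [wi + rng.gauss(0.0, 0.1) for wi in w]
        if evaluate(trial) <= modulator:   # simplistic acceptance (assumption)
            w = trial
    return w
```

The essential point the sketch keeps is that learning is self-terminating: the controller's own feedback signal, not an external schedule, decides when plasticity stops.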
... Drawbacks of this approach include poor genotype scaling for large network encodings and very large parameter spaces due to the lack of geometrical constraints of the networks. Alternatively, indirect encoding allows the genotype to specify a rule or method for growing the ANN instead of specifying the parameter values directly (Husbands et al., 1998;Beer, 2000;Floreano and Urzelai, 2001;Stanley and Miikkulainen, 2003). NeuroEvolution of Augmented Topologies (NEAT) and HyperNEAT use indirect encoding to evolve network topologies, beginning with a small network and adding complexity to that network as evolution progresses (Stanley and Miikkulainen, 2002;Stanley et al., 2009;Clune et al., 2011;Risi and Stanley, 2012). ...
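An indirect, growth-rule encoding of the kind cited above can be sketched with Kitano-style matrix rewriting: the genotype is a small rule table, and repeated expansion of a start symbol develops it into a full connectivity matrix. The particular rule table below is invented for illustration:

```python
def develop(axiom, rules, depth):
    """Matrix rewriting sketch: each symbol expands into a 2x2 block of
    symbols; after `depth` expansions the terminal '0'/'1' symbols form
    the connectivity matrix of the network."""
    grid = [[axiom]]
    for _ in range(depth):
        new = []
        for row in grid:
            top, bottom = [], []
            for sym in row:
                (a, b), (c, d) = rules[sym]  # 2x2 expansion of this symbol
                top += [a, b]
                bottom += [c, d]
            new += [top, bottom]
        grid = new
    return grid
```

A genome of a few rules thus specifies a matrix whose size grows exponentially with depth, which is exactly the scaling advantage over direct encodings noted above.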
Article
Full-text available
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
... Most of these applications developed simple ANN architectures capable of temporal processing. Typical examples are Discrete-Time Recurrent Neural Networks (TRNN) with two variants: the Plastic Neural Networks (PNN) used in [16][17], and a variant of Feed-Forward networks (FFNN) described in [18][19]. These controllers are capable of behaving properly, remembering acquired abilities, and passing them on to the next generations. ...
Article
Full-text available
Mobile robot navigation and obstacle avoidance in an unknown, static environment are analyzed in this paper. Guided by position sensors, artificial neural network (ANN) based controllers establish the desired trajectory between the current position and a target point. Evolutionary algorithms were used to choose the best controller. This approach, known as Evolutionary Robotics (ER), commonly resorts to very simple ANN architectures. Although these include temporal processing, most of them do not consider the learned experience in the controller's evolution. Thus, the ER research presented in this article focuses on the specification and testing of ANN-based controllers in which genetic mutations are performed from one generation to another. Discrete-Time Recurrent Neural Network controllers were tested, with two variants: plastic neural networks (PNN) and standard feed-forward (FFNN) networks. The way in which evolution was performed was also analyzed. As a result, controlled mutation does not exhibit major advantages over the uncontrolled one, showing that diversity is more powerful than controlled adaptation.
... However, often the transfer is unsuccessful due to inconsistencies between the physical and virtual environments. There are several approaches to this challenge, including adding noise to the simulated robot's sensors (Jakobi, 1997); adding generic safety margins to the simulated objects comprising the physical system (Funes & Pollack, 1999); evolving directly on the physical system (Thompson, 1997; Floreano & Mondada, 1998; Mahdavi & Bentley, 2003); evolving first in simulation followed by further adaptation on the physical robot (Pollack et al., 2000; Mahdavi & Bentley, 2003); or implementing some neural plasticity that allows the physical robot to adapt during its lifetime to novel environments (Floreano & Urzelai, 2001; DiPaolo, 2000; Tokura et al., 2001). Here we reintroduce predictive internal models, but use evolutionary processes to simultaneously generate both the models and the reactive controllers based on them. ...
Article
An important question in cognitive science is whether internal models are encoded in the brain of higher animals at birth, and are only subsequently refined through experience, or whether models are synthesized over the lifetime of an animal, and if so, how they are formed. A further question is whether animals maintain a single model of a particular body part or tool, or whether multiple competing models are maintained simultaneously. In this paper we describe a co-evolutionary algorithm that automatically synthesizes and maintains multiple candidate models of a behaving robot. These predictive models can then be used to generate new controllers to either elicit some desired behavior under uncertainty (where competing models agree on the resulting behavior), or determine actions that uncover hidden components of the target robot (where models disagree, indicating further model synthesis is required). We demonstrate automated model synthesis from sensor data; model synthesis 'from scratch' (little initial knowledge about the robot's morphology is assumed); and integrated, continued model synthesis and controller design. This new modeling methodology may shed light on how models are acquired and maintained in higher organisms for the purpose of prediction and anticipation.
... A recurrent artificial neural network was used for modeling the development of a spiking neural network for the control of a Khepera robot [22]. In [24], the matrix rewriting scheme suggested in [44] was applied to modeling neural morphogenesis, which was coevolved with neural plasticity rules for controlling a mobile robot. ...
Article
Self-organization is one of the most important features observed in social, economic, ecological and biological systems. Distributed self-organizing systems are able to generate emergent global behaviors through local interactions among individuals without a centralized control. Such systems are also supposed to be robust, self-repairable and highly adaptive. However, design of self-organizing systems is very challenging, particularly when the emerged global behaviors are required to be predictable. Morphogenesis is the biological process in which a fertilized cell proliferates, producing a large number of cells that interact with each other to generate the body plan of an organism. Biological morphogenesis, governed by gene regulatory networks through cellular and molecular interactions, can be seen as a self-organizing process. This talk presents a methodology that uses genetic and cellular mechanisms inspired from biological morphogenesis to self-organize collective engineered systems, such as multi-robot systems and modular robots. We show that the morphogenetic approach is able to self-organize collective systems without a centralized control, which is nevertheless able to generate controlled global behaviors. Meanwhile, we demonstrate that the global behavior is adaptable to changing environments.
... Over this basic architecture more complex mechanisms can be implemented, such as gas-nets (Husbands et al., 1998) and synaptic plasticity (Di Paolo, 2000b;Floreano and Urzelai, 2001). ...
Article
Full-text available
In complex adaptive systems, where internal and external non-linear interactions give rise to an emergent functionality, analytic decomposition into components and isolated functional evaluation of them is not a viable methodological practice. More recently, embodied bottom-up synthetic methodological approaches have been proposed to solve this problem. Evolutionary simulation modelling (specifically evolutionary robotics) provides an explicit research methodology in this direction. We argue and illustrate that the scientific relevance of such a methodology can best be understood in terms of a double conceptual blending: i) a conceptual blending between structural and functional levels of description embedded in the simulation; and ii) a methodological blending between empirical and theoretical work in scientific research. Simulation models show their scientific value in the reconceptualization of theoretical assumptions, in hypothesis generation, and in proof of concept. We conclude that simulation models are capable of extending our cognitive and epistemological resources to (re)conceptualise scientific domains and to establish causal relations between different levels of description.
Article
Full-text available
Developmental robotics is also known as epigenetic robotics. We propose in this paper that there is one substantial difference between developmental robotics and epigenetic robotics: epigenetic robotics concentrates primarily on modeling the development of cognitive elements of living systems in robotic systems, such as language, emotion, and social skills, while developmental robotics should also cover the modeling of neural and morphological development in single- and multirobot systems. With the recent rapid advances in evolutionary developmental biology and systems biology, a growing number of genetic and cellular principles underlying biological morphogenesis have been revealed. These principles are helpful not only in understanding biological development, but also in designing self-organizing, self-reconfigurable, and self-repairable engineered systems. In this paper, we propose that morphogenetic robotics, an emerging new field, is an important part of developmental robotics in addition to epigenetic robotics. By morphogenetic robotics, we mean a class of methodologies in robotics for designing self-organizing, self-reconfigurable, and self-repairable single- or multirobot systems, using genetic and cellular mechanisms governing biological morphogenesis. We categorize these methodologies into three areas, namely, morphogenetic swarm robotic systems, morphogenetic modular robots, and morphogenetic body and brain design for robots. Examples are provided for each of the three areas to illustrate the main ideas underlying the morphogenetic approaches to robotics.
... We showed that the response-threshold models (Bonabeau et al. 1996; Page and Mitchell 1998) and the extended model proposed in this article could be formulated as artificial neural networks. The neuronal formalism introduced here will be useful for further extension of the models, such as changing the threshold values with age or the integration of adaptive learning, where the connection weights of the neural network are updated via experience-based learning rules (Floreano and Urzelai 2001; Floreano et al. 2008). Furthermore, one could use neural networks with recurrent connections (Mandic and Chambers 2001) to equip the workers with a memory. ...
Article
Full-text available
In social insects, workers perform a multitude of tasks, such as foraging, nest construction, and brood rearing, without central control of how work is allocated among individuals. It has been suggested that workers choose a task by responding to stimuli gathered from the environment. Response-threshold models assume that individuals in a colony vary in the stimulus intensity (response threshold) at which they begin to perform the corresponding task. Here we highlight the limitations of these models with respect to colony performance in task allocation. First, we show with analysis and quantitative simulations that the deterministic response-threshold model constrains the workers' behavioral flexibility under some stimulus conditions. Next, we show that the probabilistic response-threshold model fails to explain precise colony responses to varying stimuli. Both of these limitations would be detrimental to colony performance when dynamic and precise task allocation is needed. To address these problems, we propose extensions of the response-threshold model by adding variables that weigh stimuli. We test the extended response-threshold model in a foraging scenario and show in simulations that it results in an efficient task allocation. Finally, we show that response-threshold models can be formulated as artificial neural networks, which consequently provide a comprehensive framework for modeling task allocation in social insects.
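As a rough illustration of the probabilistic response-threshold rule that the abstract builds on, the probability of engaging in a task can be written as a sigmoidal function of stimulus intensity and the worker's threshold. The threshold and stimulus values below are made up, and the exponent n = 2 is simply the common choice in this model family, not a value taken from the paper:

```python
def engage_probability(stimulus, threshold, n=2):
    """Probabilistic response-threshold rule: the probability that a worker
    engages in a task rises with the stimulus intensity and falls with the
    worker's response threshold, following s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

# A low-threshold ("sensitive") worker engages more readily than a
# high-threshold one at the same stimulus level.
stimulus = 0.5
p_sensitive = engage_probability(stimulus, threshold=0.2)
p_insensitive = engage_probability(stimulus, threshold=2.0)
assert p_sensitive > p_insensitive
```

Varying the thresholds across individuals is what produces division of labor in these models: the same colony-level stimulus recruits only the most sensitive workers first.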
... A recurrent artificial neural network was used for modeling the development of a spiking neural network for the control of a Khepera robot [22]. In [24], the matrix rewriting scheme suggested in [44] was applied to modeling neural morphogenesis, which was coevolved with neural plasticity rules for controlling a mobile robot. A low-level GRN model with chemical diffusion was adopted for evolving neurogenesis for hydra-like animats [41]. ...
Article
Full-text available
Most existing multirobot systems for pattern formation rely on a predefined pattern, which is impractical for dynamic environments where the pattern to be formed should be able to change as the environment changes. In addition, adaptation to environmental changes should be realized based only on local perception of the robots. In this paper, we propose a hierarchical gene regulatory network (H-GRN) for adaptive multirobot pattern generation and formation in changing environments. The proposed model is a two-layer gene regulatory network (GRN), where the first layer is responsible for adaptive pattern generation for the given environment, while the second layer is a decentralized control mechanism that drives the robots onto the pattern generated by the first layer. An evolutionary algorithm is adopted to evolve the parameters of the GRN subnetwork in layer 1 for optimizing the generated pattern. The parameters of the GRN in layer 2 are also optimized to improve the convergence performance. Simulation results demonstrate that the H-GRN is effective in forming the desired pattern in a changing environment. Robustness of the H-GRN to robot failure is also examined. A proof-of-concept experiment using e-puck robots confirms the feasibility and effectiveness of the proposed model.
... This can occur at a mechanistic level, with the addition of separate 'adaptive' processes which sit on top of a sensorimotor system (e.g. [5], where plastic synapses are used to enable an agent to switch between different lights during phototaxis). It may also occur via the imposition of a structural modularity (e.g. ...
Conference Paper
Full-text available
This paper investigates the processes used by an evolved, embodied simulated agent to adapt to large disruptive changes in its sensor morphology, whilst maintaining performance in a phototaxis task. By avoiding the imposition of separate mechanisms for the fast sensorimotor dynamics and the relatively slow adaptive processes, we are able to comment on the forms of adaptivity which emerge within our Evolutionary Robotics framework. This brings about interesting notions regarding the relationship between different timescales. We examine the dynamics of the network and find different reactive behaviours depending on the agent's current sensor configuration, but are only able to begin to explain the dynamics of the transitions between these states with reference to variables which exist in the agent's environment, as well as within its neural network 'brain'.
... Robotic systems built according to this paradigm have the property of adapting to dynamical environments without human intervention [4]. A technique that is especially adaptive and robust to environmental changes is the plastic controller based on Plastic Neural Networks (PNN) [4,5,12]. This technique uses neural networks with adaptive synapses that are updated on-line using simple learning rules. ...
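The "simple learning rules" used in PNN controllers are local Hebbian updates applied while the robot behaves. A minimal sketch, assuming a self-limiting plain-Hebb variant that keeps weights in [0, 1] (the learning rate and activation values below are illustrative, not taken from any of the cited papers):

```python
def hebb_update(w, pre, post, eta=0.3):
    """Plain Hebbian rule with self-limiting growth: the update shrinks as
    the weight w approaches 1, so w stays bounded in [0, 1] for
    activations in [0, 1]. eta is an illustrative learning rate."""
    return w + eta * (1.0 - w) * pre * post

# Repeated correlated pre/post activity drives the weight up, but it
# saturates rather than diverging.
w = 0.1
for _ in range(50):
    w = hebb_update(w, pre=0.9, post=0.8)
assert 0.0 < w <= 1.0
```

Because the rule is purely local, each synapse can keep adapting online without any global error signal, which is what makes such controllers robust to environmental change.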
Conference Paper
Full-text available
Plastic Neural Networks (PNNs) are known for their ability to adapt to environmental changes. It is generally believed that PNNs cannot solve timing tasks which require a predefined delay before execution of an action. In this study we investigate the ability of PNNs to solve timing tasks. Our experiments evolve PNNs to perform successfully on a task requiring the delayed execution of an action. The results of our experiments show that PNNs are capable of solving the timing task. We analyse the underlying mechanism and find it is based on slow neural activation dynamics. The mechanism is discussed in relation to mechanisms found in other neural models. We conclude that any neural model that can accommodate slow activation dynamics can solve the timing task.
... In further experiments where the genetic code for each synapse of the network included one gene whose value caused its remaining genes to be interpreted as connection strengths or as learning rules and rates, 80% of the synapses "made the choice" of using learning, reinforcing the fact that this genetic strategy has a comparatively stronger adaptive power [34]. This methodology could also be used to evolve the morphology of neural controllers where synapses are created at runtime and therefore their strengths cannot be genetically specified [35]. Recently, the adaptive properties of this type of genetic encoding were also confirmed in the context of evolutionary spiking neurons for robot control [24]. ...
... The field of evolutionary robotics provides interesting approaches to evolving synaptic plasticity. In [30] a model that genetically encodes rules of synaptic plasticity with rules of neural morphogenesis was shown to be feasible. In [31] an evolutionary model combines an integrate and fire neuron with a correlation-based synaptic plasticity model and developmental encoding. ...
Article
Full-text available
Since synaptic plasticity is regarded as a potential mechanism for memory formation and learning, there is growing interest in the study of its underlying mechanisms. Recently several evolutionary models of cellular development have been presented, but none have been shown to be able to evolve a range of biological synaptic plasticity regimes. In this paper we present a biologically plausible evolutionary cellular development model and test its ability to evolve different biological synaptic plasticity regimes. The core of the model is a genomic and proteomic regulation network which controls cells and their neurites in a 2D environment. The model has previously been shown to successfully evolve behaving organisms, enable gene-related phenomena, and produce biological neural mechanisms such as temporal representations. Several experiments are described in which the model evolves different synaptic plasticity regimes using a direct fitness function. Other experiments examine the ability of the model to evolve simple plasticity regimes in a task-based fitness function environment. These results suggest that such evolutionary cellular development models have the potential to be used as a research tool for investigating the evolutionary aspects of synaptic plasticity and at the same time can serve as the basis for novel artificial computational systems.
... Most of these applications developed simple ANN architectures, which are capable of temporal processing. Typical examples are the Discrete-Time Recurrent Neural Networks (TRNN), with two variants: Plastic Neural Networks (PNN), used in [16,17], and a variant of Feed-Forward networks (FFNN) described in [18,19]. These controllers are capable of behaving properly, remembering the acquired abilities and passing them on to the next generations. ...
Conference Paper
Full-text available
Mobile robot navigation and obstacle avoidance in an unknown environment are analyzed in this paper. Guided by position sensors, artificial neural network (ANN) based controllers establish the desired trajectory between the current and a target point. Evolutionary algorithms were used to choose the best controller. This approach, known as evolutionary robotics (ER), commonly resorts to very simple ANN architectures. Although they include temporal processing, most of them do not consider the learned experience in the controller's evolution. Thus, the ER research presented in this article focuses on the specification and testing of the ANN-based controllers implemented when genetic mutations are performed from one generation to another. Discrete-time recurrent neural network based controllers were tested, with two variants: plastic neural networks (PNN) and standard feedforward (FFNN) networks. The way in which evolution was performed was also analyzed. As a result, controlled mutation does not exhibit major advantages over non-controlled mutation, showing that diversity is more powerful than controlled adaptation.
... Under this framework, value systems, by modulating the state trajectories of some variables of the control architecture according to adaptively significant events, become key mechanisms in the production of adaptive behaviour. Inspired by some recent work on plastic controllers [34,18] and especially Di Paolo's work [31], we hypothesize that homeostatic plasticity could be a genuine candidate for a general-purpose value system. We believe that within an autonomous dynamical framework a number of fundamental problems of computational and evolutionary functionalism could also be solved, especially those concerning the notion of normative function. ...
Article
Full-text available
Computational functionalism [5] fails to understand the embodied and situated nature of behaviour by taking steady-state functions as theoretical primitives, and by interpreting cognitive behaviour from a language-like, observer-dependent framework without a naturalized normativity. Evolutionary functionalism [28, 27], on the other hand, by grounding functional normativity on historical processes, fails to give an account of normative functionality based on the present causal mechanism producing behaviour. We propose an alternative autonomous dynamical framework where functionality is defined as contribution to self-maintenance [15, 10, 35] and normativity as satisfaction of closure criteria. We develop this framework through a set of formal definitions in the framework of dynamical systems theory and propose the hypothesis of a homeostatic-plasticity-based [31, 40] general-purpose value system as an internalized normative mechanism that selects between internal state trajectories to produce adaptive functionality under different environmental conditions. To test the hypothesis we develop a simulation model where lower-level specifications of a control architecture (a homeostatic plastic DRNN) give rise (through a simulated evolutionary process) to adaptive behaviour in a foraging task where food sources can be poisonous or profitable. Analysis of the evolved agent shows that plastic changes occur when the agent produces salient adaptive interactions, those plastic changes determining the adaptive strategy. The embodied and interactive adaptive functionality is dynamically analysed, illustrating the autonomous dynamical framework.
Chapter
In general, the topology of Artificial Neural Networks (ANNs) is human-engineered and learning is merely the process of weight adjustment. However, it is well known that this can lead to sub-optimal solutions. Topology and Weight Evolving Artificial Neural Networks (TWEANNs) can lead to better topologies; however, once obtained, they remain fixed and cannot adapt to new problems. In this chapter, rather than evolving a fixed-structure artificial neural network as in neuroevolution, we evolve a pair of programs that build the network. One program runs inside neurons and allows them to move, change, die or replicate. The other is executed inside dendrites and allows them to change length and weight, be removed, or replicate. The programs are represented and evolved using Cartesian Genetic Programming. From the developed networks multiple traditional ANNs can be extracted, each of which solves a different problem. The proposed approach has been evaluated on multiple classification problems.
Article
Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.
Article
This paper presents a low-power, neuromorphic spiking neural network (SNN) chip that can be integrated in an electronic nose system to classify odor. The proposed SNN takes advantage of sub-threshold oscillation and onset-latency representation to reduce power consumption and chip area, providing a more distinct output for each odor input. The synaptic weights between the mitral and cortical cells are modified according to a spike-timing-dependent plasticity (STDP) learning rule. During the experiment, the odor data are sampled by a commercial electronic nose (Cyranose 320) and are normalized before training and testing to ensure that the classification result is caused only by learning. Measurement results show that the circuit consumed an average power of approximately 3.6 μW with a 1-V power supply to discriminate odor data. The SNN has either a high or low output response for a given input odor, making it easy to determine whether the circuit has made the correct decision. The measurement results of the SNN chip and some well-known algorithms (support vector machine and the K-nearest neighbor program) are compared to demonstrate the classification performance of the proposed SNN chip. The mean testing accuracy is 87.59% for the data used in this paper.
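The spike-timing-dependent plasticity rule mentioned in the abstract can be illustrated with the standard pair-based exponential form: the sign and magnitude of the weight change depend on the interval between pre- and postsynaptic spikes. The amplitudes and time constant below are illustrative textbook values, not the chip's actual parameters:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: weight change as a function of the spike-time
    difference delta_t = t_post - t_pre (in ms). Pre-before-post pairings
    potentiate; post-before-pre pairings depress."""
    if delta_t > 0:    # causal pairing: potentiation
        return a_plus * math.exp(-delta_t / tau)
    else:              # anti-causal pairing: depression
        return -a_minus * math.exp(delta_t / tau)

assert stdp_dw(10.0) > 0   # pre fired 10 ms before post: strengthen
assert stdp_dw(-10.0) < 0  # post fired 10 ms before pre: weaken
```

Both branches decay exponentially with |delta_t|, so only near-coincident spike pairs change the synapse appreciably.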
Article
This paper explores the use of Bilaterally Symmetric Segmented Neural Networks (BSSNN) to control a multi-jointed arm. Experiments with a 3-jointed arm show it capable of quickly learning to catch balls randomly placed within its reach. It was also found to be extremely robust to environmental changes such as the manipulation of its arm segment lengths. Complex arms with up to 10 joints were also explored and found to evolve innovative strategies to capture balls while avoiding entanglement.
Article
Evolutionary robotics is a biologically inspired approach to robotics that is advantageous to studying the evolution of language. A new model for the evolution of language is presented. This model is used to investigate the interrelationships between communication abilities, namely linguistic production and comprehension, and other behavioral skills. For example, the model supports the hypothesis that the ability to form categories from direct interaction with an environment constitutes the ground for subsequent evolution of communication and language. A variety of experiments, based on the role of social and evolutionary variables in the emergence of communication, are described.
Article
The way in which organisms create body schema, based on their interactions with the real world, is an unsolved problem in neuroscience. Similarly, in evolutionary robotics, a robot learns to behave in the real world either without recourse to an internal model (requiring at least hundreds of interactions), or a model is hand designed by the experimenter (requiring much prior knowledge about the robot and its environment). In this paper we present a method that allows a physical robot to automatically synthesize a body schema, using multimodal sensor data that it obtains through interaction with the real world. Furthermore, this synthesis can be either parametric (the experimenter provides an approximate model and the robot then refines the model) or topological: the robot synthesizes a predictive model of its own body plan using little prior knowledge. We show that this latter type of synthesis can occur when a physical quadrupedal robot performs only nine, 5-second interactions with its environment. The questions of whether organisms do, or robots should, create and maintain models of themselves are central in neuroscience and robotics, respectively. In neuroscience, it has been argued that higher organisms must possess predictive models of their own bodies, because biological sensor systems are too slow to provide adequate feedback for fast and/or complex movements: internal models must predict what movements will result from a set of muscle contractions (D. Wolpert, 1998; Llinas, 2001). To this end, neural imaging and behavioral studies have begun to seek out where in the primate brain such models may exist, and what form they take (D. Wolpert, 1998; Bhushan and Shadmehr, 1999; Imamizu et al., 2003). In a similar way, internal models lie at the heart of a long-standing debate in artificial intelligence and robotics: how, or should, a robot rely on internal models to realize useful behaviors?
In the early decades of AI, modeling played a large role, when research emphasized higher-level cognitive functions, such as planning. Brooks (Brooks, 1991) and later others (Hendriks-Jansen, 1996; Clark, 1998; Pfeifer and Scheier, 1999) spearheaded embodied AI, in which model-free embodied robots were emphasized: it was thought that active interaction with the environment could supplant the need for internal introspection using models.
Conference Paper
Blynel et al. [2] recently compared two types of recurrent neural network, Continuous Time Recurrent Neural Networks (CTRNNs) and Plastic Neural Networks (PNNs), on their ability to control the behaviour of a robot in a simple learning task; they found little difference between the two. However, this may have been due to the simplicity of their task. Our comparison on a slightly more complex task yielded very different results: 70% of runs with CTRNNs produced successful learning networks; runs with PNNs failed to produce a single success.
Chapter
Full-text available
Morphogenesis can be considered as a self-organizing process shaped by natural evolution, in which two major adaptation mechanisms found in nature are involved, namely evolution and development. The main philosophy of morphogenetic robotics is to apply evolutionary developmental principles to robotics for designing self-organizing, self-reconfigurable, and self-repairable single- or multirobot systems. We categorize these methodologies into three areas, namely, morphogenetic swarm robotic systems, morphogenetic modular robots, and co-development of body and brain of robotic systems. In this chapter, we give a brief introduction to morphogenetic robotics. A few examples are also presented to illustrate how evolutionary developmental principles can be applied to swarm robots in changing environments. We also describe computational models for genetically driven neural and morphological development and activity-dependent neural development. As developmental mechanisms are often shaped by evolution both in nature and in simulated systems, we suggest that evolutionary developmental robotics is a natural next step to follow.
Conference Paper
Full-text available
Recent studies demonstrated that sophisticated information processing can occur in spike-based computational systems that make use of synapses with only two states (potentiated or depressed). Here we present the hybrid software/hardware implementation of a model of the mammalian olfactory bulb using an analog VLSI device comprising an array of integrate and fire neurons with bistable synapses. Our implementation incorporates both software and hardware components, integrated using an asynchronous event-based spike representation. The model is able to perform highly selective simulated odor recognition, using induced synchronization within a population of neurons as the key to computation. The success of this scheme shows that the analog VLSI circuits used can perform sophisticated computation, taking advantage of the neuron dynamics and the topology of the network, without requiring precise analog synaptic weights.
Article
We investigate interactions between evolution, development and lifelong layered learning in a combination we call evolutionary developmental evaluation (EDE), using a specific implementation, developmental tree-adjoining grammar guided genetic programming (GP). The approach is consistent with the process of biological evolution and development in higher animals and plants, and is justifiable from the perspective of learning theory. In experiments, the combination is synergistic, outperforming algorithms using only some of these mechanisms. It is able to solve GP problems that lie well beyond the scaling capabilities of standard GP. The solutions it finds are simple, succinct, and highly structured. We conclude this paper with a number of proposals for further extension of EDE systems.
Article
Full-text available
The integration of modulatory neurons into evolutionary artificial neural networks is proposed here. A model of modulatory neurons was devised to describe a plasticity mechanism at the low level of synapses and neurons. No initial assumptions were made on the network structures or on the system-level dynamics. The work of this thesis studied the onset of high-level system dynamics that emerged from employing the low-level mechanism of neuromodulated plasticity. Fully-fledged control networks were designed by simulated evolution: an evolutionary algorithm could evolve networks with arbitrary size and topology using standard and modulatory neurons as building blocks. A set of dynamic, reward-based environments was implemented with the purpose of eliciting the onset of learning and memory in networks. The evolutionary time and the performance of solutions were compared for networks that could or could not use modulatory neurons. The experimental results demonstrated that modulatory neurons provide an evolutionary advantage that increases with the complexity of the control problem. Networks with modulatory neurons were also observed to evolve alternative neural control structures with respect to networks without neuromodulation. Different network topologies were observed to lead to a computational advantage such as faster input-output signal processing. The evolutionary and computational advantages induced by modulatory neurons strongly suggest the important role of neuromodulated plasticity for the evolution of networks that require temporal neural dynamics, adaptivity and memory functions.
Article
Full-text available
In this article, I discuss the use of neurally driven evolutionary autonomous agents (EAAs) in neuroscientific investigations. Two fundamental questions are addressed. Can EAA studies shed new light on the structure and function of biological nervous systems? And can these studies lead to the development of new tools for neuroscientific analysis? The value and significant potential of EAA modelling in both respects is demonstrated and discussed. Although the study of EAAs for neuroscience research still faces difficult conceptual and technical challenges, it is a promising and timely endeavour.
Article
This article presents a novel method for the evolution of artificial autonomous agents with small neurocontrollers. It is based on adaptive, self-organized compact genotypic encoding (SOCE) generating the phenotypic synaptic weights of the agent's neurocontroller. SOCE implements a parallel evolutionary search for neurocontroller solutions in a dynamically varying and reduced subspace of the original synaptic space. It leads to the emergence of compact successful neurocontrollers starting from large networks. The method can serve to estimate the network size needed to perform a given task, and to delineate the relative importance of the neurons composing the agent's controller network.
Article
Full-text available
A recent article (Stanton and Sejnowski 1989) on long-term synaptic depression in the hippocampus has reopened the issue of the computational efficiency of particular synaptic learning rules (Hebb 1949; Palm 1988a; Morris and Willshaw 1989) — homosynaptic versus heterosynaptic and monotonic versus nonmonotonic changes in synaptic efficacy. We have addressed these questions by calculating and maximizing the signal-to-noise ratio, a measure of the potential fidelity of recall, in a class of associative matrix memories. Up to a multiplicative constant, there are three optimal rules, each providing for synaptic depression such that positive and negative changes in synaptic efficacy balance out. For one rule, which is found to be the Stent-Singer rule (Stent 1973; Rauschecker and Singer 1979), the depression is purely heterosynaptic; for another (Stanton and Sejnowski 1989), the depression is purely homosynaptic; for the third, which is a generalization of the first two, and has a higher signal-to-noise ratio, it is both heterosynaptic and homosynaptic. The third rule takes the form of a covariance rule (Sejnowski 1977a,b) and includes, as a special case, the prescription due to Hopfield (1982) and others (Willshaw 1971; Kohonen 1972).
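The covariance rule discussed in the abstract can be sketched for an associative matrix memory: each synaptic change is proportional to the product of the mean-subtracted pre- and postsynaptic activities, so potentiation and depression balance out across the pattern set. The pattern counts and dimensions below are arbitrary, and the recall step is a simplified threshold readout rather than anything from the cited analysis:

```python
import numpy as np

def covariance_memory(X, Y):
    """Associative matrix memory trained with the covariance rule
    (Sejnowski 1977): W = sum over patterns of (y - <y>)(x - <x>)^T.
    Centering the activities makes positive and negative weight changes
    balance over the training set."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return Yc.T @ Xc

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(20, 16)).astype(float)  # input patterns
Y = rng.integers(0, 2, size=(20, 16)).astype(float)  # associated outputs
W = covariance_memory(X, Y)

# Recall: present a (centred) stored input and threshold the weighted sums.
recalled = (W @ (X[0] - X.mean(axis=0)) > 0).astype(float)
```

The purely heterosynaptic and purely homosynaptic rules mentioned in the abstract arise as special cases in which only one of the two activity terms is mean-subtracted.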
Article
Full-text available
This article illustrates an artificial developmental system that is a computationally efficient technique for the automatic generation of complex artificial neural networks (ANNs). The artificial developmental system can develop a graph grammar into a modular ANN made of a combination of simpler subnetworks. A genetic algorithm is used to evolve coded grammars that generate ANNs for controlling six-legged robot locomotion. A mechanism for the automatic definition of neural subnetworks is incorporated. Using this mechanism, the genetic algorithm can automatically decompose a problem into subproblems, generate a subANN for solving the subproblem, and instantiate copies of this subANN to build a higher-level ANN that solves the problem. We report some simulation results showing that the same problem cannot be solved if the mechanism for automatic definition of subnetworks is suppressed. We support our argument with pictures that describe the steps of development, how ANN structures are evolved, and how the ANNs compute.
Article
Full-text available
Much research has been dedicated recently to applying genetic algorithms to populations of neural networks. However, while in real organisms the inherited genotype maps in complex ways into the resulting phenotype, in most of this research the development process that creates the individual phenotype is ignored. In this paper we present a model of neural development which includes cell division and cell migration in addition to axonal growth and branching. This reflects, in a very simplified way, what happens in the ontogeny of real organisms. The development process of our artificial organisms shows successive phases of functional differentiation and specialization. In addition, we find that mutations that affect different phases of development have very different evolutionary consequences. A single change in the early stages of cell division/migration can have huge effects on the phenotype, while changes in later stages usually have a less dramatic impact. Sometimes changes that affect the first developmental stages may be retained, producing sudden changes in evolutionary history.
Conference Paper
Full-text available
We present a model based on genetic algorithms and neural networks. The neural networks develop on the basis of an inherited genotype but show phenotypic plasticity, i.e., they develop in ways that are adapted to the specific environment. The genotype-to-phenotype mapping is not abstractly conceived as taking place in a single instant but is a temporal process that takes a substantial portion of an individual's lifetime to complete and is sensitive to the particular environment in which the individual happens to develop. Furthermore, the respective roles of the genotype and of the environment are not decided a priori but are part of what evolves. We show how such a model is able to evolve control systems for autonomous robots that can adapt to different types of environments.
Article
Full-text available
The intuitive expectation is that the scheme used to encode the neural network in the chromosome should be critical to the success of evolving neural networks to solve difficult problems. In 1990 Kitano [1] published an encoding scheme based on context-free parallel matrix rewriting. The method allowed compact, finite chromosomes to grow neural networks of potentially infinite size. Results were presented that demonstrated superior evolutionary properties of the matrix rewriting method compared to a simple direct encoding. In this paper, we present results that contradict those findings, and demonstrate that a genetic algorithm (GA) using a direct encoding can find good individuals just as efficiently as a GA using matrix rewriting.
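Matrix rewriting of the kind Kitano proposed can be illustrated with a toy grammar: each non-terminal symbol rewrites in parallel into a 2x2 block, so after k steps a single start symbol grows into a 2^k x 2^k connectivity matrix for the network. The rewriting rules below are invented for the example and are not taken from [1]:

```python
# Each non-terminal rewrites into a 2x2 block of symbols; the terminals
# '0' and '1' rewrite into uniform blocks of themselves.
RULES = {
    "S": [["A", "B"], ["B", "A"]],
    "A": [["1", "0"], ["0", "1"]],
    "B": [["0", "0"], ["1", "0"]],
    "0": [["0", "0"], ["0", "0"]],
    "1": [["1", "1"], ["1", "1"]],
}

def develop(symbol, steps):
    """Grow a 2^steps x 2^steps matrix by parallel matrix rewriting."""
    grid = [[symbol]]
    for _ in range(steps):
        grid = [
            [RULES[s][r][c] for s in row for c in range(2)]
            for row in grid
            for r in range(2)
        ]
    return grid

matrix = develop("S", 2)  # a 4x4 grid of '0'/'1' connectivity entries
```

A short genome (the rule set) thus specifies an arbitrarily large connection matrix, which is exactly the compactness claim the abstract's experiments put to the test against direct encoding.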
Article
Full-text available
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network. This research was sponsored in part by the National Science Foundation under Contract Number EET-87163...
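A heavily simplified sketch of the Cascade-Correlation recruitment step follows. In the original algorithm, a pool of candidate units is trained by gradient ascent to maximize the correlation between each candidate's activation and the network's residual output errors; this toy version just scores a random, untrained pool and freezes the winner's input weights. All sizes, data, and the candidate pool are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def candidate_score(values, errors):
    """Cascade-Correlation candidate score S: summed magnitude of the
    covariance between a candidate unit's activations (over the training
    set) and the residual errors at each output."""
    v = values - values.mean()
    e = errors - errors.mean(axis=0)
    return np.abs(v @ e).sum()

X = rng.normal(size=(32, 3))            # training inputs
residual = rng.normal(size=(32, 1))     # residual output errors
pool = rng.normal(size=(8, 3))          # candidate units' input weights
activations = np.tanh(X @ pool.T)       # candidate hidden-unit outputs

# Recruit the candidate whose activation best tracks the residual error;
# its input weights are frozen once it joins the network.
best = max(range(8), key=lambda i: candidate_score(activations[:, i], residual))
frozen_weights = pool[best]
```

After recruitment, only the new unit's outgoing weights are trained; freezing the input side is what lets the unit persist as a permanent feature detector, as the abstract describes.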
Article
Full-text available
In a previous SAB paper [10], we presented the scientific rationale for simulating the coevolution of pursuit and evasion strategies. Here, we present an overview of our simulation methods and some results. Our most notable results are as follows. First, co-evolution works to produce good pursuers and good evaders through a pure bootstrapping process, but both types are rather specially adapted to their opponents' current counter-strategies. Second, eyes and brains can also co-evolve within each simulated species -- for example, pursuers usually evolved eyes on the front of their bodies (like cheetahs), while evaders usually evolved eyes pointing sideways or even backwards (like gazelles). Third, both kinds of coevolution are promoted by allowing spatially distributed populations, gene duplication, and an explicitly spatial morphogenesis program for eyes and brains that allows bilateral symmetry. The paper concludes by discussing some possible applications of simulated pursuit-evasion ...
Chapter
From Animals to Animats 4 brings together the latest research at the frontier of an exciting new approach to understanding intelligence. The Animals to Animats Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow natural and synthetic agents (animats) to adapt and survive in uncertain environments. The work presented focuses on well-defined models—robotic, computer-simulation, and mathematical—that help to characterize and compare various organizational principles or architectures underlying adaptive behavior in both natural animals and animats. Bradford Books imprint
Chapter
This book explores a central issue in artificial intelligence, cognitive science, and artificial life: how to design information structures and processes that create and adapt intelligent agents through evolution and learning. Among the first uses of the computer was the development of programs to model perception, reasoning, learning, and evolution. Further developments resulted in computers and programs that exhibit aspects of intelligent behavior. The field of artificial intelligence is based on the premise that thought processes can be computationally modeled. Computational molecular biology brought a similar approach to the study of living systems. In both cases, hypotheses concerning the structure, function, and evolution of cognitive systems (natural as well as synthetic) take the form of computer programs that store, organize, manipulate, and use information. Systems whose information processing structures are fully programmed are difficult to design for all but the simplest applications. Real-world environments call for systems that are able to modify their behavior by changing their information processing structures. Cognitive and information structures and processes, embodied in living systems, display many effective designs for biological intelligent agents. They are also a source of ideas for designing artificial intelligent agents. This book explores a central issue in artificial intelligence, cognitive science, and artificial life: how to design information structures and processes that create and adapt intelligent agents through evolution and learning. The book is organized around four topics: the power of evolution to determine effective solutions to complex tasks, mechanisms to make evolutionary design scalable, the use of evolutionary search in conjunction with local learning algorithms, and the extension of evolutionary search in novel directions. Bradford Books imprint
Article
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
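The repeated weight adjustment described above is the chain rule applied layer by layer. A self-contained sketch for a tiny 2-2-1 sigmoid network (flat weight vector and squared error are choices made here for brevity, not part of the original formulation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """w[0:6]: hidden weights+biases (2 units x [w0, w1, bias]); w[6:9]: output."""
    h = [sigmoid(w[3*j] * x[0] + w[3*j + 1] * x[1] + w[3*j + 2]) for j in range(2)]
    y = sigmoid(w[6] * h[0] + w[7] * h[1] + w[8])
    return h, y

def backprop_grad(w, x, t):
    """Gradient of E = 0.5*(y - t)^2, propagating the output delta backwards."""
    h, y = forward(w, x)
    dy = (y - t) * y * (1 - y)                    # delta at the output unit
    g = [0.0] * 9
    for j in range(2):
        dh = dy * w[6 + j] * h[j] * (1 - h[j])    # delta at hidden unit j
        g[3*j], g[3*j + 1], g[3*j + 2] = dh * x[0], dh * x[1], dh
        g[6 + j] = dy * h[j]
    g[8] = dy
    return g
```

Descending this gradient adjusts the hidden units so that they come to encode the task regularities the abstract refers to.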
Article
Functional and mechanistic comparisons are made between several network models of cognitive processing: competitive learning, interactive activation, adaptive resonance, and back propagation. The starting point of this comparison is the article of Rumelhart and Zipser (1985) on feature discovery through competitive learning. All the models which Rumelhart and Zipser (1985) have described were shown in Grossberg (1976b) to exhibit a type of learning which is temporally unstable. Competitive learning mechanisms can be stabilized in response to an arbitrary input environment by being supplemented with mechanisms for learning top-down expectancies, or templates; for matching bottom-up input patterns with the top-down expectancies; and for releasing orienting reactions in a mismatch situation, thereby updating short-term memory and searching for another internal representation. Network architectures which embody all of these mechanisms were called adaptive resonance models by Grossberg (1976c). Self-stabilizing learning models are candidates for use in real-world applications where unpredictable changes can occur in complex input environments. Competitive learning postulates are inconsistent with the postulates of the interactive activation model of McClelland and Rumelhart (1981), and suggest different levels of processing and interaction rules for the analysis of word recognition. Adaptive resonance models use these alternative levels and interaction rules. The self-organizing learning of an adaptive resonance model is compared and contrasted with the teacher-directed learning of a back propagation model. A number of criteria for evaluating real-time network models of cognitive processing are described and applied.
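The competitive learning rule under discussion is winner-take-all: only the prototype closest to the input moves toward it. A minimal sketch (Euclidean winner selection and the standard update, with an illustrative learning rate) also makes the instability plausible: which unit wins depends on the current weights, so a drifting input distribution can endlessly re-assign winners.

```python
def competitive_update(weights, x, lr=0.1):
    """One winner-take-all step: move the closest prototype toward input x.
    weights: list of prototype vectors (modified in place); returns winner index."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    k = dists.index(min(dists))
    weights[k] = [wi + lr * (xi - wi) for wi, xi in zip(weights[k], x)]
    return k
```

Adaptive resonance adds the top-down template matching and mismatch-driven search described above precisely to stabilize this kind of learning.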
Article
Exploring complex networks
Article
We address two issues in Evolutionary Robotics, namely the genetic encoding and the performance criterion, also known as the fitness function. For the first aspect, we suggest encoding mechanisms for parameter self-organization, instead of the parameters themselves as in conventional approaches. We argue that the suggested encoding generates systems that can solve more complex tasks and are more robust to unpredictable sources of change. We support our arguments with a set of experiments on evolutionary neural controllers for physical robots and compare them to conventional encodings. In addition, we show that when the genetic encoding is also left free to evolve, artificial evolution will select to exploit mechanisms of self-organization. For the second aspect, we discuss the role of the performance criterion and suggest Fitness Space as a framework for conceiving fitness functions in Evolutionary Robotics. Fitness Space can be used as a guide to design fitness functions as well as to compare different experiments in Evolutionary Robotics.
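Encoding self-organization mechanisms rather than parameters can be sketched as a genome that assigns each synapse a plasticity rule and a learning rate, with the weights themselves left to adapt online. The rule forms below are simplified placeholders, not the exact equations from the work cited; the point is only that the genome specifies *how* a synapse changes, never its final value.

```python
# Per-synapse plasticity rules (hypothetical, simplified forms; weights kept in [0, 1]).
RULES = {
    "hebb":  lambda w, pre, post: (1.0 - w) * pre * post,  # saturating Hebbian growth
    "decay": lambda w, pre, post: -w * pre * post,         # activity-gated decay
}

def plasticity_step(genome, weights, pre, post):
    """One online adaptation step: each synapse applies its genetically
    encoded rule and learning rate. genome: list of (rule_name, eta)."""
    for i, (rule, eta) in enumerate(genome):
        weights[i] += eta * RULES[rule](weights[i], pre[i], post)
        weights[i] = min(max(weights[i], 0.0), 1.0)
    return weights
```

Since the evolved genome is rule names and rates, the same individual can re-derive working weights after a change in body or environment, which is the robustness argument made above.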
Article
The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems-be they neurons, power stations or lasers-will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.
Article
This paper describes how the SGOCE paradigm has been used to evolve developmental programs capable of generating recurrent neural networks that control the behavior of simulated insects. This paradigm is characterized by an encoding scheme, by an evolutionary algorithm, by syntactic constraints, and by an incremental strategy that are described in turn. The additional use of an insect model equipped with six legs and two antennae made it possible to generate control modules that allowed it to successively add gradient-following and obstacle-avoidance capacities to walking behavior. The advantages of this evolutionary approach, together with directions for future work, are discussed.
Conference Paper
Genetic algorithms (GAs) are used to generate neural networks that implement Boolean functions. A neural network involves both an architecture, which is a graph of connections, and a set of weights. The algorithm that is put forward yields both the architecture and the weights by using chromosomes that encode an algorithmic description based upon a cell-rewriting grammar. The developmental process interprets the grammar for l cycles and develops a neural net parametrized by l. The encoding, along with the developmental process, has been designed to improve upon existing approaches. They implement the following key properties. The representation on the chromosome is abstract and compact. Any chromosome develops a valid phenotype. The developmental process gives modular and interpretable architectures with a powerful scalability property. The GA finds a neural net for the 50-input parity function, and for the 40-input symmetry function.
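The cell-rewriting idea can be reduced to a toy: a program tree whose nodes are division operations, read by cells as they divide. This sketch keeps only sequential division ("S"), parallel division ("P"), and stop ("E"); Gruau's actual cellular encoding also has link registers, weight-setting operations, and recursion, all omitted here.

```python
import itertools

def develop(tree):
    """Grow a graph from a division-program tree.
    tree nodes: ("S", left, right), ("P", left, right), or ("E",).
    Nodes -1 / -2 are virtual input / output poles of the network."""
    counter = itertools.count()
    root = next(counter)
    edges = {(-1, root), (root, -2)}
    cells = [(root, tree)]
    neurons = []
    while cells:
        cid, prog = cells.pop()
        if prog[0] == "E":                    # stop dividing: cell becomes a neuron
            neurons.append(cid)
            continue
        a, b = next(counter), next(counter)
        new_edges = set()
        for u, v in edges:
            if prog[0] == "S":                # sequential: a keeps inputs, b keeps outputs
                if v == cid:   new_edges.add((u, a))
                elif u == cid: new_edges.add((b, v))
                else:          new_edges.add((u, v))
            else:                             # "P" parallel: both children keep everything
                if v == cid:   new_edges |= {(u, a), (u, b)}
                elif u == cid: new_edges |= {(a, v), (b, v)}
                else:          new_edges.add((u, v))
        if prog[0] == "S":
            new_edges.add((a, b))             # a feeds b
        edges = new_edges
        cells += [(a, prog[1]), (b, prog[2])]
    return neurons, edges
```

Because every subtree develops into a valid subgraph, any chromosome yields a valid phenotype, and reusing subtrees gives the modularity and scalability claimed above.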
Article
Research in robotics programming is divided into two camps. The direct hand-programming approach uses an explicit model or a behavioral model (subsumption architecture). The machine learning community uses neural networks and/or genetic algorithms. We claim that hand programming and learning are complementary: the two approaches used together can be orders of magnitude more powerful than either taken separately. We propose a method to combine them. It includes three concepts: syntactic constraints to restrict the search space, hand-made problem decomposition, and hand-given fitness. We use this method to solve a complex problem (eight-legged locomotion). It needs 5,000 times fewer evaluations than when genetic algorithms are used alone. 1 Introduction 1.1 The motivation for Interactive Evolutionary Algorithms In [3] Dave Cliff, Inman Harvey and Phil Husbands from the University of Sussex lay down a chart for the development of cognitive architectures, or control systems, for sit...
Article
This paper describes new genetic and developmental principles for an artificial evolutionary system (AES) and reports the first simulation results. Emphasis is placed on those developmental processes which reduce the length of the genome needed to code for a given problem. We exemplify the usefulness of developmental processes with cell growth, cell differentiation, and the creation of neural control structures, which we used to control a real-world autonomous agent. The importance of including developmental processes rests largely on the fact that a neural network can be specified implicitly by using cell-to-cell communication. 1 Introduction In the field of autonomous agents different approaches have been studied. One of them, the evolutionary approach, aims to produce increasingly sophisticated autonomous agents with no need to care about the details of the robot's control structure. As others (Nolfi et al., 1994; Cangelosi et al., 1994; Dellaert & Beer, 1994; Harvey et al., 1995), we are convinc...
Article
This paper explains in detail a biologically inspired encoding scheme for the artificial evolution of neural network robot controllers. Under the scheme, an individual cell divides and moves, in response to protein interactions with an artificial genome, to form a multi-cellular 'organism'. After differentiation, dendrites grow out of each cell, guided by chemically sensitive growth cones, to form connections between the cells. The resultant network is then interpreted as a recurrent neural network robot controller. Results are given of preliminary experiments to evolve robot controllers for both corridor-following and object-avoidance tasks. Keywords: Artificial Evolution, Morphogenesis, Evolutionary Robotics. 1 Introduction Many people have noted the difficulties involved in designing control architectures for robots by hand ([10], [1]), and as robots and the behaviours we demand of them become more complicated these difficulties can only increase. Evolutionary robotics is a...
Article
This work reports experiments in interactive evolutionary robotics. The goal is to evolve an Artificial Neural Network (ANN) to control the locomotion of an 8-legged robot. The ANNs are encoded using a cellular developmental process called cellular encoding. In previous work, similar experiments were carried out successfully on a simulated robot; they took, however, around 1,000,000 different ANN evaluations. In this work the fitness is determined on a real robot, and no more than a few hundred evaluations can be performed. Various ideas were implemented so as to decrease the required number of evaluations from 1,000,000 to 200. First, we used cell cloning and link typing. Second, we did as many things as possible interactively: interactive problem decomposition, interactive syntactic constraints, interactive fitness. More precisely: 1- A modular design was chosen where a controller for an individual leg, with a precise neuronal interface, was developed. 2- Syntactic constrai...
Phenotypic Plasticity in Evolving Neural Networks
  • S Nolfi
  • O Miglino
  • D Parisi
Nolfi, S.; Miglino, O., & Parisi, D. (1994) Phenotypic Plasticity in Evolving Neural Networks. In Nicoud, J.-D., & Gaussier, P. (eds.), Proceedings of the conference From Perception to Action. IEEE Computer Press, Los Alamitos, CA.
Learning Representations by Back-Propagation of Errors
  • D E Rumelhart
  • G E Hinton
  • R J Williams
Rumelhart, D. E.; Hinton, G. E., & Williams, R. J. (1986) Learning Representations by Back-Propagation of Errors. Nature, 323, 533–536.
A force field development scheme for use with genetic encodings of network-based sensory-motor control systems
  • P Husbands
Cell interactions as a control tool of developmental processes for evolutionary robotics
  • P Eggenberger
  • P Maes
  • M Matarić
  • J Meyer
  • J Pollack
  • H Roitblat
  • P. Eggenberger