Chapter

Levels of Dynamics and Adaptive Behavior in Evolutionary Neural Controllers


Abstract

Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior. The Simulation of Adaptive Behavior Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, computer science, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow adaptation and survival in uncertain environments. The work presented focuses on robotic and computational experimentation with well-defined models that help to characterize and compare alternative organizational principles or architectures underlying adaptive behavior in both natural animals and synthetic animats. Bradford Books imprint.


Article
Full-text available
Lifetime learning, or the change (or acquisition) of behaviors during a lifetime, based on experience, is a hallmark of living organisms. Multiple mechanisms may be involved, but biological neural circuits have repeatedly demonstrated a vital role in the learning process. These neural circuits are recurrent, dynamic, and non-linear, and models of neural circuits employed in neuroscience and neuroethology accordingly tend to involve continuous-time, non-linear, and recurrently interconnected components. Currently, the main approach for finding configurations of dynamical recurrent neural networks that demonstrate behaviors of interest is using stochastic search techniques, such as evolutionary algorithms. In an evolutionary algorithm, these dynamic recurrent neural networks are evolved to perform the behavior over multiple generations, through selection, inheritance, and mutation, across a population of solutions. Although these systems can be evolved to exhibit lifetime learning behavior, there are no explicit rules built into these dynamic recurrent neural networks that facilitate learning during their lifetime (e.g., reward signals). In this work, we examine a biologically plausible lifetime learning mechanism for dynamical recurrent neural networks. We focus on a recently proposed reinforcement learning mechanism inspired by neuromodulatory reward signals and ongoing fluctuations in synaptic strengths. Specifically, we extend one of the best-studied and most-commonly used dynamic recurrent neural networks to incorporate the reinforcement learning mechanism. First, we demonstrate that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task. Second, we compare the robustness and efficiency of the reinforcement learning rules in relation to two baseline models, a random walk and a hill-climbing walk through parameter space. Third, we systematically study the effect of the different meta-parameters of the learning mechanism on the behavioral learning performance. Finally, we report on preliminary results exploring the generality and scalability of this learning mechanism for dynamical neural networks as well as directions for future work.
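The reward-modulated fluctuation idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the step sizes, the flux range, and the running-average reward baseline are all assumptions, and the sketch is applied to a generic parameter vector rather than to CTRNN synapses.

```python
import random

def reward_modulated_search(evaluate, dim, steps=2000, flux=0.2, lr=0.1, seed=0):
    """Sketch of reward-modulated synaptic fluctuation: parameters drift
    randomly around slowly moving centers; when performance rises above its
    running average (a positive 'neuromodulatory' reward), the centers move
    toward the current fluctuated values, consolidating the useful drift."""
    rng = random.Random(seed)
    centers = [rng.uniform(-1, 1) for _ in range(dim)]
    offsets = [0.0] * dim
    baseline = evaluate(centers)
    for _ in range(steps):
        # random walk of the fluctuation offsets, clipped to the flux range
        offsets = [max(-flux, min(flux, o + rng.gauss(0, 0.05))) for o in offsets]
        trial = [c + o for c, o in zip(centers, offsets)]
        perf = evaluate(trial)
        reward = perf - baseline              # reward = improvement over baseline
        baseline += 0.05 * (perf - baseline)  # running performance average
        if reward > 0:
            centers = [c + lr * reward * o for c, o in zip(centers, offsets)]
    return centers
```

Run against a simple objective (e.g. `lambda x: -sum(v * v for v in x)`), the centers tend to drift toward parameter regions of higher performance without any explicit gradient information.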
Article
Full-text available
A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
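The self-optimization procedure can be illustrated on the classic discrete Hopfield network. Note that the article's contribution is extending this to asymmetric continuous-time networks; the following is only a simplified sketch of the underlying technique, with illustrative learning rate and reset counts: repeatedly converge from random initial states, then imprint each visited attractor with a Hebbian update.

```python
import random

def converge(weights, state, rng, sweeps=50):
    """Asynchronous +/-1 updates until a fixed point (or sweep limit)."""
    n = len(state)
    for _ in range(sweeps):
        changed = False
        for i in rng.sample(range(n), n):
            h = sum(weights[i][j] * state[j] for j in range(n) if j != i)
            s = 1 if h >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:
            break
    return state

def self_optimize(weights, resets=200, alpha=0.001, seed=0):
    """Converge from random states and imprint each attractor with a small
    Hebbian update; the learned associative memory generalizes over the
    visited attractors and enlarges the basins of the better ones."""
    rng = random.Random(seed)
    n = len(weights)
    learned = [row[:] for row in weights]
    for _ in range(resets):
        state = [rng.choice([-1, 1]) for _ in range(n)]
        state = converge(learned, state, rng)
        for i in range(n):
            for j in range(n):
                if i != j:
                    learned[i][j] += alpha * state[i] * state[j]
    return learned
```

The key design point is that learning happens only on converged attractor states, not on transients, which is what lets the memory generalize over attractor configurations.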
Article
Full-text available
Recently, many brain-inspired models have been used in attempts to support the cognitive abilities of artificial organisms. In this article, we introduce a computational framework to facilitate these efforts, emphasizing the cooperative performance of brain substructures. Specifically, we introduce an agent-based representation of brain areas, together with a hierarchical cooperative co-evolutionary design mechanism. The proposed methodology is capable of designing biologically inspired cognitive systems, considering both the specialties of brain areas and their cooperative performance. The effectiveness of the proposed approach is demonstrated by designing a brain-inspired model of working memory usage. The co-evolutionary scheme enforces the cooperation of agents representing the involved brain areas, facilitating the accomplishment of two different tasks by the same model. Furthermore, we investigate the performance of the model in lesion conditions, highlighting the distinct roles of agents representing brain areas. The implemented model is embedded in a simulated robotic platform to support its cognitive and behavioral capabilities.
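The cooperative co-evolutionary scheme can be sketched as one subpopulation per component (here, per "brain area"), where each individual is scored by assembling it with the current best members of the other subpopulations. This is a generic textbook-style sketch under stated assumptions (truncation selection, Gaussian mutation, elitism), not the hierarchical mechanism of the article.

```python
import random

def coevolve(n_parts, genome_len, fitness, pop_size=20, gens=30, seed=0):
    """Minimal cooperative co-evolution: each subpopulation evolves one
    component; individuals are evaluated in a 'team' with the best known
    representative of every other component."""
    rng = random.Random(seed)
    pops = [[[rng.uniform(-1, 1) for _ in range(genome_len)]
             for _ in range(pop_size)] for _ in range(n_parts)]
    best = [pop[0] for pop in pops]
    for _ in range(gens):
        for p in range(n_parts):
            scored = []
            for ind in pops[p]:
                team = best[:p] + [ind] + best[p + 1:]
                scored.append((fitness(team), ind))
            scored.sort(key=lambda t: t[0], reverse=True)
            best[p] = scored[0][1]
            # truncation selection + Gaussian mutation, with elitism
            parents = [ind for _, ind in scored[:pop_size // 2]]
            pops[p] = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                       for _ in range(pop_size)]
            pops[p][0] = best[p][:]
    return best
```

Because each component is always scored in the context of the others, selection pressure favors components that cooperate rather than components that are merely strong in isolation.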
Article
Two poles of understanding define the hermeneutic circle: gestalt understanding, in which the experience of a text or piece of music is apprehended as a unity, and conceptual understanding, in which the work is broken down into more determinate parts. In our sensory-motor interaction with the world, the environment is composed of discrete objects, but there is also an omnipresent gestalt background of nonrepresentational practices that confer meaning on these objects. It is argued that neuroscience can provide an explanation of how a physical system instantiates these types of understanding. A naturalized version of temporality, that extended temporal horizon that frames the flux of sensible experience and confers meaning on it, is equated with the dynamical system concept of temporal hierarchical organization. With this naturalized concept of temporality, it is demonstrated how the two poles of understanding that define the hermeneutic circle can emerge in an evolutionary autonomous agent as those dynamics best suited to maintain optimal grip in that particular agent. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Biological brains can adapt and learn from past experience. Yet neuroevolution, i.e. automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e. non-adaptive) heuristics. This paper analyzes this inherent deceptiveness in a variety of different dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs.
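The core of novelty search is replacing the objective score with a novelty score: the sparseness of an individual's behavior relative to behaviors seen so far, typically the mean distance to its k nearest neighbors in behavior space. A minimal sketch of that metric (the behavior representation and the choice of k are task-dependent assumptions):

```python
def novelty(behavior, archive, k=5):
    """Novelty of a behavior descriptor = mean Euclidean distance to its
    k nearest neighbors among previously recorded behaviors."""
    if not archive:
        return float('inf')  # nothing seen yet: maximally novel
    dists = sorted(sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
                   for other in archive)
    return sum(dists[:k]) / min(k, len(dists))
```

In a full algorithm, this score simply takes the place of fitness in selection, and sufficiently novel behaviors are appended to the archive, which is what removes the deceptive pressure toward static heuristics described above.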
Conference Paper
Blynel et al. [2] recently compared two types of recurrent neural network, Continuous Time Recurrent Neural Networks (CTRNNs) and Plastic Neural Networks (PNNs), on their ability to control the behaviour of a robot in a simple learning task; they found little difference between the two. However, this may have been due to the simplicity of their task. Our comparison on a slightly more complex task yielded very different results: 70% of runs with CTRNNs produced successful learning networks; runs with PNNs failed to produce a single success.
Conference Paper
Full-text available
Neuroevolution comprises the class of methods that evolve neural network topologies and weights by means of evolutionary algorithms. Despite their good performance in several control tasks, most of these methods use variations of simple sigmoidal neurons. Recent investigations have shown the potential applicability of more realistic neuron models, opening new perspectives for the next generation of neuroevolutionary methods. This work aims to extend a recent method known as NEAT to evolve continuous-time recurrent neural networks (CTRNNs). The proposed model is compared with previous methods on a control benchmark test. Preliminary results reveal some advantages when evolving general CTRNNs over traditional models.
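The CTRNN model that recurs throughout these abstracts has a standard form: tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i. A minimal forward-Euler integration step can be sketched as follows (the function name, parameter layout, and step size are illustrative choices, not any particular paper's code):

```python
import math

def step_ctrnn(y, weights, biases, taus, inputs, dt=0.01):
    """One forward-Euler step of CTRNN dynamics:
    tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i
    where sigma is the logistic function and weights[j][i] is the
    connection from neuron j to neuron i."""
    sigma = lambda x: 1.0 / (1.0 + math.exp(-x))
    outputs = [sigma(yj + bj) for yj, bj in zip(y, biases)]
    new_y = []
    for i, yi in enumerate(y):
        net = sum(weights[j][i] * outputs[j] for j in range(len(y)))
        dydt = (-yi + net + inputs[i]) / taus[i]
        new_y.append(yi + dt * dydt)
    return new_y
```

It is these continuous states and time constants, rather than synaptic plasticity, that give evolved CTRNNs the internal dynamics several of the papers above exploit for "learning" without weight changes.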
Conference Paper
In (8) Yamauchi and Beer explored the abilities of continuous time recurrent neural networks (CTRNNs) to display reinforcement-learning-like abilities. The investigated tasks were the generation and learning of short bit sequences. This "learning" came about without modifications of synaptic strengths, but simply from the internal dynamics of the evolved networks. In this paper this approach is extended to two embodied agent tasks, where simulated robots have to acquire and retain "knowledge" while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
Conference Paper
Full-text available
The current work addresses the problem of redesigning brain-inspired artificial cognitive systems in order to gradually enrich them with advanced cognitive skills. In the proposed approach, properly formulated neural agents are employed to represent brain areas. A cooperative coevolutionary method, with the inherent ability to co-adapt substructures, supports the design of agents. Interestingly, the same method provides a consistent mechanism to reconfigure (if necessary) the structure of agents, facilitating follow-up modelling efforts. In the present work we demonstrate partial redesign of a brain-inspired cognitive system in order to furnish it with learning abilities. The implemented model is successfully embedded in a simulated robotic platform which supports environmental interaction, exhibiting the ability of the improved cognitive system to adopt, in real time, two different operating strategies.
Conference Paper
Full-text available
The coupling between an agent’s body and its nervous system ensures that optimal behaviour generation can be undertaken in a specific niche. Depending on this coupling, nervous system or body plan architecture can partake in more or less of the behaviour. We will refer to this as the automatic distribution of computational workload. It is automatic since the coupling is evolved and not pre-specified. In order to investigate this further, we attempt to identify how, in models of undulatory fish, the coupling between body plan morphology and nervous system architecture should emerge in several constrained experimental setups. It is found that neural circuitry emerges minimalistically in all cases and that when certain body segmentation features are not coevolved, the agents exhibit higher levels of neural activity. On account of this, it is suggested that an unconstrained body plan morphology permits greater flexibility in the agent’s ability to generate behaviour, whilst, if the body plan is constrained, flexibility is reduced with the result that the nervous system has to compensate.
Conference Paper
This paper explores the capabilities of continuous time recurrent neural networks (CTRNNs) to display reinforcement learning-like abilities on a set of T-Maze and double T-Maze navigation tasks, where the robot has to locate and "remember" the position of a reward-zone. The "learning" comes about without modifications of synapse strengths, but simply from internal network dynamics, as proposed by (12). Neural controllers are evolved in simulation and, in the simple case, evaluated on a real robot. The evolved controllers are analyzed and the results obtained are discussed.
Conference Paper
This study describes how complex goal-directed behavior can evolve in a hierarchically organized recurrent neural network controlling a simulated Khepera robot. Different types of dynamic structures self-organize in the lower and higher levels of a network for the purpose of achieving complex navigation tasks. The parametric bifurcation structures that appear in the lower level explain the mechanism of how behavior primitives are switched in a top-down way. In the higher level, a topologically ordered mapping of initial cell activation states to motor-primitive sequences self-organizes by utilizing the initial sensitivity characteristics of nonlinear dynamical systems. A further experiment tests the evolved controller's adaptability to changes in its environment. The biological plausibility of the model's essential principles is discussed.
Conference Paper
Full-text available
Body morphology is thought to have heavily influenced the evolution of neural architecture. However, the extent of this interaction and its underlying principles are largely unclear. To help us elucidate these principles, we examine the artificial evolution of a hypothetical nervous system embedded in a fish-inspired animat. The aim is to observe the evolution of neural structures in relation to both body morphology and required motor primitives. Our investigations reveal that increasing the pressure to evolve a wider range of movements also results in higher levels of neural symmetry. We further examine how different body shapes affect the evolution of neural structure; we find that, in order to achieve optimal movements, the neural structure integrates and compensates for asymmetrical body morphology. Our study clearly indicates that different parts of the animat - specifically, nervous system and body plan - evolve in concert with and become highly functional with respect to the other parts. The autonomous emergence of morphological and neural computation in this model contributes to unveiling the surprisingly strong coupling of such systems in nature.