Conference Paper

Evolvability Search: Directly Selecting for Evolvability in order to Study and Produce It


Abstract

One hallmark of natural organisms is their significant evolvability, i.e., their increased potential for further evolution. However, reproducing such evolvability in artificial evolution remains a challenge, which both reduces the performance of evolutionary algorithms and inhibits the study of evolvable digital phenotypes. Although some types of selection in evolutionary computation indirectly encourage evolvability, one unexplored possibility is to directly select for evolvability. To do so, we estimate an individual's future potential for diversity by calculating the behavioral diversity of its immediate offspring, and select organisms with increased offspring variation. While the technique is computationally expensive, we hypothesized that direct selection would better encourage evolvability than indirect methods. Experiments in two evolutionary robotics domains confirm this hypothesis: in both domains, such Evolvability Search produces solutions with higher evolvability than those produced with Novelty Search or traditional objective-based search algorithms. Further experiments demonstrate that the higher evolvability produced by Evolvability Search in a training environment also generalizes, producing higher evolvability in a new test environment without further selection. Overall, Evolvability Search enables generating evolvability more easily and directly, facilitating its study and understanding, and may inspire future practical algorithms that increase evolvability without significant computational overhead.
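The core measurement in the abstract, scoring a parent by the behavioral diversity of a sample of its immediate offspring, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `mutate` and `behavior` are hypothetical stand-ins for a domain's mutation operator and behavioral characterization, and counting distinct grid cells is one common way to make "diversity" concrete.

```python
import random

def offspring_diversity(genome, mutate, behavior, n_offspring=40, cell=0.5):
    """Estimate a parent's evolvability as the number of distinct behaviors
    among a sample of its immediate offspring (sketch, hypothetical API)."""
    cells = set()
    for _ in range(n_offspring):
        child = mutate(genome)
        b = behavior(child)  # behavioral characterization, e.g. final (x, y)
        # discretize the behavior space so 'distinct' is well defined
        cells.add(tuple(int(c // cell) for c in b))
    return len(cells)
```

Selection would then favor parents with higher `offspring_diversity`, at the cost of `n_offspring` extra evaluations per individual, which is the computational expense the abstract mentions.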


... In this paper, we define evolvability as the ability to generate phenotypic variation, a definition used by previous works (Mengistu et al., 2016; Gajewski et al., 2019). This variation is measured in certain dimensions of interest defined by the behavioral characterization (BC) function. ...
... However, if the maze is open on one side, evolution can create individuals that wander outside of the maze, achieving high diversity of final positions without ever learning navigation skills (Lehman and Stanley, 2011a). Existing algorithms using this definition of evolvability are Evolvability Search (Mengistu et al., 2016) and Evolvability ES (Gajewski et al., 2019). ...
... Indirect Selection There are various mechanisms that produce indirect selection for evolvability (Mengistu et al., 2016). One mechanism is to introduce regular mass extinction events (Lehman and Miikkulainen, 2015), freeing up many niches. ...
Preprint
Full-text available
One of the most important lessons from the success of deep learning is that learned representations tend to perform much better at any task compared to representations we design by hand. Yet evolution-of-evolvability algorithms, which aim to automatically learn good genetic representations, have received relatively little attention, perhaps because of the large amount of computational power they require. The recent method Evolvability ES allows direct selection for evolvability with little computation. However, it can only be used to solve problems where evolvability and task performance are aligned. We propose Quality Evolvability ES, a method that simultaneously optimizes for task performance and evolvability, without this restriction. Our proposed approach Quality Evolvability has similar motivation to Quality Diversity algorithms, but with some important differences. While Quality Diversity aims to find an archive of diverse and well-performing, but potentially genetically distant, individuals, Quality Evolvability aims to find a single individual with a diverse and well-performing distribution of offspring. By doing so, Quality Evolvability is forced to discover more evolvable representations. We demonstrate on robotic locomotion control tasks that Quality Evolvability ES, similarly to Quality Diversity methods, can learn faster than objective-based methods and can handle deceptive problems.
... One challenge in evolutionary computation (EC) is to design algorithms capable of uncovering highly evolvable representations; though evolvability's definition is debated, the idea is to find genomes with great potential for further evolution [2,10,15,19,21,26,33,43]. Here, as in previous work, we adopt a definition of evolvability as the propensity of an individual to generate phenotypic diversity [21,23,26]. ...
... One challenge in evolutionary computation (EC) is to design algorithms capable of uncovering highly evolvable representations; though evolvability's definition is debated, the idea is to find genomes with great potential for further evolution [2,10,15,19,21,26,33,43]. Here, as in previous work, we adopt a definition of evolvability as the propensity of an individual to generate phenotypic diversity [21,23,26]. Such evolvability is important in practice, because it broadens the variation accessible through mutation, thereby accelerating evolution; improved evolvability thus would benefit many areas across EC, e.g. ...
... For example, environments wherein goals vary modularly over generations may implicitly favor individuals better able to adapt to such variations [17]. The second approach, which is the focus of this paper, is to select directly for evolvability, i.e. to judge individuals by directly testing their potential for further evolution [26]. While the first approach is more biologically plausible and is important to understanding natural evolvability, the second benefits from its directness, its potential ease of application to new domains, and its ability to enable the study of highly-evolvable genomes without fully understanding evolvability's natural emergence. ...
Preprint
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances. This paper introduces evolvability ES, an evolutionary algorithm designed to explicitly and efficiently optimize for evolvability, i.e. the ability to further adapt. The insight is that it is possible to derive a novel objective in the spirit of natural evolution strategies that maximizes the diversity of behaviors exhibited when an individual is subject to random mutations, and that efficiently scales with computation. Experiments in 2-D and 3-D locomotion tasks highlight the potential of evolvability ES to generate solutions with tens of thousands of parameters that can quickly be adapted to solve different tasks and that can productively seed further evolution. We further highlight a connection between evolvability and a recent and popular gradient-based meta-learning algorithm called MAML; results show that evolvability ES can perform competitively with MAML and that it discovers solutions with distinct properties. The conclusion is that evolvability ES opens up novel research directions for studying and exploiting the potential of evolvable representations for deep neural networks.
Conference Paper
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge in evolutionary computation; such evolvability is important in practice, because it accelerates evolution and enables fast adaptation to changing circumstances. This paper introduces evolvability ES, an evolutionary algorithm designed to explicitly and efficiently optimize for evolvability, i.e. the ability to further adapt. The insight is that it is possible to derive a novel objective in the spirit of natural evolution strategies that maximizes the diversity of behaviors exhibited when an individual is subject to random mutations, and that efficiently scales with computation. Experiments in 2-D and 3-D locomotion tasks highlight the potential of evolvability ES to generate solutions with tens of thousands of parameters that can quickly be adapted to solve different tasks and that can productively seed further evolution. We further highlight a connection between evolvability in EC and a recent and popular gradient-based meta-learning algorithm called MAML; results show that evolvability ES can perform competitively with MAML and that it discovers solutions with distinct properties. The conclusion is that evolvability ES opens up novel research directions for studying and exploiting the potential of evolvable representations for deep neural networks.
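The "novel objective in the spirit of natural evolution strategies" described above can be illustrated with a toy variant. This is an assumption-laden sketch, not the paper's exact derivation: each Gaussian perturbation is rewarded by its behavioral distance from the mean offspring behavior, and the standard ES gradient estimator is applied to those rewards, so the update pushes the parameter distribution toward regions where mutations produce more behavioral spread.

```python
import numpy as np

def evolvability_es_step(theta, behavior, sigma=0.1, pop=400, lr=0.05, rng=None):
    """One NES-style update that increases the spread of offspring behaviors.
    Toy sketch of the idea behind Evolvability ES, not its exact objective."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((pop, theta.size))
    b = np.array([behavior(theta + sigma * e) for e in eps])   # offspring behaviors
    scores = np.linalg.norm(b - b.mean(axis=0), axis=1)        # behavioral diversity
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize, as in ES
    return theta + lr / (pop * sigma) * eps.T @ scores
```

Because only per-perturbation behaviors are needed, the same parallelization tricks that make standard ESs scale apply here unchanged.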
... Evolvability is most often measured in prior novelty search studies by estimating how many unique behaviors exist within an individual's immediate mutational neighborhood (Lehman and Stanley, 2011b, 2013; Mengistu et al., 2016). However, such a measure requires independently evaluating many mutations of an individual. ...
... Calculating Evolvability Evolvability benefits ER because greater evolvability provides more variation from which evolution can select. Previous work has shown that diversity-driven algorithms can encourage greater evolvability than traditional goal-oriented EAs (Lehman and Stanley, 2011b;Mengistu et al., 2016). To probe the robustness of these results, this paper measures novelty search with a variety of different evolvability metrics. ...
... One popular evolvability estimate in ER is to measure how many distinct behaviors occur among a random sample of an individual's offspring (Lehman and Stanley, 2011b, 2013; Mengistu et al., 2016). To instead calculate this quantity exactly, behaviors are first discretized, by superimposing a regular grid over the space of possible behaviors, where all behaviors contained by a grid square are considered the same. ...
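The grid discretization described above reduces to a few lines (a sketch; `cell_size` stands for the experimenter-chosen resolution of the superimposed grid):

```python
def distinct_behaviors(behaviors, cell_size=1.0):
    """Count distinct behaviors by superimposing a regular grid over the
    behavior space; behaviors in the same grid square count as one."""
    return len({tuple(int(c // cell_size) for c in b) for b in behaviors})
```

Note that the count depends on `cell_size`: a coarser grid merges more behaviors into the same square and thus yields a lower, less discriminating evolvability score.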
... It is important to note two distinct components of this definition: that there is variation (i.e., diversity) being passed from parent to offspring, and that this variation leads to positive effects on fitness. Interestingly and importantly, measures and studies from artificial life (a primary domain of interest for evolvability studies related to artificial evolution) regard evolvability purely as adaptation (Medvet et al., 2017; Veenstra et al., 2020; Liu et al., 2022; Tarapore and Mouret, 2015), or evolvability as diversification (Mengistu et al., 2016; Gajewski et al., 2019; Lehman and Stanley, 2011b, 2013; Lim et al., 2021; Carlo et al., 2021), but not both. ...
... Searching directly for evolvability has become a recently popular trend. In Evolvability Search (Mengistu et al., 2016), the fitness function of a traditional EA rewards high evolvability (in this diversity-oriented interpretation, the number of distinct behaviors in the set of offspring generated by an individual) instead of rewarding the maximization of a domain-specific objective. This algorithm is shown to outperform both greedy optimization and novelty search (Lehman and Stanley, 2011a). ...
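Concretely, one generation of such an EA might look like the following sketch, with hypothetical `mutate` and `behavior` operators and fitness defined as the number of distinct sampled-offspring behaviors; Mengistu et al.'s actual implementation differs in details such as the underlying neuroevolution algorithm and selection scheme.

```python
import random

def evolvability_search_generation(population, mutate, behavior,
                                   samples=20, rng=None):
    """One generation where fitness is the number of distinct behaviors
    among an individual's sampled offspring (sketch of Evolvability Search)."""
    rng = rng or random.Random(0)

    def fitness(g):
        # evolvability estimate: distinct behaviors among sampled offspring
        return len({behavior(mutate(g, rng)) for _ in range(samples)})

    scored = [(fitness(g), g) for g in population]
    children = []
    for _ in range(len(population)):
        # binary tournament on the evolvability estimate
        winner = max(rng.sample(scored, 2), key=lambda t: t[0])
        children.append(mutate(winner[1], rng))
    return children
```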
... Gajewski et al. [110] introduced "Evolvability ES", an ES-based meta-learning algorithm for RL tasks. It combines concepts from evolvability search [111], ESs [4], and MAML [102] to encourage searching for individuals whose immediate offspring show signs of behavioral diversity (that is, it searches for parameter vectors whose perturbations lead to differing behaviors) [111]. Consequently, Evolvability ES facilitates adaptation and generalization while leveraging the scalability of ESs [110,112]. ...
Preprint
Full-text available
Deep Reinforcement Learning (DRL) and Evolution Strategies (ESs) have surpassed human-level control in many sequential decision-making problems, yet many open challenges still exist. To get insights into the strengths and weaknesses of DRL versus ESs, an analysis of their respective capabilities and limitations is provided. After presenting their fundamental concepts and algorithms, a comparison is provided on key aspects such as scalability, exploration, adaptation to dynamic environments, and multi-agent learning. Then, the benefits of hybrid algorithms that combine concepts from DRL and ESs are highlighted. Finally, to have an indication about how they compare in real-world applications, a survey of the literature for the set of applications they support is provided.
... Gajewski et al. [110] introduced "Evolvability ES", an ES-based meta-learning algorithm for RL tasks. It combines concepts from evolvability search [111], ESs [4], and MAML [102] to encourage searching for individuals whose immediate offspring show signs of behavioral diversity (that is, it searches for parameter vectors whose perturbations lead to differing behaviors) [111]. Consequently, Evolvability ES facilitates adaptation and generalization while leveraging the scalability of ESs [110,112]. ...
Preprint
Full-text available
Deep Reinforcement Learning (DRL) has the potential to surpass human-level control in sequential decision-making problems. Evolution Strategies (ESs) have different characteristics than DRL, yet they are promoted as a scalable alternative. To get insights into their strengths and weaknesses, in this paper, we put the two approaches side by side. After presenting the fundamental concepts and algorithms for each of the two approaches, they are compared from the perspectives of scalability, exploration, adaptation to dynamic environments, and multi-agent learning. Then, the paper discusses hybrid algorithms, combining aspects of both DRL and ESs, and how they attempt to capitalize on the benefits of both techniques. Lastly, both approaches are compared based on the set of applications they support, showing their potential for tackling real-world problems. This paper aims to present an overview of how DRL and ESs can be used, either independently or in unison, to solve specific learning tasks. It is intended to guide researchers to select which method suits them best and provides a bird's eye view of the overall literature in the field. Further, we also provide application scenarios and open challenges.
... This is possible since the genotype-phenotype map has the ability to transform random genotypic variation into an advantageous distribution of phenotypic variation [31]. A simple example which is often used to demonstrate this property is how nature encodes development plans for symmetric bodies [17,19]. Because of the way the developmental program for the body is encoded, it is easier for evolution to change the length of both limbs together than to change them separately, which is probably a useful way to explore the space of possible body configurations. ...
... Much effort has been devoted to algorithms which, instead of selecting for individuals with the ability to improve their fitness, select for the ability to generate diverse behaviour in their offspring [19,10]. These algorithms capture a different aspect of evolvability which might be able to utilize the capabilities of indirect encoding just as well. ...
... Evolvability has been defined in numerous ways, and the implications of the term in both the biological and evolutionary computation domains are controversial. It can be defined as an organism's capacity to generate heritable phenotypic variation [66], the increased potential of an individual or population for further evolution [89], or the ability of random variations to sometimes produce improvement [168]. Recently, Wilder and Stanley [176] have advocated for distinguishing between the concepts of evolvable individuals and evolvable populations. ...
... Which models could be used to learn the patterns behind an individual's propensity to evolve? Another related question is whether problem structure plays any role in the algorithms that try to directly evolve for evolvability or in those EAs which indirectly encourage evolvability [63,73,74,89]. ...
Article
Full-text available
The concept of gray-box optimization, in juxtaposition to black-box optimization, revolves around the idea of exploiting the problem structure to implement more efficient evolutionary algorithms (EAs). Work on factorized distribution algorithms (FDAs), whose factorizations are directly derived from the problem structure, has also contributed to showing how exploiting the problem structure produces important gains in the efficiency of EAs. In this paper we analyze the general question of using problem structure in EAs, focusing on confronting work done in gray-box optimization with related research accomplished in FDAs. This contrasted analysis helps us to identify, in current studies on the use of problem structure in EAs, two distinct analytical characterizations of how these algorithms work. Moreover, we claim that these two characterizations collide and compete when it comes to providing a coherent framework to investigate this type of algorithm. To illustrate this claim, we present a contrasted analysis of formalisms, questions, and results produced in FDAs and gray-box optimization. Common underlying principles in the two approaches, which are usually overlooked, are identified and discussed. Besides, an extensive review of previous research related to different uses of the problem structure in EAs is presented. The paper also elaborates on some of the questions that arise when extending the use of problem structure in EAs, such as the question of evolvability, high cardinality of the variables and large definition sets, constrained and multi-objective problems, etc. Finally, emergent approaches that exploit neural models to capture the problem structure are covered.
... One common feature of these two algorithms is the use of very small sample sizes to estimate evolvability, which is surprising given the argument presented in Section 4.1. That selection for evolvability estimates leads to increased evolvability agrees with the findings of Mengistu et al. (2016). ...
... Mengistu et al. (2016) describe the evolvability search (ES) algorithm. It uses the look-ahead method, producing a pool of offspring in order to estimate evolvability by sampling. ...
Thesis
Full-text available
This thesis is about direct selection for evolvability in artificial evolutionary systems. The origin of evolvability—the capacity for adaptive evolution—is of great interest to evolutionary biologists, who have proposed many indirect selection mechanisms. In evolutionary computation and artificial life, these indirect selection mechanisms have been co-opted in order to engineer the evolution of evolvability into artificial evolution simulations. Very little work has been done on direct selection, and so this thesis investigates the extent to which we should select for evolvability. I show in a simple theoretical model the existence of conditions in which selection for a weighted sum of fitness and evolvability achieves greater long-term fitness than selection for fitness alone. There are no conditions, within the model, in which it is beneficial to select more for evolvability than for fitness. Subsequent empirical work compares episodic group selection for evolvability (EGS)—an algorithm that selects for evolvability estimates calculated from noisy samples—with an algorithm that selects for fitness alone on four fitness functions taken from the literature. The long-term fitness achieved by EGS does not exceed that of selection for fitness alone in any region of the parameter space. However, there are regions of the parameter space in which EGS achieves greater long-term evolvability. A modification of the algorithm, EGS-AR, which incorporates a recent best-arm identification algorithm, reliably outperforms EGS across the parameter space, in terms of both eventual fitness and eventual evolvability. The thesis concludes that selection for estimated evolvability may be a viable strategy for solving time-varying problems.
... In contrast to objective-driven search, a consistent relationship often holds between divergent selection and evolvability (Lehman and Stanley, 2011b, 2013; Lehman and Miikkulainen, 2015; Wilder and Stanley, 2015; Mengistu et al., 2016). Importantly, unlike with static measures of progress, measures of divergence are relative to the current and past states of the search process. ...
... Beyond theoretical arguments, empirical studies have demonstrated that divergent search often results in higher evolvability than objective-based search (Lehman and Stanley, 2011b, 2013; Lehman and Miikkulainen, 2015; Wilder and Stanley, 2015; Mengistu et al., 2016). Other studies have highlighted that objective-based search often cannot fully exploit features that enable greater potential for evolvability, e.g. ...
Article
Full-text available
An ambitious goal in evolutionary robotics (ER) is to evolve increasingly complex robotic behaviors with minimal human design effort. Reaching this goal requires evolutionary algorithms that can unlock from genetic encodings their latent potential for evolvability. One issue clouding this goal is conceptual confusion about evolvability that often obscures important or desirable aspects of evolvability. The danger from such confusion is that it may establish unrealistic goals for evolvability that prove unproductive in practice. An important issue separate from conceptual confusion is the common misalignment between selection and evolvability in ER. While more expressive encodings can represent higher-level adaptations (e.g. sexual reproduction or developmental systems) that increase long-term evolutionary potential (i.e. evolvability), realizing such potential requires gradients of fitness and evolvability to align. In other words, selection is often a critical factor limiting increasing evolvability. Thus, drawing from a series of recent papers, this article seeks to both (1) clarify and focus the ways in which the term evolvability is used within artificial evolution and (2) argue for the importance of one type of selection, i.e. divergent selection, for enabling evolvability. The main argument is that there is a fundamental connection between divergent selection and evolvability (on both the individual and population level) that does not hold for typical goal-oriented selection. The conclusion is that selection pressure plays a critical role in realizing the potential for evolvability and that divergent selection in particular provides a principled mechanism for encouraging evolvability in artificial evolution.
... Experiments compare novel selection functions learned through Sel4Sel against baseline selection functions from literature that have been explicitly designed to encourage both fitness-based adaptation and diversification. Importantly, evolvability requires both of these pressures, yet surprisingly most quantitative studies of evolvability in artificial life focus on either evolvability as adaptation (Medvet et al., 2017; Veenstra et al., 2020) or evolvability as diversification (Mengistu et al., 2016; Gajewski et al., 2019), but not both. Baseline comparisons are chosen to represent various methods of encouraging evolvability, without explicitly requiring both adaptation and diversification, so as to remain agnostic to the ideal balance between the two. ...
Preprint
Full-text available
Inspired by natural evolution, evolutionary search algorithms have proven remarkably capable due to their dual abilities to radiantly explore through diverse populations and to converge to adaptive pressures. A large part of this behavior comes from the selection function of an evolutionary algorithm, which is a metric for deciding which individuals survive to the next generation. In deceptive or hard-to-search fitness landscapes, greedy selection often fails, thus it is critical that selection functions strike the correct balance between gradient-exploiting adaptation and exploratory diversification. This paper introduces Sel4Sel, or Selecting for Selection, an algorithm that searches for high-performing neural-network-based selection functions through a meta-evolutionary loop. Results on three distinct bitstring domains indicate that Sel4Sel networks consistently match or exceed the performance of both fitness-based selection and benchmarks explicitly designed to encourage diversity. Analysis of the strongest Sel4Sel networks reveals a general tendency to favor highly novel individuals early on, with a gradual shift towards fitness-based selection as deceptive local optima are bypassed.
... Estimating the evolvability of an individual is not straightforward. Some algorithms estimate it via sampling [24], which requires a huge number of costly evaluations. Finding a selective pressure that would be simple and cheap to compute while indirectly fostering evolvability is thus of critical interest. ...
Preprint
Evolvability is an important feature that impacts the ability of evolutionary processes to find interesting novel solutions and to deal with changing conditions of the problem to solve. The estimation of evolvability is not straightforward and is generally too expensive to be directly used as selective pressure in the evolutionary process. Indirectly promoting evolvability as a side effect of other easier and faster to compute selection pressures would thus be advantageous. In an unbounded behavior space, it has already been shown that evolvable individuals naturally appear and tend to be selected as they are more likely to invade empty behavior niches. Evolvability is thus a natural byproduct of the search in this context. However, practical agents and environments often impose limits on the reachable behavior space. How do these boundaries impact evolvability? In this context, can evolvability still be promoted without explicitly rewarding it? We show that Novelty Search implicitly creates a pressure for high evolvability even in bounded behavior spaces, and explore the reasons for such a behavior. More precisely we show that, throughout the search, the dynamic evaluation of novelty rewards individuals which are very mobile in the behavior space, which in turn promotes evolvability.
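The "dynamic evaluation of novelty" referred to above is typically the k-nearest-neighbor sparseness measure of Novelty Search, which can be sketched as follows (a minimal version; in practice `seen` combines an archive of past behaviors with the current population):

```python
import numpy as np

def novelty(b, seen, k=5):
    """Novelty of behavior b: mean distance to its k nearest neighbors
    among previously seen behaviors (archive plus current population)."""
    d = np.sort(np.linalg.norm(np.asarray(seen) - np.asarray(b), axis=1))
    return d[:k].mean()
```

Because `seen` grows as the search proceeds, the same behavior scores lower novelty over time, which is what makes the evaluation dynamic and rewards lineages that keep moving through the behavior space.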
... If a structure is repeatedly optimized (or selected) to move in some dimensions and not others, it can restructure itself to make traversing the preferred dimensions of variation more likely than traversing non-preferred dimensions. In the literature of biological and computational evolution, this phenomenon is called evolvability [82,83,187,27,127,186,107]. Lehman and Stanley [89] showed that Novelty Search produces more evolvability than objective-based search. ...
Preprint
Perhaps the most ambitious scientific quest in human history is the creation of general artificial intelligence, which roughly means AI that is as smart as or smarter than humans. The dominant approach in the machine learning community is to attempt to discover each of the pieces required for intelligence, with the implicit assumption that some future group will complete the Herculean task of figuring out how to combine all of those pieces into a complex thinking machine. I call this the "manual AI approach." This paper describes another exciting path that ultimately may be more successful at producing general AI. It is based on the clear trend in machine learning that hand-designed solutions eventually are replaced by more effective, learned solutions. The idea is to create an AI-generating algorithm (AI-GA), which automatically learns how to produce general AI. Three Pillars are essential for the approach: (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments. I argue that either approach could produce general AI first, and both are scientifically worthwhile irrespective of which is the fastest path. Because both are promising, yet the ML community is currently committed to the manual approach, I argue that our community should increase its research investment in the AI-GA approach. To encourage such research, I describe promising work in each of the Three Pillars. I also discuss AI-GA-specific safety and ethical considerations. Because it may be the fastest path to general AI, and because it is inherently scientifically interesting to understand the conditions in which a simple algorithm can produce general AI (as happened on Earth, where Darwinian evolution produced human intelligence), I argue that the pursuit of AI-GAs should be considered a new grand challenge of computer science research.
... As a result of this limitation to genetic diversity, more recent approaches directly reward a diversity of behaviours 63,68 , and further research has led to related ideas such as directly evolving for desired qualities such as curiosity 54 , evolvability 69 or generating surprise 70 . A representative approach 68 involves a multi-objective evolutionary algorithm 71,72 that rewards individuals both for increasing their fitness and for diverging from other individuals in experimenter-specified characterizations of behaviour in the domain. ...
Article
Full-text available
Much of recent machine learning has focused on deep learning, in which neural network weights are trained through variants of stochastic gradient descent. An alternative approach comes from the field of neuroevolution, which harnesses evolutionary algorithms to optimize neural networks, inspired by the fact that natural brains themselves are the products of an evolutionary process. Neuroevolution enables important capabilities that are typically unavailable to gradient-based approaches, including learning neural network building blocks (for example activation functions), hyperparameters, architectures and even the algorithms for learning themselves. Neuroevolution also differs from deep learning (and deep reinforcement learning) by maintaining a population of solutions during search, enabling extreme exploration and massive parallelization. Finally, because neuroevolution research has (until recently) developed largely in isolation from gradient-based neural network research, it has developed many unique and effective techniques that should be effective in other machine learning areas too. This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search. Our hope is to inspire renewed interest in the field as it meets the potential of the increasing computation available today, to highlight how many of its ideas can provide an exciting resource for inspiration and hybridization to the deep learning, deep reinforcement learning and machine learning communities, and to explain how neuroevolution could prove to be a critical tool in the long-term pursuit of artificial general intelligence.
... Both evolvability and neutrality are relevant properties for different evolutionary algorithms, and they are not peculiar to GE frameworks. Several ways of quantifying these properties have been proposed, in order to capture the different nuances of neutrality or to adapt the measure to the particular EA considered: e.g., [51], [52] for evolvability and [53], [54], [23], [55] for neutrality. We chose to measure evolvability with the method introduced in [56] and later used in [39] for GE: while in [56] evolvability is used to compare different problems tackled with the same representation, in [39] the same measure is used to compare different representations on the same set of problems, as in the present work. ...
Article
Grammatical evolution (GE) is one of the most widespread techniques in evolutionary computation. Genotypes in GE are bit strings, while phenotypes are strings of a language defined by a user-provided context-free grammar. In this paper, we propose a novel procedure for mapping genotypes to phenotypes that we call weighted hierarchical GE (WHGE). WHGE imposes a form of hierarchy on the genotype and encodes grammar symbols with a varying number of bits based on the relative expressive power of those symbols. WHGE does not impose any constraint on the overall GE framework; in particular, WHGE may handle recursive grammars, uses the classical genetic operators, and does not need any bound on the size of phenotypes to be defined in advance. We experimentally assessed our proposal in depth on a set of challenging and carefully selected benchmarks, comparing the results of the standard GE framework as well as two of the most significant enhancements proposed in the literature: 1) position-independent GE and 2) structured GE. Our results show that WHGE delivers very good results in terms of fitness as well as in terms of the properties of the genotype-phenotype mapping procedure.
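For readers unfamiliar with GE, the classical genotype-to-phenotype mapping that WHGE builds on can be sketched as follows: codons (integers decoded from the bit string) are read left to right, and each one picks a production rule for the leftmost non-terminal via a modulo rule. This is a sketch of standard GE only, not of WHGE's weighted hierarchical variant; the grammar representation and wrapping limit are illustrative:

```python
def ge_map(codons, grammar, start="<expr>", max_wraps=2):
    # Standard GE mapping: repeatedly expand the leftmost non-terminal,
    # choosing production (next codon) mod (number of rules). If the codon
    # string is exhausted, it is re-read from the start ("wrapping"); too
    # many wraps means the mapping failed to terminate.
    seq, i, wraps = [start], 0, 0
    while any(s in grammar for s in seq):
        if i == len(codons):
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None
        j = next(k for k, s in enumerate(seq) if s in grammar)
        rules = grammar[seq[j]]
        seq[j:j + 1] = rules[codons[i] % len(rules)]
        i += 1
    return "".join(seq)
```

With a toy grammar `{"<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]]}`, the codon list `[0, 1, 2]` derives the phenotype `x+1`.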
... This eliminates the need to store the examples for the support set, and allows a continuous generation of models, which is especially suitable for generating a continuum of regression models. Other few-shot learning techniques include using Siamese structures [12] and evolutionary methods [14]. ...
Preprint
Compared to humans, machine learning models generally require significantly more training examples and fail to extrapolate from experience to solve previously unseen challenges. To help close this performance gap, we augment single-task neural networks with a meta-recognition model which learns a succinct model code via its autoencoder structure, using just a few informative examples. The model code is then employed by a meta-generative model to construct parameters for the task-specific model. We demonstrate that for previously unseen tasks, without additional training, this Meta-Learning Autoencoder (MeLA) framework can build models that closely match the true underlying models, with loss significantly lower than that of fine-tuned baseline networks, and performance that compares favorably with state-of-the-art meta-learning algorithms. MeLA also adds the ability to identify influential training examples and to predict which additional data will be most valuable to acquire in order to improve model prediction.
... They have proven particularly instrumental in the field of evolutionary robotics, for instance by allowing robots to overcome mechanical damage [3], or by evolving complex neural networks for maze navigation [27]. Several variants of these two main algorithms have been proposed, using different containers [29,31] and selection operators [10,13,20]. A unifying framework has been proposed to gather these different variants into a common formalism [4]. ...
Article
Full-text available
Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-standing challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with twice the fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.
... This approach has been applied to learning to optimize neural networks (Hochreiter et al., 2001;Andrychowicz et al., 2016;Li & Malik, 2017), as well as for learning dynamically changing recurrent neural networks (Ha et al., 2017). Similar methods have also been proposed that use evolutionary algorithms (Mengistu et al., 2016). One recent approach learns both the weight initialization and the optimizer, for the purpose of few-shot image recognition (Ravi & Larochelle, 2017). ...
Article
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on a few-shot image classification benchmark, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
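A toy sketch can make the MAML meta-objective concrete: train the initialization so that one gradient step on a new task already performs well. The version below is a deliberate first-order simplification (gradients are not propagated through the inner step, as in first-order MAML) on a one-parameter linear regression family y = a·x; the task distribution, step sizes, and model are illustrative only, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, a, x):
    # MSE and its gradient for the model y = w * x on a task with true slope a.
    return np.mean(((w - a) * x) ** 2), 2.0 * np.mean(x ** 2) * (w - a)

w = 5.0                     # meta-learned initial weight, deliberately far off
alpha, beta = 0.1, 0.05     # inner (adaptation) and outer (meta) step sizes
for _ in range(2000):
    a = rng.uniform(-2.0, 2.0)              # sample a task: its target slope
    x_support = rng.uniform(-1.0, 1.0, 10)
    _, g = loss_and_grad(w, a, x_support)
    w_adapted = w - alpha * g               # one inner gradient step
    x_query = rng.uniform(-1.0, 1.0, 10)    # evaluate adapted weight on fresh data
    _, g_meta = loss_and_grad(w_adapted, a, x_query)
    w -= beta * g_meta                      # first-order meta-update
```

After meta-training, `w` sits where a single inner step moves it close to any slope in the task family, which is the "easy to fine-tune" property the abstract describes.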
Chapter
Indirect encoding is a promising area of research in machine learning and evolutionary computation; however, it is rarely able to achieve performance on par with state-of-the-art directly encoded methods. One of the most important properties of indirect encoding is the ability to control exploration during learning by transforming random genotypic variation into an arbitrary distribution of phenotypic variation. This gives indirect encoding a capacity to learn to be adaptable in a way which is not possible for direct encoding. However, during normal objective-based learning, there is no direct selection for adaptability, which not only misses an opportunity to improve the ability to learn, but often degrades it as well. The recent meta-learning algorithm MAML makes it possible to directly and efficiently optimize for the ability to adapt. This paper demonstrates that even when indirect encoding can be detrimental to performance in the case of normal learning, when selecting for the ability to adapt, indirect encoding can outperform direct encoding in a fair comparison. The indirect encoding technique Hypernetwork was used on the task of few-shot image classification on the Omniglot dataset. The results show the importance of directly optimizing for adaptability in realizing the powerful potential of indirect encoding.
Conference Paper
We present AutoMap, a pair of methods for automatic generation of evolvable genotype-phenotype mappings. Both use an artificial neural network autoencoder, trained on phenotypes harvested from fitness peaks, as the basis for a genotype-phenotype mapping. In the first, the decoder segment of a bottlenecked autoencoder serves as the genotype-phenotype mapping. In the second, a denoising autoencoder serves as the genotype-phenotype mapping. Automatic generation of evolvable genotype-phenotype mappings is demonstrated on the n-legged table problem, a toy problem that defines a simple rugged fitness landscape, and the Scrabble string problem, a more complicated problem that serves as a rough model for linear genetic programming. For both problems, the automatically generated genotype-phenotype mappings are found to enhance evolvability.
Conference Paper
A common aim in evolutionary search is to skillfully navigate complex search spaces. Achieving this aim requires creating search algorithms that exploit the structure of such spaces. Yet studying such structure directly is challenging because of the expansiveness of most search spaces. In the context of evolutionary robotics, this paper suggests a middle-ground approach that combines a full-fledged domain with an expressive but limited encoding, and then precomputes the behavior of all possible individuals, enabling evaluation as a look-up table. The product is an experimental playground in which search is non-trivial yet which offers extreme computational efficiency and ground truth about search-space structure. This paper describes the approach and demonstrates a range of its applications, directly exploring deception, behavioral rarity, and generalizations of evolvability in a popular benchmark task. The hope is that the extensible framework enables quick experimentation and idea generation, aiding brainstorming of new search algorithms and measures.
Conference Paper
A common aim across evolutionary search is to skillfully navigate complex search spaces, which requires search algorithms that exploit search space structure. This paper focuses on evolutionary robotics (ER) in particular, wherein controllers for robots are evolved to produce complex behavior. One productive approach for probing search space structure is to analyze properties of fitness landscapes; however, this paper argues that ER may require a fresh perspective for landscape analysis, because ER often goes beyond the black-box setting, i.e. evaluations provide useful information about how robots behave, beyond scalar performance heuristics. Indeed, some ER algorithms explicitly exploit such behavioral information, e.g. to follow gradients of behavioral novelty rather than to climb gradients of increasing performance. Thus well-motivated behavior-aware metrics may aid probing search-space structure in ER. In particular, this paper argues that behavioral conceptions of deception, evolvability, and rarity may help to understand ER landscapes, and seeks to quantify and explore them within a common ER benchmark task. To help this investigation, an expressive but limited encoding is designed, such that the behavior of all possible individuals in the domain can be precomputed. The result is an efficient platform for experimentation that facilitates (1) probing exact quantifications of deception, evolvability, and rarity in the chosen domain, and (2) the ability to efficiently drive search through idealistic ground-truth measures. The results help develop intuitions and suggest possible new ER algorithms. The hope is that the extensible open-source framework enables quick experimentation and idea generation, aiding brainstorming of new search algorithms and measures.
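The precomputation idea in these two papers reduces to enumerating every genotype of a small discrete encoding once, caching its behavior, and then answering both evaluation and ground-truth queries (e.g. behavioral rarity) with dictionary lookups. The bitstring encoding and toy behavior descriptor below are stand-ins for the papers' actual domain:

```python
from itertools import product

def behavior(bits):
    # Toy behavior descriptor for a bitstring genotype: (#ones, integer
    # value of the first three bits). A stand-in for running a full robot
    # simulation, which in the precomputed setting happens only once.
    return (sum(bits), bits[0] * 4 + bits[1] * 2 + bits[2])

N = 12  # with a small discrete encoding, exhaustive enumeration is cheap
table = {g: behavior(g) for g in product((0, 1), repeat=N)}

# Search algorithms can now "evaluate" any individual as a lookup, and
# exact search-space statistics become trivial, e.g. behavioral rarity:
rarity = {}
for bc in table.values():
    rarity[bc] = rarity.get(bc, 0) + 1
rarest = min(rarity, key=rarity.get)
```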
Thesis
Full-text available
Deep Learning, a type of Artificial Intelligence, is transforming many industries including transportation, health care and mobile computing. The main actors behind deep learning are deep neural networks (DNNs). These artificial brains have demonstrated impressive performance on many challenging tasks such as synthesizing and recognizing speech, driving cars, and even detecting cancer from medical scans. Given their excellent performance and widespread applications in everyday life, it is important to understand: (1) how DNNs function internally; (2) why they perform so well; and (3) when they fail. Answering these questions would allow end-users (e.g. medical doctors harnessing deep learning to assist them in diagnosis) to gain deeper insights into how these models behave, and therefore more confidence in utilizing the technology in important real-world applications. Artificial neural networks traditionally had been treated as black boxes: little was known about how they arrive at a decision when an input is presented. Similarly, in neuroscience, understanding how biological brains work has also been a long-standing quest. Neuroscientists have discovered neurons in human brains that selectively fire in response to specific, abstract concepts such as Halle Berry or Bill Clinton, informing the discussion of whether learned neural codes are local or distributed. These neurons were identified by finding the preferred stimuli (here, images) that highly excite a specific neuron, which was accomplished by showing subjects many different images while recording a target neuron's activation. Inspired by such neuroscience techniques, my Ph.D. study produced a series of visualization methods that synthesize the preferred stimuli for each neuron in DNNs to shed light on (1) the weaknesses of DNNs, which raise serious concerns about their widespread deployment in critical sectors of our economy and society; and (2) how DNNs function internally.
Some of the notable findings are summarized as follows. First, DNNs are easily fooled, in that it is possible to produce images that are visually unrecognizable to humans but that state-of-the-art DNNs classify as familiar objects with near-certainty confidence (i.e., labeling white-noise images as "school bus"). These images can be optimized to fool the DNN regardless of whether we treat the network as a white box or a black box (i.e., whether or not we have access to the network parameters). These results shed more light on the inner workings of DNNs and also question the security and reliability of deep learning applications. Second, our visualization methods reveal that DNNs can automatically learn a hierarchy of increasingly abstract features from the input space that are useful for solving a given task. In addition, we found that neurons in DNNs are often multifaceted, in that a single neuron fires for a variety of different input patterns (i.e., it is invariant to changes in the input). These observations align with the common wisdom previously established for both the human visual cortex and DNNs. Lastly, many machine learning hobbyists and scientists have successfully applied our methods to visualize their own DNNs or even to generate high-quality art images. We also turn the visualization frameworks into (1) an art-generator algorithm and (2) a state-of-the-art image generative model, making contributions to the fields of evolutionary computation and generative modeling, respectively.
Article
Full-text available
The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g. churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g. producing intelligent software, robot controllers, optimized physical components, and art.
Article
Full-text available
One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments, an ability that is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible for development to constrain the phenotypic space to regions that are evolutionarily more advantageous. Yet the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future, previously unseen environments? Such a capacity suggests a form of foresight facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but also about regularities in the environment that are relevant to future environments. Here we argue that the ability of evolution to discover such regularities is analogous to the ability of learning systems to generalise from past experience. Conversely, the canalisation of evolved developmental processes to past selective environments, and the failure of natural selection to enhance evolvability in future selective environments, is directly analogous to the problem of over-fitting and failure to generalise in machine learning. We show that this analogy arises from an underlying mechanistic equivalence, by showing that conditions corresponding to those that alleviate over-fitting in machine learning enhance the evolution of generalised developmental organisations under natural selection. This equivalence provides access to a well-developed theoretical framework that enables us to characterise the conditions where natural selection will find general rather than particular solutions to environmental conditions.
Article
Full-text available
Hierarchical organization -- the recursive composition of sub-modules -- is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force--the cost of connections--promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.
Conference Paper
Full-text available
The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search avoids this problem by encouraging a search in all interesting directions. That occurs by replacing a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a novelty pressure in image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g. churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g. producing intelligent software, robot controllers, optimized physical components, and art.
Article
Full-text available
Many fields use search algorithms, which automatically explore a search space to find high-performing solutions: chemists search through the space of molecules to discover new drugs; engineers search for stronger, cheaper, safer designs; scientists search for models that best explain data; etc. The goal of search algorithms has traditionally been to return the single highest-performing solution in a search space. Here we describe a new, fundamentally different type of algorithm that is more useful because it provides a holistic view of how high-performing solutions are distributed throughout a search space. It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. For example, a drug company may wish to understand how performance changes as the size of molecules and their cost to produce vary. MAP-Elites produces a large diversity of high-performing, yet qualitatively different solutions, which can be more helpful than a single high-performing solution. Interestingly, because MAP-Elites explores more of the search space, it also tends to find a better overall solution than state-of-the-art search algorithms. We demonstrate the benefits of this new algorithm in three different problem domains ranging from producing modular neural networks to designing simulated and real soft robots. Because MAP-Elites (1) illuminates the relationship between performance and dimensions of interest in solutions, (2) returns a set of high-performing, yet diverse solutions, and (3) improves finding a single, best solution, it will advance science and engineering.
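The core MAP-Elites loop described in this abstract is short enough to sketch directly: discretize the user-chosen behavior dimensions into cells, keep only the highest-performing solution (the "elite") per cell, and generate new candidates by mutating randomly chosen elites. The toy evaluation function, cell discretization, and parameters below are illustrative; any domain-specific fitness and behavior descriptor can be substituted:

```python
import random

def evaluate(genome):
    # Toy domain: fitness is the negative squared norm; the behavior
    # descriptor is the (discretized) first two genome values.
    fitness = -sum(g * g for g in genome)
    cell = (int(genome[0] * 5), int(genome[1] * 5))
    return fitness, cell

def map_elites(dim=5, iters=5000, seed_size=50):
    archive = {}  # cell -> (fitness, genome): one elite per cell
    for i in range(iters):
        if i < seed_size or not archive:
            genome = [random.uniform(-1, 1) for _ in range(dim)]
        else:  # select a random elite and mutate it
            _, parent = random.choice(list(archive.values()))
            genome = [g + random.gauss(0, 0.1) for g in parent]
        fitness, cell = evaluate(genome)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)  # replace this cell's elite
    return archive
```

The returned archive is exactly the "map" of the abstract: a grid of qualitatively different, locally high-performing solutions rather than a single best individual.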
Article
Full-text available
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. 
Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
Conference Paper
Full-text available
Novelty Search, a new type of Evolutionary Algorithm, has shown much promise in the last few years. Instead of selecting for phenotypes that are closer to an objective, Novelty Search assigns rewards based on how different the phenotypes are from those already generated. A common criticism of Novelty Search is that it is effectively random or exhaustive search because it tries solutions in an unordered manner until a correct one is found. Its creators respond that over time Novelty Search accumulates information about the environment in the form of skills relevant to reaching uncharted territory, but to date no evidence for that hypothesis has been presented. In this paper we test that hypothesis by transferring robots evolved under Novelty Search to new environments (here, mazes) to see if the skills they've acquired generalize. Three lines of evidence support the claim that Novelty Search agents do indeed learn general exploration skills. First, robot controllers evolved via Novelty Search in one maze and then transferred to a new maze explore significantly more of the new environment than non-evolved (randomly generated) agents. Second, a Novelty Search process to solve the new mazes works significantly faster when seeded with the transferred controllers versus randomly-generated ones. Third, no significant difference exists when comparing two types of transferred agents: those evolved in the original maze under (1) Novelty Search vs. (2) a traditional, objective-based fitness function. The evidence gathered suggests that, like traditional Evolutionary Algorithms with objective-based fitness functions, Novelty Search is not a random or exhaustive search process, but instead is accumulating information about the environment, resulting in phenotypes possessing skills needed to explore their world.
Conference Paper
Full-text available
One of humanity’s grand scientific challenges is to create artificially intelligent robots that rival natural animals in intelligence and agility. A key enabler of such animal complexity is the fact that animal brains are structurally organized in that they exhibit modularity and regularity, amongst other attributes. Modularity is the localization of function within an encapsulated unit. Regularity refers to the compressibility of the information describing a structure, and typically involves symmetries and repetition. These properties improve evolvability, but they rarely emerge in evolutionary algorithms without specific techniques to encourage them. It has been shown that (1) modularity can be evolved in neural networks by adding a cost for neural connections and, separately, (2) that the HyperNEAT algorithm produces neural networks with complex, functional regularities. In this paper we show that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity. Our results represent a stepping stone towards the goal of producing artificial neural networks that share key organizational properties with the brains of natural animals.
Article
Full-text available
Evolutionary robotics relies upon techniques involving the evolution of artificial neural networks to synthesize sensorimotor control systems for actual or physically simulated robots. This paper is a comparative study of three principal types of artificial neural networks: the Continuous Time Recurrent Neural Network (CTRNN), the Plastic Neural Network (PNN) and the GasNet. An attempt is made to evolve networks capable of achieving locomotion with a physically simulated biped. Of the 14 distinct networks tested, GasNets were the only network to achieve cyclical locomotion, although CTRNNs were able to attain a higher level of average fitness.
Article
Full-text available
Why evolvability appears to have increased over evolutionary time is an important unresolved biological question. Unlike most candidate explanations, this paper proposes that increasing evolvability can result without any pressure to adapt. The insight is that if evolvability is heritable, then an unbiased drifting process across genotypes can still create a distribution of phenotypes biased towards evolvability, because evolvable organisms diffuse more quickly through the space of possible phenotypes. Furthermore, because phenotypic divergence often correlates with founding niches, niche founders may on average be more evolvable, which through population growth provides a genotypic bias towards evolvability. Interestingly, the combination of these two mechanisms can lead to increasing evolvability without any pressure to out-compete other organisms, as demonstrated through experiments with a series of simulated models. Thus rather than from pressure to adapt, evolvability may inevitably result from any drift through genotypic space combined with evolution's passive tendency to accumulate niches.
Article
Full-text available
A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks (their organization as functional, sparsely connected subunits), but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes.
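The trade-off between performance and connection cost described above can be sketched minimally as follows. Note that the paper itself uses true multi-objective (Pareto-based) selection, so this weighted-sum variant, with the hypothetical names `network_cost` and `combined_score` and the weight `alpha`, is only an illustrative simplification:

```python
def network_cost(adjacency):
    """Connection cost: the number of nonzero weights in the network."""
    return sum(1 for row in adjacency for w in row if w != 0)

def combined_score(performance, adjacency, alpha=0.25):
    """Weighted-sum stand-in for the performance/connection-cost trade-off
    (the original work uses Pareto-based multi-objective selection)."""
    return performance - alpha * network_cost(adjacency)

# A sparse network beats an equally performing dense one under this score.
sparse = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
dense = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(combined_score(0.9, sparse))  # 0.9 - 0.25*2 = 0.4
print(combined_score(0.9, dense))   # 0.9 - 0.25*6 = -0.6
```

Under such a pressure, sparsely connected (and, per the paper's results, more modular) networks are favored whenever performance is comparable.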
Chapter
Full-text available
Novelty search is a recent and promising approach to evolve neurocontrollers, especially to drive robots. The main idea is to maximize the novelty of behaviors instead of the efficiency. However, abandoning the efficiency objective(s) may be too radical in many contexts. In this paper, a Pareto-based multi-objective evolutionary algorithm is employed to reconcile novelty search with objective-based optimization by following a multiobjectivization process. Several multiobjectivizations based on behavioral novelty and on behavioral diversity are compared on a maze navigation task. Results show that the bi-objective variant “Novelty + Fitness” is better at fine-tuning behaviors than basic novelty search, while keeping a comparable number of iterations to converge.
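The bi-objective “Novelty + Fitness” scheme rests on Pareto dominance over the two scores. A minimal sketch, with hypothetical helper names and both objectives maximized, might look like:

```python
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """True if a Pareto-dominates b when maximizing both objectives."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores: List[Tuple[float, float]]) -> List[int]:
    """Indices of the non-dominated (novelty, fitness) pairs."""
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Each individual is scored on (novelty, fitness); both are maximized.
scores = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(scores))  # [0, 1, 2]: (0.4, 0.4) is dominated by (0.5, 0.5)
```

An algorithm such as NSGA-II then selects parents preferentially from this front, so neither pure novelty nor pure fitness dominates the search.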
Conference Paper
Full-text available
The bootstrap problem is often recognized as one of the main challenges of evolutionary robotics: if all individuals from the first randomly generated population perform equally poorly, the evolutionary process won't generate any interesting solution. To overcome this lack of fitness gradient, we propose to efficiently explore behaviors until the evolutionary process finds an individual with a non-minimal fitness. To that aim, we introduce an original diversity-preservation mechanism, called behavioral diversity, that relies on a distance between behaviors (instead of genotypes or phenotypes) and multi-objective evolutionary optimization. This approach has been successfully tested and compared to a recently published incremental evolution method (multi-subgoal evolution) on the evolution of a neuro-controller for a light-seeking mobile robot. Results obtained with these two approaches are qualitatively similar although the introduced one is less directed than multi-subgoal evolution.
Article
Full-text available
This paper investigates how an evolutionary algorithm with an indirect encoding exploits the property of phenotypic regularity, an important design principle found in natural organisms and engineered designs. We present the first comprehensive study showing that such phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases. Such an ability to produce regular solutions that can exploit the regularity of problems is an important prerequisite if evolutionary algorithms are to scale to high-dimensional real-world problems, which typically contain many regularities, both known and unrecognized. The indirect encoding in this case study is HyperNEAT, which evolves artificial neural networks (ANNs) in a manner inspired by concepts from biological development. We demonstrate that, in contrast to two direct encoding controls, HyperNEAT produces both regular behaviors and regular ANNs, which enables HyperNEAT to significantly outperform the direct encodings as regularity increases in three problem domains. We also show that the types of regularities HyperNEAT produces can be biased, allowing domain knowledge and preferences to be injected into the search. Finally, we examine the downside of a bias toward regularity. Even when a solution is mainly regular, some irregularity may be needed to perfect its functionality. This insight is illustrated by a new algorithm called HybrID that hybridizes indirect and direct encodings, which matched HyperNEAT's performance on regular problems yet outperformed it on problems with some irregularity. HybrID's ability to improve upon the performance of HyperNEAT raises the question of whether indirect encodings may ultimately excel not as stand-alone algorithms, but by being hybridized with a further process of refinement, wherein the indirect encoding produces patterns that exploit problem regularity and the refining process modifies that pattern to capture irregularities. 
This paper thus paints a more complete picture of indirect encodings than prior studies because it analyzes the impact of the continuum between irregularity and regularity on the performance of such encodings, and ultimately suggests a path forward that combines indirect encodings with a separate process of refinement.
Conference Paper
Full-text available
A challenge for current evolutionary algorithms is to yield highly evolvable representations like those in nature. Such evolvability in natural evolution is encouraged through selection: Lineages better at molding to new niches are less susceptible to extinction. Similar selection pressure is not generally present in evolutionary algorithms; however, the first hypothesis in this paper is that novelty search, a recent evolutionary technique, also selects for evolvability because it rewards lineages able to continually radiate new behaviors. Results in experiments in a maze-navigation domain in this paper support that novelty search finds more evolvable representations than regular fitness-based search. However, though novelty search outperforms fitness-based search in a second biped locomotion experiment, it proves no more evolvable than fitness-based search because delicately balanced behaviors are more fragile in that domain. The second hypothesis is that such fragility can be mitigated through self-adaptation, whereby genomes influence their own reproduction. Further experiments in fragile domains with novelty search and self-adaptation indeed demonstrate increased evolvability, while, interestingly, adding self-adaptation to fitness-based search decreases evolvability. Thus, selecting for novelty may often facilitate evolvability when representations are not overly fragile; furthermore, achieving the potential of self-adaptation may often critically depend upon the reward scheme driving evolution.
Conference Paper
Full-text available
In Evolutionary Robotics (ER), explicitly rewarding behavioral diversity has recently been shown to generate efficient results without recourse to complex fitness functions. The principle of such approaches is to explicitly encourage diversity in the robot behavior space instead of in the space of genotypes (the space explored by the evolutionary algorithm) or the space of phenotypes (the space of robot controllers and morphologies). To implement such approaches, a similarity between behaviors needs to be evaluated but, up to now, the similarity measures used have been problem-specific. The goal of this work is to explore generic behavioral similarity measures that rely only on sensorimotor values. With such a measure, we managed to evolve the topology and the parameters of neuro-controllers that make a simulated robot go towards a ball, take it, find a basket, put the ball into the basket, perform a half-turn, search and take another ball, put it into the basket, etc. In this experiment, two objectives were simultaneously optimized with NSGA-II: the number of collected balls and the generic behavioral diversity objective. Several generic behavioral measures are compared. To confirm the interpretation of the behavioral diversity objective and in an attempt to characterize behavioral similarity measures, they are also compared to human-made behavioral similarity evaluations. The measures turn out to classify behaviors globally as humans do, but with no clear correlation between closeness to the human classification and efficiency within an evolutionary run.
Conference Paper
Full-text available
A major goal for researchers in neuroevolution is to evolve artificial neural networks (ANNs) that can learn during their lifetime. Such networks can adapt to changes in their environment that evolution on its own cannot anticipate. However, a profound problem with evolving adaptive systems is that if the impact of learning on the fitness of the agent is only marginal, then evolution is likely to produce individuals that do not exhibit the desired adaptive behavior. Instead, because it is easier at first to improve fitness without evolving the ability to learn, they are likely to exploit domain-dependent static (i.e. non-adaptive) heuristics. This paper proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm, which opens up a new avenue in the evolution of adaptive systems because it can exploit the behavioral difference between learning and non-learning individuals. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely and has shown prior promising results in other domains. This paper shows that novelty search significantly outperforms fitness-based search in a tunably deceptive T-Maze navigation domain because it fosters the emergence of adaptive behavior.
Article
Full-text available
Evolutionary robotics (ER) aims at automatically designing robots or controllers of robots without having to describe their inner workings. To reach this goal, ER researchers primarily employ phenotypes that can lead to an infinite number of robot behaviors and fitness functions that only reward the achievement of the task-and not how to achieve it. These choices make ER particularly prone to premature convergence. To tackle this problem, several papers recently proposed to explicitly encourage the diversity of the robot behaviors, rather than the diversity of the genotypes as in classic evolutionary optimization. Such an approach avoids the need to compute distances between structures and the pitfalls of the noninjectivity of the phenotype/behavior relation; however, it also introduces new questions: how to compare behavior? should this comparison be task specific? and what is the best way to encourage diversity in this context? In this paper, we review the main published approaches to behavioral diversity and benchmark them in a common framework. We compare each approach on three different tasks and two different genotypes. The results show that fostering behavioral diversity substantially improves the evolutionary process in the investigated experiments, regardless of genotype or task. Among the benchmarked approaches, multi-objective methods were the most efficient and the generic, Hamming-based, behavioral distance was at least as efficient as task specific behavioral metrics.
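A generic Hamming-style behavioral distance of the kind benchmarked above can be sketched as the fraction of binarized sensorimotor values on which two trajectories disagree. The function name and the assumption of pre-binarized readings are illustrative, not from the paper:

```python
def hamming_behavior_distance(traj_a, traj_b):
    """Generic behavioral distance: the fraction of binarized sensorimotor
    values at which two equally long robot trajectories differ."""
    assert len(traj_a) == len(traj_b), "trajectories must be the same length"
    diffs = sum(1 for va, vb in zip(traj_a, traj_b)
                for a, b in zip(va, vb) if a != b)
    total = len(traj_a) * len(traj_a[0])
    return diffs / total

# Two toy trajectories of binarized sensor/motor readings per time step.
a = [(0, 1, 1), (1, 1, 0)]
b = [(0, 0, 1), (1, 1, 1)]
print(hamming_behavior_distance(a, b))  # 2 differing values out of 6
```

Because the measure uses only raw sensorimotor streams, it needs no task-specific knowledge, which is what makes it generic in the sense of the paper.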
Article
Full-text available
In evolutionary computation, the fitness function normally measures progress toward an objective in the search space, effectively acting as an objective function. Through deception, such objective functions may actually prevent the objective from being reached. While methods exist to mitigate deception, they leave the underlying pathology untreated: Objective functions themselves may actively misdirect search toward dead ends. This paper proposes an approach to circumventing deception that also yields a new perspective on open-ended evolution. Instead of either explicitly seeking an objective or modeling natural evolution to capture open-endedness, the idea is to simply search for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective. Because many points in the search space collapse to a single behavior, the search for novelty is often feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. By decoupling open-ended search from artificial life worlds, the search for novelty is applicable to real world problems. Counterintuitively, in the maze navigation and biped walking tasks in this paper, novelty search significantly outperforms objective-based search, suggesting the strange conclusion that some problems are best solved by methods that ignore the objective. The main lesson is the inherent limitation of the objective-based paradigm and the unexploited opportunity to guide search through other means.
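The core scoring rule of novelty search, the mean behavioral distance to the k nearest neighbors among previously seen behaviors, can be sketched as follows. This is a simplified version; archive-management details (when behaviors are added, archive size limits) vary by implementation:

```python
import math

def novelty(behavior, seen, k=3):
    """Novelty score: mean distance to the k nearest behaviors in the
    combined archive/population (`seen`); higher means more novel."""
    if not seen:
        return float("inf")  # the first behavior is maximally novel
    dists = sorted(math.dist(behavior, other) for other in seen)
    return sum(dists[:k]) / min(k, len(dists))

# Behaviors characterized as 2-D points (e.g., a robot's final position).
seen = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(novelty((2.0, 2.0), seen, k=2))  # mean of the two nearest distances
```

Selection then favors individuals with high novelty scores, so the search is pushed toward behaviors unlike any generated before, with no reference to the objective.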
Article
Full-text available
We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → b ℓ′⁺ν_ℓ′ b̄ ℓ⁻ν̄_ℓ. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7 (stat) ± 2.9 (syst) GeV/c².
Article
Full-text available
The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms.
Article
Full-text available
Concomitant with the evolution of biological diversity must have been the evolution of mechanisms that facilitate evolution, because of the essentially infinite complexity of protein sequence space. We describe how evolvability can be an object of Darwinian selection, emphasizing the collective nature of the process. We quantify our theory with computer simulations of protein evolution. These simulations demonstrate that rapid or dramatic environmental change leads to selection for greater evolvability. The selective pressure for large-scale genetic moves such as DNA exchange becomes increasingly strong as the environmental conditions become more uncertain. Our results demonstrate that evolvability is a selectable trait and allow for the explanation of a large body of experimental results.
Article
Full-text available
Biological networks have an inherent simplicity: they are modular with a design that can be separated into units that perform almost independently. Furthermore, they show reuse of recurring patterns termed network motifs. Little is known about the evolutionary origin of these properties. Current models of biological evolution typically produce networks that are highly nonmodular and lack understandable motifs. Here, we suggest a possible explanation for the origin of modularity and network motifs in biology. We use standard evolutionary algorithms to evolve networks. A key feature in this study is evolution under an environment (evolutionary goal) that changes in a modular fashion. That is, we repeatedly switch between several goals, each made of a different combination of subgoals. We find that such "modularly varying goals" lead to the spontaneous evolution of modular network structure and network motifs. The resulting networks rapidly evolve to satisfy each of the different goals. Such switching between related goals may represent biological evolution in a changing environment that requires different combinations of a set of basic biological functions. The present study may shed light on the evolutionary forces that promote structural simplicity in biological networks and offers ways to improve the evolutionary design of engineered systems.
Article
Full-text available
We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary in order to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to 3D physically simulated biped locomotion
Conference Paper
A challenge in evolutionary computation is to create representations as evolvable as those in natural evolution. This paper hypothesizes that extinction events, i.e. mass extinctions, can significantly increase evolvability, but only when combined with a divergent search algorithm, i.e. a search driven towards diversity (instead of optimality). Extinctions amplify diversity-generation by creating unpredictable evolutionary bottlenecks. Persisting through multiple such bottlenecks is more likely for lineages that diversify across many niches, resulting in indirect selection pressure for the capacity to evolve. This hypothesis is tested through experiments in two evolutionary robotics domains. The results show that combining extinction events with divergent search increases evolvability, while combining them with convergent search offers no similar benefit. The conclusion is that extinction events may provide a simple and effective mechanism to enhance performance of divergent search algorithms.
Article
Evolution’s ability to find innovative phenotypes is an important ingredient in the emergence of complexity in nature. A key factor in this capability is evolvability, or the propensity towards phenotypic variation. Numerous explanations for the origins of evolvability have been proposed, often differing in the role that they attribute to adaptive processes. To provide a new perspective on these explanations, experiments in this paper simulate evolution in gene regulatory networks, revealing that the type of evolvability in question significantly impacts the dynamics that follow. In particular, while adaptive processes result in evolvable individuals, processes that are either neutral or that explicitly encourage divergence result in evolvable populations. Furthermore, evolvability at the population level proves the most critical factor in the production of evolutionary innovations, suggesting that nonadaptive mechanisms are the most promising avenue for investigating and understanding evolvability. These results reconcile a large body of work across biology and inform attempts to reproduce evolvability in artificial settings.
Article
Evolvability is an organism's capacity to generate heritable phenotypic variation. Metazoan evolution is marked by great morphological and physiological diversification, although the core genetic, cell biological, and developmental processes are largely conserved. Metazoan diversification has entailed the evolution of various regulatory processes controlling the time, place, and conditions of use of the conserved core processes. These regulatory processes, and certain of the core processes, have special properties relevant to evolutionary change. The properties of versatile protein elements, weak linkage, compartmentation, redundancy, and exploratory behavior reduce the interdependence of components and confer robustness and flexibility on processes during embryonic development and in adult physiology. They also confer evolvability on the organism by reducing constraints on change and allowing the accumulation of nonlethal variation. Evolvability may have been generally selected in the course of selection for robust, flexible processes suitable for complex development and physiology and specifically selected in lineages undergoing repeated radiations.
Article
The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally effective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess "evolvability," i.e., the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variations, which are the actually realized differences between individuals. The genotype-phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility, developmental dissociability, and morphological integration. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype-phenotype map which allows improvement by mutation and selection? Is the genotype-phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype-phenotype map? We propose that the genotype-phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype-phenotype map in which there are few pleiotropic effects among characters serving different functions, with pleiotropic effects falling mainly among characters that are part of a single functional complex.
Such a design is expected to improve evolvability by limiting the interference between the adaptation of different functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insufficient to assess the plausibility of these models, they form the beginning of a framework for understanding the evolution of the genotype-phenotype map. Key words: Adaptation, evolution of development, evolutionary computation, genetic representations, modularity, pleiotropy, quantitative genetics.
Conference Paper
Fluid bipedal locomotion remains a significant challenge for humanoid robotics. Recent bio-inspired approaches have made significant progress by using small numbers of tightly coupled neurons, called central pattern generators (CPGs). Our approach exchanges complexity of the neuron model for complexity of the network, gradually building a network of simple neurons capable of complex behaviors. We show this approach generates controllers de novo that are able to control 3D bipedal locomotion up to 10 meters. This result holds for robots with human-proportionate morphologies across 95% of normal human variation. The resulting networks are then examined to discover neural structures that arise unusually often, lending some insight into the workings of otherwise opaque controllers.
Conference Paper
Evolutionary algorithms tend to produce solutions that are not evolvable: Although current fitness may be high, further search is impeded as the effects of mutation and crossover become increasingly detrimental. In nature, in addition to having high fitness, organisms have evolvable genomes: phenotypic variation resulting from random mutation is structured and robust. Evolvability is important because it allows the population to produce meaningful variation, leading to efficient search. However, because evolvability does not improve immediate fitness, it must be selected for indirectly. One way to establish such a selection pressure is to change the fitness function systematically. Under such conditions, evolvability emerges only if the representation allows manipulating how genotypic variation maps onto phenotypic variation and if such manipulations lead to detectable changes in fitness. This research forms a framework for understanding how fitness function and representation interact to produce evolvability, yielding more evolvable encodings. Ultimately such encodings may lead to evolutionary algorithms that exhibit the structured complexity and robustness found in nature.
Article
The success of evolutionary search depends on adequate parameter settings. Ill-conditioned strategy parameters decrease the success probabilities of genetic operators. Proper settings may change during the optimization process. The question arises whether adequate settings can be found automatically during the optimization process. Evolution strategies gave an answer to the online parameter control problem decades ago: self-adaptation. Self-adaptation is the implicit search in the space of strategy parameters. The self-adaptive control of mutation strengths in evolution strategies turned out to be exceptionally successful. Nevertheless, for years self-adaptation has not achieved the attention it deserves. This paper is a survey of self-adaptive parameter control in evolutionary computation. It classifies self-adaptation in the taxonomy of parameter setting techniques, gives an overview of automatically online-controllable evolutionary operators and provides a coherent view on search techniques in the space of strategy parameters. Beyer and Sendhoff's covariance matrix self-adaptation evolution strategy is reviewed as a successful example of self-adaptation and is used to exemplify various of the concepts discussed.
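The classic self-adaptive control of mutation strength surveyed above perturbs the step size log-normally before using it to mutate the object variables, so that well-tuned step sizes hitchhike with successful offspring. A minimal sketch, with illustrative names and learning rate `tau`:

```python
import math
import random

def self_adaptive_mutate(genome, sigma, tau=0.1):
    """Evolution-strategy style self-adaptation: first mutate the strategy
    parameter sigma log-normally, then perturb the genome with it."""
    new_sigma = sigma * math.exp(tau * random.gauss(0, 1))
    child = [x + new_sigma * random.gauss(0, 1) for x in genome]
    return child, new_sigma

random.seed(1)
child, sigma = self_adaptive_mutate([0.0, 0.0, 0.0], sigma=0.5)
print(len(child), sigma > 0)  # the log-normal update keeps sigma strictly positive
```

Because sigma multiplies by exp(tau * N(0, 1)), it can grow or shrink over generations but never becomes zero or negative, which is the key property of the log-normal update rule.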
Article
Complex Adaptations and the Evolution of Evolvability
+ representation problem: "evolvability critically depends on the way genetic variation maps onto phenotypic variation"
+ Evidence that phenotypic variation is under genetic control: canalization; mutant phenotypes often show more variation than the wild phenotype (see p.7); "the variability of the traits itself can evolve"
+ cites Halder1995: Drosophila eyeless - out-of-place eye production can be triggered by a single signal
+ "genotype-phenotype map underlying theme of: genetic canalization, developmental constraints, biological versatility, developmental dissociability, morphological integration" and more
+ genotype-phenotype map evolves, main selective forces: epistatic mutations, creation of new genes
+ "Variability needs to be distinguished from variation" (variability: potential to vary [*dispositional* concept, not actual state but expected development of a phenotypic trait in response to genetic and environmental influences]; variation: actually realized differences between individuals)
+ cites Levinton1988: generation of variability needs to be studied
+ Evolution of complex adaptations requires a match between the functional relationships of the phenotypic characters and their genetic representation - cites Riedl1975: "If the epigenetic regulation of gene expression 'imitates' the functional organization of the traits then the improvement by mutation and selection is facilitated" (helps when sexual recombination, p.11)
+ cites Wright1968 "Pleiotropy cannot be wholly universal" - how to limit it: modularity, interaction mainly short range, less frequent between members of different complexes
+ evolution of modularity: origin of differentiated animals dominated by parcellation / detachment (opposed to integration of distinct parts), but where shall we put delimitations???
Article
From the Cover: Spontaneous evolution of modularity and network motifs
Article
A report that a switch of a yeast protein to a 'prion' state triggers diverse phenotypic changes has prompted re-examination of the processes of evolution. To what extent should processes of gene expression and control be interpreted in terms of their capacity to allow future evolution as well as present adaptation?
Article
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
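NEAT's "principled method of crossover of different topologies" rests on historical markings: every connection gene carries a global innovation number, and parents are aligned by those numbers into matching, disjoint, and excess genes. A small illustrative sketch of that bookkeeping (assumed data layout, not the reference implementation):

```python
def align_genes(parent_a, parent_b):
    """Align two NEAT-style genomes by innovation number (illustrative
    reconstruction). Each parent is a dict mapping innovation number ->
    connection weight. Matching genes share a number; the remainder are
    disjoint (within the other parent's range) or excess (beyond it)."""
    matching = sorted(set(parent_a) & set(parent_b))
    cutoff = min(max(parent_a), max(parent_b))
    unshared = set(parent_a) ^ set(parent_b)
    disjoint = sorted(i for i in unshared if i <= cutoff)
    excess = sorted(i for i in unshared if i > cutoff)
    return matching, disjoint, excess

# Innovation numbers identify structurally identical connections:
a = {1: 0.5, 2: -0.3, 4: 0.8}
b = {1: 0.1, 3: 0.9, 4: -0.2, 6: 0.4}
matching, disjoint, excess = align_genes(a, b)
# matching = [1, 4], disjoint = [2, 3], excess = [6]
```

The same counts of disjoint and excess genes also feed NEAT's compatibility distance, which drives the speciation the abstract mentions.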
Article
We study the dynamics of modularization in a minimal substrate. A module is a functional unit relatively separable from its surrounding structure. Although it is known that modularity is useful both for robustness and for evolvability (Wagner 1996), there is no quantitative model describing how such modularity might originally emerge. Here we suggest, using simple computer simulations, that modularity arises spontaneously in evolutionary systems in response to variation, and that the amount of modular separation is logarithmically proportional to the rate of variation. Consequently, we predict that modular architectures would appear in correlation with high environmental change rates. Because this quantitative model does not require any special substrate to occur, it may also shed light on the origin of modular variation in nature. This observed relationship also indicates that modular design is a generic phenomenon that might be applicable to other fields, such as engineering: Engineering design methods based on evolutionary simulation would benefit from evolving to variable, rather than stationary, fitness criteria, as a weak and problem-independent method for inducing modularity.
Article
In recent years, biologists have increasingly been asking whether the ability to evolve--the evolvability--of biological systems, itself evolves, and whether this phenomenon is the result of natural selection or a by-product of other evolutionary processes. The concept of evolvability, and the increasing theoretical and empirical literature that refers to it, may constitute one of several pillars on which an extended evolutionary synthesis will take shape during the next few years, although much work remains to be done on how evolvability comes about.
Article
Two major goals in machine learning are the discovery of complex multidimensional solutions and continual improvement of existing solutions. In this paper, we argue that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Because the robot duel domain supports a wide range of sophisticated strategies, and because coevolution benefits from an escalating arms race, it serves as a suitable testbed for observing the effect of evolving increasingly complex controllers. The result is an arms race of increasingly sophisticated strategies. When compared to the evolution of networks with fixed structure, complexifying networks discover significantly more sophisticated strategies. The results suggest that in order to realize the full potential of evolution, and search in general, solutions must be allowed to complexify as well as optimize.
Article
The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally effective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess "evolvability", i.e. the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variation, which are the actually realized differences between individuals. The genotype-phenotype map is the ...
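The distinction between variability (the propensity to vary) and variation (realized differences) suggests a direct way to measure the former, which is essentially what the paper on this page does: sample an individual's immediate offspring and score their behavioral diversity. A rough sketch under assumed interfaces (`mutate` and `behavior` are hypothetical user-supplied callables, not names from the paper's code):

```python
import random

def estimate_evolvability(genome, mutate, behavior, n_offspring=50, seed=0):
    """Estimate evolvability as the behavioral diversity of immediate
    offspring: mean pairwise Euclidean distance between offspring
    behavior descriptors (a sketch, not the authors' implementation)."""
    rng = random.Random(seed)
    behaviors = [behavior(mutate(genome, rng)) for _ in range(n_offspring)]
    total, pairs = 0.0, 0
    for i in range(len(behaviors)):
        for j in range(i + 1, len(behaviors)):
            total += sum((p - q) ** 2
                         for p, q in zip(behaviors[i], behaviors[j])) ** 0.5
            pairs += 1
    return total / pairs

# Toy genome = (mutation scale, gene values); a larger scale yields
# more offspring variation and therefore a higher evolvability score.
mutate = lambda g, rng: (g[0], [v + g[0] * rng.gauss(0, 1) for v in g[1]])
behavior = lambda g: g[1]          # gene values double as the behavior descriptor
low = estimate_evolvability((0.1, [0.0, 0.0]), mutate, behavior)
high = estimate_evolvability((1.0, [0.0, 0.0]), mutate, behavior)
```

With a shared seed the two runs draw identical perturbations, so the genome with the larger encoded mutation scale scores strictly higher; selecting on this score is the "direct selection for evolvability" described in the abstract.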
Complex networks of simple neurons for bipedal locomotion, Proceedings of the
  • Brian Allen
  • Petros Faloutsos
Behavioral diversity measures for evolutionary robotics
  • S Doncieux
  • J.-B. Mouret
S. Doncieux and J.-B. Mouret. Behavioral diversity measures for evolutionary robotics. In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1-8. IEEE, 2010.
How evolution learns to generalise: Principles of under-fitting, over-fitting and induction in the evolution of developmental organisation
  • K Kouvaris
  • J Clune
  • L Kounios
  • M Brede
  • R A Watson
K. Kouvaris, J. Clune, L. Kounios, M. Brede, and R. A. Watson. How evolution learns to generalise: Principles of under-fitting, over-fitting and induction in the evolution of developmental organisation. arXiv preprint arXiv:1508.06854, 2015.
Illuminating search spaces by mapping elites
  • J.-B. Mouret
  • J Clune
J.-B. Mouret and J. Clune. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909, 2015.
On the origin of modular variation. Evolution
  • H. Lipson
  • J. B. Pollack
  • N. P. Suh