Conference Paper

Cartesian genetic programming


Abstract

Cartesian Genetic Programming (CGP) is an increasingly popular form of genetic programming, developed by Julian Miller with Peter Thomson in 1997. In its classic form it uses a very simple, integer-based genetic representation of a program in the form of a directed graph. In a number of studies it has been shown to be efficient in comparison with other GP techniques. Since then, the classic form of CGP has been enhanced in various ways, for example by including automatically defined functions. Most recently, it has been developed by Julian Miller, Wolfgang Banzhaf and Simon Harding to include self-modification operators, which has further increased its efficiency. The tutorial covers the basic technique, advanced developments and applications to a variety of problem domains.
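To make the classic representation concrete, the following is a minimal sketch of how an integer-based CGP genotype for a single-row grid might be decoded and evaluated. The function set, the arity-2 nodes and the addressing scheme are illustrative assumptions rather than the tutorial's exact formulation.

```python
# Minimal sketch of classic CGP decoding (illustrative, not the reference implementation).
# Genotype: one (function_index, input_a, input_b) triple per node, plus output genes.
import operator

FUNCTIONS = [operator.add, operator.sub, operator.mul,
             lambda a, b: a / b if b != 0 else 1.0]  # protected division

def evaluate(genotype, output_genes, inputs):
    """Decode a single-row CGP genotype and evaluate it on one input vector.

    A node may only connect to program inputs or to earlier nodes, so the
    encoded graph is guaranteed to be acyclic (feed-forward).
    """
    values = list(inputs)                      # addresses 0..n_inputs-1 hold the inputs
    for func_idx, a, b in genotype:            # addresses n_inputs.. hold the node outputs
        values.append(FUNCTIONS[func_idx](values[a], values[b]))
    return [values[g] for g in output_genes]   # output genes point at any address

# Example: two inputs x0, x1; node 2 = x0*x1; node 3 = node2 + x1; output = node 3
genotype = [(2, 0, 1), (0, 2, 1)]
print(evaluate(genotype, [3], [2.0, 5.0]))     # -> [15.0]
```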


... This approach aims at diversifying the logic circuit design by adding various basic logic gates, e.g., AND and Inverter. While this approach has been shown to be effective, e.g., increasing design resilience by up to 90% under (benign) faults [45], it is designed to be incorporated into the vendor's IP libraries and tools, where the designer has little control. The designer must either create libraries (costly and unproductive) or live at the mercy of the vendor and fab (who may collude). ...
... However, this works well only as long as a minority of the circuits are faulty, and it assumes the independence of failures, i.e., the absence of Common Mode Failures (CMF, henceforth); this is often considered a strong assumption [9], [31]. To demonstrate this, we conducted an experiment that shows, in Figure 1, that the average failure probability of TMR (red curve) is around 70%, even when used together with the aforementioned fine-grained design approach [45] (see more details in Section VII). ...
... To combat low-level faults, resilience through design diversity must be applied at the gate level. Cartesian Genetic Programming (CGP) [45] has been proposed to generate diverse isofunctional structures to improve resilience at the gate level of a design [2]. Similarly, redundant designs can be spatially separated to provide resilience against external faults in TMR systems [44]. ...
Preprint
Full-text available
A long-standing challenge is the design of chips resilient to faults and glitches. Both fine-grained gate diversity and coarse-grained modular redundancy have been used in the past. However, these approaches have not been well studied under other threat models where some stakeholders in the supply chain are untrusted. Increasing digital sovereignty tensions raise concerns regarding the use of foreign off-the-shelf tools and IPs, or off-sourcing fabrication, driving research into the design of resilient chips under this threat model. This paper addresses a threat model considering three attacks pertinent to resilience: distribution, zonal, and compound attacks. To mitigate these attacks, we introduce the ResiLogic framework that exploits Diversity by Composability: constructing diverse circuits composed of smaller diverse ones by design. This gives the designer the capability to create diverse circuits at design time without requiring extra redundancy in space or cost. Using this approach at different levels of granularity is shown to improve the resilience of circuit designs in ResiLogic against the three considered attacks by a factor of five. Additionally, we also make a case for how E-Graphs can be utilized to generate diverse circuits under given rewrite rules.
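As a side note to the TMR discussion in the citation contexts above, the failure probability of a 2-of-3 majority voter under the independence assumption has a simple closed form. The sketch below (with arbitrary example probabilities) illustrates why correlated, common-mode failures are so damaging: they invalidate this calculation entirely.

```python
# Sketch: failure probability of a 2-of-3 majority voter (TMR) under the
# independence assumption discussed above. With per-replica failure
# probability p, the voter fails when 2 or 3 replicas fail at once:
#   P_fail = 3*p^2*(1-p) + p^3
# Common-mode failures break this assumption and push P_fail toward p itself.

def tmr_failure_probability(p: float) -> float:
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.3):
    print(f"p = {p:.2f}  ->  P(TMR fails) = {tmr_failure_probability(p):.4f}")
```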
... CGP for Synthesizing Complex Functions. CGP is a type of genetic programming that is specifically used for designing combinational digital circuits [23,38,39]. It creates directed acyclic graphs that function as combinational arrays of gates, which represent the digital circuits. ...
... Each node is identified by 2D coordinates on the graph. In CGP, a directed acyclic graph (DAG) consisting of an array of gates is evolved iteratively for the desired function [38]. Nodes of the DAG are gates picked from the predefined list employed to build the combinational circuit. ...
... Furthermore, a search-based strategy that adopted CGP to resynthesize circuits for FPGA system design was presented in [41]. Overall, CGP provides better trade-offs between hardware design parameters and error metrics than traditional methods [38], but the synthesis runtime is exceptionally high for a large number of inputs and complex nonlinear functionalities [42]. Additionally, the increased use of machine learning and artificial intelligence techniques mandates higher-order computations in 32- or 64-bit data formats [43]. ...
Article
Full-text available
Unconventional functions, including activation functions and power functions, are extremely hard to realize, primarily due to the difficulty in arriving at the hierarchical design. The hierarchical design allows the synthesis tool to map the functionality with that of standard cells employed through the regular ASIC synthesis flow. For conventional functions, the hierarchical design is structured and then supplied to the synthesis flow, whereas, for unconventional functions, the same method is not reliable, since the current synthesis method does not offer any design-space exploration scheme to arrive at an easy-to-realize design entity. Unconventional functions either take a long synthesis run-time, or additional effort is spent restructuring the hierarchical design of the desired function into a synthesizable one. Cartesian genetic programming (CGP) not only allows custom logic gates to be incorporated when synthesizing the hierarchical design but also aids design-space exploration for the targeted function through the custom gates. The CGP configuration evolves difficult-to-realize complex functions with multiple solutions, and filtering through desired Pareto-optimal requirements offers a unique hierarchical design. Incorporating CGP-derived hierarchical designs into the traditional synthesis flow is instrumental for implementing and evaluating higher-order designs comprising nonlinear functional constructs. Six activation functions and power functions that fall in the category of unconventional functions are realized by the CGP method using custom cells to demonstrate the capability. Further, the hierarchical design of these unconventional functions is flattened and compared with the same function directly synthesized using basic gates. The CGP-derived synthesis method reports 3× less synthesis time for realizing the complex functions at the hierarchical level compared to synthesis using basic gate cells. Hardware characteristics and error metrics are also investigated for the CGP-realized complex functions and are made freely available for further use by the research and design community.
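The citation contexts above describe CGP circuits as feed-forward arrays of gates drawn from a predefined list. Below is a hedged sketch of how such a gate-level individual might be scored against a target truth table; the gate set and the rows-correct fitness are illustrative choices, not the paper's exact setup.

```python
# Illustrative sketch: scoring a gate-level CGP individual against a target
# truth table. Gate set and fitness definition are assumptions; real CGP
# circuit-synthesis setups vary.
from itertools import product

GATES = {0: lambda a, b: a & b,      # AND
         1: lambda a, b: a | b,      # OR
         2: lambda a, b: a ^ b,      # XOR
         3: lambda a, b: 1 - a}      # NOT (second input ignored)

def run_circuit(genotype, output_genes, bits):
    values = list(bits)
    for g, a, b in genotype:
        values.append(GATES[g](values[a], values[b]))
    return tuple(values[o] for o in output_genes)

def fitness(genotype, output_genes, n_inputs, target):
    """Number of truth-table rows the evolved circuit gets right."""
    correct = 0
    for bits in product((0, 1), repeat=n_inputs):
        if run_circuit(genotype, output_genes, bits) == target(bits):
            correct += 1
    return correct

# Target: 2-bit XOR. A genotype that uses a single XOR gate scores 4/4.
print(fitness([(2, 0, 1)], [2], 2, lambda b: (b[0] ^ b[1],)))   # -> 4
```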
... Cartesian GP (CGP) is a GP approach initially developed to aid in the automatic design of electronic circuits but later became popular as a variant of GP [52]. Recently, it has gained much relevance because it has proven its usefulness in various applications. ...
... LGP [50]: {Register Count: 6, If-Then Allowed: Yes}. CGP [52]: {Column: 200, Rows: 1, Mutation: Probabilistic}. Finally, it should be remembered that the linear regression that will act as a baseline in our experiments does not require the configuration of any parameters, since it is always about minimizing the accumulated point-to-point distance between the functions to be compared. ...
Article
The challenge of assessing semantic similarity between pieces of text through computers has attracted considerable attention from industry and academia. New advances in neural computation have developed very sophisticated concepts, establishing a new state of the art in this respect. In this paper, we go one step further by proposing new techniques built on the existing methods. To do so, we bring to the table the stacking concept that has given such good results and propose a new architecture for ensemble learning based on genetic programming. As there are several possible variants, we compare them all and try to establish which one is the most appropriate to achieve successful results in this context. Analysis of the experiments indicates that Cartesian Genetic Programming seems to give better average results.
... Digital logic gate circuit G-P maps have been widely used to study the properties of evolution Ofria and Wilke (2005); Arthur and Polak (2006); Macia and Solé (2009); Miller and Harding (2009); Raman and Wagner (2011); Hu et al. (2012); Hu and Banzhaf (2018). For the map used in this paper, genotypes are single-output feed-forward circuits of logic gates, such as AND and OR gates, and phenotypes are the Boolean functions computed by circuits over all possible inputs to the circuit. ...
... Our CGP representation is based on Miller et al. (2000) with one row of gates. The levels-back parameter can be chosen to be any integer from one (in which case, nodes can only connect to the previous layer) to the maximum number of nodes (in which case a node can connect to any previous node) Miller and Harding (2009). Our LGP representation is described in Hu et al. (2020) with the exceptions that we are using 10 instructions instead of 6, and we use the first 2 registers as computational registers and the remainder as input registers. ...
Preprint
Full-text available
Understanding the evolution of complexity is an important topic in a wide variety of academic fields. Implications of better understanding complexity include increased knowledge of major evolutionary transitions and the properties of living and technological systems. Genotype-phenotype (G-P) maps are fundamental to evolution, and biologically-oriented G-P maps have been shown to have interesting and often-universal properties that enable evolution by following phenotype-preserving walks in genotype space. Here we use a digital logic gate circuit G-P map where genotypes are represented by circuits and phenotypes by the functions that the circuits compute. We compare two mathematical definitions of circuit and phenotype complexity and show how these definitions relate to other well-known properties of evolution such as redundancy, robustness, and evolvability. Using both Cartesian and Linear genetic programming implementations, we demonstrate that the logic gate circuit shares many universal properties of biologically derived G-P maps, with the exception of the relationship between one method of computing phenotypic evolvability, robustness, and complexity. Due to the inherent structure of the G-P map, including the predominance of rare phenotypes, large interconnected neutral networks, and the high mutational load of low robustness, complex phenotypes are difficult to discover using evolution. We suggest, based on this evidence, that evolving complexity is hard and we discuss computational strategies for genetic-programming-based evolution to successfully find genotypes that map to complex phenotypes in the search space.
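The CGP representation described in the citation context above uses a single row of nodes and a levels-back parameter ranging from one (previous column only) to the number of nodes (any previous node). The sketch below shows one plausible way to sample a random genome under that constraint; treating the program inputs as a single "column 0" is an assumption, as CGP variants differ in how inputs are handled.

```python
# Sketch: sampling a random single-row CGP genome under a levels-back constraint.
# Column 0 holds all program inputs; each node occupies its own column.
import random

def random_genome(n_inputs, n_nodes, n_functions, levels_back, seed=None):
    rng = random.Random(seed)
    columns = [list(range(n_inputs))] + [[n_inputs + i] for i in range(n_nodes)]
    genome = []
    for col in range(1, n_nodes + 1):
        # A node may connect to anything in the previous `levels_back` columns.
        allowed = [addr for c in range(max(0, col - levels_back), col)
                        for addr in columns[c]]
        a, b = rng.choice(allowed), rng.choice(allowed)   # connection genes
        f = rng.randrange(n_functions)                    # function gene
        genome.append((f, a, b))
    outputs = [rng.randrange(n_inputs + n_nodes)]         # outputs unconstrained (assumption)
    return genome, outputs

print(random_genome(n_inputs=2, n_nodes=5, n_functions=4, levels_back=1, seed=0))
```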
... Over the last three decades, we have seen the introduction of new paradigms and methods in the search for ever-better models. Two of these are Linear Genetic Programming (LGP), where programs are represented as a sequence of instructions (Banzhaf et al., 1998), and Cartesian Genetic Programming (CGP), where a program is represented as a Directed Acyclic Graph (DAG) (Miller et al., 1999; Miller and Harding, 2008). ...
Conference Paper
Full-text available
Cartesian Genetic Programming (CGP) literature repeatedly reports that crossover operators hinder CGP search compared to a 1 + λ strategy based on mutation only. Though there have been efforts in making CGP crossover operators work, the literature is relatively evasive on why the phenomenon is observed at all. This contrasts with what happens in Linear Genetic Programming (LGP), where we know that crossover works well. While both CGP and LGP individuals can be represented as directed acyclic graphs (DAGs), changing a single connection gene in a CGP individual can drastically alter the activeness of nodes in the entire graph, as opposed to LGP where crossover changes are much more beneficial. In this contribution, we demonstrate the phenomenon and show that LGP evolution produces children that are far more similar to their parents than in CGP. This lets us propose that the design of LGP, namely the inclusion of steady-state memory registers and program size regulation, serves to protect high-fitness substructures from perturbation in a way that is not provided for in CGP.
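The argument above turns on which nodes of a CGP genome are active. A small sketch of how the active set can be computed by walking back from the output genes is shown below (using the same illustrative encoding as the earlier sketches); the second call shows how changing a single node's connection genes can deactivate an entire subgraph.

```python
# Sketch: computing the set of active (expressed) nodes of a CGP genome by
# walking backwards from the output genes. Changing one connection gene can
# swap large subgraphs in or out of this set, which is the effect discussed
# in the abstract above.

def active_nodes(genotype, output_genes, n_inputs):
    active, stack = set(), [g for g in output_genes if g >= n_inputs]
    while stack:
        addr = stack.pop()
        if addr in active:
            continue
        active.add(addr)
        _, a, b = genotype[addr - n_inputs]
        stack.extend(x for x in (a, b) if x >= n_inputs)
    return active

genotype = [(0, 0, 1), (1, 0, 2), (2, 2, 3)]   # nodes at addresses 2, 3, 4 (2 inputs)
print(active_nodes(genotype, [4], n_inputs=2))                  # -> {2, 3, 4}
print(active_nodes([(0, 0, 1), (1, 0, 2), (2, 0, 1)], [4], 2))  # one gene changed -> {4}
```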
... Specifically, it uses numbers and strings to denote different kinds of computational operations and connections between inputs and outputs of operations. Any program encoded by CGP can be represented by a graph of operations, which can then be translated into string genotypes (Miller and Harding, 2008). ...
Preprint
Recently, Cartesian Genetic Programming has been used to evolve developmental programs to guide the formation of artificial neural networks (ANNs). This approach has demonstrated success in enabling ANNs to perform multiple tasks while avoiding catastrophic forgetting. One unique aspect of this approach is the use of separate developmental programs evolved to regulate the development of separate soma and dendrite units. An opportunity afforded by this approach is the ability to incorporate Activity Dependence (AD) into the model such that environmental feedback can help to regulate the behavior of each type of unit. Previous work has shown a limited version of AD (influencing neural bias) to provide marginal improvements over non-AD ANNs. In this work, we present promising results from new extensions to AD. Specifically, we demonstrate a more significant improvement via AD on new neural parameters including health and position, as well as a combination of all of these along with bias. We report on the implications of this work and suggest several promising directions for future work.
... GP employs genetic operators inspired by natural processes, such as crossover, mutation, and selection. Over time, various GP systems have been proposed, each possessing unique characteristics (e.g., GP [27], Cartesian GP [43], and linear GP [44]). ...
Article
Full-text available
The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods—each with its strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate the correct code due to ambiguity in task specifications, complex programming syntax, and lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code with iterative LLM prompting. Grammar-guided genetic programming (G3P, i.e., one of the top GP methods) has been shown capable of evolving programs that fit a defined Backus–Naur-form (BNF) grammar based on a set of input/output tests that help guide the search process while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still faces issues generating code for complex tasks. A recent study attempting to combine both approaches (G3P and LLMs) by seeding an LLM-generated program into the initial population of the G3P has shown promising results. However, the approach rapidly loses the seeded information over the evolutionary process, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) provide the LLM-generated code as a seed to the evolutionary process following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leverage many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct fitting code. Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks.
... For a long time, the de facto criterion for symbolic regression has been evolutionary programs, which are exemplary instances of this technique. Several methods of symbolic regression represent this criterion, such as Genetic Programming (GP) [6], the network operator method [15], Cartesian GP [16], analytic programming [17], complete binary GP [18], parse-matrix evolution [19], and so on. In more recent times, neural networks have also been employed in this technique [11,12,20]. ...
Conference Paper
Full-text available
Recently, finding the mathematical equations that match data from any function has been considered a significant challenge for artificial intelligence and is known as symbolic regression. In a nutshell, symbolic regression is a subset of regression analysis that searches the space of mathematical equations for the best paradigm that matches the data and thus can match a much broader range of data sets than other paradigms, like linear regression. Explainable artificial intelligence has recently appeared, where symbolic regression methods have been used for a long time to build models that are both understandable and tractable mathematically. Two symbolic regression methods, the network operator (NOP) method and Cartesian genetic programming (CGP), are discussed in detail. This study presents approaches for coding a mathematical equation and the basic collections of elementary functions that must be generated to perform this task. A comparative study for solving classical symbolic regression equations (benchmarks) has been carried out between the network operator method and Cartesian genetic programming. It has been demonstrated through numerical results that the network operator outperforms Cartesian genetic programming.
... GP [24] is an evolutionary algorithm capable of evolving problem-solving entities, including algorithms and programs [36][37][38][39]. In this study, we adopted the tree-based GP representation, one of the most recognized and extensively utilized forms [24,36]. ...
Article
Full-text available
This study aims to determine the mechanical response of a brushless DC motor (BLDC) used in two-wheeled electric vehicles by analyzing torque values through finite element analysis. The motor features a three-phase stator structure and four permanent magnets on its rotor. The study investigates the relationship between torque, pulse degree, excitation voltage, and stator current using RMxprt software. A parametric dataset consisting of 600 data points is generated to model the BLDC motor system. Genetic programming (GP) is employed to establish a formula that correlates the motor’s output torque with the input variables. The resulting simplified formula, created with GP, achieves a mean absolute percentage error (MAPE) of 0.085 and an R-squared (R²) value of 0.989, indicating high accuracy in torque prediction based on simulation parameters. This research provides a torque formulation based on parametric finite element analysis, offering potential benefits for electric bicycles and potentially eliminating the need for certain sensors. Thus, prior to experimental studies of BLDC motor torque behavior, a dataset approach based on parametric FEA simulation studies, together with a GP formulation developed from that dataset, is proposed.
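For reference, the two reported error metrics (MAPE and R-squared) are standard and can be computed as in the sketch below; the data are toy values, not the paper's dataset, and MAPE is given here as a fraction rather than a percentage.

```python
# Sketch: the two reported error metrics, computed on toy data.
import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = [1.2, 2.5, 3.1, 4.8]
y_pred = [1.1, 2.6, 3.0, 4.9]
print(mape(y_true, y_pred), r_squared(y_true, y_pred))
```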
... With the goal of producing better programs, GP iteratively evolves a population, starting with randomly selected individuals (typically not very well suited for the purpose), using operators similar to natural genetic processes (e.g., crossover, mutation, and selection). A number of GP systems have been proposed over time, each with its own unique characteristics (e.g., GP [10], Linear GP [13], Cartesian GP [14]). ...
Conference Paper
Full-text available
Grammar-Guided Genetic Programming (G3P) is widely recognised as one of the most successful approaches to program synthesis. Using a set of input/output tests, G3P evolves programs that fit a defined BNF grammar and that are capable of solving a wide range of program synthesis problems. However, G3P's inability to scale to more complex problems has limited its applicability. Recently, Generative Pre-trained Transformers (GPTs) have shown promise in revolutionizing program synthesis by generating code based on natural language prompts. However, challenges such as ensuring correctness and safety still need to be addressed as some GPT-generated programs might not work while others might include security vulnerabilities or blacklisted library calls. In this work, we proposed to combine GPT (in our case ChatGPT) with a G3P system, forcing any synthesised program to fit the BNF grammar, thus offering an opportunity to evolve/fix incorrect programs and reduce security threats. In our work, we leverage GPT-generated programs in G3P's initial population. However, since GPT-generated programs have an arbitrary structure, the initial work that we undertake is to devise a technique that maps such programs to a predefined BNF grammar before seeding the code into G3P's initial population. By seeding the grammar-mapped code into the population of our G3P system, we were able to successfully improve some of the desired programs using a well-known program synthesis benchmark. However, in its default configuration, G3P is not successful in fixing some incorrect GPT-generated programs, even when they are close to a correct program. We analysed the performance of our approach in depth and discussed its limitations and possible future improvements.
... Through iterative processes utilising operators akin to natural genetic mechanisms (such as crossover, mutation, and selection), GP progressively evolves these programs to discover improved solutions. Throughout the years, diverse GP systems have been proposed, each possessing its unique characteristics (e.g., GP [8], Linear GP [12], Cartesian GP [13]). ...
Conference Paper
Full-text available
The approach known as Grammar-Guided Genetic Programming (G3P) is widely acknowledged as a highly effective method for program synthesis, which involves automatically generating code based on high-level formal specifications. Given the increasing quantity and scale of open software repositories and generative artificial intelligence techniques, there exists a significant range of methods for retrieving or generating source code using textual problem descriptions. Therefore, in light of the prevailing circumstances, it becomes imperative to introduce G3P into alternative means of user intent, with a specific focus on textual depictions. In our previous work, we assessed the potential for G3P to evolve programs based on bi-objectives that combine the similarity to the target program using four different similarity measures and the traditional input/output error rate. The result showed that such an approach improved the success rate for generating correct solutions for some of the considered problems. Nevertheless, it is noteworthy that despite the inclusion of various similarity measures, there is no single measure that uniformly improves the success rate of G3P across all problems. Instead, certain similarity measures exhibit effectiveness in addressing specific problems while demonstrating limited efficacy in others. In this paper, we would like to expand the bi-objective framework with different similarity measures to a many-objective framework to enhance the general performance of the algorithm across a wider range of problems. Our experiments show that compared to the bi-objective G3P (BOG3P), the Many-objective G3P (MaOG3P) approach could achieve the best result of all BOG3P algorithms with different similarity measures.
... An approach called Self-Modifying Cartesian Genetic Programming (SMCGP) was also proposed to solve the problem of learning Boolean even-N-parity functions (Miller and Harding, 2008). In SMCGP, a genotype-to-phenotype mapping is used, i.e., the genetic programming is developmental. ...
... Modularization is a well-established research topic in GP [18]. Researchers have proposed various GP algorithms for achieving modularization, including Automatically Defined Function (ADF) [19], Tangled Program Graphs (TPG) [20], Cartesian genetic programming (CGP) [21], linear genetic programming (LGP) [22], and stack-based genetic programming (SGP) [23]. ADF is a modularization technique in tree GP that evolves multiple GP trees as functions for use in the primary GP tree, showing impressive performance in discovering complex symbolic models [24]. ...
Article
Full-text available
Evolutionary feature construction is a key technique in evolutionary machine learning, with the aim of constructing high-level features that enhance performance of a learning algorithm. In real-world applications, engineers typically construct complex features based on a combination of basic features, re-using those features as modules. However, modularity in evolutionary feature construction is still an open research topic. This paper tries to fill that gap by proposing a modular and hierarchical multitree genetic programming (GP) algorithm that allows trees to use the output values of other trees, thereby representing expressive features in a compact form. Based on this new representation, we propose a macro parent-repair strategy to reduce redundant and irrelevant features, a macro crossover operator to preserve interactive features, and an adaptive control strategy for crossover and mutation rates to dynamically balance the trade-off between exploration and exploitation. A comparison with seven bloat control methods on 98 regression datasets shows that the proposed modular representation achieves significantly better results in terms of test performance and smaller model size. Experimental results on the state-of-the-art symbolic regression benchmark demonstrate that the proposed symbolic regression method outperforms 22 existing symbolic regression and machine learning algorithms, providing empirical evidence for the superiority of the modularized evolutionary feature construction method.
... We used Cartesian genetic programming (CGP), wherein an evolving individual is represented as a two-dimensional grid of computational nodes (often an acyclic graph) which together express a program [28]. An individual is represented by a linear genome, composed of integer genes, each encoding a single node in the graph, which represents a specific function. ...
Preprint
Full-text available
We survey eight recent works by our group, involving the successful blending of evolutionary algorithms with machine learning and deep learning: 1. Binary and Multinomial Classification through Evolutionary Symbolic Regression, 2. Classy Ensemble: A Novel Ensemble Algorithm for Classification, 3. EC-KitY: Evolutionary Computation Tool Kit in Python, 4. Evolution of Activation Functions for Deep Learning-Based Image Classification, 5. Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution, 6. An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks, 7. Foiling Explanations in Deep Neural Networks, 8. Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors.
... This type of algorithm can be further divided into two subgroups, i.e., evolutionary algorithms (EAs) and swarm intelligence (SI) algorithms. The design of these algorithms was inspired by the theory of evolution (e.g., genetic algorithms (GAs) [13][14][15][16], genetic programming (GP) [17][18][19][20][21], differential evolution (DE) [22][23][24][25][26], and evolutionary strategies (ESs) [27][28][29][30]) or the collective behavior of social animals (e.g., particle swarm optimization (PSO) [31][32][33][34][35] and ant colony optimization (ACO) [36][37][38][39][40][41]). ...
Article
Full-text available
Image reconstruction is an interesting yet challenging optimization problem that has several potential applications. The task is to reconstruct an image using a fixed number of transparent polygons. Traditional gradient-based algorithms cannot be applied to the problem since the optimization objective has no explicit expression and cannot be represented by computational graphs. Metaheuristic search algorithms are powerful optimization techniques for solving complex optimization problems, especially in the context of incomplete information or limited computational capability. In this paper, we developed a novel metaheuristic search algorithm named progressive learning hill climbing (ProHC) for image reconstruction. Instead of placing all the polygons on a blank canvas at once, ProHC starts from one polygon and gradually adds new polygons to the canvas until reaching the number limit. Furthermore, an energy-map-based initialization operator was designed to facilitate the generation of new solutions. To assess the performance of the proposed algorithm, we constructed a benchmark problem set containing four different types of images. The experimental results demonstrated that ProHC was able to produce visually pleasing reconstructions of the benchmark images. Moreover, the time consumed by ProHC was much shorter than that of the existing approach.
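A toy sketch of the progressive idea behind ProHC is given below: start from a single primitive, locally optimise, and only then add the next one. The 1-D signal and Gaussian-bump primitives stand in for the paper's canvas and transparent polygons, and all parameter values are illustrative.

```python
# Toy sketch of progressive hill climbing: primitives are added one at a time,
# and each enlarged set is hill-climbed before the next primitive is added.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)

def render(bumps):
    canvas = np.zeros_like(x)
    for amp, mu, sigma in bumps:
        canvas += amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return canvas

def loss(bumps):
    return float(np.mean((render(bumps) - target) ** 2))

bumps = []
for _ in range(8):                                   # progressively add primitives
    bumps.append([0.0, float(rng.random()), 0.1])    # new bump starts "transparent"
    best = loss(bumps)
    for _ in range(300):                             # hill-climb the current set
        i, j = rng.integers(len(bumps)), rng.integers(3)
        old = bumps[i][j]
        bumps[i][j] = old + rng.normal(0.0, 0.05)
        if j == 2:
            bumps[i][j] = max(bumps[i][j], 1e-3)     # keep width positive
        new = loss(bumps)
        if new <= best:
            best = new
        else:
            bumps[i][j] = old                        # revert worsening moves
print(f"final MSE: {best:.4f}")
```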
... Evolutionary algorithms, especially genetic programming, have been utilized to design digital circuits, including approximate designs [96,97,117,118]. Cartesian Genetic Programming (CGP), which uses graph representations, is a flexible form of genetic programming [119]. Based on CGP, circuits are represented by node arrays, where a node represents a basic logic function such as AND and OR. ...
Preprint
Given the stringent requirements of energy efficiency for Internet-of-Things edge devices, approximate multipliers have recently received growing attention, especially in error-resilient applications. The computation error and energy efficiency largely depend on how and where the approximation is introduced into a design. Thus, this article aims to provide a comprehensive review of the approximation techniques in multiplier designs ranging from algorithms and architectures to circuits. We have implemented representative approximate multiplier designs in each category to understand the impact of the design techniques on accuracy and efficiency. The designs can then be effectively deployed in high level applications, such as machine learning, to gain energy efficiency at the cost of slight accuracy loss.
... GP starts with a population of random programs (often not very fit for purpose), and iteratively evolves it using operators analogous to natural genetic processes (e.g., crossover, mutation, and selection). Over the years, a variety of GP systems have been proposed-each with its specificity (e.g., GP [16], Linear GP [2], Cartesian GP [19]). ...
Chapter
Full-text available
Grammar-Guided Genetic Programming is widely recognised as one of the most successful approaches for program synthesis, i.e., the task of automatically discovering an executable piece of code given user intent. Grammar-Guided Genetic Programming has been shown capable of successfully evolving programs in arbitrary languages that solve several program synthesis problems based only on a set of input-output examples. Despite its success, the restriction on the evolutionary system to only leverage input/output error rate during its assessment of the programs it derives limits its scalability to larger and more complex program synthesis problems. With the growing number and size of open software repositories and generative artificial intelligence approaches, there is a sizeable and growing number of approaches for retrieving/generating source code based on textual problem descriptions. Therefore, it is now, more than ever, time to introduce G3P to other means of user intent (particularly textual problem descriptions). In this paper, we would like to assess the potential for G3P to evolve programs based on their similarity to particular target codes of interest (obtained using some code retrieval/generative approach). We particularly assess 4 similarity measures from various fields: text processing (i.e., FuzzyWuzzy), natural language processing (i.e., Cosine Similarity based on term frequency), software clone detection (i.e., CCFinder), and plagiarism detection (i.e., SIM). Through our experimental evaluation on a well-known program synthesis benchmark, we have shown that G3P successfully manages to evolve some of the desired programs with three of the used similarity measures. However, in its default configuration, G3P is not as successful with similarity measures as with the classical input/output error rate at evolving programs that solve program synthesis problems. Keywords: Program synthesis; Grammar-guided genetic programming; Code similarity; Textual description; Text to code
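Of the four similarity measures listed above, the term-frequency cosine similarity is the simplest to reproduce; a rough sketch is shown below. The whitespace/identifier tokenisation is an assumption, and the other measures (FuzzyWuzzy, CCFinder, SIM) are external tools not reimplemented here.

```python
# Sketch: cosine similarity over term-frequency vectors of two code snippets,
# with a crude identifier/symbol tokeniser (an illustrative choice).
import math
import re
from collections import Counter

def term_frequencies(code: str) -> Counter:
    return Counter(re.findall(r"[A-Za-z_]\w*|\S", code))

def cosine_similarity(code_a: str, code_b: str) -> float:
    tf_a, tf_b = term_frequencies(code_a), term_frequencies(code_b)
    dot = sum(tf_a[t] * tf_b[t] for t in tf_a)
    norm = math.sqrt(sum(v * v for v in tf_a.values())) * \
           math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("def add(a, b): return a + b",
                        "def add(x, y): return x + y"))
```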
... A GP approach was proposed in [69] to automatically construct CNN architectures and solve image classification problems with better accuracy. A direct encoding scheme inspired by the Cartesian genetic program (CGP) [70][71][72] was employed to represent the network structure and connectivity weights of CNN with better flexibility. Although the GP approach has better performance than its compared methods in solving CIFAR-10 datasets [57], excessive computational efforts were required. ...
Article
Full-text available
Convolutional neural networks (CNNs) have exhibited significant performance gains over conventional machine learning techniques in solving various real-life problems in computational intelligence fields, such as image classification. However, most existing CNN architectures were handcrafted from scratch and required significant amounts of problem domain knowledge from designers. A novel deep learning method abbreviated as TLBOCNN is proposed in this paper by leveraging the excellent global search ability of teaching-learning-based optimization (TLBO) to obtain an optimal design of network architecture for a CNN based on the given dataset with symmetrical distribution of each class of data samples. A variable-length encoding scheme is first introduced in TLBOCNN to represent each learner as a potential CNN architecture with different layer parameters. During the teacher phase, a new mainstream architecture computation scheme is designed to compute the mean parameter values of CNN architectures by considering the information encoded into the existing population members with variable lengths. The new mechanisms of determining the differences between two learners with variable lengths and updating their positions are also devised in both the teacher and learner phases to obtain new learners. Extensive simulation studies report that the proposed TLBOCNN achieves symmetrical performance in classifying the majority of MNIST-variant datasets, displays the highest accuracy, and produces CNN models with the lowest complexity levels compared to other state-of-the-art methods due to its promising search ability.
... GP starts with a population of random programs (often not very fit for purpose) and iteratively evolves them to search for better programs using operators analogous to natural genetic processes (e.g., crossover, mutation, and selection). Over the years, a variety of GP systems have been proposed-each with its specificity (e.g., GP [7], Linear GP [16], Cartesian GP [17]). ...
Conference Paper
Full-text available
Grammar-Guided Genetic Programming (G3P) is widely recognised as one of the most successful approaches for program synthesis, i.e., the task of automatically discovering an executable piece of code given user intent. G3P has been shown capable of successfully evolving programs in arbitrary languages that solve several program synthesis problems based only on a set of input/output examples. Despite its success, the restriction on the evolutionary system to only leverage input/output error rate during its assessment of the programs it derives limits its scalability to larger and more complex program synthesis problems. With the growing number and size of open software repositories and generative artificial intelligence approaches, there is a sizeable and growing number of approaches for retrieving/generating source code (potentially several partial snippets) based on textual problem descriptions. Therefore, it is now, more than ever, time to introduce G3P to other means of user intent (particularly textual problem descriptions). In this paper, we would like to assess the potential for G3P to evolve programs based on their similarity to particular target codes of interest (obtained using some code retrieval/generative approach). Through our experimental evaluation on a well-known program synthesis benchmark, we have shown that G3P successfully manages to evolve some of the desired programs with all four considered similarity measures. However, in its default configuration, G3P is not as successful with similarity measures as it is with the classical input/output error rate when solving program synthesis problems. Therefore, we propose a novel multi-objective G3P approach that combines the similarity to the target program and the traditional input/output error rate. Our experiments show that compared to the error-based G3P, the multi-objective G3P approach could improve the success rate of specific problems and has great potential to improve on the traditional G3P system.
... Cartesian genetic programming (CGP) is an evolutionary algorithm wherein an evolving individual is represented as a two-dimensional grid of computational nodes (often an acyclic graph) which together express a program [14]. It originally grew from a mechanism for developing digital circuits [16]. ...
Preprint
Full-text available
Activation functions (AFs) play a pivotal role in the performance of neural networks. The Rectified Linear Unit (ReLU) is currently the most commonly used AF. Several replacements to ReLU have been suggested but improvements have proven inconsistent. Some AFs exhibit better performance for specific tasks, but it is hard to know a priori how to select the appropriate one(s). Studying both standard fully connected neural networks (FCNs) and convolutional neural networks (CNNs), we propose a novel, three-population, coevolutionary algorithm to evolve AFs, and compare it to four other methods, both evolutionary and non-evolutionary. Tested on four datasets -- MNIST, FashionMNIST, KMNIST, and USPS -- coevolution proves to be a performant algorithm for finding good AFs and AF architectures.
... The traditional or classic phenotype consists of a rectangular array where each cell is occupied by a logic gate of two inputs; the system inputs are on the left and the system outputs are on the right. This phenotype is known as gate-level Cartesian Genetic Programming (CGP) [6]. The types of available logic gates are restricted to a small group of basic logical functions. ...
... Initially, Koza (1998) expressed programs as syntax trees rather than lines of code. GP computer programs are also described by linear data structures (Brameier & Banzhaf, 2007), graphs (Miller, 2011) or data stacks of instructions (Perkis, 1994). As the interest of this paper is to obtain closed-form solutions, the tree-based syntax is preferred over other representations. ...
Research
Full-text available
In queuing theory, closed-form expressions for key performance metrics (such as the waiting time distribution, numbers of customers in the system, etc.) are useful as they show how the performance of a queuing system depends on the system parameters. Unfortunately, many queuing systems prohibit the derivation of closed-form expressions. Alternatively, mathematical approximations or simulation approaches are very useful, but they fail to give fundamental insight into the functional relationship between system parameters and the performance measures of a queuing system. This paper proposes a data-driven approach to obtain closed-form expressions for key performance metrics by symbolic regression. Searching the mathematical expressions space is performed by genetic programming, an evolutionary algorithm variant. Data sets are created by selecting system parameters for a variety of single node queuing systems and obtaining the key performance metrics by simulation when these metrics are not derivable. Three different sampling techniques are used for selecting parameters: single random sampling, stratified sampling, and systematic sampling. This research shows that for the M/M/1, M/G/1, and M/M/s queuing systems, genetic programming is able to obtain exact performance metrics. Prior knowledge, such as the heavy-traffic behavior, can improve the speed of convergence when for example this behavior is implemented in the form of an explanatory variable. Furthermore, it is shown that none of the sampling techniques resulted in improving the speed of convergence. For the M/G/s queue, genetic programming is able to find accurate approximations for some performance metrics when using prior knowledge of the heavy traffic behavior and the probability of waiting.
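For context, the M/M/1 metrics that the paper reports genetic programming recovering exactly have well-known textbook closed forms, summarised in the sketch below (with arbitrary example arrival and service rates).

```python
# Reference: textbook closed-form performance metrics of the M/M/1 queue
# (arrival rate lam, service rate mu, utilisation rho = lam/mu < 1). These
# are the kinds of expressions the paper reports GP can recover exactly.

def mm1_metrics(lam: float, mu: float) -> dict:
    assert lam < mu, "queue must be stable (rho < 1)"
    rho = lam / mu
    return {
        "utilisation rho": rho,
        "L  (mean number in system)": rho / (1 - rho),
        "Lq (mean number in queue)": rho**2 / (1 - rho),
        "W  (mean time in system)": 1 / (mu - lam),
        "Wq (mean waiting time)": rho / (mu - lam),
    }

for name, value in mm1_metrics(lam=0.8, mu=1.0).items():
    print(f"{name}: {value:.3f}")
```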
... GP uses evolutionary operators -crossover, mutation, and selection, to change the individual encoding and generate better offspring for searching a solution in the mathematical expression space. Various GPs employ different individual encodings to represent mathematical equations, such as tree-encoded GPs [7,26,35,38,45], graph-encoded GPs [37,44], and linearly encoded GPs [3,14,31]. For the mathematical expression space, the presence of real constants accounts for a significant portion of the size of the space. ...
Preprint
Full-text available
Genetic programming (GP) is a commonly used approach to solve symbolic regression (SR) problems. Compared with the machine learning or deep learning methods that depend on a pre-defined model and a training dataset for solving SR problems, GP is more focused on finding the solution in a search space. Although GP has good performance on large-scale benchmarks, it randomly transforms individuals to search for results without taking advantage of the characteristics of the dataset. As a result, the search process of GP is usually slow, and the final results can be unstable. To guide GP by these characteristics, we propose a new method for SR, called Taylor genetic programming (TaylorGP) (Code and appendix at https://kgae-cup.github.io/TaylorGP/). TaylorGP leverages a Taylor polynomial to approximate the symbolic equation that fits the dataset. It also utilizes the Taylor polynomial to extract the features of the symbolic equation: low-order polynomial discrimination, variable separability, boundary, monotonicity, and parity. GP is enhanced by these Taylor polynomial techniques. Experiments are conducted on three kinds of benchmarks: classical SR, machine learning, and physics. The experimental results show that TaylorGP not only has higher accuracy than the nine baseline methods, but also is faster in finding stable results.
... SRNet is an evolutionary computing algorithm. In each evolution, SRNet first leverages Cartesian Genetic Programming (CGP) [19,20] to find each layer's mathematical function f_i(h_{i-1}) of the previous layer's output h_{i-1}. It then uses the Newton-Raphson method [28] (or the L-BFGS method [16]) for few (or many) variables to obtain w_i and b_i so that w_i f_i(h_{i-1}) + b_i approximates the output h_i of the layer in a NN. ...
Preprint
Full-text available
Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representation) of a neural network. A succinct and explicit mathematical representation of a NN model could improve the understanding and interpretation of its behaviors. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of a NN. SRNet creates a Cartesian genetic programming representation (NNCGP) to represent the hidden semantics of a single layer in a NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method uses a (1+λ) evolutionary strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet not only can reveal the complex relationships between each layer of a NN but also can extract the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet has higher interpolation accuracy and tends to approximate the real model on the practical dataset.
... Other relevant future directions include, for instance: 1) testing our approach on different (and more challenging) tasks; 2) applying techniques to allow SN P systems to work with non-integer inputs; 3) evolving SN P systems that, instead of using the number of spikes for each neuron, use the difference between two spikes as the output, as proposed in [2]; 4) extend our approach to evolve also the number of timesteps allowed in the computation; and 5) test novel approaches for the neuro-evolution of SN P systems, such as Cartesian genetic programming [45]. ...
Conference Paper
Membrane computing is a discipline that aims to perform computation by mimicking nature at the cellular level. Spiking Neural P (in short, SN P) systems are a subset of membrane computing methodologies that combine spiking neurons with membrane computing techniques, where “P” means that the system is intrinsically parallel. While these methodologies are very powerful, being able to simulate a Turing machine with only few neurons, their design is time-consuming and it can only be handled by experts in the field, that have an in-depth knowledge of such systems. In this work, we use the Neuroevolution of Augmenting Topologies (NEAT) algorithm, usually employed to evolve multi-layer perceptrons and recurrent neural networks, to evolve SN P systems. Unlike existing approaches for the automatic design of SN P systems, NEAT provides high flexibility in the type of SN P systems, removing the need to specify a great part of the system. To test the proposed method, we evolve Spiking Neural P systems as policies for two classic control tasks from OpenAI Gym. The experimental results show that our method is able to generate efficient (yet extremely simple) Spiking Neural P systems that can solve the two tasks. A further analysis shows that the evolved systems act on the environment by performing a kind of “if-then-else” reasoning.
... For example, in [26], CGP was modified to allow levels forward in order to model the feedback loops of sequential circuits. Cartesian Genetic Programming (CGP) is a GP variant widely used in EHW [27]. CGP uses directed acyclic graphs to represent its phenotype [28]. ...
Article
Full-text available
The evolution of complex circuits remains a challenge for the Evolvable Hardware field in spite of much effort. There are two major issues: the amount of testing required and the low evolvability of representation structures to handle complex circuitry, at least partially due to the destructive effects of genetic operators. A 64-bit × 64-bit add-shift multiplier circuit modelled at register-transfer level in SystemVerilog would require approximately 33,200 gates when synthesized using the Yosys Open SYnthesis Suite tool. This enormous gate count makes evolving such a circuit at the gate level difficult. We use Grammatical Evolution (GE) and SystemVerilog, a hardware description language (HDL), to evolve fully functional parameterized Adder, Multiplier, Selective Parity and Up–Down Counter circuits at a more abstract level than the gate level, namely the register-transfer level. Parameterized modules have the additional benefit of not requiring a re-run of evolutionary experiments if multiple instances with different input sizes are required. For example, 64-bit × 64-bit and 128-bit × 128-bit multipliers, etc., can be instantiated from a fully evolved, functional and parameterized N-bit × N-bit multiplier. The Adder (6.4×), Multiplier (10.7×) and Selective Parity (6.7×) circuits are substantially larger than the current state of the art for evolutionary approaches. We are able to scale so dramatically because of the use of an HDL, which permits us to operate at the register-transfer level. Furthermore, we adopt a well-known technique for reducing testing from digital circuit design known as corner case testing. Skilled circuit designers rely on this to avoid time-consuming exhaustive testing. We demonstrate a simple way to identify and use corner cases for evolutionary testing and show that it enables the generation of massively complex circuits. All circuits were successfully evolved without resorting to the use of any standard decomposition methods, due to our ability to use programming constructs and operators available in SystemVerilog.
... Markov Brains, like NEAT (Stanley & Miikkulainen, 2002) or Cartesian genetic programming (Miller & Harding, 2008), build a network of computational components that can read from inputs and shared hidden states and write to outputs and recurrent shared hidden states (i.e., the hidden states that will be used on the next update). Mutations to the substrate can change connections between computational components or alter the types and functions of the computational components (for a detailed review, see Hintze et al., 2017). ...
Article
Full-text available
Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previous black boxes more transparent. Here we extend R to not only identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as a genetic algorithm.
Article
Full-text available
A typical machine learning development cycle maximizes performance during model training and then minimizes the memory and area footprint of the trained model for deployment on processing cores, graphics processing units, microcontrollers or custom hardware accelerators. However, this becomes increasingly difficult as machine learning models grow larger and more complex. Here we report a methodology for automatically generating predictor circuits for the classification of tabular data. The approach offers comparable prediction performance to conventional machine learning techniques while using substantially fewer hardware resources and less power. We use an evolutionary algorithm to search over the space of logic gates and automatically generate a classifier circuit with maximized training prediction accuracy, which consists of no more than 300 logic gates. When simulated as a silicon chip, our tiny classifiers use 8–18 times less area and 4–8 times less power than the best-performing machine learning baseline. When implemented as a low-cost chip on a flexible substrate, they occupy 10–75 times less area, consume 13–75 times less power and have 6 times better yield than the most hardware-efficient ML baseline.
Conference Paper
In recent years, the rapid advances in neural networks for Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs), able to substantially improve the state-of-the-art in many NLP tasks, such as question answering and text summarization. Among them, one particularly interesting application is automatic code generation based only on the problem description. However, it has been shown that even the most effective LLMs available often fail to produce correct code. To address this issue, we propose an evolutionary-based approach using Genetic Improvement (GI) to improve the code generated by an LLM using a collection of user-provided test cases. Specifically, we employ Grammatical Evolution (GE) using a grammar that we automatically specialize—starting from a general one—for the output of the LLM. We test 25 different problems and 5 different LLMs, showing that the proposed method is able to improve in a statistically significant way the code generated by LLMs. This is a first step in showing that the combination of LLMs and evolutionary techniques can be a fruitful avenue of research.
Chapter
We survey eight recent works by our group, involving the successful blending of evolutionary algorithms with machine learning and deep learning: Binary and Multinomial Classification through Evolutionary Symbolic Regression, Classy Ensemble: A Novel Ensemble Algorithm for Classification, EC-KitY: Evolutionary Computation Tool Kit in Python, Evolution of Activation Functions for Deep Learning-Based Image Classification, Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution, An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks, Foiling Explanations in Deep Neural Networks, Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors.
Chapter
We consider the problem of optimizing a controller for agents whose observation and action spaces are continuous, i.e., where the controller is a multivariate real function f: R^n → R^m. We use genetic programming (GP) for solving this optimization problem. Namely, we employ a multi-tree-based GP variant, where a candidate solution is an array of m trees, each encoding a univariate function of the agent observation. We compare this form of optimization against the more common one where the controller is a multi-layer perceptron, with a predefined topology, whose weights are optimized through (neuro)evolution (NE). Moreover, we consider an evolutionary algorithm, GraphEA, that directly evolves graphs, each having n input nodes and m output nodes. We apply these three approaches to the case of simulated modular soft robots, where a robot is an aggregation of identical soft modules, each employing a controller that processes the local observation and produces the local action. We find that, in our scenario, multi-tree-based GP is competitive with NE and tends to produce different behaviors. We then experimentally investigate the possibility of optimizing a controller using another, pre-optimized one, as teacher, i.e., we realize a form of offline imitation learning. We consider all the teacher-learner pairs resulting from the three evolutionary algorithms and find that NE is a better learner than GP and GraphEA. However, controllers obtained through offline imitation learning are far less effective than those obtained through direct evolution. We hypothesize that this gap in effectiveness may be explained by the possibility, given by direct evolution, of exploring during the simulations a larger portion of the observation-action space.
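A minimal sketch of the multi-tree controller representation described above is given below: an array of m expression trees, each mapping the n-dimensional observation to one scalar of the action. The nested-tuple tree encoding and the primitive set are illustrative assumptions, not the chapter's implementation.

```python
# Sketch: a multi-tree controller f: R^n -> R^m as one expression tree per
# action dimension. Tree encoding and primitives are illustrative choices.
import math

PRIMITIVES = {"+": lambda a, b: a + b,
              "*": lambda a, b: a * b,
              "tanh": lambda a: math.tanh(a)}

def eval_tree(tree, obs):
    if isinstance(tree, tuple):                       # (op, child, ...)
        op, *children = tree
        return PRIMITIVES[op](*(eval_tree(c, obs) for c in children))
    if isinstance(tree, str) and tree.startswith("x"):
        return obs[int(tree[1:])]                     # observation variable x0, x1, ...
    return float(tree)                                # numeric constant

def controller(trees, obs):
    """Evaluate all m trees on the n-dimensional observation."""
    return [eval_tree(t, obs) for t in trees]

trees = [("tanh", ("+", "x0", "x1")),                 # action[0]
         ("*", "x1", 0.5)]                            # action[1]
print(controller(trees, obs=[0.2, -0.4]))
```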
Article
Given the stringent requirements of energy efficiency for Internet-of-Things edge devices, approximate multipliers, as a basic component of many processors and accelerators, have been constantly proposed and studied for decades, especially in error-resilient applications. The computation error and energy efficiency largely depend on how and where the approximation is introduced into a design. Thus, this article aims to provide a comprehensive review of the approximation techniques in multiplier designs ranging from algorithms and architectures to circuits. We have implemented representative approximate multiplier designs in each category to understand the impact of the design techniques on accuracy and efficiency. The designs can then be effectively deployed in high-level applications, such as machine learning, to gain energy efficiency at the cost of slight accuracy loss.
Article
Symbolic Regression searches for a parametric model, together with the optimal value of its parameters, that best fits a set of samples to a measured target. The desired solution strikes a balance between accuracy and interpretability. Commonly there is no constraint on how the functions are composed in the expression, nor on where the numerical parameters are placed; this can potentially lead to expressions that require a nonlinear optimization to find the optimal parameters. The representation called Interaction-Transformation alleviates this problem by describing expressions as a linear regression of the composition of functions applied to the interaction of the variables. One advantage is that any model that follows this representation is linear in its parameters, allowing an efficient computation. More recently, this representation was extended by applying a univariate function to the rational function of two Interaction-Transformation expressions, called Transformation-Interaction-Rational (TIR). This representation was shown to be competitive with the current Symbolic Regression literature. In this paper, we make a detailed analysis of these results using the SRBench benchmark. For this purpose, we split the datasets into different categories to understand the algorithm's behavior in different settings. We also test the use of nonlinear optimization to adjust the numerical parameters instead of Ordinary Least Squares. We find through the experiments that TIR has some difficulties handling high-dimensional and noisy datasets, especially when most of the variables are composed of random noise. These results point to new directions for improving the evolutionary search of TIR expressions.
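Under my reading of the abstract, the two representations can be summarised as follows (notation assumed rather than taken from the paper): an Interaction-Transformation expression is a weighted sum of univariate transformations applied to monomial interactions of the variables, and TIR wraps a univariate function around the ratio of two such expressions.

```latex
% Hedged sketch of the representations; symbols are assumptions, not the paper's notation.
IT(\mathbf{x}) = w_0 + \sum_{i=1}^{k} w_i \, t_i\!\Big(\prod_{j=1}^{d} x_j^{e_{ij}}\Big),
\qquad
TIR(\mathbf{x}) = g\!\left(\frac{IT_p(\mathbf{x})}{1 + IT_q(\mathbf{x})}\right)
% with integer exponents e_{ij}, weights w_i fitted linearly (e.g. by least squares),
% univariate transformations t_i, and an outer univariate function g.
```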
Article
Logic Synthesis From Partial Specifications (LSFPS) is the problem of finding the hardware implementation of a Boolean function from a partial knowledge of its care set. The elements missing from the specifications are named don’t knows. The exact solution of LSFPS is the minimum size circuit of the corresponding problem in which the don’t knows set is void. Hence, in addition to the traditional objective of size minimization, the goal is to maximize the test accuracy, i.e., the accuracy of the circuit when evaluated over a subset of the don’t knows. This problem is relevant because efficient solutions can lead to hardware friendly machine learning models, not relying on black-box approaches. Indeed, LSFPS maps directly to the problem of the automatic generation of optimized topologies for Binarized Neural Networks. Furthermore, combining the exact solution with modern logic synthesis techniques would unlock unprecedented optimization capabilities. Previous works proved the effectiveness of approximate logic synthesis (ALS) for designing circuits with high test accuracy. Nonetheless, these methods sacrifice accuracy on the specifications, which banishes them from the legitimate candidates for LSFPS. In this paper, we propose accuracy recovery, a procedure to map an approximate version of the circuit to a new one that satisfies the exact functionality of the specifications. The proposed approach relies on an extension of a disjoint support decomposition algorithm. Relative experiments on the IWLS2020 benchmarks show that, on average, the addition of the designed decomposition to a synthesis flow reduces by 17.38% the number of gates and by 12.02% the depth. The usage of accuracy recovery, based on such a decomposition, yields a 95.73% accuracy in the binary MNIST problem, beating the state-of-the-art in ALS of 92.76%.
Article
Deep neuroevolution and deep Reinforcement Learning have received a lot of attention in recent years. Some works have compared them, highlighting their pros and cons, but an emerging trend combines them so as to benefit from the best of both worlds. In this paper, we provide a survey of this emerging trend by organizing the literature into related groups of works and casting all the existing combinations in each group into a generic framework. We systematically cover all easily available papers irrespective of their publication status, focusing on the combination mechanisms rather than on the experimental results. In total, we cover 45 algorithms more recent than 2017. We hope this effort will favor the growth of the domain by facilitating the understanding of the relationships between the methods, leading to deeper analyses, outlining missing useful comparisons and suggesting new combinations of mechanisms.
Chapter
With a growing availability of ambient computing power as well as sensor data, networked systems are emerging in all areas of daily life. Coordination and optimization in complex cyber-physical systems demand decentralized and self-organizing algorithms to cope with problem size and distributed information availability. Self-organization often relies on emergent behavior: local observations and decisions aggregate into some global behavior without any apparent, explicitly programmed rule. Systematically designing algorithms with emergent behavior suitable for a new orchestration or optimization task is, at best, tedious and error-prone. Appropriate design patterns are scarce so far. It is demonstrated that a machine learning approach based on Cartesian Genetic Programming is capable of learning the emergent mechanisms that are necessary for swarm-based optimization. Targeted emergent behavior can be evolved by evolution strategies. The learned swarm behavior is already significantly better than plain random search. The encountered pitfalls as well as remaining challenges on the research agenda are discussed in detail. An additional fitness landscape analysis gives insight into obstructions during evolution and clues for future improvements.
Article
Full-text available
We have recently shown how program synthesis (PS), or the concept of "self-writing code", can generate novel algorithms that solve the vibrational Schrödinger equation, providing approximations to the allowed wave functions for bound, one-dimensional (1-D) potential energy surfaces (PESs). The resulting algorithms use a grid-based representation of the underlying wave function ψ(x) and PES V(x), providing codes which represent approximations to standard discrete variable representation (DVR) methods. In this Article, we show how this inductive PS strategy can be improved and modified to enable prediction of both vibrational wave functions and energy eigenvalues of representative model PESs (both 1-D and multidimensional). We show that PS can generate algorithms that offer some improvements in energy eigenvalue accuracy over standard DVR schemes; however, we also demonstrate that PS can identify accurate numerical methods that exhibit desirable computational features, such as employing very sparse (tridiagonal) matrices. The resulting PS-generated algorithms are initially developed and tested for 1-D vibrational eigenproblems, before solution of multidimensional problems is demonstrated; we find that our new PS-generated algorithms can reduce calculation times for grid-based eigenvector computation by an order of magnitude or more. More generally, with further development and optimization, we anticipate that PS-generated algorithms based on effective Hamiltonian approximations, such as those proposed here, could be useful in direct simulations of quantum dynamics via wave function propagation and evaluation of molecular electronic structure.
Article
Based on a solid mathematical background, this paper proposes a method for Symbolic Regression that enables the extraction of mathematical expressions from a dataset. Contrary to other approaches, such as Genetic Programming, the proposed method is deterministic and, consequently, does not require the creation of a population of initial solutions. Instead, a simple expression is grown until it fits the data. This method has been compared with four well-known Symbolic Regression techniques on a large number of datasets. As a result, on average, the proposed method returns better performance than the other techniques, with the advantage of returning mathematical expressions that can be easily used by different systems. Additionally, this method makes it possible to establish a threshold on the complexity of the expressions generated, i.e., the system can return mathematical expressions that are easily analyzed by the user, as opposed to other techniques that return very large expressions.
Chapter
Vectorial Genetic Programming (GP) is a young branch of GP, where the training data for symbolic models not only include regular, scalar variables, but also allow vector variables. The model's abilities are also extended to allow operations on vectors, where most vector operations are simply performed component-wise. Additionally, new aggregation functions are introduced that reduce vectors into scalars, allowing the model to extract information from vectors by itself, thus eliminating the need for the prior feature engineering that is otherwise necessary for traditional GP to utilize vector data. And due to the white-box nature of symbolic models, the operations on vectors can be as easily interpreted as regular operations on scalars. In this paper, we extend the ideas of vectorial GP of previous authors, and propose a grammar-based approach for vectorial GP that can deal with the various challenges noted. To evaluate grammar-based vectorial GP, we have designed new benchmark functions that contain both scalar and vector variables, and show that traditional GP falls short very quickly for certain scenarios. Grammar-based vectorial GP, however, is able to solve all presented benchmarks. Keywords: Symbolic regression, Genetic programming, Vectorial, Grammar
Conference Paper
Full-text available
Cartesian Genetic Programming (CGP) is a well-known form of Genetic Programming developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer address-based genetic representation of a program in the form of a directed graph. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks). It can handle cyclic or acyclic graphs. In a number of studies, CGP has been shown to be comparable in efficiency to other GP techniques. It is also very simple to program. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent-CGP), encoding artificial neural networks and automatically defined functions (modular CGP). SMCGP uses functions that cause the evolved programs to change themselves as a function of time. This makes it possible to find general solutions to classes of problems and mathematical algorithms (e.g. arbitrary parity, n-bit binary addition, sequences that provably compute pi and e to arbitrary precision, and so on). Recurrent-CGP allows evolution to create programs which contain cyclic, as well as acyclic, connections. This enables application to tasks which require internal states or memory. It also allows CGP to create recursive equations. CGP-encoded artificial neural networks represent a powerful training method for neural networks. This is because CGP is able to simultaneously evolve the network's connection weights, topology and neuron transfer functions. It is also compatible with recurrent-CGP, enabling the evolution of recurrent neural networks. The tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains. It will present a live demo of how the open-source cgplibrary can be used.
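The integer address-based representation described above can be illustrated with a small decoder and evaluator: each node is encoded by a function gene plus connection genes addressing earlier nodes or program inputs, and output genes pick which addresses feed the outputs. The following is a stripped-down sketch of classic feed-forward CGP in Python, assuming a fixed arity of two and a toy function set; it is not the cgplibrary mentioned in the abstract.

```python
import operator

# Minimal sketch of classic (acyclic) CGP genotype decoding and evaluation.
# A genotype is a flat list of integer genes: for each node, one function gene
# followed by ARITY connection genes (addresses of earlier nodes or inputs),
# plus one output gene per program output.

FUNCTIONS = [operator.add, operator.sub, operator.mul,
             lambda a, b: a / b if b != 0 else 1.0]   # protected division
ARITY = 2

def evaluate_cgp(genotype, inputs, n_nodes, n_outputs):
    genes_per_node = 1 + ARITY
    values = list(inputs)                    # addresses 0..n_inputs-1 hold the inputs
    for node in range(n_nodes):
        base = node * genes_per_node
        f = FUNCTIONS[genotype[base]]
        args = [values[genotype[base + 1 + k]] for k in range(ARITY)]
        values.append(f(*args))              # this node gets address n_inputs + node
    out_base = n_nodes * genes_per_node
    return [values[genotype[out_base + o]] for o in range(n_outputs)]

# Example: 2 inputs, 3 nodes, 1 output.
# Node 0: add(x0, x1) -> address 2; node 1: mul(addr 2, x0) -> address 3;
# node 2: sub(addr 3, x1) -> address 4; the output gene reads address 4.
genotype = [0, 0, 1,   2, 2, 0,   1, 3, 1,   4]
print(evaluate_cgp(genotype, [3.0, 4.0], n_nodes=3, n_outputs=1))   # [(3+4)*3 - 4] = [17.0]
```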
Article
Full-text available
A biologically inspired developmental model targeted at hardware implementation (off-shelf FPGA) is proposed which exhibits extremely robust transient fault-tolerant capability in the software simulation of the experimental application. In a 6x6 cell French Flag, some individuals were discovered using evolution that have the ability to "recover" themselves from almost any kinds of transient faults, even in the worst case of only one "live" cell remaining. All cells in this model have identical genotype (physical structures), and only differ in internal states.
Article
Full-text available
Techniques from the field of Evolutionary Computation are used to evolve a wide variety of aesthetically pleasing images using Cartesian Genetic Programming (CGP). The challenges that arise from employing a fitness function based on aesthetics, and the benefits that CGP can provide, are investigated and discussed. A significant piece of software was developed that places a focus on providing the user with efficient control over the evolutionary process. Several 'non-user' fitness functions that assess the phenotypes and genotypes of the chromosomes were also employed with varying success. To improve these results, methods of maintaining diversity within the population that take advantage of the neutrality of CGP are implemented and tested.
Article
Full-text available
We argue that there is an upper limit on the complexity of software that can be constructed using current methods. Furthermore, this limit is orders of magnitude smaller than the complexity of living systems. We argue that many of the advantages of autonomic computing will not be possible unless fundamental aspects of living systems are incorporated into a new paradigm of software construction. Truly self-healing and maintaining software will require methods of construction that mimic the biological development of multi-cellular organisms. We demonstrate a prototype system which is capable of autonomous repair and regeneration without using engineered methods. A method for evolving programs that construct multi-cellular structures (organisms) is described.
Conference Paper
Full-text available
As is typical in evolutionary algorithms, fitness evaluation in GP takes the majority of the computational effort. In this paper we demonstrate the use of the Graphics Processing Unit (GPU) to accelerate the evaluation of individuals. We show that for both binary and floating point based data types, it is possible to get speed increases of several hundred times over a typical CPU implementation. This allows for evaluation of many thousands of fitness cases, and hence should enable more ambitious solutions to be evolved using GP.
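The speed-up reported above comes from evaluating an individual over all fitness cases in a data-parallel fashion. The sketch below illustrates that idea using NumPy on the CPU as a stand-in for the GPU kernels described in the paper; the toy candidate program and target are hypothetical.

```python
import numpy as np

# Data-parallel fitness evaluation: instead of looping over fitness cases one at a
# time, the same candidate program is applied to whole arrays of cases at once.

def candidate_program(x0, x1):
    # toy evolved program: operates elementwise on arrays of fitness cases
    return x0 * x0 + x1

rng = np.random.default_rng(0)
n_cases = 100_000
x0 = rng.uniform(-1, 1, n_cases)
x1 = rng.uniform(-1, 1, n_cases)
target = x0 * x0 + 0.9 * x1            # hypothetical target values

predictions = candidate_program(x0, x1)  # all fitness cases evaluated in one pass
fitness = np.mean((predictions - target) ** 2)
print(fitness)
```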
Chapter
Full-text available
Structure-based virtual screening is a technology increasingly used in drug discovery. Although successful at estimating binding modes for input ligands, these technologies are less successful at ranking true hits correctly by binding free energy. This chapter presents the automated removal of false positives from virtual hit sets, by evolving a post-docking filter using Cartesian Genetic Programming (CGP). We also investigate characteristics of CGP for this problem and confirm the absence of bloat and the usefulness of neutral drift.
Chapter
Full-text available
Self-modifying Cartesian genetic programming (SMCGP) is a general-purpose, graph-based form of genetic programming founded on Cartesian genetic programming. In addition to the usual computational functions, it includes functions that can modify the program encoded in the genotype. SMCGP has high scalability in that evolved programs encoded in the genotype can be iterated to produce an infinite sequence of programs (phenotypes). It also allows programs to acquire more inputs and produce more outputs during iterations. Another attractive feature of SMCGP is that it facilitates the evolution of provably general solutions to various computational problems.
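The core idea, that some primitives edit the program itself so that iterating the genotype yields a sequence of phenotypes, can be caricatured as follows. This is a deliberately simplified, hypothetical sketch; the operator names and semantics are not those of SMCGP.

```python
import copy

# Conceptual caricature of self-modification: executing the current phenotype also
# produces an edited copy of itself, so repeated iteration gives a growing sequence
# of programs. Operators ("ADD", "MUL", "DUP") are illustrative assumptions.

def run_and_modify(program, x):
    """Execute one phenotype; self-modification ops edit a copy used next iteration."""
    next_program = copy.deepcopy(program)
    acc = x
    for i, (op, arg) in enumerate(program):
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "DUP":            # self-modification: duplicate the previous node
            if i > 0:
                next_program.append(program[i - 1])
    return acc, next_program

program = [("ADD", 1), ("MUL", 2), ("DUP", 0)]
x = 1
for iteration in range(4):
    y, program = run_and_modify(program, x)
    print(iteration, y, len(program))   # output of each phenotype and its growing length
```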
Article
Full-text available
An evolutionary algorithm is used as an engine for discovering new designs of digital circuits, particularly arithmetic functions. These designs are often radically different from those produced by top-down, human, rule-based approaches. It is argued that by studying evolved designs of gradually increasing scale, one might be able to discern new, efficient, and generalizable principles of design. The ripple-carry adder principle is one such principle that can be inferred from evolved designs for one and two-bit adders. Novel evolved designs for three-bit binary multipliers are given that are 20% more efficient (in terms of number of two-input gates used) than the most efficient known conventional design.
Conference Paper
Full-text available
A biologically inspired developmental model targeted at hardware implementation (off-shelf FPGA) is proposed which exhibits extremely robust transient fault-tolerant capability. All cells in this model have identical genotype (physical structures), and only differ in internal states. In a 3x3 cell digital organism, some individuals which implement a 2-bit multiplier were discovered using evolution that have the ability to “recover” themselves from almost any kinds of transient faults. An intrinsic evolvable hardware platform based on FPGA was realized to speed up the evolution process.
Article
Full-text available
In a previous work it was argued that by studying evolved designs of gradually increasing scale, one might be able to discern new, efficient, and generalisable principles of design. These ideas are tested in the context of designing digital circuits, particularly arithmetic circuits. This process of discovery is seen as a principle extraction loop in which the evolved data is analysed both phenotypically and genotypically by processes of data mining and landscape analysis. The information extracted is then fed back into the evolutionary algorithm to enhance its search capabilities and hence increase the likelihood of identifying new principles which explain how to build systems which are too large to evolve.
Conference Paper
Full-text available
The paper introduces a new approach to automatic design of image filters for a given type of noise. The approach employs evolvable hardware at simplified functional level and produces circuits that outperform conventional designs. If an image is available both with and without noise, the whole process of filter design can be done automatically, without influence of a designer.
Conference Paper
Full-text available
The design of a new biologically inspired artificial developmental system is described in this paper. In general, developmental systems converge slower and are more computationally expensive than direct evolution. However, the performance trends of development indicate that the full benefit of development will arise with larger and more complex problems that exhibit some sort of regularity in their structure: thus, the aim is to evolve larger electronic systems through the modularity allowed by development. The hope is that the proposed artificial developmental system will exhibit adaptivity and fault tolerance in the future. The cell signalling and the system of Gene Regulatory Networks present in biological organisms are modelled in our developmental system, and tailored for tackling real world problems on electronic hardware. For the first time, a Gene Regulatory Network system is successfully shown to develop the complete circuit structure of a desired digital circuit without the help of another mechanism or any problem specific structuring. Experiments are presented that show the modular behaviour of the developmental system, as well as its ability to solve non-modular circuit problems.
Conference Paper
Full-text available
An intrinsic evolvable hardware platform was realized to accelerate the evolutionary search process of a biologically inspired developmental model targeted at off-shelf FPGA implementation. The model has the capability of exhibiting very large transient fault-tolerance. The evolved circuits make up a digital "organism" from identical cells which only differ in internal states. Organisms implementing a 2-bit multiplier were evolved that can "recover" from almost any kinds of transient faults. This paper focuses on the design concerns and details of the evolvable hardware system, including the digital organism/cell and the intrinsic FPGA-based evolvable hardware platform.
Conference Paper
Full-text available
In this paper we present a new chromosome representation for evolving digital circuits. The representation is based very closely on the chip architecture of the Xilinx 6216 FPGA. We examine the effectiveness of evolving circuit functionality by using randomly chosen examples taken from the truth table. We consider the merits of a cell architecture in which functional cells alternate with routing cells and compare this with an architecture in which any cell can implement a function or be merely used for routing signals. It is noteworthy that the presence of elitism significantly improves the Genetic Algorithm performance.
Conference Paper
Full-text available
We present a method for constructing electronic circuits that uses analogues of biological multi-cellular development, genetic regulatory networks, and transcription and translation processes to build circuits. We show how small circuits may be evolved and how they may be reused to build larger circuits. We also demonstrate that the artificial ‘organisms’ are capable of regeneration so that circuit functionality can be recovered after damage.
Conference Paper
Full-text available
This paper presents a method for co-evolving neuro-inspired developmental programs for playing checkers. Each player’s program is represented by seven chromosomes encoding digital circuits, using a form of genetic programming, called Cartesian Genetic Programming (CGP). The neural network that occurs by running the genetic programs has a highly dynamic morphology in which neurons grow, and die, and neurite branches together with synaptic connections form and change in response to situations encountered on the checkers board. The results show that, after a number of generations, by playing each other the agents play much better than those from earlier generations. Such learning abilities are encoded at a genetic level rather than at the phenotype level of neural connections.
Conference Paper
Full-text available
Embedded Cartesian Genetic Programming (ECGP) is a form of Genetic Programming based on an acyclic directed graph representation. In this paper we investigate the use of ECGP together with a technique called Product Reduction (PR) to reduce the time required to evolve a digital multiplier. The results are compared with Cartesian Genetic Programming (CGP) with and without PR and show that ECGP improves evolvability and also that PR improves the performance of both techniques by up to eight times on the digital multiplier problems tested.
Conference Paper
Full-text available
A review is given of approaches to growing neural networks and electronic circuits. A new method for growing graphs and circuits using a developmental process is discussed. The method is inspired by the view that the cell is the basic unit of biology. Programs that construct circuits are evolved to build a sequence of digital circuits at user-specified iterations. The programs can be run for an arbitrary number of iterations, so circuits of huge size could be created that could not be evolved. It is shown that the circuit-building programs are capable of correctly predicting the next circuit in a sequence of larger even-parity functions. The new method, however, finds building specific circuits more difficult than a non-developmental method.
Conference Paper
Full-text available
The exploitation of the physical characteristics has already been demonstrated in the intrinsic evolution of electronic circuits. This paper is an initial attempt at creating a world in which "physics" can be exploited in simulation. As a starting point we investigate a model of gate-like components with added noise. We refer to this as a kind of messiness. The principal idea behind these messy gates is that artificial evolution makes a virtue of the untidiness. We are ultimately trying to study the question: What kind of components should we use in artificial evolution? Several experiments are described that show that the messy circuits have a natural robustness to noise, as well as an implicit fault-tolerance. In addition, it was relatively easy for evolution to generate novel circuits that were surprisingly efficient.
Conference Paper
Full-text available
We analyze and compare four different evolvable hardware approaches for classification tasks: an approach based on a programmable logic array architecture, an approach based on two-phase incremental evolution, a generic logic architecture with automatic definition of building blocks, and a specialized coarse-grained architecture with predefined building blocks. We base the comparison on a common data set and report on classification accuracy and training effort. The results show that classification accuracy can be increased by using modular, specialized classifier architectures. Furthermore, function level evolution, either with predefined functions derived from domain-specific knowledge or with functions that are automatically defined during evolution, also gives higher accuracy. Incremental and function level evolution reduce the search space and thus shorten the training effort.
Conference Paper
Full-text available
The paper presents for the first time automatic module acquisition and evolution within the graph based Cartesian Genetic Programming method. The method has been tested on a set of even parity problems and compared with Cartesian Genetic Programming without modules. Results are given that show that the new modular method evolves solutions up to 20 times quicker than the original non-modular method and that the speedup is more pronounced on larger problems. Analysis of some of the evolved modules shows that often they are lower order parity functions. Prospects for further improvement of the method are discussed.
Conference Paper
Full-text available
Prime generating polynomial functions are known that can produce sequences of prime numbers (e.g. Euler polynomials). However, polynomials which produce consecutive prime numbers are much more difficult to obtain. In this paper, we propose approaches for both these problems. The first uses Cartesian Genetic Programming (CGP) to directly evolve integer based prime-prediction mathematical formulae. The second uses multi-chromosome CGP to evolve a digital circuit, which represents a polynomial. We evolved polynomials that can generate 43 primes in a row. We also found functions capable of producing the first 40 consecutive prime numbers, and a number of digital circuits capable of predicting up to 208 consecutive prime numbers, given consecutive input values. Many of the formulae have been previously unknown.
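The figure of merit used above, the number of consecutive prime values a formula produces, is easy to check. The snippet below counts it for Euler's classic polynomial n^2 + n + 41 (prime for n = 0 to 39) as a known baseline; the evolved formulae and circuits themselves are not reproduced here.

```python
def is_prime(n):
    """Simple trial-division primality test, adequate for small values."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def consecutive_primes(poly):
    """Count how many consecutive values poly(0), poly(1), ... are prime."""
    n = 0
    while is_prime(poly(n)):
        n += 1
    return n

# Euler's classic prime-generating polynomial as a baseline: primes for n = 0..39.
print(consecutive_primes(lambda n: n * n + n + 41))   # prints 40
```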
Conference Paper
Full-text available
This work is a study of neutrality in the context of Evolutionary Computation systems. In particular, we introduce the use of explicit neutrality with an integer string coding scheme to allow neutrality to be measured during evolution. We tested this method on a Boolean benchmark problem. The experimental results indicate that there is a positive relationship between neutrality and evolvability: neutrality improves evolvability. We also identify four characteristics of adaptive/neutral mutations that are associated with high evolvability. They may be the ingredients in designing effective Evolutionary Computation systems for the Boolean class problem.
Conference Paper
Full-text available
Self Modifying CGP (SMCGP) is a developmental form of Cartesian Genetic Programming (CGP). It is able to modify its own phenotype during execution of the evolved program. This is done by the inclusion of modification operators in the function set. Here we present the use of the technique on several different sequence generation and regression problems.
Conference Paper
Full-text available
Cartesian Genetic Programming is a graph-based representation that has many benefits over traditional tree-based methods, including bloat-free evolution and faster evolution through neutral search. Here, an integer-based version of the representation is applied to a traditional problem in the field: evolving an obstacle-avoiding robot controller. The technique is used to rapidly evolve controllers that work in a complex environment and with a challenging robot design. The generalisation of the robot controllers in different environments is also demonstrated. A novel fitness function based on chemical gradients is presented as a means of improving evolvability in such tasks.
Conference Paper
Full-text available
Embedded Cartesian Genetic Programming (ECGP) is an extension of Cartesian Genetic Programming (CGP) capable of acquiring, evolving and re-using partial solutions. In this paper, we apply for the first time CGP and ECGP to the ones-max and order-3 deceptive problems, which are normally associated with Genetic Algorithms. Our approach uses CGP and ECGP to evolve a sequence of commands for a tape-head, which produces an arbitrary length binary string on a piece of tape. Computational effort figures are calculated for CGP and ECGP and our results compare favourably with those of Genetic Algorithms.
Conference Paper
Full-text available
A cell based optimization (CBO) algorithm is proposed which takes inspiration from the collective behaviour of cellular slime molds (Dictyostelium discoideum). Experiments with CBO are conducted to study the ability of simple cell-like agents to collectively manage resources across a distributed network. Cells, or agents, have only local information and can signal, move, divide, and die. Heterogeneous populations of the cells are evolved using Cartesian genetic programming (CGP). Several experiments were carried out to examine the adaptation of cells to changing user demand patterns. CBO performance was compared using various methods to change demand. The experiments showed that populations consistently evolve to produce effective solutions. The populations produce better solutions when user demand patterns fluctuate over time instead of in environments with static demand. This is a surprising result that shows that populations need to be challenged during the evolutionary process to produce good results.
Conference Paper
Full-text available
Biological neurons are extremely complex cells whose morphology grows and changes in response to the external environment. Yet, artificial neural networks (ANNs) have represented neurons as simple computational devices. It has been evident for a long time that ANNs have learning abilities that are insignificant compared with some of the simplest biological brains. We argue that we understand enough neuroscience to create much more sophisticated models. In this paper, we report on our attempts to do this. We identify and evolve seven programs that together represent a neuron which grows, post evolution, into a complete 'neurological' system. The network that occurs by running the programs has a highly dynamic morphology in which neurons grow, and die, and neurite branches together with synaptic connections form and change. We have evaluated the capability of these networks for playing the game of checkers. Our method has no board evaluation function, no explicit learning rules, and no human expertise at playing checkers is used. The learning abilities of these networks are encoded at a genetic level rather than at the phenotype level of neural connections.
Conference Paper
Full-text available
Self modifying CGP (SMCGP) is a developmental form of Cartesian genetic programming (CGP). It differs from CGP by including primitive functions which modify the program. Beginning with the evolved genotype, the self-modifying functions produce a new program (phenotype) at each iteration. In this paper we have applied it to a well known digital circuit building problem: even-parity. We show that it is easier to solve difficult parity problems with SMCGP than either with CGP or modular CGP, and that the increase in efficiency grows with problem size. More importantly, we prove that SMCGP can evolve general solutions to arbitrary-sized even parity problems.
Conference Paper
Full-text available
Polymorphic digital circuits contain ordinary and polymorphic gates. In the past, Cartesian genetic programming (CGP) has been applied to synthesize polymorphic circuits at the gate level. However, this approach is not scalable. Experimental results presented in this paper indicate that larger and more efficient polymorphic circuits can be designed by a combination of conventional design methods (such as BDD, Espresso or ABC system) and evolutionary optimization (conducted by CGP). Proposed methods are evaluated on two benchmark circuits - multiplier/sorter and parity/majority circuits of variable input size.
Conference Paper
Full-text available
Simple digital FIR filters have recently been evolved directly in the reconfigurable gate array, ignoring thus a classical method based on multiply–and–accumulate structures. This work indicates that the method is very problematic. In this paper, the gate-level approach is extended to IIR filters, a new approach is proposed to the fitness calculation based on the impulse response evaluation and a comparison is performed between the evolutionary FIR filter design utilizing a full set and a reduced set of gates. The objective of these experiments is to show that the evolutionary design of digital filters at the gate level does not produce filters that are useful in practice when linearity of filters is not guaranteed by the evolutionary design method.
Conference Paper
Full-text available
This paper presents a novel representation of Cartesian genetic programming (CGP) in which multiple networks are used in the classification of high resolution X-rays of the breast, known as mammograms. CGP networks are used in a number of different recombination strategies and results are presented for mammograms taken from the Lawrence Livermore National Laboratory database.
Conference Paper
Full-text available
Evolutionary hardware design reveals the potential to provide autonomous systems with self-adaptation properties. We first outline an architectural concept for an intrinsically evolvable embedded system that adapts to slow changes in the environment by simulated evolution, and to rapid changes in available resources by switching to preevolved alternative circuits. In the main part of the paper, we treat evolutionary circuit design as a multi-objective optimization problem and compare two multi-objective optimizers with a reference genetic algorithm. In our experiments, the best results were achieved with TSPEA2, an optimizer that prefers a single objective while trying to maintain diversity.
Conference Paper
Full-text available
A study is made to learn how multicellular organisms might have emerged from single-celled organisms. An understanding of this phenomenon might provide a better understanding of natural and man-made multicellular systems. The experiment performed is an Artificial Life simulation that uses Cartesian Genetic Programming to evolve the behaviors of individual cells. Cells have the ability to sense chemicals around them, signal, divide, and move towards or away from chemicals. Interesting group behavior emerged from the simple instruction sets used by the cells.
Conference Paper
Full-text available
Genetic Programming was first introduced by Koza using tree representation together with a crossover technique in which random sub-branches of the parents' trees are swapped to create the offspring. Later Miller and Thomson introduced Cartesian Genetic Programming, which uses directed graphs as a representation to replace the tree structures originally introduced by Koza. Cartesian Genetic Programming has been shown to perform better than traditional Genetic Programming, but it does not use crossover to create offspring; it is implemented using mutation only. In this paper a new crossover method in Genetic Programming is introduced. The new technique is based on an adaptation of the Cartesian Genetic Programming representation and is tested on two simple regression problems. It is shown that by implementing the new crossover technique, convergence is faster than that of using mutation only in the Cartesian Genetic Programming method.
Conference Paper
Full-text available
Embedded Cartesian Genetic Programming (ECGP) is an extension of the directed graph based Cartesian Genetic Programming (CGP), which is capable of automatically acquiring, evolving and re-using partial solutions in the form of modules. In this paper, we apply for the first time CGP and ECGP to the well known Lawnmower problem and to the Hierarchical-if-and-Only-if problem. The latter is normally associated with Genetic Algorithms. Computational effort figures are calculated from the results of both CGP and ECGP and our results compare favourably with other techniques.
Conference Paper
Full-text available
Embedded Cartesian Genetic Programming (ECGP) is a form of the graph based Cartesian Genetic Programming (CGP) in which modules are automatically acquired and evolved. In this paper we compare the efficiencies of the ECGP and CGP techniques on three classes of problem: digital adders, digital multipliers and digital comparators. We show that in most cases ECGP shows a substantial improvement in performance over CGP and that the computational speedup is more pronounced on larger problems.
Conference Paper
Full-text available
This paper shows that the evolutionary design of digital circuits which is conducted at the gate level is able to produce human-competitive circuits at the transistor level. In addition to standard gates, we utilize unconventional gates (such as the NAND/NOR gate and NOR/NAND gate) that consist of a few transistors but exhibit non-trivial 3-input logic functions. Novel implementations of adders and majority circuits evolved using these gates contain fewer transistors than the smallest existing implementations of these circuits. Moreover, it was shown that the use of these gates significantly improves the success rate of the search process.
Conference Paper
Full-text available
Classical Evolutionary Programming (CEP) and Fast Evolutionary Programming (FEP) have been applied to real-valued function optimisation. Both of these techniques directly evolve the real values that are the arguments of the real-valued function. In this paper we have applied a form of genetic programming called Cartesian Genetic Programming (CGP) to a number of real-valued optimisation benchmark problems. The approach we have taken is to evolve a computer program that controls a writing-head, which moves along and interacts with a finite set of symbols that are interpreted as real numbers, instead of manipulating the real numbers directly. In other studies, CGP has already been shown to benefit from a high degree of neutrality. We hope to exploit this for real-valued function optimisation problems to avoid being trapped on local optima. We have also used an extended form of CGP called Embedded CGP (ECGP) which allows the acquisition, evolution and re-use of modules. The effectiveness of CGP and ECGP are compared and contrasted with CEP and FEP on the benchmark problems. Results show that the new techniques are very effective.
Conference Paper
Full-text available
A coevolutionary competitive learning environment for two antagonistic agents is presented. The agents are controlled by a new kind of computational network based on a compartmentalised model of neurons. The genetic basis of neurons is an important [27] and neglected aspect of previous approaches. Accordingly, we have defined a collection of chromosomes representing various aspects of the neuron: soma, dendrites and axon branches, and synaptic connections. Chromosomes are represented and evolved using a form of genetic programming (GP) known as Cartesian GP. The network formed by running the chromosomal programs has a highly dynamic morphology in which neurons grow, and die, and neurite branches together with synaptic connections form and change in response to environmental interactions. The idea of this paper is to demonstrate the importance of the genetic transfer of learned experience and lifetime learning. The learning is a consequence of the complex dynamics produced as a result of interaction (coevolution) between two intelligent agents. Our results show that both agents exhibit interesting learning capabilities.
Conference Paper
Full-text available
The paper focuses on the evolution of algorithms for control of a machine in the presence of sensor faults, using Cartesian Genetic Programming. The key challenges in creating training sets and a fitness function that encourage a general solution are discussed. The evolved algorithms are analysed and discussed. Highly novel, mathematically elegant and hitherto unknown solutions were found.
Conference Paper
We are interested in engineering smart machines that enable backtracking of emergent behaviors. Our SSNNS simulator consists of hand-picked tools to explore spiking neural networks in more depth with flexibility. SSNNS is based on the Spike Response ...
Article
The development of an entire organism from a single cell is one of the most profound and awe inspiring phenomena in the whole of the natural world. The complexity of living systems itself dwarfs anything that man has produced. This is all the more the case for the processes that lead to these intricate systems. In each phase of the development of a multi-cellular being, this living system has to survive, whether stand-alone or supported by various structures and processes provided by other living systems. Organisms construct themselves, out of humble single-celled beginnings, riding waves of interaction between the information residing in their genomes - inherited from the evolutionary past of their species via their progenitors - and the resources of their environment. Permanent renewal and self-repair are natural extrapolations of developmental recipes, as is adaptation to different environmental conditions. Multi-cellular organisms consist of a huge amount of cells, the atoms of life, modular structures used to perform all the functions of a body. It is estimated that there are of the order of 10¹³ cells in the human body. Some of them are dying and
Conference Paper
An evolutionary algorithm automatically discovers suitable solutions to a problem, which may lie anywhere in a large search space of candidate solutions. In the case of genetic programming, this means performing an efficient search of all possible computer programs represented as trees. Exploration of the search space appears to be constrained by structural mechanisms that exist in genetic programming as a consequence of using trees to represent solutions. As a result, programs with certain structures are more likely to be evolved, and others extremely unlikely. We investigate whether the graph representation used in Cartesian genetic programming causes an analogous biasing effect, imposing natural limitations on the class of solution structures that are likely to be evolved. Representation bias and structural bias are identified: the rarer "regular" structures appear to be easier to evolve than more common "irregular" ones.
Conference Paper
This paper describes the application of intrinsic evolvable hardware to combinational circuit design and synthesis, as an alternative to conventional approaches. This novel reconfigurable architecture is inspired by Cartesian genetic programming and dedicated for implementing high performance digital image filters on a custom Xilinx Virtex FPGA xcv1000, together with a flexible local interconnection hierarchy. As a highly parallel architecture, it scales linearly with the filter complexity. It is reconfigured by an external genetic reconfiguration processing unit with a hardware GA implementation embedded. Due to pipelining, parallelization and no function call overhead, it yields a significant speedup of one to two orders of magnitude over a software implementation, which is especially useful for the real-time applications. The experimental results conclude that in terms of computational effort, filtered image signal and implementation cost, the intrinsic evolvable hardware solution outperforms traditional approaches.