Book

# Genetic Programming II: Automatic Discovery of Reusable Programs

... Neural Architecture Search (NAS) uses different approaches like EAs, reinforcement learning, and Bayesian optimization to find near-optimal architectures for Deep Neural Networks (DNNs) (Miikkulainen et al., 2019;Zoph et al., 2018;Lu et al., 2019). Genetic Programming (GP) and Grammatical Evolution (GE) (Ryan et al., 1998) enable solution generation and optimization simultaneously (Koza, 1994;Ryan et al., 1998;Miller, 2011), which makes them suitable candidates for simultaneous generation and training of neural networks (Tsoulos et al., 2008;Ahmadizar et al., 2015;Assunção et al., 2017a,b;Miller, 2020). In particular, GE (Ryan et al., 1998) is a variant of GP that takes a user-defined Context Free Grammar (CFG) as its input to determine how genotypes are mapped to phenotypes. ...
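The GE genotype-to-phenotype mapping mentioned above can be sketched in a few lines. This is a minimal illustration with a hypothetical toy grammar (not the grammar of any paper cited here): each codon of the integer genotype selects, modulo the number of productions, how to expand the leftmost non-terminal.

```python
# Toy CFG for illustration only; real GE grammars are user-defined per problem.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def ge_map(genotype, start="<expr>", max_steps=100):
    """Expand the leftmost non-terminal using codon % number-of-choices."""
    symbols, i = [start], 0
    for _ in range(max_steps):
        nts = [j for j, s in enumerate(symbols) if s in GRAMMAR]
        if not nts:
            return "".join(symbols)      # fully mapped phenotype
        j = nts[0]
        choices = GRAMMAR[symbols[j]]
        pick = genotype[i % len(genotype)] % len(choices)  # wrap the genotype
        i += 1
        symbols[j:j+1] = choices[pick]
    return None  # mapping did not terminate within max_steps

print(ge_map([0, 1, 0, 0, 1, 1]))   # -> x+y
```

The modulo rule is what gives GE its weak locality: a single codon change early in the genotype can redirect every later expansion, which is one of the drawbacks the MGE paper targets.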
... A modular solution consists of nearly independent units (modules), where each unit plays a specific role in the solution's performance (Amer and Maul, 2019). Modularity provides important advantages: reuse, readability, and scalability (Koza, 1994;Swafford et al., 2011a). While the modules may be highly complex internally, in a modular solution they are usually only loosely coupled to one another. ...
... There are several studies that investigate the importance of modularity in the performance of GP (Koza, 1994;Miller, 2011;Ellefsen et al., 2020;O'Neill and Ryan, 2000;Hemberg E., 2009;Harper and Blair, 2006;Swafford et al., 2011a,b). For example, Koza (1994) demonstrates that adding modularity to GP greatly increases scalability. ...
Preprint
This paper presents a novel method, called Modular Grammatical Evolution (MGE), towards validating the hypothesis that restricting the solution space of NeuroEvolution to modular and simple neural networks enables the efficient generation of smaller and more structured neural networks while providing acceptable (and in some cases superior) accuracy on large data sets. MGE also enhances the state-of-the-art Grammatical Evolution (GE) methods in two directions. First, MGE's representation is modular in that each individual has a set of genes, and each gene is mapped to a neuron by grammatical rules. Second, the proposed representation mitigates two important drawbacks of GE, namely the low scalability and weak locality of representation, towards generating modular and multi-layer networks with a high number of neurons. We define and evaluate five different forms of structures with and without modularity using MGE and find single-layer modules with no coupling more productive. Our experiments demonstrate that modularity helps in finding better neural networks faster. We have validated the proposed method using ten well-known classification benchmarks with different sizes, feature counts, and output class counts. Our experimental results indicate that MGE provides superior accuracy with respect to existing NeuroEvolution methods and returns classifiers that are significantly simpler than other machine learning generated classifiers. Finally, we empirically demonstrate that MGE outperforms other GE methods in terms of locality and scalability properties.
... For the Boolean logic applications, the multiplexer and even-parity problems are used. The challenge in the multiplexer problem (henceforth termed Multiplexer) [228,229] is for GP to use Boolean logic operators to reproduce the behaviour of an electronic multiplexer given all its possible input and output values. The challenge in the even-parity problem (Parity) [229,230] is for GP to use Boolean logic operators to find a solution that produces the value of the Boolean even parity given n independent Boolean inputs. ...
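The multiplexer fitness computation described above can be illustrated for the 6-multiplexer (two address bits selecting one of four data bits). This is a sketch, not any cited paper's implementation; the address ordering `2*a1 + a0` is an assumption made here for illustration.

```python
from itertools import product

# Target behaviour of the 6-multiplexer: two address bits a0, a1 select one
# of the four data bits d0..d3 (ordering 2*a1 + a0 is an assumed convention).
def target_mux6(a0, a1, d0, d1, d2, d3):
    return (d0, d1, d2, d3)[2 * a1 + a0]

def fitness(candidate):
    """Count how many of the 64 input cases the candidate reproduces."""
    return sum(candidate(*bits) == target_mux6(*bits)
               for bits in product([0, 1], repeat=6))

print(fitness(target_mux6))   # -> 64: a perfect program matches every case
```

A GP candidate built from AND/OR/NOT over the six inputs is scored the same way, with 64 the maximum raw fitness.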
... For the robot control application, GP is used to evolve solutions to the well known artificial ant problem (ANT) [229,231]. The challenge in ANT is to evolve a routine for navigating a robotic ant on a virtual field to help it find food items within a given time limit. ...
Thesis
Full-text available
The quest for simple solutions is not new in machine learning (ML) and related methods such as genetic programming (GP). GP is a nature-inspired approach to the automatic programming of computers used to create solutions to a broad range of computational problems. However, the evolving solutions can grow unnecessarily complex, which presents considerable challenges. Typically, the control of complexity in GP means reducing the sizes of the evolved expressions – known as bloat-control. However, size is a function of solution representation, and hence it does not consistently capture complexity across diverse GP applications. Instead, this thesis proposes to estimate the complexity of the evolving solutions by their evaluation time – the computational time required to evaluate a GP evolved solution on the given task. After all, the evaluation time depends not only on the size of the evolved expressions but also on other aspects such as their composition, thus acting as a more nuanced measure of model complexity than the expression size alone. Also, GP evaluates all the solutions in a population identically to determine their relative performance, for example, with the same dataset. Therefore, evaluation time can consistently compare the relative complexity. To discourage complexity using the proposed evaluation time, two approaches are used. The first approach explicitly penalises models with long evaluation times by customising well-tested techniques that traditionally control the size. The second uses a novel technique that implicitly discourages long evaluation times by incorporating a race condition in the GP process. The proposed methods yield accurate yet simple solutions; furthermore, the implicit method improves the runtime and training speed of GP. Across a diverse suite of GP applications, the evaluation time methods proffer several qualitative advantages over the bloat-control methods. 
They effectively manage the functional complexity of regression models to enable them to predict unseen data (generalise) better than those produced by bloat-control. In two feature engineering applications, they decrease the number of features – principally responsible for model complexity – while bloat-control does not. In a robot control application, they evolve accurate and efficient routines – efficient routines use fewer time steps to complete their tasks; bloat-control could not detect the efficiency of the programs. In Boolean logic problems where size emerges as the major cause of complexity, these methods are not hindered and perform at least as well as bloat-control. Overall, the proposed system characterises and manages various forms of complexity; also, it is broadly applicable and, hence, suitable for an automatic programming system.
... In their attempts to solve problems, humans have delegated to computers the task of developing algorithms capable of performing certain tasks. The most prominent effort in this direction is Genetic Programming (GP) (Koza, 1992;Koza, 1994), an evolutionary technique used for breeding a population of computer programs. Instead of evolving solutions for a particular problem instance, GP is mainly intended for discovering computer programs capable of solving particular classes of optimization problems. ...
... There are many such approaches in literature concerning GP. Noticeable effort has been dedicated for evolving deterministic computer programs capable of solving specific problems such as symbolic regression (Koza, 1992;Koza, 1994), classification (Brameier et al., 2001a) etc. ...
... Another approach to the problem of evolving EAs could be based on Automatically Defined Functions (Koza, 1994). Instead of evolving an entire EA we will try to evolve a small pattern (sequence of instructions) that will be repeatedly used to generate new individuals. ...
Preprint
Full-text available
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches on several well-known benchmark problems.
... Standard GP was able to solve up to the even-5-parity problem when the set of gates F = {AND, OR, NAND, NOR} is used [5]. Improvements, such as Automatically Defined Functions [6] and sub-symbolic node representation [14], allow GP programs to solve larger instances of the even-parity problem. Using MEP and reversible gates we are able to evolve a solution up to the even-8-parity function using a reasonable population size. ...
... The particular function that we want to find is the Boolean even-parity function. This function has k Boolean arguments and it returns T (True) if an even number of its arguments are T. Otherwise the even-parity function returns F (False) [6]. According to [6] the Boolean even-parity functions appear to be the most difficult Boolean functions to detect via a blind random search. ...
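The even-parity target described above is trivial to state in code, which is exactly why it is a popular GP benchmark: the function is simple, but finding it with primitive Boolean gates under blind search is hard. A minimal sketch:

```python
from itertools import product

def even_parity(*bits):
    """Boolean even-parity: True iff an even number of arguments are True."""
    return sum(bits) % 2 == 0

# Exhaustive truth table for even-3-parity; the full table of 2^k cases is
# what GP candidates are scored against.
table = {bits: even_parity(*bits) for bits in product([False, True], repeat=3)}
print(table[(True, True, False)])   # two True inputs -> even -> True
```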
Preprint
Full-text available
Reversible computing essentially means computation that dissipates little or, ideally, no electrical power. Since the standard binary gates are not usually reversible, we use the Fredkin gate in order to achieve reversibility. An algorithm for designing reversible digital circuits is described in this paper. The algorithm is based on Multi Expression Programming (MEP), a Genetic Programming variant with a linear representation of individuals. The case of digital circuits for the even-parity problem is investigated. Numerical experiments show that the MEP-based algorithm is able to easily design reversible digital circuits for up to the even-8-parity problem.
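The Fredkin gate used in the abstract above is a controlled swap, and its reversibility is easy to check exhaustively. In this sketch the convention of putting the control bit first is an assumption for illustration.

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swaps a and b when control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: applying the gate twice restores the original inputs.
for bits in [(0, 1, 0), (1, 1, 0), (1, 0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# The gate permutes its inputs, so it also conserves the number of 1 bits.
print(fredkin(1, 1, 0))   # -> (1, 0, 1)
```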
... Genetic Programming (GP) [42,43,44] is an evolutionary technique used for breeding a population of computer programs. Whereas the evolutionary schemes employed by GP are similar to those used by other techniques (such as Genetic Algorithms [39], Evolutionary Programming [102], Evolution Strategies [88]), the individual representation and the corresponding genetic operators are specific to GP. Due to its nonlinear individual representation (GP individuals are usually represented as trees), GP is widely known as a technique that creates computer programs. ...
... The sextic polynomial [43]. Find a mathematical expression that best satisfies a set of fitness cases generated by the function: ...
... There are many such approaches so far in the GP literature [42,43,44]. The evolving of deterministic computer programs able to solve specific problems requires a lot of effort. ...
Preprint
Full-text available
Automatic Programming is one of the most important areas of computer science research today. Hardware speed and capability have increased exponentially, but software is years behind. The demand for software has also increased significantly, but it is still written the old-fashioned way: by humans. There are multiple problems when the work is done by humans: cost, time, quality. It is costly to pay humans, it is hard to keep them satisfied for a long time, it takes a lot of time to teach and train them, and the quality of their output is in most cases low (in software, mostly due to bugs). The real advances in human civilization appeared during the industrial revolutions. Before the first revolution, most people worked in agriculture. Today, only a very small percentage of people work in this field. A similar revolution must appear in the computer programming field; otherwise, we will have as many people working in this field as we had in the past working in agriculture. How do people know how to write computer programs? Very simple: by learning. Can we do the same for software? Can we put software to learn how to write software? It seems that this is possible (to some degree), and the term is called Machine Learning. It was first coined in 1959 by the first person who made a computer perform a serious learning task, namely Arthur Samuel. However, things are not as easy as in humans (well, truth be told, for some humans it is impossible to learn how to write software). So far we do not have software that can learn perfectly how to write software. We have some particular cases where some programs do better than humans, but the examples are sporadic at best. Learning from experience is difficult for computer programs. Instead of trying to simulate how humans teach humans how to write computer programs, we can simulate nature.
... Traceless Genetic Programming (TGP) is a GP [2,3] variant as it evolves a population of computer programs. TGP is a hybrid method combining a technique for building the individuals and a technique for representing the individuals. ...
... Evolutionary techniques have been extensively used for evolving digital circuits [1,2,3,8,4,5,6,7,11], due to their practical importance. The case of even-parity circuits was deeply analyzed [2,3,8,7] due to their simple representation. ...
Preprint
Full-text available
A genetic programming (GP) variant called traceless genetic programming (TGP) is proposed in this paper. TGP is a hybrid method combining a technique for building individuals and a technique for representing individuals. The main difference between TGP and other GP techniques is that TGP does not explicitly store the evolved computer programs. Two genetic operators are used in conjunction with TGP: crossover and insertion. TGP is applied for evolving digital circuits for the even-parity problem. Numerical experiments show that TGP outperforms standard GP by several orders of magnitude.
... The theoretical analysis and the proof of the parametric reduction of the ARX-Laguerre model with respect to the ARX model are given in [2] and [3], where the parsimony of the expansion is strongly linked to the choice of the poles defining both Laguerre bases. In this paper we therefore propose to estimate, from input/output measurements, the optimal Laguerre poles by using the genetic algorithm (GA) [8,10,11], a stochastic optimization algorithm that can express problems with complex structure through its hierarchy and can determine the feasible solution space automatically without requiring any superstructure. The optimum can then be determined automatically in the genetic evolutionary process. ...
... Since the ARX-Laguerre model (1) is nonlinear with respect to the Laguerre poles, a nonlinear optimization method such as the GA can be applied. In this section, we propose to compute the optimum values of the Laguerre poles by using the GA, classified among evolutionary optimization methods [8,10,11]. These methods allow one to find an optimum of a function defined on a data space. ...
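A real-coded GA of the kind used for pole optimization can be sketched generically. This is an illustrative skeleton under stated assumptions, not the cited paper's algorithm: the cost function here is a stand-in quadratic, whereas in the ARX-Laguerre setting it would be the model's output error, and the operator choices (truncation selection, arithmetic crossover, Gaussian mutation) are assumptions.

```python
import random

# Illustrative real-coded GA minimizing a stand-in cost J(pole) over the
# stable pole range (-0.99, 0.99); operators are simple common choices.
def ga_minimize(cost, bounds=(-0.99, 0.99), pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)                  # best (lowest cost) first
        parents = population[:pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                    # arithmetic crossover
            child += rng.gauss(0, 0.05)            # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        population = parents + children            # elitist replacement
    return min(population, key=cost)

best = ga_minimize(lambda p: (p - 0.4) ** 2)
print(best)   # a pole estimate close to the minimizer 0.4
```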
Article
Full-text available
This work is dedicated to the synthesis of a new fault detection and identification scheme for actuator and/or sensor faults modeled as unknown inputs of the system. The novelty of this scheme consists in the synthesis of a new structure of proportional-integral observer (PIO) reformulated from the new linear ARX-Laguerre representation with filters on the system input and output in order to estimate the unknown inputs presented as faults. The designed observer exploits the input/output measurements to reconstruct the Laguerre filter outputs, where the stability and the convergence properties are ensured by using Linear Matrix Inequalities. However, a significant reduction of this model is subject to an optimal choice of both Laguerre poles, which is achieved by a newly proposed identification approach based on a genetic algorithm. The performance of the proposed identification approach and the resulting PIO is tested in numerical simulation and validated on a 2nd-order electrical linear system.
... In standard regression, the functional form is determined in advance, so model discovery amounts to parameter fitting. In symbolic regression (SR) [28,29], the functional form is not determined in advance. It may apply operations from a given list, e.g., +, −, ×, and ÷, so the functional form is calculated from the data. ...
... SR is mostly solved with genetic programming (GP) [28,29,6,41], though mixed-integer nonlinear programming (MINLP)-based methods have recently been developed for SR [7,14,15]. We develop a new MINLP-based SR solver (described in the Appendix). ...
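The symbolic-regression setting described in the snippets above can be made concrete: a candidate is an expression tree over a fixed operator list, and its fitness is its error on the data. A minimal sketch with nested tuples as trees (the protected-division convention returning 1 on division by zero is a common GP assumption):

```python
# A symbolic-regression candidate as a nested tuple tree over +, -, *, and
# protected division; the functional form itself is what the search evolves.
def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    return a / b if b != 0 else 1.0          # protected division

def sse(tree, cases):
    """Sum of squared errors over the fitness cases (x, y)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in cases)

cases = [(x, x * x + x) for x in range(-3, 4)]
candidate = ("+", ("*", "x", "x"), "x")      # encodes x^2 + x
print(sse(candidate, cases))                 # -> 0 (exact fit)
```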
Preprint
Scientists have long aimed to discover meaningful equations which accurately describe data. Machine learning algorithms automate construction of accurate data-driven models, but ensuring that these are consistent with existing knowledge is a challenge. We developed a methodology combining automated theorem proving with symbolic regression, enabling principled derivations of laws of nature. We demonstrate this for Kepler's third law, Einstein's relativistic time dilation, and Langmuir's theory of adsorption, in each case, automatically connecting experimental data with background theory. The combination of logical reasoning with machine learning provides generalizable insights into key aspects of the natural phenomena.
... In this section we describe the way in which the Automatically Defined Functions [9] are implemented within the context of Multi Expression Programming. ...
... A function definition is especially efficient when it is repeatedly called with different instantiations of its arguments. GP with ADFs has shown significant improvements over standard GP for most of the considered test problems [8,9]. ...
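The ADF idea above — one evolved branch called repeatedly with different arguments from the main result-producing branch — can be sketched with a tiny interpreter. The tree encoding and names (`a0`, `a1`, `ADF`) are illustrative assumptions, not the representation of any cited system.

```python
# Sketch of an Automatically Defined Function: a branch evolved once and
# invoked repeatedly from the result-producing branch with different arguments.
def eval_tree(t, env):
    if isinstance(t, str):
        return env[t]
    if isinstance(t, (int, float)):
        return t
    op, *args = t
    vals = [eval_tree(a, env) for a in args]
    if op == "+": return vals[0] + vals[1]
    if op == "*": return vals[0] * vals[1]
    if op == "ADF": return env["ADF"](*vals)

def make_program(adf_tree, main_tree):
    def adf(arg0, arg1):                     # the function-defining branch
        return eval_tree(adf_tree, {"a0": arg0, "a1": arg1})
    def program(x, y, z):                    # the result-producing branch
        return eval_tree(main_tree, {"x": x, "y": y, "z": z, "ADF": adf})
    return program

# ADF(a0, a1) = a0 * a1; the main branch reuses it: ADF(x, y) + ADF(y, z).
prog = make_program(("*", "a0", "a1"),
                    ("+", ("ADF", "x", "y"), ("ADF", "y", "z")))
print(prog(2, 3, 4))   # 2*3 + 3*4 = 18
```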
Preprint
Full-text available
Multi Expression Programming (MEP) is a Genetic Programming variant that uses a linear representation of chromosomes. MEP individuals are strings of genes encoding complex computer programs. When MEP individuals encode expressions, their representation is similar to the way in which compilers translate C or Pascal expressions into machine code. A unique MEP feature is the ability of storing multiple solutions of a problem in a single chromosome. Usually, the best solution is chosen for fitness assignment. When solving symbolic regression or classification problems (or any other problems for which the training set is known before the problem is solved) MEP has the same complexity as other techniques storing a single solution in a chromosome (such as GP, CGP, GEP or GE). Evaluation of the expressions encoded into a MEP individual can be performed by a single parsing of the chromosome. Offspring obtained by crossover and mutation are always syntactically correct MEP individuals (computer programs). Thus, no extra processing for repairing newly obtained individuals is needed.
... GP techniques are very suitable for the MSP paradigm because the trees offer an implicit multiple solutions representation: each sub-tree may be considered as a potential solution of the problem. Three Genetic Programming (GP) [11,12] variants are tested with the proposed model: Multi Expression Programming (MEP) [17,18], Linear Genetic Programming (LGP) [4,5,6,16] and Infix Form Genetic Programming (IFGP) [19]. ...
... Test problem f2 is also known as the quartic polynomial and f4 as the sextic polynomial [11,12]. ...
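The fitness cases for these regression benchmarks are generated by sampling the target polynomial. The target forms below (quartic x^4 + x^3 + x^2 + x, sextic x^6 - 2x^4 + x^2) and the sampling range [-1, 1] are the ones commonly used in the GP literature; treat them as assumptions rather than the exact setup of the cited work.

```python
import random

# Fitness-case generation for the quartic (f2) and sextic (f4) polynomial
# benchmarks as commonly defined in the GP literature.
def quartic(x): return x**4 + x**3 + x**2 + x
def sextic(x):  return x**6 - 2 * x**4 + x**2

def fitness_cases(target, n=20, lo=-1.0, hi=1.0, seed=0):
    """Sample n (x, target(x)) pairs uniformly from [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    return [(x, target(x)) for x in xs]

cases = fitness_cases(sextic)
print(len(cases))   # 20 (x, y) pairs sampled from [-1, 1]
```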
Preprint
Full-text available
We investigate the possibility of encoding multiple solutions of a problem in a single chromosome. The best solution encoded in an individual will represent (will provide the fitness of) that individual. In order to obtain some benefits the chromosome decoding process must have the same complexity as in the case of a single solution in a chromosome. Three Genetic Programming techniques are analyzed for this purpose: Multi Expression Programming, Linear Genetic Programming, and Infix Form Genetic Programming. Numerical experiments show that encoding multiple solutions in a chromosome greatly improves the search process.
... Koza [11,12] suggested that Genetic Programming (GP) may be used for solving equations. In that approach [11] each GP tree represented a potential solution of the problem (equation). ...
... The quality of a GP individual is usually computed by using a set of fitness cases [11,12]. For instance, the aim of symbolic regression is to find a mathematical expression that satisfies a set of m fitness cases. ...
Preprint
Full-text available
Traceless Genetic Programming (TGP) is a Genetic Programming (GP) variant that is used in cases where the focus is the output of the program rather than the program itself. The main difference between TGP and other GP techniques is that TGP does not explicitly store the evolved computer programs. Two genetic operators are used in conjunction with TGP: crossover and insertion. In this paper, we focus on how to apply TGP to multi-objective optimization problems, which are quite unusual for GP. Each TGP individual stores the output of a computer program (tree) representing a point in the search space. Numerical experiments show that TGP solves the considered test problems very fast and very well.
... GEP is considered an enhanced version of the Genetic Programming (GP) developed by Koza [68], which addresses GP issues such as its limited regression strategies [64]. ...
... The following is a stage-by-stage description of the GEP technique [68]: ...
Article
Accurate estimation of the thermal conductivity (TC) of nanofluids is essential in thermophysical studies of nanofluids. In this study, more than 3200 data points for the thermal conductivity of nanofluids were collected from the literature to develop simple-to-use and accurate correlations for predicting the relative thermal conductivity of nanofluids. The dataset includes 13 different nanofluids with temperatures from −30.00 to 149.15 °C, particle sizes from 5.00 to 150.00 nm, particle thermal conductivities from 1.20 to 1000.00 W/mK, particle volume fractions from 0.01 to 11.22%, and base fluid thermal conductivities from 0.11 to 0.69 W/mK. Group method of data handling (GMDH) and gene expression programming (GEP), two powerful white-box models, were used for modeling. The results of the proposed models were compared to 23 well-known theoretical and empirical models. The statistical and graphical results showed that the proposed models are more precise and reliable than the existing ones in the literature. The GMDH model showed better performance than GEP and could predict all data with an average absolute relative error of 2.27% on the training and 2.44% on the testing data set. In addition, it was found that the proposed models could capture the physically expected trends with variation of temperature, size of nanoparticles, and volume fraction. The sensitivity analysis illustrated that temperature has the highest effect on the relative TC, followed by base fluid TC and particle volume fraction.
... It can easily adapt to different kinds of optimization problems through parameter tuning and by modifying the operations. Metaheuristic algorithms are divided into four classes: (1) Evolutionary Algorithms (EAs), such as the Genetic Algorithm (GA) [84], Genetic Programming (GP) [84], and Differential Evolution [134]; (2) human-based algorithms, such as Tabu Search (TS) [58], Translation Lookaside Buffer (TLB) [106], and Socio-evolution and Learning Optimization (SELO) [87]. ...
Article
This paper introduces a comprehensive survey of a new swarm intelligence optimization algorithm, the so-called Harris hawks optimization (HHO), and analyzes its major features. HHO counts among the most effective optimization algorithms and has been utilized successfully on different problems in various domains, for example, energy and power flow, engineering, medical applications, networks, and image processing. This review covers the available related work on HHO; the main topics include HHO variants, modifications, and hybridizations; HHO applications; and analysis and differentiation between HHO and other algorithms in the literature. Finally, the conclusions concentrate on the existing work on HHO, show its disadvantages, and propose future work. The review will be helpful for researchers and practitioners of HHO from a wide range of audiences in the domains of optimization, engineering, medicine, data mining, and clustering; it is also rich in research on health, the environment, and public safety. It will likewise aid those who are interested by providing them with potential future research directions.
... To compare our results to current state-of-the-art approaches, we use the Nguyen benchmark [Uy et al., 2011], R rationals [Krawiec and Pawlak, 2013], Livermore [Petersen, 2019], Keijzer [Keijzer, 2003], Constant, and Koza [Koza, 1994]. The complete benchmark functions are given in Supp. ...
Preprint
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Fields (NeRFs), and while achieving impressive results, the methods suffer from long training times as they require evaluating thousands of 3D point samples via a deep neural network for each image. We propose a 2D-only method that maps multiple context views and a query pose to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our approach is competitive compared to NeRF-based methods while not reasoning in 3D, and it is faster to train.
... Step 4: Generate a new solution by flying randomly according to (27). ...
Article
Full-text available
In this paper, a novel Gaussian bare-bones bat algorithm (GBBBA) and its modified version, the dynamic exploitation Gaussian bare-bones bat algorithm (DeGBBBA), are proposed for solving the optimal reactive power dispatch (ORPD) problem. ORPD plays a fundamental role in ensuring the stable, secure, reliable, and economical operation of a power system. It is formulated as a complex, nonlinear, mixed-integer optimization problem including both discrete and continuous control variables. The bat algorithm (BA) is one of the most popular metaheuristic algorithms; it mimics the echolocation of microbats and has outperformed other metaheuristics in solving various optimization problems. Nevertheless, the standard BA may fail to balance exploration and exploitation for some optimization problems and hence may fall into local optima. The proposed GBBBA employs the Gaussian distribution in updating the bat positions in an effort to mitigate the premature convergence associated with the standard BA; its Gaussian sampling begins with exploration and continues into exploitation. DeGBBBA is an advanced variant of GBBBA in which a modified Gaussian distribution is introduced to allow the dynamic adaptation of exploration and exploitation. Both GBBBA and DeGBBBA are used to determine the optimal settings of generator bus voltages, transformer tap settings, and shunt reactive sources in order to minimize active power loss, total voltage deviation, and the voltage stability index. Simulation results show that GBBBA and DeGBBBA are robust and effective in solving the ORPD problem.
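The "bare-bones" idea referenced in this abstract has a standard form (originating in bare-bones PSO, which the bat variants adapt): instead of a velocity update, each coordinate of a new position is sampled from a Gaussian centred midway between a personal-best and the global-best position, with standard deviation equal to their distance. The sketch below shows that generic update, not the paper's exact GBBBA/DeGBBBA rule:

```python
import random

def bare_bones_update(personal_best, global_best):
    """Sample a new position from a Gaussian centred between the
    personal best and global best, with std = |pbest - gbest| per
    coordinate (canonical bare-bones update; the paper's GBBBA and
    DeGBBBA variants modify the variance schedule)."""
    return [random.gauss((p + g) / 2.0, abs(p - g))
            for p, g in zip(personal_best, global_best)]

new_pos = bare_bones_update([1.0, 2.0], [3.0, 2.0])
print(len(new_pos))  # → 2
```

When a coordinate of the personal and global best coincide, the standard deviation collapses to zero and that coordinate is reproduced exactly; DeGBBBA's contribution, per the abstract, is to adapt the Gaussian's spread dynamically so sampling shifts from wide (exploration) to narrow (exploitation) over iterations.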
... Symbolic regression was proposed and developed by Koza [119,120], relying on genetic programming and evolutionary algorithms, which employ three operators: selection, crossover, and mutation. Its main advantages are the automatic and simultaneous search for the best functional forms and the respective coefficients of the predictor variables given a dataset, with no need for prior assumptions. ...
Article
This paper develops a nonmodel-based approach for seismic loss assessment of buildings with eccentrically braced frames (EBFs). An extensive database is utilized to predict engineering demand parameters (EDPs), including peak and residual story drift ratios, peak story link rotations, and peak floor absolute accelerations along the height of the buildings. The estimated EDPs are used to assess the total economic losses associated with the collapse, demolition, and repair of structural and/or non-structural components. The database includes the seismic intensity measure, an improved wavelet-based refined damage-sensitive feature (rDSF) assembled only from the roof absolute acceleration response, geometric information, and EDPs of the 4-, 8-, and 16-story prototype models, extracted from incremental dynamic analyses subjected to 44 far-field ground motions. The nonlinear models of the aforementioned structures were already developed by the authors to capture the response of the structures up to collapse. Symbolic and Bayesian regressions are carried out in addition to ordinary least-squares linear regression to develop the empirical equations. Moreover, a thorough study is performed on the optimal selection of the intensity measures proposed in the literature as the input variable of the predictive equations through reliability-based sensitivity analysis. To estimate the first-mode period of the considered structures, to compute the improved wavelet-based rDSF, and to promote a nonmodel-based approach, an Auto-Regressive model with exogenous input is employed. The results show that the story-based EDPs, and also the corresponding expected economic losses of EBF buildings, are accurately predicted compared with those obtained from incremental dynamic analyses. Consequently, the proposed nonmodel-based approach paves the path for rapid earthquake-induced economic loss assessment of EBF buildings in emergency operations.
... Each individual is regarded as a chromosome. After evaluating the fitness of the initial population, chromosomes are recombined and mutated using genetic operators to produce better chromosomes, and the current population is eventually merged with the new population emerging from crossover and mutation [56]. The method is a recent regression technique with a high capability for the automatic evolution of programs. ...
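The cycle described in this excerpt (fitness evaluation, selection, crossover, mutation, then forming a new population) can be sketched as a minimal generational GA on a toy bit-string problem. The OneMax fitness, tournament selection, and all parameter values below are illustrative choices, not taken from the cited work:

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility

def evolve(pop_size=20, length=16, generations=30):
    """Minimal generational GA: chromosomes are bit lists, fitness is
    the number of 1-bits (OneMax)."""
    fitness = lambda c: sum(c)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:           # point mutation, prob 0.1
                i = rng.randrange(length)
                child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt                            # merge: replace population
    return max(fitness(c) for c in pop)

print(evolve())
```

The "merge" step here is plain generational replacement; elitist or steady-state merging, as implied by the excerpt, would carry the best chromosomes of the current population into the next one unchanged.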
Article
Full-text available
This work attempts to estimate the amount of fly-rock in the Angoran mine in the Zanjan province (Iran) using the gene expression programming (GEP) predictive technique. The input data, including the fly-rock distance, mean hole depth, powder factor, stemming, explosive weight, number of holes, and booster, were collected from the mine. Using GEP, a series of intelligent equations is proposed for predicting the fly-rock distance, and the best GEP equation is then selected based on well-established statistical indices. The coefficients of determination for the training and testing datasets of the GEP equation are 0.890 and 0.798, respectively. The GEP model is then optimized using the teaching-learning-based optimization (TLBO) algorithm. Based on the results obtained, the correlation coefficients of the training and testing data increase to 91% and 89%, which improves the accuracy of the equation. This new intelligent equation can forecast fly-rock resulting from mine blasting with a high level of accuracy, and the capabilities of this intelligent technique could be further extended to other blasting-related environmental issues. Keywords: Blasting operations; Fly-rock; Gene expression programming; Teaching-learning-based optimization algorithm
... Evaluating the performance of stochastic algorithms is not easy. If, for a given problem, the best solution is known for one instance, and repeating the runs a significant number of times returns a solution that is 99% as good as the known solution, then we can be confident that a solution to a new instance of the problem will be, say, 98% as good as the result based on several runs. John Koza [49] proposed a performance measure called computational effort.
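Koza's computational-effort measure has a standard closed form: from repeated runs one estimates the cumulative probability of success P(M, i) for population size M by generation i, computes the number of independent runs R(z) = ceil(ln(1 - z) / ln(1 - P(M, i))) needed to find a solution with probability z (conventionally z = 0.99), and reports the effort I(M, i, z) = M * (i + 1) * R(z). A minimal sketch:

```python
import math

def computational_effort(M, i, p_success, z=0.99):
    """Koza's computational effort I(M, i, z): individuals that must be
    processed to find a solution by generation i with probability z,
    given population size M and estimated cumulative success
    probability p_success = P(M, i)."""
    if p_success <= 0.0:
        return float("inf")          # no observed successes: undefined/infinite
    if p_success >= z:
        runs_needed = 1              # a single run already suffices
    else:
        runs_needed = math.ceil(math.log(1.0 - z) / math.log(1.0 - p_success))
    return M * (i + 1) * runs_needed

# e.g. M=500, success by generation 10 in half the runs:
print(computational_effort(500, 10, 0.5))  # → 38500
```

The (i + 1) factor counts generations 0 through i; some authors write M * i instead, so the convention should be stated when reporting results.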
Thesis
One of the main branches of signal processing is spectral analysis. Many experimental devices produce signals that are sums of damped sines, and as these devices advance, the volume of data they generate continues to grow. In this thesis, we focus on data from a Fourier Transform Ion Cyclotron Resonance mass spectrometer (FT-ICR) as well as on simulated data. Our contribution consists of exploring how evolutionary methods can overcome the limitations of the Fourier Transform (FT) method. We carried out a comparative study of the FT and artificial-evolution approaches. The results obtained with SINUS-IT are of better quality than those of the FT, without requiring denoising or apodization; SINUS-IT was able to determine the phase parameter with good precision, and results were obtained using fewer samples, which would decrease the acquisition time.
... Classical evolutionary algorithms can be distinguished by the nature of their solution representation and the operators employed for their evolution. Evolution strategies employ a mutation operator to create new solutions, which are represented as real numbers [14,15,16]; evolutionary programming also requires solutions represented as real numbers or integers [17]; genetic algorithms employ crossover operators to evolve their populations [18,19]; and genetic programming requires a tree-based representation of computer programs to perform its search [20,21,22]. The main drawbacks associated with most classical EAs are their high computational cost, poor constraint-handling abilities, problem-specific parameter tuning, limited problem size, and inability to cope with large-scale global optimization problems. ...
Preprint
Full-text available
Teaching-learning-based optimization (TLBO) is a stochastic algorithm first proposed for unconstrained optimization problems. It is a population-based, nature-inspired metaheuristic that imitates the teaching-learning process. It has two phases, teacher and learner. In the teacher phase, the teacher, a well-learned person, transfers his or her knowledge to the learners to raise their grades; in the learner phase, learners refine their knowledge through mutual interaction. To solve constrained optimization problems (COPs) with TLBO, we need to combine it with a constraint handling technique (CHT). Superiority of feasibility (SF) is a concept for building CHTs that exists in different forms based on various decisive factors; the decision-making factors most commonly used in SF for comparing solutions are the number of constraints violated (NCV) and the weighted mean (WM) of violations. In this work, SF based on the number of constraints violated (NCVSF) and on the weighted mean (WMSF) are incorporated into the TLBO framework. Testing on the CEC-2006 constrained suite suggests that relying on a single factor to decide the winner is not a wise idea. This observation led us to build a single CHT that carries the capabilities of both discussed CHTs, laying the foundation of hybrid superiority of feasibility (HSF), in which the NCV and WM factors are combined with dominance given to NCV over WM. In the current research, three constrained versions of TLBO are formulated, named NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO, by implanting NCVSF, WMSF, and HSF in the TLBO framework, respectively. Evaluation on CEC-2006 indicates that HSF-TLBO attains a prominent status among them.
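The decision rule behind such superiority-of-feasibility comparisons can be sketched in a few lines. The ordering below (NCV decides first, weighted-mean violation breaks ties, objective value decides otherwise) is an illustrative reading of the hybrid HSF idea, not the paper's exact code:

```python
def sf_compare(a, b):
    """Return the winner between two candidate solutions.

    Each candidate is a dict with keys:
      'f'   - objective value (minimization),
      'ncv' - number of constraints violated,
      'wm'  - weighted mean constraint violation.
    Illustrative hybrid rule: fewer violated constraints wins; on a
    tie, smaller weighted mean violation wins; among feasible (or
    equally violating) points, the better objective wins.
    """
    if a["ncv"] != b["ncv"]:
        return a if a["ncv"] < b["ncv"] else b
    if a["wm"] != b["wm"]:
        return a if a["wm"] < b["wm"] else b
    return a if a["f"] <= b["f"] else b

feasible = {"f": 1.0, "ncv": 0, "wm": 0.0}
violating = {"f": 0.1, "ncv": 2, "wm": 0.3}
print(sf_compare(feasible, violating)["f"])  # → 1.0
```

Note how the feasible point wins even though the violating one has a better raw objective; this dominance of feasibility over objective value is exactly what distinguishes SF-style CHTs from penalty methods.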
... GEP is a progressive modeling technique developed by [12], [13]. It is an extension of genetic programming (GP), developed by [30], and is considered a generalization of genetic algorithms (GAs), as elaborated by Goldberg (1989). ...
Article
Full-text available
Estimation of reference evapotranspiration (ETo) is vitally required for estimating crop water requirements and budgeting irrigation scheduling. The beneficial use of water is of great importance due to shortage issues, especially in developing countries like Pakistan. The Food and Agriculture Organization (FAO) developed the Penman-Monteith (PM) method, which is globally considered the standard method for estimating ETo, but it requires numerous climatic data. Consequently, there is a need to find the next best suitable method after the PM method. Multi-layer perceptron (MLP), gene expression programming (GEP), and radial basis function (RBF) models were utilized to calculate ETo values from the monthly meteorological data (1980-2015) of six stations located in arid, semi-arid, and humid regions of Pakistan. Seventeen input combinations comprising various climatic variables were developed to evaluate their impact on ETo. Of the available meteorological data, 70% was employed in training while the remaining 30% was used in testing. The values yielded by the developed models were compared with the ETo estimated by the PM method, and the approach was also applied to other climatic regions located in the USA, New Zealand, and China for various durations. Only three climatic parameters, namely maximum temperature, mean relative humidity, and wind velocity, had a large positive effect on increasing the accuracy of estimating ETo. Comparing the eight performance indices, MLP, among all the powerful predictive modeling techniques, can be considered the superior alternative to the conventional methods for estimating ETo.
... The patterns in an evolutionary algorithm can be likened to the Automatically Defined Functions (ADFs) in Genetic Programming [19]. ...
Preprint
Full-text available
A new model for evolving Evolutionary Algorithms (EAs) is proposed in this paper. The model is based on the Multi Expression Programming (MEP) technique. Each MEP chromosome encodes an evolutionary pattern that is repeatedly used for generating the individuals of a new generation. The evolved pattern is embedded into a standard evolutionary scheme that is used for solving a particular problem. Several evolutionary algorithms for function optimization are evolved by using the considered model. The evolved evolutionary algorithms are compared with a human-designed Genetic Algorithm. Numerical experiments show that the evolved evolutionary algorithms can compete with standard approaches for several well-known benchmarking problems.
... The authors consider the case of a line with two, three, or four workstations and exponentially distributed processing times. The problem is approached by applying GP techniques (Angeline and Kinnear, 1996; Koza, 1992, 1994; Poli et al., 2008) to extract general functional formulae. These formulae are expressed, as in GP, as LISP-like symbolic expressions that estimate a performance measure (maximum possible utilization or throughput) as a function of the mean service rates of the servers, each of which follows the exponential distribution. ...
Article
Full-text available
The analytical evaluation of production system performance measures is a difficult task. Over the years, various methods have been developed to solve specific cases of very short production lines. However, formulae for estimating the mean production rate (throughput) are lacking. Recent developments in artificial intelligence simplify their use in the solution of symbolic regression problems. In this work, we use genetic programming (GP) to obtain approximate formulae for calculating the throughput of short reliable approximately balanced production lines, for which the processing times are exponentially distributed. A hybrid GP&GA scheme reduces the search space, in which GP uses genetic algorithms (GA) as a search engine. The scheme produces polynomial formulae for throughput estimation for the first time. To train the GP algorithm we use MARKOV, an accurate algorithm for calculating numerically the exact throughput of short exponential production lines. A few formulae, not previously reported in the literature, are presented. These formulae give close results to the exact results from the MARKOV algorithm, for short (up to five stations) reliable approximately balanced production lines without intermediate buffers. Also, the robustness of these formulae is satisfactory. In addition, the proposed hybrid GP&GA scheme is useful for design/production engineers to adjust the formulae to other ranges of the mean processing rates; the algorithms are quickly retrained to generate a new approximate formula.
... The first category, Evolutionary Algorithms (EAs), refers to algorithms that are inspired by nature and simulate natural behaviors such as mutation, crossover, selection, elitism, and reproduction. Examples of these algorithms are the Genetic Algorithm (GA) by [14], Genetic Programming (GP) by [58], Differential Evolution (DE) by [16], Evolutionary Programming (EP) by [59], Evolution Strategy (ES) by [60], the Biogeography-Based Optimizer (BBO) by [24], and the Backtracking Search Algorithm (BSA) by [61]. The second category is Swarm Intelligence (SI)-based algorithms, which are inspired by the social behavior of swarms of birds, insects, fish, and other animals. ...
Article
Full-text available
The Harris hawks optimizer (HHO) is a recent population-based metaheuristic algorithm that simulates the hunting behavior of hawks. This swarm-based optimizer performs the optimization procedure using a novel pattern of exploration and exploitation and multiple search phases. In this review, we focus on the applications and developments of the well-established, robust Harris hawks optimizer, one of the most popular swarm-based techniques of 2020. Moreover, several experiments were carried out to demonstrate the power and effectiveness of HHO compared with nine other state-of-the-art algorithms on the Congress on Evolutionary Computation benchmarks CEC2005 and CEC2017. The review includes deep insight into possible future directions and ideas worth investigating regarding new variants of the HHO algorithm and its widespread applications.
... • Respectively, in which the procedures operate in an integrated manner, where each method executes at the same time as the rest. For further reading, see: Koza [6], Simon [7], Alatas [8], Kirkpatrick [9], Webster and Bernhard [10], Erol and Eksin [11], Rashedi et al. [12], Kaveh and Talatahari [13], Formato [14], Hatamlou [15], Kaveh and Khayatazad [16], Du and Zhuang [17], Moghaddam [18], Shah-Hosseini [19], Gao et al. [20], Tavakkoli-Moghaddam et al. [21], Drezner [22], Jaszkiewicz and Kominek [23], Lee et al. [24], Alba et al. [25], Chiu et al. [26]. ...
Article
In this paper, a facility location model with fuzzy-valued parameters based on a hybrid metaheuristic method is investigated. The proposed model uses fuzzy values to solve the installation problem; the problem's assumptions are treated as fuzzy random variables, and the capacity of each facility is unlimited. This paper combines a modern nature-inspired procedure called the Whale Algorithm (WA) with genetic methods. The WA and Genetic Algorithm (GA) have been tested on scientific optimization and modeling problems. To evaluate the performance of the proposed methods, we apply them to our spatial models in which fuzzy coefficients are used. The results of numerical optimization show that the proposed combined method performs better than conventional methods.
... This evolutionary-based and advanced approach is an appropriate method by which different systems can be described using inputs and desirable outputs. GEP was first developed and introduced by Ferreira [35,36] and has been regarded as a new variant of genetic programming [37]. Problems in the older version of GEP, such as faulty exploration and a limited number of regression methods, have since been addressed [35,36]. ...
Article
Full-text available
Carbon geo-sequestration (CGS), as a well-known procedure, is employed to reduce/store greenhouse gases. Wettability behavior is one of the important parameters in the geological CO2 sequestration process. Few models have been reported for characterizing the contact angle of the brine/CO2/mineral system at different environmental conditions. In this study, a smart machine learning model, namely Gene Expression Programming (GEP), was implemented to model the wettability behavior in a ternary system of CO2, brine, and mineral under different operating conditions, including salinity, pressure, and temperature. The presented models provided an accurate estimation for the receding, static, and advancing contact angles of brine/CO2 on various minerals, such as calcite, feldspar, mica, and quartz. A total of 630 experimental data points were utilized for establishing the correlations. Both statistical evaluation and graphical analyses were performed to show the reliability and performance of the developed models. The results showed that the implemented GEP model accurately predicted the wettability behavior under various operating conditions and a few data points were detected as probably doubtful. The average absolute percent relative error (AAPRE) of the models proposed for calcite, feldspar, mica, and quartz were obtained as 5.66%, 1.56%, 14.44%, and 13.93%, respectively, which confirm the accurate performance of the GEP algorithm. Finally, the investigation of sensitivity analysis indicated that salinity and pressure had the utmost influence on contact angles of brine/CO2 on a range of different minerals. In addition, the effect of the accurate estimation of wettability on CO2 column height for CO2 sequestration was illustrated. 
Given the impact of wettability on the residual and structural trapping mechanisms during the carbon geo-sequestration process, the outcomes of the GEP model can be beneficial for precisely predicting the capacity of these mechanisms.
... The PSO algorithm is based on a population of candidate solutions, known as particles or "swarm agents". In his book Genetic Programming [22], the author laid the groundwork for an optimization technique for machine learning that is widely used today in many fields of computer science. Storn and Price [23] proposed differential evolution in 1997, which is considered among the best optimization techniques in various applications when compared with genetic algorithms (GAs). ...
... As with other evolutionary algorithms, GP is based on the principles of Darwin's evolutionary theory and implements methods such as mutation (Fig. 3), crossover (Fig. 4), and selection [33]. To illustrate the mutation and crossover methods mentioned above, we used equations such as ŷ1 = 1.2·x1·x3 + 4·x2. Before using the GP method, the following five preparatory steps should be performed first [34]: ...
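Subtree crossover on GP expression trees can be sketched with nested tuples; the expression from the excerpt, ŷ1 = 1.2·x1·x3 + 4·x2, becomes a small tree. This is a generic textbook-style sketch, not the cited paper's implementation:

```python
import random

# The excerpt's expression y1 = 1.2*x1*x3 + 4*x2 as a nested tuple:
y1 = ('+', ('*', 1.2, ('*', 'x1', 'x3')), ('*', 4, 'x2'))

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every node in the tree."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of `tree` with the node at `path` swapped for `new`."""
    if not path:
        return new
    nodes = list(tree)
    nodes[path[0]] = replace(tree[path[0]], path[1:], new)
    return tuple(nodes)

def crossover(parent1, parent2, rng=random):
    """Subtree crossover: graft a random subtree of parent2 onto a
    random crossover point of parent1."""
    path, _ = rng.choice(list(subtrees(parent1)))
    _, donor = rng.choice(list(subtrees(parent2)))
    return replace(parent1, path, donor)

child = crossover(y1, ('-', 'x2', 'x3'), rng=random.Random(3))
```

Subtree mutation follows the same pattern, except the donor subtree is freshly generated at random instead of being taken from a second parent.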
... In general, there are two types of optimization algorithms: gradient-based and metaheuristic. Compared with traditional methods, metaheuristic algorithms are techniques that do not require gradient information for the global optimum search (Qing, 2009). Evolutionary algorithms include the genetic algorithm (Holland, 1975), differential evolution (Storn and Price, 1995), evolution strategies (Beyer and Schwefel, 2002), and genetic programming (Koza, 1992); physics-based algorithms include simulated annealing (Kirkpatrick, 1983), big bang-big crunch (Erol and Eksin, 2006), black hole (Hatamlou, 2013), and ray optimization (Kaveh and Khayatazad, 2012). All these metaheuristic algorithms have their own specific control parameters in addition to common control parameters such as population size and maximum number of iterations. ...
... Meanwhile, in Germany, Rechenberg and Schwefel introduced Evolution Strategies (ES) in the 1960s [RTE65, Sch65]. In the 1990s, these three interpretations were merged into one field called Evolutionary Computation, while a fourth inspiration emerged: Genetic Programming (GP) [Koz94]. ...
Thesis
Radar networks are complex systems that need to be configured to maximize their coverage or the probability of detecting a target. The optimization of radar networks is a challenging task that is typically performed by experts with the help of simulators. Alternatively, black-box optimization algorithms can be used to solve these complex problems. Many heuristic algorithms have been developed for black-box optimization, and these algorithms exhibit complementary performance depending on the structure of the problem. Selecting the appropriate algorithm is therefore a crucial task. The objective of this CIFRE PhD is to perform landscape-aware algorithm selection of metaheuristics in order to optimize radar networks. The main contributions of this PhD thesis are twofold. First, we define six properties that landscape features should satisfy and study to what degree existing landscape features satisfy them. One of the six properties is invariance to the sampling strategy. Surprisingly, and contrary to what was recommended in the literature, we found that the sampling strategy actually matters: there are important discrepancies in the feature values computed from different sampling strategies. Overall, none of the features satisfies all the defined properties. These features form the core of a landscape-aware algorithm selection. Second, we applied landscape-aware algorithm selection of metaheuristics to the optimization of radar network use-cases. On these use-cases, the algorithms have similar performance and the gain from performing automated algorithm selection is small; nevertheless, the performance of the landscape-aware algorithm selection of metaheuristics matches that of the single best solver (SBS).
... Genetic Programming. Koza (1994) introduced GP based on the concepts of the GA, to produce a nonlinear mathematical model output for given input values. The GA and GP were adopted for solving and generating equations, respectively. ...
Article
While power plant projects have frequently encountered serious cost overruns, previous studies have paid little attention to predicting cost overruns to assist contingency cost planning. Considering the critical risks producing cost overruns, this study aims to develop a Hybrid Predictive-Probabilistic-based Model (HPPM) by integrating genetic programming with the Monte Carlo technique. The proposed HPPM is based on data collected from thermal power plant projects (TPPP). Sensitivity analysis is also conducted by identifying the critical risks in simulating cost overruns. The simulation outcomes show that 40.48% of a project's initial estimated budget is the most probable cost overrun, while a project experiences 25% and 75% cost overruns with 50% and 90% probabilities, respectively. These findings assist managers in improving the initial budget accuracy of power plant projects and cost management throughout the project execution phases. Moreover, the applications of this model can prospectively be extended to similar infrastructure projects.
... SR has great flexibility in generating mathematical expressions; hence, it does not need a predefined model to capture the relationships in the dataset. However, classical SR methods, such as GP [14,30], GEP [8], and linear GP [2], usually handle a symbolic function f(x) with a single output y', i.e., y' is a number and not a vector. They cannot represent the relationship f(W·h_{i-1} + b) of each layer in a NN because each layer's output is a vector, matrix, or tensor. ...
Preprint
Full-text available
Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representation) of a neural network. A succinct and explicit mathematical representation of a NN model could improve the understanding and interpretation of its behaviors. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of a NN. SRNet creates a Cartesian genetic programming representation (NNCGP) to capture the hidden semantics of a single layer in a NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method uses a (1+$\lambda$) evolutionary strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet not only can reveal the complex relationships between the layers of a NN but also can extract the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet has higher interpolation accuracy and tends to approximate the real model on practical datasets.
... For example, x + x^2 - x and x^2 belong to the same equivalence class: they are functionally equivalent expressions. Identifying and eliminating such equivalences is difficult in the tree representation of programs, and several approaches have been proposed to overcome this issue, such as calculating the subtree mean and variance [1] or code editing [21]. Our algorithm identifies x^2 as the class representative and eliminates all other functionally equivalent programs via set equality. ...
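One simple way to group functionally equivalent programs, shown here on the x + x^2 - x versus x^2 example, is semantic fingerprinting: evaluate each candidate on a fixed set of sample inputs and use the resulting value tuple as a dictionary key, so equivalent expressions collide on the same key. This is an illustrative deduplication trick, not the cited paper's set-equality algorithm:

```python
import random

def fingerprint(fn, n_points=16, seed=0):
    """Hashable signature of a function's behaviour on fixed sample
    inputs; functionally equivalent expressions share a signature.
    Integer inputs keep the arithmetic exact for polynomial programs."""
    rng = random.Random(seed)
    points = [rng.randrange(-50, 50) for _ in range(n_points)]
    return tuple(fn(x) for x in points)

programs = {
    "x + x^2 - x": lambda x: x + x**2 - x,
    "x^2":         lambda x: x**2,
    "x^3":         lambda x: x**3,
}

classes = {}  # fingerprint -> representative program name
for name, fn in programs.items():
    classes.setdefault(fingerprint(fn), name)

print(len(classes))  # → 2 (x + x^2 - x collapses onto x^2)
```

The first program inserted for each fingerprint becomes the class representative; subsequent equivalent programs are discarded, mirroring the deduplication-by-equality idea in the excerpt.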
Preprint
Full-text available
We develop a symbolic regression framework for extracting the governing mathematical expressions from observed data. The evolutionary approach, faiGP, is designed to leverage the properties of a function algebra that have been encoded into a grammar, providing a theoretical guarantee of universal approximation and a way to minimize bloat. In this framework, the choice of operators of the grammar may be informed by a physical theory or symmetry considerations. Since there is currently no theory that can derive the 'constants of nature', an empirical investigation on extracting these coefficients from an evolutionary process is of methodological interest. We quantify the impact of different types of regularizers, including a diversity metric adapted from studies of the transcriptome and a complexity measure, on the performance of the framework. Our implementation, which leverages neural networks and a genetic programmer, generates non-trivial symbolically equivalent expressions ("Ramanujan expressions") or approximations with potentially interesting numerical applications. To illustrate the framework, a model of ligand-receptor binding kinetics, including an account of gene regulation by transcription factors, and a model of the regulatory range of the cistrome from omics data are presented. This study has important implications on the development of data-driven methodologies for the discovery of governing equations in experimental data derived from new sensing systems and high-throughput screening technologies.
... An in-depth discussion about genetic algorithms is out of the scope of the paper. The interested reader may refer to [44][45][46][47][48]. ...
Article
Full-text available
State-to-state numerical simulations of high-speed reacting flows are the most detailed but also often prohibitively computationally expensive. In this work, we explore the usage of machine learning algorithms to alleviate such a burden. Several tasks have been identified. Firstly, data-driven machine learning regression models were compared for the prediction of the relaxation source terms appearing in the right-hand side of the state-to-state Euler system of equations for a one-dimensional reacting flow of a N2/N binary mixture behind a plane shock wave. Results show that, by appropriately choosing the regressor and opportunely tuning its hyperparameters, it is possible to achieve accurate predictions compared to the full-scale state-to-state simulation in significantly shorter times. Secondly, several strategies to speed-up our in-house state-to-state solver were investigated by coupling it with the best-performing pre-trained machine learning algorithm. The embedding of machine learning algorithms into ordinary differential equations solvers may offer a speed-up of several orders of magnitude. Nevertheless, performances are found to be strongly dependent on the interfaced codes and the set of variables onto which the coupling is realized. Finally, the solution of the state-to-state Euler system of equations was inferred by means of a deep neural network by-passing the use of the solver while relying only on data. Promising results suggest that deep neural networks appear to be a viable technology also for this task.
... Genetic operators are applied in a two-step procedure. In the first step, the standard subtree crossover and mutation operators are applied to priority functions (Koza 1994). The second step represents the proposed feature selection mechanism as follows. ...
Article
Because of advances in computational power and machine learning algorithms, the automated design of scheduling rules using Genetic Programming (GP) is successfully applied to solve dynamic job shop scheduling problems. Although GP-evolved rules usually outperform dispatching rules reported in the literature, intensive computational costs and rule interpretability persist as important limitations. Furthermore, the importance of features in the terminal set varies greatly among scenarios. The inclusion of irrelevant features broadens the search space. Therefore, proper selection of features is necessary to increase the convergence speed and to improve rule understandability using fewer features. In this paper, we propose a new representation of the GP rules that abstracts the importance of each terminal. Moreover, an adaptive feature selection mechanism is developed to estimate terminals’ weights from earlier generations in restricting the search space of the current generation. The proposed approach is compared with three GP algorithms from the literature and 30 human-made rules from the literature under different job shop configurations and scheduling objectives, including total weighted tardiness, mean tardiness, and mean flow time. Experimentally obtained results demonstrate that the proposed approach outperforms methods from the literature in generating more interpretable rules in a shorter computational time without sacrificing solution quality.
... Artificial intelligence (AI) has been utilized in geotechnical engineering since 1985, and its applications are widely accepted today. Genetic Programming (GP) is one of the more recent AI techniques; it was established by Cramer [28] and developed by Koza [29]. These days, GP is a main technique that includes many sub-techniques, all developed on the same basis as GP, such as Gene Expression Pro- ...
Article
Full-text available
Modulus of subgrade reaction (Ks) is a simplified, approximate approach to representing soil-structure interaction. It is widely used in designing combined and raft foundations due to its simplicity. (Ks) is not a soil property; its value depends on many factors, including soil properties, the shape, dimensions, and stiffness of the footing, and even time (for saturated cohesive soils). Many earlier formulas were developed to estimate the (Ks) value. This research studies the effect of de-stressing and of the shoring rigidity of a deep excavation on the (Ks) value. A parametric study was carried out using 27 FEM models with different configurations to generate a database; the well-known Genetic Programming technique was then applied to the database to develop a formula correlating the (Ks) value with the deep excavation configuration. The results indicate that the (Ks) value increases with increasing diaphragm wall stiffness and decreases with increasing excavation depth.
... The GA represents solutions as strings of numbers, whereas the GP represents solutions, i.e., mathematical models, as tree structures, as mentioned earlier. Therefore, unlike the previously discussed ML techniques, including the ANN, SVM, ELM, and FLM, which provide solution models of the problems under investigation as black boxes, the GP provides explicit mathematical models relating the output to the input variables [92][93][94][95][96][97][98][99]. Since its inception, it has been applied to classification, feature extraction, and regression problems in engineering and science [100][101][102]. ...
Chapter
Artificial Intelligence (AI) garners many front-page headlines every day as the technology enables machines to learn from experience and perform human-like tasks. Though the term "AI" was coined in the 1950s, the field is still evolving due to the invention of new high-speed computing devices. Accordingly, many variations of AI techniques have been reported and applied in anomaly detection, pattern recognition, natural language processing, feature extraction, regression, data augmentation, and many other fields of study. This chapter presents a brief history of AI, along with its foundation and basic components. It then discusses the basics of the popular machine learning techniques (a subset of AI), including artificial neural networks, support vector machines, extreme learning machines, fuzzy logic models, genetic programming, and deep learning techniques. In addition, this chapter briefly sheds light on hybrid, ensemble, and other AI techniques. The discussions presented here are expected to help readers understand the foundations of the popular machine learning techniques and select appropriate strategies for their problems. Furthermore, many of the discussed AI techniques are employed in solving power system fault diagnosis in the subsequent chapters of this book.
Article
We evolve floating-point Sextic polynomial populations of genetic programming binary trees for up to a million generations. We observe continued innovation, but this is limited by tree depth. We suggest that deep expressions are resilient to learning because they disperse information, impeding evolvability and the adaptation of highly nested organisms, and we argue instead for open complexity. Programs with more than 2,000,000,000 instructions (depth 20,000) are created by crossover. To support unbounded long-term evolution experiments in genetic programming (GP), we use incremental fitness evaluation and both SIMD parallel AVX 512-bit instructions and 16 threads, yielding performance equivalent to 1.1 trillion GP operations per second (1.1 tera GPops) on an Intel Xeon Gold 6136 CPU 3.00 GHz server.
Thesis
Full-text available
This dissertation analyzes fundamental concepts and dogmas of a graph-based genetic programming approach called Cartesian Genetic Programming (CGP) and introduces advanced genetic operators for CGP. The results of the experiments presented in this thesis lead to more knowledge about the algorithmic use of CGP and its underlying working mechanisms. CGP has mostly been used with one parametrization pattern, which has been prematurely generalized as the most efficient pattern for standard CGP and its variants. Several parametrization patterns are evaluated and analyzed in detailed and comprehensive experiments using meta-optimization. This thesis also presents a first runtime analysis of CGP: the time complexity of a simple (1+1)-CGP algorithm is analyzed on a simple mathematical problem and a simple Boolean function problem. In the subfield of genetic operators for CGP, new recombination and mutation techniques that work on the phenotypic level are presented, and their effectiveness is demonstrated on a widespread set of popular benchmark problems. The role of recombination in particular can be seen as a big open question in the field of CGP, since the lack of an effective recombination operator limits CGP to mutation-only use. Phenotypic exploration analysis is used to analyze the effects caused by the presented operators; this type of analysis also leads to new insights into the search behavior of CGP in continuous and discrete fitness spaces. Overall, the outcome of this thesis leads to a reconsideration of how CGP is effectively used and extends its adaptation from Darwin's and Lamarck's theories of evolution.
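To make the CGP representation concrete, a minimal genotype-to-phenotype decoding can be sketched as below. The encoding is illustrative (each node gene is a tuple of function id and two input addresses, with a final output address), not the exact scheme used in the thesis.

```python
# Function set for a tiny Boolean CGP (NOT ignores its second input).
FUNCS = [
    lambda a, b: a and b,   # 0: AND
    lambda a, b: a or b,    # 1: OR
    lambda a, b: not a,     # 2: NOT
    lambda a, b: a != b,    # 3: XOR
]

def evaluate(genotype, inputs):
    # Node addresses: 0..len(inputs)-1 are program inputs; subsequent
    # addresses refer to earlier nodes (feed-forward only).
    values = list(inputs)
    *nodes, out_addr = genotype
    for f_id, a, b in nodes:
        values.append(FUNCS[f_id](values[a], values[b]))
    return values[out_addr]

# Genotype computing XOR as (a OR b) AND NOT(a AND b):
geno = [(1, 0, 1), (0, 0, 1), (2, 3, 3), (0, 2, 4), 5]
for a in (False, True):
    for b in (False, True):
        print(a, b, evaluate(geno, (a, b)))
```

Because genes are fixed-length integer tuples, mutation is a simple resampling of a gene field; designing a recombination operator that respects the decoded phenotype is, as the thesis discusses, the harder problem.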
Article
Full-text available
The self-organizing migrating algorithm (SOMA) is a population-based meta-heuristic that belongs to swarm intelligence. Over the last 20 years, two main streams can be observed in the publications: first, novel approaches contributing to the improvement of its performance; second, applications to various optimization problems. Despite the different approaches and applications, no work has summarized them. Therefore, this work reviews the research papers dealing with the principles and applications of the SOMA. The second goal of this work is to provide additional information about the performance of the SOMA by comparing selected algorithms. The experimental results indicate that the best-performing SOMAs provide results competitive with recently published algorithms.
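A single SOMA migration (in the common AllToOne strategy) can be sketched as follows: each individual jumps toward the leader along its path in discrete steps, with a random PRT vector masking which dimensions move at each jump. The parameter values and function names here are illustrative assumptions.

```python
import random

def migrate(individual, leader, fitness, rng,
            path_length=3.0, step=0.11, prt=0.3):
    # Keep the best position found along the path toward the leader.
    best_pos, best_fit = individual[:], fitness(individual)
    t = step
    while t <= path_length:
        # PRT vector: each dimension is allowed to move with probability prt.
        mask = [1.0 if rng.random() < prt else 0.0 for _ in individual]
        cand = [x + (l - x) * t * m
                for x, l, m in zip(individual, leader, mask)]
        f = fitness(cand)
        if f < best_fit:
            best_pos, best_fit = cand, f
        t += step
    return best_pos, best_fit

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(1)
pos, fit = migrate([2.0, -3.0], [0.1, 0.2], sphere, rng)
print(pos, fit)
```

Since a path length above 1 lets the individual overshoot the leader, the migration explores both sides of the leader's position rather than just interpolating toward it.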
Article
Full-text available
To solve constrained optimization problems (COPs), teaching-learning-based optimization (TLBO) is used in this study as a baseline algorithm, and different constraint handling techniques (CHTs) are incorporated in its framework. The superiority of feasibility (SF) is one of the most commonly used and most effective CHTs, with various decisive factors. The factors most commonly used in SF for comparing solutions are the number of constraints violated (NCV) and the weighted mean (WM) of violations. In this paper, SF based on the number of constraints violated (NCVSF) and on the weighted mean (WMSF) is incorporated in the framework of TLBO and applied to the CEC-2006 constrained benchmark functions. Relying on a single factor to decide the winner may not be a good idea; the combined use of the NCV and WM factors in hybrid superiority of feasibility (HSF) has shown the dominating role of NCV over WM. We employ NCVSF, WMSF, and HSF in the TLBO framework and propose three constrained versions, namely NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO. The performance of the proposed algorithms is evaluated on the CEC-2006 constrained benchmark functions. Among them, HSF-TLBO shows better performance on most of the constrained optimization problems used, in terms of both proximity and diversity.
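The superiority-of-feasibility comparison described above can be sketched directly: a feasible solution beats an infeasible one; two feasible solutions are compared by objective value; two infeasible solutions are compared by the decisive factor, here NCV. The function and constraint definitions are illustrative assumptions.

```python
def violations(x, constraints):
    # Each constraint g is satisfied when g(x) <= 0.
    vals = [g(x) for g in constraints]
    ncv = sum(1 for v in vals if v > 0)          # number of constraints violated
    wm = sum(max(v, 0.0) for v in vals) / len(vals)  # mean violation magnitude
    return ncv, wm

def sf_better(x, y, objective, constraints):
    # True if x wins under NCV-based superiority of feasibility.
    ncv_x, _ = violations(x, constraints)
    ncv_y, _ = violations(y, constraints)
    if ncv_x == 0 and ncv_y == 0:
        return objective(x) < objective(y)   # both feasible: better objective
    if ncv_x == 0 or ncv_y == 0:
        return ncv_x == 0                    # feasible beats infeasible
    return ncv_x < ncv_y                     # both infeasible: fewer violations

cons = [lambda x: x[0] - 1.0]                # feasible iff x[0] <= 1
obj = lambda x: x[0] ** 2
print(sf_better([0.5], [2.0], obj, cons))    # feasible beats infeasible
```

An HSF variant would fall back to the WM value returned by `violations` when two infeasible solutions tie on NCV, which is how the two factors can be combined.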
Thesis
Full-text available
Nowadays, in a world covered by networks, there are more smart devices than people, since a person may own several smart devices in different forms. These devices, which interconnect and exchange a very large flow of data, perform several functions, including monitoring, data collection, and data evaluation. In this thesis, we focus on this new trend of interconnected objects used to improve the daily life of individuals. The exploitation of the Internet of Things in the field of monitoring and control is a recent research axis that helps human beings carry out this task based on data captured by intelligent devices, which are subsequently analyzed and processed by different methods. It is in this context that we orient our research on the concept of linking objects to the Internet, known today as the Internet of Things. Our work is articulated around two issues: physical activity and fall prevention in the elderly, and the security of international borders. In our first work, we proposed an approach based on metaheuristics for real-time security and border protection. This technique is inspired by the behavior of natural cockroaches and their search for the most attractive and secure place to hide. In our second work, we used classification algorithms to combat the risk of falls in the elderly and enable these individuals to continue their lives in the best possible condition. We examine the applicability of three data mining algorithms (K-nn, Naive Bayes, and Decision Tree) to real-world IoT datasets. The main contribution of this work is the analysis of the efficiency of these three algorithms. The experiments carried out and the results obtained demonstrate the benefits of the proposed system.
Chapter
This chapter presents general knowledge and applications of Genetic Programming (GP), especially in water and environmental science. A brief introduction to and literature review of GP and the Genetic Algorithm (GA) are presented. Then the natural process, the basic GP iteration procedure, and the computational steps of the GP algorithm are detailed. Moreover, the main steps of problem-solving in the GP process are explained. Finally, pseudo-code of the GP algorithm is stated to demonstrate the implementation of this technique.
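The basic GP iteration procedure can be sketched compactly: initialize a population of expression trees, evaluate fitness, select survivors, and vary them with subtree mutation. The target function, operator set, and parameter values below are assumptions for demonstration, and crossover is omitted for brevity.

```python
import random

rng = random.Random(0)
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def rand_tree(depth=3):
    # Grow a random tree; leaves are the variable "x" or a constant.
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.7 else rng.uniform(-1, 1)
    op = rng.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, l, r = tree
    return OPS[op](evaluate(l, x), evaluate(r, x))

def fitness(tree):
    # Sum of squared errors against the target x**2 + x on a small grid.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2
               for x in [-1.0, -0.5, 0.0, 0.5, 1.0])

def mutate(tree):
    # Subtree mutation: replace the whole tree or a random branch.
    if not isinstance(tree, tuple) or rng.random() < 0.3:
        return rand_tree(2)
    op, l, r = tree
    return (op, mutate(l), r) if rng.random() < 0.5 else (op, l, mutate(r))

pop = [rand_tree() for _ in range(60)]
for gen in range(40):
    pop.sort(key=fitness)
    survivors = pop[:20]                  # truncation selection
    pop = survivors + [mutate(rng.choice(survivors)) for _ in range(40)]
pop.sort(key=fitness)
best = pop[0]
print(best, fitness(best))
```

Because the evolved solution is an explicit expression tree, it can be read off directly, which is the interpretability advantage of GP emphasized throughout this listing.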
Thesis
Full-text available
In this study, Gene Expression Programming (GEP) was used to derive a predictive model of the one-day compressive strength of Rapid Hardening Concrete (RHC) mixes. The first objective, developing a database, was achieved through an extensive review of internationally published research studies. The established database contains 115 data points with 12 numerical variables. The input variables used for the regression model are ordinary portland cement, magnesium phosphate cement, type 3 cement, high alumina cement, fine aggregate, coarse aggregate, water, superplasticizer, accelerator, retarder, silica fume, and fly ash. GeneXproTools 5.0 was used in our analysis, applying GEP regression with function-finding analysis. Various quantitative and qualitative measures were observed during the analysis, e.g., R-squared value, Mean Absolute Error (MAE), regression plot, residual plot, and variable importance. GEP was observed to be an excellent tool for evaluating and constructing statistical models of the compressive strength of RHC. The derived models can be used in the practical pre-planning and pre-design phases across a wide range of cementitious materials, admixtures, and additives.