Evolutionary Computation: The Fossil Record
Abstract
Featuring copious introductory material by distinguished scientist Dr. David B. Fogel, this formidable collection of 30 landmark papers spans the entire history of evolutionary computation--from today’s investigations back to its very origins more than 40 years ago. Chapter by chapter, Fogel highlights how early ideas have developed into current thinking and how others have been lost and await rediscovery. The introductions to each chapter reflect Fogel’s one-on-one conversations with the authors and their colleagues, conducted over a period of four years. Evolutionary Computation: The Fossil Record provides in-depth historical information and technical detail that is simply unmatched in the field. This volume is complete with an extensive bibliography of related literature. Evolutionary Computation: The Fossil Record will be of particular interest to researchers and students in need of a comprehensive resource on this fascinating area of computer science. Historians will also find the book thoroughly engaging.
... The first adaptations of this work to computers were made in the 1950s, as shown in David Fogel's Fossil Record [53], by Fraser, Friedberg and Friedman, who showed how binary strings could be evolved through crossover [54], how computers could self-program using mutations [55,56], and how evolution could be digitally simulated [57]. ...
One of the main branches of signal processing is spectral analysis. Many experimental devices produce signals that are sums of damped sines, and as these devices advance, the volume of data they generate continues to grow. In this thesis, we focus on data from Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR) as well as on simulated data. Our contribution consists in exploring the potential of evolutionary methods to overcome the limitations of the Fourier Transform (FT) method. We carried out a comparative study of the FT and artificial-evolution approaches. The results obtained with SINUS-IT are of better quality than those of FT, without requiring denoising or apodization. SINUS-IT was able to determine the phase parameter with good precision, and results were obtained using fewer samples, which would decrease the acquisition time.
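For reference, the damped-sine signal model discussed here can be written generically as follows; the notation (amplitudes a_k, damping factors d_k, frequencies f_k, phases phi_k) is ours, and these are precisely the parameters an evolutionary method such as SINUS-IT estimates from the sampled signal:

\[
  s(t) \;=\; \sum_{k=1}^{K} a_k\, e^{-d_k t}\, \sin\!\bigl(2\pi f_k t + \varphi_k\bigr) \;+\; \varepsilon(t),
\]

where \varepsilon(t) is measurement noise; fitting seeks the parameter set that minimizes the residual between the model and the acquired data.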
... These methods are designed to mimic the hereditary transmission of successful behaviours through generations. Sub-classes of evolutionary algorithms include evolution strategies (Rechenberg, 1965), evolutionary programming (Fogel et al., 1966), genetic algorithms (Fogel, 1998) and memetic algorithms (Moscato et al., 1989). Such approaches have been shown to be fairly efficient for many tasks, including autonomous car driving (Togelius et al., 2007), mobile robot control (Valsalam et al., 2012) and game playing (Togelius et al., 2011). ...
The last decade has seen the re-emergence of machine learning methods based on formal neural networks under the name of deep learning. Although these methods have enabled a major breakthrough in machine learning, several obstacles to industrializing them persist, notably the need to collect and label a very large amount of data as well as the computing power necessary to perform learning and inference with this type of neural network. In this thesis, we study how well inference and learning algorithms derived from biological neural networks fit massively parallel hardware architectures. We show through three contributions that such a fit drastically accelerates the computation times inherent to neural networks. In our first axis, we study the fit of the BCVision software engine developed by Brainchip SAS for GPU platforms. We also propose the introduction of a coarse-to-fine architecture based on complex cells. We show that porting to GPU accelerates processing by a factor of seven, while the coarse-to-fine architecture reaches a factor of one thousand. The second contribution presents three algorithms for spike propagation adapted to parallel architectures. We exhaustively study the computational models of these algorithms, allowing the selection or design of a hardware system adapted to the parameters of the desired network. In our third axis we present a method for applying the Spike-Timing-Dependent Plasticity (STDP) rule to image data in order to learn visual representations in an unsupervised manner. We show that our approach allows the effective learning of a hierarchy of representations relevant to image classification, while requiring ten times less data than other approaches in the literature.
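For context, the pair-based STDP rule referred to above is commonly written as follows; this is the standard textbook formulation, not necessarily the exact variant used in the thesis. With \Delta t = t_{post} - t_{pre} the relative spike timing,

\[
\Delta w \;=\;
\begin{cases}
\;A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)}\\[2pt]
\;-A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)}
\end{cases}
\]

where A_{\pm} > 0 are learning amplitudes and \tau_{\pm} are time constants; repeated application of this rule is what drives the unsupervised learning of visual representations described above.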
... Modified versions of these have also been made. Recently, other intelligent edge detection algorithms have appeared which use Evolutionary Algorithms (EA) [2] to determine the vital parts of the image. We use the third, evolutionary version, with some modifications and with pre- and post-processing techniques, to build a slightly slower but robust edge detection method. ...
Edge detection is a very important technique for revealing significant areas in a digital image, and it can aid feature extraction techniques. In fact, edge detection makes it possible to remove unnecessary parts of an image. Many edge detection techniques have already been developed, but we propose a robust evolutionary system to extract the vital parts of an image. The system is based on numerous pre- and post-processing techniques, such as filters and morphological operations, and applies a modified Ant Colony Optimization edge detection method to the image. The main goal is to test the system in different color spaces and to measure its performance. Another novel aspect of the research is the use of depth images alongside color ones, with the depth data acquired by a Kinect V.2 during validation, to better understand edge detection in depth data. The system is tested on 10 benchmark color images and 5 depth images, and validated using 7 Image Quality Assessment factors, such as Peak Signal-to-Noise Ratio, Mean Squared Error, and Structural Similarity (mostly edge-related), in different color spaces and compared with other well-known edge detection methods under the same conditions. To evaluate the robustness of the system, several types of noise, such as Gaussian, salt-and-pepper, Poisson, and speckle, are added to the images to demonstrate the proposed system's strength under any condition. The goal is to obtain the best edges possible; this requires more computation, which increases run time slightly, but with today's systems this overhead is minimal and worth the cost. The results obtained are promising and satisfactory in comparison with the other methods in the validation section of the paper.
Quantum few-body systems are deceptively simple. Indeed, with the notable exception of a few special cases, their associated Schrödinger equation cannot be solved analytically for more than two particles. One has to resort to approximation methods to tackle quantum few-body problems. In particular, variational methods have been proposed to ease numerical calculations and obtain precise solutions. One such method is the Stochastic Variational Method, which employs a stochastic search to determine the number and parameters of correlated Gaussian basis functions used to construct an ansatz of the wave function. Stochastic methods, however, face numerical and optimization challenges as the number of particles increases. We introduce a family of gradient variational methods that replace stochastic search with gradient optimization. We comparatively and empirically evaluate the performance of the baseline Stochastic Variational Method, several instances of the gradient variational method family, and some hybrid methods for selected few-body problems. We show that gradient and hybrid methods can be more efficient and effective than the Stochastic Variational Method. We discuss the role of singularities, oscillations, and gradient optimization strategies in the performance of the respective methods.
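For reference, the Stochastic Variational Method mentioned above expands the wave function over correlated Gaussians; a common form of this ansatz (standard in the literature, written in our notation) is

\[
\Psi(\mathbf{x}) \;=\; \sum_{i=1}^{N} c_i \,\exp\!\Bigl(-\tfrac{1}{2}\,\mathbf{x}^{\mathsf T} A_i\,\mathbf{x}\Bigr),
\]

where \mathbf{x} collects the Jacobi coordinates of the particles, each A_i is a symmetric positive-definite matrix of nonlinear parameters, and the c_i are linear coefficients. The stochastic search samples trial A_i matrices and keeps improvements, whereas the gradient variational methods discussed here optimize the A_i directly by gradient-based methods.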
Molecular retrosynthesis is a significant and complex problem in chemistry; traditional manual synthesis methods not only require well-trained experts but are also time-consuming. With the development of Big Data and machine learning, artificial intelligence (AI)-based retrosynthesis is attracting more attention and has become a valuable tool for molecular retrosynthesis. At present, Monte Carlo tree search is the mainstream search framework employed to address this problem. Nevertheless, its search efficiency is compromised by its large search space. Therefore, this paper proposes a novel approach for retrosynthetic route planning based on evolutionary optimization, marking the first use of an Evolutionary Algorithm (EA) in the field of multi-step retrosynthesis. The proposed method models the retrosynthetic problem as an optimization problem, defining the search space and operators. Additionally, to improve search efficiency, a parallel strategy is implemented. The new approach is applied to four case products and compared with Monte Carlo tree search. The experimental results show that, in comparison with the Monte Carlo tree search algorithm, the EA significantly reduces the number of calls to the single-step model, by an average of 53.9%. The time required to search for three solutions decreases by an average of 83.9%, and the number of feasible search routes increases by 1.38 times. The source code is available at https://github.com/ilog-ecnu/EvoRRP.
Due to their agility, cost-effectiveness, and high maneuverability, Unmanned Aerial Vehicles (UAVs) have attracted considerable attention from researchers and investors alike. Path planning is one of the practical subsets of motion planning for UAVs: it prevents collisions and ensures complete coverage of an area. This study provides a structured review of applicable algorithms and coverage path planning solutions in Three-Dimensional (3D) space, presenting state-of-the-art technologies related to heuristic decomposition approaches for UAVs and the challenges at the forefront of the field. Additionally, it introduces a comprehensive and novel classification of practical methods and representational techniques for path-planning algorithms, based on environmental characteristics and the parameters to be optimized in the real world. The first category presents a classification of semi-accurate decomposition approaches, the most practical decomposition method, along with the data structures of these practices, categorized by phase. The second category illustrates path-planning processes based on symbolic techniques in 3D space. The study also provides a critical analysis of the most influential approaches, judged by their importance to path quality and the attention they have received from researchers, highlighting their limitations and research gaps, and it offers pertinent recommendations for future work. The studies reviewed demonstrate an apparent inclination among experimenters towards the semi-accurate cellular decomposition approach for improving 3D path planning.
In some instances, intelligent, autonomous robotic systems (IARS) are transforming how the private and public sectors provide emergency services, deliver products, protect national interests, and accomplish their missions. Some futures, however, envision missions performed by cooperative teams of IARS, commonly described as swarms, making the purposeful design of swarms as information processing and communication systems an imperative. Furthermore, how well a swarm performs depends on the degree to which its design fits its environment. In the context of organizational information processing and transactive memory theories, this paper explores the degree to which the centralization of decision-making and the employment of distributed expertise may impact mission performance. Against the backdrop of collecting information on grey whale behaviors and migration routes, three swarm designs, starling, hive, and wolf-pack, are modeled and their simulated performance compared. The wolf-pack design, characterized by a largely decentralized decision-making structure and a differentiated transactive memory system, outperformed the other designs on total work volumes and mission durations. Future studies should empirically investigate the impacts of cognitive slack and other swarm characteristics on mission accomplishment in a broader spectrum of scenarios.
This paper addresses dynamic multi-objective optimization problems (DMOPs) by demonstrating new approaches to change detection and change prediction in an evolutionary algorithm framework. Because the objectives of such problems change over time, the Pareto optimal set (PS) and Pareto optimal front (PF) are also dynamic. First, we propose a new change detection method that achieves greater sensitivity by considering changes in both the PS and the PF, unlike most previous approaches. Second, when changes occur, a second-order (acceleration-based) prediction strategy is used to predictively reinitialize the population close to the new set of optima. We compare the performance of the proposed algorithm against two other state-of-the-art algorithms from the literature on ten dynamic benchmark problems. Experimental results show that the proposed change detection strategy accounts not only for the optimal individuals but also for their corresponding objective values. Compared with the other two methods, the proposed algorithm can precisely predict both the direction of changes and the future trend of the change direction, and it therefore converges to the true PF in far fewer iterations. Across multiple experiments, the proposed method outperforms the other algorithms on most of the test problems.
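To illustrate what an acceleration-based (second-order) reinitialization can look like, here is a minimal sketch in Python; the update rule below is a generic second-order extrapolation under our own assumptions, not the paper's exact formulation:

import numpy as np

def second_order_predict(pop_t, pop_t1, pop_t2, noise=0.01):
    """Predict the population after a detected change.

    pop_t, pop_t1, pop_t2: populations ((n, d) arrays) at the three
    most recent time steps, newest first.
    """
    velocity = pop_t - pop_t1                     # first-order term
    acceleration = velocity - (pop_t1 - pop_t2)   # second-order term
    predicted = pop_t + velocity + 0.5 * acceleration
    # A small Gaussian perturbation keeps diversity around the prediction.
    return predicted + np.random.normal(0.0, noise, predicted.shape)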
A systematic device-model calibration (extraction) methodology is proposed to reduce the parameter calibration time of advanced compact models for modern nano-scale semiconductor devices. The adaptive pattern search algorithm is a variant of the direct search method that explores the parameter space with an adaptive search step and direction. It is straightforward yet powerful for high-dimensional optimization problems, since the adaptive step and direction are decided by simple computations. The proposed method requires fewer iterations while showing superior accuracy over the conventional method. Owing to its universality in parameter calibration, it can also be applied to behavioral or empirical models of emerging devices, such as the tunneling field-effect transistor (TFET) and the negative-capacitance field-effect transistor (NCFET).
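A minimal sketch of a pattern search with an adaptive step, in Python, to illustrate the general idea; this is the textbook coordinate pattern search, not the authors' exact variant:

import numpy as np

def adaptive_pattern_search(f, x0, step=1.0, shrink=0.5, grow=2.0,
                            tol=1e-8, max_iter=10000):
    """Minimize f by polling +/- each coordinate direction.

    The step grows after a successful poll and shrinks after a
    failed one, which is the 'adaptive' part of the method.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        step = step * grow if improved else step * shrink
        if step < tol:
            break
    return x, fx

# Example: calibrating two parameters of a toy model against data.
# print(adaptive_pattern_search(lambda p: (p[0] - 3)**2 + (p[1] + 1)**2, [0, 0]))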
Many mobile games adopt autobattle systems in which the players' major consideration is how to assemble strong teams. Automated team assembly (ATA) thus becomes a crucial issue from different standpoints, such as assisting game designers in performing balance analysis and guiding players in configuring teams. Since ATA is generally a combinatorial optimization problem, this article exploits evolutionary optimizers. However, unlike traditional problems where the evaluation function is explicit, in ATA we are unable to evaluate team strengths straightforwardly. To address this issue, we collect data from the server and build an end-to-end deep learning surrogate for estimating team strengths. The model has a three-layer architecture, comprising a feature embedding layer, a sequential relation layer, and a regression layer, which is able to characterize the complex dependencies between the sparse input features and the team strengths. The evolutionary algorithms are then guided by the constructed surrogate to seek the strongest teams. Several adjustments are also made to the evolutionary algorithms to adapt them to the ATA problem with multiple constraints. Simulations on the game Romance of the Three Kingdoms: Strategy Edition validate the good performance of the proposed data-driven evolutionary optimizers.
Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things, with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human–computer fusion intelligence. Intelligence and computing have followed different paths of evolution and development for a long time but have become increasingly intertwined in recent years: intelligent computing is not only intelligence oriented but also intelligence driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing. Intelligent computing is still in its infancy, and an abundance of innovations in its theories, systems, and applications is expected to occur soon. We present the first comprehensive survey of the literature on intelligent computing, covering its theoretical fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives. We believe this survey is highly timely and will provide a comprehensive reference and cast valuable insights into intelligent computing for academic and industrial researchers and practitioners.
The purpose of the present article is to introduce theoretical and algorithmic approaches to the problem of finding optimal test-control incomplete block designs with unequal block sizes where intra-block observations are correlated. A theoretical approach is used to find E_tc-optimal designs analytically. In addition, owing to the computational complexity of theoretical methods, a two-phase optimization algorithm is proposed to construct ϕ-optimal or nearly ϕ-optimal designs. The effectiveness of the proposed algorithm is validated by comparing our results with optimal designs presented in several prior studies. Our algorithm has the advantage of being independent of the block sizes, the correlation structure, and the optimality criterion. Moreover, it takes only a few minutes to obtain the optimal designs.
The Marine Predator Algorithm (MPA) is a meta-heuristic algorithm based on the foraging behavior of marine animals. It has the advantages of few parameters, simple setup, easy implementation, accurate calculation, and easy application. However, compared with other meta-heuristic algorithms, it has some problems, such as a lack of transition between exploitation and exploration and unsatisfactory global optimization performance. To address these shortcomings, this paper proposes a multi-disturbance Marine Predator Algorithm based on oppositional learning and compound mutation (mMPA-OC). First, the optimal value selection process is improved by using an Opposition-Based Learning mechanism, enhancing the MPA's exploration ability. Second, a compound mutation strategy is used to improve the predator position-updating mechanism and the MPA's global search ability. Finally, the single disturbance factor is extended to multiple disturbance factors so that the MPA can maintain population diversity. To verify the performance of mMPA-OC, experiments are conducted comparing it with seven meta-heuristic algorithms, including the MPA, on different dimensions of the CEC-2017 benchmark functions, the complex CEC-2019 benchmark functions, and engineering optimization problems. The experiments show that mMPA-OC is more efficient than the other meta-heuristic algorithms.
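For readers unfamiliar with Opposition-Based Learning, the core operation generates, for each candidate, its "opposite" point with respect to the search bounds and keeps the better of the two; below is a minimal Python sketch of generic OBL, not the paper's exact integration into the MPA:

import numpy as np

def opposition_step(pop, fitness, lb, ub):
    """One generic Opposition-Based Learning step.

    pop: (n, d) population; lb, ub: per-dimension bound arrays.
    fitness: callable mapping a (d,) vector to a scalar (lower is better).
    """
    opposite = lb + ub - pop                       # opposite of each candidate
    merged = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, merged)
    best = np.argsort(scores)[: pop.shape[0]]      # keep the n best of 2n
    return merged[best]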
In this paper, the use of Artificial Neural Networks (ANNs), in the form of Convolutional Neural Networks (AlexNet), for the fast and energy-efficient fitting of the Dynamic Memdiode Model (DMM) to the conduction characteristics of bipolar-type resistive switching (RS) devices is investigated. Despite an initial, computationally intensive training phase, the ANNs allow a mapping between the experimental Current-Voltage (I-V) curve and the corresponding DMM parameters to be obtained without incurring the costly iterative process typically involved in error-minimization-based optimization algorithms. To demonstrate the fitting capabilities of the proposed approach, a complete set of I-V curves obtained from Y2O3-based RRAM devices, fabricated under different oxidation conditions and measured with different current compliances, is considered. In this way, in addition to the intrinsic RS variability, extrinsic variation is achieved by means of external factors (oxygen content and damage control during the set process). We show that the reported method provides a significant reduction of the fitting time (one order of magnitude), especially for large data sets. This is crucial when the extraction of the model parameters and their statistical characterization are required.
Deep learning has been increasingly used in various applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is a regularized version of the multilayer perceptron. Multilayer perceptrons usually mean fully connected networks; that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training or trimming connectivity. CNNs use relatively little pre-processing compared with other image classification algorithms. Given the rise in popularity and use of deep neural network learning, tuning hyperparameters is an increasingly prominent task in constructing efficient deep neural networks. In this paper, the tuning of deep neural network (DNN) hyperparameters is explored using an evolutionary approach popularized for estimating solutions to problems whose search space is too large to solve exactly.
This paper studies unmanned aerial vehicle (UAV)-enabled industrial Internet of Things, where a UAV is dispatched to collect data from low-power ground sensor nodes (SNs) in a multi-obstacle environment. The authors aim to minimize the completion time while satisfying the communication link constraints of each SN, obstacle avoidance, data collection requirements, etc. To this end, the authors first formulate the completion-time minimization problem by jointly optimizing the UAV trajectory and the collection sequence of SNs. The problem is difficult to solve optimally, as it is non-convex. To tackle it, the authors first transform the original problem into a Traveling Salesman Problem-like (TSP-like) problem based on hover points that naturally satisfy the communication link constraints of data collection. A dynamic programming (DP) algorithm is used to determine the order in which the UAV visits each SN, which gives the initial path of the UAV traversing the SNs from the beginning point to the end point. Next, the authors consider general data collection scenarios in which the UAV also communicates while flying. The authors construct an equivalent problem with integer-variable constraints for the original problem with indicator-function constraints. They rewrite the non-convex constraints of the equivalent problem by introducing slack variables and leveraging successive convex approximation (SCA), and add discrete region-threat constraints to the traditional path discretization method. Finally, simulation results verify the effectiveness of the proposed algorithm under different parameter configurations.
Recently, spam on online social networks has attracted attention in the research and business worlds. Twitter has become the preferred medium for spreading spam content, and many research efforts have attempted to counter social network spam. Twitter brings extra challenges, represented by the size of the feature space and imbalanced data distributions, and related work usually addresses only part of these challenges or produces black-box models. In this paper, we propose a modified genetic algorithm for simultaneous dimensionality reduction and hyperparameter optimization over imbalanced datasets. The algorithm initializes an eXtreme Gradient Boosting classifier and reduces the feature space of a tweets dataset to generate a spam prediction model. The model is validated using 50 times repeated 10-fold stratified cross-validation and analyzed using nonparametric statistical tests. The resulting prediction model attains on average 82.32% geometric mean and 92.67% accuracy, utilizing less than 10% of the total feature space. The empirical results show that the modified genetic algorithm outperforms PCA and other feature selection methods. In addition, eXtreme Gradient Boosting outperforms many machine learning algorithms, including a BERT-based deep learning model, in spam prediction. Furthermore, the proposed approach is applied to SMS spam modeling and compared with related works.
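To make the "simultaneous dimensionality reduction and hyperparameter optimization" idea concrete, here is a minimal sketch of how a single chromosome can jointly encode a binary feature mask and classifier hyperparameters; the encoding, operators, and stub fitness function are our own illustrative assumptions, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 100  # illustrative feature-space size

def random_chromosome():
    # First N_FEATURES genes: binary feature mask; last two genes:
    # real-coded hyperparameters (e.g., learning rate and tree depth).
    mask = rng.integers(0, 2, N_FEATURES)
    hparams = np.array([rng.uniform(0.01, 0.3), rng.uniform(2, 10)])
    return np.concatenate([mask, hparams])

def fitness(chrom):
    mask = chrom[:N_FEATURES].astype(bool)
    lr, depth = chrom[-2], chrom[-1]
    if mask.sum() == 0:
        return 0.0
    # Stub: in the real system this would train an XGBoost model on the
    # masked features with (lr, depth) and return e.g. the geometric mean.
    return rng.random() - 0.001 * mask.sum()  # placeholder rewarding small masks

def crossover(a, b):
    cut = rng.integers(1, a.size - 1)          # one-point crossover
    return np.concatenate([a[:cut], b[cut:]])

def mutate(chrom, p=0.02):
    child = chrom.copy()
    flip = rng.random(N_FEATURES) < p          # flip some mask bits
    child[:N_FEATURES][flip] = 1 - child[:N_FEATURES][flip]
    child[-2:] += rng.normal(0, 0.05, 2)       # jitter the hyperparameters
    return child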
The application of population-based optimization algorithms in design is heavily driven by the translation and analysis of the various data sets that represent a design problem; in evolutionary algorithms, these data sets appear as two primary data streams: genes and fitness functions. The latter is frequently examined when analyzing an algorithm's output; the former comparatively less so. This paper examines the role of genomic analysis in applying multi-objective evolutionary algorithms (MOEAs) in design. The results demonstrate the significance of genomic analysis for better understanding the relationships between the parameters used in the design problem's formulation, and for differentiating between morphological differences in the algorithmic output that are not commonly observed through fitness-based analyses.
Global optimization solves real-world problems numerically or analytically by minimizing their objective functions. Most of the analytical algorithms are greedy and computationally intractable (Gonzalez in Handbook of approximation algorithms and metaheuristics: contemporary and emerging applications, vol. 2. CRC Press, Boca Raton, 2018). Metaheuristics are generally nature-inspired optimization algorithms that numerically find a near-optimal solution for optimization problems in a reasonable amount of time. We propose a novel metaheuristic algorithm for global optimization based on the shooting and jumping behaviors of the archerfish when hunting aerial insects, and we name it the archerfish hunting optimizer (AHO). The AHO algorithm has two parameters to set: the swapping angle and the attractiveness rate. We execute the AHO algorithm using five different values for each parameter, giving 25 simulations in all, for four distinct values of the search space dimension (i.e., 5, 10, 15, and 20), and we run the Friedman test to determine the best parameter values for each dimension. We perform three different comparisons to validate the proposed algorithm's performance. First, AHO is compared to 12 recent metaheuristic algorithms (the accepted algorithms of the 2020 competition on single-objective bound-constrained numerical optimization) on the ten test functions of the CEC 2020 benchmark for unconstrained optimization. The experimental results are evaluated using the Wilcoxon signed-rank test and show that the AHO algorithm, in terms of robustness, convergence, and quality of the obtained solutions, is significantly competitive with state-of-the-art methods. Second, the performance of AHO and three recent metaheuristic algorithms is evaluated on five engineering design problems taken from the CEC 2020 benchmark for non-convex constrained optimization. The obtained results are ranked using the ranking scheme detailed in the corresponding paper, and the ranks illustrate that AHO is very competitive when compared with the considered algorithms. Finally, the performance of AHO in solving five engineering design problems is assessed and compared to several well-established state-of-the-art algorithms. We analyze the obtained numerical results in detail; they show that the AHO algorithm is significantly better than, or at least comparable to, the considered algorithms, with very efficient performance on many optimization problems. The statistical indicators illustrate that the AHO algorithm has a high ability to significantly outperform well-established optimizers.
The nurse scheduling problem (NSP) involves allocating monthly shifts (day and night shifts, holidays, and so on) to nurses under various constraints. Generally, the NSP has many constraints. As a result, producing a scheduling table that satisfies them requires considerable knowledge and experience, and it has traditionally been done by the head nurse or another authority in the hospital. This allocation of shifts places a large burden (time and effort) on them, and demand for automatic nurse scheduling systems has been growing. This chapter aims to develop a genetic algorithm application for the NSP. The application will be developed in Microsoft Visual Studio in the C# programming language.
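As a language-agnostic illustration of how a genetic algorithm can encode such a schedule (the chapter itself uses C#), here is a minimal Python sketch; the constraint set and penalty weights below are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
N_NURSES, N_DAYS = 8, 28
SHIFTS = 3  # 0 = off, 1 = day shift, 2 = night shift

def random_schedule():
    return rng.integers(0, SHIFTS, (N_NURSES, N_DAYS))

def penalty(s):
    """Lower is better. Two illustrative constraints only."""
    p = 0
    # Coverage: each day needs at least 2 day-shift and 1 night-shift nurse.
    p += np.maximum(0, 2 - (s == 1).sum(axis=0)).sum() * 10
    p += np.maximum(0, 1 - (s == 2).sum(axis=0)).sum() * 10
    # No night shift immediately followed by a day shift.
    p += ((s[:, :-1] == 2) & (s[:, 1:] == 1)).sum() * 5
    return p

def evolve(pop_size=50, generations=200):
    pop = [random_schedule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalty)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.choice(len(elite), 2, replace=False)
            cut = rng.integers(1, N_DAYS)              # one-point crossover on days
            child = np.hstack([elite[a][:, :cut], elite[b][:, cut:]])
            m = rng.random(child.shape) < 0.02         # mutation: reassign shifts
            child[m] = rng.integers(0, SHIFTS, m.sum())
            children.append(child)
        pop = elite + children
    return min(pop, key=penalty)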
This paper explores the promise of genetic algorithms as a tool for optimizing buildings at a neighborhood scale across the conflicting demands of social, environmental, and economic sustainability. A large urban site in Chicago, Illinois, is selected to test the viability of using a multi-criteria genetic algorithm to optimize the potential building mix in a newly planned development. Two variables, the number of buildings of a given use-type and their height, are analyzed against cost functions for social, economic, and environmental objectives. Single-objective algorithms are used to optimize the variables individually. A non-dominated sorting genetic algorithm (NSGA-II) is then used to identify the Pareto-optimal solutions considering the three objectives simultaneously. Single-objective results are found to vary substantially by objective, with different variable values for social, economic, and environmental sustainability. For the multi-objective algorithm, the results support Campbell's notion of the three nodes of sustainability being in conflict. Solutions performing well across economic and environmental objectives were most common; solutions performing well across environmental and social objectives were less common; and solutions performing well across economic and social objectives were rare. This suggests that while economic and environmental conflicts are to some degree resolvable, conflicts between social and either economic or environmental performance are more difficult to resolve. The failure of any solution to perform well across all three objectives lends credence to the idea of design as a series of trade-offs and suggests that one super-optimal solution may not exist. The process provides insights into the trade-offs implicit in the building design and development process and raises questions regarding the balancing of competing sustainability objectives.
RealTimeBattle is an environment in which robots controlled by programs fight each other. Programs control the simulated robots using low-level messages (e.g., turn radar, accelerate). Unlike other tools like Robocode, each of these robots can be developed using different programming languages. Our purpose is to generate, without human programming or other intervention, a robot that is highly competitive in RealTimeBattle. To that end, we implemented an Evolutionary Computation technique: Genetic Programming. The robot controllers created in the course of the experiments exhibit several different and effective combat strategies such as avoidance, sniping, encircling and shooting. To further improve their performance, we propose a function-set that includes short-term memory mechanisms, which allowed us to evolve a robot that is superior to all of the rivals used for its training. The robot was also tested in a bout with the winner of the previous "RealTimeBattle Championship," which it won. Finally, our robot was tested in a multi-robot battle arena, with five simultaneous opponents, and obtained the best results among the contenders.
This research aims to propose a framework for evaluating credit applications by assigning a binary score to the applicant. The score determines whether a credit application is 'good' or 'bad' for small business purpose loans. Even tiny performance improvements in small businesses may yield a positive impact on the economy, as such businesses generate more than 60% of economic value. The method presented in this paper hybridizes the Genetic Algorithm (GA) and the Support Vector Machine (SVM) in a bi-level feeding mechanism for increased prediction accuracy: the first level determines the parameters of the SVM, and the second finds a feature set that increases classification accuracy. To test the proposed approach, we investigated three different data sets: the UCI Australian data set for preliminary work, the Lending Club data set for large-scale training and testing, and the UCI German and Australian datasets for benchmarking against other notable methods that use GA. Our computational results show that the proposed method, using a feedback mechanism under the hybrid bi-level GA-SVM structure, outperforms other classification algorithms in the literature, namely Decision Trees, Random Forests, Logistic Regression, SVM, and Artificial Neural Networks, and effectively improves classification accuracy.
An accurate method based on evolutionary correlation filtering for pose estimation of highly occluded targets is presented. The proposed method performs multiple correlation operations between an input scene and a bank of filters designed in the frequency domain. Each filter is computed from statistical parameters of a real-world scene and a template that contains information about the target in a single pose-parameter configuration. A vast set of templates is generated from multiple views of a three-dimensional model of the target, created synthetically with computer graphics. An evolutionary approach to filter-bank construction is implemented to optimize the pose estimation parameters. The evolutionary computation technique, based on a pseudo-bacterial genetic algorithm, yields high estimation accuracy by finding the best filter, i.e., the one producing the highest matching score. The proposed evolutionary correlation filtering yields good convergence of the filter-bank optimization, which reduces the number of computational operations. Experimental results demonstrate the robustness of the proposed method in terms of detection performance and pose estimation of highly occluded targets compared with state-of-the-art methods.
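For context, the core correlation between a scene and one filter of the bank is typically computed in the frequency domain; below is a minimal Python/NumPy sketch of the generic matched-filter correlation, not the authors' statistically tuned filter design:

import numpy as np

def correlate(scene, template):
    """Cross-correlate a 2-D scene with a template via the FFT.

    Returns the correlation plane; the peak location is the best match,
    and the peak height is the matching score.
    """
    S = np.fft.fft2(scene)
    H = np.fft.fft2(template, s=scene.shape)   # zero-pad to scene size
    return np.real(np.fft.ifft2(S * np.conj(H)))

# Example: score one template against a scene.
# plane = correlate(scene, template)
# score = plane.max()
# loc = np.unravel_index(plane.argmax(), plane.shape)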
With the explosive growth of resources in the IoT, resource discovery emerges as an eminent challenge owing to the requirement for self-automation. Traditional resource discovery approaches do not provide efficient methodologies, because IoT search metrics such as syntax, access, and architecture change continuously. To address this gap, the paper proposes an optimized technique, namely the Modified Genetic Algorithm for Resource Selection (MGA-RS), that aims to discover optimal data (resources) in a short period of time by considering bit strings of chromosomes. It is evaluated on the Ionosphere dataset from the machine learning repository of University College London. The best and mean fitness are selected such that they are close to each other when MGA-RS reaches the termination condition, so as to minimize the classification error from kNN. It is found that MGA-RS performs well with a kNN-based fitness function and is approximately 14% and 15% better than the simple and Rastrigin fitness functions, respectively, for selecting optimal resources in the IoT.
This chapter is an introductory chapter that attempts to highlight the concept of computational intelligence and its application in the field of computing security; it starts with a brief description of the underlying principles of artificial intelligence and discusses the role of computational intelligence in overcoming conventional artificial intelligence limitations. The chapter then briefly introduces various tools or components of computational intelligence such as neural networks, evolutionary computing, swarm intelligence, artificial immune systems, and fuzzy systems. The application of each component in the field of computing security is highlighted.
The authors believe that the hybridization of two different approaches results in more complex encryption outcomes. The proposed method combines a symbolic approach, which is a table substitution method, with another paradigm that models real-life neurons (connectionist approach). This hybrid model is compact, nonlinear, and parallel. The neural network approach focuses on generating keys (weights) based on a feedforward neural network architecture that works as a mirror. The weights are used as an input for the substitution method. The hybrid model is verified and validated as a successful encryption method.
In this work, a mobile robot path-planning algorithm based on the evolutionary artificial potential field (EAPF) for non-static environments is presented. With the aim of accelerating the path-planning computation, the EAPF algorithm is implemented on novel parallel computing architectures. The EAPF algorithm is capable of deriving optimal potential field functions using evolutionary computation to generate accurate and efficient paths that drive a mobile robot from the start point to the goal point without colliding with obstacles in static and non-static environments. The algorithm allows parallel implementation to accelerate the computation and obtain better results in a reasonable runtime. A comparative performance analysis in terms of path length and computation time is provided. The experiments were specifically designed to show the effectiveness and efficiency of the EAPF-based mobile robot path-planning algorithm in a sequential implementation on CPU, a parallel implementation on CPU, and a parallel implementation on GPU.
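For reference, the artificial potential field underlying the EAPF is conventionally built from an attractive and a repulsive term; the classic formulation below (Khatib's potentials, in our notation) is the kind of function whose parameters the evolutionary step can tune:

\[
U(\mathbf{q}) \;=\; \tfrac{1}{2}\,k_a\,\lVert \mathbf{q}-\mathbf{q}_{goal}\rVert^{2}
\;+\;
\begin{cases}
\tfrac{1}{2}\,k_r\left(\dfrac{1}{\rho(\mathbf{q})}-\dfrac{1}{\rho_0}\right)^{2}, & \rho(\mathbf{q}) \le \rho_0\\[4pt]
0, & \rho(\mathbf{q}) > \rho_0
\end{cases}
\]

where \rho(\mathbf{q}) is the distance to the nearest obstacle and \rho_0 is the obstacle influence radius; the robot descends the negative gradient -\nabla U, and evolving parameters such as k_a, k_r, and \rho_0 yields the "optimal potential field functions" referred to above.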
Solving simultaneous non-linear equations is one of the most important tasks in the analysis of systems used in different domains of engineering, the social sciences, and the medical sciences. Although there are many conventional methods to solve these equations, they have high time, cost, and space complexity. In this work, a Genetic Algorithm-based technique is used to solve both single- and multi-objective optimization problems on standard benchmark problems. The soundness of the work is argued by comparing the results with other methods. The research also opens the door to the application of Genetic Algorithms for obtaining cost-effective solutions to complex mathematical equations.
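To illustrate how a system of non-linear equations becomes a GA fitness function, a common trick is to minimize the sum of squared residuals, so a root corresponds to fitness zero; below is a minimal Python sketch under our own assumptions, not the paper's exact setup:

import numpy as np

# Example system: x^2 + y^2 = 4 and x*y = 1 (a root has residual 0).
def residuals(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def fitness(v):
    return np.sum(residuals(v) ** 2)   # the GA minimizes this

def ga_solve(pop_size=100, generations=300, bounds=(-3.0, 3.0)):
    rng = np.random.default_rng(2)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        idx = rng.integers(0, len(parents), (pop_size, 2))
        w = rng.random((pop_size, 1))
        pop = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # blend crossover
        pop += rng.normal(0, 0.05, pop.shape)                # Gaussian mutation
        pop = np.clip(pop, lo, hi)
    best = min(pop, key=fitness)
    return best, fitness(best)

# best, err = ga_solve()  # a residual near 0 indicates a solution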
This work presents a biologically inspired method of gait generation. It refers to the periodic signals generated by a biological Central Pattern Generator (CPG). Coupled oscillators with correction functions are used to produce leg-joint trajectories, with human gait used as the reference pattern, and the features of the generated gait are compared with human walking. The example clearly illustrates the benefit offered by optimization with a genetic algorithm; the problem would be impossible to solve using a traditional approach.
This paper presents a review of quantum-inspired population-based metaheuristics. Quantum-inspired algorithms were born when there were no quantum computers; they demonstrated interesting characteristics and provided good results on classical computers. At present, when the first quantum computers are available, scientists are working to confirm quantum supremacy in different fields. In the almost 20 years since the first metaheuristic inspired by quantum phenomena was published, a large number of works have been proposed. This paper aims to look back and see which quantum-inspired metaheuristics could be translated for use on existing quantum computers based on the circuit-model programming paradigm. The reviewed metaheuristics were classified according to their main source of inspiration; only some representative works from each class were selected, because of the vast number of existing works in each one. The analysis was done for the circuit model, and metrics such as width, size, and length were used to determine the viability of implementation on a real quantum computer. Moreover, comparative results using metrics such as performance and running time for quantum-inspired metaheuristics were included.
An efficient algorithm for path generation in autonomous mobile robots using a visual recognition approach is presented. The proposal includes image filtering techniques employing an inspection camera to sense a cluttered environment. Template matching filters are used to detect several environment elements, such as obstacles, feasible terrain, the target location, and the mobile robot. The proposed algorithm includes the parallel evolutionary artificial potential field to perform path planning for autonomous navigation of the mobile robot. The problem to be solved is to safely take a mobile robot from the starting point to the target point along the path with the shortest distance that is also the safest route. To find a path that satisfies this condition, the proposed algorithm chooses the best candidate from a vast number of different paths calculated concurrently. For efficient autonomous navigation, the proposal employs a parallel computation approach for the evolutionary artificial potential field algorithm for path generation and optimization. Experimental results show accurate environment recognition in terms of quantitative metrics, and the proposed algorithm demonstrates efficiency in path generation and optimization.
Meta-heuristic algorithms have gained substantial popularity in recent decades and have been applied in a wide spectrum of fields. In this paper, a new and powerful physics-based algorithm named nuclear reaction optimization (NRO) is presented. NRO imitates the nuclear reaction process and consists of two phases, namely a nuclear fission (NFi) phase and a nuclear fusion (NFu) phase. The Gaussian walk and differential evolution operators between nucleus and neutron are employed for exploitation and appropriate exploration, respectively, in the NFi phase, while variants of the differential evolution operator are utilized for exploration in the NFu phase, which consists of the ionization and fusion stages. Additionally, variants of Levy flight are used for random searching to escape from local optima in each stage of the NFu phase. The exploration and exploitation abilities of NRO can be balanced through the combination of the two phases. Both constrained and unconstrained benchmark functions are employed for testing the performance of NRO. To compare NRO with state-of-the-art algorithms, twenty-three classic benchmark functions and twenty-nine modern benchmark functions are used. Moreover, three engineering design optimization problems are solved as constrained benchmark functions using NRO and the compared algorithms. The results illustrate that the proposed nuclear reaction optimization algorithm is a potential and powerful approach for global optimization.
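As background for the operators named above, here is a minimal sketch of a Gaussian walk step and a Levy-flight step as they are commonly implemented in metaheuristics (generic forms, with Mantegna's algorithm for the Levy step; these are not NRO's exact update rules):

import math
import numpy as np

rng = np.random.default_rng(3)

def gaussian_walk(x, best, t, t_max):
    """Local (exploitation) step: sample around x with a shrinking scale."""
    sigma = np.abs(best - x) * (1.0 - t / t_max)   # scale decays over time
    return rng.normal(x, sigma + 1e-12)

def levy_step(dim, beta=1.5):
    """Heavy-tailed (exploration) step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Occasional long jumps help escape local optima, e.g.:
# x_new = x + 0.01 * levy_step(x.size) * (x - best)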
The two-echelon vehicle routing problem (2E-VRP) is a challenging problem that involves both strategic and tactical planning decisions on both echelons. The satellite locations and the customer distribution affect the cost of different components on the second echelon, and the many possible satellite-to-customer assignments complicate the problem. In this study, we propose a graph-based fuzzy evolutionary algorithm for solving the 2E-VRP. The proposed method integrates a graph-based fuzzy assignment scheme into an iterative evolutionary learning process to minimize the total cost. To resolve the possible satellite-to-customer assignments, a graph-based fuzzy operator is used to take advantage of population evolution and avoid excessive fitness evaluations of unpromising moves across different satellites. Each offspring is produced via a graph-based fuzzy assignment procedure from an assignment graph built from parent individuals, and a fuzzy local search procedure is used to further improve the offspring. Experimental results on public test sets demonstrate the competitiveness of the proposed method.
Evolutionary algorithms (EAs) are population-based metaheuristics, originally inspired by aspects of natural evolution. Modern varieties incorporate a broad mixture of search mechanisms, and tend to blend inspiration from nature with pragmatic engineering concerns; however, all EAs essentially operate by maintaining a population of potential solutions and in some way artificially 'evolving' that population over time. Particularly well-known categories of EAs include genetic algorithms (GAs), Genetic Programming (GP), and Evolution Strategies (ES). EAs have proven very successful in practical applications, particularly those requiring solutions to combinatorial problems. EAs are highly flexible and can be configured to address any optimization task, without the requirements for reformulation and/or simplification that would be needed for other techniques. However, this flexibility goes hand in hand with a cost: the tailoring of an EA's configuration and parameters, so as to provide robust performance for a given class of tasks, is often a complex and time-consuming process. This tailoring process is one of the many ongoing research areas associated with EAs.
Bee colony algorithms are new swarm intelligence techniques inspired by the smart foraging behaviors of real honeybees. This paper examines the use of the Artificial Bee Colony (ABC) algorithm to train a multi-layer feed-forward neural network for demand forecasting. We use two data sets representing weekly demand data for cement and towels, supplied by the Sorthern General Company for Cement and the General Company of prepared clothes, respectively. The results showed the superiority of neural networks trained using ABC over neural networks trained using error back-propagation, owing to their ability to escape from local optima.
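For context, the core move in ABC perturbs one dimension of a food source (here, a neural network weight vector) toward or away from a random neighbor; below is a minimal Python sketch of the canonical employed-bee step, not this paper's specific training setup:

import numpy as np

rng = np.random.default_rng(4)

def employed_bee_step(foods, fitness):
    """One employed-bee pass over all food sources (weight vectors).

    foods: (n, d) array of candidate solutions; fitness: callable,
    lower is better. Greedy selection keeps improvements only.
    """
    n, d = foods.shape
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])   # random neighbor
        j = rng.integers(0, d)                            # random dimension
        phi = rng.uniform(-1, 1)
        candidate = foods[i].copy()
        candidate[j] += phi * (foods[i, j] - foods[k, j])  # v = x + phi*(x - x_k)
        if fitness(candidate) < fitness(foods[i]):
            foods[i] = candidate
    return foods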
To avoid problems such as premature convergence and falling into a local optimum, this paper proposes an improved real-coded genetic algorithm (RCGA-rdn) to improve the performance in solving numerical function optimization. These problems are mainly caused by the poor search ability of the algorithm and the loss of population diversity. Therefore, to improve the search ability, the algorithm integrates three specially designed operators: ranking group selection (RGS), direction-based crossover (DBX) and normal mutation (NM). In contrast to the traditional strategy framework, RCGA-rdn introduces a new step called the replacement operation, which periodically performs a local initialization operation on the population to increase the population diversity. In this paper, comparisons with several advanced algorithms were performed on 21 complex constrained optimization problems and 10 high-dimensional unconstrained optimization problems to verify the effectiveness of RCGA-rdn. Based on the results, to further verify the feasibility of the algorithm, it was applied to a series of practical engineering optimization problems. The experimental results show that the proposed operations can effectively improve the performance of the algorithm. Compared with the other algorithms, the improved algorithm (RCGA-rdn) has a better search ability, faster convergence speed and can maintain a certain population diversity.
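As an illustration of the direction-based crossover (DBX) idea, a common formulation moves the offspring from the worse parent toward and past the better one, so that fitness information guides the search direction; below is a minimal sketch under our own assumptions (the paper's exact operator may differ):

import numpy as np

rng = np.random.default_rng(5)

def direction_based_crossover(p1, f1, p2, f2):
    """Generate one offspring using fitness to orient the search.

    p1, p2: real-coded parents; f1, f2: their fitness values (lower is better).
    The offspring extrapolates from the better parent away from the worse one.
    """
    if f2 < f1:                        # ensure p1 is the better parent
        p1, f1, p2, f2 = p2, f2, p1, f1
    r = rng.random(p1.shape)           # per-gene step sizes in [0, 1)
    return p1 + r * (p1 - p2)          # step past the better parent

# offspring = direction_based_crossover(a, fit(a), b, fit(b))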
Population-based meta-heuristics are high-level methods intended to provide sufficiently good solutions to problems with incomplete information from among a massive volume of candidate solutions. However, they do not guarantee attaining the global optimum in a reasonable time. To improve the convergence time and accuracy of population-based meta-heuristics, this paper presents a novel algorithm called the Raccoon Optimization Algorithm (ROA). The ROA is inspired by the rummaging behaviours of real raccoons foraging for food. Raccoons are successful animals because of their extraordinarily sensitive and dexterous paws and their ability to find food and remember solutions for up to three years. These capabilities make raccoons expert problem solvers and allow them to purposefully seek optimal solutions. These behaviours are exploited in the ROA to search the solution spaces of nonlinear continuous problems and find the global optimum with higher accuracy and in less time. To evaluate the ROA's ability to address complicated problems, it has been tested on several benchmark functions and compared with nine other well-known optimization algorithms. These experiments show that the ROA achieves higher accuracy with lower convergence time.