Article

An overview of evolutionary algorithms: Practical issues and common pitfalls

Authors:
Darrell Whitley

Abstract

An overview of evolutionary algorithms is presented covering genetic algorithms, evolution strategies, genetic programming and evolutionary programming. The schema theorem is reviewed and critiqued. Gray codes, bit representations and real-valued representations are discussed for parameter optimization problems. Parallel Island models are also reviewed, and the evaluation of evolutionary algorithms is discussed.
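As a quick illustration of the Gray codes the abstract refers to, the sketch below (in Python, an arbitrary choice) shows the standard binary-reflected conversion in both directions; the point is that adjacent integers always differ in a single bit, avoiding the "Hamming cliffs" of plain binary encodings.

```python
def binary_to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the mapping by XOR-folding successively shifted copies."""
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# 7 (0111) and 8 (1000) differ in four bits, a "Hamming cliff";
# their Gray codes 0100 and 1100 differ in exactly one bit.
assert binary_to_gray(7) == 0b0100 and binary_to_gray(8) == 0b1100
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))
```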


... They also need the parameters for population size, mutation rate, and crossover rate to be carefully tuned. The size and complexity of the task can significantly lengthen the computing time [90]. The following are some salient features of GA: ...
... Unlike the more recent CS and HS, GA and ACO have been extensively used for job shop scheduling and are backed by substantial research outlining strategies to improve their performance [80,190]. GA and ACO are known to be computationally expensive because of their complicated processes, such as crossover and mutation [90,190]. ...
Article
Full-text available
This article aims to review the industrial applications of AI-based intelligent system algorithms in the manufacturing sector to find the latest methods used for sustainability and optimisation. In contrast to previous review articles that broadly summarised existing methods, this paper specifically emphasises the most recent techniques, providing a systematic and structured evaluation of their practical applications within the sector. The primary objective of this study is to review the applications of intelligent system algorithms, including metaheuristics, evolutionary algorithms, and learning-based methods within the manufacturing sector, particularly through the lens of optimisation of workflow in production lines, specifically Job Shop Scheduling Problems (JSSPs). It critically evaluates various algorithms for solving JSSPs, with a particular focus on Flexible Job Shop Scheduling Problems (FJSPs), a more advanced form of JSSPs. The manufacturing process consists of several intricate operations that must be meticulously planned and scheduled to be executed effectively. In this regard, production scheduling aims to find the best possible schedule to maximise one or more performance parameters. JSSPs are an integral part of production scheduling in both traditional and smart manufacturing; this research, however, addresses the concept in general terms, as it pertains to industrial system scheduling and the aim of maximising operational efficiency by reducing production time and costs. A common shortcoming among research studies on optimisation is the lack of consistent and more effective solution algorithms that minimise time and energy consumption and thus accelerate optimisation with minimal resources.
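To make concrete the parameters the first snippet above says must be carefully tuned (population size, mutation rate, crossover rate), here is a minimal generational GA sketch; the OneMax objective, operator choices, and constants are illustrative assumptions, not taken from any of the cited papers.

```python
import random

# Minimal generational GA; the three leading constants are the parameters the
# snippet says must be carefully tuned. OneMax is a stand-in objective.
POP_SIZE, CX_RATE, MUT_RATE, N_BITS, GENS = 50, 0.9, 1 / 32, 32, 100

def fitness(bits):
    return sum(bits)                      # OneMax: count the 1-bits

def tournament(pop, k=2):
    return max(random.sample(pop, k), key=fitness)   # binary tournament

def crossover(a, b):
    if random.random() < CX_RATE:         # one-point crossover
        p = random.randrange(1, N_BITS)
        return a[:p] + b[p:]
    return a[:]

def mutate(bits):
    return [b ^ (random.random() < MUT_RATE) for b in bits]  # per-bit flip

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
print("best fitness:", max(map(fitness, pop)))
```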
... Due to the varying simulation backgrounds of these algorithms, different metaheuristic algorithms are suited to different problems [12]. According to the 'No Free Lunch' theorem [13][14][15], there is no single metaheuristic algorithm that is universally applicable to all optimization problems. Therefore, to better solve the proposed optimization problems, it is necessary to continuously develop new high-performance optimization algorithms to achieve higher solution accuracy. ...
... According to Equation (14), b is a random parameter with two possible values that determines whether the prey will attack or flee. If b=1, the prey chooses to flee; if b=-1, the prey chooses to attack. ...
Preprint
Full-text available
This paper introduces a novel optimizer based on animal survival experiments called Savannah Bengal Tiger Optimization (SBTO). Inspired by the survival behavior of Bengal tigers on the African savannah, SBTO aims to address continuous complex constrained optimization problems. SBTO simulates the group hunting behavior of Bengal tigers and integrates the support of Kalman filters, employing three strategies: prey search, stealth approach, and hunting. The prey search strategy reflects SBTO's exploration capabilities, while the stealth approach and hunting strategies primarily demonstrate its exploitation capabilities. Compared to other metaheuristic algorithms, SBTO has an advantage in population distribution, maintaining good exploration performance while performing exploitation, which helps the algorithm escape local optima in a timely manner. Finally, SBTO was experimentally evaluated against 10 popular algorithms and recently proposed algorithms on CEC2017, CEC2020, CEC2022 test functions, and 9 engineering problems. The results indicate that SBTO achieved the best fitness ratio of 27/30, 8/10, and 8/12 in the test functions, with Wilcoxon rank-sum tests showing significance proportions of 260/300, 89/100, and 104/120, respectively. In the 9 engineering problems, SBTO obtained the best average and optimal fitness in 7 problems, demonstrating exceptional performance in constrained optimization problems and complex multi-modal functions. The source code for SBTO is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/172500-sbto.
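The flee/attack switch quoted above for Equation (14) of SBTO reduces to drawing b uniformly from {-1, 1}; a tiny sketch, with everything else about the update omitted:

```python
import random

# b is drawn uniformly from {-1, 1}; the rest of the SBTO update is omitted.
def prey_response():
    b = random.choice([1, -1])
    return "flee" if b == 1 else "attack"   # b = 1: flee; b = -1: attack

print(prey_response())
```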
... Evolutionary algorithms are suitable for design optimisation and have been widely applied to explore creative designs in the schematic design stage [16]. As the most recognised form of evolutionary algorithms, genetic algorithms, initially proposed by Holland [25], are renowned for their robustness in navigating vast search spaces and have long been viewed as effective tools in search, design and optimisation [26,27]. Owing to their stochastic nature, genetic algorithms facilitate the exploration of the design space and the generation of diverse design solutions with less dependence on extensive specialised domain knowledge. ...
... As the most recognised form of evolutionary algorithm, genetic algorithms are search algorithms that have long been viewed as effective tools in search, design and optimisation [26,27]. These algorithms mimic the process of natural selection, sustaining and evolving a pool of candidate solutions, often referred to as the "population," to conduct a comprehensive, multidirectional search. ...
Preprint
Full-text available
The determination of space layout is one of the primary activities in the schematic design stage of an architectural project. The initial layout planning defines the shape, dimension, and circulation pattern of internal spaces; which can also affect performance and cost of the construction. When carried out manually, space layout planning can be complicated, repetitive and time consuming. In this work, a generative design framework for the automatic generation of spatial architectural layout has been developed. The proposed approach integrates a novel physics-inspired parametric model for space layout planning and an evolutionary optimisation metaheuristic. Results revealed that such a generative design framework can generate a wide variety of design suggestions at the schematic design stage, applicable to complex design problems.
... These methods included the generalized reduced gradient method [16], the utilization of Lagrange multipliers [17], a combination of Lagrange multipliers with modular and decomposition approaches [11], the optimal gradient method [18], and the deployment of genetic algorithms in [19,20]. The genetic algorithm optimization approach is meticulously engineered to effectively identify global optimal solutions, even when the system exhibits numerous local maxima [21,22]. This stochastic method is readily accessible in the Professional version of EES, where it is harnessed for optimization tasks. ...
... This stochastic method is readily accessible in the Professional version of EES and is employed to execute the thermoeconomic optimization tasks within this study. What distinguishes this method from conventional procedures, as highlighted in references [21,22], is its reliance solely on the objective function, rendering derivatives or auxiliary calculus unnecessary. This streamlined approach boasts straightforward programming and consistently delivers favorable outcomes, even in scenarios involving complex, multimodal functions. ...
Preprint
Full-text available
In a realm where finite natural fossil fuel reservoirs coexist with escalating energy requisites and critical ecological contamination thresholds, matters pertaining to the configuration of thermal systems, encompassing energy efficacy, financial assessment, project intricacy, ecological consciousness, and fine-tuned optimization, have progressively piqued the scientific community's curiosity. Hence, thermoeconomic optimization emerges as a promising avenue for enhancing the efficiency of thermal system designs. Nevertheless, the intricacies of thermoeconomic optimization in thermal system design typically involve a multitude of components, interconnected processes, and flows, which collectively give rise to a complex system of nonlinear equations stemming from both thermodynamic and economic modeling. Moreover, the inherent objective functions in these optimization challenges are analytically daunting, characterized by traits like discontinuity, multimodality, and non-differentiability, further compounded by a multitude of decision variables. In this context, metaheuristic methods present themselves as promising and appealing tools for optimizing such intricate systems. In this study, we employ two metaheuristic methods, namely the Genetic Algorithm (GA) and the Gray Wolf Optimizer (GWO), to optimize the regenerative gas turbine cogeneration system, recognized in the literature as the CGAM problem. The thermoeconomic optimization challenge is tackled and resolved through the computational integration of a commercial software package (EES) and a mathematical platform (Matlab). Within this framework, the thermodynamic and economic modeling, as well as the thermoeconomic optimization components, are seamlessly integrated into the Engineering Equation Solver (EES). EES, in turn, calculates the thermodynamic properties for all streams within the cogeneration system while concurrently solving mass and energy balances as necessitated by the evaluation of the objective function. It is worth noting that the GA operates as an optimization tool within EES, whereas the GWO is implemented in Matlab and effectively integrated with EES. This study reveals that, despite GWO's relatively longer computational time attributable to the integration between Matlab and EES, it stands out as notably efficient in addressing the given problem, primarily owing to its reduced demand for objective function evaluations during the optimization process. Moreover, both the decision variables and the objective function tend to converge towards values closely aligned with those found in the reference literature.
... Evolutionary computation solves problems by mimicking the steps of evolution in nature [6][7]. Mimicking a natural process gives evolutionary computation certain abilities [8][9]. ...
... When a swarm system is established, it is hard to realize its algorithms in real life [14]. ...
... However, most of these can be traced back to slightly modified forms of the general sequence from Fig. 4. Furthermore, two main branches of EAs can be distinguished. On the one hand, genetic algorithms (GA) are specially designed for optimization within discrete solution spaces (Mitchell, 1997;Spears et al., 1993;Whitley, 2001). On the other hand, evolution strategies (ES) usually perform optimizations on continuous solution spaces (Beyer & Schwefel, 2002). ...
... Second, the plus selection [e.g. (μ + λ)-GA] whereby the next generation consists of the best individuals of the totality of the prior generation (here, μ denotes the number of individuals within a population) and the generated offspring (Bäck & Schwefel, 1993;Whitley, 2001). ...
Article
Full-text available
Offering Product-Service Systems (PSS) has become an established strategy for companies to increase the customer value they provide and ensure their competitiveness. Designing PSS business models, however, remains a major challenge. One reason for this is the fact that PSS business models are characterized by a long-term nature. Decisions made in the development phase must take into account possible scenarios in the operational phase. Risks must already be anticipated in this phase and mitigated with appropriate measures. Another reason for the design phase being a major challenge is the size of the solution space for a possible business model. Developers are faced with a multitude of possible business models and have the challenge of selecting the best one. In this article, a simheuristic optimization approach is developed to test and evaluate PSS business models in the design phase in order to select the best business model configuration beforehand. For optimization, a proprietary evolutionary algorithm is developed and tested. The results validate the suitability of the approach for the design phase and the quality of the algorithm for achieving good results. This could even be transferred to already established PSS.
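The (μ + λ) plus selection quoted in the citing passage above can be sketched in a few lines; the toy objective and Gaussian mutation below are assumptions for illustration only.

```python
import random

def plus_selection(parents, offspring, mu, fitness):
    """(mu + lambda) selection: keep the best mu of the combined pool."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]

# toy usage: maximize the sum of a real-valued vector
fitness, mu, lam = sum, 4, 8
parents = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(mu)]
offspring = [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
             for _ in range(lam)]
parents = plus_selection(parents, offspring, mu, fitness)
```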
... Evolutionary algorithms (EAs) [71] are optimization algorithms that simulate biological evolution to solve complex optimization problems. On the basis of Darwin's theory of evolution, EAs progressively optimize solutions by simulating natural selection, inheritance, and mutation processes. ...
Article
Full-text available
The constraints in traditional music style transfer algorithms are difficult to control, thereby making it challenging to balance the diversity and quality of the generated music. This paper proposes a novel weak selection-based music generation algorithm that aims to enhance both the quality and the diversity of conditionally generated traditional diffusion model audio, and the proposed algorithm is applied to generate natural sleep music. In the inference generation process of natural sleep music, the evolutionary state is determined by evaluating the evolutionary factors in each iteration, while limiting the potential range of evolutionary rates of weak selection-based traits to increase the diversity of sleep music. Subjective and objective evaluation results reveal that the natural sleep music generated by the proposed algorithm has a more significant hypnotic effect than general sleep music and conforms to the rules of human hypnosis physiological characteristics.
... Genetic algorithms are metaheuristics belonging to the family of evolutionary algorithms [8,9]. Based on naturally occurring evolution and natural selection, they are commonly used for optimization and search problems where the search space is extensive, exact methods are unavailable, or the time constraints are too strict. ...
... W₀x₀ + W₁x₁ + ⋯ + Wₙxₙ + x_A = |L| (8), with xᵢ ≤ |L| for all i ∈ {0, …, n}. ...
Article
Full-text available
In this paper, we proposed a new method for image-based grammatical inference of deterministic, context-free L-systems (D0L systems) from a single sequence. This approach is characterized by first parsing an input image into a sequence of symbols and then, using a genetic algorithm, attempting to infer a grammar that can generate this sequence. This technique has been tested using our test suite and compared to similar algorithms, showing promising results, including solving the problem for systems with more rules than in existing approaches. The tests show that it performs better than similar heuristic methods and can handle the same cases as arithmetic algorithms.
... Evolutionary computation (EC), including evolutionary algorithms (EAs) [18], [19] and swarm intelligence (SI) [20], [21], is a flourishing research realm for complex optimization. On the one hand, EC algorithms are a kind of derivative free optimization method, which is powerful and robust for complex optimization problems with characteristics like nonconvex, large-scale, black-box, expensive, etc. [22], [23], [24]. ...
... The third limitation is that EC is difficult to evolve without accurate fitness evaluation. EC conducts iterative evolution based on the principle of survival of the fittest, in which the fitness evaluation is specifically known [18], [19], [20], [21]. However, in distributed optimization, the heterogeneity of devices and worker behaviors may lead to uncertainties and reduce the quality of data [32]. ...
Preprint
Crowdsourcing is an emerging computing paradigm that takes advantage of the intelligence of a crowd to solve complex problems effectively. Besides collecting and processing data, there is also a great demand for the crowd to conduct optimization. Inspired by this, this paper introduces crowdsourcing into evolutionary computation (EC) to propose a crowdsourcing-based evolutionary computation (CEC) paradigm for distributed optimization. EC is helpful for optimization tasks of crowdsourcing and, in turn, crowdsourcing can break the spatial limitation of EC for large-scale distributed optimization. Therefore, this paper first introduces the paradigm of crowdsourcing-based distributed optimization. Then, CEC is elaborated. CEC performs optimization based on a server and a group of workers, in which the server dispatches a large task to workers. Workers search for promising solutions through EC optimizers and cooperate with connected neighbors. To eliminate uncertainties brought by the heterogeneity of worker behaviors and devices, the server adopts the competitive ranking and uncertainty detection strategy to guide the cooperation of workers. To illustrate the satisfactory performance of CEC, a crowdsourcing-based swarm optimizer is implemented as an example for extensive experiments. Comparison results on benchmark functions and a distributed clustering optimization problem demonstrate the potential applications of CEC.
... By allowing LLMs to perform the evolutionary search process, we anticipate gradually optimizing pruning strategies through iterations, thereby effectively enhancing the accuracy of existing pruning methods. This approach not only fully utilizes the powerful reasoning capabilities of LLMs (Huang & Chang, 2022) but also leverages the advantage of evolutionary algorithms in finding solutions within complex search spaces (Whitley, 2001), providing a novel perspective for addressing the challenging problem of model pruning. ...
Preprint
Despite exceptional capabilities, Large Language Models (LLMs) still face deployment challenges due to their enormous size. Post-training structured pruning is a promising solution that prunes LLMs without the need for retraining, reducing computational overhead, and it is hardware-deployment friendly. However, the training-free nature of post-training structured pruning leads to significant performance degradation. We argue that the key to mitigating this issue lies in accurately determining the pruning rate for each layer. Meanwhile, we find that LLMs may have prior knowledge about their own redundancy. Based on this insight, we introduce Self-Pruner, an end-to-end automatic self-pruning framework for LLMs, which efficiently searches layer-wise pruning rates. Specifically, Self-Pruner leverages LLMs to autonomously execute the entire evolutionary search process to search for pruning rate configurations. In this process, LLMs are used to generate populations, select parent solutions from the current population, and perform crossover and mutation operations to produce offspring solutions. In this way, LLMs automatically generate and evaluate a large number of candidate solutions, effectively converging to find the pruning rate configurations with minimal human intervention. Extensive experiments demonstrate Self-Pruner's better performance compared to existing state-of-the-art methods. Notably, Self-Pruner prunes LLaMA-2-70B to 49B level with only a 0.80% drop in accuracy across seven commonsense reasoning tasks, achieving a 1.39× speedup on an NVIDIA A100 80GB GPU. Further pruning to 35B level resulted in only a 3.80% decrease in accuracy while obtaining a 1.70× speedup.
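A hedged sketch of the LLM-driven evolutionary loop described above: in the paper the LLM itself generates, selects, and varies pruning-rate vectors, so the random perturbation below is only a runnable stand-in for that LLM call, and the toy objective is an assumption.

```python
import random

def llm_select_and_vary(population, scores, n_offspring=8):
    """Stand-in for the LLM call: rank, keep the top half, perturb.
    In the paper this whole variation step is performed by the LLM itself."""
    ranked = [p for _, p in sorted(zip(scores, population), reverse=True)]
    parents = ranked[: len(ranked) // 2]
    return [[min(1.0, max(0.0, r + random.gauss(0, 0.05)))
             for r in random.choice(parents)]
            for _ in range(n_offspring)]

def evolve_pruning_rates(evaluate, pop, generations=10):
    for _ in range(generations):
        scores = [evaluate(rates) for rates in pop]   # e.g. accuracy after pruning
        pop = llm_select_and_vary(pop, scores)
    return max(pop, key=evaluate)

# toy stand-in objective: prefer ~50% average pruning across 12 layers
evaluate = lambda rates: -abs(sum(rates) / len(rates) - 0.5)
pop = [[random.uniform(0.2, 0.8) for _ in range(12)] for _ in range(8)]
best = evolve_pruning_rates(evaluate, pop)
```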
... These algorithms tackle diverse challenges by incorporating prior knowledge into an evolutionary search process, effectively exploring a solution space filled with potential solutions. Despite these benefits, however, EAs are unable to achieve the best solution for many problems; hence, several academics have combined these techniques with existing technologies to enhance their solutions [6]. ...
Article
Full-text available
In this paper, an optimization algorithm called supercell thunderstorm algorithm (STA) is proposed. STA draws inspiration from the strategies employed by storms, such as spiral motion, tornado formation, and the jet stream. It is a computational algorithm specifically designed to simulate and model the behavior of supercell thunderstorms. These storms are known for their rotating updrafts, strong wind shear, and potential for generating tornadoes. The optimization procedures of the STA algorithm are based on three distinct approaches: exploring a divergent search space using spiral motion, exploiting a convergent search space through tornado formation, and navigating through the search space with the aid of the jet stream. To evaluate the effectiveness of the proposed STA algorithm in achieving optimal solutions for various optimization problems, a series of test sequences were conducted. Initially, the algorithm was tested on a set of 23 well-established functions. Subsequently, the algorithm’s performance was assessed on more complex problems, including ten CEC2019 test functions, in the second experimental sequence. Finally, the algorithm was applied to five real-world engineering problems to validate its effectiveness. The experimental results of the STA algorithm were compared to those of contemporary metaheuristic methods. The analysis clearly demonstrates that the developed STA algorithm outperforms other methods in terms of performance.
... The gSDR algorithm was inspired by evolutionary processes and the broader family of genetic optimizations 23,68,69, because of the stochastic nature of its search through parameter space. Currently, gSDR with biophysically plausible neuronal networks might not surpass other learning algorithms (e.g., ANNs and DNNs trained with gradient descent) in task performance, but it offers a more natural learning framework, mimicking the innate mechanisms observed in the brain. ...
Preprint
Full-text available
Neurophysiology studies propose that predictive coding is implemented via alpha/beta (8-30 Hz) rhythms preparing specific pathways to process predicted inputs. This leads to a state of relative inhibition, reducing feedforward gamma (40-90 Hz) rhythms and spiking for predictable inputs. This is called the predictive routing model. It is unclear which circuit mechanisms implement this push-pull interaction between alpha/beta and gamma rhythms. To explore how predictive routing is implemented, we developed a self-supervised learning algorithm we call the generalized Stochastic Delta Rule (gSDR). Development of this algorithm was necessary because manual tuning of parameters (frequently used in computational modeling) is inefficient for searching through a non-linear parameter space that leads to the emergence of neuronal rhythms. We used gSDR to train biophysical neural circuits and validated the algorithm on simple objectives. Then we applied gSDR to model observed neurophysiology. We asked the model to reproduce a shift from baseline oscillatory dynamics (∼<20 Hz) to stimulus-induced gamma (∼40-90 Hz) dynamics recorded in the macaque visual cortex. This gamma oscillation during stimulation emerged by self-modulation of synaptic weights via gSDR. We further showed that the gamma-beta push-pull interactions implied by predictive routing could emerge via stochastic modulation of the local circuitry as well as top-down modulatory inputs to a network. To summarize, gSDR succeeded in training biophysical neural circuits to satisfy a series of neuronal objectives. This revealed the inhibitory neuron mechanisms underlying the gamma-beta push-pull dynamics that are observed during predictive processing tasks in systems and cognitive neuroscience. Significance Statement: This study contributes to the advancement of self-supervised learning for modeling the behavior of complex neural circuits, and specifically biophysical modeling. We performed simulations in order to examine basic mechanisms in the predictive routing framework. Since the generalized stochastic delta rule (gSDR) is in the family of evolutionary algorithms and does not rely on specific model-based assumptions, it could improve computational neuroscience studies by adding autonomous approaches to neural network research with an emphasis on neurobiology. In addition, it allows bio-plausibility to be expanded by defining multiple objectives, making it capable of extending its neurobiological constraints.
... The main difference between the subpopulation-based genetic algorithm and classical genetic algorithms (Mitchell, 1998; Whitley, 2001) is that the former relies on mutation and selection alone. Both crossover and mutation are used to produce offspring in classical genetic algorithms: crossover is applied with a high probability and mutation is applied with a small probability (Larrañaga et al., 1999; Kroll, 2013). Crossover dominates the search progress of classical genetic algorithms, which could be due to the fact that mutation tends toward a random search. ...
Preprint
The performance of different mutation operators is usually evaluated in conjunction with specific parameter settings of genetic algorithms and target problems. Most studies focus on the classical genetic algorithm with different parameters or on solving unconstrained combinatorial optimization problems such as the traveling salesman problems. In this paper, a subpopulation-based genetic algorithm that uses only mutation and selection is developed to solve multi-robot task allocation problems. The target problems are constrained combinatorial optimization problems, and are more complex if cooperative tasks are involved as these introduce additional spatial and temporal constraints. The proposed genetic algorithm can obtain better solutions than classical genetic algorithms with tournament selection and partially mapped crossover. The performance of different mutation operators in solving problems without/with cooperative tasks is evaluated. The results imply that inversion mutation performs better than others when solving problems without cooperative tasks, and the swap-inversion combination performs better than others when solving problems with cooperative tasks.
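The inversion and swap mutations compared in this study are standard permutation operators; a brief sketch for a permutation-encoded task sequence (index choices illustrative):

```python
import random

def swap_mutation(perm):
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]               # exchange two task positions
    return p

def inversion_mutation(perm):
    p = perm[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    p[i:j + 1] = reversed(p[i:j + 1])     # reverse the segment between them
    return p

tasks = list(range(10))
print(swap_mutation(tasks), inversion_mutation(tasks))
```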
... To create training data D₁ for the substitute model, we propose the use of an evolution strategy (ES) to efficiently query for labels. An ES is a method used to solve optimization problems by simulating the process of natural evolution [22]. This algorithm operates on a population of candidate solutions, represented as P = {p₁, p₂, …}. ...
Preprint
Full-text available
This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim's model as an oracle to label a vast number of samples within high-confidence areas. This approach not only requires an extensive number of queries but also results in a less accurate and less transferable model. In contrast, our method innovates by focusing on sampling low-confidence areas (along the decision boundaries) and employing an evolutionary algorithm to optimize the sampling process. These novel contributions allow for a dramatic reduction in the number of queries needed by the attacker by a factor of 10x to 600x while simultaneously improving the accuracy of the stolen model. Moreover, our approach improves boundary alignment, resulting in better transferability of adversarial examples from the stolen model to the victim's model (increasing the attack success rate from 60% to 82% on average). Finally, we accomplish all of this with a strict black-box assumption on the victim, with no knowledge of the target's architecture or dataset. We demonstrate our attack on three datasets with increasingly larger resolutions and compare our performance to four state-of-the-art model extraction attacks.
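A minimal sketch of the evolution strategy described in the snippet above: a population of candidate solutions is repeatedly mutated and truncated by fitness. The sphere objective, comma selection, and fixed step size are illustrative assumptions.

```python
import random

def es_minimize(fitness, dim=5, mu=5, lam=20, sigma=0.3, generations=50):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[x + random.gauss(0, sigma) for x in random.choice(pop)]
                     for _ in range(lam)]
        pop = sorted(offspring, key=fitness)[:mu]   # (mu, lambda) truncation
    return min(pop, key=fitness)

best = es_minimize(lambda v: sum(x * x for x in v))  # sphere function
```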
... Population diversity is generally known to be an important condition for effective search [3,21]. Research shows that homogeneity in a population reduces selection pressure, leading to stagnation and premature convergence [22]. Figure 1 is a visual representation of the stagnation risk associated with constraint-inconsistent genetic operators. ...
Preprint
Full-text available
This work presents a novel lattice-based methodology for incorporating multidimensional constraints into continuous decision variables within a genetic algorithm (GA) framework. The proposed approach consolidates established transcription techniques for crossover of continuous decision variables, aiming to leverage domain knowledge and guide the search process towards feasible regions of the design space. This work offers a robust and general purpose lattice-based GA that is applicable to a broad range of optimization problems. Monte Carlo analysis demonstrates that lattice-based methods find solutions two orders of magnitude closer to optima in fewer generations. The effectiveness of the lattice-based approach is showcased through two illustrative multi-objective design problems: (1) optimal telescope placement for astrophotography and (2) optimal design of a satellite constellation for maximizing ground station access. The optimal telescope placement example shows that lattice-based methods converge to the Pareto front in 15% fewer generations than traditional methods. The orbit design example shows that lattice-based methods discover an order of magnitude more Pareto-optimal solutions than traditional methods in a highly constrained design space. Overall, the results show that the lattice-based method exhibits enhanced exploration capabilities, traversing the solution space more comprehensively and achieving faster convergence compared to conventional GAs.
... However, this convergence time increases with the system complexity. We use two evolutionary algorithms within the ML techniques in this chapter, which are both standard methods in the ML community [109][110][111]. The first evolutionary algorithm used is a genetic algorithm (GA) [112,113] that is implemented standalone, and the second is a differential evolution (DE) [114,115] algorithm, which is incorporated into the process of the predictive model-based machine learning algorithm [103], discussed more in Section 4.2.5. ...
Thesis
Full-text available
... Once a solution that is better than others arises, it tends to prevail over all others, reducing the chances of further improvement [58]. To ensure that the solution space is adequately searched -especially in the initial optimization phases -it is necessary to maintain a diverse population of solutions [59]. Larger populations lead to better outcomes due to the larger pool of different schemas available [60]. ...
Preprint
Full-text available
This paper presents a genetic algorithm designed to predict RNA secondary structures, which utilizes selection criteria based on free energy (fitness) and topological similarity. This approach represents structural information using a simple number, facilitating comparisons between foldings. The simplified graph representation identifies similarities between structures that have the same type of branches. The results demonstrate that the algorithm identifies the final secondary structure with the same level of precision as the commonly used dynamic programming, but with the advantage of producing more optimal structures with different topologies. This approach maintains high population diversity and allows for the exploration of many suboptimal structures in parallel, avoiding the possibility of getting stuck in a local minimum. This permits the investigation of not only the structure with the minimum free energy, but also of other low-energy structures with different topologies that are closer to the natural fold.
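One common way to monitor the population diversity the snippet above calls essential is the mean pairwise Hamming distance over a bitstring population; the metric choice here is an assumption, and entropy-based measures are equally common.

```python
from itertools import combinations

def mean_pairwise_hamming(population):
    pairs = list(combinations(population, 2))
    return sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)

pop = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]]
print(mean_pairwise_hamming(pop))   # drifts toward 0 as the population converges
```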
... Classical encoding is done by mapping bit strings onto real numbers in a given interval, but modern versions of genetic algorithms can use the real-valued variables themselves. The encoding scheme is usually problem-specific or a matter of preference, since the superiority of one encoding scheme over another has not yet been proven (Whitley, 2001). Once the encoding scheme has been decided, natural selection, crossover, and mutation are carried out by the genetic algorithm operators. ...
Thesis
Full-text available
Genetic algorithms are search and optimization methods that mimic the principles of natural selection found in nature. The solutions of some econometric methods and problems involve extreme computational difficulty when the functions to be optimized are inherently nonlinear or combinatorial. Estimating high-order autocorrelation coefficients and weighted least squares parameters, reducing measurement error and omitted-variable bias in a model, and variable selection based on examining all possible submodels in linear regression and time series models can each be viewed as an optimization problem and solved using genetic algorithms. This thesis presents various genetic algorithm solutions and applications for such econometric methods and problems.
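The classical encoding described in the snippet above (bit strings mapped to reals in a given interval) can be sketched as follows; the interval and gene length are illustrative.

```python
def decode(bits, lo, hi):
    """Map a bit string to a real value in [lo, hi]."""
    as_int = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * as_int / (2 ** len(bits) - 1)

print(decode([1, 0, 1, 1, 0, 1, 0, 1], -5.0, 5.0))   # one 8-bit gene -> one real
```

Modern real-coded GAs skip this mapping and operate on the floats directly, which is the alternative the snippet mentions.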
... Alternatively, the application of machine learning offers an approach to exploring the ECM solution space that can be more transparent and reproducible. [31][32][33][34][35][36][37][38][39][40] Evolutionary algorithms have been used since 2004 to explore ECM structure and have shown great promise in ECM-based EIS analysis. 31,39,40 By combining genetic algorithms (GA) 41 and gene expression programming (GEP), 42 evolutionary algorithms can automatically optimize both the circuit structure and the fitting parameters of circuit components by sequentially searching through the solution space. ...
Article
Full-text available
Electrochemical impedance spectroscopy (EIS) is a powerful tool for electrochemical analysis; however, its data can be challenging to interpret. Here, we introduce a new open-source tool named AutoEIS that assists EIS analysis by automatically proposing statistically plausible equivalent circuit models (ECMs). AutoEIS does this without requiring an exhaustive mechanistic understanding of the electrochemical systems. We demonstrate the generalizability of AutoEIS by using it to analyze EIS datasets from three distinct electrochemical systems, including thin-film oxygen evolution reaction electrocatalysis, corrosion of self-healing multi-principal components alloys, and a carbon dioxide reduction electrolyzer device. In each case, AutoEIS identified competitive or in some cases superior ECMs to those recommended by experts and provided statistical indicators of the preferred solution. The results demonstrated AutoEIS’s capability to facilitate EIS analysis without expert labels while diminishing user bias in a high-throughput manner. AutoEIS provides a generalized automated approach to facilitate EIS analysis spanning a broad suite of electrochemical applications with minimal prior knowledge of the system required. This tool holds great potential in improving the efficiency, accuracy, and ease of EIS analysis and thus creates an avenue to the widespread use of EIS in accelerating the development of new electrochemical materials and devices.
... Moreover, most of these approaches adopted local search strategies to search for an optimized order of test cases for regression testing; meanwhile, these strategies mostly terminate at local optima [10], [11]. Consequently, an evolutionary optimization technique based on the genetic algorithm (GA) (Mitchell [12]; Whitley [13]) has been reported to produce astonishingly better results when applied to prioritizing test cases. In this paper, we propose an evolutionary cost-cognizant regression TCP approach for OOP based on the use of the previous test case execution record and a GA. ...
Article
Full-text available
Test case prioritization (TCP) is a software testing technique that finds an ideal ordering of test cases for regression testing, so that testers can obtain the maximum benefit from their test suite even if the testing process is stopped at some arbitrary point. The recent trend of software development uses the OO paradigm. This paper proposes a cost-cognizant TCP approach for object-oriented software that uses path-based integration testing. Path-based integration testing identifies the possible execution paths and extracts these paths from the Java System Dependence Graph (JSDG) model of the source code using a forward slicing technique. Afterward, an evolutionary algorithm (EA) is employed to prioritize test cases based on the severity detection per unit cost for each of the dependent faults. The proposed technique, known as Evolutionary Cost-Cognizant Regression Test Case Prioritization (ECRTP), is implemented as a regression testing approach for experimentation.
... The genetic algorithm (GA) is a variable selection technique inspired by the theory of evolution. The method mathematically reproduces natural selection: combinations of independent variables are subjected to a certain number of generations (Andersen and Bro, 2010) through a series of steps (Leardi, 2007; Whitley, 2001). In the first step of the algorithm, a randomized population of a predefined number of individuals, called chromosomes, is generated. ...
Article
Full-text available
The soybean grain yield is affected by several factors; among them, the nutritional deficiency caused by low levels of potassium (K⁺) is one of the main causes of reduced grain yield both in Brazil and worldwide. Traditional methods of nutrient determination involve leaf collection and laboratory procedures with toxic reagents, which is a destructive, time-consuming, expensive, and environmentally unfriendly approach. In this context, the use of hyperspectral data and machine learning regression models can be a powerful tool in the nutritional diagnosis of plants. However, the comparison among different machine learning algorithms for K⁺ estimation in soybean leaves from hyperspectral reflectance data is yet to be reported. From this, the goal of this research was to obtain K⁺ prediction models in soybean leaves at different stages of development using hyperspectral data and machine learning regression models with wavelength selection algorithms. The experiment was carried out at the National Soybean Research Centre (Embrapa Soja) in the 2017/2018, 2018/2019 and 2019/2020 soybean crop seasons, at the development stages V4–V5, R1–R2, R3–R4 and R5.1–R5.3. The experimental plots were managed to obtain different conditions of K⁺ availability for the plants, from a severe deficiency level to an appropriate level of the nutrient, under the following experimental treatments: severe potassium deficiency, moderate potassium deficiency and adequate supply of potassium. Spectral data were obtained by the ASD Fieldspec 3 Jr. hyperspectral sensor in the visible/near-infrared spectral range (400–1000 nm) and correlated to leaf K⁺ through ten machine learning methods: Partial Least Square Regression (PLSR), interval Partial Least Squares (iPLS), Genetic Algorithm (GA), Competitive Adaptive Reweighted Sampling (CARS), Random Frog (RF, Frog), Variable Combination Population Analysis (VCPA), Principal Component Regression (PCR), Support Vector Machine (SVM), Successive Projections Algorithm (SPA), and Stepwise. The results showed that K⁺ deficiency significantly reduces grain yield and nutrient content in the leaf, enabling the separation of all treatments by Tukey's test. Among the 601 wavelengths obtained by the sensor, the algorithms selected from 1 to 33.28% of them, largely distributed in the red, green, blue, red-edge and NIR regions. In all stages of development, it was possible to quantify the nutrient with high accuracy (R² ≅ 0.88). The multivariate regression models based on variable selection increased the accuracy (R²) by about 7.65% in the calibration step and 6.45% in the cross-validation step, when compared to the model using the full spectra. The results obtained demonstrate that the monitoring of K⁺ in soybean leaves is possible and has the potential to determine the nutritional content in the early stages of plant development.
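The GA-based wavelength selection described above typically encodes one bit per candidate wavelength (1 = keep); a minimal sketch, with the 601-band count taken from the abstract and the population size and inclusion probability assumed:

```python
import random

n_wavelengths = 601                        # 400-1000 nm range from the abstract
pop_size, p_keep = 30, 0.1                 # assumed values
pop = [[int(random.random() < p_keep) for _ in range(n_wavelengths)]
       for _ in range(pop_size)]           # one bit per band: 1 = keep

selected = [i for i, bit in enumerate(pop[0]) if bit]   # bands kept by one chromosome
```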
... At each generation, each parent generates one offspring, the parent and the offspring are evaluated, and the best μ individuals are selected as new parents (see Figure 3). It is a method belonging to the class proposed by [71][72]. Unlike previous related methods such as the Steady State [68], the authors introduced the possibility of adding noise to the fitness, thus making the selective process stochastic. ...
Preprint
Full-text available
The mutual relationship between evolution and learning is a controversial topic in the artificial intelligence and neuro-evolution communities. After more than three decades, there is still no common agreement on the matter. In this paper, the author investigates whether combining learning and evolution permits finding better solutions than those discovered by evolution alone. More specifically, the author presents a series of empirical studies that highlight some specific conditions determining the success of such a combination, such as the introduction of noise during the learning and selection processes. Results are obtained in two qualitatively different domains, where agent/environment interactions are minimal or absent.
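The selection scheme quoted above (each parent produces one offspring, and the best μ of the combined pool survive, with noise added to the fitness so that selection becomes stochastic) can be sketched as follows; the noise level is an assumption.

```python
import random

def noisy_plus_selection(parents, offspring, mu, fitness, noise=0.1):
    pool = parents + offspring              # each parent contributed one offspring
    return sorted(pool, key=lambda x: fitness(x) + random.gauss(0, noise),
                  reverse=True)[:mu]        # noisy ranking -> stochastic selection
```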
... Reading and reviewing the WOA variant papers shows that they are mostly hybridized with one or more algorithms from three main categories: evolutionary, physics-based, and swarm intelligence. Evolutionary algorithms have prominent abilities, such as effective global search strategies, self-adaptive mechanisms, and archiving methods [196,197], encouraging researchers to alleviate the WOA's weak points using such abilities. However, most WOA variants claim that the canonical WOA lacks a meaningful search strategy for solving real-world optimization problems, which causes it to encounter issues such as premature convergence, local optima trapping, and loss of population diversity. ...
Article
Despite the simplicity of the whale optimization algorithm (WOA) and its success in solving some optimization problems, it faces many issues. Thus, WOA has attracted scholars' attention, and researchers frequently prefer to employ and improve it to address real-world optimization problems. As a result, many WOA variants have been developed, usually following two main approaches: improvement and hybridization. However, no comprehensive study critically reviews and analyzes WOA and its variants to identify effective techniques and algorithms and to develop more successful variants. Therefore, in this paper, the WOA is first critically analyzed, and then the last 5 years' developments of WOA are systematically reviewed. To do this, a new adapted PRISMA methodology is introduced to select eligible papers, comprising three main stages: identification, evaluation, and reporting. The evaluation stage was improved using three screening steps and strict inclusion criteria to select a reasonable number of eligible papers. Ultimately, 59 improved WOA and 57 hybrid WOA variants published by reputable publishers, including Springer, Elsevier, and IEEE, were selected as eligible papers. Effective techniques for improving WOA and successful algorithms for hybridizing it are described for the eligible variants. The eligible WOA variants are reviewed in continuous, binary, single-objective, and multi/many-objective categories. The distribution of eligible WOA variants regarding their publisher, journal, application, and authors' country is visualized. It is also concluded that most papers in this area lack a comprehensive comparison with previous WOA variants and are usually compared only with other algorithms. Finally, some future directions are suggested.
... Neuroevolution uses evolutionary algorithms [18] to optimise neural networks in various problem domains. These evolutionary algorithms follow the principles of Darwinian evolution by repeatedly selecting, mutating and mating the best-performing individuals from a population of networks. ...
... Alternatively, the application of machine learning offers an approach to exploring the ECM solution space that can be more transparent and reproducible [28][29][30][31][32][33][34][35][36][37]. Evolutionary algorithms have been used since 2004 to explore ECM structure and have shown great promise in ECM-based EIS analysis [28,36,37]. ...
Preprint
Full-text available
Electrochemical Impedance Spectroscopy (EIS) is a powerful tool for electrochemical analysis; however, its data can be challenging to interpret. Here, we introduce a new open-source tool named AutoEIS that assists EIS analysis by automatically proposing statistically plausible equivalent circuit models (ECMs). AutoEIS does this without requiring an exhaustive mechanistic understanding of the electrochemical systems. We demonstrate the generalizability of AutoEIS by using it to analyze EIS datasets from three distinct electrochemical systems, including thin-film oxygen evolution reaction (OER) electrocatalysis, corrosion of self-healing multi-principal components alloys, and a carbon dioxide reduction electrolyzer device. In each case, AutoEIS identified competitive or in some cases superior ECMs to those recommended by experts and provided statistical indicators of the preferred solution. The results demonstrated AutoEIS's capability to facilitate EIS analysis without expert labels while diminishing user bias in a high-throughput manner. AutoEIS provides a generalized automated approach to facilitate EIS analysis spanning a broad suite of electrochemical applications with minimal prior knowledge of the system required. This tool holds great potential in improving the efficiency, accuracy, and ease of EIS analysis and thus creates an avenue to the widespread use of EIS in accelerating the development of new electrochemical materials and devices.
... The extracted blobs are then labelled as vehicles or non-vehicles using vehicle categorization. Foreground segmentation using a stationary camera is a mature technique (Whitley, 2001). For example, the background difference technique primarily utilises the difference between the target and background images in terms of colour, edge, intensity, and gradient direction in order to determine what should be extracted as the foreground. ...
... Evolutionary algorithms provide a solution to this problem [11,12]. The swarm intelligence evolutionary algorithm [13] is a heuristic method that simulates the evolutionary behavior of various biological groups. ...
Article
With the deepening of hospital informatization construction, the electronic health record (EHR) system has been widely used in the clinical diagnosis and treatment process, resulting in a large amount of medical data. Electronic medical records contain a large amount of rich medical information, which is an important resource for disease prediction, personalized information recommendation, and drug mining. However, the medical information contained in electronic medical records cannot be automatically acquired, analyzed and utilized by computers. In this paper, we utilize machine learning algorithms for intelligent analysis of large-scale electronic medical records to explore and develop general methods and tools suitable for electronic medical record analysis in medical databases. This is of great value for summarizing the therapeutic effects of various diagnosis and treatment programs, disease diagnosis, treatment, and medical research. We propose an ECML-based intelligent analysis method for electronic medical records. First, we perform data preprocessing on the electronic medical record. Second, we design an intelligent analysis method for electronic medical records based on a deep learning model. Third, we design a model hyperparameter optimization method based on evolutionary algorithms. Finally, we compare and analyze the performance of the proposed model through experiments, and the experimental results show that the model proposed in this paper has good performance.
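The evolutionary hyperparameter optimization mentioned in the abstract can be sketched as a simple mutate-and-truncate loop; the search space, mutation scale, and scoring hook below are assumptions, not the paper's method.

```python
import random

SPACE = {"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)}   # assumed search space

def sample():
    return {k: random.uniform(*rng) for k, rng in SPACE.items()}

def mutate(cfg, rate=0.3, scale=0.1):
    new = dict(cfg)
    for k, (lo, hi) in SPACE.items():
        if random.random() < rate:                     # perturb within bounds
            new[k] = min(hi, max(lo, new[k] + random.gauss(0, (hi - lo) * scale)))
    return new

def evolve(score, pop_size=10, generations=15):
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        pop = pop[: pop_size // 2]                     # truncation: keep best half
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=score)

# score(cfg) would train the model with cfg and return validation accuracy;
# the toy score below merely stands in for that expensive evaluation.
toy_score = lambda cfg: -abs(cfg["lr"] - 0.01) - abs(cfg["dropout"] - 0.2)
best = evolve(toy_score)
```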
Book
This book covers the Proceedings of the 13th International Conference on Information Systems and Advanced Technologies “ICISAT’2023.” One of the evocative and valuable dimensions of this conference is the way it brings together researchers, scientists, academics, and engineers in the field from different countries and enables discussions and debate of relevant issues, challenges, opportunities, and research findings. The ICISAT’2023 conference provided a forum for research and developments in the field of information systems and advanced technologies and new trends in developing information systems organizational aspects of their development and intelligent aspects of the final product. The aim of the ICISAT’2023 is to report progress and development of methodologies, technologies, planning and implementation, tools, and standards in information systems, technologies, and sciences. ICISAT’2023 aims at addressing issues related to the intelligent information, data science, and decision support system, from multidisciplinary perspectives and to discuss the research, teaching, and professional practice in the field. The book of ICISAT’2023 includes selected papers from the 13th International Conference on Information Systems and Advanced Technologies “ICISAT’2023,” organized during December 29–30, 2023. In this book, researchers, professional software, and systems engineers from around the world addressed intelligent information, data science, and decision support system for the conference. The ideas and practical solutions described in the book are the outcome of dedicated research by academics and practitioners aiming to advance theory and practice in this research domain. The list of topics is in all the areas of modern intelligent information systems and technologies such as neural networks, evolutionary computing, adaptive systems, pervasive system, ubiquitous system, E-learning and teaching, knowledge-based paradigms, learning paradigms, intelligent data analysis, intelligent decision making and support system, intelligent network security, web intelligence, deep learning, natural language processing, image processing, general machine learning, and unsupervised learning.
Article
Crowdsourcing utilizes the crowd intelligence for pervasive data sensing and processing. When the processing task is a decision-making and optimization problem, the objective is evaluated based on sensed data, which is defined as crowdsourcing-based distributed optimization (CrowdDO). As evolutionary computation (EC) is a powerful technique for black-box and data-driven optimization problems, this paper combines crowdsourcing and EC to propose crowdsourcing-based EC (CrowdEC) for CrowdDO. CrowdEC performs optimization based on a server and a crowd of workers. Once receiving a CrowdDO request, the server posts the problem to workers. Each worker senses its own data and makes local decisions with a local EC optimizer. Due to the heterogeneity of worker behaviors and devices, the sensed data are partial and noisy, and thus the server needs to coordinate global optimization based on worker information. To avoid the leakage of worker privacy, workers only compare optimization results with adjacent workers and report comparison results to the server. With partial comparison results, the server adopts competitive ranking to guide worker cooperation and develops reliability detection to distinguish unreliable workers. A crowdsourcing-based level-based learning swarm optimizer is implemented as an example. Comparison experiments on benchmark test suites and distributed clustering optimization demonstrate the potential applications of CrowdEC.
Book
Full-text available
The book is addressed both to researchers professionally engaged in problems of environmental and atmospheric protection and to scientists interested in the paradigm of intelligent computing, which provides universal methods for solving a wide class of problems in modeling and analyzing large data sets.
Article
Full-text available
Temperature control for fermentation is crucial as it directly affects microorganism growth, productivity, and metabolic activity. This work proposes a modified fractional order proportional‐integral‐derivative (FOPID‐DF) controller that incorporates the system model in the control loop configuration utilizing a differential filter for precise temperature control within a narrow operating range. PID, fractional order PID (FOPID), modified fractional order PID (MFOPID), and fractional order internal model control (IMC) controllers were also designed for comparative analysis. The simulation results demonstrated that the proposed FOPID‐DF controller outperforms the other designed controllers, shown by a 22.33% reduction in integral absolute error (IAE) and a 29.06% decrease in integral square error (ISE) compared to the FOPID controller and various other improved performance indicators. Parameter variation and noise analysis highlighted the ability of the controller to maintain stability and performance under changing conditions. The simulation outcomes suggested that the FOPID‐DF controller excels in temperature control, ensuring optimal microorganism growth and metabolic activity.
Article
Full-text available
This paper presents a comprehensive review of deep learning applications in the video game industry, focusing on how these techniques can be utilized in game development, experience, and operation. Since games rely on computational techniques, the game world can be viewed as an integration of various complex data. This review examines the use of deep learning in processing various types of game data. The paper classifies game data into asset data, interaction data, and player data, according to their utilization in game development, experience, and operation, respectively. Specifically, this paper discusses deep learning applications in generating asset data such as object images, 3D scenes, avatar models, and facial animations; enhancing interaction data through improved text-based conversations and decision-making behaviors; and analyzing player data for cheat detection and match-making purposes. Although this review may not cover all existing applications of deep learning, it aims to provide a thorough presentation of the current state of deep learning in the gaming industry and its potential to revolutionize game production by reducing costs and improving the overall player experience.
Article
The move to predictive and prescriptive maintenance underlines the need for cost-effective methods for monitoring degradation and predicting failures. It is in hydroelectric power plants (HPPs) that the most powerful and distinctive techniques of this evolution have been observed. The ultimate premise of this transition is the need to improve the decision-making process for planning activities in Prescriptive Maintenance Planning (MP). This encourages the use of fuzzy set theory in process management. Although there are many studies on MP in the literature, they do not fully reflect real life, as they ignore potential schedule delays and the indirect impact of a sustainable strategy on revenues. Therefore, the first stage of this study aims to weight the maintenance types (MTs) on a plant basis for the generator, the most critical system of the pumped-storage HPP, and the second stage aims to create MPs that minimize delays due to system shutdowns using the weighted MTs. For this purpose, in the first stage the MTs for the HPP were weighted by an AHP-TOPSIS combination augmented with Pythagorean fuzzy numbers, and in the second stage a Genetic Algorithm model was designed for the problem of coordinating maintenance durations and deadlines in addition to these weights. Averaged over 50 iterations, the number of delayed MPs is 142.55, indicating that the algorithm succeeds in reducing delayed MPs, while the average weighted delay value is 172.85, indicating that more work is needed on weighted delay.
Article
Full-text available
In the animal kingdom, a mutually beneficial ecosystemic coexistence and hunting partnership between wolves and ravens, known as the wolf-bird relationship, has been observed across various cultures. The Wolf-Bird Optimizer (WBO), a novel metaheuristic algorithm inspired by this natural zoological relationship, is proposed. The method is developed from the foraging behaviors of ravens and wolves, in which ravens intelligently locate prey and signal wolves for assistance in the hunt. Furthermore, a framework for resource tradeoffs in project scheduling using metaheuristic algorithms and the Building Information Modeling (BIM) approach is established in this research. For statistical analysis, the algorithms are independently run 30 times with a preset stopping condition, enabling the calculation of descriptive statistical metrics such as the mean, standard deviation (SD), and required number of objective function evaluations. To ensure the statistical significance of the results, several inferential statistical methods, including the Kolmogorov-Smirnov, Wilcoxon, Mann-Whitney, and Kruskal-Wallis tests, are employed. Additionally, the capability of the proposed algorithm in solving resource tradeoff problems in four construction projects is assessed. The performance of the WBO algorithm is also evaluated on two benchmark construction projects, with the results indicating the algorithm's ability to produce competitive and exceptional outcomes on these tradeoff problems.
Article
Full-text available
INTRODUCTION: Evolutionary algorithms, created back in 1953, have gone through various phases of development over the years. They have been put to use to solve various problems in different domains, including complex ones such as the well-known Travelling Salesperson Problem (TSP). OBJECTIVES: The main objective of this research is to find out the advancements in evolutionary algorithms and to check whether they are still relevant in 2023. METHODS: To give an overview of the related concepts, subdomains, pros, and cons, the historical and recent developments are discussed and critiqued to provide insights into the results and a better conception of the trends in the domain. RESULTS: For a better perception of the development of evolutionary algorithms over the years, a decade-wise trend analysis has been done for the past three decades. CONCLUSION: The scope of research in the domain is ever expanding; to name a few directions, EAs for data mining and hybrid EAs are still under development.
Chapter
The Spanish Committee of Automatic Control (Comité Español de Automática, CEA) is a non-profit scientific association that promotes development, research, and university teaching in automatic control. It is the national member of the International Federation of Automatic Control (IFAC) and has held the Jornadas de Automática annually since 1977. These are organized by different Spanish-speaking universities or research centres, and their aim is to bring together professors, researchers, students, and professionals in the field of automatic control to address topics related to teaching and research (automation and control, instrumentation, robotics, systems modelling and simulation, computer vision, biomedical engineering, artificial intelligence, education). The XLIV Jornadas de Automática are held fully in person in Zaragoza on 6-8 September 2023. Zaragoza is now the fourth-largest city in Spain, with about 700,000 inhabitants and a broad offering of leisure, business, conference, cultural, and gastronomic tourism. Its history can be seen in the contrast between its historic corners and its modern architecture. Zaragoza's location is strategically excellent, with superb connections to Spain's main cities, and the city is highly dynamic in many respects, particularly in the creation of new businesses. The Universidad de Zaragoza has its origins in a school of arts created by the Church in the twelfth century, where grammar and philosophy were taught and bachelor's degrees were granted. In November 1582, Pedro Cerbuna, prior of the cathedral of San Salvador in Zaragoza and later bishop of Tarazona, provided the funds needed to open the new university, which was inaugurated on 24 May 1583. The XLIV Jornadas de Automática are hosted by the Escuela de Ingeniería y Arquitectura (EINA) of the Universidad de Zaragoza, the public university of Aragón. The EINA offers various bachelor's and master's degrees related to engineering and architecture and participates in several doctoral programmes. This edition is organized by professors and researchers belonging mainly to the Systems Engineering and Automatic Control area and the Aragón Institute for Engineering Research, and counts more than 250 attendees. As in previous editions, the programme features a variety of scientific-technical, social, and cultural activities. In the scientific-technical programme, the sessions of CEA's 9 thematic groups are devoted to discussing each group's activities, past and future. In addition, in this edition each thematic group presents oral communications selected from the 154 papers accepted for publication in the proceedings and for display as posters throughout the Jornadas. There are two plenary sessions given by two internationally prominent researchers, a round table on research funding, an Industry 4.0 session presenting success stories of industry-university collaboration, and a special session in collaboration with the Spanish section of the International Society of Automation (ISA). Demonstrations of applications with robots and automatic systems are also given at the stands.
Awards are presented at the event for the best papers and doctoral theses within the thematic groups, together with the Premio Nacional de Automática (2023 edition) and the Premio CEA al Joven Talento Femenino en Automática. This edition pays a special tribute to the late Professor Dr. Manuel Silva Suarez, a figure of extraordinary national and international scientific standing and a dear friend. Cultural activities are also organized to showcase the city: on its historic side, guided tours of its "2000 years of history" and of the Aljafería Palace; on its present-day side, a guided visit to the Mobility Museum in the Pabellón Puente. The organizing committee thanks the participants for their contributions to the event; the companies sponsoring the awards and activities; the steering and scientific committees; the Universidad de Zaragoza; and the academic, local, and regional authorities of the Autonomous Community of Aragón. Special thanks go to the Library Service of the Universidade da Coruña (UDC) for its work in editing the proceedings of the Jornadas. The proceedings are published in electronic format and hosted in open access in RUC, the institutional repository of the UDC, under a Creative Commons CC BY-NC-SA 4.0 licence. In addition to the ISBN and DOI of the complete volume, each accepted communication has been assigned a DOI to facilitate online retrieval and bibliographic citation.
Chapter
The impact of climate change on the one hand and global population growth on the other highlight the need for innovative technologies to ensure the world’s future. Scarcity of resources, rising protein demand, and depleted ocean stocks all add to the value of the aquaculture sector. As a result, there are numerous companies that address the issues that this valuable sector faces. From disease control to feed optimization, innovative deep tech companies can be found in the countries with the highest aquaculture production in Asia, Europe, and Latin America.
Thesis
Full-text available
In order to identify and sustainably utilize the services rendered by the ecosystem, it is important to understand the process of land cover and land use change and its implications for environmental conditions and ecosystem functioning. Together with GIS, remotely sensed data enhance the capacity to assess human effects on the environment in quantitative, qualitative, and spatial terms. Thus, this study assessed the pattern of LULC dynamics and its impact on soil erosion, soil quality, and organic carbon stocks in the Bururi catchment, western highlands of Ethiopia. In this study, aerial photographs, Landsat imagery, a digital elevation model (DEM), rainfall data, soil data, and socio-economic survey data were analyzed. These data were obtained from the Ethiopian Mapping Agency (EMA), the online USGS archives, and the National Meteorological Service Agency. The soil data were generated by laboratory analysis using standard procedures, while the socio-economic data were gathered from 170 sample farm households, key informants, and natural resources management experts of the study area. The long-term (1957-2018) LULC change analysis was conducted using ERDAS Imagine (2015) software. To verify the accuracy of image classification, the Kappa coefficient, producer's accuracy, and user's accuracy were generated. The soil loss of the study catchment was computed by employing a GIS-based Revised Universal Soil Loss Equation (RUSLE) model. To examine the effect of LULC and topographic elevation on soil quality indicators and organic carbon stocks, principal component analysis (PCA) and multivariate analysis of variance (MANOVA) in the general linear model (GLM) procedure of SPSS (v.24) were used. The socio-economic data collected through the questionnaire survey were analyzed in SPSS (v.24) by employing both descriptive statistics and binomial logistic regression. The qualitative data gathered through interviews, FGDs, and field observations were used to substantiate the quantitative analysis of the questionnaire data. The LULC change analysis revealed remarkable changes between 1957 and 2018: forest cover and grassland decreased by 6.7% and 2.8%, respectively, while agricultural land, shrubland, and bare/degraded land increased by 1.3%, 1.9%, and 6.3%, respectively. Due to the effect of the observed LULC change, the mean annual soil loss was 41.04 t ha−1 yr−1 in 1957, whereas by 2018 it had increased to 48.91 t ha−1 yr−1. On the other hand, LULC and topographic elevation were found to significantly influence the spatial distribution of soil quality indicators and organic carbon stocks. Higher SOC stocks were found under forest cover, shrubland, and grassland compared to agricultural and degraded lands. The binomial logistic regression revealed that, of the sixteen hypothesized independent variables, twelve had a significant influence on farmers' decisions to implement land management practices. In general, the LULC dynamics observed in the area remarkably impacted soil erosion, soil quality indicators, and organic carbon stocks. Given the severity and magnitude of the problem following the LULC dynamics, the study recommends the design of a proper land use policy and sustainable management of land resources to reverse the prevailing land degradation in the study area. The designed land management options need to consider the socio-demographic, biophysical, and institutional factors in the study catchment.
Article
Full-text available
This study simulated and optimized the yield and yield parameters of tomato using the AquaCrop model and a genetic algorithm (GA), respectively. The AquaCrop model was first calibrated using data obtained from the field and was then used to simulate the observed yield, water productivity, and biomass of tomato. The Root Mean Square Error (RMSE), Coefficient of Residual Mass (CRM), Normalized Root Mean Square Error (NRMSE), and modelling efficiency (EF) were used to compare the observed and simulated values. The governing equation of the AquaCrop simulation software was then optimized using the evolutionary optimization method of GA in the MATLAB programming environment. All the statistical indices except CRM indicated good agreement between the simulated and observed values. CRM values of −0.11, −0.06, and −0.20 were obtained for the yield, biomass, and water productivity of tomato, indicating a very slight over-estimation of the observed results by the AquaCrop model. The optimization algorithm terminated at optimal values of yield and biomass of 4.496 t ha−1 and 4.90 t ha−1, respectively. The GA revealed that the yield and biomass of tomato can be increased by 57% and 23%, respectively, if the optimized parameters are attained in the field experiment or used during simulation. Thus, the study ascertained that crop simulation models such as AquaCrop, together with optimization algorithms, can be used to identify optimal parameters that, if maintained in the field, could improve the yield of crops such as tomato.
Article
Biodiesel is an emblematic green energy source undergoing vigorous development, with important strategic significance for the sustainable development of the global economy, the promotion of energy substitution, the reduction of environmental pressure, and the control of atmospheric pollution. To study the burning characteristics of biodiesel, a series of experiments was carried out in an ISO9705 full-scale room, and the effects of pool size on the burning rate, flame height, flame oscillation frequency, and flame radiation fraction of biodiesel fires were systematically analyzed. The study uses a genetic algorithm-back propagation neural network (GA-BPNN) for real-time prediction of transient fire mass loss rates. Three parameters (pool size, liquid depth, and burning time) are paired with the fuel mass loss rate and used to train the GA-BPNN model. The results show that the GA-BPNN predictions correlate well with the validated experimental values, with a relative error of less than 15%. Furthermore, the ratios of intermittent flame height and continuous flame height to their mean value are calculated to be about 1.72 and 0.58, respectively. The flame oscillation frequency decreases as the oil pool size increases; the correlation can be described by f = 0.3(D/g′)^(−1/2). Finally, a new correlation, χrad = 0.2Q̇*^(−2/3), is proposed to predict the flame radiation fraction by increasing the flame viewing-factor coefficient. The proposed correlation describes its evolution under different oil pool sizes, which is essential for estimating the flame radiation impact on the surroundings of such biodiesel fires.
Article
Full-text available
Many potential inventions are never discovered because the thought processes of scientists and engineers are channeled along well-traveled paths. In contrast, the evolutionary process tends to opportunistically solve problems without considering whether the evolved solution comports with human preconceptions about whether the goal is impossible. This paper demonstrates how genetic programming can be used to automate the process of exploring queries, conjectures, and challenges concerning the existence of seemingly impossible entities. The paper suggests a way by which genetic programming can be used to automate the invention process. We illustrate the concept using a challenge posed by a leading analog electrical engineer concerning whether it is possible to design a circuit composed of only resistors and capacitors that delivers a gain of greater than one. The paper contains a circuit evolved by genetic programming that satisfies the requirement of this challenge as well as a related, more difficult challenge. The original challenge was motivated by a circuit patented in 1956 for preprocessing inputs to oscilloscopes. The paper also contains an evolved circuit satisfying (and exceeding) the original design requirements of the circuit patented in 1956. This evolved circuit is another example of a result produced by genetic programming that is competitive with a human-produced result that was considered to be creative and inventive at the time it was first discovered.
Book
Full-text available
Conference Paper
Full-text available
A different crossover operator, uniform crossover, is presented. It is compared theoretically and empirically with one-point and two-point crossover and is shown to be superior in most cases.
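For reference, uniform crossover decides each gene position independently with a coin flip; a minimal Python sketch on bit-string parents follows (the 0.5 swap probability is the conventional choice, not a detail taken from this abstract).

```python
import random

def uniform_crossover(p1, p2, p_swap=0.5):
    """Uniform crossover: each gene is drawn independently from one parent."""
    c1, c2 = [], []
    for a, b in zip(p1, p2):
        if random.random() < p_swap:
            a, b = b, a          # swap this position between the children
        c1.append(a)
        c2.append(b)
    return c1, c2

parent1 = [0, 0, 0, 0, 0, 0, 0, 0]
parent2 = [1, 1, 1, 1, 1, 1, 1, 1]
print(uniform_crossover(parent1, parent2))
```

Unlike one-point or two-point crossover, every bit position can be exchanged, which removes positional bias at the cost of being more disruptive to co-adapted genes.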
Chapter
Previous studies concluded that the best performance from an evolutionary programming (EP) algorithm was obtained by tuning the parameters for each problem. These studies used fitness at a pre-specified number of evaluations as the criterion for measuring performance. This study uses a complete factorial design for a large set of parameters on a wider array of functions and uses the mean trials to find the global optimum when practical. Our results suggest that the most critical EP control parameter is the perturbation method/rate of the strategy variables that control algorithm search potential. We found that the decline of search capacity limits the difficulty of functions that can be successfully solved with EP. Therefore, we propose a soft restart mechanism that significantly improves EP performance on more difficult problems.
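For context, the strategy variables referred to here are the per-component step sizes that EP perturbs along with the solution itself. A minimal sketch of the standard log-normal self-adaptation rule follows, with textbook learning rates rather than the tuned settings studied in the chapter; the proposed soft restart mechanism is not shown.

```python
import math
import random

def ep_mutate(x, sigma):
    """Self-adaptive EP mutation: perturb the step sizes, then the solution."""
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-component learning rate
    tau_prime = 1.0 / math.sqrt(2.0 * n)        # global learning rate
    g = random.gauss(0.0, 1.0)                  # one draw shared by all components
    new_sigma = [s * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xi + s * random.gauss(0.0, 1.0) for xi, s in zip(x, new_sigma)]
    return new_x, new_sigma

x, sigma = [0.5, -1.2, 3.0], [0.1, 0.1, 0.1]
print(ep_mutate(x, sigma))
```

When the sigmas collapse toward zero, search capacity declines, which is the failure mode a restart mechanism is meant to counteract.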
Chapter
Genetic algorithms imitate the collective learning paradigm found in living nature. They derive their power largely from their implicit parallelism gained by processing a population of points in the search space simultaneously. In this paper, we describe an extension of genetic algorithms making them also explicitly parallel. The advantages of the introduction of a population structure are twofold: firstly, we specify an algorithm which uses only local rules and local data making it massively parallel with an observed linear speedup on a transputer-based parallel system, and secondly, our simulations show that both convergence speed and final quality are improved in comparison to a genetic algorithm without population structure.
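The local rules described here are the hallmark of fine-grained (cellular) parallel GAs: each individual mates only within a small neighbourhood. A minimal sketch of one local update on a ring topology follows; the topology, operators, and replacement rule are illustrative assumptions, not the paper's exact design.

```python
import random

def local_step(pop, fitness, mutate, i):
    """One cellular-GA update: mate individual i with its best neighbour."""
    n = len(pop)
    neighbours = [(i - 1) % n, (i + 1) % n]      # ring topology
    mate = max(neighbours, key=lambda j: fitness(pop[j]))
    cut = random.randrange(1, len(pop[i]))       # one-point crossover
    child = mutate(pop[i][:cut] + pop[mate][cut:])
    if fitness(child) >= fitness(pop[i]):        # local replacement rule
        pop[i] = child

# Example on 8-bit strings with a ones-counting fitness.
flip = lambda s: [b ^ 1 if random.random() < 0.05 else b for b in s]
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
for _ in range(200):
    local_step(pop, sum, flip, random.randrange(len(pop)))
print(max(pop, key=sum))
```

Because every update touches only local data, all cells can in principle be updated in parallel, which is the source of the linear speedup reported.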
Article
Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing search algorithms is difficult without clearly defined evaluation goals. In this paper we discuss some basic principles that can be used to develop test suites and we examine the role of test suites as they have been used to evaluate evolutionary search algorithms. Current test suites include functions that are easily solved by simple search methods such as greedy hill-climbers. Some test functions also have undesirable characteristics that are exaggerated as the dimensionality of the search space is increased. New methods are examined for constructing functions with different degrees of nonlinearity, where the interactions and the cost of evaluation scale with respect to the dimensionality of the search space.
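As a point of comparison, a classic test function whose cost and number of local optima scale with dimensionality is the Rastrigin function; a minimal sketch follows. Note that Rastrigin is separable, exactly the kind of property the paper cautions against, so nonseparable suites typically rotate or otherwise couple the variables.

```python
import math

def rastrigin(x):
    """Rastrigin function: highly multimodal, separable, scales with len(x)."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

print(rastrigin([0.0] * 10))   # global optimum: 0.0 at the origin
print(rastrigin([0.5] * 10))   # a nearby non-optimal point
```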
Conference Paper
Genetic Algorithms (GAs) have received a great deal of attention regarding their potential as optimization techniques for complex functions. The level of interest and success in this area has led to a number of improvements to GA-based function optimizers and a good deal of progress in characterizing the kinds of functions that are easy/hard for GAs to optimize. With all this activity, there has been a natural tendency to equate GAs with function optimization. However, the motivating context of Holland's initial GA work was the design and implementation of robust adaptive systems. In this paper we argue that a proper understanding of GAs in this broader adaptive systems context is a necessary prerequisite for understanding their potential application to any problem domain. We then use these insights to better understand the strengths and limitations of GAs as function optimizers.
Conference Paper
In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a “classical” Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigorous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist because partial solutions of consistently above average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior.
Conference Paper
This paper describes and analyzes CHC, a nontraditional genetic algorithm which combines a conservative selection strategy that always preserves the best individuals found so far with a radical (highly disruptive) recombination operator that produces offspring that are maximally different from both parents. The traditional reasons for preferring a recombination operator with a low probability of disrupting schemata may not hold when such a conservative selection strategy is used. On the contrary, certain highly disruptive crossover operators provide more effective search. Empirical evidence is provided to support these claims.
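CHC's highly disruptive recombination operator, HUX, swaps exactly half of the bits in which the parents differ, producing offspring maximally distant from both; a minimal sketch of that operator follows (CHC's incest prevention and cataclysmic restarts are omitted).

```python
import random

def hux(p1, p2):
    """Half Uniform Crossover: swap exactly half of the differing positions."""
    diff = [i for i, (a, b) in enumerate(zip(p1, p2)) if a != b]
    random.shuffle(diff)
    c1, c2 = list(p1), list(p2)
    for i in diff[: len(diff) // 2]:
        c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

print(hux([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 1]))
```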
Article
Algorithms are presented to generate the n-bit binary reflected Gray code and codewords of fixed weight in that code. Both algorithms are efficient in that the time required to generate the next element from the current one is constant. Applications to the generation of the combinations of n things taken k at a time, the compositions of integers, and the permutations of a multiset are discussed.
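The n-bit binary reflected Gray code also has a compact closed-form conversion; a minimal sketch of the standard mapping and its inverse follows (the constant-time successor and fixed-weight generators described in the article are not reproduced).

```python
def to_gray(n):
    """Binary reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse mapping: recover the integer from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])   # adjacent codes differ in one bit
assert all(from_gray(to_gray(i)) == i for i in range(256))
```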
Article
Delta coding is an iterative genetic search strategy that dynamically changes the representation of the search space in an attempt to exploit different problem representations. Delta coding sustains search by reinitializing the population at each iteration of search. This helps to avoid the asymptotic performance typically observed in genetic search as the population becomes more homogeneous. Here, the optimization ability of delta coding is empirically compared against CHC, ESGA, GENITOR, and random mutation hill-climbing (RMHC) on a suite of well-known test functions with and without Gray coding. Issues concerning the effects of Gray coding on these test functions are addressed.
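The core idea of delta coding, re-encoding each iteration's search space as offsets around the previous best solution over a shrinking range, can be sketched numerically. The published method operates on Gray-coded bit strings, so this real-valued version with a simple truncation-selection inner loop is a simplification under stated assumptions.

```python
import random

def delta_iteration(best, delta_range, fitness, pop_size=20, gens=50):
    """Search a window of 'deltas' centred on the current best solution."""
    pop = [[random.uniform(-delta_range, delta_range) for _ in best]
           for _ in range(pop_size)]
    for _ in range(gens):
        # Minimise fitness of (best + delta); keep the better half.
        pop.sort(key=lambda d: fitness([b + di for b, di in zip(best, d)]))
        pop = pop[: pop_size // 2]
        pop += [[di + random.gauss(0.0, 0.1 * delta_range) for di in d]
                for d in pop]                     # refill with mutated copies
    d = pop[0]
    return [b + di for b, di in zip(best, d)]

sphere = lambda x: sum(v * v for v in x)
best, rng = [2.0, -3.0], 4.0
for _ in range(5):                  # population reinitialised each iteration,
    best = delta_iteration(best, rng, sphere)
    rng *= 0.5                      # with a shrinking delta range
print(best)
```

The reinitialisation at each outer iteration is what lets the search escape the homogeneity that stalls a conventional GA population.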
Article
GENITOR is a genetic algorithm which employs one-at-a-time reproduction and allocates reproductive opportunities according to rank to achieve selective pressure. Theoretical arguments and empirical evidence suggest that GENITOR is less vulnerable to some of the biases that degrade performance in standard genetic algorithms. A distributed version of GENITOR which uses many smaller distributed populations in place of a single large population is introduced. GENITOR II is able to optimize a broad range of sample problems more accurately and more consistently than GENITOR with a single population. GENITOR II also appears to be more robust than a single population genetic algorithm, yielding better performance without parameter tuning. We present some preliminary analyses to explain the performance advantage of the distributed algorithm. A distributed search is shown to yield improved search on several classes of problems, including binary encoded feedforward neural networks, the Traveling Salesman Problem, and a set of 'deceptive problems' specially designed to be hard for genetic algorithms.
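A minimal sketch of GENITOR-style steady-state reproduction follows: the population stays sorted, parents are chosen by rank, and each child displaces the current worst individual. The linear-bias rank-selection formula and the fixed-midpoint crossover used here are illustrative assumptions rather than the exact published configuration.

```python
import random

def rank_index(n, bias=1.5):
    """Linear-bias rank selection over a sorted population (index 0 = best)."""
    u = random.random()
    return int(n * (bias - (bias * bias - 4.0 * (bias - 1.0) * u) ** 0.5)
               / (2.0 * (bias - 1.0)))

def genitor_step(pop, fitness, crossover, mutate):
    """One-at-a-time reproduction: the child replaces the worst individual."""
    pop.sort(key=fitness, reverse=True)          # keep the best first
    p1, p2 = pop[rank_index(len(pop))], pop[rank_index(len(pop))]
    pop[-1] = mutate(crossover(p1, p2))

# Example on 16-bit strings with a ones-counting fitness.
midpoint_cross = lambda a, b: a[: len(a) // 2] + b[len(b) // 2 :]
flip = lambda s: [b ^ 1 if random.random() < 0.02 else b for b in s]
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(2000):
    genitor_step(pop, sum, midpoint_cross, flip)
print(max(map(sum, pop)))
```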
Conference Paper
In this paper we introduce interval-schemata as a tool for analyzing real-coded genetic algorithms (GAs). We show how interval-schemata are analogous to Holland's symbol-schemata and provide a key to understanding the implicit parallelism of real-valued GAs. We also show how they support the intuition that real-coded GAs should have an advantage over binary coded GAs in exploiting local continuities in function optimization. On the basis of our analysis we predict some failure modes for real-coded GAs using several different crossover operators and present some experimental results that support these predictions. We also introduce a crossover operator for real-coded GAs that is able to avoid some of these failure modes.
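The real-coded crossover most closely associated with this interval-schemata analysis is blend crossover (BLX-α), which samples each child gene from the parents' interval widened by a factor α on each side; a minimal sketch with the customary α = 0.5 follows.

```python
import random

def blx_alpha(p1, p2, alpha=0.5):
    """Blend crossover: sample each gene from the parents' widened interval."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

print(blx_alpha([1.0, 4.0, -2.0], [2.0, 3.0, 0.0]))
```

The widening by α is what keeps the population's sampling interval from contracting purely as a side effect of recombination, one of the failure modes such analyses predict.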
Conference Paper
This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or "steady state") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques. Keywords: proportionate selection, ranking selection, tournament selection, Genitor, takeover time, time complexity, growth ratio.
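Of the schemes compared, tournament selection is the simplest to state: draw k individuals at random and keep the fittest. A minimal sketch follows, with the common binary tournament (k = 2) as the default; larger k gives stronger selection pressure and hence a shorter takeover time.

```python
import random

def tournament_select(pop, fitness, k=2):
    """Pick k individuals uniformly at random; the fittest of them wins."""
    contenders = random.sample(pop, k)
    return max(contenders, key=fitness)

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
parents = [tournament_select(pop, sum) for _ in range(len(pop))]
print(sum(map(sum, parents)) / len(parents))  # selection raises mean fitness
```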
Article
Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In the genetic programming paradigm, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. This chapter shows how to reformulate seemingly different problems into a common form (i.e. a problem requiring discovery of a computer program) and, then, to show how the genetic programming paradigm can serve as a single, unified approach for solving problems formulated in this common way.
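To make the idea of genetically mating programs concrete, here is a minimal sketch of subtree crossover on programs represented as nested tuples; this toy representation is an illustrative assumption, not Koza's LISP S-expression implementation.

```python
import random

# A program is a nested tuple like ('+', a, b), a variable name, or a constant.
def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every node of the program tree."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(t1, t2):
    """Subtree crossover: graft a random subtree of t2 at a random point of t1."""
    p1, _ = random.choice(list(subtrees(t1)))
    _, s2 = random.choice(list(subtrees(t2)))
    return replace(t1, p1, s2)

parent1 = ('+', ('*', 'x', 'x'), 3.0)      # x*x + 3
parent2 = ('*', ('+', 'x', 1.0), 'x')      # (x+1)*x
print(crossover(parent1, parent2))
```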
Article
This book sets out to explain what genetic algorithms are and how they can be used to solve real-world problems. The first objective is tackled by the editor, Lawrence Davis. The remainder of the book is turned over to a series of short review articles by a collection of authors, each explaining how genetic algorithms have been applied to problems in their own specific area of interest. The first part of the book introduces the fundamental genetic algorithm (GA), explains how it has traditionally been designed and implemented and shows how the basic technique may be applied to a very simple numerical optimisation problem. The basic technique is then altered and refined in a number of ways, with the effects of each change being measured by comparison against the performance of the original. In this way, the reader is provided with an uncluttered introduction to the technique and learns to appreciate why certain variants of GA have become more popular than others in the scientific community. Davis stresses that the choice of a suitable representation for the problem in hand is a key step in applying the GA, as is the selection of suitable techniques for generating new solutions from old. He is refreshingly open in admitting that much of the business of adapting the GA to specific problems owes more to art than to science. It is nice to see the terminology associated with this subject explained, with the author stressing that much of the field is still an active area of research. Few assumptions are made about the reader's mathematical background. The second part of the book contains thirteen cameo descriptions of how genetic algorithmic techniques have been, or are being, applied to a diverse range of problems. Thus, one group of authors explains how the technique has been used for modelling arms races between neighbouring countries (a non-linear dynamical system), while another group describes its use in deciding design trade-offs for military aircraft. My own favourite is a rather charming account of how the GA was applied to a series of scheduling problems. Having attempted something of this sort with Simulated Annealing, I found it refreshing to see the authors highlighting some of the problems that they had encountered, rather than sweeping them under the carpet as is so often done in the scientific literature. The editor points out that there are standard GA tools available for either play or serious development work. Two of these (GENESIS and OOGA) are described in a short, third part of the book. As is so often the case nowadays, it is possible to obtain a diskette containing both systems by sending your Visa card details (or $60) to an address in the USA.
Article
An extension of evolution strategies to multiparent recombination involving a variable number of parents to create an offspring individual is proposed. The extension is experimentally evaluated on a test suite of functions differing in their modality and separability and the regular/irregular arrangement of their local optima. Multiparent diagonal crossover and uniform scanning crossover and a multiparent version of intermediary recombination are considered in the experiments. The performance of the algorithm is observed to depend on the particular combination of recombination operator and objective function. In most of the cases a significant increase in performance is observed as the number of parents increases. However, there might also be no significant impact of recombination at all, and for one of the unimodal objective functions, the performance is observed to deteriorate over the course of evolution for certain choices of the recombination operator and the number of parents. Additional experiments with a skewed initialization of the population clarify that intermediary recombination does not cause a search bias toward the origin of the coordinate system in the case of domains of variables that are symmetric around zero.
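Of the operators mentioned, uniform scanning crossover is the most direct multiparent generalization of uniform crossover: each gene of the single child is drawn from one of the parents chosen at random. A minimal sketch follows; diagonal crossover and intermediary recombination follow the same multiparent pattern but are not shown.

```python
import random

def uniform_scanning(parents):
    """Each gene of the single child comes from a uniformly chosen parent."""
    n = len(parents[0])
    return [random.choice(parents)[i] for i in range(n)]

parents = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]
print(uniform_scanning(parents))
```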
Article
We review the main results obtained in the theory of schemata in genetic programming (GP), emphasizing their strengths and weaknesses. Then we propose a new, simpler definition of the concept of schema for GP, which is closer to the original concept of schema in genetic algorithms (GAs). Along with a new form of crossover, one-point crossover, and point mutation, this concept of schema has been used to derive an improved schema theorem for GP that describes the propagation of schemata from one generation to the next. We discuss this result and show that our schema theorem is the natural counterpart for GP of the schema theorem for GAs, to which it asymptotically converges.
Article
Scheduling of a bus transit system must be formulated as an optimization problem, if the level of service to passengers is to be maximized within the available resources. In this paper, we present a formulation of a transit system scheduling problem ...