Book

Genetic Algorithms + Data Structures = Evolution Programs

Authors:

Abstract

Genetic algorithms are founded upon the principle of evolution, i.e., survival of the fittest. Hence evolution programming techniques, based on genetic algorithms, are applicable to many hard optimization problems, such as optimization of functions with linear and nonlinear constraints, the traveling salesman problem, and problems of scheduling, partitioning, and control. The importance of these techniques is still growing, since evolution programs are parallel in nature, and parallelism is one of the most promising directions in computer science. The book is self-contained and the only prerequisite is basic undergraduate mathematics. This third edition has been substantially revised and extended by three new chapters and by additional appendices containing working material to cover recent developments and a change in the perception of evolutionary computation.

Chapters (8)

Evolutionary computation techniques have received a great deal of attention regarding their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in the area of nonlinear programming due to the fact that they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; however, these methods have several drawbacks, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods.
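Many of the constraint-handling approaches surveyed in this chapter reduce to penalizing infeasibility inside the fitness function. A minimal sketch of a static penalty scheme, one of the simplest of the surveyed families (the quadratic penalty form and the coefficient `r` are illustrative assumptions, not the paper's method):

```python
def evaluate_with_penalty(x, objective, constraints, r=1000.0):
    """Static-penalty fitness: raw objective plus a weighted sum of
    squared constraint violations. Constraints are given as functions
    g_i with g_i(x) <= 0 required; any positive g_i(x) is a violation."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + r * violation

# Toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
```

With this evaluation, the feasible point x = 1 scores 1.0 while the infeasible point x = 0 scores 1000.0, so selection steers the population toward the feasible region; the drawbacks the paper discusses (sensitivity to the choice of r, distortion of the landscape near the boundary) are exactly what the more sophisticated surveyed methods try to fix.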
There is a large class of interesting problems for which no reasonably fast algorithms have been developed. Many of these problems are optimization problems that arise frequently in applications. Given such a hard optimization problem it is often possible to find an efficient algorithm whose solution is approximately optimal. For some hard optimization problems we can use probabilistic algorithms as well — these algorithms do not guarantee the optimum value, but by randomly choosing sufficiently many “witnesses” the probability of error may be made as small as we like.
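The "witness" idea can be made concrete with Freivalds' probabilistic check of a matrix product — not named in the text, but a standard instance of the technique: each random witness vector that fails to expose an error halves the probability of accepting a wrong answer.

```python
import random

def freivalds(A, B, C, k=20):
    """Probabilistically check whether A @ B == C for n x n matrices.
    Each round draws a random 0/1 'witness' vector r and tests
    A(Br) == Cr in O(n^2) time (versus O(n^3) for full multiplication).
    A wrong C escapes one round with probability at most 1/2, so k
    independent rounds push the error probability below 2**-k."""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definite mismatch found
    return True           # probably correct
```

With k = 20 witnesses the chance of wrongly accepting a bad product is below one in a million, illustrating how "sufficiently many witnesses" makes the error probability as small as we like.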
Evolution strategies (ESs) are algorithms which imitate the principles of natural evolution as a method to solve parameter optimization problems [7], [162]. They were developed in Germany during the 1960s. As stated in [162]: “In 1963 two students at the Technical University of Berlin met and were soon collaborating on experiments which used the wind tunnel of the Institute of Flow Engineering. During the search for the optimal shapes of bodies in a flow, which was then a matter of laborious intuitive experimentation, the idea was conceived of proceeding strategically. However, attempts with the coordinate and simple gradient strategies were unsuccessful. Then one of the students, Ingo Rechenberg, now Professor of Bionics and Evolutionary Engineering, hit upon the idea of trying random changes in the parameters defining the shape, following the example of natural mutations. The evolution strategy was born.” (The second student was Hans-Paul Schwefel, now Professor of Computer Science and Chair of System Analysis).
As stated in the Introduction, it seems that most researchers modified their implementations of genetic algorithms either by using non-standard chromosome representation and/or by designing problem-specific genetic operators (e.g., [141], [385], [65], [76], etc.) to accommodate the problem to be solved, thus building efficient evolution programs. Such modifications were discussed in detail in the previous two chapters (Chapters 9 and 10) for the transportation problem and the traveling salesman problem, respectively. In this chapter, we have made a somewhat arbitrary selection of a few other evolution programs developed by the author and other researchers, which are based on non-standard chromosome representation and/or problem-specific knowledge operators. We discuss some systems for scheduling problems (section 11.1), the timetable problem (section 11.2), partitioning problems (section 11.3), and the path planning problem in mobile robot environment (section 11.4). The chapter concludes with an additional section 11.5, which provides some brief remarks on a few other, interesting problems.
In this chapter we review briefly two powerful evolutionary techniques; these are evolutionary programming (section 13.1) and genetic programming (section 13.2). These two techniques were developed a quarter of a century apart from each other; they aimed at different problems; they use different chromosomal representations for individuals in the population, and they put emphasis on different operators. Yet, they are very similar from our perspective of “evolution programs”: for particular tasks they aim at, they use specialized data structures (finite state machines and tree-structured computer programs) and specialized “genetic” operators. Also, both methods must control the complexity of the structure (some measure of the complexity of a finite state machine or a tree might be incorporated in the evaluation function). We discuss them in turn.
In this book we discussed different strategies, called Evolution Programs, which might be applied to hard optimization problems and which were based on the principle of evolution. Evolution programs borrow heavily from genetic algorithms. However, they incorporate problem-specific knowledge by using “natural” data structures and problem-sensitive “genetic” operators. The basic difference between GAs and EPs is that the former are classified as weak, problem-independent methods, which is not the case for the latter.
As we already discussed in the previous chapters, the best known evolution programs include genetic algorithms, evolutionary programming, evolution strategies, and genetic programming. There are also many hybrid systems which incorporate various features of the above paradigms, and consequently are hard to classify; anyway, we refer to them just as evolution programs (or evolutionary algorithms, or evolutionary computation techniques).
The field of evolutionary computation has been growing rapidly over the last few years. Yet, there are still many gaps to be filled, many experiments to be done, many questions to be answered. In the final chapter of this text we examine a few important directions in which we can expect a lot of activity and significant results; we discuss them in turn.
... • genetic algorithms (GAs) [127], designed to explore the mechanisms underlying systems capable of self-adaptation to their environment. ...
... To complete the four main EAs paradigms, genetic programming [127] emerged as another significant approach. It focused on evolving computer programs rather than fixed structures, allowing the creation of complex solutions through iterative improvements and genetic operators. ...
... Genetic operators (GOs) are fundamental elements of genetic algorithms, which are inspired by the biological process of natural selection [50][51][52][53][54]. GOs are used to generate new solutions by combining two or more solutions from the current population. ...
... Equations (33) and (34) provide the mathematical expression for the crossover process, whereas Eqs. (35) and (36) define the genetic mutation operator [52][53][54]. ...
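Since Eqs. (33)–(36) are not reproduced in the snippet, the textbook binary forms of the two operators can serve as a stand-in sketch (the cut point and the per-gene mutation rate `pm` are the usual illustrative choices, not the cited paper's exact definitions):

```python
import random

def one_point_crossover(p1, p2):
    """Classic one-point crossover: cut both parents at the same
    random position and swap the tails, producing two offspring."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(chrom, pm=0.05):
    """Bit-flip mutation: flip each gene independently with
    probability pm, injecting variation the way natural mutation does."""
    return [1 - gene if random.random() < pm else gene for gene in chrom]
```

Crossover recombines material already present in the population, while mutation can reintroduce alleles that selection has driven out — the complementary roles the snippet alludes to.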
Article
Full-text available
This paper introduces a novel adaptive genetic operator-based mountain gazelle optimizer (AGOMGO) algorithm for optimal planning of different types of distributed generation (DG) and capacitor bank (CB) along with network reconfiguration (NR) of large radial power distribution networks (RDPNs). In the proposed algorithm, the mountain gazelle optimizer is integrated with adaptive genetic operators to improve its ability to efficiently explore complex search spaces and identify optimal solutions. The main aim of this study is to optimize the performance of large RDPNs by mitigating power losses, enhancing voltage profiles, and increasing the voltage stability index. To achieve this goal, the optimization problem is formulated as a non-linear mixed-integer problem. The effectiveness of the proposed approach is validated through simulation studies on 118-bus and 136-bus RDPNs with various configurations for integrating compensating devices alongside optimal network reconfiguration. Furthermore, a comparative analysis is conducted to evaluate the performance of the proposed AGOMGO algorithm against the standard mountain gazelle optimizer and other well-known optimization algorithms, as well as the results of existing studies. The results indicate a significant reduction in real power losses, with a 90.77% decrease in the 118-bus system and an 87.54% decrease in the 136-bus system, achieved through the integration of DG operating at optimal power factor and CB along with NR.
... GAs are optimization techniques inspired by Darwin's theory of natural selection. Genetic operators and strategies are explained according to the following books and works: [35][36][37][38]. The algorithm presented in this work was developed specifically for its needs and adapted to the presented optimization problem. ...
Article
Full-text available
This study investigates the optimization of thickness distribution in simply supported and cantilever plates to maximize gaps between adjacent natural frequencies. The research employs a genetic algorithm (GA) as the primary optimization tool, with the finite element method (FEM) integrated for structural dynamics analysis. The optimization process focuses on tailoring the plate thickness (stiffness) while maintaining fixed overall dimensions. The study considers square and rectangular plates with two boundary conditions: simply supported and cantilever. The optimization targets gaps between the first three natural frequencies. The GA-based optimizer demonstrates effectiveness in increasing the relative separation between neighboring natural frequencies, as defined by the fitness function. Compared to the reference individuals, the optimized individuals achieve objective function values from 0.25 to 2.5 times higher. The GA optimization tool is also compared with an alternative optimization tool achieving up to 35% better results. This research contributes to the field of structural dynamics by demonstrating the potential of genetic algorithms in optimizing plate designs for enhanced vibrational characteristics. Such optimization is particularly relevant in civil engineering, where plate elements are widely used, and where controlling dynamic properties can improve serviceability and reduce the risk of resonance under operational or environmental loads. The findings have implications for various engineering applications where controlling dynamic properties of plate structures is crucial.
... In the context of agriculture, GAs have been used to optimize conflicting objectives, such as maximizing crop yield, minimizing costs, and ensuring environmental sustainability. These techniques enable dynamic responses to emerging agricultural challenges, such as extreme weather events, pest outbreaks, and shifting market demands [81][82][83]. For instance, GAs have been applied to identify optimal crop rotation strategies, irrigation schedules, and fertilizer application rates. ...
Article
Full-text available
This study presents a novel approach to managing the cost–time–quality trade-off in modern agriculture by integrating fuzzy logic with a genetic algorithm. Agriculture faces significant challenges due to climate variability, economic constraints, and the increasing demand for sustainable practices. These challenges are compounded by uncertainties and risks inherent in agricultural processes, such as fluctuating yields, unpredictable costs, and inconsistent quality. The proposed model uses a fuzzy multi-objective optimization framework to address these uncertainties, incorporating expert opinions through the alpha-cut technique. By adjusting the level of uncertainty (represented by alpha values ranging from 0 to 1), the model can shift from pessimistic to optimistic scenarios, enabling strategic decision making. The genetic algorithm improves computational efficiency, making the model scalable for large agricultural projects. A case study was conducted to optimize resource allocation for rice cultivation in Asia, barley in Europe, wheat globally, and corn in the Americas, using data from 2003 to 2025. Key datasets, including the USDA Feed Grains Database and the Global Yield Gap Atlas, provided comprehensive insights into costs, yields, and quality across regions. The results demonstrate that the model effectively balances competing objectives while accounting for risks and opportunities. Under high uncertainty (α = 0), the model focuses on risk mitigation, reflecting the impact of adverse climate conditions and market volatility. On the other hand, under more stable, lower-volatility market conditions (α = 1), the solutions prioritize efficiency and sustainability. The genetic algorithm’s rapid convergence ensures that complex problems can be solved in minutes. This research highlights the potential of combining fuzzy logic and genetic algorithms to transform modern agriculture.
By addressing uncertainties and optimizing key parameters, this approach paves the way for sustainable, resilient, and productive agricultural systems, contributing to global food security.
... Mutation involves changing the type (which aligns with the scheduling approach) of a node chosen at random within the tree to a different type. Selection and reproduction are performed using a tournament method [34]. More details about the genetic operators used in the described method can be found in [35]. ...
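A minimal sketch of the tournament method mentioned above (the sample size `k` is an illustrative assumption; the exact scheme of [34] may differ):

```python
import random

def tournament_select(population, fitness, k=3):
    """Tournament selection: sample k individuals uniformly at random
    and return the fittest of the sample. Larger k means stronger
    selection pressure; k = 1 degenerates to random selection."""
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)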
Article
Full-text available
In a kind of system, where strong time constraints exist, very often, worst-case design is applied. It could drive to the suboptimal usage of resources. In previous work, the mechanism of self-adaptive software that is able to reduce this was presented. This paper introduces a novel extension of the method for self-adaptive software synthesis applicable for real-time multicore embedded systems with dynamic voltage and frequency scaling (DVFS). It is based on a multi-criteria approach to task scheduling, optimizing both energy consumption and proof against time delays. The method can be applied to a wide range of embedded systems, such as multimedia systems or Industrial Internet of Things (IIoT). The main aim of this research is to find the method of automatic construction of the task scheduler that is able to minimize energy consumption during the varying execution times of each task.
... The algorithm proceeds with the selection of parent solutions based on their fitness scores. Crossover and mutation operators are then applied to generate new offspring (i.e., new ANN configurations) [30]. ...
Article
Full-text available
Facial expressions are the most basic non-verbal method people use to communicate feelings, intentions, and reactions without words. Recognizing these expressions accurately is essential for a variety of applications, such as human–computer interaction (HCI) tools, security systems, and emotionally intelligent artificial intelligence technologies. As the complexities surrounding these relationships have become better understood, increasingly sophisticated systems for identifying and detecting facial expressions of different emotions have been developed. This paper presents improved performance of Facial Expression Recognition (FER) systems by combining Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), two renowned artificial intelligence techniques with complementary strengths. ANNs are inspired by the neural architecture of the human brain and, after training on examples, can learn and recognize patterns in previously unseen data; GAs, by contrast, derive from the fundamental principles of natural selection and perform optimization through the evolutionary steps of fitness evaluation, comparison, selection, crossover, and mutation. The research aims to mitigate problems of conventional methods, such as overfitting and poor generalization, in order to design an FER model with the potential to perform far more accurately. A hybrid ANN-GA model that uses Petri Nets and production systems is proposed for real-time video sequence analysis, predicting the dynamic facial expressions of anger, surprise, disgust, joy, sadness, and fear with high precision. Importantly, the results show that this integrated model substantially improves emotion detection across varied scenes and is therefore generalizable to many domains, from security and surveillance through biomedicine to interactive AI-driven systems.
Implications for implementing real-time, context-aware recognition of human emotions based on AI technologies are far-reaching, as they demonstrate the potential of hybrid AI systems to enhance emotion deciphering.
... By providing a comprehensive toolkit for CGAs, pycellga aims to advance the field of evolutionary computation and equip researchers with the tools needed to solve increasingly complex optimization problems effectively. The integration of cellular automata with genetic algorithms in pycellga represents a significant advancement, offering greater flexibility and adaptability compared to traditional methods (Eiben & Smith, 2015;Karakaya & Satman, 2024;Michalewicz, 1996). ...
Article
Full-text available
pycellga is a Python package that implements cellular genetic algorithms (CGAs) for optimizing complex problems. CGAs combine the principles of cellular automata and traditional genetic algorithms, utilizing a spatially structured population organized in a grid-like topology. This structure allows each individual to interact only with its neighboring individuals, promoting diversity and maintaining a balance between exploration and exploitation during the optimization process. While CGAs themselves are not a novel contribution of this work, pycellga significantly enhances their applicability by integrating advanced features and providing unparalleled versatility. The package supports binary, real-valued, and permutation-based optimization problems, making it adaptable to a wide variety of problem domains. Its use of machine-coded operators for real-valued optimization, adhering to IEEE 754 floating-point arithmetic standards, ensures high precision and computational efficiency. Moreover, pycellga is designed to be extensible, enabling users to easily customize selection, crossover, and mutation operators to suit specific problem requirements. The package is designed to be user-friendly, with a straightforward installation process and comprehensive documentation. Researchers and practitioners in fields such as operations research, artificial intelligence, and machine learning can leverage pycellga to tackle complex optimization challenges effectively. By integrating the principles of cellular automata with genetic algorithms, pycellga represents a significant advancement in the field of evolutionary computation, offering increased flexibility and adaptability compared to traditional methods.
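The grid-restricted mating at the heart of a cellular GA can be sketched generically (this is the underlying idea only, not pycellga's actual API; the function names below are my own):

```python
def von_neumann_neighbors(i, j, rows, cols):
    """The four lattice neighbors (N/S/E/W) of cell (i, j) on a
    toroidal grid. In a cellular GA each individual mates only
    within such a neighborhood, which slows the takeover of good
    genes and so preserves diversity."""
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

def best_neighbor(grid, i, j, fitness):
    """Pick the fittest individual among the neighbors of (i, j);
    a typical mate-selection step in a synchronous cellular GA."""
    rows, cols = len(grid), len(grid[0])
    return max((grid[r][c] for r, c in von_neumann_neighbors(i, j, rows, cols)),
               key=fitness)
```

Because information spreads only one cell per generation along the grid, exploration and exploitation are balanced structurally rather than by tuning a global selection pressure — the property the pycellga abstract highlights.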
Article
Biomechanical models of the human body are generally complex, and modeling its behavior requires writing and solving many differential equations. Instead of this time-consuming process, it is considered more practical to construct smaller models that preserve the main behaviors of the system. In this study, instead of the human body model proposed by ISO 7962, a smaller model with an analytical solution has been proposed. Genetic algorithms were used to perform the model reduction and to determine the parameters of the new system that replicate the behavior of the original. The behaviors of both the original system and the reduced-order system were then compared graphically. The comparisons show that this small model can also be used in calculations and that very good results can be achieved on aspects such as low-frequency resonances.
Article
Full-text available
In this study, we introduce a technique for unsupervised design and design automation of resonator-based microstrip sensors for dielectric material characterization. Our approach utilizes fundamental building blocks such as circular and square resonators, stubs, and slots, which can be adjusted in size and combined into intricate geometries using appropriate Boolean transformations. The sensor’s topology, including its constituent components and their dimensions, is governed by artificial intelligence (AI) techniques, specifically evolutionary algorithms, in conjunction with gradient-based optimizers. This enables not only the explicit enhancement of the circuit’s sensitivity but also ensures the attainment of the desired operating frequency. The design process is entirely driven by specifications and does not necessitate any interaction from the designer. We extensively validate our design framework by designing a range of high-performance sensors. Selected devices are experimentally validated, calibrated using inverse modeling techniques, and utilized for characterizing dielectric samples across a wide spectrum of permittivity and thickness. Moreover, comprehensive benchmarking demonstrates the superiority of AI-generated sensors over state-of-the-art designs reported in the literature.
Conference Paper
Mapping the under-canopy surface of vast forestry areas using Unmanned Aerial Vehicles poses a multitude of obstacles. This paper presents the technical architecture and the user interfaces of an application that improves mapping operations under the canopy of trees in forests using drones. The approach aims to improve the planning of mapping operations, minimizing the time needed to map vast forest areas and improving operator ergonomics and safety. The application facilitates drone management, enabling strategic survey planning and operational efficiency. This innovative method promises to optimize the harvests of valuable under-canopy tree georeferenced data by reducing labor and time spent in rugged terrain.
Chapter
The chapter expands the knowledge about the principles of evolutionary algorithms. Basic ideas of quantum computing are introduced, and the formulation of quantum-inspired evolutionary algorithms (QIEAs) is presented. The chapter combines the two fields, evolutionary algorithms and quantum computing, to implement sustainable information security, and provides information on formulating computational techniques for different information security concerns. It shows how to build the implemented methods and assess their performance in information security. Hybrid QIEAs are further demonstrated through accounts of actual applications and experiments. The chapter aims to demonstrate the importance of security issues and the need for further research to create even better security solutions.
Preprint
Full-text available
Cyclic fluid injection is a promising hydraulic stimulation strategy for balancing seismicity control and permeability enhancement in low-permeability underground formations. However, the mechanisms by which cyclic injection affects fracture permeability and seismic hazard remain unclear. To address this, we simulated the slip behavior of a pre-stressed rough natural granite fracture under monotonic and cyclic fluid pressurizations using the rate-and-state friction law. Transient permeability was calculated using an aperture model based on displacement and velocity, with constitutive parameters constrained by numerical inversion of experimental data. Our results reveal that monotonic pressurization induces a sharp increase in slip displacement and velocity, enhancing fracture permeability by approximately four times. However, a subsequent significant drop in slip velocity reduces permeability to about three times its initial value. Conversely, cyclic pressurization results in slip displacement and velocity changes per cycle that are approximately 1/10 of those in the monotonic case. This leads to a smaller permeability enhancement in each cycle, but ultimately causes a higher long-term permeability increase (around 5 times the initial value) than in the monotonic case at similar final slip displacements (~0.1 mm). The reduced slip velocity in the cyclic case is primarily due to the lower peak injection pressures (95% of the peak in the monotonic case) and subsequent depressurizations. These results highlight the critical role of slip velocity history in controlling transient fracture permeability evolution and demonstrate the potential of cyclic hydraulic shearing to enhance permeability while reducing seismicity under longer stimulation times, provided that injection parameters are carefully designed.
Article
Full-text available
In many applications, it is necessary to optimise the performance of hydrodynamic (HD) bearings. Many studies have proposed different strategies, but there remains a lack of conclusive research on the suitability of various optimisation methods. This study evaluates the most commonly used algorithms, including the genetic (GA), particle swarm (PSWM), pattern search (PSCH) and surrogate (SURG) algorithms. The effectiveness of each algorithm in finding the global minimum is analysed, with attention to the parameter settings of each algorithm. The algorithms are assessed on HD journal and thrust bearings, using analytical and numerical solutions for friction moment, bearing load-carrying capacity and outlet lubricant flow rate under multiple operating conditions. The results indicate that the PSCH algorithm was the most efficient in all cases, excelling in both finding the global minimum and speed. While the PSWM algorithm also reliably found the global minimum, it exhibited lower speed in the defined problems. In contrast, genetic algorithms and the surrogate algorithm demonstrated significantly lower efficiency in the tested problems. Although the PSCH algorithm proved to be the most efficient, the PSWM algorithm is recommended as the best default choice due to its ease of use and minimal sensitivity to parameter settings.
Article
Full-text available
The grey wolf optimizer (GWO) is a metaheuristic algorithm recognized for its effectiveness; however, it faces several limitations, such as a lack of diversity in its population, a tendency to prematurely converge on local optima, and insufficient convergence speed. To address these issues, we propose an innovative hybrid algorithm that combines the advantages of GWO with the marine predator algorithm (MPA), leading to the creation of the Hybrid Grey Wolf Marine Predator Algorithm (HGWMPA). By integrating the adaptive characteristics of MPA, this hybrid approach fosters a robust search mechanism that effectively balances exploration and exploitation. We performed comprehensive experimental assessments utilizing benchmark functions from the CEC competitions of 2014, 2017, 2020, and 2022. The findings indicate that the HGWMPA consistently surpasses numerous leading optimization methods, achieving an average rank of 1 across most benchmark functions. Specifically, HGWMPA secured top positions in 76.67% of functions in the CEC 2014 test suite, 70.00% in CEC 2017, 90.00% in CEC 2020, and 66.67% in CEC 2022, showcasing its robust performance across various benchmark scenarios. The experimental results reveal that HGWMPA excels in global exploration, local exploitation, convergence speed, and accuracy, achieving optimal or near-optimal solutions with minimal standard deviations. A detailed performance evaluation, employing the Wilcoxon rank-sum test and the MARCOS MCDM ranking technique, further confirms the competitive advantages of HGWMPA. The algorithm’s adaptability, characterized by the dynamic adjustment of parameters, enables an effective balance between exploration and exploitation, making it particularly suitable for a wide range of engineering design problems. 
Sensitivity analyses indicate that changes in population size, maximum iteration, and other parameter limits significantly influence the algorithm’s performance, providing valuable insights for enhancing their configurations. HGWMPA has been successfully applied to diverse engineering design challenges, demonstrating its versatility and effectiveness in minimizing costs while adhering to critical design constraints. This advancement in optimization techniques, represented by HGWMPA, integrates pioneering concepts from both GWO and MPA to effectively tackle real-world challenges.
Article
The trends in the development of automated design methods for modern submicron integrated circuits (ICs) are associated with the relentless growth in the degree of integration and complexity of ICs. This leads to increased sensitivity and correlation of the parameters considered, which significantly impacts the requirements for automated design tools under tight project deadlines. A review of the literature and the experience of leading global companies in electronic design automation shows that the primary way to enhance the efficiency of automated physical design of ICs is to create and utilize design tools that account for the influence of various parameters on the performance of ICs, while also facilitating the reduction of design time. Multi-parameter optimization is a critically important task in IC design. It requires consideration of numerous factors and parameters, such as physical constraints, minimization of power consumption, optimization of performance, and manufacturing costs. Modern IC design technologies are becoming increasingly complex, and traditional methods of element placement require constant refinement to ensure their effectiveness and compliance with requirements. In this regard, multi-parameter optimization in IC design is attracting more attention from researchers and engineers striving to create solutions capable of addressing the growing complexities. The goal of this article is to examine the main approaches and methods of multi-parameter optimization in the physical design of ICs, including the application of genetic algorithms, quadratic assignment methods, matrix transformations, and other modern tools such as design rule checking.
Chapter
Classification, one of the most studied areas of data mining, produces classification rules during training or learning. Classification rule mining, an important data-mining task, extracts significant rules for the classification of objects. In this chapter, class-specific rules are represented in IF-THEN form. With the popularity of soft computing methods, researchers explore different soft computing tools for rule discovery; the genetic algorithm (GA) is one such tool. Over time, new GA techniques for forming classification rules have been invented. In this chapter, the authors focus on understanding the evolution of GA in classification rule mining to obtain an optimal rule set that builds an efficient classifier.
Article
Full-text available
In order to more effectively solve the 0-1 knapsack problem (0-1KP) using evolutionary algorithms, this paper proposes a one-way mutation strategy (OWMS) to improve the performance by analyzing the shortcomings of existing methods for handling infeasible solutions. In addition, a new algorithm rRGOA is proposed to eliminate infeasible solutions of 0-1KP based on OWMS. To verify the effectiveness of OWMS and rRGOA, four evolutionary algorithms are used to solve 108 large-scale 0-1KP instances. The calculation results indicate that combining OWMS with existing methods can greatly improve the performance of evolutionary algorithms for solving 0-1KP. Meanwhile, rRGOA is also an effective algorithm for handling infeasible solutions. The comparison with five state-of-the-art algorithms and deterministic algorithms shows that genetic algorithm is the best for solving 0-1KP when combining OWMS with existing methods to handle infeasible solutions. Finally, it is pointed out that the inverse strongly correlated 0-1KP instances are the easiest for evolutionary algorithms.
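OWMS itself is not detailed in the abstract; for contrast, one common baseline for handling infeasible 0-1KP solutions, which the paper's methods are compared against conceptually, is greedy repair, sketched here under the assumption of positive weights and values:

```python
def greedy_repair(x, weights, values, capacity):
    """Repair an infeasible 0-1 knapsack solution by removing packed
    items in increasing value/weight ratio order (least profitable per
    unit weight first) until the total weight fits the capacity.
    Assumes all weights and values are positive."""
    packed = sorted((i for i, bit in enumerate(x) if bit),
                    key=lambda i: values[i] / weights[i])
    x = list(x)
    load = sum(weights[i] for i, bit in enumerate(x) if bit)
    for i in packed:
        if load <= capacity:
            break
        x[i] = 0
        load -= weights[i]
    return x
```

Repair of this kind keeps every individual feasible at evaluation time, whereas penalty-based handling tolerates infeasible individuals and discounts their fitness; the paper's experiments weigh exactly such trade-offs on large 0-1KP instances.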
Article
Full-text available
Optimization methods have been rapidly entering the realm of antenna design over the last several years. Despite many available algorithms, practical optimization is demanding due to the high electromagnetic (EM) analysis cost necessary for dependable antenna assessment. This is particularly troublesome in global parameter tuning, routinely conducted using nature-inspired procedures. Unfortunately, these methods are known for their poor computational efficiency. Surrogate modeling may mitigate this issue to a certain extent, yet dimensionality and parameter range issues severely impede the construction of accurate metamodels. This research suggests an innovative algorithm for global parameter adjustment of antenna systems. It conducts a simplex-based search in the space of the structure’s performance figures (e.g., center frequencies, bandwidth, etc.). Operating at this level regularizes the objective function. Low cost is achieved by the simplex updating strategy requiring only one EM analysis per iteration, and multi-resolution simulations. The global search state involves coarse-discretization full-wave analysis, whereas final (gradient-based) parameter tuning involves medium-fidelity simulations for sensitivity estimation and high-fidelity models for design verification. The developed algorithmic framework is validated using four microstrip antennas. The results generated in multiple runs demonstrate global search capability and remarkably low expenses, corresponding to around a hundred high-fidelity analyses on average. The performance level is competitive over local and global optimizers.
Article
Full-text available
Meeting the International Maritime Organization’s net-zero target by 2050 necessitates the replacement of marine fossil fuels with sustainable alternatives, such as dimethyl ether (DME). Silicon-doped aluminophosphate (SAPO) solid acid catalysts,...
Article
This paper introduces a global approach to magnetic topology optimization for hard magnetic materials, aiming to achieve a specific stray field distribution. The proposed method utilizes a hybrid optimization algorithm that integrates the strengths of both local and global optimization techniques. The ideas, advantages and disadvantages of each approach are discussed in detail and tested and illustrated using selected model problems. The results show that global optimization approaches are necessary to achieve significantly lower minima of the objective function, since multiple local minima might occur for such problems and the outcome of local optimization methods highly depend on the chosen starting configurations.
Article
Full-text available
Consideration is given to the robotic assembly line balancing problem (RALBP) under uncertain task (operation) times, a critical challenge encountered in automated manufacturing systems. RALBP is a decision problem which seeks the optimal assignment of the assembly work, as well as the most suitable robots, to the workstations of the assembly line with respect to objectives related to the capacity of the line and/or its cost of operation. When multiple types of robots with different capabilities are used, task times may vary depending on the robot type and the nature of the task. Task variation is expected to be small for simple tasks but may be quite large for complex and failure-sensitive operations. To deal with uncertainty in task variation, we use fuzzy logic theory. First, we formally introduce the fuzzy RALBP and then describe in depth the fuzzy representation of the task times. We address the RALBP with respect to two optimization objectives, namely the production rate and workload smoothing. Since the problem is known to be NP-hard, we explore its heuristic solution through a new robust multi-objective genetic algorithm (MOGA) aiming to determine the Pareto-optimal set. Simulation experiments assess MOGA's efficiency in comparison to the well-known NSGA-II and MOPSO algorithms, while also exploring the trade-off between the two conflicting objectives.
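The abstract leaves the fuzzy encoding unspecified; a common choice for uncertain task times (assumed here, not taken from the paper) is a triangular fuzzy number (a, m, b) giving optimistic, most likely, and pessimistic durations, with the centroid as a simple defuzzified value:

```python
# Triangular fuzzy number (a, m, b): optimistic, most likely, pessimistic.
# The centroid of a triangular membership function is the mean of the three points.
def centroid(a, m, b):
    return (a + m + b) / 3.0

# A simple task has a narrow spread; a complex, failure-sensitive task a wide one.
simple_task = (9.5, 10.0, 10.5)
complex_task = (8.0, 12.0, 20.0)
print(centroid(*simple_task))   # → 10.0
print(centroid(*complex_task))  # ≈ 13.33, pulled above the mode by the long tail
```

Fuzzy cycle times and workloads can then be compared or defuzzified in the fitness evaluation of the genetic algorithm.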
Chapter
Decision trees are part of decision theory and are excellent tools in the decision-making process. The majority of decision tree learning methods were developed within the last 30 years by scholars such as Quinlan, Mitchell, and Breiman, to name a few (Ozgulbas & Koyuncugil, 2006). A number of methods and sophisticated software packages are used to present decision trees graphically. Decision trees have a great number of benefits and are widely used in many business functions as well as in different industries. However, there are also disagreements and various concerns as to how useful decision trees really are. As technology evolves, so do decision trees; consequently, not only do many controversies arise, but so do solutions and new proposals addressing these arguments.
Chapter
The Role of Network Security and 5G Communication in Smart Cities and Industrial Transformation explores the transformative power of 5G communication and network security in creating smarter, safer, and more sustainable urban and industrial ecosystems. This book highlights how 5G technology drives real-time connectivity for applications such as intelligent transportation, healthcare, energy management, and industrial automation while emphasizing the critical need for robust cybersecurity measures. The book integrates diverse topics, from 5G-enabled edge computing and blockchain-based healthcare systems to big data analytics and AI-powered security solutions. It offers insights into mitigating vulnerabilities, protecting data privacy, and building resilient infrastructures to support Industry 4.0 and sustainable smart cities. Designed for researchers, professionals, and policymakers, this resource provides practical strategies and forward-thinking perspectives on shaping a hyperconnected future. Key Features: - Explores 5G's role in smart city and industrial applications. - Highlights cybersecurity challenges and solutions. - Examines healthcare innovations using 5G and blockchain. - Discusses big data and AI in secure mobile services. - Provides actionable insights for sustainable transformation.
Article
Full-text available
As product diversity continues to expand in today’s market, there is an increasing demand from customers for unique and varied items. Meeting these demands necessitates the transfer of different sub-product components to the production line, even within the same manufacturing process. Lean manufacturing has addressed these challenges through the development of kitting systems that streamline the handling of diverse components. However, to ensure that these systems contribute to sustainable practices, it is crucial to design and implement them with environmental considerations in mind. The optimization of warehouse layouts and kitting preparation areas is essential for achieving sustainable and efficient logistics. To this end, we propose a comprehensive study aimed at developing the optimal layout, that is, creating warehouse layouts and kitting preparation zones that minimize waste, reduce energy consumption, and improve the flow of materials. The problem of warehouse location assignment is classified as NP-hard, and the complexity increases significantly when both storage and kitting layouts are considered simultaneously. This study aims to address this challenge by employing the genetic algorithm (GA) and Ant Colony Optimization (ACO) methods to design a system that minimizes energy consumption. Through the implementation of genetic algorithms (GAs), a 24% improvement was observed. This enhancement was achieved by simultaneously optimizing both the warehouse layout and the kitting area, demonstrating the effectiveness of integrated operational strategies. This substantial reduction not only contributes to lower operational costs but also aligns with sustainability goals, highlighting the importance of efficient material handling practices in modern logistics operations. This article provides a significant contribution to the field of sustainable logistics by addressing the vital role of kitting systems within green supply chain management practices. 
By aligning logistics operations with sustainability goals, this study not only offers practical insights but also advances the broader conversation around environmentally conscious supply chain practices.
Chapter
Full-text available
This chapter presents an evolutionary algorithm approach, applying a genetic algorithm to address the practical problem of planning corridors in a specific geographical area. The corridor location problem involves identifying a near-optimal set of spatial alternative paths between two locations to determine the best route for the right of way, considering the topographical conditions. The proposed approach uses a path-based search through genetic operators to explore spatial alternative routes for corridor location. Using real data from Veracruz, Mexico, the experimental evaluation shows the ability to compute a set of near-optimal spatial alternative paths that improve up to 18% of the topography impact cost compared to a greedy pathfinding method, such as those included in traditional geographical information systems. This approach contributes to developing spatial alternative routes for linear installation planning in the decision-making process.
Article
Full-text available
Age‐at‐death estimation is an arduous task in human identification based on characteristics such as appearance, morphology or ossification patterns in skeletal remains. This process is performed manually, although in recent years there have been several studies that attempt to automate it. One of the most recent approaches involves considering interpretable machine learning methods, obtaining simple and easily understandable models. The ultimate goal is not to fully automate the task but to obtain an accurate model supporting the forensic anthropologists in the age‐at‐death estimation process. We propose a semi‐automatic method for age‐at‐death estimation based on nine pubic symphysis traits identified from Todd's pioneering method. Genetic programming is used to learn simple mathematical expressions following a symbolic regression process, also developing feature selection. Our method follows a component‐scoring approach where the values of the different traits are evaluated by the expert and aggregated by the corresponding mathematical expression to directly estimate the numeric age‐at‐death value. Oversampling methods are considered to deal with the strongly imbalanced nature of the problem. State‐of‐the‐art performance is achieved thanks to an interpretable model structure that allows us to both validate existing knowledge and extract some new insights in the discipline.
Article
Full-text available
Consideration is given to the resource leveling problem (RLP) in resource-constrained project scheduling (RCPS). Although the RLP has gained increasing research interest, multiple optimization criteria are rarely considered simultaneously in the literature. In this paper, a multi-objective version of the RLP is investigated, aiming to simultaneously minimize the resource imbalance, the peak of the resource usage, and the makespan. A metaheuristic algorithm devoted to the search for Pareto-optimal RLP solutions is presented. This algorithm is an adaptation of h-NSDE (hybrid non-dominated sorting differential evolution), which has recently shown excellent performance on a particular class of machine scheduling problems. Using existing benchmark data sets, we test the performance of h-NSDE against three of the most prominent multi-objective population-based metaheuristics in the literature, namely NSGA-II, SPEA2, and PAES. The results obtained are quite promising, demonstrating a clear superiority of h-NSDE in terms of both solution quality and diversity with regard to the Pareto front.
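Non-dominated sorting, the core of NSGA-II and h-NSDE alike, rests on a simple Pareto dominance test over the objective vectors. For the three minimization objectives named in the abstract (resource imbalance, peak resource usage, makespan), the test can be sketched as follows; the example solutions are made up for illustration:

```python
# For minimization: a dominates b if a is no worse in every objective
# and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Objective vectors: (resource imbalance, peak usage, makespan), all minimized.
sol_a = (4.0, 12, 30)
sol_b = (5.5, 12, 31)
sol_c = (3.0, 14, 29)
print(dominates(sol_a, sol_b))  # → True: no worse anywhere, strictly better twice
print(dominates(sol_a, sol_c))  # → False: a trade-off, so both stay non-dominated
```

Repeatedly stripping the set of mutually non-dominated solutions yields the ranked fronts used for selection.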
Article
Full-text available
Gas turbine engines at sea, characterized by nonlinear behavior and parameter variations due to dynamic marine environments, pose challenges for precise speed control. The focus of this study was a COGAG system with four LM-2500 gas turbines. A third-order model with time delay was derived at three operating points using commissioning data to capture the engines’ inherent characteristics. The cascade controller design employs a real-coded genetic algorithm–PID (R-PID) controller, optimizing PID parameters for each model. Simulations revealed that the R-PID controllers, optimized for robustness, show Nyquist path stability, maintaining the furthest distance from the critical point (−1, j0). The smallest sensitivity function Ms (maximum sensitivity) values and minimal changes in Ms for uncertain plants confirm robustness against uncertainties. Comparing transient responses, the R-PID controller outperforms traditional methods like IMC and Sadeghi in total variation in control input, settling time, overshoot, and ITAE, despite a slightly slower rise time. However, controllers designed for specific operating points show decreased performance when applied beyond those points, with increased rise time, settling time, and overshoot, highlighting the need for operating-point-specific designs to ensure optimal performance. This research underscores the importance of tailored controller design for effective gas turbine engine management in marine applications.
Article
Full-text available
The structure of thermoset composite laminated plates is made by stacking layers of plies with different fiber orientations. Similarly, the stiffened panel structure is assembled from components with varying ply configurations, resulting in thermal residual stresses and processing-induced deformations (PIDs) during manufacturing. To mitigate the residual stresses caused by the geometric features of corner structures and the mismatch between the stiffener-skin ply orientations, which lead to PIDs in composite-stiffened panels, this study proposes a multi-objective stacking optimization strategy based on an improved adaptive genetic algorithm (IAGA). The viscoelastic constitutive model was employed to describe the modulus variation during the curing process to ensure computational accuracy. In this study, the IAGA was proposed to optimize the ply-stacking sequence of L-shaped stiffeners in composite laminated structures. The results demonstrate a reduction in the spring-in angle to 0.12°, a 50% improvement compared to symmetric balanced stacking designs, while the buckling eigenvalues were improved by 20%. Additionally, the IAGA outperformed the traditional non-dominated sorting genetic algorithm (NSGA), achieving a threefold increase in the Pareto solution diversity under identical constraints and reducing the convergence time by 70%. These findings validate the effectiveness of asymmetric ply design and provide a robust framework for enhancing the structural performance and manufacturability of composite laminates.
Article
The Internet of Things (IoT) and its industrial counterpart, the Industrial Internet of Things (IIoT), have transformed sectors such as home automation, healthcare, and manufacturing by enhancing data management through advanced networking. However, the rapid growth of IIoT has introduced significant cybersecurity challenges, necessitating a comprehensive approach to securing data across the TCP/IP model. This paper presents a novel cybersecurity investment strategy formulated as a bi-objective optimization problem, validated through genetic and iterative algorithms. The strategy effectively balances security and cost, achieving nearly 50% efficiency in solution effectiveness. By utilizing these optimization techniques, the approach provides a practical and cost-effective solution to improve IIoT security within budget constraints, offering valuable insights for cybersecurity professionals seeking robust and economically viable solutions.
Article
Full-text available
Recent years have witnessed a tremendous popularity growth of optimization methods in high-frequency electronics, including microwave design. With the increasing complexity of passive microwave components, meticulous tuning of their geometry parameters has become imperative to fulfill demands imposed by the diverse application areas. More and more often, achieving the best possible performance requires global optimization. Unfortunately, global search is an intricate undertaking. To begin with, reliable assessment of microwave components involves electromagnetic (EM) analysis entailing significant CPU expenses. On the other hand, the most widely used nature-inspired algorithms require large numbers of system simulations to yield a satisfactory design. The associated costs are impractically high if not prohibitive. The use of available mitigation methods, primarily surrogate-based approaches, is impeded by dimensionality-related problems and the complexity in microwave circuit characteristics. This research introduces a procedure for expedited globalized parameter adjustment of microwave passives. The search process is embedded in a surrogate-assisted machine learning framework that operates in a dimensionality-restricted domain, spanned by the parameter space directions being of importance in terms of their effects on the circuit characteristic variability. These directions are established using a fast global sensitivity analysis procedure developed for this purpose. Domain confinement reduces the cost of surrogate model establishment and improves its predictive power. The global optimization phase is complemented by local tuning. Verification experiments demonstrate the remarkable efficacy of the presented approach and its advantages over the benchmark methods that include machine learning in full-dimensionality space and population-based metaheuristics.
Chapter
A spacecraft in Earth orbit has large potential and kinetic energy: for each kilogram in a low Earth orbit, the total energy amounts to about 30 MJ.
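As a quick sanity check of that figure (not part of the chapter itself), the kinetic term alone already accounts for nearly all of it. Taking a circular low-Earth-orbit speed of roughly 7.8 km/s:

```latex
% Kinetic energy per unit mass at circular LEO speed v \approx 7.8\ \mathrm{km/s}:
\frac{E_k}{m} = \frac{v^2}{2}
  \approx \frac{\left(7.8 \times 10^{3}\ \mathrm{m/s}\right)^2}{2}
  \approx 3.0 \times 10^{7}\ \mathrm{J/kg} = 30\ \mathrm{MJ/kg}
```

The gravitational potential energy relative to the ground adds only a few MJ/kg at typical LEO altitudes (roughly gh with h of a few hundred kilometres), so the kinetic term dominates the quoted total.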
Article
Full-text available
Efficient algorithms for solving the Non-deterministic Polynomial-time (NP-) hard problems are essential in various manufacturing industries, where even minute optimisation gains in scheduling large-scale tasks on industrial machines can lead to considerable economisation of resources. We present an original evolutionary algorithm for solving a well-known common NP-hard problem, the Permutation Flowshop Scheduling. Taking inspiration from the occurrence patterns of eight known types of viral mutations, MuVE (Mutation-inspired Viral Evolutionary algorithm) focuses the computing power of its search process toward optimal solutions on narrow areas rather than the entire possible search domain. Renewing the population of individuals by circular composition of permutations is triggered by the algorithm’s convergence and does not necessarily occur with each iteration. The diversity of the population is ensured by an exclusion criterion based on the distance calculation given by the Kendall-Tau correlation. Through detailed statistical analysis of the results, we demonstrate that MuVE, given limited processing time and resources, is capable of outperforming prime examples of related methods in most instances and also, in a considerable proportion of cases, of equalising benchmark upper bounds in the problem sets of Taillard, Carlier, Heller, Reeves and Lahiri.
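The Kendall tau distance that MuVE uses as its exclusion criterion counts the pairs of jobs that two permutations order differently; how MuVE thresholds it is not given in the abstract, so the sketch below shows only the distance itself:

```python
from itertools import combinations

# Kendall tau distance between two permutations of the same jobs:
# the number of job pairs whose relative order differs between p and q.
def kendall_tau_distance(p, q):
    pos = {job: i for i, job in enumerate(q)}          # position of each job in q
    return sum(1 for a, b in combinations(p, 2)        # pairs in p's order
               if pos[a] > pos[b])                     # inverted in q

print(kendall_tau_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # → 0: identical schedules
print(kendall_tau_distance([0, 1, 2, 3], [3, 2, 1, 0]))  # → 6: fully reversed
```

Rejecting new individuals whose distance to an existing population member falls below a threshold is one plausible way such a criterion keeps a permutation population diverse.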
Article
Full-text available
The development of contemporary electronic components, particularly antennas, places significant emphasis on miniaturization. This trend is driven by the emergence of technologies such as mobile communications, the internet of things, radio-frequency identification, and implantable devices. The need for small size is accompanied by heightened demands on electrical and field properties, posing a considerable challenge for antenna design. Shrinking physical dimensions can compromise performance, making miniaturization-oriented parametric optimization a complex and heavily constrained task. Additionally, the task is multimodal due to typical parameter redundancy resulting from various topological modifications in compact antennas. Identifying truly minimum-size designs requires a global search approach, as the popular nature-inspired algorithms face challenges related to computational efficiency and the need for reliable full-wave electromagnetic (EM) simulation to evaluate device’s characteristics. This study introduces an innovative machine learning procedure for cost-effective global optimization-based miniaturization of antennas. Our technique includes parameter space pre-screening and the iterative refinement of kriging surrogate models using the predicted merit function minimization as an infill criterion. Concurrently, the design task incorporates design constraints implicitly by means of penalty functions. The combination of these mechanisms demonstrates superiority over conventional techniques, including gradient search and electromagnetic-driven nature-inspired optimization. Numerical experiments conducted on four broadband antennas indicate that the proposed framework consistently yields competitive miniaturization rates across multiple algorithm runs at low costs, compared to the benchmark.
Article
Full-text available
Cryopreservation is the process of freezing and storing biological cells and tissues with the purpose of preserving their essential physiological properties after re-warming. The process is applied primarily in medicine in the cryopreservation of cells and tissues, for example stem cells, or articular cartilage. The cryopreservation of articular cartilage has a crucial clinical application because that tissue can be used for reconstruction and repair of damaged joints. This article concerns the identification of the thermophysical parameters of cryopreserved articular cartilage. Initially, the direct problem was formulated in which heat and mass transfer were analyzed by applying the finite difference method. After that, at the stage of inverse problem investigations, an evolutionary algorithm coupled with the finite difference method was used. The identification of the thermophysical parameters was carried out on the basis of experimental data on the concentration of the cryoprotectant. In the last part, this article presents the results of numerical analysis for both the direct and inverse problems. Comparing the results for the direct problem, in which the thermophysical parameters are taken from the literature, with the experimental data, we obtained a relative error between 0.06% and 15.83%. After solving the inverse problem, modified values for the thermophysical parameters were proposed.
Article
Full-text available
The significance of rigorous optimization techniques in antenna engineering has grown significantly in recent years. For many design tasks, parameter tuning must be conducted globally, presenting a challenge due to associated computational costs. The popular bio-inspired routines often necessitate thousands of merit function calls to converge, generating prohibitive expenses whenever the design process relies on electromagnetic (EM) simulation models. Surrogate-assisted methods offer acceleration, yet constructing reliable metamodels is hindered in higher-dimensional spaces and systems with highly nonlinear characteristics. This work suggests an innovative technique for global antenna optimization embedded within a machine-learning framework. It involves iteratively refined kriging surrogates and particle swarm optimization for generating infill points. The search process operates within a reduced-dimensionality region established through fast global sensitivity analysis. Domain confinement enables the creation of accurate behavioral models using limited training data, resulting in low CPU costs for optimization. Additional savings are realized by employing variable-resolution EM simulations, where low-fidelity models are utilized during the global search stage (including sensitivity analysis), and high-fidelity ones are reserved for final (gradient-based) tuning of antenna parameters. Comprehensive verification demonstrates the consistent performance of the proposed procedure, its superiority over benchmark techniques, and the relevance of the mechanisms embedded into the algorithm for enhancing search process reliability, design quality, and computational efficiency.
Article
Full-text available
This study introduces EFSGA, an evolutionary-based ensemble learning and feature selection technique inspired by the genetic algorithm, tailored as an optimized application-specific credit classifier for dynamic default prediction in FinTech lending. Our approach addresses existing gaps in metaheuristic applications for credit risk optimization by (i) hybridizing metaheuristics with machine learning to accommodate the dynamic nature of time-evolving systems and uncertainty, (ii) leveraging distributed and parallel computing for real-time solutions in complex risk decision processes, and (iii) enhancing applicability to unbalanced learning scenarios. The proposed model utilizes a heterogeneous ensemble of machine learning algorithms, incorporating a genetic algorithm to simultaneously optimize model hyperparameters and classification thresholds based on decision-maker objectives over time. This approach substantially improves out-of-sample model performance, providing valuable insights for timely post-loan risk management. The feature selection technique contributes to a balanced trade-off between model performance and interpretability—a pivotal consideration in metaheuristic-based models. Results obtained from the EFSGA model applied to a dataset spanning 2007 to 2014 unveiled an average improvement of 23% in application-specific evaluation metrics compared to conventional heterogeneous ensemble techniques across diverse risk-taking scenarios. Noteworthy is the proposed dynamic framework, featuring a tunable class-weighted fitness function, demonstrating significant superiority in delivering real-time solutions adaptable to evolving decision processes. We validate the EFSGA classification model against established credit evaluation models.