Introduction to
Evolutionary Computing II
A.E. Eiben
Free University Amsterdam
http://www.cs.vu.nl/~gusz/
with thanks to the EvoNet Training Committee and its “Flying Circus”
A.E. Eiben, Introduction to EC II (EvoNet Summer School 2002)
Contents
The evolutionary mechanism and its
components
Examples: the 8-queens problem
Working of an evolutionary algorithm
EC dialects and beyond
Advantages & disadvantages of EC
Summary
The main evolutionary cycle
[Cycle diagram] Initialization → Population → Parent selection → Parents → Recombination (crossover) → Mutation → Offspring → Survivor selection → back to Population; Termination ends the cycle.
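The cycle above can be sketched in code. The following is a minimal illustration, not a prescribed algorithm: it uses a toy "onemax" fitness (counting ones in a bit string), binary tournament parent selection, one-point crossover, bit-flip mutation, and a simple best-of-old-and-new survivor rule; every name and parameter value here is an illustrative choice.

```python
import random

def onemax(bits):                       # toy fitness: number of ones
    return sum(bits)

def evolve(pop_size=20, length=10, generations=50, pm=0.1, seed=0):
    rng = random.Random(seed)
    # Initialization: random bit strings
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):        # Termination: fixed generation budget
        # Parent selection: binary tournament
        parents = [max(rng.sample(pop, 2), key=onemax) for _ in range(pop_size)]
        offspring = []
        for p1, p2 in zip(parents[0::2], parents[1::2]):
            cut = rng.randrange(1, length)          # Recombination (one-point)
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                # Mutation: flip each bit with probability pm
                child = [b ^ 1 if rng.random() < pm else b for b in child]
                offspring.append(child)
        # Survivor selection: keep the best of old + new
        pop = sorted(pop + offspring, key=onemax, reverse=True)[:pop_size]
    return max(pop, key=onemax)

best = evolve()
```

With elitist survivor selection the best fitness never decreases, illustrating the push towards quality, while mutation and crossover keep injecting novelty.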
The two pillars of evolution
There are two competing forces active:
Increasing population diversity by genetic operators:
mutation
recombination
→ push towards novelty
Decreasing population diversity by selection:
of parents
of survivors
→ push towards quality
Components:
representation / individuals (1)
Individuals have two levels of existence
phenotype: object in original problem context, the outside
genotype: code to denote that object, the inside
(a.k.a. chromosome, “digital DNA”):
Example genotype: a d c a a c b (encoding some phenotype, the object it denotes)
The link between these levels is called representation
Components:
representation / individuals (2)
[Figure: phenotype space ↔ genotype space; encoding (representation) maps phenotypes to genotypes, decoding (inverse representation) maps genotypes back to phenotypes]
Components:
representation / individuals (3)
Sometimes producing the phenotype from the genotype is a simple and obvious process. Other times the genotype might be a set of parameters to some algorithm, which works on the problem data to produce the phenotype.
[Figure: genotype + problem data → growth function → phenotype]
Components:
representation / individuals (4)
Search takes place in the genotype space
Evaluation takes place in the phenotype space
Repr: Phenotypes → Genotypes
Fitness(g) = Value(repr⁻¹(g))
Repr must be invertible, in other words decoding must be injective (Q: surjective?)
Role of representation: defines objects that can be
manipulated by (genetic) operators
Note, looking back at Darwinism: there are no mutations on the phenotypic level! (The right term there is: small random variations.)
Components: evaluation, fitness
measure
Role:
represents the task to solve, the requirements to adapt to
enables selection (provides basis for comparison)
Some phenotypic traits are advantageous or desirable (e.g., big ears cool better).
Such traits are rewarded with more offspring, which are expected to carry the same trait.
Components: population
Role: holds the candidate solutions to the problem as
individuals (genotypes)
Formally, a population is a multiset of individuals,
i.e. repetitions are possible
Population is the basic unit of evolution,
i.e., the population is evolving, not the individuals
Selection operators act on population level
Variation operators act on individual level
Components: selection
Role:
Gives better individuals a higher chance of
becoming parents
surviving
Pushes population towards higher fitness
E.g. roulette wheel selection:
fitness(A) = 3 → selection chance 3/6 = 50%
fitness(B) = 1 → selection chance 1/6 ≈ 17%
fitness(C) = 2 → selection chance 2/6 ≈ 33%
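A minimal sketch of roulette wheel selection that reproduces these proportions; the function name and the sampling loop below are illustrative choices.

```python
import random

def roulette_wheel(population, fitness, rng=random):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitness(x) for x in population)
    pick = rng.uniform(0, total)          # "spin": a point on the wheel
    running = 0.0
    for x in population:
        running += fitness(x)             # each slice is as wide as its fitness
        if pick <= running:
            return x
    return population[-1]                 # guard against float rounding

# Reproduce the slide's example: A, B, C with fitnesses 3, 1, 2
fit = {"A": 3, "B": 1, "C": 2}
rng = random.Random(42)
counts = {name: 0 for name in fit}
for _ in range(6000):
    counts[roulette_wheel(list(fit), fit.get, rng)] += 1
# A should be drawn roughly half the time, C about a third, B about a sixth
```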
Components: Mutation
Role: causes small, random variation
before: 1 1 1 0 1 1 1
after:  1 1 1 1 1 1 1
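Bit-flip mutation on a binary genotype can be sketched as follows; the function name and the per-bit mutation rate pm are illustrative.

```python
import random

def bitflip_mutation(bits, pm, rng=random):
    """Flip each bit independently with probability pm; returns a new list."""
    return [b ^ 1 if rng.random() < pm else b for b in bits]

before = [1, 1, 1, 0, 1, 1, 1]   # the slide's example string
```

With pm = 0 the string is returned unchanged; with pm = 1 every bit flips; with a typical small pm, on average pm * len(bits) positions flip.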
Components: Recombination
Role: combines features from different sources
parents:   1 1 1 1 1 1 1  and  0 0 0 0 0 0 0  (cut after position 3)
offspring: 1 1 1 0 0 0 0  and  0 0 0 1 1 1 1
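The one-point crossover pictured above can be sketched as follows (the function name is an illustrative choice).

```python
import random

def one_point_crossover(p1, p2, cut=None, rng=random):
    """Exchange the tails of two equal-length parents after a cut point."""
    if cut is None:
        cut = rng.randrange(1, len(p1))   # cut strictly inside the string
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Reproduce the slide's example: all-ones and all-zeros, cut after position 3
c1, c2 = one_point_crossover([1] * 7, [0] * 7, cut=3)
```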
Example: the 8-queens problem
Place 8 queens on an 8×8 chessboard in such a way that they cannot check each other
The 8 queens problem
Representation
Genotype: a permutation of the numbers 1 - 8
Phenotype: a board configuration
Obvious mapping: the value in position i of the permutation gives the row of the queen placed in column i
The 8 queens problem
Fitness evaluation
Penalty of one queen: the number of queens she can check.
Penalty of a configuration: the sum of the penalties of all queens.
Note: the penalty is to be minimized.
Fitness of a configuration: the inverse of the penalty, to be maximized.
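Assuming the permutation representation where the value in position i gives the row of the queen in column i, the penalty and fitness can be sketched as below. With a permutation no two queens share a row or column, so only diagonal checks remain; the 1/(1+penalty) form of the "inverse" is our choice to avoid division by zero on a perfect board.

```python
def queen_penalty(perm, i):
    """Number of other queens that queen i can check (diagonals only,
    since the permutation rules out shared rows and columns)."""
    return sum(1 for j in range(len(perm))
               if j != i and abs(i - j) == abs(perm[i] - perm[j]))

def config_penalty(perm):
    """Sum of the penalties of all queens (each checking pair counts twice)."""
    return sum(queen_penalty(perm, i) for i in range(len(perm)))

def fitness(perm):
    """Inverse penalty, to be maximized."""
    return 1.0 / (1.0 + config_penalty(perm))

solution = [2, 4, 6, 8, 3, 1, 7, 5]   # a known 8-queens solution: penalty 0
```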
The 8 queens problem
Mutation
Small variation in one permutation, e.g.:
swapping values of two randomly chosen positions, or
inverting a randomly chosen segment
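Both mutation variants can be sketched as follows (function names are illustrative); each returns a new list that is still a permutation.

```python
import random

def swap_mutation(perm, rng=random):
    """Swap the values at two randomly chosen positions."""
    i, j = rng.sample(range(len(perm)), 2)
    child = perm[:]
    child[i], child[j] = child[j], child[i]
    return child

def inversion_mutation(perm, rng=random):
    """Reverse (invert) a randomly chosen segment."""
    i, j = sorted(rng.sample(range(len(perm)), 2))
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

rng = random.Random(1)
parent = list(range(1, 9))          # the permutation 1..8
child = swap_mutation(parent, rng)
```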
The 8 queens problem
Recombination
Combining two permutations into two new permutations:
choose random crossover point
copy first parts into children
create second part by inserting values from other
parent:
in the order they appear there
beginning after crossover point
skipping values already in child
Example (crossover point after position 3):
parents:   1 3 5 2 4 6 7 8  and  8 7 6 4 2 5 3 1
offspring: 1 3 5 4 2 8 7 6  and  8 7 6 2 4 1 3 5
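This "cut-and-crossfill" procedure can be sketched as follows; the helper structure is an illustrative choice.

```python
import random

def cut_and_crossfill(p1, p2, cut=None, rng=random):
    """Crossover for permutations: copy each parent's first part, then fill
    the remainder with the other parent's values in the order they appear
    there, starting after the crossover point and skipping values already
    present in the child."""
    n = len(p1)
    if cut is None:
        cut = rng.randrange(1, n)

    def fill(head, donor):
        scan = donor[cut:] + donor[:cut]   # donor values, cyclically from the cut
        return head + [v for v in scan if v not in head]

    return fill(p1[:cut], p2), fill(p2[:cut], p1)

c1, c2 = cut_and_crossfill([1, 3, 5, 2, 4, 6, 7, 8],
                           [8, 7, 6, 4, 2, 5, 3, 1], cut=3)
```

Because each donor value occurs exactly once, skipping values already in the head guarantees both children are again permutations of 1–8.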
The 8 queens problem
Selection
Parent selection:
roulette wheel selection, for instance
Survivor selection (replacement):
when inserting a new child into the population, choose an existing member to replace by:
sorting the whole population by decreasing fitness
enumerating this list from high to low
replacing the first member with a fitness lower than that of the given child
Note: selection works on fitness values, so there is no need to adjust it to the representation
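The replacement rule can be sketched as follows (the function name is illustrative).

```python
def replace_worst_below(population, fitness, child):
    """Scan the population from highest to lowest fitness and replace the
    first member whose fitness is lower than the child's; if no member is
    worse, the child is discarded."""
    ranked = sorted(population, key=fitness, reverse=True)
    for i, member in enumerate(ranked):
        if fitness(member) < fitness(child):
            ranked[i] = child
            break
    return ranked

# Toy example with sum-of-bits as fitness
pop = [[0, 0], [1, 0], [1, 1]]
new_pop = replace_worst_below(pop, sum, [0, 1])
```

Note that because selection only compares fitness values, this routine works unchanged for bit strings, permutations, or any other representation.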
Working of an EA
Phases in optimizing on a 1-dimensional fitness landscape
Early phase:
quasi-random population distribution
Mid-phase:
population arranged around/on hills
Late phase:
population concentrated on high hills
Typical run
A typical run of an EA shows so-called “anytime behavior”
[Plot: best fitness in the population vs. time (number of generations)]
Long runs?
[Plot: best fitness in the population vs. time (number of generations), contrasting the progress made in the 1st half of the run with the progress made in the 2nd half]
Smart initialisation?
[Plot: best fitness in the population vs. time (number of generations). F marks the fitness reached immediately by smart initialisation; T marks the time needed to reach level F after random initialisation]
Goldberg’s 1989 view
[Plot: performance of methods across the scale of “all” problems. A special, problem-tailored method peaks on its target problems but performs poorly elsewhere; random search performs uniformly poorly; an evolutionary algorithm performs moderately well across the whole scale]
EAs and domain knowledge
Trend in the 1990s: adding problem-specific knowledge
to EAs (special variation operators, repair mechanisms, etc.)
Result: EA performance curve “deformation”:
better on problems of the given type
worse on problems different from given type
Amount of added knowledge is variable
Michalewicz’s 1996 view
[Plot: performance of methods across the scale of “all” problems, showing EAs with different amounts of added knowledge (EA 1 to EA 4): the more problem-specific knowledge, the higher but narrower the performance peak around the given problem type P]
General EA framework and dialects
There is a general, formal EA framework (omitted here)
In theory:
every EA is an instantiation of this framework, thus:
specifying a particular EA or a type of EAs (a “dialect”)
needs only filling in the characteristic features
In practice
this would be too formalistic
there are many exceptions (EAs not fitting into this
framework)
why care about the taxonomy, or label?
Genetic algorithms &
genetic programming
Genetic algorithms (USA, 70’s, Holland, DeJong):
Typically applied to: discrete optimization
Attributed features:
not too fast
good solver for combinatorial problems
Special: many variants, e.g., reproduction models, operators
Genetic programming (USA, 90’s, Koza)
Typically applied to: machine learning tasks
Attributed features:
competes with neural networks and the like
slow
needs huge populations (thousands)
Special: non-linear chromosomes: trees, graphs
Evolution strategies &
evolutionary programming
Evolution strategies (Germany, 70’s, Rechenberg, Schwefel)
Typically applied to:
numerical optimization
Attributed features:
fast & good optimizer for real-valued optimization
comparatively well-developed theory
Special:
self-adaptation of (mutation) parameters standard
Evolutionary programming (USA, 60’s, Fogel et al.)
Typically applied to: machine learning (old EP), optimization
Attributed features:
very open framework: any representation and mutation op’s OK
Special:
no recombination
self-adaptation of parameters standard (contemporary EP)
Beyond dialects
The field has been merging since the early 1990s
No hard barriers between dialects, many
hybrids, outliers
Choice for dialect should be motivated by given
problem
Best practical approach: choose representation,
operators, population model, etc. pragmatically
(and end up with an “unclassifiable” EA)
There are general issues for EC as a whole
Advantages of EC
No presumptions w.r.t. problem space
Widely applicable
Low development & application costs
Easy to incorporate other methods
Solutions are interpretable (unlike NN)
Can be run interactively, accommodate user
proposed solutions
Provides many alternative solutions
Intrinsic parallelism; straightforward parallel
implementations
Disadvantages of EC
No guarantee for optimal solution within
finite time
Weak theoretical basis
May need parameter tuning
Often computationally expensive, i.e. slow
The performance of EC
Acceptable performance at acceptable costs on a wide range
of problems
EC niche (where supposedly superior to other techniques):
complex problems with one or more of the following features
many free parameters
complex relationships between parameters
mixed types of parameters (integer, real)
many local optima
multiple objectives
noisy data
changing conditions (dynamic fitness landscape)
Summary
Evolutionary Computation:
is a method, based on biological metaphors,
of breeding solutions to problems
has been shown to be useful in a number of
areas
could be useful for your problem
it’s easy to give it a try
is FUN
Chapters (15)
The most important aim of this chapter is to describe what an evolutionary algorithm is. This description is deliberately based on a unifying view presenting a general scheme that forms the common basis of all evolutionary algorithm (EA) variants. The main components of EAs are discussed, explaining their role and related issues of terminology. This is immediately followed by two example applications (unlike other chapters, where example applications are typically given at the end) to make things more concrete. Further on we discuss general issues of the working of EAs. Finally, we put EAs into a broader context and explain their relation with other global optimisation techniques.
In this chapter we describe the most widely known type of evolutionary algorithm: the genetic algorithm. After presenting a simple example to introduce the basic concepts, we begin with what is usually the most critical decision in any application, namely that of deciding how best to represent a candidate solution to the algorithm. We present four possible solutions, that is, four widely used representations. Following from this we then describe variation operators (mutation and crossover) suitable for different types of representation, before turning our attention to the selection and replacement mechanisms that are used to manage the populations of solutions.
In this chapter we introduce evolution strategies (ES), another member of the evolutionary algorithm family. We also use these algorithms to illustrate a very useful feature in evolutionary computing: self-adaptation of strategy parameters. In general, self-adaptivity means that some parameters of the EA are varied during a run in a specific manner: the parameters are included in the chromosomes and coevolve with the solutions. This feature is inherent in modern evolution strategies. That is, since the procedure was detailed in 1977 [340] most ESs have been self-adaptive, and over the last ten years other EAs have increasingly adopted self-adaptivity. A summary of ES is given in Table 4.1.
In this chapter we present evolutionary programming (EP), another historical member of the EC family. Other EC streams have an algorithm variant that can be identified as being the “standard”, or typical, version of genetic algorithms, evolution strategies, or genetic programming. For EP such a standard version is hard to define for reasons discussed later in this chapter. The summary of EP in Table 5.1 is therefore a representative rather than a standard algorithm variant.
In this chapter we present genetic programming, the youngest member of the evolutionary algorithm family. Besides the particular representation (using trees as chromosomes), it differs from other EA strands in its application area. While the EAs discussed so far are typically applied to optimisation problems, GP could instead be positioned in machine learning. In terms of the different problem types as discussed in Chapter 1, most other EAs are for finding some input realising maximum payoff (Fig. 1.4), whereas GP is used to seek models with maximum fit (Fig. 1.5). Clearly, once maximisation is introduced, modelling problems can be seen as special cases of optimisation. This, in fact, is the basis of using evolution for such tasks: models are treated as individuals, and their fitness is the model quality to be maximised. The summary of GP is given in Table 6.1.
This chapter introduces an evolutionary approach to machine learning tasks working with rule sets, rather than parse trees, to represent knowledge. In learning classifier systems (LCS) the evolutionary algorithm acts as a rule discovery component. LCS systems are used primarily in applications where the objective is to evolve a system that will respond to the current state of its environment (i.e., the inputs to the system) by suggesting a response that in some way maximises (future) reward from the environment. Specifically, the idealised result of running an LCS is the evolution of a rule base that covers the space of possible inputs and suggests the most appropriate actions for each. Through LCS algorithms we also demonstrate evolution where cooperation between the population members (i.e., rules) is crucial. In this aspect LCS systems differ significantly from the other four members of the evolutionary algorithm family, where individuals strictly compete with each other.
The issue of setting the values of various parameters of an evolutionary algorithm is crucial for good performance. In this paper we discuss how to do this, beginning with the issue of whether these values are best set in advance or are best changed during evolution. We provide a classification of different approaches based on a number of complementary features, and pay special attention to setting parameters on-the-fly. This has the potential of adjusting the algorithm to the problem while solving the problem. This paper is intended to present a survey rather than a set of prescriptive details for implementing an EA for a particular type of problem. For this reason we have chosen to interleave a number of examples throughout the text. Thus we hope to both clarify the points we wish to raise as we present them, and also to give the reader a feel for some of the many possibilities available for controlling different parameters.
So far in our discussion of evolutionary algorithms we have considered the entire population to act as a common genepool, with fitness as the primary feature affecting the likelihood of an individual taking part in the creation of new offspring, and surviving to the next generation. However we know that evolution in vivo is also affected by another major parameter, namely that of the physical space within which evolution occurs, which imposes a sense of locality on genetic operators. However beautiful (i.e., highly fit) the flowers in the municipal garden, it is extremely unlikely that they will be fertilised with pollen from a garden on the opposite side of the world.
In the preceding chapters we described the main varieties of evolutionary algorithms and described various examples of how they might be suitably implemented for different applications. In this chapter we turn our attention to systems in which, rather than existing as “stand-alone” algorithms, EA-based approaches are either incorporated within larger systems, or alternatively have other methods or data structures incorporated within them. This category of algorithms is very successful in practice and forms a rapidly growing research area with great potential. This area and the algorithms that form its subject of study are named memetic algorithms (MA). In this chapter we explain the rationale behind MAs, outline a number of possibilities for combining EAs with other techniques, and give some guidelines for designing successful hybrid algorithms.
In this chapter we present a brief overview of some of the approaches taken to analysing and modelling the behaviour of Evolutionary Algorithms. The “Holy Grail” of these efforts is the formulation of predictive models describing the behaviour of an EA on arbitrary problems, and permitting the specification of the most efficient form of optimiser for any given problem. However, (at least in the authors’ opinions) this is unlikely ever to be realised, and most researchers will currently happily settle for techniques that provide any verifiable insights into EA behaviour, even on simple test problems. The reason for what might seem like limited ambition lies in one simple fact: evolutionary algorithms are hugely complex systems, involving many random factors. Moreover, while the field of EAs is fairly young, it is worth noting that the field of population genetics and evolutionary theory has a head start of more than a hundred years, and is still battling against the barrier of complexity.
In this chapter we consider the issue of constraint handling by evolutionary algorithms. This issue has great practical relevance because many practical problems are constrained. It is also a theoretically challenging subject since a great deal of intractable problems (NP-hard, NP-complete, etc.) are constrained. The presence of constraints has the effect that not all possible combinations of variable values represent valid solutions to the problem at hand. Unfortunately, constraint handling is not straightforward in an EA, because the variation operators (mutation and recombination) are typically “blind” to constraints. That is, there is no guarantee that even if the parents satisfy some constraints, the offspring will satisfy them as well. In this chapter we elaborate on the notion of constrained problems and distinguish two different types: constrained optimisation problems and constraint satisfaction problems. (This elaboration requires clarifying some basic notions, leading to definitions that implicitly have been used in earlier chapters.) Based on this classification of constrained problems, we discuss what constraint handling means from an EA perspective, and review the most commonly applied EA techniques to treat constraints. Analysing these techniques, we identify a number of common features and arrive at the conclusion that the presence of constraints is not harmful, but rather helpful in that it provides extra information that EAs can utilise.
In this chapter we discuss special forms of evolution that in some sense deviate from the standard evolutionary algorithms. In particular, we present coevolution and interactive evolution that both work under “external influence”. In coevolution the influence comes from another population, whose members affect the fitness of the main population. In turn, the main population also influences the fitness of the other one; hence the two populations evolve together. In interactive evolution this influence comes from a user who defines the fitness values by subjective preferences. In both of these cases, the fitness that is awarded to a solution may vary. In the first case because the fitness is dependent on the evolutionary state of the second population, and in the second because users often display inconsistencies. We finish this chapter by describing evolutionary approaches to problems where changing evaluation criteria form the very feature defining them: nonstationary optimisation problems.
The main objective of this chapter is to provide practical guidelines for working with EAs. Working with EAs often means comparing different versions experimentally. Guidelines to perform experimental comparisons are therefore given much attention, including the issues of algorithm performance measures, statistics, and benchmark test suites. The example application (Sect. 14.5) is also adjusted to the special topics here; it illustrates the application of different experimental practices, rather than EA design.
Gray coding is a variation on the way that integers are mapped onto bit strings which ensures that consecutive integers always have Hamming distance one. A three-bit Gray coding table is given in Table A.1, and the procedures for converting a binary number b = ⟨b₁, …, b_m⟩, where m is the number of bits, into a Gray code number g = ⟨g₁, …, g_m⟩ and vice versa are given in Table A.2.
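Assuming the standard binary-reflected Gray code (g₁ = b₁, and gᵢ = bᵢ XOR bᵢ₋₁ for i > 1), the two conversion procedures can be sketched as follows; the helper `to_bits` is an illustrative addition.

```python
def binary_to_gray(b):
    """g1 = b1; gi = bi XOR b(i-1) for i > 1 (most significant bit first)."""
    return [b[0]] + [b[i] ^ b[i - 1] for i in range(1, len(b))]

def gray_to_binary(g):
    """b1 = g1; bi = gi XOR b(i-1), accumulating left to right."""
    b = [g[0]]
    for i in range(1, len(g)):
        b.append(g[i] ^ b[i - 1])
    return b

def to_bits(n, m):
    """The m low-order bits of n, most significant first."""
    return [(n >> (m - 1 - i)) & 1 for i in range(m)]

# Three-bit Gray codes for the integers 0..7, as in Table A.1
codes = [binary_to_gray(to_bits(n, 3)) for n in range(8)]
```

Listing the codes for 0..7 confirms the defining property: each consecutive pair differs in exactly one bit, and the two conversions are mutual inverses.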
We cannot hope here to give a comprehensive set of test functions, and by the arguments given in Sect. 14.4.1, it would not be particularly appropriate. Rather we will give a few instances of test problems that we have referred to in this book, along with descriptions and pseudocode for two randomised test function generators.