Int. J. Emerg. Sci., 3(4), 333-344, December 2013
ISSN: 2222-4254
© IJES
Selection Methods for Genetic Algorithms
Khalid Jebari, Mohammed Madiafi
Laboratory, Faculty of Sciences, Mohammed V-Agdal University (UM5A), Rabat, Morocco,
Ben M'sik Faculty of Sciences, Hassan II Mohammedia University (UH2M), Casablanca, Morocco,
khalid.jebari@gmail.com, madiafi.med@gmail.com
Abstract. Based on a study of six well-known selection methods often used in
genetic algorithms, this paper presents a technique that combines their
advantages in terms of quality of solutions and genetic diversity. The
numerical results show the extent to which the quality of the solution depends
on the choice of the selection method. The proposed technique, which can help
reduce this dependence, is presented and its efficiency is numerically
illustrated.
Keywords: Evolutionary Computation, Genetic Algorithms, Genetic
Operator, Selection Pressure, Genetic Diversity.
1 INTRODUCTION
The theory of genetic algorithms (GAs) was originally developed by John Holland
in the 1960s and was fully presented in his book "Adaptation in Natural and
Artificial Systems", published in 1975 [1]. His initial goal was not to develop
an optimization algorithm, but rather to model the adaptation process and to show
how this process could be useful in a computing system. GAs are stochastic search
methods using the concepts of Mendelian genetics and Darwinian evolution [2,3]. Such
heuristics have proved effective in solving a variety of hard real-world
problems in many application domains, including economics, engineering,
manufacturing, bioinformatics, medicine, and computational science [4].
In principle, a population of individuals drawn from the search space, often in a
random manner, serves as the set of candidate solutions to the optimization
problem [3]. The individuals in this population are evaluated through an
adaptation ("fitness") function. A selection mechanism is then used to choose the
individuals that will serve as parents of the next generation. These individuals
are then crossed and mutated to form new offspring. The next generation is finally
formed by a replacement mechanism that chooses survivors from among parents and
their offspring [4]. This process is repeated until a certain satisfaction
condition is met.
More formally, a standard GA can be described by the following pseudo-code:
SGA_pseudo-code
{
  Choose an initial population of individuals P(0);
  Evaluate the fitness of all individuals of P(0);
  Choose a maximum number of generations t_max;
  t = 0;
  While (not satisfied and t <= t_max) do {
    - t = t + 1;
    - Select parents for offspring production;
    - Apply reproduction and mutation operators;
    - Create a new population of survivors P(t);
    - Evaluate P(t);
  }
  Return the best individual of P(t);
}
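The same loop can be expressed as a small Python sketch, assuming a maximization problem and an even population size; init_individual, select_parents, crossover and mutate are user-supplied callables, not functions defined in this paper.

import random

def sga(fitness, init_individual, select_parents, crossover, mutate,
        pop_size=50, t_max=100, p_c=0.8, p_m=0.05):
    # Choose and evaluate an initial population P(0).
    population = [init_individual() for _ in range(pop_size)]
    t = 0
    while t < t_max:                        # the "not satisfied" test is omitted
        t += 1
        # Select parents for offspring production.
        parents = select_parents(population, fitness, pop_size)
        offspring = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            if random.random() < p_c:       # recombine with probability p_c
                a, b = crossover(a, b)
            offspring.append(mutate(a, p_m))
            offspring.append(mutate(b, p_m))
        population = offspring              # generational replacement, gives P(t)
    return max(population, key=fitness)     # best individual of P(t)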
As we can see from the pseudo-code above, a GA is a parametric algorithm
whose application to a given problem requires setting parameters and making
decisions about:
- the way parents are selected for offspring production;
- the way selected parents are recombined;
- the way the population size is adjusted;
- the way individuals are mutated or crossed;
- the crossover and mutation probabilities, which drive the exploration of other
  areas of the search space [5].
The ultimate goal is the right choice of these parameters [5]. Research has
therefore sought techniques to adjust them dynamically and to improve the quality
of the solution. The literature devotes much attention to the genetic operators
and even more to the population size [5]; by contrast, less attention is paid to
the selection operator. Without this operator, genetic algorithms are merely
random methods that give different results at each run [3]. Through this operator,
individuals with a higher fitness value have a higher probability of being
selected for the next generation. The selection method thus focuses the search on
promising areas of the search space. The balance between exploitation and
exploration is essential for the behaviour of genetic algorithms. It can be
adjusted through the selection pressure of the selection operator and the
probabilities of crossover and mutation. In the case of selection, a strong
selection pressure may cause the algorithm to converge to a local optimum, while a
low selection pressure may lead the GA to random results that differ from one run
to another [6]. The selection operator aims at exploiting the best characteristics
of good candidate solutions in order to improve these solutions throughout the
generations, which, in principle, should guide the GA to converge to an acceptable
and satisfactory solution of the optimisation problem at hand [2].
The selection operator is thus the most important parameter that may influence the
performance of a GA [7].
Several selection methods are described in the literature: roulette wheel
selection [8], stochastic universal sampling, tournament selection, Boltzmann
selection, and others [9].
However, despite decades of research, there are no general guidelines or
theoretical results indicating how to choose a good selection method for
each problem [10]. This can be a serious problem because, as we will see through
the numerical results, an unsuitable selection operator can lead to poor
performance of the GA in terms of both speed and reliability. To illustrate this
problem, we consider a set of six popular selection methods and apply them to the
optimization of four benchmark functions. In the following section, we first
provide a brief description of each studied selection method. In Section 3, we
present a technique that we propose in order to help reduce the influence of the
selection operator on the global performance of a GA. Numerical results are
presented and discussed in Section 4. Section 5 is devoted to our concluding
remarks and future work.
2 SELECTION METHODS FOR GAs
As mentioned before, six different selection methods are considered in this work,
namely: the roulette wheel selection (RWS), the stochastic universal sampling
(SUS), the linear rank selection (LRS), the exponential rank selection (ERS), the
tournament selection (TOS), and the truncation selection (TRS). In this section, we
provide a brief description of each studied selection method.
2.1 Roulette Wheel Selection (RWS)
The conspicuous characteristic of this selection method is that it gives each
individual i of the current population a probability p(i) of being selected [10]
that is proportional to its fitness f(i):

p(i) = f(i) / sum_{j=1}^{n} f(j)    (1)

where n denotes the population size in terms of the number of individuals.
RWS can be implemented according to the following pseudo-code
RWS_pseudo-code
{
  Calculate the sum S = sum_{i=1}^{n} f(i);
  For each individual 1 <= i <= n do {
    - Generate a random number alpha in [0, S];
    - iSum = 0; j = 0;
    - Do {
      - j = j + 1;
      - iSum = iSum + f(j);
    } while (iSum < alpha and j < n)
    - Select the individual j;
  }
}
Note that a well-known drawback of this technique is the risk of premature
convergence of the GA to a local optimum, due to the possible presence of a
dominant individual that always wins the competition and is selected as a parent.
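As an illustration, a minimal Python sketch of roulette wheel selection follows; it assumes non-negative fitness values and a maximization problem, and the function name is ours, not the paper's.

import random

def roulette_wheel_select(population, fitness, n_select):
    # Each individual is picked with probability proportional to its fitness.
    fits = [fitness(ind) for ind in population]
    total = sum(fits)
    selected = []
    for _ in range(n_select):
        alpha = random.uniform(0.0, total)   # random number in [0, S]
        i_sum, j = 0.0, -1
        while i_sum < alpha and j < len(population) - 1:
            j += 1
            i_sum += fits[j]
        selected.append(population[j])
    return selected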
2.2 Stochastic Universal Sampling (SUS)
The SUS [8] is a variant of RWS aimed at reducing the risk of premature
convergence. It can be implemented according to the following pseudo-code:
SUS_pseudo-code
{
  Calculate the mean fitness fmean = (1/n) * sum_{i=1}^{n} f(i);
  Generate a random number alpha in [0, 1];
  Sum = f(1);
  delta = alpha * fmean;
  j = 1;
  Do {
    - If (delta < Sum) {
      - Select the j-th individual;
      - delta = delta + fmean;
    }
    else {
      - j = j + 1;
      - Sum = Sum + f(j);
    }
  } while (j <= n)
}
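A possible Python rendering of SUS is sketched below; it spaces n_select pointers evenly over the cumulative fitness, which matches the pseudo-code above when n_select equals the population size, and assumes non-negative fitness values.

import random

def stochastic_universal_sampling(population, fitness, n_select):
    fits = [fitness(ind) for ind in population]
    total = sum(fits)
    step = total / n_select                 # spacing between pointers
    start = random.uniform(0.0, step)       # single random offset
    selected, cum, j = [], fits[0], 0
    for k in range(n_select):
        pointer = start + k * step
        while cum < pointer and j < len(population) - 1:
            j += 1
            cum += fits[j]
        selected.append(population[j])
    return selected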
2.3 Linear Rank Selection (LRS)
LRS [9] is also a variant of RWS that tries to overcome the drawback of premature
convergence of the GA to a local optimum. It is based on the rank of individuals
rather than on their fitness. The rank n is accorded to the best individual whilst the
worst individual gets the rank 1. Thus, based on its rank, each individual i has the
probability of being selected given by the expression
()
() ( 1)
rank i
pi nn

(2)
Once all individuals of the current population are ranked, the LRS procedure can
be implemented according to the following pseudo-code:
LRS_pseudo-code
{
  Calculate the value v = 2.001 / (n + 1);
  For each individual 1 <= i <= n do {
    - Generate a random number alpha in [0, v];
    - For each 1 <= j <= n do {
      - If (alpha < p(j)) {
        - Select the j-th individual;
        - Break;
      }
    }
  }
}
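In Python, linear rank selection might look as follows; here the weights are normalised so that they sum to one, which differs from expression (2) only by a constant factor, and a maximization problem is assumed.

import random

def linear_rank_select(population, fitness, n_select):
    # Sort from worst to best: ranked[i] has rank i + 1, so the best gets rank n.
    ranked = sorted(population, key=fitness)
    n = len(ranked)
    weights = [r / (n * (n + 1) / 2.0) for r in range(1, n + 1)]  # sums to 1
    return random.choices(ranked, weights=weights, k=n_select)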
2.4 Exponential Rank Selection (ERS)
The ERS [10] is based on the same principle as LRS, but it differs from LRS by the
probability of selecting each individual. For ERS, this probability is given by the expression:
 
( ) 1.0 exp rang i
pi c

 

(3.a)
with
 
 
 
 
21
61
nn
cnn
 
 
(3.b)
Once the n probabilities are computed, the rest of the method can be described by the
following pseudo-code:
ERS_ pseudo-code
{
  For each individual 1 <= i <= n do {
    - Generate a random number alpha in [c1, c2] (an interval derived from c);
    - For each 1 <= j <= n do {
      - If (alpha < p(j)) {
        - Select the j-th individual;
        - Break;
      } // end if
    } // end for j
  } // end for i
}
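A corresponding Python sketch of exponential rank selection is given below; instead of the constant c of expression (3), the weights 1 - exp(-rank) are normalised explicitly, and a maximization problem is assumed.

import math
import random

def exponential_rank_select(population, fitness, n_select):
    ranked = sorted(population, key=fitness)                  # worst ... best
    n = len(ranked)
    weights = [1.0 - math.exp(-(i + 1)) for i in range(n)]    # rank is i + 1
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(ranked, weights=probs, k=n_select)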
2.5 Tournament Selection (TOS)
Tournament selection [11] is a variant of rank-based selection methods. Its
principle consists in randomly selecting a set of k individuals; these individuals
are then ranked according to their relative fitness and the fittest individual is
selected for reproduction. The whole process is repeated n times for the entire
population. Hence, if individuals are ranked from 1 (best) to n (worst), the
probability of individual i being selected is given by the expression

p(i) = C(n-i, k-1) / C(n, k)    if i in [1, n-k+1]
p(i) = 0                        if i in [n-k+2, n]    (4)

where C(a, b) denotes the number of combinations of b elements chosen among a.
Technically speaking, the implementation of TOS can be performed according to
the pseudo-code:
TOS_pseudo-code
{
  Create a table t where the n individuals are placed in a randomly chosen order;
  j = 1;
  For i = 1 to n do {
    - i1 = t(j);
    - For m = 1 to k - 1 do {
      - i2 = t(j + m);
      - If (f(i2) > f(i1)) then i1 = i2;
    } // end for m
    - Select the individual i1;
    - j = j + k;
    - If (j + k - 1 > n) then { shuffle t again; j = 1; }
  } // end for i
}
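The more common implementation draws the k contestants uniformly at random for each tournament instead of walking a shuffled table; a minimal Python sketch of that variant (maximization assumed, function name ours) is:

import random

def tournament_select(population, fitness, n_select, k=3):
    # Draw k individuals at random, keep the fittest, repeat n_select times.
    selected = []
    for _ in range(n_select):
        contestants = random.sample(population, k)
        selected.append(max(contestants, key=fitness))
    return selected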
2.6 Truncation Selection (TRS)
Truncation selection [10] is a very simple technique that orders the candidate
solutions of each population according to their fitness. Then, only a certain
portion p of the fittest individuals is selected and reproduced 1/p times. It is
less used in practice than the other techniques, except for very large
populations. The pseudo-code of the technique is as follows:
TRS_pseudo-code
{
  Order the n individuals of P(t) according to their fitness;
  Set the portion p of individuals to select (e.g. p = 10% to 50%);
  sp = int(n * p);   // selection pressure
  Select the first sp individuals;
}
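A minimal Python sketch of truncation selection, assuming a maximization problem; the returned list simply repeats each survivor about 1/p times to restore the population size.

def truncation_select(population, fitness, p=0.3):
    n = len(population)
    sp = max(1, int(n * p))                                   # number of survivors
    survivors = sorted(population, key=fitness, reverse=True)[:sp]
    # Reproduce the survivors until the original population size is restored.
    return [survivors[i % sp] for i in range(n)]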
3 DESCRIPTION OF THE PROPOSED TECHNIQUE
After implementing the six selection methods described in the previous section and
testing them on the optimization of a variety of test functions, we found that the
results differ significantly from one selection method to another. This poses the
problem of selecting the adequate method for real-world problems for which no
a posteriori verification of the results is possible.
To help mitigate this non-trivial problem, we present in this section the outlines
of a new selection procedure that we propose as an alternative, which can be useful
when no single other technique can be used with enough confidence. Our technique
is a dynamic one in the sense that the selection protocol can vary from one
generation to another. The underlying idea consists in finding a good compromise
between proportional methods, which decrease the effect of selection pressure and
ensure some genetic diversity within the population, but may increase the
convergence time; and elitist methods, which reduce the convergence time but may
increase the effect of selection pressure and, therefore, the risk of converging to
local minima.
To achieve this goal, more than one selection method is applied at each generation,
but in a competitive way, meaning that only the results provided by the selection
operator with the best performance are actually taken into account. To assess and
compare the performance of the candidate selection methods, two objective criteria
are employed. The first criterion is the quality of the solution; it can easily be
measured as a function of the fitness f* of the best individual. The second
criterion is the genetic diversity, which is less easy to quantify than the first
one. In this work, as a measure of the genetic diversity within the population
P(t), we use the mean population diversity at generation t over 30 runs, calculated
according to the formula

Div(t) = (1/30) * sum_{k=1}^{30} [ 1 / (n * ln(1 + max(d_ij))) * sum_{i=1}^{n} sum_{j=i+1}^{n} d_ij(k, t) ]    (5)
where d_ij(k, t) is the Euclidean distance between the i-th and j-th individuals at
generation t of the k-th run, n is the population size, max(d_ij) is the maximum
distance assumed between individuals of the population, and t is the generation
(iteration) index. As a measure of the quality of the solution at each generation,
we used the criterion
Quality(t) = (f* - f_min)^2 / (f_max - f_min)^2    (6)
where f_max and f_min denote respectively the maximum and the minimum values of
the fitness at generation t, and f* tends towards f_max or towards f_min depending
on the nature of the problem, which can be either a maximization or a minimization
problem. Finally, in order to combine the two criteria into a single one, we used
the relation
C(t) = 1 / Div(t) + 1 / Quality(t)    (7)
4 EXPERIMENTAL STUDY
In this section, we present some examples of numerical results obtained by applying
the six selection methods, as well as the proposed Combined Selection (CS)
procedure described in the previous sections, to the genetic optimization of a set
of four well-known benchmark functions [12]. The first function is a classical
multimodal, high-dimensional function defined on the interval [-30, 30] by the
expression
min(f1(x)) = 20 + e - 20 * exp( -0.2 * sqrt( (1/n) * sum_{i=1}^{n} x_i^2 ) ) - exp( (1/n) * sum_{i=1}^{n} cos(2*pi*x_i) )    (8)
The second test function is a mono-dimensional example, but it is a deceptive one
in the sense that it possesses, as depicted by Fig. 1, two local maxima in which
optimization algorithms may be trapped.
Figure 1. The deceptive test function f2(x).
f2(x) is defined on the interval [0, 6] by the expression

f2(x) = 1 - x            for x in [0, 1]
f2(x) = 0.3*(x - 1)      for x in [1, 3]
f2(x) = -0.3*(x - 5)     for x in [3, 5]
f2(x) = 2*x - 10         for x in [5, 6]    (9)
The third example is the bi-variable Shubert function [3] defined, for
-10 <= x1 <= 10 and -10 <= x2 <= 10, by

f3(x1, x2) = [ sum_{i=1}^{5} i*cos((i+1)*x1 + i) ] * [ sum_{i=1}^{5} i*cos((i+1)*x2 + i) ]    (10)
And the last example is the bi-variable instance of the more general Rastrigin
function [4], defined by

f4(x1, ..., xn) = 10*n + sum_{i=1}^{n} ( x_i^2 - 10*cos(2*pi*x_i) )    (11)

for -5.12 <= x_i <= 5.12, i = 1, 2, ..., n.
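For reproducibility, the standard textbook forms of the Ackley (f1), Shubert (f3) and Rastrigin (f4) functions can be written in Python as below; the short piecewise-linear deceptive function f2 follows directly from expression (9) and is not repeated here.

import math

def f1_ackley(x):
    # Ackley function, eq. (8); global minimum 0 at x = (0, ..., 0).
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return 20 + math.e - 20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2)

def f3_shubert(x1, x2):
    # Bi-variable Shubert function, eq. (10).
    a = sum(i * math.cos((i + 1) * x1 + i) for i in range(1, 6))
    b = sum(i * math.cos((i + 1) * x2 + i) for i in range(1, 6))
    return a * b

def f4_rastrigin(x):
    # Rastrigin function, eq. (11); global minimum 0 at x = (0, ..., 0).
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)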
Results are summarized in Tables 1 to 3. Table 1 shows, for each selection method
and each test function, the number of generations the GA needed to converge to an
acceptable solution.
Analysis of Table 1 shows significant differences in the convergence speed of the
GA for the six studied selection methods, particularly in the case of the
deceptive example, f2.
Table 1. Number of generations needed for convergence

Test function    RWS    SUS    LRS    ERS    TOS    TRS     CS
f1                40     34     48     16      7      5      9
f2               229    173    290    290     14     18     27
f3                43     16     18     18     10      8     14
f4                27      9     14     14      3      3      5
Table 2. Percentage of selection pressure

Test function    RWS    SUS    LRS    ERS    TOS    TRS     CS
f1              12.3   12.2   22.8   22.9     80     70     84
f2               2.3    2.3    1.3    5.4     75     33     70
f3                10    4.8    4.8    4.8     80     65     80
f4              25.1     19    4.8    4.8     65     50     70
Table 2 shows the percentage of selection pressure for each studied selection
method and each function. The selection pressure, sp, of a given selection method
is defined as the number of generations after which the best individual dominates
the population. The percentage of selection pressure is then defined as
sp_min / sp, where sp_min denotes the minimal selection pressure observed among
all the studied parent selection methods. We can remark that the proportional
methods maintain the genetic diversity better than the elitist ones.
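As an aside, the takeover generation used here as the selection pressure can be measured with a few lines of Python, assuming the whole evolution history has been recorded; the function name and its arguments are illustrative only.

def selection_pressure(history, fitness):
    # history: one list of individuals per generation.
    # Returns the first generation at which a single individual has taken
    # over the entire population, or None if that never happens.
    for t, pop in enumerate(history, start=1):
        best = max(pop, key=fitness)
        if all(ind == best for ind in pop):
            return t
    return None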
Table 3 provides sample results related to another aspect of this study: the
assessment of the quality of the optimum provided by the GA for each test function
and for each selection method, including the combined selection method proposed in
this work. To measure this quality, we used the relative error

epsilon = | f* - f | / | f |    (12)

where f* is the optimum provided by the algorithm and f is the actual optimum,
which is known a priori.
Table 3. Relative errors (in percent) of the optima

Test function    RWS    SUS    LRS    ERS    TOS    TRS     CS
f1                18      7      7      7      5      7      4
f2                 5      7      0      5      0      0      0
f3                34     34     34     34     34     34      0
f4                98     98     96     96     95     96      0
Analysing this table, we can clearly see that the proposed selection method
performs better than the six studied selection methods.
5 CONCLUSION
In this paper, six well-known selection methods for GAs are studied, implemented,
and their relative performance analysed and compared using a set of four
benchmark functions. These methods fall into two categories: proportional and
elitist. The first category uses a probability of selection proportional to the
fitness of each individual. Selection methods of this category maintain genetic
diversity within the population of candidate solutions throughout the generations,
which is a good property that prevents the GA from converging to local optima.
But, on the other hand, these methods tend to increase the convergence time.
By contrast, selection methods of the second category select only the best
individuals, which increases the speed of convergence but at the risk of converging
to local optima due to the loss of genetic diversity within the population of candidate
solutions.
Starting from these observations, we have conducted a preliminary study aimed at
combining the advantages of the two categories. This study resulted in a new
combined selection procedure whose outlines are presented in this paper. The main
idea behind this procedure is the use of more than one selection method in a
competitive way together with an objective criterion which allows choosing the best
selection method to adopt at each generation.
The proposed technique was successfully applied to the optimisation of a set of
well-known benchmark functions, which encourages further development of this idea.
REFERENCES
1. Holland, J., "Adaptation in Natural and Artificial Systems", University of Michigan Press,
Ann Arbor, MI, USA, 1975.
2. Goldberg, D. E., "Genetic Algorithms in Search, Optimization and Machine Learning",
Addison-Wesley, New York, NY, 1989.
3. Michalewicz, Z., "Genetic Algorithms + Data Structures = Evolution Programs", Third
Edition, Springer, 2007.
4. De Jong, K. A., "Evolutionary Computation: A Unified Approach", MIT Press, 2007.
5. Eiben, A. E., Smit, S. K., "Parameter tuning for configuring and analyzing evolutionary
algorithms", Swarm and Evolutionary Computation, 2011; 1(1): 19-31.
6. Bäck, T., "Selective pressure in evolutionary algorithms: a characterization of selection
mechanisms", In Evolutionary Computation, 1994, IEEE World Congress on
Computational Intelligence, 1994(1): 57-62.
7. Bäck, T., Hoffmeister, F., "Extended selection mechanisms in genetic algorithms", In
Proceedings of the Fourth International Conference on Genetic Algorithms, 1991; 92-99.
8. Blickle, T., Thiele, L., "A comparison of selection schemes used in genetic algorithms",
TIK-Report 11, Institut für Technische Informatik und Kommunikationsnetze, Swiss Federal
Institute of Technology, Dec. 1995.
9. Goldberg, D. E., Deb, K., "A comparative analysis of selection schemes used in genetic
algorithms", In Foundations of Genetic Algorithms, Morgan Kaufmann, 1991; 69-93.
10. Hancock, P., "A comparison of selection mechanisms", In Handbook of Evolutionary
Computation, IOP Publishing and Oxford University Press, Bristol, UK, 1997.
11. Hingee, K., Hutter, M., "Equivalence of probabilistic tournament and polynomial
ranking selection", In Proceedings of the IEEE Congress on Evolutionary Computation
(CEC 2008, IEEE World Congress on Computational Intelligence), 2008; 564-571.
12. Talbi, E., "Metaheuristics: From Design to Implementation", John Wiley and Sons, USA,
2009.