An Improved Diversity Mechanism for Solving
Constrained Optimization Problems using a
Multimembered Evolution Strategy
Efrén Mezura-Montes and Carlos A. Coello Coello
CINVESTAV-IPN
Evolutionary Computation Group (EVOCINV)
Electrical Engineering Department
Computer Science Section
Av. IPN No. 2508, Col. San Pedro Zacatenco
México D.F. 07300, MÉXICO
emezura@computacion.cs.cinvestav.mx
ccoello@cs.cinvestav.mx
Abstract. This paper presents an improved version of a simple evolution strategy (SES) to solve global nonlinear optimization problems. As with its previous version, the approach does not require the use of a penalty function, it does not require the definition by the user of any extra parameter (besides those used with an evolution strategy), and it uses some simple selection criteria to guide the process to the feasible region of the search space. Unlike its predecessor, this new version uses a multimembered Evolution Strategy (ES) and an improved diversity mechanism based on allowing infeasible solutions close to the feasible region to remain in the population. This new version was validated using a well-known set of test functions. The results obtained are very competitive when comparing the proposed approach against the previous version and other approaches representative of the state-of-the-art in constrained evolutionary optimization. Moreover, its computational cost (measured in terms of the number of fitness function evaluations) is lower than the cost required by the other techniques compared.
1 Introduction
Evolutionary algorithms (EAs) have been successfully used to solve different types of
optimization problems [1]. However, in their original form, they lack an explicit mech-
anism to handle the constraints of a problem. This has motivated the development of a
considerable number of approaches to incorporate constraints into the fitness function
of an EA [2,3]. Particularly, in this paper we are interested in the general nonlinear
programming problem, in which we want to: Find $\vec{x}$ which optimizes $f(\vec{x})$ subject to: $g_i(\vec{x}) \le 0,\ i = 1,\dots,n$ and $h_j(\vec{x}) = 0,\ j = 1,\dots,p$, where $\vec{x}$ is the vector of decision variables $\vec{x} = [x_1, x_2, \dots, x_r]^T$, $n$ is the number of inequality constraints and $p$ is the number of equality constraints (in both cases, constraints could be linear or nonlinear).
The most common approach adopted to deal with constrained search spaces is the
use of penalty functions [4]. When using a penalty function, the amount of constraint
violation is used to punish or “penalize” an infeasible solution so that feasible solu-
tions are favored by the selection process. Nonetheless, the main drawback of penalty functions is that they require careful fine-tuning of the penalty factors, which must accurately estimate the degree of penalization to be applied so that the search can approach the feasible region efficiently [3].
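For concreteness, the following is a minimal Python sketch (ours, not a method from this paper) of a static exterior penalty; the penalty factor r and the equality tolerance eps are exactly the kind of extra, hand-tuned parameters the approach described here avoids:

    def penalized_objective(f, gs, hs, x, r=1000.0, eps=1e-4):
        """Static exterior penalty (minimization). gs: inequalities g_i(x) <= 0;
        hs: equalities h_j(x) = 0, relaxed to |h_j(x)| <= eps.
        The penalty factor r must be tuned by hand, which is the main
        drawback discussed above."""
        violation = sum(max(0.0, g(x)) for g in gs)
        violation += sum(max(0.0, abs(h(x)) - eps) for h in hs)
        return f(x) + r * violation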
The algorithm presented in this paper is an improved version of two previous ap-
proaches. The first version [5] was based on a (µ+1) Evolution Strategy coupled with
three simple selection criteria based on feasibility to guide the search to the feasible
region of the search space. A second version of the approach was proposed in [6] but
now using a (1 + λ)-ES and adding a diversity mechanism which consisted of allowing
solutions with a good value of the objective function to remain as a new starting point
in the next generation of the search, regardless of feasibility. The version presented in
this paper still uses the self-adaptive mutation mechanism of an ES, but we now adopt
a multimembered (µ+λ)-ES to explore constrained search spaces. This mechanism is
combined with the same three simple selection criteria used before to guide the search
towards the global optima of constrained optimization problems [6]. However, we now
add an improved diversity mechanism which, although simple, provides a significant
improvement in terms of performance. The idea of this mechanism is to allow the in-
dividual with both the lowest amount of constraint violation and the best value of the
objective function to be selected for the next generation. This solution can be chosen
with 50% probability either from the parents or from the offspring population. With
the combination of the above elements, the algorithm first focuses on reaching the fea-
sible region of the search space. After that, it is capable of moving over the feasible
region so as to reach the global optimum. The infeasible solutions that remain in the population are then used to sample points on the boundaries between the feasible and the infeasible regions. Thus, the main focus of this paper is to show how a multimembered ES coupled with the previously described diversity mechanism has a highly compet-
itive performance in constrained problems when compared with respect to algorithms
representative of the state-of-the-art in the area.
This paper is organized as follows: In Section 2 a description of previous approaches
based on similar ideas to our own is provided. Section 3 includes the description of
the diversity mechanism that we propose. Then, in Section 4, we present the results
obtained and also a comparison against the previous version and state-of-the-art algo-
rithms. Such results are discussed in Section 5. Finally, in Section 6 we provide some
conclusions and possible paths for future research.
2 Related Work
The hypothesis that originated this work is the following: (1) the self-adaptation mechanism of an ES helps to sample the search space well enough to reach the feasible region reasonably fast, and (2) the addition of simple selection criteria based on feasibility to an ES should be enough to guide the search in such a way that the global optimum can be approached efficiently.
The three simple selection criteria used are the following (a code sketch of the resulting pairwise comparison is given after the list):
1. Between 2 feasible solutions, the one with the highest fitness value wins (assuming a maximization problem).
2. If one solution is feasible and the other one is infeasible, the feasible solution wins.
3. If both solutions are infeasible, the one with the lowest sum of constraint violation is preferred.
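A minimal Python sketch of this comparison under our own encoding (each solution is a pair of objective value and total constraint violation, zero violation meaning feasible; the objective comparison below is written for minimization, so flip it for maximization):

    def better(a, b):
        """Return the preferred of two (objective, violation) pairs under the
        three feasibility-based criteria; violation == 0.0 means feasible."""
        (fa, va), (fb, vb) = a, b
        if va == 0.0 and vb == 0.0:   # 1. both feasible: better objective wins
            return a if fa <= fb else b
        if va == 0.0 or vb == 0.0:    # 2. feasible beats infeasible
            return a if va == 0.0 else b
        return a if va <= vb else b   # 3. both infeasible: lower violation wins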
The use of these criteria has been explored by other authors. Jiménez and Verdegay [7] proposed an approach similar to a min-max formulation used in multiobjective optimization combined with tournament selection. The rules used by them are similar to those adopted in this work. However, Jiménez and Verdegay's approach lacks an explicit mechanism to avoid the premature convergence produced by the random sampling of the feasible region, because their approach is guided by the first feasible solution found.
Deb [8] used the same tournament rules previously indicated in his approach. However, Deb proposed to use niching as a diversity mechanism, which introduces some extra computational time (niching has time complexity $O(N^2)$). In Deb's approach, feasible
solutions are always considered better than infeasible ones. This contradicts the idea
of allowing infeasible individuals to remain in the population. Therefore, this approach
will have difficulties in problems in which the global optimum lies on the boundary
between the feasible and the infeasible regions.
Motivated by the fact that some of the most recent and competitive approaches to
incorporate constraints into an EA use an ES (see for example [9,10]), we proposed [5]
a Simple (µ+1) Evolution Strategy (SES) to solve constrained optimization problems, in which one child created from µ mutations of the current solution competes against it and the better one is selected as the new current solution. This approach is based on the
two mechanisms previously indicated.
However, the approach in [5] tended to get trapped in local optima. Thus, in order to improve the quality and robustness of the results, a diversity mechanism was added in [6]. In this case, a (1 + λ)-ES was adopted and the diversity mechanism consisted of allowing solutions with a good value of the objective function to remain as a new starting point for the search at each generation, regardless of feasibility. Additionally, we introduced a self-adaptive parameter called Selection Ratio ($S_r$), which refers to the percentage of selections that will be performed in a deterministic way (as used in the first version of the SES [5], where the child replaces the current solution based on the three selection criteria previously indicated). In the remaining $1 - S_r$ selections, there were two choices: (1) either the individual (out of the λ) with the best value of the objective function would replace the current solution (regardless of its feasibility) or (2) the best parent (based on the three selection criteria) would replace the current solution. Both options were given a 50% probability each. The results improved, but for some test problems no feasible solutions could be found and for other functions the statistical results did not show enough robustness.
3 The new diversity mechanism
The two previous versions of the algorithm [5,6] are based on a single-membered ES and they lack the explorative power to sample large search spaces. Thus, we decided to re-evaluate the use of a (µ+λ)-ES to overcome this limitation, but in this case improving the diversity mechanism implemented in the second version of our approach [6] and eliminating the use of the self-adaptive $S_r$ parameter. The new version of the SES is based on the same concepts as its predecessors, as discussed before.
The detailed features of the improved diversity mechanism are the following: At each generation, we allow the infeasible individual with the best value of the objective function and with the lowest amount of constraint violation to survive into the next generation. We call this solution the best infeasible solution. In fact, there are two best infeasible solutions at each generation, one from the µ parents and one from the λ offspring. With probability 0.03, the selection process will choose a best infeasible individual rather than proceeding by the usual criteria, taking the best infeasible parent or the best infeasible offspring with equal (50%) probability.
Therefore, the same best infeasible solution can be copied more than once into the next population. However, this is a desired behavior, because a few copies of this solution will allow its recombination with several solutions in the population, especially with feasible ones. Recombining feasible solutions with infeasible solutions located in promising areas (based on the good value of the objective function) and close to the boundary of the feasible region will allow the ES to reach global optimum solutions located on the boundary of the feasible region of the search space (which are known to be the most difficult solutions to reach). See Figure 1.
[Figure 1 shows the feasible region and its boundaries, with feasible solutions, the best infeasible solution just outside the boundary, and a possible crossover between them.]
Fig. 1. Diagram that illustrates the idea of searching the boundaries with the new diversity mechanism proposed in this paper.
When the selection process occurs, the best individuals among the parents and offspring are selected based on the three selection criteria previously indicated. The selection process will pick feasible solutions with a better value of the objective function first, followed by infeasible solutions with a lower constraint violation. However, 3 times out of every 100 picks, the best infeasible solution (taken from either the parents or the offspring population, with 50% probability each) is copied into the population for the next generation. The pseudocode is listed in Figure 2. We chose the value of 3 based on the previous version [6], which used a population of just 3 offspring; with this low number of solutions, the approach provided good results.
function selection()
    For i = 1 to µ Do
        If flip(0.97) Then
            Select the best individual, based on the selection criteria, from
            the union of the parents and offspring populations; add it to the
            population for the next generation and delete it from this union.
        Else
            If flip(0.5) Then
                Select the best infeasible individual from the parents
                population and add it to the population for the next generation.
            Else
                Select the best infeasible individual from the offspring
                population and add it to the population for the next generation.
            End If
        End If
    End For
End

Fig. 2. Pseudocode of the selection procedure with the diversity mechanism incorporated. flip(P) is a function that returns TRUE with probability P.
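For concreteness, the following is a minimal Python sketch of this selection step under our own assumptions (minimization; each individual carries precomputed objective and violation attributes; the fallback used when a population contains no infeasible individual is ours, since the pseudocode does not cover that case):

    import random

    def violation_key(s):
        # Ordering consistent with the three selection criteria (minimization):
        # feasible solutions first (ranked by objective), then infeasible
        # solutions (ranked by their sum of constraint violation).
        return (s.violation > 0.0,
                s.objective if s.violation == 0.0 else s.violation)

    def best_infeasible(pop):
        inf = [s for s in pop if s.violation > 0.0]
        # "Best infeasible": lowest violation, with objective as tie-breaker.
        return min(inf, key=lambda s: (s.violation, s.objective)) if inf else None

    def selection(parents, offspring, mu):
        union = sorted(parents + offspring, key=violation_key)
        next_pop = []
        for _ in range(mu):
            if random.random() < 0.97:                 # flip(0.97)
                next_pop.append(union.pop(0))          # best of the union, removed
            else:                                      # diversity step, flip(0.5)
                source = parents if random.random() < 0.5 else offspring
                chosen = best_infeasible(source)       # may be copied repeatedly
                next_pop.append(chosen if chosen else union.pop(0))
        return next_pop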
4 Experiments and Results
To evaluate the performance of the proposed approach we used the 13 test functions
described in [9]. The test functions chosen contain characteristics that are representative
of what can be considered “difficult” global optimization problems for an evolutionary
algorithm. Their expressions are provided in the Appendix at the end of this paper.
To get an estimate of the ratio between the feasible region and the entire search space for these problems, a ρ metric (as suggested by Michalewicz and Schoenauer [2]) was computed using the expression $\rho = |F|/|S|$, where $|F|$ is the number of feasible solutions found among the $|S|$ solutions randomly generated. In this work, $|S| = 1{,}000{,}000$ random solutions.
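A Monte Carlo sketch of this estimate follows (the relaxation of equality constraints to |h(x)| <= eps when sampling is our assumption; the paper does not state how equalities were counted):

    import random

    def estimate_rho(bounds, gs, hs, samples=1_000_000, eps=1e-4):
        """rho = |F| / |S|: fraction of uniformly sampled points that are
        feasible. bounds: (low, high) per variable; gs: g_i(x) <= 0;
        hs: h_j(x) = 0, relaxed here to |h_j(x)| <= eps."""
        feasible = 0
        for _ in range(samples):
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            if all(g(x) <= 0.0 for g in gs) and all(abs(h(x)) <= eps for h in hs):
                feasible += 1
        return feasible / samples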
The different values of ρ for each of the functions chosen are shown in Table 4, where n is the number of decision variables, LI is the number of linear inequalities, NI the number of nonlinear inequalities, LE is the number of linear equalities and NE is the number of nonlinear equalities. We performed 30 independent runs for each test function. The learning rate values were calculated using the formulas proposed by Schwefel [12] (where n is the number of decision variables of the problem): $\tau = \left(\sqrt{2\sqrt{n}}\right)^{-1}$ and $\tau' = \left(\sqrt{2n}\right)^{-1}$. In order to favor finer movements in the search space (as we observed in the previous versions of the approach, where only one sigma value was used and the results improved when it took values close to zero), we decided to experiment with just a percentage of the quantity obtained by the formula proposed by Schwefel [12].
Statistical Results of the New SES with the Improved Diversity Mechanism

Problem | Optimal    | Best       | Mean       | Median     | Worst      | St. Dev.
g01     | -15.00     | -15.00     | -15.00     | -15.00     | -15.00     | 0
g02     | 0.803619   | 0.803601   | 0.785238   | 0.792549   | 0.751322   | 1.67E-2
g03     | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 2.09E-4
g04     | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | 0
g05     | 5126.498   | 5126.599   | 5174.492   | 5160.198   | 5304.167   | 50.05E+0
g06     | -6961.814  | -6961.814  | -6961.284  | -6961.814  | -6952.482  | 1.85E+0
g07     | 24.306     | 24.327     | 24.475     | 24.426     | 24.843     | 1.32E-1
g08     | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0
g09     | 680.630    | 680.632    | 680.643    | 680.642    | 680.719    | 1.55E-2
g10     | 7049.25    | 7051.90    | 7253.05    | 7253.60    | 7638.37    | 136.0E+0
g11     | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 1.52E-4
g12     | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 0
g13     | 0.053950   | 0.053986   | 0.166385   | 0.061873   | 0.468294   | 1.76E-1

Table 1. Statistical results obtained by our SES for the 13 test functions with 30 independent runs. A result in boldface means the global optimum solution was found.
Problem | Optimal    | Best (New SES) | Best (Old) | Mean (New SES) | Mean (Old) | Worst (New SES) | Worst (Old)
g01     | -15.00     | -15.00     | -15.00     | -15.00     | -15.00     | -15.00     | -15.00
g02     | 0.803619   | 0.803601   | 0.803569   | 0.785238   | 0.769612   | 0.751322   | 0.702322
g03     | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00
g04     | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539
g05     | 5126.498   | 5126.599   | -          | 5174.492   | -          | 5304.167   | -
g06     | -6961.814  | -6961.814  | -6961.814  | -6961.284  | -6961.814  | -6952.482  | -6961.814
g07     | 24.306     | 24.327     | 24.314     | 24.475     | 24.419     | 24.843     | 24.561
g08     | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095784   | 0.095825   | 0.095473
g09     | 680.630    | 680.632    | 680.669    | 680.643    | 680.810    | 680.719    | 681.199
g10     | 7049.25    | 7051.90    | 7057.04    | 7253.05    | 10771.42   | 7638.37    | 16375.27
g11     | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 0.76
g12     | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00
g13     | 0.053950   | 0.053986   | 0.053964   | 0.166385   | 0.264135   | 0.468294   | 0.544346

Table 2. Comparison of results between the new SES and the old one proposed in [6]. "-" means no feasible solutions were found. A result in boldface means a better value obtained by our new approach.
Problem | Optimal    | Best (New SES) | Best (HM)  | Mean (New SES) | Mean (HM)  | Worst (New SES) | Worst (HM)
g01     | -15.00     | -15.00     | -14.7886   | -15.00     | -14.7082   | -15.00     | -14.6154
g02     | 0.803619   | 0.803601   | 0.79953    | 0.785238   | 0.79671    | 0.751322   | 0.79119
g03     | 1.00       | 1.00       | 0.9997     | 1.00       | 0.9989     | 1.00       | 0.9978
g04     | -30665.539 | -30665.539 | -30664.5   | -30665.539 | -30655.3   | -30665.539 | -30645.9
g05     | 5126.498   | 5126.599   | -          | 5174.492   | -          | 5304.167   | -
g06     | -6961.814  | -6961.814  | -6952.1    | -6961.284  | -6342.6    | -6952.482  | -5473.9
g07     | 24.306     | 24.327     | 24.620     | 24.475     | 24.826     | 24.843     | 25.069
g08     | 0.095825   | 0.095825   | 0.0958250  | 0.095825   | 0.0891568  | 0.095825   | 0.0291438
g09     | 680.63     | 680.632    | 680.91     | 680.643    | 681.16     | 680.719    | 683.18
g10     | 7049.25    | 7051.90    | 7147.9     | 7253.05    | 8163.6     | 7638.37    | 9659.3
g11     | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 0.75
g12     | 1.00       | 1.00       | 0.999999   | 1.00       | 0.999134   | 1.00       | 0.991950
g13     | 0.053950   | 0.053986   | NA         | 0.166385   | NA         | 0.468294   | NA

Table 3. Comparison of the new version of the SES with respect to the Homomorphous Maps (HM) [11]. "-" means no feasible solutions were found. NA = Not Available. A result in boldface means a better value obtained by our new approach.
Problem | n  | Function  | ρ        | LI | NI  | LE | NE
g01     | 13 | quadratic | 0.0003%  | 9  | 0   | 0  | 0
g02     | 20 | nonlinear | 99.9973% | 1  | 1   | 0  | 0
g03     | 10 | nonlinear | 0.0026%  | 0  | 0   | 0  | 1
g04     | 5  | quadratic | 27.0079% | 0  | 6   | 0  | 0
g05     | 4  | nonlinear | 0.0000%  | 2  | 0   | 0  | 3
g06     | 2  | nonlinear | 0.0057%  | 0  | 2   | 0  | 0
g07     | 10 | quadratic | 0.0000%  | 3  | 5   | 0  | 0
g08     | 2  | nonlinear | 0.8581%  | 0  | 2   | 0  | 0
g09     | 7  | nonlinear | 0.5199%  | 0  | 4   | 0  | 0
g10     | 8  | linear    | 0.0020%  | 3  | 3   | 0  | 0
g11     | 2  | quadratic | 0.0973%  | 0  | 0   | 0  | 1
g12     | 3  | quadratic | 4.7697%  | 0  | 9^3 | 0  | 0
g13     | 5  | nonlinear | 0.0000%  | 0  | 0   | 1  | 2

Table 4. Values of ρ for the 13 test problems chosen.
We initialized the sigma values of all the individuals in the initial population with only 40% of the value obtained by the following formula (where n is the number of decision variables): $\sigma_i(0) = 0.4 \times (\Delta x_i / \sqrt{n})$, where $\Delta x_i$ is approximated with the expression (suggested in [9]) $\Delta x_i \approx x_i^u - x_i^l$, $x_i^u$ and $x_i^l$ being the upper and lower limits of decision variable $i$. For the experiments we used the following parameters: (100 + 300)-ES, number of generations = 800, number of objective function evaluations = 240,000.
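Under these settings, the stepsize initialization and the standard uncorrelated self-adaptive ES mutation (Schwefel [12]) can be sketched as follows (function names are ours):

    import math
    import random

    def initial_sigmas(bounds, fraction=0.4):
        """sigma_i(0) = fraction * (Delta x_i / sqrt(n)), Delta x_i = x_i^u - x_i^l."""
        n = len(bounds)
        return [fraction * (hi - lo) / math.sqrt(n) for lo, hi in bounds]

    def mutate(x, sigma):
        """Uncorrelated self-adaptive mutation with the learning rates quoted
        above: tau = 1/sqrt(2*sqrt(n)) and tau' = 1/sqrt(2*n)."""
        n = len(x)
        tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))
        tau_prime = 1.0 / math.sqrt(2.0 * n)
        global_step = tau_prime * random.gauss(0.0, 1.0)
        new_sigma = [s * math.exp(global_step + tau * random.gauss(0.0, 1.0))
                     for s in sigma]
        new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
        return new_x, new_sigma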
To increase the exploitation capability of the global crossover operator, we combine discrete and intermediate crossover: each gene in the chromosome can be processed by either of these two crossover operators with 50% probability. This operator is applied to both the strategy parameters (sigma values) and the decision variables of the problem, as sketched below. Note that we do not use correlated mutation.
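A minimal sketch of this combined operator (ours); it is applied, gene by gene, to the decision-variable vectors and the sigma vectors alike:

    import random

    def combined_crossover(parent1, parent2):
        """Each gene is recombined discretely (copy one parent's gene) or
        intermediately (average both genes), with 50% probability each."""
        child = []
        for a, b in zip(parent1, parent2):
            if random.random() < 0.5:
                child.append(random.choice((a, b)))  # discrete recombination
            else:
                child.append(0.5 * (a + b))          # intermediate recombination
        return child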
To deal with equality constraints, a parameterless dynamic mechanism originally proposed in ASCHEA [10] and used in [5] and [6] is adopted. The tolerance value is decreased with respect to the current generation using the following expression: $\epsilon_j(t+1) = \epsilon_j(t)/1.00195$. The initial tolerance $\epsilon_0$ was set to 0.001. For problem g13, $\epsilon_0$ was set to 3.0 and, in consequence, the factor used to decrease the tolerance value was modified so that $\epsilon_j(t+1) = \epsilon_j(t)/1.0145$ (a sketch of this schedule follows this paragraph). Also, for problems g03 and g13 the initial stepsizes required a more dramatic reduction: they were defined as 0.01 (just 5% instead of the 40%) for g03 and 0.05 (2.5% instead of the 40%) for g13. These two test functions seem to provide better results
with very smooth movements. It is important to note that these two problems share the
following features: moderately high dimensionality (five or more decision variables),
nonlinear objective function, one or more equality constraints, and moderate size of the
search space (based on the range of the decision variables). These common features
suggest that for this type of problem, finer movements provide a better sampling of the
search space using an evolution strategy.
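A sketch of the dynamic tolerance schedule just described (an equality h_j(x) = 0 is treated as satisfied at generation t whenever |h_j(x)| <= eps(t); names are ours):

    def tolerance(t, eps0=0.001, divisor=1.00195):
        """Tolerance at generation t: eps(t) = eps0 / divisor**t, i.e.
        eps(t+1) = eps(t) / divisor. For g13: eps0=3.0, divisor=1.0145."""
        return eps0 / (divisor ** t)

    def equalities_satisfied(hs, x, t):
        return all(abs(h(x)) <= tolerance(t) for h in hs)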
The statistical results of this new version of the SES with the improved diversity mechanism are summarized in Table 1. We compared our approach against the previous version of the SES [6] in Table 2 and against three state-of-the-art approaches: the Homomorphous Maps (HM) [11] in Table 3, Stochastic Ranking (SR) [9] in Table 5 and the Adaptive Segregational Constraint Handling Evolutionary Algorithm (ASCHEA) [10] in Table 6.
The Homomorphous Maps approach performs a homomorphous mapping between an n-dimensional cube and the feasible search region (either convex or non-convex). The main idea of this approach is to transform the original problem into another (topologically equivalent) one that is easier for the EA to optimize. Both the Stochastic Ranking and ASCHEA are based on a penalty function approach. SR sorts the individuals in the population in order to assign them a rank value; however, based on the value of a user-defined parameter, the comparison between two adjacent solutions will be performed using only the objective function, and the remaining comparisons will be performed using only the penalty value (the sum of constraint violation). ASCHEA uses three combined mechanisms: (1) an adaptive penalty function, (2) a constraint-driven recombination that forces the selection of a feasible individual to recombine with an infeasible one and (3) a segregational selection based on feasibility, which maintains a balance between feasible and infeasible solutions in the population. ASCHEA also requires a niching mechanism to improve the diversity in the population. Each mechanism requires the definition of extra parameters by the user.
Problem | Optimal    | Best (New SES) | Best (SR)  | Mean (New SES) | Mean (SR)  | Worst (New SES) | Worst (SR)
g01     | -15.00     | -15.00     | -15.000    | -15.00     | -15.000    | -15.00     | -15.000
g02     | 0.803619   | 0.803601   | 0.803515   | 0.785238   | 0.781975   | 0.751322   | 0.726288
g03     | 1.00       | 1.00       | 1.000      | 1.00       | 1.000      | 1.00       | 1.000
g04     | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | -30665.539
g05     | 5126.498   | 5126.599   | 5126.497   | 5174.492   | 5128.881   | 5304.165   | 5142.472
g06     | -6961.814  | -6961.814  | -6961.814  | -6961.284  | -6875.940  | -6952.482  | -6350.262
g07     | 24.306     | 24.327     | 24.307     | 24.475     | 24.374     | 24.843     | 24.642
g08     | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825
g09     | 680.63     | 680.632    | 680.630    | 680.643    | 680.656    | 680.719    | 680.763
g10     | 7049.25    | 7051.90    | 7054.316   | 7253.05    | 7559.192   | 7638.37    | 8835.655
g11     | 0.75       | 0.75       | 0.750      | 0.75       | 0.750      | 0.75       | 0.750
g12     | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00       | 1.00
g13     | 0.053950   | 0.053986   | 0.053957   | 0.166385   | 0.057006   | 0.468294   | 0.216915

Table 5. Comparison of our new version of the SES with respect to Stochastic Ranking (SR) [9]. A result in boldface means a better value obtained by our new approach.
5 Discussion of Results
As described in Table 1, our approach was able to find the global optimum in seven test
functions (g01, g03, g04, g06, g08, g11 and g12) and it found solutions very close to
the global optimum in the remaining six (g02, g05, g07, g09, g10, g13). Compared with
its previous version [6] (Table 2), this new diversity mechanism improved the quality of
the results in problems g02, g05, g09 and g10. Also, the robustness of the results was
better in problems g02, g05, g08, g09, g10 and g13.
Problem | Optimal    | Best (New SES) | Best (ASCHEA) | Mean (New SES) | Mean (ASCHEA) | Worst (New SES) | Worst (ASCHEA)
g01     | -15.00     | -15.00     | -15.0      | -15.00     | -14.84     | -15.00     | NA
g02     | 0.803619   | 0.803601   | 0.785      | 0.785238   | 0.59       | 0.751322   | NA
g03     | 1.00       | 1.00       | 1.0        | 1.00       | 0.99989    | 1.00       | NA
g04     | -30665.539 | -30665.539 | -30665.5   | -30665.539 | -30665.5   | -30665.539 | NA
g05     | 5126.498   | 5126.599   | 5126.5     | 5174.492   | 5141.65    | 5304.167   | NA
g06     | -6961.814  | -6961.814  | -6961.81   | -6961.284  | -6961.81   | -6952.482  | NA
g07     | 24.306     | 24.327     | 24.3323    | 24.475     | 24.66      | 24.843     | NA
g08     | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825   | 0.095825   | NA
g09     | 680.630    | 680.632    | 680.630    | 680.643    | 680.641    | 680.719    | NA
g10     | 7049.25    | 7051.90    | 7061.13    | 7253.05    | 7193.11    | 7638.37    | NA
g11     | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | 0.75       | NA
g12     | 1.00       | 1.00       | NA         | 1.00       | NA         | 1.00       | NA
g13     | 0.053950   | 0.053986   | NA         | 0.166385   | NA         | 0.468294   | NA

Table 6. Comparison of our new version of the SES with respect to ASCHEA [10]. NA = Not Available. A result in boldface means a better value obtained by our new approach.
When compared with respect to the three state-of-the-art techniques previously indicated, we found the following: Compared with the Homomorphous Maps (Table 3), the new SES found a better "best" solution in ten problems (g01, g02, g03, g04, g05, g06, g07, g09, g10 and g12) and a similar "best" result in the other two (g08 and g11). Also, our technique reached better "mean" and "worst" results in ten problems (g01, g03, g04, g05, g06, g07, g08, g09, g10 and g12). A similar "mean" and "worst" result was found in problem g11. The Homomorphous Maps found a better "mean" and "worst" result in function g02. No comparisons were made with respect to function g13 because such results were not available for HM.
With respect to Stochastic Ranking (Table 5), our approach was able to find a better
“best” result in functions g02 and g10. In addition, it found a “similar” best solution in
seven problems (g01, g03, g04, g06, g08, g11 and g12). Slightly better “best” results
were found by SR in the remaining functions (g05, g07, g09 and g13). The new SES
found better “mean” and “worst” results in four test functions (g02, g06, g09 and g10).
It also provided similar “mean” and “worst” results in six functions (g01, g03, g04, g08,
g11 and g12). Finally, SR found again just slightly better "mean" and "worst" results in
functions g05, g07 and g13.
Compared against the Adaptive Segregational Constraint Handling Evolutionary
Algorithm (Table 6), our algorithm found “better” best solutions in three problems (g02,
g07 and g10) and it found “similar” best results in six functions (g01, g03, g04, g06,
g08, g11). ASCHEA found slightly "better" best results in functions g05 and g09. Additionally, the new SES found "better" mean results in four problems (g01, g02, g03
and g07) and it found “similar” mean results in three functions (g04, g08 and g11).
ASCHEA surpassed our mean results in four functions (g05, g06, g09 and g10). We did
not compare the worst results because they were not available for ASCHEA. We did
not perform comparisons with respect to ASCHEA using functions g12 and g13 for the
same reason. As we can see, our approach showed a very competitive performance with
respect to these three state-of-the-art approaches.
Our approach can deal with moderately constrained problems (g04), highly constrained problems, problems with low (g06, g08), moderate (g09) and high (g01, g02, g03, g07) dimensionality, with different types of combined constraints (linear, nonlinear, equality and inequality) and with very large (g02), very small (g05 and g13) or
even disjoint (g12) feasible regions. Also, the algorithm is able to deal with large search
spaces (based on the intervals of the decision variables) with a very small feasible region
(g10). Furthermore, the approach can find the global optimum in problems where such
optimum lies on the boundaries of the feasible region (g01, g02, g04, g06, g07, g09).
This behavior suggests that the mechanism of maintaining the best infeasible solution
helps the search to sample the boundaries of the feasible region.
Besides still being a very simple approach, it is worth remembering that our algorithm does not require the fine-tuning of any extra parameters (other than those used by an evolution strategy), since the only parameters required by the approach have remained fixed in all cases. In contrast, the Homomorphous Maps require an additional parameter (called v) which has to be found empirically [11]. Stochastic Ranking requires the definition of a parameter called $P_f$, whose value has an important impact on the performance of the approach [9]. ASCHEA also requires the definition of several extra parameters, and in its latest version, it uses niching, which is a process that also has at least one additional parameter [10].
The computational cost, measured in terms of the number of fitness function evaluations (FFE), is lower for our algorithm than for the other approaches with respect to which it was compared. This is an additional (and important) advantage, mainly if we wish to use this approach for solving real-world problems. Our new approach performed 240,000 FFE, the previous version required 330,000 FFE, the Stochastic Ranking performed 350,000 FFE, the Homomorphous Maps performed 1,400,000 FFE, and ASCHEA required 1,500,000 FFE.
6 Conclusions and Future Work
An improved diversity mechanism, added to a multimembered Evolution Strategy combined with selection criteria based on feasibility, was proposed to solve (rather efficiently) constrained optimization problems. The proposed approach does not require the use of a penalty function and it does not require the fine-tuning of any extra parameters (other than those required by an evolution strategy), since they assume fixed values. The proposed approach uses the self-adaptation mechanism of a multimembered ES to sample the search space in order to reach the feasible region, and it uses three simple selection criteria based on feasibility to guide the search towards the global optimum. Moreover, the proposed technique adopts a diversity mechanism which consists of allowing infeasible solutions close to the boundaries of the feasible region to remain in the next population. This approach is very easy to implement and its computational cost (measured in terms of the number of fitness function evaluations) is considerably lower than the cost reported for the other three constraint-handling techniques, which are representative of the state-of-the-art in constrained evolutionary optimization. Despite its lower computational cost, the proposed approach was able to match (and even improve on) the results obtained by the algorithms with respect to which it was compared.
As part of our future work, we plan to evaluate the rate at which our algorithm
reaches the feasible region. This is an important issue when dealing with real-world
applications, since in highly constrained search spaces, reaching the feasible region
may be a rather costly task. Additionally, we have to perform more experiments in
order to establish which of the three mechanisms of the approach (diversity mechanism,
combined crossover or the reduced stepsize) is mandatory or if only their combined
effect makes the algorithm work.
Acknowledgments
The first author acknowledges support from the Mexican Consejo Nacional de Ciencia y Tecnología (CONACyT) through a scholarship to pursue graduate studies at CINVESTAV-IPN. The second author acknowledges support from CONACyT through project number 34201-A.
Appendix: Test Functions
1. g01: Minimize $f(\vec{x}) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i$ subject to:
$g_1(\vec{x}) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0$, $g_2(\vec{x}) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0$,
$g_3(\vec{x}) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0$, $g_4(\vec{x}) = -8x_1 + x_{10} \le 0$,
$g_5(\vec{x}) = -8x_2 + x_{11} \le 0$, $g_6(\vec{x}) = -8x_3 + x_{12} \le 0$, $g_7(\vec{x}) = -2x_4 - x_5 + x_{10} \le 0$,
$g_8(\vec{x}) = -2x_6 - x_7 + x_{11} \le 0$, $g_9(\vec{x}) = -2x_8 - x_9 + x_{12} \le 0$,
where the bounds are $0 \le x_i \le 1$ ($i = 1,\dots,9$), $0 \le x_i \le 100$ ($i = 10, 11, 12$) and $0 \le x_{13} \le 1$. The global optimum is at $\vec{x}^* = (1,1,1,1,1,1,1,1,1,3,3,3,1)$, where $f(\vec{x}^*) = -15$. Constraints $g_1$, $g_2$, $g_3$, $g_4$, $g_5$ and $g_6$ are active.
2. g02: Maximize $f(\vec{x}) = \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right|$ subject to:
$g_1(\vec{x}) = 0.75 - \prod_{i=1}^{n} x_i \le 0$, $g_2(\vec{x}) = \sum_{i=1}^{n} x_i - 7.5n \le 0$,
where $n = 20$ and $0 \le x_i \le 10$ ($i = 1,\dots,n$). The global maximum is unknown; the best reported solution is [9] $f(\vec{x}^*) = 0.803619$. Constraint $g_1$ is close to being active ($g_1 = -10^{-8}$).
3. g03: Maximize $f(\vec{x}) = (\sqrt{n})^n \prod_{i=1}^{n} x_i$ subject to $h(\vec{x}) = \sum_{i=1}^{n} x_i^2 - 1 = 0$, where $n = 10$ and $0 \le x_i \le 1$ ($i = 1,\dots,n$). The global maximum is at $x_i^* = 1/\sqrt{n}$ ($i = 1,\dots,n$), where $f(\vec{x}^*) = 1$.
4. g04: Minimize $f(\vec{x}) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141$ subject to:
$g_1(\vec{x}) = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5 - 92 \le 0$
$g_2(\vec{x}) = -85.334407 - 0.0056858x_2x_5 - 0.0006262x_1x_4 + 0.0022053x_3x_5 \le 0$
$g_3(\vec{x}) = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2 - 110 \le 0$
$g_4(\vec{x}) = -80.51249 - 0.0071317x_2x_5 - 0.0029955x_1x_2 - 0.0021813x_3^2 + 90 \le 0$
$g_5(\vec{x}) = 9.300961 + 0.0047026x_3x_5 + 0.0012547x_1x_3 + 0.0019085x_3x_4 - 25 \le 0$
$g_6(\vec{x}) = -9.300961 - 0.0047026x_3x_5 - 0.0012547x_1x_3 - 0.0019085x_3x_4 + 20 \le 0$
where $78 \le x_1 \le 102$, $33 \le x_2 \le 45$, $27 \le x_i \le 45$ ($i = 3, 4, 5$). The optimum solution is $\vec{x}^* = (78, 33, 29.995256025682, 45, 36.775812905788)$, where $f(\vec{x}^*) = -30665.539$. Constraints $g_1$ and $g_6$ are active.
5. g05: Minimize $f(\vec{x}) = 3x_1 + 0.000001x_1^3 + 2x_2 + (0.000002/3)x_2^3$ subject to:
$g_1(\vec{x}) = -x_4 + x_3 - 0.55 \le 0$, $g_2(\vec{x}) = -x_3 + x_4 - 0.55 \le 0$,
$h_3(\vec{x}) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 = 0$,
$h_4(\vec{x}) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 = 0$,
$h_5(\vec{x}) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 = 0$,
where $0 \le x_1 \le 1200$, $0 \le x_2 \le 1200$, $-0.55 \le x_3 \le 0.55$, and $-0.55 \le x_4 \le 0.55$. The best known solution is $\vec{x}^* = (679.9453, 1026.067, 0.1188764, -0.3962336)$, where $f(\vec{x}^*) = 5126.4981$.
6. g06: Minimize $f(\vec{x}) = (x_1 - 10)^3 + (x_2 - 20)^3$ subject to $g_1(\vec{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0$ and $g_2(\vec{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$, where $13 \le x_1 \le 100$ and $0 \le x_2 \le 100$. The optimum solution is $\vec{x}^* = (14.095, 0.84296)$, where $f(\vec{x}^*) = -6961.81388$. Both constraints are active.
7. g07: Minimize $f(\vec{x}) = x_1^2 + x_2^2 + x_1x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45$ subject to:
$g_1(\vec{x}) = -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0$
$g_2(\vec{x}) = 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0$
$g_3(\vec{x}) = -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0$
$g_4(\vec{x}) = 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0$
$g_5(\vec{x}) = 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0$
$g_6(\vec{x}) = x_1^2 + 2(x_2 - 2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \le 0$
$g_7(\vec{x}) = 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0$
$g_8(\vec{x}) = -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0$
where $-10 \le x_i \le 10$ ($i = 1,\dots,10$). The global optimum is $\vec{x}^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927)$, where $f(\vec{x}^*) = 24.3062091$. Constraints $g_1$, $g_2$, $g_3$, $g_4$, $g_5$ and $g_6$ are active.
8. g08: Maximize $f(\vec{x}) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}$ subject to $g_1(\vec{x}) = x_1^2 - x_2 + 1 \le 0$ and $g_2(\vec{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0$, where $0 \le x_1 \le 10$ and $0 \le x_2 \le 10$. The optimum solution is located at $\vec{x}^* = (1.2279713, 4.2453733)$, where $f(\vec{x}^*) = 0.095825$.
9. g09: Minimize $f(\vec{x}) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6x_7 - 10x_6 - 8x_7$ subject to:
$g_1(\vec{x}) = -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0$
$g_2(\vec{x}) = -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0$
$g_3(\vec{x}) = -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0$
$g_4(\vec{x}) = 4x_1^2 + x_2^2 - 3x_1x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0$
where $-10 \le x_i \le 10$ ($i = 1,\dots,7$). The global optimum is $\vec{x}^* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227)$, where $f(\vec{x}^*) = 680.6300573$. Two constraints are active ($g_1$ and $g_4$).
10. g10: Minimize $f(\vec{x}) = x_1 + x_2 + x_3$ subject to:
$g_1(\vec{x}) = -1 + 0.0025(x_4 + x_6) \le 0$
$g_2(\vec{x}) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0$
$g_3(\vec{x}) = -1 + 0.01(x_8 - x_5) \le 0$
$g_4(\vec{x}) = -x_1x_6 + 833.33252x_4 + 100x_1 - 83333.333 \le 0$
$g_5(\vec{x}) = -x_2x_7 + 1250x_5 + x_2x_4 - 1250x_4 \le 0$
$g_6(\vec{x}) = -x_3x_8 + 1250000 + x_3x_5 - 2500x_5 \le 0$
where $100 \le x_1 \le 10000$, $1000 \le x_i \le 10000$ ($i = 2, 3$), $10 \le x_i \le 1000$ ($i = 4,\dots,8$). The global optimum is $\vec{x}^* = (579.19, 1360.13, 5109.92, 182.0174, 295.5985, 217.9799, 286.40, 395.5979)$, where $f(\vec{x}^*) = 7049.25$. Constraints $g_1$, $g_2$ and $g_3$ are active.
11. g11: Minimize $f(\vec{x}) = x_1^2 + (x_2 - 1)^2$ subject to $h(\vec{x}) = x_2 - x_1^2 = 0$, where $-1 \le x_1 \le 1$ and $-1 \le x_2 \le 1$. The optimum solution is $\vec{x}^* = (\pm 1/\sqrt{2}, 1/2)$, where $f(\vec{x}^*) = 0.75$.
12. g12: Maximize $f(\vec{x}) = \frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}$ subject to $g_1(\vec{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0$, where $0 \le x_i \le 10$ ($i = 1, 2, 3$) and $p, q, r = 1, 2, \dots, 9$. The feasible region of the search space consists of $9^3$ disjoint spheres. A point $(x_1, x_2, x_3)$ is feasible if and only if there exist $p, q, r$ such that the above inequality holds. The global optimum is located at $\vec{x}^* = (5, 5, 5)$, where $f(\vec{x}^*) = 1$.
13. g13: Minimize $f(\vec{x}) = e^{x_1 x_2 x_3 x_4 x_5}$ subject to $h_1(\vec{x}) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 - 10 = 0$, $h_2(\vec{x}) = x_2x_3 - 5x_4x_5 = 0$ and $h_3(\vec{x}) = x_1^3 + x_2^3 + 1 = 0$, where $-2.3 \le x_i \le 2.3$ ($i = 1, 2$) and $-3.2 \le x_i \le 3.2$ ($i = 3, 4, 5$). The optimum solution is $\vec{x}^* = (-1.717143, 1.595709, 1.827247, -0.7636413, 0.763645)$, where $f(\vec{x}^*) = 0.0539498$.
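To make the problem format concrete, here is a short Python sketch (ours) of test function g06 together with the sum-of-constraint-violation measure used by the selection criteria; at the reported optimum both constraints are numerically active:

    def g06(x):
        """Test function g06: returns (objective value, total constraint violation)."""
        x1, x2 = x
        f = (x1 - 10.0) ** 3 + (x2 - 20.0) ** 3
        g1 = -(x1 - 5.0) ** 2 - (x2 - 5.0) ** 2 + 100.0
        g2 = (x1 - 6.0) ** 2 + (x2 - 5.0) ** 2 - 82.81
        violation = max(0.0, g1) + max(0.0, g2)
        return f, violation

    # At the optimum (14.095, 0.84296): f is about -6961.81 and the violation
    # is a tiny residue of the limited precision of the printed solution.
    print(g06((14.095, 0.84296)))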
References
1. Bäck, T.: Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York (1996)
2. Michalewicz, Z., Schoenauer, M.: Evolutionary Algorithms for Constrained Parameter Optimization Problems. Evolutionary Computation 4 (1996) 1–32
3. Coello Coello, C.A.: Theoretical and Numerical Constraint Handling Techniques used with
Evolutionary Algorithms: A Survey of the State of the Art. Computer Methods in Applied
Mechanics and Engineering 191 (2002) 1245–1287
4. Smith, A.E., Coit, D.W.: Constraint Handling Techniques—Penalty Functions. In Bäck, T., Fogel, D.B., Michalewicz, Z., eds.: Handbook of Evolutionary Computation. Oxford University Press and Institute of Physics Publishing (1997)
5. Mezura-Montes, E., Coello Coello, C.A.: A Simple Evolution Strategy to Solve Constrained Optimization Problems. In Cantú-Paz, E., Foster, J.A., Deb, K., Davis, L.D., Roy, R., O'Reilly, U.M., Beyer, H.G., Standish, R., Kendall, G., Wilson, S., Harman, M., Wegener, J., Dasgupta, D., Potter, M.A., Schultz, A.C., Dowsland, K.A., Jonoska, N., Miller, J., eds.: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2003), Chicago, Illinois. Springer Verlag, Heidelberg, Germany (2003) 640–641. Lecture Notes in Computer Science Vol. 2723
6. Mezura-Montes, E., Coello Coello, C.A.: Adding a Diversity Mechanism to a Simple Evolution Strategy to Solve Constrained Optimization Problems. In: Proceedings of the Congress on Evolutionary Computation 2003 (CEC'2003), Volume 1, Canberra, Australia. IEEE Service Center, Piscataway, New Jersey (2003) 6–13
7. Jiménez, F., Verdegay, J.L.: Evolutionary techniques for constrained optimization problems. In Zimmermann, H.J., ed.: 7th European Congress on Intelligent Techniques and Soft Computing (EUFIT'99), Aachen, Germany, Verlag Mainz (1999). ISBN 3-89653-808-X
8. Deb, K.: An Efficient Constraint Handling Method for Genetic Algorithms. Computer
Methods in Applied Mechanics and Engineering 186 (2000) 311–338
9. Runarsson, T.P., Yao, X.: Stochastic Ranking for Constrained Evolutionary Optimization. IEEE Transactions on Evolutionary Computation 4 (2000) 284–294
10. Hamida, S.B., Schoenauer, M.: ASCHEA: New Results Using Adaptive Segregational
Constraint Handling. In: Proceedings of the Congress on Evolutionary Computation 2002
(CEC’2002). Volume 1., Piscataway, New Jersey, IEEE Service Center (2002) 884–889
11. Koziel, S., Michalewicz, Z.: Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization. Evolutionary Computation 7 (1999) 19–44
12. Schwefel, H.P.: Evolution and Optimal Seeking. John Wiley & Sons Inc., New York (1995)