The Role of ε-dominance in Multi-Objective Particle Swarm Optimization Methods

Sanaz Mostaghim
Electrical Engineering Department,
University of Paderborn,
Paderborn, Germany
mostaghim@date.upb.de

Jürgen Teich
Computer Science Department,
Friedrich-Alexander-University,
Erlangen, Germany
teich@informatik.uni-erlangen.de
Abstract- In this paper, the influence of ε-dominance on Multi-Objective Particle Swarm Optimization (MOPSO) methods is studied. The most important role of ε-dominance is to bound the number of non-dominated solutions stored in the archive (archive size), which has an influence on the computational time, convergence and diversity of solutions. Here, ε-dominance is compared with the existing clustering technique for fixing the archive size, and the solutions are compared in terms of computational time, convergence and diversity. A new diversity metric is also suggested. The results show that the ε-dominance method can find solutions much faster than the clustering technique, with comparable, and in some cases even better, convergence and diversity.
1 Introduction
Archiving is studied in many Multi-objective Optimization
(MO) methods. In the context of Evolutionary MO meth-
ods, archiving is called elitism and is used in several meth-
ods like Rudolph’s Elitist MOEA, Elitist NSGA-II, SPEA,
PAES (see [2] for all) and SPEA2 [17]. In these methods,
the non-dominated (best) solutions of each generation are
kept in an external population, called archive. Therefore
the archive must be updated in each generation. The time
needed for updating the archive depends on the archive size,
population size and the number of objectives, and it increases drastically when these three factors grow [10]. There are a few studies on data structures for storing the archive as a non-dominated set, e.g., [12, 6]. These data structures also require significant time when the archive size increases. Hence, it is reasonable to fix the size of the archive to avoid a large number of comparisons during updating. There are several methods, like clustering, truncation in SPEA2 and crowding techniques, to fix the archive
size. These methods must also keep good diversity of so-
lutions, which tends to make them the most expensive part
in the updating procedure. Here, we propose to use the idea of ε-dominance in [13, 9] to fix the size of the archive to a certain amount. This size depends on ε: by increasing ε, the archive size decreases. We use this method to obtain the approximate Pareto-front and compare it with an existing MOPSO method which uses a clustering technique. The ε-dominance method has an influence on the convergence and diversity of solutions, while reducing the computational time; in some cases the computational time is more than 100 times lower than that of the clustering technique. The comparison is done in terms of computational time, convergence
and diversity of solutions. There are several metrics for
comparing convergence and diversity of solutions. Here we suggest a new idea for a diversity metric (the Sigma method), which is inspired by [11]. In [11], we have proposed the
Sigma method for finding the local guides in MOPSO. The
idea of this method can be used to find a good diversity
metric for different test functions, however, for some func-
tions other diversity metrics are also suggested [8]. We also
study the use of an initial archive instead of an empty one. This has a greater influence on MOPSO techniques than on other MO methods. The empty archive is filled in the first generation by the non-dominated solutions of the initial population, and these archive members will be the local guides for the particles in the population. But if they are not in well-distributed positions, we will lose the diversity of solutions after just one generation; therefore, a well-distributed initial archive is needed.
In this paper, the definitions of domination and ε-domination are given in Section 2. In Section 3, the MOPSO method is briefly reviewed, and Section 4 discusses archiving. In Section 5, the combination of MOPSO and ε-dominance, the results on different test functions and the comparison with the clustering technique are studied. Finally, we conclude the paper in Section 6.
2 Definitions
A multi-objective optimization problem is of the form

  minimize f(x) = (f_1(x), f_2(x), ..., f_m(x))    (1)

subject to x ∈ S, involving m conflicting objective functions f_i : R^n → R that we want to minimize simultaneously. The decision vectors x = (x_1, x_2, ..., x_n)^T belong to the feasible region S ⊂ R^n. The feasible region is formed by constraint functions. We denote the image of the feasible region by Z ⊂ R^m and call it the feasible objective region. The elements of Z are called objective vectors and they consist of the objective (function) values f(x) = (f_1(x), f_2(x), ..., f_m(x)).
Definition 1: Domination. A decision vector x is said to dominate a decision vector x' (denoted x ≺ x') iff:
- the decision vector x is not worse than x' in all objectives, i.e., f_i(x) ≤ f_i(x') for all i = 1, ..., m, and
- the decision vector x is strictly better than x' in at least one objective, i.e., f_i(x) < f_i(x') for at least one i ∈ {1, ..., m}.

Definition 2: Weak Domination. A decision vector x weakly dominates x' (denoted x ⪯ x') iff:
- the decision vector x is not worse than x' in all objectives, i.e., f_i(x) ≤ f_i(x') for all i = 1, ..., m.
Definition 3: Pareto Optimal Front. A decision vector x ∈ S is called Pareto-optimal if there does not exist another x' ∈ S that dominates it. An objective vector is called Pareto-optimal if the corresponding decision vector is Pareto-optimal.
Let F be a set of objective vectors. The Pareto optimal front P ⊆ F contains all vectors f ∈ F which are not dominated by any vector f' ∈ F:

  P := { f ∈ F | ∄ f' ∈ F : f' ≺ f }    (2)
Definition 4: ε-Domination. A decision vector x is said to ε-dominate a decision vector x' for some ε > 0 (denoted x ≺_ε x') iff:
- f_i(x)/(1+ε) ≤ f_i(x') for all i = 1, ..., m, and
- f_i(x)/(1+ε) < f_i(x') for at least one i ∈ {1, ..., m}.

Figure 1 shows the concept of ε-domination. With this definition, the domination area of a vector increases with increasing objective values: for smaller objective values the dominated area is smaller than for larger objective values.
Figure 1: Domination and ε-domination. The area dominated by a vector f = (f_1, f_2) is compared to the larger area ε-dominated by f, which is bounded by f_1/(1+ε) and f_2/(1+ε).
Definition 5: ε-Approximate Pareto Front. Let F be a set of objective vectors and ε > 0. The ε-approximate Pareto front P_ε ⊆ F contains all vectors f ∈ F which are not ε-dominated by any vector f' ∈ F:

  P_ε := { f ∈ F | ∄ f' ∈ F such that f' ≺_ε f }    (3)

We have to note that the set P_ε is not unique, but it contains only a certain number of vectors, depending on the value of ε. This has been studied in [13, 9]: for any finite ε > 0 and any set F of objective vectors with 1 ≤ f_i ≤ K, there exists a set P_ε containing

  |P_ε| ≤ ( log K / log(1+ε) )^(m−1)    (4)

vectors, where K is an upper bound on the objective values. Here, we consider that ε is the same for all objectives.
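As a concrete illustration (not part of the original paper), the following Python sketch implements the domination test of Definition 1 and the ε-domination test of Definition 4 for minimization; the multiplicative form f_i/(1+ε) follows Figure 1.

```python
from typing import Sequence

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """Definition 1: f1 dominates f2 (minimization)."""
    not_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return not_worse and strictly_better

def eps_dominates(f1: Sequence[float], f2: Sequence[float], eps: float) -> bool:
    """Definition 4: f1 epsilon-dominates f2 for eps > 0.
    Each objective of f1 is shrunk by the factor 1/(1+eps),
    so the dominated area grows with the objective values."""
    shrunk = [a / (1.0 + eps) for a in f1]
    not_worse = all(a <= b for a, b in zip(shrunk, f2))
    strictly_better = any(a < b for a, b in zip(shrunk, f2))
    return not_worse and strictly_better

# Example: (2.0, 2.0) does not dominate (1.9, 3.0),
# but it epsilon-dominates it for eps = 0.1 since 2.0/1.1 < 1.9.
assert not dominates((2.0, 2.0), (1.9, 3.0))
assert eps_dominates((2.0, 2.0), (1.9, 3.0), eps=0.1)
```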
3 MOPSO Methods
Figure 2 shows the structure of a multi-objective optimization method, which we use here as the Multi-Objective Particle Swarm Optimization technique. MOPSO methods are studied in [1, 6, 11, 14]. In this algorithm, t denotes the generation index, P_t the population, and A_t the archive at generation t. In Step 2 the population P_0 is initialized, which contains the initial particles, their positions x and their initial velocities v. The external archive A_0 is also initialized in this step and is empty. The function Evaluate in Step 3 evaluates the particles in the population P_t, and the function Update updates the archive and stores the non-dominated solutions among P_t and A_t in the archive.
BEGIN
  Step 1: t := 0
  Step 2: Initialize population P_t and archive A_t (empty)
  Step 3: Evaluate(P_t)
  Step 4: A_{t+1} := Update(P_t, A_t)
  Step 5: Update velocities and positions of the particles in P_t to obtain P_{t+1}
  Step 6: t := t + 1
  Step 7: Unless a termination criterion is met, goto Step 3
END
Figure 2: Typical structure of an archive-based MOPSO.
Step 5 is the most critical step in MOPSO techniques. In this step, the velocity and position of each particle j are updated as follows:

  v_j := w v_j + c_1 R_1 (p_j − x_j) + c_2 R_2 (p_{j,g} − x_j)
  x_j := x_j + v_j    (5)

where w is the inertia weight of the particle, c_1 and c_2 are two positive constants, and R_1 and R_2 are random values in the range [0, 1].
According to Equation 5, each particle has to change its position x_j towards the position of a local guide p_{j,g}, which must be selected from the updated set of non-dominated solutions stored in the archive A_{t+1}. How to select the local guide from the archive has a great impact on the convergence and diversity of the solutions and is studied in [1, 6, 7, 11]. In this equation, p_j is like a memory for the particle and keeps the non-dominated (best) position of the particle, obtained by comparing the new position x_j in the objective space with p_j (p_j is the last non-dominated (best) position of the particle j).
At the end of this step, a turbulence factor is added to the positions of the particles. This is done by adding a random value to the current position of each particle:

  x_j := x_j + R_T    (6)

where R_T is a random value added to the updated position of each particle with a given probability.
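As an illustration only (not from the paper), the sketch below applies Equations 5 and 6 to a single particle. The parameter values mirror those used later in Section 5 (inertia weight 0.4, turbulence probability 0.01); the choice of c1 = c2 = 1, the perturbation range and the way the guide is supplied are assumptions, since guide selection depends on the Sigma method of [11].

```python
import random
from typing import List

def update_particle(x: List[float], v: List[float],
                    p_best: List[float], guide: List[float],
                    w: float = 0.4, c1: float = 1.0, c2: float = 1.0,
                    turbulence: float = 0.01) -> None:
    """One application of Equations 5 and 6, modifying x and v in place."""
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        # Equation 5: inertia + pull towards own best + pull towards the local guide
        v[d] = w * v[d] + c1 * r1 * (p_best[d] - x[d]) + c2 * r2 * (guide[d] - x[d])
        x[d] += v[d]
        # Equation 6: turbulence, applied with a small probability
        if random.random() < turbulence:
            x[d] += random.uniform(-0.1, 0.1)  # assumed perturbation range
```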
The steps of the MOPSO algorithm are iteratively repeated
until a termination criterion is met such as a maximum num-
ber of generations or when there has been no change in the
set of non-dominated solutions found for a given number of
generations. The output of the MOPSO method is the set of
non-dominated solutions stored in the final archive.
4 Archiving
As explained in Section 3, an external archive is used to keep the non-dominated solutions found in each generation. The archive members must be updated in each generation by the function Update (Step 4, Figure 2). The Update function checks which members of the current population P_t are non-dominated with respect to the members of the current archive A_t, and decides which of these candidates should be inserted into the archive and which archive members should be removed. Thereby, an archive is called domination-free if no two points in the archive dominate each other. Obviously, during execution of the function Update, dominated solutions must be deleted in order to keep the archive domination-free. Several data structures have been proposed for storing the non-dominated solutions in the archive [6, 12] in order to reduce the computational time of the method. In this section, we focus on two properties of the archive and their influence on the results of the method: the size of the archive and the initial archive members.
4.1 Archive Size and ε-dominance
In most multi-objective optimization (MO) methods, the archive must contain a certain number of solutions while keeping a good diversity. In some MO methods (e.g., [17]), the archive must have a fixed size, and if the number of non-dominated solutions is less than the fixed size, some particles of the population are selected at random to be inserted in the archive. In the case that the
size of the set of non-dominated solutions becomes higher
than the fixed size, truncation or clustering techniques are
applied. In [16, 11], the archive size has an upper bound and
as soon as the number of non-dominated solutions becomes
higher than the archive size, truncation techniques are used.
However, we have to note that increasing the size of the archive increases the computational time. In [10], the computational times for different test functions, numbers of objectives, archive sizes and population sizes are discussed. By increasing the number of objectives, the population size and the archive size, the computational time of the method increases drastically.
Here, we propose to use the concept of ε-domination instead of domination when updating the archive, i.e., instead of comparing the particles using the domination criterion, we compare them using the ε-dominance criterion. Therefore, the size of the archive will have an upper bound of (log K / log(1+ε))^(m−1), as in Equation (4), where K is the upper bound of the objective values. It is obvious that the size of the archive depends on the value of ε. Hence, by using ε-dominance we can keep the size of the archive limited and we can reduce the computational time. Applying ε-dominance in MOPSO techniques also has an influence on the convergence and diversity of the results, which will be discussed later.
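The following simplified sketch (not the exact scheme of [9], which additionally maintains one representative per ε-box) illustrates how an archive can be updated with the ε-dominance criterion. With objectives in [1, K], e.g. K = 10, ε = 0.1 and m = 2, the bound of Equation (4) allows on the order of log 10 / log 1.1 ≈ 24 archive members.

```python
from typing import List, Sequence

def eps_dominates(f1: Sequence[float], f2: Sequence[float], eps: float) -> bool:
    """Definition 4 (minimization): f1 epsilon-dominates f2."""
    shrunk = [a / (1.0 + eps) for a in f1]
    return (all(a <= b for a, b in zip(shrunk, f2))
            and any(a < b for a, b in zip(shrunk, f2)))

def update_archive(archive: List[Sequence[float]],
                   candidate: Sequence[float], eps: float) -> None:
    """Insert a candidate under the epsilon-dominance criterion (simplified sketch).

    The archive stays free of mutually epsilon-dominating points; with all
    objectives in [1, K] its size is bounded as in Equation (4).
    """
    # Reject the candidate if any archive member epsilon-dominates it.
    if any(eps_dominates(a, candidate, eps) for a in archive):
        return
    # Remove every member that the candidate epsilon-dominates, then insert it.
    archive[:] = [a for a in archive if not eps_dominates(candidate, a, eps)]
    archive.append(candidate)
```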
4.2 Initial Archive
In the MOPSO methods the initial archive is empty. So in
the first generation the non-dominated solutions of the ini-
tial population are stored in the archive and the particles of
the population should select their best local guide among
these archive members. Selecting the first local guides from
the archive has a great impact on the diversity of solutions
in the next generations especially in methods explained
in [6, 11]. Hence the diversity of solutions depends on the
first non-dominated solutions. But if the initial archive is not
empty and contains some well-distributed non-dominated
solutions, the solutions converge faster than before, while
keeping a good diversity. Figure 3 (left) shows the initial population and the non-dominated particles among them, which are stored in the (initially empty) archive. In this figure, particles select one of these archive members as the local guide by using the Sigma method [11], and one can imagine that after one generation the particles will move towards the left part of the space. In Figure 3 (right), the initial archive is not empty, but has some members which dominate all the particles in the population. This time, in the next generation the particles will obtain a better diversity than in the left figure.

Figure 3: Influence of the initial archive (left: empty initial archive filled with the non-dominated particles of the initial population; right: a well-distributed, non-empty initial archive whose members dominate the particles).
Now, the question is how to find a good initial archive.
The initial archive can be found in different ways. The first
possibility is to run the MOPSO with an empty archive for a large population and a few generations. The large population gives us a good diversity, and a few generations (e.g., 5 generations) develop the population towards just a little convergence. Another possibility is to use the results of a short MOEA (Multi-Objective Evolutionary Algorithm). Here, short means a MOEA with a few individuals and a few generations (e.g., 10 individuals and 10 generations). We know that a MOEA can give us some good solutions with a very good diversity after a few generations. Short MOEAs have also been used in combination with other methods like subdivision methods [15].
5 ε-dominance and MOPSO
In this section, we apply ε-dominance in the MOPSO method and then compare it with MOPSO using the clustering technique. Both of these methods use the Sigma method [11] for finding the best local guides. The clustering technique is explained in [16] and is also used in MOPSO in [11]. Here, we also use an initial archive for each test function. The initial archives are the results of a short MOPSO using the Sigma method; the short MOPSOs have a bigger population size than the usual MOPSO and are run for a few generations. In this section, we study the influence of ε-dominance on the computational time, convergence and diversity of solutions, and compare the two methods in these terms. For comparing the diversity of solutions, we also suggest a new diversity metric.
5.1 Diversity Metric
We can describe the position of each solution in 2- and 3-objective spaces by polar (r, θ) and spherical (r, θ, φ) coordinates, respectively. Inspired by these coordinates, we can formulate the diversity of solutions as a good distribution in terms of their angles: θ for 2-objective spaces, and θ and φ for 3-objective spaces. However, for higher-dimensional objective spaces we cannot define coordinate axes which give us such a simple distribution as in polar or spherical coordinates. Therefore, we suggest to use the concept of the Sigma method, which we have introduced in [11] for calculating the local guides in MOPSO. Here, we briefly explain the Sigma method and how one can use it to calculate the diversity of solutions.
Sigma Method [11]: In this method, a value σ is assigned to each solution with coordinates (f_1, f_2) so that all solutions which lie on the same line through the origin have the same value of σ. For two objectives, σ is defined as follows:

  σ = (f_1^2 − f_2^2) / (f_1^2 + f_2^2)    (7)

Figure 4 shows the values of σ for different lines. Indeed, σ encodes the angle between the line and the f_1 axis.
Figure 4: Sigma method for 2- and 3-objective spaces. Panels (a) and (b) show example σ values (e.g., 0.9, 0.75, 0.7, −0.01, −0.6) for lines in 2-objective spaces; panels (c) and (d) show example σ vectors (e.g., (0, 0, 0) and (−0.75, 0.1, 0.8)) for lines in 3-objective spaces.
In the general case, σ is a vector whose elements are formed from the pairwise combinations of the m objective coordinates according to Equation (7), where m is the dimension of the objective space. For example, for the three coordinates f_1, f_2 and f_3, it is defined as follows:

  σ = (f_1^2 − f_2^2, f_2^2 − f_3^2, f_3^2 − f_1^2) / (f_1^2 + f_2^2 + f_3^2)    (8)

Different values of σ for different values of f_1, f_2 and f_3 are shown in Figure 4. In the general case, when a point has the same value in each dimension (e.g., a point with f_1 = f_2 = f_3 in the 3-dimensional objective space), σ = (0, 0, ..., 0).
We have to note that the objective functions must take positive values; otherwise we have to transform them into the positive region, and when the objectives do not have the same ranges, scaled sigma values [11] are used.
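As an illustration (not from the paper), the sketch below computes a sigma vector for an arbitrary number of positive objectives, with one entry per pair of coordinates. The ordering and sign convention of the pairs for m > 2 is an assumption; points on the same ray from the origin still share the same vector.

```python
from itertools import combinations
from typing import Sequence, Tuple

def sigma(f: Sequence[float]) -> Tuple[float, ...]:
    """Sigma vector of a solution with positive objective values f.

    For m = 2 this is Equation (7); for m = 3 it matches Equation (8)
    up to the ordering and sign convention of the pairwise entries.
    """
    denom = sum(v * v for v in f)
    return tuple((f[i] ** 2 - f[j] ** 2) / denom
                 for i, j in combinations(range(len(f)), 2))

# Example: points on the diagonal have sigma = (0, ..., 0).
print(sigma((1.0, 1.0, 1.0)))   # (0.0, 0.0, 0.0)
print(sigma((1.0, 0.5)))        # (0.6,) for the 2-objective case
```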
Sigma Diversity Metric: Figure 5 shows the idea of using the Sigma method as a diversity metric for 2-objective spaces. As shown, a number of lines with different sigma values are drawn from the origin. These lines are called reference lines, each one forming a different angle with the f_1 axis. The number of reference lines is chosen in relation to the size of the archive whose diversity is to be computed. In the next step, the sigma value of each reference line is computed, which we call the reference sigma value. In the case that the archived solutions are well distributed, they should be spread over many different regions, i.e., close to many different reference lines.

Figure 5: 2-objective Sigma Diversity Metric. Black points are the solutions of a 2-objective test function and the lines are reference lines.
For higher-dimensional spaces, the reference lines are also defined as lines passing through the origin. In order to find the angles between the lines and each coordinate axis, it is easier to work with reference points located on the reference lines. Hence, in an m-dimensional space, the coordinates of each reference point form a vector of m elements. Algorithm 1 calculates the coordinates of the reference points: the first coordinate of each point is kept constant and the other coordinates are varied over a grid. However, to obtain the whole set of reference points, the algorithm must be repeated m times, each time keeping a different coordinate constant (in the second run the second coordinate is kept constant, and so on).

Algorithm 1: Calculate the coordinates of the reference points (nested loops over the non-fixed coordinates, repeated m times, once per fixed coordinate).

The number of reference points produced by Algorithm 1 depends on the dimension m and on the chosen resolution. In 2-objective spaces, the number of reference lines follows directly from the chosen resolution. In higher-dimensional spaces, the resolution determines the number of regions which are separated by the reference lines on the plane generated by only two of the coordinate axes; for example, in Figure 6, on the plane generated by the f_1 and f_2 axes, there are four regions separated by reference lines. In higher-dimensional spaces, the number of reference points produced by Algorithm 1 is larger than the required number of reference lines, because by repeating the algorithm some points come to lie on the same reference line. Therefore, the number of reference lines can be calculated after finding the sigma value (vector) of each reference point.
The points located on a reference line will have the same
sigma vectors and the number of reference lines is the num-
ber of non-repeated reference sigma vectors. Table 1 shows the number of reference lines obtained for different resolutions in 3-objective spaces.

Figure 6: Reference lines in 3-objective space.

Table 1: Number of reference lines obtained for different resolutions in 3-objective spaces, together with the distance threshold d.

  resolution | number of ref. lines | d
  4          | 25                   | 0.15
  6          | 67                   | 0.1
  8          | 133                  | 0.1
  10         | 223                  | 0.1
  12         | 337                  | 0.1
  14         | 475                  | 0.05
  16         | 637                  | 0.05
  18         | 823                  | 0.05
  20         | 1033                 | 0.05

A binary flag is also kept beside each reference sigma vector, which is 0 at the beginning. The flag of a reference sigma vector can only turn to 1 when at least one solution has a sigma vector equal to it or within a (Euclidean) distance of less than d from it. A counter counts the reference lines with flags equal to 1, and the diversity metric becomes:

  D = (number of reference lines with flag 1) / (number of reference lines)    (9)

The value of d depends on the test function; however, it should decrease when the number of reference lines increases. Table 1 shows an example of choosing d for 3-objective test functions.
The Sigma diversity metric is easy to implement and is very efficient in computing the diversity of solutions in high-dimensional spaces. The 2-objective Sigma diversity metric has some similarities to the entropy approach [5] and the sparsity measurements of [3] and [8], especially when measuring the diversity of very convex (or non-convex) objective fronts; but in comparison to them, it is very easy to calculate the diversity of solutions in high-dimensional spaces using the Sigma diversity metric. The Sigma diversity metric, like the Sigma method, can also be scaled for different ranges of the objective values. However, the objective values must contain only positive values, and negative values must be transferred to the positive part (i.e., the upper right quadrant of a circle in two dimensions). This is possible without loss of generality.
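To make the metric concrete, here is a simplified 2-objective sketch (not from the paper): reference lines are taken at uniformly spaced angles instead of being generated by Algorithm 1, a line is flagged when some solution's sigma value lies within the distance d of its reference sigma value, and Equation (9) is the fraction of flagged lines.

```python
import math
from typing import Sequence, Tuple

def sigma_diversity_2d(solutions: Sequence[Tuple[float, float]],
                       n_lines: int, d: float) -> float:
    """Simplified 2-objective Sigma diversity metric (Equation 9).

    Reference lines are placed at uniformly spaced angles in [0, pi/2]
    (an assumption; the paper builds them from reference points).
    """
    # Reference sigma values of the n_lines reference lines.
    ref_sigmas = []
    for k in range(n_lines):
        theta = (math.pi / 2) * k / (n_lines - 1)
        f1, f2 = math.cos(theta), math.sin(theta)
        ref_sigmas.append((f1**2 - f2**2) / (f1**2 + f2**2))
    # Sigma values of the solutions (positive objectives assumed).
    sol_sigmas = [(f1**2 - f2**2) / (f1**2 + f2**2) for f1, f2 in solutions]
    # A reference line is flagged if some solution lies within distance d of it.
    flagged = sum(1 for rs in ref_sigmas
                  if any(abs(rs - s) < d for s in sol_sigmas))
    return flagged / n_lines

# Example: 20 solutions spread over a quarter circle give full diversity.
front = [(math.cos(a), math.sin(a))
         for a in [i * (math.pi / 2) / 19 for i in range(20)]]
print(sigma_diversity_2d(front, n_lines=20, d=0.05))  # close to 1.0
```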
5.2 Test Functions
The test functions are 2- and 3-objective optimization prob-
lems selected from [16], [4] and are shown in Table 2.
In Table 2, t1 and t2 are 2-objective test functions, and t3 and t4 are 3-objective test functions.
5.2.1 Parameter Settings
The tests are done with 120 particles in the population and 300 generations, with a turbulence factor of 0.01 and an inertia weight of 0.4. Initial archives are obtained by running a MOPSO with a population size of 100 for 200 generations for two of the test functions, and with a population size of 500 for 10 generations for the other two.
Table 2: Test functions t1, t2, t3 and t4 (2- and 3-objective problems selected from [16] and [4]).
5.2.2 Results
The MOPSO is run for different ε values using the ε-dominance technique. Then it is run again using the clustering technique; in the clustering technique, the maximum archive size is set to the archive size obtained by the ε-dominance.
Tables 3 and 4 show the results of the 2- and 3-objective test functions. In these tables, A denotes the MOPSO using ε-dominance and B the MOPSO using the clustering technique, size is the archive size, T_A and T_B are the CPU times needed to run each MOPSO on a 500 MHz Ultra-SPARC-IIe SUN workstation, C(·,·) refers to the number of solutions in one set that are weakly dominated by the other (see Equation 10 below), and S_A and S_B are the Sigma diversity metric values (in percent). All the values recorded in Tables 3 and 4 are averages over five different runs with different initial populations.
Figure 7: Influence of the archive size on CPU time for the (a) t1, (b) t2, (c) t3 and (d) t4 test functions (archive size vs. CPU time in log milliseconds; clustering vs. ε-dominance).
Table 3: Results of the 2-objective test functions (times in milliseconds). A: MOPSO using ε-dominance; B: MOPSO using the clustering technique. (*: the clustering can only find 616 solutions.)

  test function t1
  ε       | size | T_A  | T_B    | C(A,B) | C(B,A) | S_A | S_B
  0.1     | 16   | 3041 | 33429  | 4      | 0      | 87  | 87
  0.05    | 26   | 3312 | 37006  | 4      | 0      | 80  | 80
  0.025   | 48   | 3305 | 144151 | 1      | 1      | 93  | 91
  0.01    | 109  | 3754 | 204412 | 13     | 12     | 77  | 74
  0.005   | 204  | 4401 | 318432 | 80     | 13     | 93  | 87
  0.001   | 517  | 6331 | 23294  | 398    | 13     | 93  | 93
  0.0001  | 941  | 8083 | 6096*  | 828    | 7      | 97  | 93
  0       | 616  | 6096 | -      | -      | -      | -   | -

  test function t2
  ε       | size | T_A  | T_B    | C(A,B) | C(B,A) | S_A | S_B
  0.025   | 20   | 3127 | 86329  | 0      | 9      | 50  | 45
  0.01    | 40   | 3229 | 11807  | 3      | 5      | 30  | 28
  0.0075  | 52   | 3300 | 12270  | 0      | 11     | 27  | 30
  0.005   | 71   | 3358 | 15102  | 6      | 14     | 34  | 48
  0.0025  | 123  | 3635 | 11509  | 80     | 2      | 43  | 53
  0.001   | 170  | 3909 | 55498  | 33     | 54     | 51  | 60
  0.0005  | 220  | 4353 | 78289  | 32     | 65     | 59  | 58
  0.00025 | 249  | 4416 | 40306  | 179    | 7      | 61  | 61
  0.0001  | 507  | 5868 | 41203  | 460    | 2      | 55  | 51
  0       | 730  | 7560 | -      | -      | -      | -   | -
Table 4: Results of the 3-objective test functions (times in milliseconds). A: MOPSO using ε-dominance; B: MOPSO using the clustering technique.

  test function t3
  ε     | size  | T_A    | T_B     | C(A,B) | C(B,A) | S_A | S_B
  0.1   | 68    | 2968   | 164954  | 16     | 0      | 77  | 91
  0.07  | 113   | 3368   | 254867  | 17     | 0      | 83  | 83
  0.06  | 130   | 3740   | 321244  | 12     | 0      | 86  | 93
  0.05  | 176   | 4188   | 555989  | 19     | 2      | 91  | 93
  0.04  | 219   | 5191   | 510482  | 25     | 1      | 98  | 98
  0.03  | 351   | 6979   | 1161747 | 34     | 8      | 70  | 83
  0.02  | 660   | 12126  | 3277912 | 73     | 7      | 92  | 93
  0.015 | 956   | 16694  | 6154627 | 135    | 11     | 98  | 97
  0     | 10692 | 172782 | -       | -      | -      | -   | -

  test function t4
  ε     | size  | T_A    | T_B     | C(A,B) | C(B,A) | S_A | S_B
  0.1   | 30    | 2839   | 165275  | 3      | 0      | 32  | 28
  0.05  | 76    | 3275   | 298039  | 1      | 0      | 23  | 29
  0.04  | 84    | 3505   | 353349  | 1      | 1      | 23  | 28
  0.03  | 133   | 4194   | 552847  | 1      | 0      | 22  | 25
  0.025 | 157   | 4735   | 715612  | 1      | 1      | 21  | 24
  0.02  | 216   | 5788   | 1001681 | 7      | 1      | 24  | 27
  0.015 | 331   | 8103   | 1907411 | 2      | 0      | 18  | 19
  0.01  | 610   | 13574  | 5152398 | 10     | 1      | 19  | 21
  0     | 21351 | 373641 | -       | -      | -      | -   | -
Influence on computational time: Figure 7 shows the CPU times of the two methods from Tables 3 and 4 graphically, where size is the archive size and the CPU time is shown on a logarithmic scale (milliseconds). For all test functions, the CPU time increases when increasing the archive size. We have to note that when the limit of the archive size is bigger than the number of non-dominated solutions, clustering is not applied to the archive; this can be observed especially for the two 2-objective test functions, where the CPU time of the clustering technique decreases for large archive sizes. For both the 2- and 3-objective test functions, the CPU time when using ε-dominance is much less than when using the clustering technique. The clustering technique takes in some cases more than 100 times as long as ε-dominance to find the same number of solutions.
In Table 3, the ε-dominance method finds 941 solutions for the test function t1 when ε = 0.0001. But if we run the method with the clustering technique, the number of non-dominated solutions never reaches 941, so clustering is never applied to the archive and the run takes less time than the ε-dominance method.
Influence on convergence: In Tables 3 and 4, the factor C(A,B) denotes the number of solutions in set B that are weakly dominated by the solutions in set A, i.e.:

  C(A, B) = | { b ∈ B | ∃ a ∈ A : a ⪯ b } |    (10)

By comparing C(A,B) and C(B,A), where A is the MOPSO using ε-dominance and B the MOPSO using the clustering technique, we can conclude that for the same archive sizes the ε-dominance method weakly dominates more solutions of the clustering technique than vice versa. This also depends on the archive size and the number of objectives. For test functions t1, t2 and t3, the values of C(A,B) are much higher than C(B,A), from which we can conclude a better convergence. However, for the 3-objective test function t4, they are comparable.
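For completeness, a small sketch (not from the paper) of the comparison measure C(A, B) of Equation (10), using the weak-domination test of Definition 2:

```python
from typing import Sequence

def weakly_dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """Definition 2: f1 is not worse than f2 in all objectives (minimization)."""
    return all(a <= b for a, b in zip(f1, f2))

def coverage(A: Sequence[Sequence[float]], B: Sequence[Sequence[float]]) -> int:
    """Equation (10): number of solutions in B weakly dominated by some solution in A."""
    return sum(1 for b in B if any(weakly_dominates(a, b) for a in A))
```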
Influence on diversity: As explained, we have introduced a new diversity metric, whose values are reported as S_A and S_B in Tables 3 and 4 (in percent). Here, we study the results of the 2- and 3-objective test functions separately:
- 2-objective test functions: For both 2-objective test functions (t1 and t2), we have used the same number of reference lines (sigma values) as the archive size. Therefore, it is clear that for the test function with discontinuities the value of S will never reach 100%. The values of d used in measuring the diversities are as follows: for archive sizes less than 20, 0.1; between 20 and 50, 0.05; between 50 and 500, 0.01; and more than 500, 0.005. Comparing the diversity of the ε-dominance method with the clustering method by means of the S values, we conclude that for bigger archive sizes the ε-dominance method has higher S values, which means a better diversity. In some cases the clustering method obtains higher S values than ε-dominance; however, the diversity of solutions is comparable.
- 3-objective test functions: For both 3-objective test functions, we cannot achieve the best diversity of solutions. Moreover, the clustering method gives us a better diversity of solutions than the ε-dominance method. One of the reasons may be the shape of the approximate Pareto-front. In Table 4, the number of reference lines is determined by the resolution values given in Table 1. In our experiments, we have used
Figure 8: Results for one of the 3-objective test functions: (a) ε-dominance and (b) clustering in the objective space; (c), (d) the corresponding θ-φ axes of the spherical coordinates.
Figure 9: Results for the other 3-objective test function: (a) ε-dominance and (b) clustering in the objective space; (c), (d) the corresponding θ-φ axes of the spherical coordinates.
the number of reference lines very close to the archive sizes. Figures 8 and 9 show the results of the two 3-objective test functions and also the θ-φ axes of the solutions (spherical coordinates), to give a better view of the diversity of the solutions. We can observe that the ε-dominance method cannot obtain some of the solutions; therefore, the diversity of the solutions of ε-dominance, especially for one of the test functions, is not as good as that of the clustering method.
6 Conclusion and Future Work
In this paper, the influence of ε-dominance in comparison to clustering techniques is studied. The ε-dominance bounds the number of solutions in the archive and decreases the computational time; the computational time is in some cases much less than that of the method using the clustering technique. Using ε-dominance also has an influence on the convergence and diversity of solutions. The obtained solutions have comparable convergence and diversity when compared to the clustering technique, and in some cases are better in convergence and diversity, especially for the 2-objective test functions.
The diversity of the solutions is compared with a new diversity metric called the Sigma metric. According to this metric, the diversity of the solutions obtained by ε-dominance becomes worse than that of the clustering technique for an increasing number of objectives. However, we have to consider that the results are recorded only for a fixed number of generations; if we run the methods for a large number of generations, we obtain a very good diversity and convergence of solutions.
The introduced Sigma diversity metric is easy to implement and efficient for high-dimensional spaces, which makes it worthwhile in comparison to other diversity metrics. For continuous and positive objective values it gives us a very good measurement of the diversity of solutions. In the case of negative objective values, or when the objective functions have different ranges, the scaled Sigma method should be used.
In this paper, we have also suggested using an initial archive instead of an empty archive. This has an influence on the diversity of the solutions.
In the future, we would like to investigate and compare the ε-dominance method for different numbers of generations and for test functions with higher numbers of objectives.
Bibliography
[1] C. A. Coello Coello and M. S. Lechuga. MOPSO: A proposal for multiple objective particle swarm optimization. In IEEE Proceedings World Congress on Computational Intelligence, pages 1051–1056, 2003.
[2] K. Deb. Multi-Objective Optimization using Evolu-
tionary Algorithms. John Wiley & Sons, 2001.
[3] K. Deb, M. Mohan, and S. Mishra. A fast multi-objective evolutionary algorithm for finding well-spread Pareto-optimal solutions. KanGAL Report No. 2003002, Indian Institute of Technology Kanpur, 2002.
[4] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scal-
able multi-objective optimization test problems. In
IEEE Proceedings World Congress on Computational
Intelligence, 2002.
[5] A. Farhang-Mehr and S. Azarm. Diversity assessment of Pareto-optimal sets: An entropy approach. In IEEE Proceedings World Congress on Computational Intelligence, 2002.
[6] J. E. Fieldsend and S. Singh. A multi-objective al-
gorithm based upon particle swarm optimisation, an
efficient data structure and turbulence. In The 2002
U.K. Workshop on Computational Intelligence, pages
34–44, 2002.
[7] X. Hu, R. Eberhart, and Y. Shi. Particle swarm with
extended memory for multiobjective optimization. In
IEEE Swarm Intelligence Symposium, pages 193–198,
2003.
[8] V. Khare, X. Yao, and K. Deb. Performance scaling of multi-objective evolutionary algorithms. In Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization, pages 376–390, 2003.
[9] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Archiving with guaranteed convergence and diversity in multi-objective optimization. In Genetic and Evolutionary Computation Conference (GECCO 2002), pages 439–447, 2002.
[10] S. Mostaghim and J. Teich. Quad-trees: A data struc-
ture for storing pareto-sets in multi-objective evolu-
tionary algorithms with elitism. In Evolutionary Com-
putation Based Multi-Criteria Optimization: Theoret-
ical Advances and Applications, to appear, 2003.
[11] S. Mostaghim and J. Teich. Strategies for finding good
local guides in multi-objective particle swarm opti-
mization. In IEEE Swarm Intelligence Symposium,
pages 26–33, 2003.
[12] S. Mostaghim, J. Teich, and A. Tyagi. Comparison of
data structures for storing pareto-sets in MOEAs. In
IEEE Proceedings World Congress on Computational
Intelligence, pages 843–849, 2002.
[13] C. H. Papadimitriou and M. Yannakakis. On the ap-
proximability of trade-offs and optimal access of web
sources (extended abstract). In IEEE Symposium on
Foundations of Computer Science, 2000.
[14] K. E. Parsopoulos and M. N. Vrahatis. Recent ap-
proaches to global optimization problems through par-
ticle swarm optimization. In Natural Computing, 1
(2-3), pages 235–306, Kluwer Academic Publishers,
2002.
[15] O. Schütze, S. Mostaghim, M. Dellnitz, and J. Teich. Covering Pareto sets by multilevel evolutionary subdivision techniques. In Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization, pages 118–132, 2003.
[16] E. Zitzler. Evolutionary Algorithms for Multiobjec-
tive Optimization: Methods and Applications. TIK-
Schriftenreihe Nr. 30, Diss ETH No. 13398, Shaker
Verlag, Germany, Swiss Federal Institute of Technol-
ogy (ETH) Zurich, 1999.
[17] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. In EUROGEN 2001, Evolutionary Methods for Design, Optimisation and Control with Applications to Industrial Problems, 2001.