The Role of ε-dominance in Multi-Objective Particle Swarm Optimization
Methods
Sanaz Mostaghim
Electrical Engineering Department,
University of Paderborn,
Paderborn, Germany
mostaghim@date.upb.de
Jürgen Teich
Computer Science Department
Friedrich-Alexander-University,
Erlangen, Germany
teich@informatik.uni-erlangen.de
Abstract  In this paper, the influence of ε-dominance on Multi-objective Particle Swarm Optimization (MOPSO) methods is studied. The most important role of ε-dominance is to bound the number of nondominated solutions stored in the archive (archive size), which influences the computational time, convergence and diversity of solutions. Here, ε-dominance is compared with the existing clustering technique for fixing the archive size, and the solutions are compared in terms of computational time, convergence and diversity. A new diversity metric is also suggested. The results show that the ε-dominance method can find solutions much faster than the clustering technique, with comparable and in some cases even better convergence and diversity.
1 Introduction
Archiving is studied in many Multi-objective Optimization (MO) methods. In the context of Evolutionary MO methods, archiving is called elitism and is used in several methods like Rudolph's Elitist MOEA, Elitist NSGA-II, SPEA, PAES (see [2] for all) and SPEA2 [17]. In these methods, the nondominated (best) solutions of each generation are kept in an external population, called the archive. Therefore the archive must be updated in each generation. The time needed for updating the archive depends on the archive size, the population size and the number of objectives, and increases sharply when increasing the values of these three factors [10]. There are a few studies on data structures for storing the archive as a nondominated set, e.g., in [12, 6]. These data structures also become expensive when increasing the archive size. Indeed, it is reasonable to fix the size of the archive to avoid the large number of comparisons during updating. There are several methods, like clustering, truncation in SPEA2 and crowding techniques, to fix the archive size. These methods must also keep a good diversity of solutions, which tends to make them the most expensive part of the updating procedure. Here, we propose to use the idea of ε-dominance [13, 9] to fix the size of the archive to a certain amount. This size depends on ε: by increasing ε, the archive size decreases. We use this method to obtain the approximate Pareto-front and compare it with the existing MOPSO method which uses a clustering technique. The ε-dominance method influences the convergence and diversity of solutions, while reducing the computational time; in some cases the computational time is more than 100 times lower than that of the clustering technique. The comparison is done in terms of computational time, convergence and diversity of solutions. There are several metrics for comparing convergence and diversity of solutions. Here we suggest a new idea for a diversity metric (Sigma method), which is inspired by [11]. In [11], we have proposed the Sigma method for finding the local guides in MOPSO. The idea of this method can be used to obtain a good diversity metric for different test functions; however, for some functions other diversity metrics are also suggested [8]. We also study using an initial archive instead of an empty one. This has more influence in MOPSO techniques than in other MO methods. The empty archive is filled in the first generation by the nondominated solutions of the initial population, and these archive members will be the local guides for the particles in the population. But if they are not in well-distributed positions, we will lose the diversity of solutions just after one generation; therefore there is the need to have an initial well-distributed archive.
In this paper, the definitions of domination and ε-domination are studied in Section 2. In Section 3 the MOPSO method is briefly reviewed, in Section 4 archiving is discussed, and in Section 5 the combination of MOPSO and ε-dominance, the results on different test functions and the comparison with the clustering technique are studied. Finally, we conclude the paper in Section 6.
2 Deﬁnitions
A multiobjective optimization problem is of the form

  minimize f(x) = (f_1(x), f_2(x), ..., f_m(x))        (1)

subject to x ∈ S, involving m conflicting objective functions f_i : R^n → R that we want to minimize simultaneously. The decision vectors x = (x_1, x_2, ..., x_n)^T belong to the feasible region S ⊂ R^n. The feasible region is formed by constraint functions. We denote the image of the feasible region by Z ⊂ R^m and call it the feasible objective region. The elements of Z are called objective vectors and they consist of the objective (function) values f(x) = (f_1(x), f_2(x), ..., f_m(x)).
Definition 1: Domination  A decision vector x_1 is said to dominate a decision vector x_2 (denoted x_1 ≺ x_2) iff:
- The decision vector x_1 is not worse than x_2 in all objectives, i.e., f_i(x_1) ≤ f_i(x_2) for all i = 1, ..., m.
- The decision vector x_1 is strictly better than x_2 in at least one objective, i.e., f_i(x_1) < f_i(x_2) for at least one i ∈ {1, ..., m}.

Definition 2: Weak Domination  A decision vector x_1 weakly dominates x_2 (denoted x_1 ⪯ x_2) iff:
- The decision vector x_1 is not worse than x_2 in all objectives, i.e., f_i(x_1) ≤ f_i(x_2) for all i = 1, ..., m.
Definition 3: Pareto-Optimal Front  A decision vector x ∈ S is called Pareto-optimal if there does not exist another x' ∈ S that dominates it. An objective vector is called Pareto-optimal if the corresponding decision vector is Pareto-optimal.
Let F ⊂ R^m be a set of objective vectors. The Pareto-optimal front P(F) contains all vectors f ∈ F which are not dominated by any vector f' ∈ F:

  P(F) = { f ∈ F | ¬∃ f' ∈ F : f' ≺ f }        (2)
Definition 4: ε-Domination  A decision vector x_1 is said to ε-dominate a decision vector x_2 for some ε > 0 (denoted x_1 ≺_ε x_2) iff:
- f_i(x_1)/(1+ε) ≤ f_i(x_2) for all i = 1, ..., m.
- f_i(x_1)/(1+ε) < f_i(x_2) for at least one i ∈ {1, ..., m}.

Figure 1 shows the concept of ε-domination. By this definition, the area ε-dominated by a vector grows with its objective values: for smaller objective values the ε-dominated area is smaller than for larger objective values.
Figure 1: Domination and ε-domination: the region dominated by a vector f is compared with the larger region ε-dominated by f, which extends to f_i/(1+ε) in each objective.
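The two relations can be sketched as follows; this is a minimal illustration of our own (not the authors' code), with function names chosen for readability and minimization assumed:

```python
# Pareto dominance and the epsilon-dominance relation of Definition 4:
# f epsilon-dominates g iff f_i/(1+eps) <= g_i in every objective,
# with strict inequality in at least one objective (minimization).

def dominates(f, g):
    """True iff objective vector f Pareto-dominates g."""
    return (all(a <= b for a, b in zip(f, g))
            and any(a < b for a, b in zip(f, g)))

def eps_dominates(f, g, eps):
    """True iff f epsilon-dominates g for some eps > 0."""
    shrunk = [a / (1.0 + eps) for a in f]  # scale f down by (1 + eps)
    return (all(a <= b for a, b in zip(shrunk, g))
            and any(a < b for a, b in zip(shrunk, g)))
```

Note that ε-domination is a weaker requirement: (1.0, 2.0) does not Pareto-dominate (0.95, 2.5), but it ε-dominates it for ε = 0.1, which is exactly why fewer mutually non-ε-dominated points fit in the archive.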
Definition 5: ε-approximate Pareto Front  Let F ⊂ R^m be a set of vectors and ε > 0. The ε-approximate Pareto front P_ε(F) contains all vectors f ∈ F which are not ε-dominated by any vector f' ∈ F:

  P_ε(F) = { f ∈ F | ¬∃ f' ∈ F : f' ≺_ε f }        (3)

We have to note that the set P_ε(F) is not unique, but contains only a certain number of vectors, depending on the value of ε. This has been studied in [13, 9]. For any finite ε > 0 and any set F with objective vectors bounded by 1 ≤ f_i ≤ K, there exists a set P_ε(F) containing

  |P_ε(F)| ≤ (⌈ln K / ln(1+ε)⌉)^(m-1)        (4)

vectors. Here, we consider that ε is the same for all objectives.
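As a small worked illustration of this bound (the function name archive_bound is our own), the following computes the maximum archive size for a given ε, objective bound K and number of objectives m:

```python
import math

# Archive-size bound of Equation (4): for objective values in [1, K] and
# m objectives, an epsilon-approximate Pareto front with at most
# ceil(ln K / ln(1+eps))**(m-1) elements exists [13, 9].

def archive_bound(K, eps, m):
    return math.ceil(math.log(K) / math.log(1.0 + eps)) ** (m - 1)
```

For example, with K = 1000 and ε = 0.1 the bound is 73 for two objectives and 73² = 5329 for three; increasing ε to 0.5 shrinks the 2-objective bound to 18, illustrating how larger ε values keep the archive smaller.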
3 MOPSO Methods
Figure 2 shows the algorithm of a multiobjective optimization method, which we use here as the Multi-Objective Particle Swarm Optimization technique. MOPSO methods are studied in [1, 6, 11, 14]. In this algorithm, t denotes the generation index, P_t the population, and A_t the archive at generation t. In Step 2 the population is initialized, which contains the initial particles, their positions x_i and their initial velocities v_i. The external archive is also initialized in this step, which is empty. The function Evaluate in Step 3 evaluates the particles in the population P_t, and the function Update in Step 4 updates the archive and stores the nondominated solutions among P_t and A_t in the archive.
BEGIN
  Step 1: t = 0;
  Step 2: Initialize population P_t and archive A_t = ∅;
  Step 3: Evaluate(P_t);
  Step 4: A_{t+1} = Update(P_t, A_t);
  Step 5: Update the velocities and positions of the particles, giving P_{t+1};
  Step 6: t = t + 1;
  Step 7: Unless a termination criterion is met, goto Step 3;
END
Figure 2: Typical structure of an archive-based MOPSO.
Step 5 is the most critical step in MOPSO techniques. In this step the velocity and position of each particle i are updated as below:

  v_{i,j} = w v_{i,j} + c_1 R_1 (p_{i,j} - x_{i,j}) + c_2 R_2 (p_{g,j} - x_{i,j})
  x_{i,j} = x_{i,j} + v_{i,j}        (5)

where j = 1, ..., n, w is the inertia weight of the particle, c_1 and c_2 are two positive constants, and R_1 and R_2 are random values in the range [0, 1].
According to Equation (5), each particle has to change its position x_i towards the position of a local guide p_g, which must be selected from the updated set of nondominated solutions stored in the archive A_{t+1}. How to select the local guide from the archive has a great impact on convergence and diversity of the solutions and is studied in [1, 6, 7, 11]. In this equation, p_i is like a memory for the particle and keeps the nondominated (best) position of the particle, obtained by comparing the new position x_i in the objective space with p_i (p_i is the last nondominated (best) position of the particle i).
At the end of this step, a turbulence factor is added to the positions of the particles. This is done by adding a random value to the current position of each particle:

  x_{i,j} = x_{i,j} + R_t        (6)

where R_t is a random value added to the updated position of each particle with a given probability.
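Equations (5) and (6) can be sketched as follows; this is our own illustrative code rather than the authors' implementation, with parameter names following the text and the turbulence magnitude chosen arbitrarily:

```python
import random

# One particle update per Equations (5) and (6): inertia weight w,
# constants c1, c2, the particle's own best position p_best, a local
# guide drawn from the archive, and a turbulence step applied with a
# small probability.

def move_particle(x, v, p_best, guide, w=0.4, c1=1.0, c2=1.0,
                  turbulence=0.01, rng=random.random):
    r1, r2 = rng(), rng()
    new_v = [w * vj + c1 * r1 * (pb - xj) + c2 * r2 * (g - xj)   # Eq. (5)
             for xj, vj, pb, g in zip(x, v, p_best, guide)]
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    if rng() < turbulence:                                       # Eq. (6)
        new_x = [xj + (rng() - 0.5) for xj in new_x]             # random offset
    return new_x, new_v
```

Passing a deterministic rng makes the update reproducible, which is convenient when comparing guide-selection strategies.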
The steps of the MOPSO algorithm are iteratively repeated until a termination criterion is met, such as a maximum number of generations, or when there has been no change in the set of nondominated solutions found for a given number of generations. The output of the MOPSO method is the set of nondominated solutions stored in the final archive.
4 Archiving
As explained in Section 3, an external archive is used to keep the nondominated solutions found in each generation. The archive members must be updated in each generation by the function Update (Step 4, Figure 2). The Update function checks whether members of the current population P_t are nondominated with respect to the members of the actual archive A_t, decides which of these candidates should be inserted into the archive, and which archive members should be removed. Thereby, an archive is called domination-free if no two points in the archive dominate each other. Obviously, during execution of the function Update, dominated solutions must be deleted in order to keep the archive domination-free. Several data structures have been proposed for storing the nondominated solutions in the archive [6, 12] in order to reduce the computational time of the method. In this section, we focus on two properties of the archive and their influence on the results of the method: the size of the archive and the initial archive members.
4.1 Archive Size and ε-dominance
In most multiobjective optimization (MO) methods the archive must contain a certain number of solutions, while keeping a good diversity of solutions. In some MO methods (e.g., [17]) the archive must have a fixed size, and in the case that the number of nondominated solutions is less than the fixed size, some particles of the population are selected at random to be inserted into the archive. In the case that the size of the set of nondominated solutions becomes higher than the fixed size, truncation or clustering techniques are applied. In [16, 11], the archive size has an upper bound, and as soon as the number of nondominated solutions becomes higher than the archive size, truncation techniques are used. However, we have to note that by increasing the size of the archive the computational time increases. In [10], the computational times of different test functions with different numbers of objectives and archive sizes, for different population sizes, are discussed. By increasing the number of objectives, the population size and the archive size, the computational time of the method increases sharply.
Here, we propose to use the concept of ε-domination instead of domination when updating the archive, i.e., instead of comparing the particles using the domination criterion, we compare them using the ε-dominance criterion. Therefore, the size of the archive will have the upper bound of Equation (4), where K is the upper bound of the objective values. It is obvious that the size of the archive depends on the ε value. Hence, by using this dominance we can keep the size of the archive limited and reduce the computational time. Applying the ε-dominance in MOPSO techniques also influences the convergence and diversity of the results, as will be discussed later.
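The ε-dominance-based Update step described above can be sketched as follows; this is a simplified illustration of our own (a real implementation would also carry the decision vectors and personal bests alongside the objective vectors):

```python
# Keep the archive epsilon-domination-free: a candidate enters only if no
# archive member epsilon-dominates it, and any members it epsilon-dominates
# are removed (Definition 4, minimization).

def eps_dominates(f, g, eps):
    shrunk = [a / (1.0 + eps) for a in f]
    return (all(a <= b for a, b in zip(shrunk, g))
            and any(a < b for a, b in zip(shrunk, g)))

def update_archive(archive, candidates, eps):
    for c in candidates:
        if any(eps_dominates(a, c, eps) for a in archive):
            continue                      # c is rejected
        # drop archive members that c epsilon-dominates, then insert c
        archive = [a for a in archive if not eps_dominates(c, a, eps)]
        archive.append(c)
    return archive
```

Because ε-domination discards near-duplicates such as (1.02, 1.02) next to (1.0, 1.0), the archive stays bounded without any explicit clustering or truncation pass.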
4.2 Initial Archive
In the MOPSO methods the initial archive is empty. So in the first generation the nondominated solutions of the initial population are stored in the archive, and the particles of the population should select their best local guide among these archive members. Selecting the first local guides from the archive has a great impact on the diversity of solutions in the next generations, especially in the methods explained in [6, 11]. Hence the diversity of solutions depends on the first nondominated solutions. But if the initial archive is not empty and contains some well-distributed nondominated solutions, the solutions converge faster than before, while keeping a good diversity. Figure 3 (left) shows the initial population and the nondominated particles among them, which are stored in the empty archive. In this figure, particles select one of these archive members as the local guide by using the Sigma method [11], and one can imagine that after one generation the particles will move towards the left part of the space. In Figure 3 (right), the initial archive is not empty, but has some members which dominate all the particles in the population. This time, in the next generation, the particles will obtain a better diversity than in the left figure.
Figure 3: Influence of the initial archive.
Now, the question is how to find a good initial archive. The initial archive can be found in different ways. The first possibility is to run the MOPSO with an empty archive for a large population and a few generations. The large population gives us a good diversity, and a few generations (e.g., 5 generations) advance the population only slightly towards convergence. Another possibility is to use the results of a short MOEA (Multi-Objective Evolutionary Algorithm). Here, short means a MOEA with a few individuals and a few generations (e.g., 10 individuals and 10 generations). We know that a MOEA can give us some good solutions with a very good diversity after a few generations. Short MOEAs have also been used in combination with other methods like subdivision methods [15].
5 ε-dominance and MOPSO
In this section we apply the ε-dominance in the MOPSO method and then compare it with the MOPSO using the clustering technique. Both of these methods use the Sigma method [11] for finding the best local guides. The clustering technique is explained in [16] and is also used in MOPSO in [11]. Here, we also use an initial archive for each test function. The initial archives are the results of a short MOPSO using the Sigma method. The short MOPSOs have a bigger population size than the usual MOPSO and are run for a few generations. In this section, we study the influence of ε-dominance on the computational time, convergence and diversity of solutions. For comparing the diversity of solutions, we also suggest a new diversity metric.
5.1 Diversity Metric
We can consider the position of each solution in 2- and 3-objective spaces in polar coordinates (r, θ) and spherical coordinates (r, θ, φ), respectively. Inspired by these coordinates, we can formulate the diversity of solutions as a good distribution in terms of their angles θ for 2-objective spaces, and θ and φ for 3-objective spaces. However, for higher dimensional objective spaces, we cannot define coordinate axes which give us a simple distribution like in polar or spherical coordinates. Therefore, we suggest to use the concept of the Sigma method, which we have introduced in [11] for calculating the local guides in MOPSO. Here, we briefly explain the Sigma method and how one can use it to calculate the diversity of solutions.
Sigma Method [11]  In this method, a value σ is assigned to each solution with coordinates (f_1, f_2), so that all the solutions which are on the same line through the origin have the same value of σ. Therefore, σ is defined as follows:

  σ = (f_1² - f_2²) / (f_1² + f_2²)        (7)

Figure 4 shows the values of σ for different lines. Indeed, σ encodes the angle between the line and the f_1 axis.
Figure 4: Sigma method for 2- and 3-objective spaces.
In the general case, σ is a vector of m(m-1)/2 elements, where m is the dimension of the objective space. In this case, each element of σ is the combination of two coordinates in terms of Equation (7). For example, for the three coordinates f_1, f_2 and f_3, it is defined as follows:

  σ = (f_1² - f_2², f_2² - f_3², f_3² - f_1²) / (f_1² + f_2² + f_3²)        (8)

Different values of σ for different values of f_1, f_2 and f_3 are shown in Figure 4. In the general case, when a point has the same position in each dimension (e.g., f_1 = f_2 = f_3 in 3-dimensional space), σ = (0, ..., 0).
We have to note that the objective functions must contain positive values; otherwise, we have to transform them into the positive regions, and when the objectives are not in the same ranges, scaled sigma values [11] are used.
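Equations (7) and (8) can be sketched as follows (our own code; the wrap-around pairing of consecutive coordinates reproduces Equation (8) for m = 3, and positive objective values are assumed):

```python
# Sigma assignment: a scalar for m = 2 (Eq. 7), a vector of pairwise
# differences of squared objectives divided by their sum for m = 3 (Eq. 8).

def sigma(f):
    squares = [fi * fi for fi in f]
    total = sum(squares)
    if len(f) == 2:
        return (squares[0] - squares[1]) / total            # Eq. (7)
    m = len(f)
    # consecutive pairs with wrap-around: (f1^2-f2^2, f2^2-f3^2, f3^2-f1^2)
    return tuple((squares[i] - squares[(i + 1) % m]) / total
                 for i in range(m))
```

As the text notes, a point on the f_1 axis gets σ = 1, a point on the diagonal gets σ = 0 (or the zero vector in 3 dimensions).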
Sigma Diversity Metric  Figure 5 shows the idea of using the Sigma method as a diversity metric for 2-objective spaces. As shown, lines with different sigma values are drawn from the origin. These lines are called reference lines, each with a different angle to the f_1 axis. We consider l reference lines for computing the diversity of an archive of a given size. In the next step, the sigma value of each reference line should be computed, which we call the reference sigma value. In the case that all of the solutions are well distributed, they should be spread over many different regions.
Figure 5: 2-objective Sigma Diversity Metric. Black points are the solutions of a 2-objective test function and the lines are reference lines.
For higher dimensional spaces, the reference lines are also defined by lines passing through the origin. In order to find the angles between the lines and each coordinate axis, it is easier to find reference points located on the reference lines. Hence, in an m-dimensional space, the coordinate of each reference point is a vector of m elements. Algorithm 1 calculates the coordinates of the reference points. Indeed, in this algorithm, the first coordinate of each point is kept constant and the other coordinates are varied. However, for obtaining the whole set of reference points, the algorithm must be repeated m times, each time keeping one of the coordinates constant; for example, in the second run the second coordinate is kept constant.

Algorithm 1 Calculate coordinates of reference points
  for each coordinate to be kept constant do
    for each subdivision of the remaining coordinates do
      compute the coordinates of the corresponding reference point
    end for
  end for
The number of reference points produced by Algorithm 1 depends on m and on a resolution parameter l. In 2-objective spaces, the number of reference lines is equal to l. In higher dimensional spaces, l is the number of regions which are separated by the reference lines on the plane generated by only two of the coordinate axes. For example, in Figure 6, on the plane generated by two of the axes there are four regions separated by reference lines. In higher dimensional spaces, the number of reference points made by Algorithm 1 is more than the required number of reference lines, because by repeating the algorithm some points lie on the same reference line. Therefore, the number of reference lines can be calculated after finding the sigma value (vector) of each reference point. The points located on a reference line will have the same sigma vectors, and the number of reference lines is the number of non-repeated reference sigma vectors. Table 1 shows the number of reference lines for different values of l in 3-objective spaces.

Figure 6: Reference lines in 3-objective space.

Table 1: Number of reference lines obtained from different l values for 3-objective spaces

   l    number of ref. lines     d
   4            25             0.15
   6            67             0.1
   8           133             0.1
  10           223             0.1
  12           337             0.1
  14           475             0.05
  16           637             0.05
  18           823             0.05
  20          1033             0.05

A binary flag is also kept beside each reference sigma vector, which is 0 at the beginning. The flag of a reference sigma vector can only turn to 1 when at least one solution has a sigma vector equal to it or within a Euclidean distance of less than d. A counter counts the reference lines with flags equal to 1, and the diversity metric becomes:

  D = (number of reference lines with flag = 1) / (number of reference lines)        (9)

The value of d depends on the test function; however, it should decrease when increasing the number of reference lines. Table 1 shows an example of choosing d for 3-objective test functions.
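The metric of Equation (9) can be sketched as follows (our own code; solution and reference sigma values are assumed to be precomputed, as scalars for 2 objectives or tuples for more):

```python
import math

# Sigma diversity metric (Eq. 9): a reference line's flag turns to 1 when
# some solution's sigma value lies within Euclidean distance d of the
# reference sigma value; the metric is the fraction of flagged lines.

def sigma_diversity(solution_sigmas, reference_sigmas, d):
    def dist(a, b):
        if isinstance(a, tuple):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return abs(a - b)                  # scalar sigma (2 objectives)
    flagged = sum(1 for ref in reference_sigmas
                  if any(dist(s, ref) <= d for s in solution_sigmas))
    return flagged / len(reference_sigmas)
```

A perfectly spread archive flags every reference line and yields D = 1; clustered solutions flag few lines and yield a small D.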
The Sigma diversity metric is easy to implement and is very efficient in computing the diversity of solutions in high dimensional spaces. The 2-objective Sigma diversity metric seems to have some similarities to the entropy approach [5] and the sparsity measurement [3, 8], especially when measuring the diversity of very convex (or non-convex) objective fronts. But in comparison to them, it is very easy to calculate the diversity of solutions in high dimensional spaces using the Sigma diversity metric. The Sigma diversity metric, like the Sigma method, can also be scaled for different ranges of the objective values. However, the objective values must contain just positive values, and negative values must be transferred to the positive part (i.e., the upper right quadrant of a circle in two dimensions). This is possible without loss of generality.
5.2 Test Functions
The test functions are 2- and 3-objective optimization problems selected from [16], [4] and are shown in Table 2.
5.2.1 Parameter Settings
The tests are done for 120 particles in the population and 300 generations, with a turbulence factor of 0.01 and an inertia weight of 0.4. Initial archives are obtained by running a MOPSO with population size 100 and 200 generations for two of the test functions, and with population size 500 and 10 generations for the other two.
Table 2: Test functions t1–t4.
5.2.2 Results
The MOPSO is run for different ε values using the ε-dominance technique. Then it is run again using the clustering technique. In the clustering technique, the maximum archive size is set to the archive size obtained by the ε-dominance.
Tables 3 and 4 show the results of the 2- and 3-objective test functions. In these tables, A is the MOPSO using the ε-dominance and B is the MOPSO using the clustering technique; size is the archive size; t_A and t_B are the CPU times needed to run each MOPSO on a 500 MHz UltraSPARC-IIe SUN workstation; C_AB is the number of solutions in B that are weakly dominated by A (and C_BA vice versa); and S_A, S_B are the Sigma diversity metric values (in percent). All the values recorded in Tables 3 and 4 are averages over five different runs with different initial populations.
Figure 7: Influence of the archive size on CPU time (log time in ms) for test functions (a) t1, (b) t2, (c) t3 and (d) t4, comparing the clustering and ε-dominance methods.
Table 3: Results of 2-objective test functions (time in milliseconds). A: MOPSO using ε-dominance; B: MOPSO using the clustering technique. (*: the clustering can just find 616 solutions)

first 2-objective test function:
     ε      size    t_A      t_B    C_AB  C_BA  S_A  S_B
   0.1        16    3041    33429     4     0    87   87
   0.05       26    3312    37006     4     0    80   80
   0.025      48    3305   144151     1     1    93   91
   0.01      109    3754   204412    13    12    77   74
   0.005     204    4401   318432    80    13    93   87
   0.001     517    6331    23294   398    13    93   93
   0.0001    941    8083    6096*   828     7    97   93
   0         616    6096      -      -     -     -    -

second 2-objective test function:
     ε      size    t_A      t_B    C_AB  C_BA  S_A  S_B
   0.025      20    3127    86329     0     9    50   45
   0.01       40    3229    11807     3     5    30   28
   0.0075     52    3300    12270     0    11    27   30
   0.005      71    3358    15102     6    14    34   48
   0.0025    123    3635    11509    80     2    43   53
   0.001     170    3909    55498    33    54    51   60
   0.0005    220    4353    78289    32    65    59   58
   0.00025   249    4416    40306   179     7    61   61
   0.0001    507    5868    41203   460     2    55   51
   0         730    7560      -      -     -     -    -
Table 4: Results of 3-objective test functions (time in milliseconds). A: MOPSO using ε-dominance; B: MOPSO using the clustering technique.

test function t3:
     ε      size     t_A       t_B     C_AB  C_BA  S_A  S_B
   0.1        68     2968    164954     16     0    77   91
   0.07      113     3368    254867     17     0    83   83
   0.06      130     3740    321244     12     0    86   93
   0.05      176     4188    555989     19     2    91   93
   0.04      219     5191    510482     25     1    98   98
   0.03      351     6979   1161747     34     8    70   83
   0.02      660    12126   3277912     73     7    92   93
   0.015     956    16694   6154627    135    11    98   97
   0       10692   172782       -       -     -     -    -

test function t4:
     ε      size     t_A       t_B     C_AB  C_BA  S_A  S_B
   0.1        30     2839    165275      3     0    32   28
   0.05       76     3275    298039      1     0    23   29
   0.04       84     3505    353349      1     1    23   28
   0.03      133     4194    552847      1     0    22   25
   0.025     157     4735    715612      1     1    21   24
   0.02      216     5788   1001681      7     1    24   27
   0.015     331     8103   1907411      2     0    18   19
   0.01      610    13574   5152398     10     1    19   21
   0       21351   373641       -       -     -     -    -
Influence on computational time  Figure 7 shows the CPU times of the two methods of Tables 3 and 4 graphically, where size is the archive size and the CPU time is shown on a logarithmic scale in milliseconds. For all test functions, the CPU time increases when increasing the archive size. We have to note that when the limit of the archive size is bigger than the number of nondominated solutions, clustering is not applied to the archive. This can be observed especially for both of the 2-objective test functions, where the CPU time of the clustering technique decreases for large archive sizes. In both the 2- and 3-objective test functions, the CPU time of the program when using the ε-dominance is much less than when using the clustering technique. The clustering technique takes in some cases more than 100 times as long as the ε-dominance to find the same number of solutions.
In Table 3, the ε-dominance method finds 941 solutions for the first test function when ε = 0.0001. But if we apply the method using the clustering technique, we see that we never reach the number of 941 as the archive size in order to apply clustering on it; therefore it takes less time than the ε-dominance method.
Influence on convergence  In Tables 3 and 4, the factor C_AB shows the number of solutions in set B that are weakly dominated by the solutions in set A, i.e.:

  C_AB = |{ b ∈ B | ∃ a ∈ A : a ⪯ b }|        (10)

By comparing C_AB and C_BA, where A is the MOPSO using the ε-dominance and B the MOPSO using the clustering technique, we can conclude that for the same archive sizes the ε-dominance dominates more solutions of the results of the clustering technique. This also depends on the archive size and the number of objectives. For test functions t1, t2 and t3, the values of C_AB are much higher than C_BA, from which we can conclude a better convergence. However, for the 3-objective test function t4, they are comparable.
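Equation (10) can be sketched as follows (our own code, minimization assumed):

```python
# Convergence measure of Equation (10): count the solutions in B that are
# weakly dominated (no worse in every objective) by some solution in A.

def weakly_dominates(f, g):
    return all(a <= b for a, b in zip(f, g))

def c_metric(A, B):
    return sum(1 for b in B if any(weakly_dominates(a, b) for a in A))
```

Note the measure is not symmetric, which is why both C_AB and C_BA are reported in the tables.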
Influence on diversity  As explained, we have introduced a new diversity metric, reported as the S values in Tables 3 and 4. The S value is shown in percent. Here, we study the results of the 2- and 3-objective test functions separately:
- 2-objective test functions: In both of the 2-objective test functions, we have used the same number of reference lines (sigma values) as the archive size. Therefore, it is clear that for a test function which has discontinuities, the value of S will never reach 100%. The values of d used in measuring the diversities are as follows: for archive sizes less than 20, 0.1; between 20 and 50, 0.05; between 50 and 500, 0.01; and more than 500, 0.005. Comparing the diversity of the ε-dominance method with the clustering method by using the S values, we conclude that for bigger archive sizes the ε-dominance has bigger S values, which means a better diversity. In some cases the clustering method obtains higher S values than the ε-dominance. However, the diversity of solutions is comparable.
- 3-objective test functions: In both of the 3-objective test functions, we cannot achieve the best diversity of solutions. Moreover, the clustering method gives us a better diversity of solutions than the ε-dominance method. One of the reasons may be the shape of the approximate Pareto-front. In Table 4, the number of reference lines is determined by the value of l in Table 1. In our experiments, we have used a number of reference lines very close to the archive sizes. Figures 8 and 9 show the results of the t3 and t4 test functions, and also the θ-φ axes of the solutions (spherical coordinates), for a better observation of the diversity of solutions. We can observe that the ε-dominance method cannot obtain some solutions; therefore the diversity of solutions of the ε-dominance is not as good as with the clustering method.
Figure 8: t3 test function. (a), (b): objective space; (c), (d): θ-φ axes of the spherical coordinates, for the ε-dominance and clustering methods respectively.
Figure 9: t4 test function. (a), (b): objective space; (c), (d): θ-φ axes of the spherical coordinates, for the ε-dominance and clustering methods respectively.
6 Conclusion and Future Work
In this paper, the influence of the ε-dominance in comparison to the clustering technique is studied. The ε-dominance bounds the number of solutions in the archive and decreases the computational time. The computational time is in some cases much less than that of the method using the clustering technique. Using ε-dominance also influences the convergence and diversity of solutions. The obtained solutions have comparable convergence and diversity when compared to the clustering technique, and in some cases are better in convergence and diversity, especially for the 2-objective test functions.
The diversity of the solutions is compared with a new diversity metric called the Sigma metric. According to this metric, the diversity of the solutions obtained by ε-dominance gets worse than with the clustering technique for an increasing number of objectives. However, we have to consider that the results are just for the recorded number of generations, and if we run the methods for a larger number of generations we obtain a very good diversity and convergence of solutions.
The introduced Sigma diversity metric is easy to implement and efficient for high dimensional spaces, which makes it valuable in comparison to other diversity metrics. Dealing with continuous and positive objective values, it gives us a very good measurement of the diversity of solutions. In the case of negative objective values, or when the objective functions have different ranges, the scaled Sigma method should be used.
In this paper, we have also suggested to use an initial archive instead of an empty archive. This influences the diversity of the solutions.
In the future we would like to investigate and compare the ε-dominance method for different numbers of generations and for different test functions with higher numbers of objectives.
Bibliography
[1] C. A. Coello Coello and M. S. Lechuga. MOPSO: A proposal for multiple objective particle swarm optimization. In IEEE Proceedings, World Congress on Computational Intelligence, pages 1051–1056, 2002.
[2] K. Deb. Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, 2001.
[3] K. Deb, M. Mohan, and S. Mishra. A fast multi-objective evolutionary algorithm for finding well-spread Pareto-optimal solutions. KanGAL Report No. 2003002, Indian Institute of Technology Kanpur, 2003.
[4] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable multi-objective optimization test problems. In IEEE Proceedings, World Congress on Computational Intelligence, 2002.
[5] A. Farhang-Mehr and S. Azarm. Diversity assessment of Pareto-optimal sets: An entropy approach. In IEEE Proceedings, World Congress on Computational Intelligence, 2002.
[6] J. E. Fieldsend and S. Singh. A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. In The 2002 U.K. Workshop on Computational Intelligence, pages 34–44, 2002.
[7] X. Hu, R. Eberhart, and Y. Shi. Particle swarm with extended memory for multiobjective optimization. In IEEE Swarm Intelligence Symposium, pages 193–198, 2003.
[8] V. Khare, X. Yao, and K. Deb. Performance scaling of multi-objective evolutionary algorithms. In Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization, pages 376–390, 2003.
[9] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Archiving with guaranteed convergence and diversity in multi-objective optimization. In Genetic and Evolutionary Computation Conference (GECCO 2002), pages 439–447, 2002.
[10] S. Mostaghim and J. Teich. Quad-trees: A data structure for storing Pareto-sets in multi-objective evolutionary algorithms with elitism. In Evolutionary Computation Based Multi-Criteria Optimization: Theoretical Advances and Applications, to appear, 2003.
[11] S. Mostaghim and J. Teich. Strategies for finding good local guides in multi-objective particle swarm optimization. In IEEE Swarm Intelligence Symposium, pages 26–33, 2003.
[12] S. Mostaghim, J. Teich, and A. Tyagi. Comparison of data structures for storing Pareto-sets in MOEAs. In IEEE Proceedings, World Congress on Computational Intelligence, pages 843–849, 2002.
[13] C. H. Papadimitriou and M. Yannakakis. On the approximability of trade-offs and optimal access of web sources (extended abstract). In IEEE Symposium on Foundations of Computer Science, 2000.
[14] K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3), pages 235–306, Kluwer Academic Publishers, 2002.
[15] O. Schütze, S. Mostaghim, M. Dellnitz, and J. Teich. Covering Pareto sets by multilevel evolutionary subdivision techniques. In Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization, pages 118–132, 2003.
[16] E. Zitzler. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. TIK-Schriftenreihe Nr. 30, Diss ETH No. 13398, Shaker Verlag, Germany, Swiss Federal Institute of Technology (ETH) Zurich, 1999.
[17] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. In EUROGEN 2001, Evolutionary Methods for Design, Optimisation and Control with Applications to Industrial Problems, 2001.