A new collaborative approach to particle swarm
optimization for global optimization
Joong Hoon Kim*, Thi Thuy Ngo, Ali Sadollah
School of Civil, Environmental, and Architectural Engineering, Korea University, 136-713,
Seoul, South Korea
{jaykim, ngothuy, sadollah}@korea.ac.kr
Abstract. Particle swarm optimization (PSO) is a population-based metaheuristic algorithm that mimics animal flocking behavior in food searching and is widely applied in various fields. In standard PSO, the movement of particles is driven by the current bests: the global best and the personal best. Although moving toward the current bests enhances convergence, it also carries a high risk of trapping in local optima. To overcome this local trapping, a new updating equation for particles is proposed, called extraordinary particle swarm optimization (EPSO). The particles in EPSO move toward their own targets, selected at each iteration. A target can be the global best, a local best, or even the worst particle. This approach enables particles to jump out of local optima. The performance of EPSO has been evaluated on unconstrained benchmark functions and compared with various optimizers from the literature. The optimization results obtained by EPSO surpass those of standard PSO and its variants for most of the benchmark problems.
Keywords: Metaheuristics, Particle swarm optimization, Extraordinary particle
swarm optimization, Global optimization.
1. Introduction
Nature is a rich source of inspiration for optimization algorithms, and most new algorithms have been developed by mimicking natural systems. These metaheuristic algorithms, so-called nature-inspired algorithms, can be classified by their source of inspiration into three main categories: biology, physics, and chemistry [1]. They have been widely applied in many areas, to both design and operational optimization problems. Some of the most well-known bio-inspired algorithms are genetic algorithms (GAs) [2], the ant system [3], and particle swarm optimization (PSO) [4]. These algorithms can solve optimization problems of large dimensions; however, no single algorithm is suited to all problems [5].
Nowadays, PSO is one of the most widely used optimizers for problems in different areas. PSO is a population-based optimization technique originally developed by Kennedy and Eberhart [4], inspired by the movement behavior of bird and fish flocks. PSO mimics a society of animals (e.g., birds or fish) in which each individual, called a particle, shares its information with the others.
An individual in the standard PSO tends to move toward the best particle in the swarm (the global best, gbest) while also considering its own best experience (the personal best, pbest). With this motion scheme, PSO has several advantages: fast convergence, easy implementation, and simple computation. On the other hand, PSO exhibits drawbacks such as trapping in local optima and premature convergence. To overcome these disadvantages, several variants of PSO have been proposed. In general, variants of PSO fall into three main categories: improving the movement environment, such as the quantum-behaved particle swarm [6,7]; modifying the swarm population, such as nonlinear simplex initialization [8], partitioning the population [9], or Gaussian mutation [10]; and adapting the inertia weight parameter [11-17].
In this paper, a new approach to the standard PSO is proposed, focusing on the movement behavior of particles. Instead of two coefficients representing the cognitive and social components, a single combined coefficient is used. Under the new movement strategy, a particle can exchange information with any other particle: the global best, a local best, or even the worst particle. This approach may help PSO escape from local optima. Particles need not fly toward the current bests; they have the opportunity to break the movement rule used in standard PSO and become extraordinary particles. For the mutation part of this improved PSO, uniform random search is utilized.
2. Particle swarm optimization
PSO, originally developed by Eberhart and Kennedy [4], is inspired by the collective behavior of animal flocking. The particles in PSO move through the search space following the directions of the global and personal bests. Each particle in PSO at iteration t is evaluated by a fitness function and characterized by its location and velocity [18].
PSO starts with an initial group of random particles in the D-dimensional search space of the problem. Particle i at iteration t is represented by its position $\vec{X}_i^t = (x_i^1, x_i^2, \ldots, x_i^D)$ and velocity $\vec{V}_i^t = (v_i^1, v_i^2, \ldots, v_i^D)$. At every iteration, the location of each particle is updated by moving toward the two best locations (i.e., the global and personal bests), as given in the following equations [18]:

$\vec{V}_i(t+1) = w(t)\,\vec{V}_i(t) + r_1 C_1 \big(\overrightarrow{pbest}_i(t) - \vec{X}_i(t)\big) + r_2 C_2 \big(\overrightarrow{gbest}(t) - \vec{X}_i(t)\big)$ (1)

$\vec{X}_i(t+1) = \vec{X}_i(t) + \vec{V}_i(t+1)$ (2)
where $r_1$ and $r_2$ are uniformly distributed random numbers in [0,1]; $C_1$ and $C_2$ are the cognitive and social coefficients, respectively, known as acceleration constants; $\overrightarrow{pbest}_i(t)$ denotes the best personal position of particle i and $\overrightarrow{gbest}(t)$ is the best position among all particles at iteration t; and w(t) is the inertia weight, a user parameter between zero and one. The velocity of each particle in dimension d is limited to the range $[v_{min}^d, v_{max}^d]$. Throughout this paper, notations with a vector sign correspond to vector values; all other notations and parameters are considered scalar values.
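As an illustration, the update in Eqs. (1)-(2) can be sketched in a few lines of Python. This is our own minimal sketch, not the authors' MATLAB implementation; the function name `pso_step` and its default parameter values are assumptions for illustration only.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, C1=2.0, C2=2.0, v_max=1.0):
    """One iteration of the standard PSO update, Eqs. (1)-(2).

    X, V   : (Npop, D) arrays of particle positions and velocities
    pbest  : (Npop, D) array of personal-best positions
    gbest  : (D,) array, best position found by the whole swarm so far
    """
    Npop, D = X.shape
    r1 = np.random.rand(Npop, D)   # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = np.random.rand(Npop, D)
    # Eq. (1): inertia term + cognitive pull toward pbest + social pull toward gbest
    V = w * V + r1 * C1 * (pbest - X) + r2 * C2 * (gbest - X)
    V = np.clip(V, -v_max, v_max)  # velocity limited to [v_min, v_max] per dimension
    # Eq. (2): move each particle along its new velocity
    return X + V, V
```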
3. Extraordinary particle swarm optimization
In PSO, the balance between exploitation and exploration is obtained by combining local and global searches with the inertia weight coefficient (see Eq. (1)). The inertia weight represents the exploitation-exploration tradeoff [12]; a larger inertia weight favors global search. However, this tradeoff cannot always be applied successfully [19].
PSO simulates the movement behavior of animal flocks searching for food. Through the information sharing mechanism among particles, each individual is forced to follow the current best particles. Although this process has some advantages, it also has drawbacks: being trapped in local optima and a fast, immature convergence rate. The directions given by the current best particles may not always be the best way, and moving toward the worst particles may, in fact, not be the worst way to the global optimum. This observation motivates a new movement strategy that helps particles escape from local optima and speeds up mature convergence.
Particles in EPSO, so-called extraordinary particles, fly to their individually determined targets, which are updated at each iteration. The target of each particle is stochastically selected and can be any particle in the population: the global best, a local best, or a normal particle with a different cost/fitness. These extraordinary particles tend to break the movement rule applied in standard PSO. The location and velocity of particle i are updated at each iteration as follows:

$\vec{V}_i(t+1) = C\big(\vec{X}_{T_i}(t) - \vec{X}_i(t)\big)$ (3)

$\vec{X}_i(t+1) = \vec{X}_i(t) + \vec{V}_i(t+1)$ (4)
where $\vec{X}_{T_i}(t)$ is the position of the determined target $T_i$ of particle i at iteration t, and C is a combined coefficient representing both the cognitive and social factors. The target particle can be the best among all particles, the best position that the extraordinary particle itself has experienced, or simply a normal particle in the swarm. If the chosen target is the global or the personal best, the coefficient C plays the role of the social or the cognitive coefficient, respectively.
Random search is incorporated into EPSO by limiting the potential range of targets. If the target chosen by particle i lies within the range, the particle is guided by its target; otherwise, random search takes over. The range of potential targets is defined by the upper bound $T_{up}$, computed from the user-defined parameter α and the population size $N_{pop}$ as $T_{up} = \mathrm{round}(\alpha \times N_{pop})$. Particles are sorted by cost/fitness before the targets are determined. Hence, if α is 0.9, the best 90% of particles in the population can be randomly chosen as targets, giving more exploitation around both better and worse solutions and less exploration through random search. On the other hand, if α is 0.1, only the best 10% of particles can be selected as targets, giving more exploration through random search and greedier movements toward the top solutions.
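A minimal sketch of this target-selection rule, assuming the particles have already been sorted by cost with the best first (the helper name `select_target_index` is ours, not from the paper):

```python
import numpy as np

def select_target_index(alpha, Npop):
    """Draw a target index T as in Eq. (5) and apply the range test of Eq. (6).

    Returns a 0-based index into the cost-sorted population if the drawn
    index falls inside (0, T_up], and None otherwise, signalling that the
    particle should instead be re-initialized by uniform random search.
    """
    T_up = round(alpha * Npop)           # upper bound of potential targets
    T = round(np.random.rand() * Npop)   # Eq. (5): T in {0, 1, ..., Npop}
    return T - 1 if 0 < T <= T_up else None
```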
The detailed procedure of EPSO is described in the following steps; a compact sketch of the whole loop is given after the list.
Step 1: Set the initial parameters: the combined coefficient C ∈ [0, 2] and α ∈ [0, 1], which sets the upper bound of potential targets within the population.
Step 2: Initialize the swarm population: randomly generate new particles between the lower and upper bounds of the given problem.
Step 3: Sort the particles by their cost/fitness.
Step 4: Select the target: for each particle at each iteration, choose a target among all particles using the target index T:

$T = \mathrm{round}(rand \times N_{pop})$, (5)

where rand is a uniformly distributed random number in [0,1]. The target index T ranges from zero to $N_{pop}$.
Step 5: Update the particles to their new locations using the following equation:

$\vec{X}_i(t+1) = \begin{cases} \vec{X}_i(t) + C\big(\vec{X}_{T_i}(t) - \vec{X}_i(t)\big) & \text{if } T_i \in (0, T_{up}] \\ \vec{LB} + rand \times (\vec{UB} - \vec{LB}) & \text{otherwise} \end{cases}$ (6)

where LB and UB are the lower and upper bounds of the search space, respectively.
Step 6: Check the stopping criterion. If the stopping condition is met, stop; otherwise, return to Step 3.
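The following Python sketch assembles Steps 1-6 into a single loop. It is our illustrative reconstruction under Eqs. (5)-(6), not the authors' MATLAB code; names such as `epso` and `cost_fn` are assumptions, and we read Eq. (6) as an unconditional (non-greedy) position update.

```python
import numpy as np

def epso(cost_fn, LB, UB, D, Npop=20, C=0.3, alpha=0.8, max_nfes=50000):
    """Illustrative sketch of EPSO, Steps 1-6 with Eqs. (5)-(6)."""
    # Steps 1-2: parameters and a random initial swarm within [LB, UB]
    X = LB + np.random.rand(Npop, D) * (UB - LB)
    cost = np.apply_along_axis(cost_fn, 1, X)
    nfes = Npop
    T_up = round(alpha * Npop)                    # upper bound of potential targets
    while nfes < max_nfes:                        # Step 6: stopping criterion (NFE budget)
        order = np.argsort(cost)                  # Step 3: sort by cost, best first
        X, cost = X[order], cost[order]
        for i in range(Npop):
            T = round(np.random.rand() * Npop)    # Step 4: target index, Eq. (5)
            if 0 < T <= T_up:
                # Step 5, Eq. (6): move toward the chosen target particle
                X_new = X[i] + C * (X[T - 1] - X[i])
            else:
                # Step 5, Eq. (6): uniform random search within the bounds
                X_new = LB + np.random.rand(D) * (UB - LB)
            X_new = np.clip(X_new, LB, UB)
            X[i], cost[i] = X_new, cost_fn(X_new) # non-greedy replacement (our reading)
            nfes += 1
    best = int(np.argmin(cost))
    return X[best], cost[best]
```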
4. Optimization results and discussions
To validate the performance of EPSO, it was implemented in MATLAB and applied to various unconstrained benchmark problems. To compare EPSO with other optimizers, thirteen unconstrained benchmark functions are investigated in this paper. Statistical indicators were obtained from 30 independent runs; for the other optimization methods they were collected from the literature. For all benchmark functions, we used the recommended parameters Npop = 20, C = 0.3, and α = 0.8.
The unconstrained benchmark problems used in our experimental study are taken from [20]. A comparison of EPSO with other variants of PSO and with other population-based metaheuristic algorithms is given in the following.
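With the sketch above, a run under the reported settings might look as follows. The sphere function and its bounds follow the common F1 definition in [20]; the call itself is only an illustration, not the authors' experimental code.

```python
import numpy as np

sphere = lambda x: float(np.sum(x ** 2))   # F1: sphere function, minimum 0 at the origin

best_x, best_cost = epso(sphere, LB=-100.0, UB=100.0, D=30,
                         Npop=20, C=0.3, alpha=0.8, max_nfes=50000)
print(f"best cost after 50,000 evaluations: {best_cost:.3e}")
```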
The optimization results of EPSO applied to the 13 functions (F1 to F13) are presented in Table 1 and compared with other algorithms, including the real-coded genetic algorithm (RGA) [21] and the gravitational search algorithm (GSA) [20]. A comparative study with other variants of PSO on several benchmarks is given in Table 2. The variants included in this study are the standard particle swarm optimization (PSO) [18], the cooperative PSO (CPSO) [22], the comprehensive learning PSO (CLPSO) [23], the fully informed particle swarm (FIPS) [24], and Frankenstein's PSO (F-PSO) [25].
The EPSO results for the 30D problems are obtained after 30 independent runs with a maximum number of function evaluations (NFEs) of 50,000 and 200,000 for Tables 1 and 2, respectively. Throughout the paper, values lower than 1.00E-324 (i.e., defined as zero in MATLAB) are reported as 0.00E+00. The results of the other algorithms are extracted from [19,20], except for GSA, which was implemented by the authors.
In Table 1, EPSO shows far better performance than RGA and GSA in terms of the average of solutions and the standard deviation (SD) for all considered functions. The results of 30 independent runs with 200,000 NFEs for EPSO and several variants of PSO, shown in Table 2, demonstrate the superiority of EPSO over the other reported improved versions. For most functions (9 out of 10, highlighted in bold in Table 2), EPSO found a lower average solution than the others. In particular, EPSO found the global optimal solution of the multimodal functions F9 and F11, while all the others except CLPSO failed.
Table 1. Comparison of EPSO with the considered optimizers for the reported benchmarks.

Test        RGA                       GSA                       EPSO
function    Average      SD           Average      SD           Average      SD
F1          2.313E+01    1.215E+01    6.800E-17    2.180E-17    7.776E-18    6.010E-18
F2          1.073E+00    2.666E-01    6.060E-08    1.190E-08    6.787E-12    3.008E-12
F3          5.617E+02    1.256E+02    9.427E+02    2.466E+02    2.121E-01    5.461E-01
F4          1.178E+01    1.576E+00    4.207E+00    1.122E+00    9.941E-03    9.855E-03
F5          1.180E+03    5.481E+02    4.795E+01    3.956E+00    1.785E-02    2.136E-02
F6          2.401E+01    1.017E+01    9.310E-01    2.510E+00    0.000E+00    0.000E+00
F7          6.750E-02    2.870E-02    7.820E-02    4.100E-02    6.470E-04    4.542E-04
F8          -1.248E+04   5.326E+01    -3.604E+03   5.641E+02    -1.257E+04   3.851E-12
F9          5.902E+00    1.171E+00    2.940E+01    4.727E+00    2.274E-14    2.832E-14
F10         2.140E+00    4.014E-01    4.800E-09    5.420E-10    1.284E-09    7.280E-10
F11         1.168E+00    7.950E-02    1.669E+01    4.283E+00    2.533E-08    1.311E-07
F12         5.100E-02    3.520E-02    5.049E-01    4.249E-01    6.055E-20    8.970E-20
F13         8.170E-02    1.074E-01    3.405E+00    3.683E+00    9.373E-16    1.802E-15
Table 2. Comparison of EPSO with several variants of PSO for the reported functions.

Test        PSO                       CPSO                      CLPSO
function    Average      SD           Average      SD           Average      SD
F1          5.198E-40    1.130E-74    5.146E-13    7.759E-25    4.894E-39    6.781E-78
F2          2.070E-25    1.442E-49    1.253E-07    1.179E-14    8.868E-24    7.901E-49
F3          1.458E+00    1.178E+00    1.889E+03    9.911E+06    1.922E+02    3.843E+03
F5          2.540E+01    5.903E+02    8.265E-01    2.345E+00    1.322E+01    2.148E+02
F6          0.000E+00    0.000E+00    0.000E+00    0.000E+00    0.000E+00    0.000E+00
F7          1.238E-02    2.311E-05    1.076E-02    2.770E-05    4.064E-03    9.618E-07
F8          -1.100E+04   1.375E+05    -1.213E+04   3.380E+04    -1.255E+04   4.257E+03
F9          3.476E+01    1.064E+02    3.601E-13    1.504E-24    0.000E+00    0.000E+00
F10         1.492E-14    1.863E-29    1.609E-07    7.861E-14    9.237E-15    6.616E-30
F11         2.162E-02    4.502E-04    2.125E-02    6.314E-04    0.000E+00    0.000E+00
Table 2. Cont.

Test        FIPS                      F-PSO                     EPSO
function    Average      SD           Average      SD           Average      SD
F1          4.588E-27    1.958E-53    2.409E-16    2.005E-31    1.662E-74    2.761E-74
F2          2.324E-16    1.141E-32    1.580E-11    1.030E-22    1.903E-47    2.152E-47
F3          9.463E+00    2.598E+01    1.732E+02    9.158E+03    2.014E-03    1.934E-03
F5          2.671E+01    2.003E+02    2.816E+01    2.313E+02    2.824E-05    3.650E-05
F6          0.000E+00    0.000E+00    0.000E+00    0.000E+00    0.000E+00    0.000E+00
F7          3.305E-03    8.668E-07    4.169E-03    2.401E-06    2.580E-04    1.871E-04
F8          -1.105E+04   9.442E+05    -1.122E+04   2.771E+05    -1.257E+04   2.482E-12
F9          5.850E+01    1.919E+02    7.384E+01    3.706E+02    0.000E+00    0.000E+00
F10         1.386E-14    2.323E-29    2.179E-09    1.719E-18    1.214E-14    3.106E-15
F11         2.478E-04    1.827E-06    1.474E-03    1.285E-05    0.000E+00    0.000E+00
5. Conclusions
This paper proposed a new collaborative approach for particle swarm optimization (PSO) using extraordinary particles, which tend to break the defined rules of standard PSO and move toward their own targets. The algorithm is therefore called extraordinary PSO (EPSO). The proposed improved PSO retains the advantages of standard PSO, requires only two user-defined parameters (C and α), and overcomes the drawbacks of standard PSO through its approach to finding the optimal solution. On the unconstrained problems considered, the performance of EPSO surpassed that of the standard PSO and the other reported variants of PSO.
Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF)
grant funded by the Korean government (MSIP) (NRF-2013R1A2A1A01013886).
References
[1] Fister Jr., I., Yang, X.S., Fister, I., Brest, J., Fister, D.: A Brief Review of Nature-Inspired Algorithms for Optimization, CoRR abs/1307.4186 (2013) 1-7.
[2] Holland, J.H.: Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor (1975).
[3] Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents, IEEE T. Syst. Man Cyb. (1996) 29-41.
[4] Kennedy, J., Eberhart, R.C.: Particle swarm optimization, in: Proc. IEEE Intl. Conf. Neural Networks, Piscataway, NJ (1995) 1942-1948.
[5] Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1997) 67-82.
[6] Sun, J., Feng, B.: Particle swarm optimization with particles having quantum behavior, in: Proc. IEEE Congr. Evol. Comput. (2004) 325-331.
[7] Chen, W., Zhou, D.: An Improved Quantum-behaved Particle Swarm Optimization Algorithm Based on Comprehensive Learning Strategy, J. Control and Decision (2012) 719-723.
[8] Parsopoulos, K.E., Vrahatis, M.N.: Initializing the particle swarm optimizer using the nonlinear simplex method, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation (2002) 216-221.
[9] Jiang, Y., Hu, T., et al.: An improved particle swarm optimization algorithm, Appl. Math. Comput. (2007) 231-239.
[10] Higashi, N., Iba, H.: Particle swarm optimization with Gaussian mutation, in: Proc. IEEE Swarm Intelligence Symposium, Indiana (2003) 72-79.
[11] Shi, Y.H., Eberhart, R.C.: A modified particle swarm optimizer, in: Proc. IEEE Intl. Conf. Evol. Comput., Anchorage, Alaska (1998) 69-73.
[12] Eberhart, R., Shi, Y.: Comparing inertia weights and constriction factors in particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput. (2000) 84-88.
[13] Chatterjee, A., Siarry, P.: Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Computers and Operations Research (2006) 859-871.
[14] Lei, K., Qiu, Y., He, Y.: A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization, in: Proc. ISSCAA (2006).
[15] Yang, X., Yuan, J., et al.: A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. (2007) 1205-1213.
[16] Arumugam, M.S., Rao, M.V.C.: On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems, Appl. Soft Comput. (2008) 324-336.
[17] Panigrahi, B.K., Pandi, V.R., Das, S.: Adaptive particle swarm optimization approach for static and dynamic economic load dispatch, Energ. Convers. Manage. (2008) 1407-1415.
[18] Eberhart, R.C., Kennedy, J.: A new optimizer using particle swarm theory, in: Proc. Sixth Intl. Symposium on Micro Machine and Human Science, Nagoya (1995) 39-43.
[19] Nickabadi, A., Ebadzadeh, M.M., Safabakhsh, R.: A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. (2011) 3658-3670.
[20] Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: GSA: A Gravitational Search Algorithm, Information Sciences 179(13) (2009) 2232-2248.
[21] Sarafrazi, S., Nezamabadi-pour, H., Saryazdi, S.: Disruption: A new operator in gravitational search algorithm, Scientia Iranica (2011) 539-548.
[22] van den Bergh, F., Engelbrecht, A.P.: A cooperative approach to particle swarm optimization, IEEE T. Evolut. Comput. (2004) 225-239.
[23] Liang, J.J., Qin, A.K.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE T. Evolut. Comput. (2006) 281-295.
[24] Mendes, R., Kennedy, J., Neves, J.: The fully informed particle swarm: simpler, maybe better, IEEE T. Evolut. Comput. (2004) 204-210.
[25] Montes de Oca, M.A., Stutzle, T.: Frankenstein's PSO: a composite particle swarm optimization algorithm, IEEE T. Evolut. Comput. (2009) 1120-1132.