A new collaborative approach to particle swarm
optimization for global optimization
Joong Hoon Kim*, ThiThuy Ngo, Ali Sadollah
School of Civil, Environmental, and Architectural Engineering, Korea University, 136-713,
Seoul, South Korea
{jaykim, ngothuy, sadollah}
Abstract. Particle swarm optimization (PSO) is a population-based metaheuristic algorithm that mimics the flocking behavior of animals searching for food and is widely applied in various fields. In the standard PSO, the movement of particles is driven by the current bests, the global best and the personal best. Although moving toward the current bests enhances convergence, there is a high chance of trapping in local optima. To overcome this local trapping, a new updating equation is proposed for particles, yielding the so-called extraordinary particle swarm optimization (EPSO). The particles in EPSO move toward their own targets selected at each iteration. The targets can be the global best, local bests, or even the worst particle. This approach can make particles jump out of local optima. The performance of EPSO has been evaluated on unconstrained benchmarks and compared to various optimizers in the literature. The optimization results obtained by the EPSO surpass those of the standard PSO and its variants for most benchmark problems.
Keywords: Metaheuristics, Particle swarm optimization, Extraordinary particle
swarm optimization, Global optimization.
1. Introduction
Nature is a rich source of inspiration for optimization algorithms. Nowadays, most new algorithms have been developed by mimicking natural systems. These metaheuristic algorithms, so-called nature-inspired algorithms, can be classified by their sources of inspiration into three main categories, namely, biology, physics, and chemistry [1]. Those algorithms have been widely applied in many areas, in both design and operation optimization problems. Some of the most well-known bio-inspired algorithms are genetic algorithms (GAs) [2], the ant system [3], and particle swarm optimization (PSO) [4]. These algorithms can solve optimization problems with large dimensions; however, no specific algorithm is suited for all problems [5].
Nowadays, the PSO is one of the most common optimizers applied to problems in different areas. The PSO is a population-based optimization technique originally developed by Kennedy and Eberhart [4], inspired by the movement behavior of bird and fish flocks. The PSO mimics a society of animals (i.e., birds or fish) in which each individual, a so-called particle, shares its information with the others.
An individual in the standard PSO tends to move toward the best particle among them (global best, gbest) while also considering its own best experience (personal best, pbest). By this motion approach, the PSO has several advantages: fast convergence, easy implementation, and simple computation. On the other hand, the PSO exhibits some drawbacks in terms of trapping in local optima and premature convergence. To overcome these disadvantages, several variants of PSO have been proposed. In general, variants of PSO can be classified into three main aspects: improving the moving environment, such as the quantum-behaved particle swarm [6,7]; modifying the swarm population, such as nonlinear simplex initialization [8], partitioning the population [9], or using Gaussian mutation [10]; and altering parameters such as the inertia weight [11-17].
In this paper, a new approach to the standard PSO is proposed, focusing on the movement behavior of particles. Instead of the two coefficients representing the cognitive and social components, a combined operator is used. Particles under the new movement strategy can exchange information with any other particles, such as the global best, local bests, or even the worst particle. This approach may help the PSO to escape from local optima. The particles may not fly toward the current bests and have opportunities to break the moving rule used in the PSO, becoming extraordinary particles. Also, for the mutation part in this improved PSO, a uniform random search is utilized.
2. Particle swarm optimization
The PSO, originally developed by Eberhart and Kennedy [4], is inspired by the collective behavior of animal flocking. The particles in PSO move through the search space following the directions of the global and personal bests. Each particle in the PSO at iteration t is evaluated by the fitness function and characterized by its location and velocity [18].
The PSO starts with an initial group of random particles in the D-dimensional search space of the problem. The i-th particle at iteration t is represented by its position $\vec{X}_i(t) = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and velocity $\vec{V}_i(t) = (v_{i1}, v_{i2}, \ldots, v_{iD})$. At every iteration, the location of each particle is updated by moving toward the two best locations (i.e., global and personal bests) as given in the following equations [18]:

$\vec{V}_i(t+1) = w(t)\vec{V}_i(t) + r_1 C_1 (\vec{pbest}_i(t) - \vec{X}_i(t)) + r_2 C_2 (\vec{gbest}(t) - \vec{X}_i(t))$ , (1)

$\vec{X}_i(t+1) = \vec{X}_i(t) + \vec{V}_i(t+1)$ , (2)

where r1 and r2 are uniformly distributed random numbers in [0,1]; C1 and C2 are the cognitive and social coefficients, known as acceleration constants, respectively; pbest_i(t) denotes the best personal position of the i-th particle and gbest(t) is the best position among all particles at iteration t; and w(t) is the inertia weight, a user parameter between zero and one. The velocity of each particle is limited to the range [v_min, v_max]. Throughout this paper, notations carrying a vector sign correspond to vector values; the remaining notations and parameters are considered scalar values.
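As a concrete illustration, one iteration of the update in Eqs. (1)-(2) can be sketched in NumPy as follows (a minimal sketch; the function name, default parameter values, and velocity-clipping style are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1=2.0, c2=2.0, v_min=-1.0, v_max=1.0):
    """One iteration of the standard PSO update, Eqs. (1)-(2).

    X, V, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    The default coefficient values here are illustrative only.
    """
    n, d = X.shape
    r1 = np.random.rand(n, d)  # uniformly distributed random numbers in [0, 1]
    r2 = np.random.rand(n, d)
    # Velocity: inertia term plus cognitive (pbest) and social (gbest) pulls
    V_new = w * V + r1 * c1 * (pbest - X) + r2 * c2 * (gbest - X)
    V_new = np.clip(V_new, v_min, v_max)  # velocity limited to [v_min, v_max]
    return X + V_new, V_new
```

Applied repeatedly, this pulls each particle toward a random blend of its personal best and the swarm's global best, which is exactly the behavior the next section argues can trap the swarm in local optima.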
3. Extraordinary particle swarm optimization
In the PSO, the balance of exploitation and exploration is obtained by combining local and global searches as well as the inertia weight coefficient (see Eq. (1)). The inertia weight represents the exploitation-exploration tradeoff [12]; a larger inertia weight is preferred for global search. However, this tradeoff cannot always be applied successfully [19].
The PSO simulates the movement behavior of animal flocking when searching for food. Through the information-sharing mechanism among particles, each individual is forced to move following the current best particles. Although this process offers some advantages, there are several drawbacks in terms of being trapped in local optima and premature convergence. The directions oriented by the current best particles may not always be the best way, and moving toward the worst particles, in fact, may not be the worst way toward the global optimum either. This fact motivates a new movement strategy for particles to escape from local optima and speed up convergence.
Particles in EPSO, so-called extraordinary particles, fly to their determined targets, which are updated at each iteration. The target of each particle is stochastically defined and can be any particle in the population: the global best, the local best, or a normal particle with a different cost/fitness. These extraordinary particles tend to break the movement rule applied in the standard PSO. The location and velocity of the i-th particle are updated at each iteration by the following equations:

$\vec{V}_i(t+1) = C (\vec{X}_{T_i}(t) - \vec{X}_i(t))$ , (3)

$\vec{X}_i(t+1) = \vec{X}_i(t) + \vec{V}_i(t+1)$ , (4)

where $\vec{X}_{T_i}(t)$ is the determined target T of the i-th particle at iteration t, and C is a combined component representing both cognitive and social factors. The target particle can be the best among all particles, the best that the extraordinary particle has experienced, or just a normal particle in the swarm population. If the chosen target is the global or personal best, the coefficient C therefore acts as the social or cognitive coefficient, respectively.
The random search is involved in EPSO by limiting the potential range of targets. If the target chosen by the i-th particle belongs to the range, the particle is guided by its target; otherwise, random search takes part. The range of potential targets is defined by the upper bound Tup, which is calculated from the user-defined parameter α and the population size Npop as follows: Tup = round(α × Npop). Particles are sorted by cost/fitness before the targets are determined. This means that if α is 0.9, the 90% best particles in the population can be randomly chosen as target solutions, giving more exploitation considering the worst and best solutions and less exploration using the random search. On the other hand, if α is 0.1, only the 10% best particles in the population can be selected as target solutions, giving more exploration using the random search and more greedy movements toward the best solutions.
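The target-range mechanism described above can be sketched as follows (an illustrative NumPy sketch under the stated assumptions: the function and variable names are ours, particles are assumed already sorted by cost so that index 1 is the best, and `None` stands for "fall back to random search"):

```python
import numpy as np

def select_targets(costs, alpha):
    """Pick a target index for each particle.

    `costs` is assumed already sorted ascending (best particle first), so a
    1-based index T <= Tup points into the best alpha-fraction of the swarm.
    Returns, per particle, either a 1-based target index or None (random search).
    """
    n_pop = len(costs)
    t_up = round(alpha * n_pop)  # upper bound of the potential-target range
    targets = []
    for _ in range(n_pop):
        t = round(np.random.rand() * n_pop)  # Eq. (5): T in [0, Npop]
        # Target inside the allowed range -> follow it; otherwise random search
        targets.append(t if 0 < t <= t_up else None)
    return targets
```

With alpha = 0.8 and Npop = 20 (the paper's recommended settings), Tup = 16, so roughly a fifth of the draws fall outside the range and trigger a uniform random restart.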
The detailed procedure of EPSO is described in the following steps:
Step 1: Setting initial parameters: the combined coefficient C ∈ [0,2], and α ∈ [0,1], which indicates the upper bound of a potential target in the entire population.
Step 2: Initializing the swarm population: randomly generate new particles between the lower and upper bounds of a given problem.
Step 3: Sorting particles based on their cost/fitness.
Step 4: Selecting the target: select the target for each particle at each iteration among all particles using the determined index (T) of particles:

$T = \mathrm{round}(rand \times N_{pop})$ , (5)

where rand is a uniformly distributed random number in [0,1]. The index of the target (T) ranges from zero to Npop.
Step 5: Updating particles to their new locations using the following equation:

$\vec{X}_i(t+1) = \vec{X}_i(t) + C(\vec{X}_{T_i}(t) - \vec{X}_i(t))$ if $0 < T_i \le T_{up}$,
$\vec{X}_i(t+1) = LB + rand \times (UB - LB)$ otherwise, (6)

where LB and UB are the lower and upper bounds of the search space, respectively.
Step 6: Check the stopping criterion. If the stopping condition is met, stop. Otherwise, return to Step 3.
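Steps 1-6 above can be sketched end-to-end as follows (a minimal Python/NumPy sketch, not the authors' MATLAB implementation; the handling of the boundary index T = 0, the in-place target reads, and the bound clipping are our interpretations, while the parameter defaults follow the paper's recommendation of Npop = 20, C = 0.3, α = 0.8):

```python
import numpy as np

def epso(cost_fn, lb, ub, n_pop=20, c=0.3, alpha=0.8, max_iter=200, seed=0):
    """Minimal sketch of the EPSO procedure (Steps 1-6).

    `cost_fn` maps a position vector to a scalar cost (minimization).
    """
    rng = np.random.default_rng(seed)
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Step 2: random initial swarm within the bounds
    X = lb + rng.random((n_pop, dim)) * (ub - lb)
    t_up = round(alpha * n_pop)  # upper bound of the potential-target range
    best_x, best_c = None, np.inf
    for _ in range(max_iter):
        # Step 3: sort particles by cost (best first)
        costs = np.array([cost_fn(x) for x in X])
        order = np.argsort(costs)
        X = X[order]
        if costs[order[0]] < best_c:  # keep an archive of the best-so-far
            best_c, best_x = costs[order[0]], X[0].copy()
        # Steps 4-5: pick a target per particle and move toward it, Eq. (6)
        for i in range(n_pop):
            t = round(rng.random() * n_pop)  # Eq. (5): index in [0, Npop]
            if 0 < t <= t_up:
                # follow the chosen target (read from the partially updated swarm)
                X[i] = X[i] + c * (X[t - 1] - X[i])
            else:
                X[i] = lb + rng.random(dim) * (ub - lb)  # random search
            X[i] = np.clip(X[i], lb, ub)
    # Step 6: here the stopping criterion is simply the iteration budget
    return best_x, best_c
```

A usage example on the sphere function would be `epso(lambda x: float((x ** 2).sum()), lb=[-5, -5], ub=[5, 5])`, which returns the best position found and its cost.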
4. Optimization results and discussions
To validate the performance of EPSO, the MATLAB programming software has been used for various unconstrained benchmark problems. To compare the ability of EPSO with other optimizers, thirteen unconstrained benchmark functions are investigated in this paper. Statistical indicators have been obtained from 30 independent runs, and collected from the literature for the other optimization methods. For all benchmark functions, we used the recommended parameters Npop = 20, C = 0.3, and α = 0.8.
The unconstrained benchmark problems used in our experimental study are extracted from [20]. A comparison of EPSO with other variants of PSO and other population-based metaheuristic algorithms is given in the following.
The optimization results of EPSO applied to the 13 functions (F1 to F13) are presented in Table 1 and compared to other algorithms, including the real-coded genetic algorithm (RGA) [21] and the gravitational search algorithm (GSA) [20]. Also, a comparative study with other variants of PSO for several benchmarks is given in Table 2. The reported variants included in this study are the standard particle swarm optimization (PSO) [18], the cooperative PSO (CPSO) [22], the comprehensive learning PSO (CLPSO) [23], the fully informed particle swarm (FIPS) [24], and the Frankenstein's PSO (F-PSO) [25].
The results from EPSO for the 30D problems are obtained after 30 independent runs with a maximum number of function evaluations (NFEs) of 50,000 and 200,000 for Tables 1 and 2, respectively. In the entire paper, values lower than 1.00E-324 (i.e., defined as zero in MATLAB) are considered as 0.00E+00. The optimization results of the other algorithms are extracted from [19,20], except for the GSA, which was implemented by the authors.
In Table 1, the EPSO shows far better performance than the RGA and the GSA in terms of the best solution, the average of solutions, and the standard deviation (SD) for all considered functions. The results of 30 independent runs with 200,000 NFEs for the EPSO and several variants of PSO show the superiority of the EPSO over the other reported improved versions, as shown in Table 2. For most of the functions (9 out of 10, highlighted in bold in Table 2), the EPSO found a lower average solution than the others. In particular, the EPSO found the global optimal solution of the multimodal functions F9 and F11, while the others, except CLPSO, failed.
Table 1. Comparison of EPSO with the considered optimizers for the reported functions.

Function   RGA                      GSA                      EPSO
           Average      SD          Average      SD          Average      SD
F1         2.313E+01    1.215E+01   6.800E-17    2.180E-17   7.776E-18    6.010E-18
F2         1.073E+00    2.666E-01   6.060E-08    1.190E-08   6.787E-12    3.008E-12
F3         5.617E+02    1.256E+02   9.427E+02    2.466E+02   2.121E-01    5.461E-01
F4         1.178E+01    1.576E+00   4.207E+00    1.122E+00   9.941E-03    9.855E-03
F5         1.180E+03    5.481E+02   4.795E+01    3.956E+00   1.785E-02    2.136E-02
F6         2.401E+01    1.017E+01   9.310E-01    2.510E+00   0.000E+00    0.000E+00
F7         6.750E-02    2.870E-02   7.820E-02    4.100E-02   6.470E-04    4.542E-04
F8         -1.248E+04   5.326E+01   -3.604E+03   5.641E+02   -1.257E+04   3.851E-12
F9         5.902E+00    1.171E+00   2.940E+01    4.727E+00   2.274E-14    2.832E-14
F10        2.140E+00    4.014E-01   4.800E-09    5.420E-10   1.284E-09    7.280E-10
F11        1.168E+00    7.950E-02   1.669E+01    4.283E+00   2.533E-08    1.311E-07
F12        5.100E-02    3.520E-02   5.049E-01    4.249E-01   6.055E-20    8.970E-20
F13        8.170E-02    1.074E-01   3.405E+00    3.683E+00   9.373E-16    1.802E-15
Table 2. Comparison of EPSO with several variants of PSO for the reported functions.

Function   PSO                      CPSO                     CLPSO
           Average      SD          Average      SD          Average      SD
F1         5.198E-40    1.130E-74   5.146E-13    7.759E-25   4.894E-39    6.781E-78
F2         2.070E-25    1.442E-49   1.253E-07    1.179E-14   8.868E-24    7.901E-49
F3         1.458E+00    1.178E+00   1.889E+03    9.911E+06   1.922E+02    3.843E+03
F5         2.540E+01    5.903E+02   8.265E-01    2.345E+00   1.322E+01    2.148E+02
F6         0.000E+00    0.000E+00   0.000E+00    0.000E+00   0.000E+00    0.000E+00
F7         1.238E-02    2.311E-05   1.076E-02    2.770E-05   4.064E-03    9.618E-07
F8         -1.100E+04   1.375E+05   -1.213E+04   3.380E+04   -1.255E+04   4.257E+03
F9         3.476E+01    1.064E+02   3.601E-13    1.504E-24   0.000E+00    0.000E+00
F10        1.492E-14    1.863E-29   1.609E-07    7.861E-14   9.237E-15    6.616E-30
F11        2.162E-02    4.502E-04   2.125E-02    6.314E-04   0.000E+00    0.000E+00
Table 2. Cont.

Function   FIPS                     F-PSO                    EPSO
           Average      SD          Average      SD          Average      SD
F1         4.588E-27    1.958E-53   2.409E-16    2.005E-31   1.662E-74    2.761E-74
F2         2.324E-16    1.141E-32   1.580E-11    1.030E-22   1.903E-47    2.152E-47
F3         9.463E+00    2.598E+01   1.732E+02    9.158E+03   2.014E-03    1.934E-03
F5         2.671E+01    2.003E+02   2.816E+01    2.313E+02   2.824E-05    3.650E-05
F6         0.000E+00    0.000E+00   0.000E+00    0.000E+00   0.000E+00    0.000E+00
F7         3.305E-03    8.668E-07   4.169E-03    2.401E-06   2.580E-04    1.871E-04
F8         -1.105E+04   9.442E+05   -1.122E+04   2.771E+05   -1.257E+04   2.482E-12
F9         5.850E+01    1.919E+02   7.384E+01    3.706E+02   0.000E+00    0.000E+00
F10        1.386E-14    2.323E-29   2.179E-09    1.719E-18   1.214E-14    3.106E-15
F11        2.478E-04    1.827E-06   1.474E-03    1.285E-05   0.000E+00    0.000E+00
5. Conclusions
This paper proposed a new collaborative approach to particle swarm optimization (PSO) using extraordinary particles, which tend to break the defined rules of the standard PSO and move toward their own targets. The algorithm is therefore called extraordinary PSO (EPSO). The proposed improved PSO preserves the advantages of the standard PSO with only two user-defined parameters (C and α) and overcomes the drawbacks of the standard PSO through its approach for finding the optimal solution. On the unconstrained problems, the performance of EPSO has surpassed that of the standard PSO and the other reported variants of PSO.
Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2013R1A2A1A01013886).
References
[1] Fister Jr., I., Yang, X.S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization, CoRR (2013) 1-7, abs/1307.4186.
[2] Holland, J.H.: Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Michigan: University of Michigan Press, 1975.
[3] Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents, IEEE T. Syst. Man Cyb. (1996) 29-41.
[4] Kennedy, J., Eberhart, R.C.: Particle swarm optimization, in IEEE Intl. Conf. Neural Networks, Piscataway, NJ, 1995, pp. 1942-1948.
[5] Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1997) 67-82.
[6] Sun, J., Feng, B.: Particle swarm optimization with particles having quantum behavior, in IEEE Proc. Con. Evolut. Comput. (2004) 325-331.
[7] Chen, W., Zhou, D.: An improved quantum-behaved particle swarm optimization algorithm based on comprehensive learning strategy, J. Control and Decision (2012) 719-
[8] Parsopoulos, K.E., Vrahatis, M.N.: Initializing the particle swarm optimizer using the nonlinear simplex method, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation (2002) 216-221.
[9] Jiang, Y., Hu, et al.: An improved particle swarm optimization algorithm, Appl. Math. Comput. (2007) 231-239.
[10] Higashi, N., Iba, H.: Particle swarm optimization with Gaussian mutation, in Proc. IEEE Swarm Intelligence Symposium, Indiana, 2003, pp. 72-79.
[11] Shi, Y.H., Eberhart, R.C.: A modified particle swarm optimizer, in IEEE Intl. Conf. Evol. Comput., Anchorage, Alaska, 1998, pp. 69-73.
[12] Eberhart, R., Shi, Y.: Comparing inertia weights and constriction factors in particle swarm optimization, in IEEE Con. Evol. Comput. (2000) 84-88.
[13] Chatterjee, A., Siarry, P.: Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Computers and Operations Research (2006) 859-871.
[14] Lei, K., Qiu, Y., He, Y.: A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization, in ISSCAA, 2006.
[15] Yang, X., Yuan, J., et al.: A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. (2007) 1205-1213.
[16] Arumugam, M.S., Rao, M.V.C.: On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems, Appl. Soft Comput. (2008) 324-336.
[17] Panigrahi, B.K., Pandi, V.R., Das, S.: Adaptive particle swarm optimization approach for static and dynamic economic load dispatch, Energ. Convers. Manage. (2008) 1407-
[18] Eberhart, R.C., Kennedy, J.: A new optimizer using particle swarm theory, pp. 39-43, 1995.
[19] Nickabadi, A., Ebadzadeh, M.M., Safabakhsh, R.: A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. (2011) 3658-3670.
[20] Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: GSA: a gravitational search algorithm, Information Sciences 179(13) (2009) 2232-2248.
[21] Sarafrazi, S., Nezamabadi-pour, H., Saryazdi, S.: Disruption: a new operator in gravitational search algorithm, Scientia Iranica (2011) 539-548.
[22] van den Bergh, F., Engelbrecht, A.P.: A cooperative approach to particle swarm optimization, IEEE T. Evolut. Comput. (2004) 225-239.
[23] Liang, J.J., Qin, A.K.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE T. Evolut. Comput. (2006) 281-295.
[24] Mendes, R., Kennedy, J., Neves, J.: The fully informed particle swarm: simpler, maybe better, IEEE T. Evolut. Comput. (2004) 204-210.
[25] Montes de Oca, M.A., Stützle, T.: Frankenstein's PSO: a composite particle swarm optimization algorithm, IEEE T. Evolut. Comput. (2009) 1120-1132.