Multi-Population Parallel Imperialist Competitive Algorithm for Solving
Systems of Nonlinear Equations
Department of Information Technology, University of Turku, Finland
Department of Computer Science, University of Tabriz
Department of Mathematics, University College of
Royal Institute of Technology (KTH), Sweden
Abstract— The importance of optimization and of solving NP-hard problems, such as solving systems of nonlinear equations, is indisputable in a diverse range of sciences. Nonlinear equations arise widely in economics, engineering, chemistry, mechanics, medicine, and robotics. Among the different methods for solving systems of nonlinear equations, one of the most popular is Evolutionary Computing (EC). This paper presents an evolutionary algorithm, called the Parallel Imperialist Competitive Algorithm (PICA), which is based on a multi-population technique for solving systems of nonlinear equations. In order to demonstrate the efficiency of the proposed approach, some well-known benchmark problems are utilized. The results indicate that PICA has a high success rate and quick convergence.
Keywords— parallel imperialist competitive algorithm (PICA); multi-population technique; evolutionary computing (EC); super-linear performance; nonlinear equations

I. INTRODUCTION
Systems of nonlinear equations form an NP-hard problem class that resembles multi-objective optimization problems. They are utilized in a range of engineering applications, such as weather forecasting, petroleum geological prospecting, computational mechanics, and control. The quality of the answers of classical methods, like Newton-type methods, depends on the initial guess of the solution. However, selecting suitable initial solutions for most systems of nonlinear equations is difficult.
So far, several methods have been proposed for optimization
problems. They can be classified into two major classes:
mathematical methods and evolutionary computing (EC)
methods. There are different types of EC methods; most of them are implemented in a sequential mode and some in a parallel mode. Sequential EC methods are more popular than mathematical methods for solving nonlinear equations, but they do not always provide sufficient accuracy. There are different kinds of parallel EC methods capable of improving the accuracy of the results. In this work, we utilize a multi-population EC method that improves the results on the used benchmarks.
The rest of the paper is organized as follows: Section II reviews the related work, and Section III the imperialist competitive algorithm. Section IV introduces the parallel implementation of the ICA based on the multi-population technique. Section V presents the experiments and compares the proposed algorithm with related previous work. Finally, the paper concludes with directions for future work.
II. RELATED WORK
Let us first look into the sequential algorithms that have been proposed for solving systems of nonlinear equations. El-Emary and El-Kareem employed Gauss-Legendre integration as a technique to solve systems of nonlinear equations and used a genetic algorithm (GA) to discover the results without converting the nonlinear equations to linear ones. Mastorakis employed a GA to solve a nonlinear equation as well as systems of nonlinear equations. Li and Zeng used a neural-network algorithm for solving a set of nonlinear equations, in which the computation is carried out by a simple gradient descent rule with variable step-size levels. Huan-Tong et al. proposed a modified evolution strategy (ES) based on a probability ranking method to solve complicated nonlinear systems of equations (NSE). M. Abdollahi et al. applied the imperialist competitive algorithm to solving nonlinear systems of equations.
Ouyang et al. employed a hybrid particle swarm optimization (HPSO) algorithm, in which the particle swarm optimization (PSO) method focuses on "exploration" and the Nelder-Mead simplex method (SM) focuses on "exploitation", while Wu et al. used a new variation of the social emotional optimization algorithm, called MSEOA, mainly inspired by the Metropolis rule. M. Abdollahi et al. proposed a cuckoo optimization algorithm for solving nonlinear systems of equations. Luo et al. applied a combination of chaos search and Newton-type methods. Grosan and Abraham employed a new evolutionary-algorithm (EA) perspective, Mo et al. proposed a combination with the conjugate direction method (CD), and M. Jaberipour used a particle swarm algorithm. Henderson et al. and Pourjafari et al. introduced, respectively, a methodology based on a polarization technique and a novel optimization method based on Invasive Weed Optimization (IWO) for finding all roots of a system of nonlinear equations.
In past years, researchers have utilized parallel EC methods for optimization problems, such as parallel genetic algorithms, parallel PSO, parallel ABC (PABC), parallel ant colony optimization (PACO), and parallel memetic algorithms. Wu and Kang used a parallel elite-subspace evolutionary algorithm for solving systems of nonlinear equations. Some parallel EC methods can achieve super-linear performance, where each one is implemented with different techniques and hardware platforms. For example, parallel genetic algorithms are implemented in the following four categories: master-slave genetic algorithms, coarse-grained genetic algorithms (multi-population genetic algorithms), fine-grained genetic algorithms, and hybrid genetic algorithms.
The results obtained by mathematical methods are sensitive to the initial guess of the solution, while the population sizes of evolutionary algorithms are large and their convergence to the global minimum is slow. EC methods can be impractical for large-scale problems, like systems of nonlinear equations, because of their high linear algebra costs and large memory requirements. For this reason, it is necessary to find an efficient algorithm for solving systems of nonlinear equations. Let a system of nonlinear equations be of the form:

f_i(x_1, x_2, ..., x_n) = 0,   i = 1, 2, ..., n        (1)

In order to transform (1) into an optimization problem, we use the auxiliary function:

f(x) = sum_{i=1}^{n} f_i(x)^2        (2)

where f(x) is the objective function to be minimized; its global minimum, zero, is attained exactly at the solutions of (1).
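As a small illustration of the transformation in (2), the following Python sketch builds the auxiliary objective for a hypothetical two-equation system; the system itself is an assumption for demonstration, not one of the paper's benchmarks.

```python
import math

def system(x):
    """A hypothetical two-equation nonlinear system F(x) = 0."""
    x1, x2 = x
    return [x1 ** 2 + x2 ** 2 - 1.0,  # f1: the unit circle
            x1 - x2]                  # f2: the line x1 = x2

def objective(x):
    """Auxiliary function (2): the sum of squared residuals."""
    return sum(fi * fi for fi in system(x))

# At an exact root, (1/sqrt(2), 1/sqrt(2)), the auxiliary function
# attains its global minimum of zero (up to floating-point error).
root = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
print(objective(root))
```

Any minimizer, evolutionary or otherwise, can then be applied to `objective` without ever linearizing the system.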
In this paper, a parallel imperialist competitive algorithm (PICA) based on the multi-population technique is presented for solving systems of nonlinear equations. The proposed method overcomes the mentioned weaknesses of evolutionary methods for such problems. We have also selected some well-known problems for the evaluation.

III. THE IMPERIALIST COMPETITIVE ALGORITHM

The Imperialist Competitive Algorithm (ICA) was introduced by E. Atashpaz-Gargari and C. Lucas, and is inspired by imperialistic competition. ICA is an evolutionary algorithm for optimization problems. In this algorithm, all countries
are divided into two types: imperialist states and colonies.
Imperialistic competition is the main part of this algorithm, and
the expectation is that the colonies converge to the global
minimum of the cost function. In the first step, the algorithm
creates some countries and after sorting them the best countries
are selected to be imperialists and the rest of the countries form
the colonies of these imperialists (Fig 1, step 1). After dividing
all colonies among imperialists, these colonies start moving
toward their relevant imperialist countries (Fig 1, step 2). In the
next step, the ICA computes the power of each imperialist and
the imperialistic competition begins. The weakest imperialist
loses its weakest colony and the selected imperialist captures
this colony (Fig 1, step 5). These steps are then repeated until
the termination condition is satisfied. The termination condition can vary: for example, the ICA algorithm could stop after a certain number of iterations, or when all colonies have become members of one imperialist (see Fig. 1).
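The steps above can be condensed into a minimal, illustrative Python sketch; the sphere cost function and all parameter values (number of countries, assimilation coefficient `beta`, etc.) are assumptions for demonstration, not the settings used in the paper.

```python
import random

def ica(cost, dim, n_countries=40, n_imperialists=4,
        decades=100, beta=2.0, seed=1):
    """Minimal ICA sketch: initialization, assimilation, and
    imperialistic competition, as described in the text."""
    rng = random.Random(seed)
    new_country = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    countries = sorted((new_country() for _ in range(n_countries)), key=cost)
    # Step 1: the best countries become imperialists; the rest are colonies.
    empires = [{"imp": countries[i], "cols": []} for i in range(n_imperialists)]
    for i, c in enumerate(countries[n_imperialists:]):
        empires[i % n_imperialists]["cols"].append(c)
    for _ in range(decades):
        for e in empires:
            # Step 2: colonies move toward their imperialist (assimilation).
            for c in e["cols"]:
                for d in range(dim):
                    c[d] += beta * rng.random() * (e["imp"][d] - c[d])
            # A colony that becomes better than its imperialist swaps roles.
            if e["cols"]:
                j = min(range(len(e["cols"])), key=lambda k: cost(e["cols"][k]))
                if cost(e["cols"][j]) < cost(e["imp"]):
                    e["imp"], e["cols"][j] = e["cols"][j], e["imp"]
        # Step 5: the weakest empire loses its weakest colony to another empire.
        weakest = max(empires, key=lambda e: cost(e["imp"]))
        if weakest["cols"]:
            j = max(range(len(weakest["cols"])),
                    key=lambda k: cost(weakest["cols"][k]))
            lost = weakest["cols"].pop(j)
            rng.choice([e for e in empires if e is not weakest])["cols"].append(lost)
    return min((e["imp"] for e in empires), key=cost)

sphere = lambda x: sum(v * v for v in x)  # assumed demo cost function
best = ica(sphere, dim=3)
print(sphere(best))
```

This sketch omits refinements such as revolution and empire elimination; it is only meant to make the flow of steps 1, 2, and 5 concrete.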
ICA is a suitable method for optimization problems, but there exist some challenges concerning evolutionary algorithms. For example, when considering a far-reaching search area, we need a large initial population to obtain an appropriate result, but with a resource-constrained processor we may not be able to satisfy this requirement. Also, when we face a complex problem that needs complex computations, the run time will increase, and therefore we need an efficient method to improve the speed, stability, and accuracy.
The sequential ICA inherently has a parallel structure, and therefore a parallel ICA implementation is a viable way to improve it. In the ICA, each imperialist and its colonies work independently, and only after a decade (an ICA iteration) may a colony move to another imperialist. Hence, the algorithm already behaves like a multi-population method running on a single processor. In the next section, we utilize a multi-population ICA to solve some complex problems.
IV. THE PROPOSED METHOD
In this work, we utilize a multi-population model to implement the Parallel Imperialist Competitive Algorithm (PICA), applying a selective local search strategy, in order to solve systems of nonlinear equations. We intend to utilize the full capacity of evolutionary algorithms (e.g., faster convergence, shorter run time, and higher accuracy) for solving such problems.
problems. There are different approaches for parallelizing
evolutionary algorithms, such as the master slave, multi-
population, fine-grain, and hybrid methods; but multi-
population method has better convergences and has more
accurate results than other parallel methods. Of these, we use
the multi-population method. In the multi-population method,
there are some independent populations in different processors,
each covering its own independent area of the search space.
Each processor runs the ICA on its population with independent
parameter values. Due to this independency, different levels of
exploration and exploitation can be utilized, and therefore the
application can get out from local optimums. The other
advantage of the multi-population method is its support for migration, which can significantly improve the performance of solving nonlinear equations. Migration is the most important parallel operation in the multi-population implementation: letting processors share their best results and investigate areas jointly helps discover better results in a smaller number of iterations. There are different kinds of migration techniques; for example, each processor can select the best or worst countries to be sent to the other processors, or can select some countries randomly. In our work, each processor migrates its best countries so that the best results are shared, and the receiving processors replace their worst countries with the received best ones. Receiver processors are selected based on the connection topology used. Figure 2 illustrates the migration behaviour. For example, in a network with a fully connected topology, each processor can receive countries from any other processor. It can be very useful if all processors receive all migrated countries, but in practice we have to find a balance between the data communication cost (communication time) and the achieved improvement in the results.
In this work, each country is a possible solution of the selected nonlinear system. Hence, the algorithm randomly creates the initial populations, which are the possible solutions. Some of them are then selected, at each iteration step, to be processed by the ICA in order to converge them towards better solutions.
Fig. 1. Imperialist Competitive Algorithm
In our implementation, several processors are connected
together using a ring topology and message passing based
communication. The ring topology has been selected because
of its low communication cost and simplicity. Each processor
is first initialized with a set of independent countries (the
number of countries in each processor is the same) and the ICA
to be independently run on it. After some decades (the period varies from one execution to another), the best country migrates
from each processor Pi to the next processor Pi+1 in the ring
and replaces the worst country in Pi+1. Since we utilize the ring
topology to connect processors together, and because the
migration takes place in all processors synchronously, the
numbers of countries in any two processors are equal at any
given time. The migration strategy can affect the result as well.
Generally, it is better to establish a balance between the
migration rate and data communication. The chosen ring
topology is utilized to reduce the migration rate and to decrease
the distance of the migrations.
Fig. 2. Multi-population migration operation
Figure 2 shows the architecture of the multi-population
structure with a ring topology. Figure 3 presents the Multi-
Population ICA pseudo code. In our implementation, all
parameters in different processors are equal, and all ICA
computations run independently on the different processors. The migration operations, on the other hand, run synchronously, as described above.
In the multi-population ICA, the selection pressure increases as the number of countries grows, which helps to obtain more accurate results in a shorter time and to converge faster than the sequential ICA. Therefore, it is beneficial to increase the number of countries.
1- Create independent initial countries.
2- Run the ICA independently on each processor.
3- If it is time for migration:
   a) Wait until all processors arrive at this point.
   b) Send the best country to processor P((i+1) mod #processors).
   c) Receive a country from processor P((i-1) mod #processors) and replace the worst country with the received one.
4- If the termination condition is reached, terminate the algorithm; otherwise, go to step 2.
5- Show the best country.
Fig. 3. Multi-population ICA pseudo code
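The synchronous ring migration of Fig. 3 can be imitated without MPI in a few lines of Python; the tiny populations and the sphere cost below are illustrative assumptions (in the actual implementation, each population is evolved by the ICA between migrations and exchanged over MPI).

```python
def ring_migrate(populations, cost):
    """One synchronous ring-migration step: processor i sends its best
    country to processor (i + 1) mod p, which replaces its worst country
    with the received one."""
    p = len(populations)
    # Snapshot every sender's best country before any replacement happens,
    # so the step behaves as if all migrations occur simultaneously.
    best = [min(pop, key=cost) for pop in populations]
    for i in range(p):
        receiver = populations[(i + 1) % p]
        worst = max(range(len(receiver)), key=lambda k: cost(receiver[k]))
        receiver[worst] = list(best[i])  # copy to avoid sharing one list object
    return populations

sphere = lambda x: sum(v * v for v in x)  # assumed demo cost function
pops = [[[3.0, 3.0], [1.0, 1.0]],   # population on processor P0
        [[2.0, 2.0], [4.0, 4.0]],   # population on processor P1
        [[0.5, 0.5], [5.0, 5.0]]]   # population on processor P2
ring_migrate(pops, sphere)
# P1's worst country [4.0, 4.0] is now replaced by P0's best [1.0, 1.0].
print(pops[1])  # [[2.0, 2.0], [1.0, 1.0]]
```

Snapshotting the senders' best countries first mirrors the barrier in step 3a: every processor's outgoing migrant is fixed before any replacement takes effect.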
V. EXPERIMENT AND RESULTS
In this section, five commonly explored problems are utilized to demonstrate the performance of the PICA. The obtained results are compared with those of other well-known methods that have used the same problems. The parallel ICA has been implemented on both shared-memory and message-passing models. The Message Passing Interface (MPI) has been utilized to parallelize our algorithm, and MPICH2 to run the parallel implementation.
In the multi-population ICA, the processors have been connected in a ring topology, with a different number of processors in the different tests. The proposed algorithm has been tested on an Intel Core i3-330M processor (2.13 GHz, 64-bit) with 4 GB of memory. The best results for the benchmarks have been obtained over 30 independent runs. The parameters used for solving the problems are listed in Table 1.
Test 1: 10-dimensional Rastrigin function
The answer of this test is f(0, 0, ..., 0) = 0. This test has been solved by Mo et al. and by the ICA with 1000 iterations and a population size of 300. PICA has been applied to optimize it with the same parameters.
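For reference, Test 1 uses the Rastrigin function, which in its common form (an assumption, since the paper does not restate the formula) is f(x) = 10n + sum_i (x_i^2 - 10 cos(2 pi x_i)):

```python
import math

def rastrigin(x):
    """Rastrigin function: f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))."""
    return 10 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                             for v in x)

# The global minimum f(0, ..., 0) = 0 matches the answer quoted above.
print(rastrigin([0.0] * 10))  # 0.0
```

The cosine term creates a regular grid of local minima, which is what makes this function a standard stress test for global optimizers.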
TABLE I. USED PARAMETERS IN PICA FOR TESTS AND CASES
Number of empires
The results of Mo et al., the ICA, and our proposed algorithm are presented in Tables 2-4.
Figures 4 and 5 show the convergence history of the ICA and PICA, respectively. The stability chart of PICA is shown in Figure 6. PICA reached the optimal answer in fewer than 50 iterations.
Test 2: This example has been used as a benchmark in earlier works.
The answer of Test 2 is 1.21598D, and the variables of the function lie in (3, 13). The comparison of the ICA and PICA with 1000 iterations and a population size of 300 is given in Table 5. PICA solved Test 2 more quickly than the prior methods, in fewer than 200 iterations (see Figures 7-9).
B. Case study
In this section, three commonly explored systems of nonlinear
equations have been used to demonstrate the performance of the
proposed method, and the obtained results have been compared
with the other known methods.
Case 1: This example has been given in several earlier works.
The solutions in those works were obtained with 120 iterations and an unreported population size. The parameters of the ICA method were set to 50 iterations with 250 countries. The solutions obtained by PICA are better and more accurate than those of the previous works (see Table 6). Figures 10-12 show the convergence history of Case 1, and Figure 13 shows its stability chart.
Case 2: (Problem 2, Test Problem 14.1.4, and a case study in earlier works)
The results of Case 2 in those works, with 50 iterations and a population size of 250, were compared with PICA in Table 8. The solutions obtained by PICA outperformed the mentioned methods with 250 countries and only 35 iterations. The speed-up of the proposed algorithm is also better than in the other works (see Figures 14-16).
Case 3: (Problem 6 and Test Problem 14.1.6 in the literature)
Case 3 has been solved by the filled function method and has been posed as a problem in other earlier works. The number of iterations for this problem in those works is 1000 and the population size is 300. Our results with the same numbers of iterations and countries are compared in Table 9. The convergence histories of the ICA and PICA are shown in Figures 17 and 18, respectively. Figure 19 shows the stability chart of PICA for Case 3. The statistical results of the tests and cases are given in Table 7. The comparison of the statistical results of the serial ICA and the parallel ICA is given in Table 10.
VI. DISCUSSION
In this paper, a parallel implementation of the ICA based on the multi-population method has been utilized to solve systems of nonlinear equations. There are different kinds of PICA implementations, such as the master-slave, multi-population, and hybrid methods, each with its own advantages. For example, the master-slave method can be utilized when we simply intend to increase the speed of the algorithm, whereas the multi-population method should be used when we intend to increase both speed and accuracy. The multi-population method increases the size of the initial population and therefore the selection pressure, which leads to more accurate results.
In our implementation, the ring connection topology has been used to connect the processors. The migration operation lets each processor send its best countries to the next processor and receive countries from the previous one. Through this mechanism, each processor shares its best results with the other processors, which reduces the number of iterations needed.
In this paper, PICA has been compared with other methods through some well-known benchmarks and case studies. PICA has obtained more accurate results with a lower number of iterations. The most important result concerns super-linear performance, where the efficiency of the parallel algorithm is greater than one. Our implementation achieved super-linear performance, which makes it a strong candidate for solving nonlinear problems.
Fig. 4. The convergence history of Rastrigin Function (from )
Fig. 5. The convergence history of Rastrigin with PICA (test 1)
Fig. 6. The stability chart of Rastrigin with PICA (test 1)
Fig. 7. The convergence history of test 2 with D=100 (from )
Fig. 8. The convergence history of PICA for test 2 with D=100
Fig. 9. The Stability chart of test 2 with D=100
Fig. 10. The convergence history of case 1 (from )
Fig. 11. The convergence history of case 1 (from )
Fig. 12. The convergence history of case 1 with PICA
Fig. 13. The stability chart of case 1 with PICA
Fig. 14. The stability chart of case 1 with PICA
Fig. 15. The convergence history of case 2 (from )
Fig. 16. The convergence history of case 2 with PICA
Fig. 17. The stability chart of case 2 with PICA
Fig. 18. The convergence history of case 3 (from )
Fig. 19. The convergence history of case 3 with PICA
Fig. 20. The stability chart of case 3 with PICA
TABLE II. RESULTS OF TEST 1 WITH MO ET AL. (FROM )
TABLE III. RESULTS OF TEST 1 WITH ICA (FROM  )
After 200 iterations
After 400 iterations
After 600 iterations
After 800 iterations
After 1000 iterations
TABLE IV. RESULTS OF TEST 1 WITH PICA (PRESENT STUDY)
After 200 iterations
After 400 iterations
After 600 iterations
After 800 iterations
After 1000 iterations
TABLE V. COMPARISON RESULTS OF TEST 2 WITH D=100
TABLE VI. COMPARISON RESULTS OF PICA FOR CASE 1 WITH PREVIOUS WORKS
PPSO  and Gyurhan 
PPSO  and Gyurhan 
PICA (present study)
PICA (present study)
PICA (present study)
TABLE VII. STATISTICAL RESULTS
Std. Error Mean
TABLE VIII. COMPARISON RESULTS OF CASE 2
The best in 
The best in 
The best in COA
The best in ICA
The best of PICA
TABLE IX. COMPARISON RESULTS OF CASE 3
TABLE X. THE COMPARISON STATISTICAL RESULTS OF THE SERIAL ICA AND THE PARALLEL ICA
VII. CONCLUSION AND FUTURE WORKS
In this paper, the parallel imperialist competitive algorithm based on MPI and the multi-population technique was utilized to solve systems of nonlinear equations. The PICA was compared with the serial ICA and several other proposed methods. According to the obtained results, the PICA is suitable for solving different kinds of complex problems, and it is faster and more efficient than the other methods. The figures indicate that the answers of our algorithm are stable and that the PICA converges to the best solution faster than the other methods, with a lower number of iterations and a better run time. As a result, we claim that the proposed PICA is a faster and more accurate method, which can be employed to solve complex problems. Our future work will consist of applying the proposed parallel algorithm to more practical optimization problems, like constrained engineering optimization.
REFERENCES
Y.Z. Luo, G.J. Tang, L.N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Appl. Soft Comput. 8 (2008) 1068-1073.
 Y. Mo, H. Liu, Q. Wang, Conjugate direction particle swarm optimization
solving systems of nonlinear equations, Comput. Math. Appl. 57 (2009)
 M. Jaberipour, E. Khorram, B. Karimi, Particle swarm algorithm for
solving systems of nonlinear equations, Comput. Math. Appl. 62 (2011)
 E. Cantu-Paz, A Survey of Parallel Genetic Algorithms, Department of
Computer Science and Illinois Genetic Algorithms Laboratory University
of Illinois at Urbana-Champaign, 1997.
 H. Liu, P. Li, and Y. Wen, Parallel Ant Colony Optimization Algorithm,
World Congress on Intelligent Control and Automation, China, June,
 R. Parpinelli, C. Benitez, and S. Lopes, Parallel Approaches for the
Artificial Bee Colony Algorithm, Handbook of Swarm Intelligence,
 C. Grosan, A. Abraham, A New Approach for Solving Nonlinear
Equations Systems, IEEE Trans. Syst. Man Cybern. A 38 (3) (2008)
Senior Member, IEEE.
C. Wang, R. Luo, K. Wu, B. Han, A new filled function method for an
unconstrained nonlinear equation, Comput. Appl. Math. 235 (2011) 1689-
 L. Vanneschi, D. Codecasa, and G. Mauri, A Comparative Study of Four
Parallel and Distributed PSO Methods, New Generat. Comput. 29(2011)
 J. Digalakis, and K. Margaritis, A Parallel Memetic Algorithm for Solving
Optimization Problems, 4th Metaheuristics International Conference,
Parallel Distributed Processing Laboratory, Greece, 2001.
 Gyurhan H. Nedzhibov, A family of multi-point iterative methods for
solving systems of nonlinear equations, J. Comput. Appl. Math.
 E. C. G. Wille, E. Y. H. S. Lopes, Discrete Capacity Assignment in IP
networks using Particle Swarm Optimization, Appl. Math. Comput.,217
 C. A. Floudas, P. M. Pardalos, C. S. Adjiman, W. R. Esposito, Z. H.
Gumus, S. T. Harding, J. L. Klepeis, C. A. Meyer, C. A. Schweiger,
Handbook of Test Problems in Local and Global Optimization, Kluwer
Academic Publishers, Dordrecht, the Netherlands, 1999.
 A. Mousa, W. Wahed, R. Allah, A Hybrid Ant Colony Optimization
Approach Based Local Search Scheme for Multi Objective Design
Optimizations, Electr. Pow. Syst. Res. 81 (2011) 1014-1023.
Ibrahiem M. M. El-Emary and Mona M. Abd El-Kareem, Toward Using Genetic Algorithm for Solving Nonlinear Equation Systems, World Appl. Sci. J. 5 (2008) 282-289.
 M. Abdollahi, A. Isazadeh, D. Abdollahi, Solving systems of nonlinear
equations using imperialist competitive algorithm, The 8th International
Industrial Engineering Conference, 8 (2012) 1-6.
 Nikos E. Mastorakis, Solving Non-linear Equations via Genetic
Algorithms, Proceedings of the 6th WSEAS Int. Conf. on
Evolutionary Computing, Lisbon, Portugal, June 16-18 (2005) 24-28.
 G. Li, Zh. Zeng, A neural-network algorithm for solving nonlinear
equation systems, IEEE International Conference on Computational
Intelligence and Security, CIS08 (2008) 20-23.
 G. Huan-Tong, S. Yi-Jie, S. Qing-Xi, W. Ting-Ting, Research of Ranking
Method in Evolution Strategy for Solving Nonlinear System of Equations,
IEEE International Conference on Information Science and Engineering,
ICISE09 (2009) 348-351.
 A. Ouyang, Y. Zhou, Q. Luo, Hybrid Particle Swarm Optimization
Algorithm for Solving Systems of Nonlinear Equations, IEEE
International Conference on Granular Computing, GRC09 (2009) 460-
 J. Wu, Zh. Cui, J. Liu, Using Hybrid Social Emotional Optimization
Algorithm with Metropolis Rule to Solve Nonlinear equations, IEEE
International Conference on Cognitive Informatics & Cognitive
Computing, ICCI*CC’11 (2011) 405-411.
 N. Henderson, W. F. Sacco, G. Mendes Platt, Finding more than one root
of nonlinear equations via a polarization technique: An application to
double retrograde vaporization, Chem. Eng. Res. Des. 88 (2010) 551-561.
 E. Pourjafari, H. Mojallali, Solving nonlinear equations systems with a
new approach based on invasive weed optimization algorithm and
clustering, Swarm Evol. Comput. 4 (2012) 3343.
 M. Abdollahi, A. Isazadeh, D. Abdollahi, Imperialist competitive
algorithm for solving systems of nonlinear equations, Comput. Math.
Appl.65 (2013) 1894-1908.
 M. Abdollahi, Sh. Lotfi, D. Abdollahi, Solving systems of nonlinear
equations using cuckoo optimization algorithm, The 3rd International
conference on The Contemporary Issues in Computer Sciences and
Information Technology (CICIS), 3 (2012) 191-194.
 E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an
algorithm for optimization inspired by imperialistic competition, in:IEEE
Congress on Evolutionary Computation, 2007, pp. 4661-4667.
 A. Majd, Sh. Lotfi and G. Sahebi, “Review on Parallel Evolutionary
Computing and Introduce Three General Framework to Parallelize All EC
Algorithms,” 5th Conference on Information and Knowledge Technology
 A. Majd, Sh. Lotfi, G. Sahebi, M. Daneshtalab and J. Plosila, “PICA:
Multi-Population Implementation of Parallel Imperialist Competitive
Algorithms,” 24th Euromicro International Conferences on Parallel,
Distributed and Network-Based Processing, PDP 2016.
 M. Abdollahi, A. Bouyer, D. Abdollahi, Improved cuckoo optimization
algorithm for solving systems of nonlinear equations, J. Supercomput. 72