Multi-Population Parallel Imperialist Competitive Algorithm for Solving
Systems of Nonlinear Equations
Amin Majd, Department of Information Technology, University of Turku, Turku, Finland, Amin.Majd@utu.fi
Mahdi Abdollahi, Department of Computer Sciences, University of Tabriz, Tabriz, Iran, m.abdolahi89@ms.tabrizu.ac.ir
Golnaz Sahebi, Department of Information Technology, University of Turku, Turku, Finland, Golnaz.Sahebi@utu.fi
Davoud Abdollahi, Department of Mathematical Sciences, University College of Daneshvaran, Tabriz, Iran, abdollahi_d@daneshvaran.ac.ir
Masoud Daneshtalab, Royal Institute of Technology (KTH), Stockholm, Sweden, masdan@kth.se
Juha Plosila, Department of Information Technology, University of Turku, Turku, Finland, juplos@utu.fi
Hannu Tenhunen, Royal Institute of Technology (KTH), Stockholm, Sweden, and University of Turku, Finland, Hannu@kth.se
Abstract: The widespread importance of optimization and of solving NP-hard problems, like solving systems of nonlinear equations, is indisputable in a diverse range of sciences. Nonlinear equations have vast applications, for example in economics, engineering, chemistry, mechanics, medicine, and robotics. There are different types of methods for solving systems of nonlinear equations; one of the most popular is Evolutionary Computing (EC). This paper presents an evolutionary algorithm, called the Parallel Imperialist Competitive Algorithm (PICA), which is based on a multi-population technique for solving systems of nonlinear equations. In order to demonstrate the efficiency of the proposed approach, some well-known problems are utilized. The results indicate that PICA has a high success rate and a quick convergence rate.
Keywords: parallel imperialist competitive algorithm (PICA); multi-population technique; evolutionary computing (EC); super-linear performance; nonlinear equations; multi-objective optimization
I. INTRODUCTION
Systems of nonlinear equations are one of the NP-hard problems, and they resemble multi-objective optimization problems. Systems of nonlinear equations are utilized in a range of engineering applications, such as weather forecasting, petroleum geological prospecting, computational mechanics, and control. The quality of the solutions of classical methods, like the Newton-type methods, depends on the initial guess of the solution. However, selecting suitable initial solutions for most systems of nonlinear equations is extremely difficult.
So far, several methods have been proposed for optimization
problems. They can be classified into two major classes:
mathematical methods and evolutionary computing (EC)
methods. There are different types of EC methods, most of them
are implemented in a sequential mode and some in a parallel
mode.
Sequential EC methods are more popular than other
mathematical methods for solving nonlinear equations, but they
do not always provide sufficient accuracy. There are different
kinds of parallel EC methods that are capable of improving the
accuracy of results. In this work, we utilize a multi-population
EC method that improves the results of the used benchmarks.
The rest of the paper is organized as follows: Section II reviews the related work, and Section III provides background on the imperialist competitive algorithm. Section IV introduces the parallel implementation of the ICA based on the multi-population technique, and Section V presents the experiments and compares the proposed algorithm with related previous works. Section VI discusses the results, and Section VII concludes the paper and outlines future work.
II. RELATED WORK
Let us first look into the sequential algorithms that have been
proposed for solving systems of nonlinear equations. El-Emary
and El-Kareem employed Gauss-Legendre integration as a
technique to solve the system of nonlinear equations and used
genetic algorithm (GA) to discover the results without
converting the nonlinear equations to linear equations [15].
Mastorakis employed genetic algorithm (GA) to solve a non-
linear equation as well as systems of non-linear equations [17].
Li and Zeng [18], used a neural-network algorithm for solving
a set of nonlinear equations. The computation is carried out by
a simple gradient descent rule with variable step-size levels
[18]. Huan-Tong et al. proposed a modified evolution strategy
(ES) based on a probability ranking method to solve
complicated nonlinear systems of equations (NSE) problems
[19]. M. Abdollahi et al. applied the imperialist competitive algorithm for solving nonlinear systems of equations [16]. Ouyang et al. employed a hybrid particle swarm optimization (HPSO) algorithm, in which the particle swarm optimization (PSO) method focuses on "exploration" and the Nelder-Mead simplex method (SM) focuses on "exploitation" [20], while Wu et al. used a new variation of the social emotional optimization algorithm called MSEOA, mainly inspired by the Metropolis rule [21]. M. Abdollahi et al. proposed a cuckoo optimization algorithm for solving nonlinear systems of equations [25], [29]. Luo et al. applied a combination of chaos search and Newton-type methods [1]. Grosan and Abraham employed a new perspective of evolutionary algorithms (EA) [7], Mo et al. proposed a combination with the conjugate direction method (CD) [2], and M. Jaberipour used the particle swarm algorithm [3]. Henderson et al. [22] and Pourjafari et al. [23] introduced a methodology based on a polarization technique and a novel optimization method based on Invasive Weed Optimization (IWO), respectively, for finding all roots of a system of nonlinear equations.
In past years, researchers have utilized parallel EC methods for optimization problems, such as Parallel Genetic Algorithms [4], Parallel PSO [9], Parallel ABC (PABC) [6], Parallel Ant Colony Optimization (PACO) [5], and Parallel Memetic [10] algorithms. Wu and Kang used a parallel elite-subspace evolutionary algorithm for solving systems of
nonlinear equations [14]. Some parallel EC methods can
achieve super-linear performance [12], where each one is
implemented with different techniques and hardware platforms.
For example, Parallel Genetic Algorithms are implemented in the following four categories [4]: Master-Slave Genetic Algorithms, Coarse-Grain Genetic Algorithms (Multi-Population Genetic Algorithms), Fine-Grain Genetic Algorithms, and Hybrid Genetic Algorithms [27].
The results obtained by mathematical methods are sensitive to the initial guess of the solution. On the other hand, evolutionary algorithms require a large population size, and their convergence to the global minimum is slow. The EC methods are impractical for large-scale problems, like systems of nonlinear equations, because of their high linear algebra costs and large memory requirements. For this reason, it is necessary to find an efficient algorithm for solving systems of nonlinear equations. Let a system of nonlinear equations be of the form:

    f_i(x_1, x_2, ..., x_n) = 0,  i = 1, 2, ..., n.    (1)

In order to transform (1) into an optimization problem, we use the auxiliary function:

    f(x) = sum_{i=1}^{n} f_i^2(x),    (2)

where f(x) is the objective function to be minimized; its global minimum value, zero, is attained exactly at a solution of (1).
In this paper, a parallel imperialist competitive algorithm (PICA) based on the multi-population technique for solving systems of nonlinear equations is presented. The proposed method overcomes the mentioned weaknesses of evolutionary methods for solving such systems. We have also selected some well-known problems for the evaluation.
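Assuming the auxiliary function in (2) is the usual sum of squared residuals, the transformation from root finding to minimization can be sketched as follows; the two-equation system here is a hypothetical illustration, not one of the paper's benchmarks.

```python
import math

# Hypothetical two-equation system (an illustration, not a paper benchmark):
#   f1(x, y) = x^2 + y^2 - 1 = 0
#   f2(x, y) = x - y = 0
def residuals(v):
    x, y = v
    return [x * x + y * y - 1.0, x - y]

def objective(v):
    """Sum-of-squares auxiliary function: zero exactly at a root of the system."""
    return sum(r * r for r in residuals(v))

root = (math.sqrt(0.5), math.sqrt(0.5))
print(objective(root))  # ~0 at a true root, while objective((1.0, 0.0)) == 1.0
```

Any minimizer driving this objective to zero is a solution of the original system, which is what allows an evolutionary optimizer such as the ICA to be applied directly.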
III. BACKGROUND
The Imperialist Competitive Algorithm (ICA) was introduced
by E. Atashpaz and C. Lucas [26], and is inspired by
imperialistic competition. ICA is an evolutionary algorithm for solving optimization problems. In this algorithm, all countries
are divided into two types: imperialist states and colonies.
Imperialistic competition is the main part of this algorithm, and
the expectation is that the colonies converge to the global
minimum of the cost function. In the first step, the algorithm
creates some countries and after sorting them the best countries
are selected to be imperialists and the rest of the countries form
the colonies of these imperialists (Fig 1, step 1). After dividing
all colonies among imperialists, these colonies start moving
toward their relevant imperialist countries (Fig 1, step 2). In the
next step, the ICA computes the power of each imperialist and
the imperialistic competition begins. The weakest imperialist
loses its weakest colony and the selected imperialist captures
this colony (Fig 1, step 5). These steps are then repeated until
the termination condition is satisfied. The termination
conditions can be different. For example, the ICA algorithm
could stop after a certain number of iterations or when all colonies have become members of one imperialist (see Fig. 1).
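The steps above can be condensed into a minimal, simplified ICA loop. This is an illustrative sketch, not the paper's implementation: the parameter names are hypothetical, colonies are assigned round-robin rather than by imperialist power, and the imperialistic-competition bookkeeping between empires is omitted for brevity.

```python
import random

def ica(cost, dim, bounds, n_countries=50, n_imperialists=5, iters=200,
        beta=2.0, revolution_rate=0.04):
    lo, hi = bounds
    countries = [[random.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_countries)]
    for _ in range(iters):
        countries.sort(key=cost)                   # best (cheapest) first
        imperialists = countries[:n_imperialists]  # step 1: pick imperialists
        colonies = countries[n_imperialists:]
        for i, col in enumerate(colonies):
            imp = imperialists[i % n_imperialists]  # round-robin assignment
            # Step 2: assimilation - move the colony toward its imperialist.
            new = [c + beta * random.random() * (v - c)
                   for c, v in zip(col, imp)]
            # Revolution: occasionally restart a colony at a random position.
            if random.random() < revolution_rate:
                new = [random.uniform(lo, hi) for _ in range(dim)]
            colonies[i] = new
        countries = imperialists + colonies
    return min(countries, key=cost)

random.seed(0)
best = ica(lambda v: sum(x * x for x in v), dim=2, bounds=(-5.0, 5.0))
print(best)  # a point near the global minimum at the origin
```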
ICA is a suitable method for optimization problems, but there
exist some challenges concerning the evolutionary algorithms.
For example, when we are considering a far-reaching search
area, we need a large initial population to obtain an appropriate
result, but with a resource constrained processor we may not be
able to satisfy this requirement. Also, when we face a complex
problem that needs complex computations, the run time will
increase, and therefore we need to utilize an efficient method to
improve the speed, stability, and accuracy.
The sequential ICA inherently has a parallel structure, and therefore a parallel ICA implementation is a viable way to improve it. In the ICA, each imperialist and its colonies work independently, and after a decade (an ICA iteration) a colony may move to another imperialist. Hence, the algorithm already behaves like a multi-population method running on a single processor. In the next section, we utilize a multi-population ICA to solve some complex problems.
IV. THE PROPOSED METHOD
In this work, we utilize a multi-population model to implement the Parallel Imperialist Competitive Algorithm (PICA), applying a selective local search strategy, in order to solve systems of nonlinear equations. We intend to utilize the full capacity of evolutionary algorithms (e.g., faster convergence, shorter run time, and higher accuracy) for solving such problems. There are different approaches for parallelizing evolutionary algorithms, such as the master-slave, multi-population, fine-grain, and hybrid methods; of these, we use the multi-population method, since it converges better and yields more accurate results than the other parallel methods. In the multi-population method, there are several independent populations on different processors, each covering its own independent area of the search space.
Each processor runs the ICA on its population with independent
parameter values. Due to this independence, different levels of exploration and exploitation can be utilized, and therefore the algorithm can escape from local optima. The other
advantage of the multi-population method is its ability for
migration, which can significantly improve performance of
solving nonlinear equations. It is the most important parallel
operation in the multi-population implementation, as letting
processors to share the best results and investigate areas with
other processors help discover better results in a smaller number
of iterations. There are different kinds of migration techniques,
for example, each processor can select the best or worst
countries to be sent to the other processors or can randomly
select some countries. In our work, we select the best countries
to be migrated between processors to share the best results
together, and the processors replace their worst countries with
the received best countries. Receiver processors are selected
based on the connection topology used. Figure 2 illustrates the migration behaviour. For example, for a network with a fully
connected topology, each processor can receive countries from
any other processor. It can be very useful if all processors receive all migrated countries, but in practice we have to find a balance between the data communication cost (communication time) and the achieved improvement in results.
In this work, each country is a possible solution for the
selected nonlinear equation. Hence, the algorithm randomly
creates the initial populations, which are the possible solutions.
Some of them are then selected, at each iteration step, to be
processed by the ICA in order to converge them towards better
results.
Fig. 1. Imperialist Competitive Algorithm [28]
In our implementation, several processors are connected
together using a ring topology and message passing based
communication. The ring topology has been selected because
of its low communication cost and simplicity. Each processor
is first initialized with a set of independent countries (the
number of countries in each processor is the same) and the ICA
to be independently run on it. After some decades (the period varies from one execution to another), the best country migrates
from each processor Pi to the next processor Pi+1 in the ring
and replaces the worst country in Pi+1. Since we utilize the ring
topology to connect processors together, and because the
migration takes place in all processors synchronously, the
numbers of countries in any two processors are equal at any
given time. The migration strategy can affect the result as well.
Generally, it is better to establish a balance between the
migration rate and data communication. The chosen ring
topology is utilized to reduce the migration rate and to decrease
the distance of the migrations.
Fig. 2. Multi-population migration operation
Figure 2 shows the architecture of the multi-population
structure with a ring topology. Figure 3 presents the Multi-
Population ICA pseudo code. In our implementation, all
parameters in different processors are equal, and all ICA
computations run independently in different processors. On the
other hand, the migration operations run synchronously as
explained above.
In the multi-population ICA, the selection pressure increases as the number of countries grows, which helps to obtain more accurate results in a shorter time and to converge faster than the sequential ICA. Therefore, it is beneficial to increase the number of countries.
Processor Pi:
1. Create independent initial countries.
2. Run the ICA independently.
3. If it is time for migration:
   a) Wait until all processors arrive at this point.
   b) Send the best country to processor P(i+1) mod #processors.
   c) Receive a country from processor P(i-1) mod #processors and replace the worst country with the received one.
4. If the termination condition is reached, terminate the algorithm.
5. Show the best country.
6. End.
Fig. 3. Multi-population ICA pseudo code
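The migration step of the pseudocode can be simulated in a few lines. This sketch (an illustration, not the paper's MPI code) models each processor's population as a list of candidate solutions and performs one synchronous ring-migration step.

```python
import random

def migrate_ring(populations, cost):
    """One synchronous ring-migration step (steps 3a-3c of the pseudocode):
    processor i sends its best country to processor (i+1) mod P, which
    replaces its own worst country with the received one."""
    P = len(populations)
    # Select every best before any replacement, mirroring the barrier in 3a.
    bests = [min(pop, key=cost) for pop in populations]
    for i in range(P):
        received = bests[(i - 1) % P]   # country sent by the predecessor
        pop = populations[i]
        worst = max(range(len(pop)), key=lambda j: cost(pop[j]))
        pop[worst] = list(received)     # replace the worst country
    return populations

# Illustrative use: 4 "processors", each holding 10 two-dimensional countries.
random.seed(42)
cost = lambda v: sum(x * x for x in v)
pops = [[[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(10)]
        for _ in range(4)]
migrate_ring(pops, cost)
```

Because all bests are gathered before any replacement, the step is order-independent, matching the synchronous migration described above.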
V. EXPERIMENT AND RESULTS
In this section, five commonly explored problems have been utilized to demonstrate the performance of the PICA. The obtained results have been compared with those of the other well-known methods that have used the same problems. The parallel ICA has been implemented on both shared-memory and message-passing models. The Message Passing Interface (MPI) has been utilized to parallelize our algorithm, and MPICH2 to run it.
In the multi-population ICA, the processors have been connected in a ring topology, with a different number of processors in the different tests. The proposed algorithm has been tested on an Intel Core i3-330M processor (2.13 GHz, 64-bit) with 4 GB of memory. The best results of the benchmarks have been obtained over 30 independent runs. The parameters used for solving the problems are given in Table 1.
A. Benchmarks
Test 1: 10-dimensional Rastrigin function

    f(x) = sum_{i=1}^{10} [x_i^2 - 10 cos(2*pi*x_i) + 10].    (3)

The global minimum of this test is f(0, 0, ..., 0) = 0. This test has been solved by Mo et al. [2] and by ICA [24] with 1000 iterations and a population size of 300. PICA has been applied to optimize it with the same parameters.
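Test 1's objective, the standard Rastrigin function, evaluates as follows (a direct transcription of the standard definition):

```python
import math

def rastrigin(x):
    """Standard Rastrigin function; global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

print(rastrigin([0.0] * 10))  # 0.0
```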
TABLE I. PARAMETERS USED IN PICA FOR THE TESTS AND CASES

Parameter              Test 1   Test 2   Case 1   Case 2   Case 3
Total population size  300      300      250      250      300
Number of empires      10       10       10       10       10
Number of iterations   1000     1000     45       50       1000
Revolution rate        0.04     0.04     0.04     0.04     0.04
Ξ                      0.02     0.02     0.02     0.02     0.02
θ                      0.5      0.5      0.5      0.5      0.5
β                      2        2        2        2        2
The results of Mo et al. [2], ICA [24], and our proposed algorithm are presented in Tables 2-4. Figures 4 and 5 show the convergence history of ICA and PICA, respectively, and Figure 6 shows the stability chart of PICA. PICA reached the optimal answer in fewer than 50 iterations.
Test 2: This example has been used as a benchmark in [3] and [24]:

    f(x) = sum_{i=1}^{D} [sin(x_i) + sin(2*x_i/3)],  x_i in (3, 13).    (4)

The global minimum of Test 2 is -1.21598D. The comparison results of ICA [24] and PICA with 1000 iterations and a population size of 300 are given in Table 5. PICA solved Test 2 faster than the prior methods, in fewer than 200 iterations (see Figures 7-9).
B. Case study
In this section, three commonly explored systems of nonlinear
equations have been used to demonstrate the performance of the
proposed method, and the obtained results have been compared
with the other known methods.
Case 1: This example has been given in [3], [11], [24], and [25]:


(5)
The solutions in [3] and [11] have been obtained with 120 iterations and an unknown population size. The parameters of the ICA method [24] were set to 50 iterations with 250 countries. The solutions obtained by PICA are better and more accurate than those of the previous works (see Table 6). Figures 10-12 show the convergence history of Case 1, and Figure 13 shows its stability chart.
Case 2: (Problem 2 in [8], Test Problem 14.1.4 in [13], and a case study in [24] and [25])

    0.5 sin(x_1 x_2) - 0.25 x_2 / pi - 0.5 x_1 = 0,
    (1 - 0.25/pi) (e^(2 x_1) - e) + e x_2 / pi - 2 e x_1 = 0.    (6)
The results for Case 2 in [8], [13], [24], and [25], with 50 iterations and a population size of 250, are compared with PICA in Table 8. The solutions obtained by PICA outperform the mentioned methods with 250 countries and 35 iterations. The speedup of the proposed algorithm is also better than that of the other works (see Figures 14-16).
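Assuming Case 2 takes the standard form of Test Problem 14.1.4 in [13] (a reconstruction, stated here as an assumption), the root (0.5, pi) reported in Table 8 can be checked directly:

```python
import math

# Assumed form of Case 2 (Floudas et al. test problem 14.1.4):
def f1(x1, x2):
    return 0.5 * math.sin(x1 * x2) - 0.25 * x2 / math.pi - 0.5 * x1

def f2(x1, x2):
    return (1.0 - 0.25 / math.pi) * (math.exp(2.0 * x1) - math.e) \
           + math.e * x2 / math.pi - 2.0 * math.e * x1

# The root (0.5, pi) reported in Table 8 satisfies both equations:
print(f1(0.5, math.pi), f2(0.5, math.pi))  # both ~0.0
```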
Case 3: (Problem 6 in [8] and Test Problem 14.1.6 in [13])
Case 3 has been solved by the filled function method in [8] and
has been proposed as a problem in [13] and [24].







(7)
The number of iterations for this problem in [8], [13], and [24] is 1000 and the population size is 300. Our results with the same number of iterations and countries are compared in Table 9. The convergence history of ICA [24] and PICA is shown in Figures 17 and 18, respectively, and Figure 19 shows the stability chart of PICA for Case 3. The statistical results of the tests and cases are given in Table 7, and the comparative statistical results of the serial ICA and the parallel ICA are given in Table 10.
VI. DISCUSSION
In this paper, a parallel implementation of the ICA based on the multi-population method has been utilized to solve systems of nonlinear equations. There are different kinds of PICA implementations, such as the master-slave, multi-population, and hybrid methods, each with different advantages. For example, the master-slave method can be utilized when we simply intend to increase the speed of the algorithm, but the multi-population method must be used when we intend to increase both speed and accuracy. The multi-population method increases the size of the initial population and therefore the selection pressure, which leads to more accurate results.
In our implementation, the ring connection topology has been used to connect the processors. The migration operation enables each processor to send its countries to the next processor and to receive countries from the previous processor. Through this mechanism, each processor shares its best results with the other processors, which reduces the number of iterations considerably.
In this paper, PICA has been compared with other methods through some well-known benchmarks and case studies. PICA has obtained more accurate results with a lower number of iterations.
The most important result of PICA concerns super-linear performance (where the efficiency value of the algorithm is more than one). Our implementation achieved super-linear performance, which indicates that it is an outstanding method for solving nonlinear problems.
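Speedup and efficiency here follow the usual definitions; as a quick check, the numbers below reproduce the Test 1 row of Table 10.

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processors):
    # Efficiency above 1 on n processors indicates super-linear speedup.
    return speedup(t_serial, t_parallel) / n_processors

# Test 1 row of Table 10: serial 3.38 s, parallel 0.84 s on 2 processors.
s = speedup(3.38, 0.84)
e = efficiency(3.38, 0.84, 2)
print(round(s, 2), round(e, 2))  # 4.02 2.01
```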
Fig. 4. The convergence history of Rastrigin Function (from [24])
Fig. 5. The convergence history of Rastrigin with PICA (test 1)
Fig. 6. The stability chart of Rastrigin with PICA (test 1)
Fig. 7. The convergence history of test 2 with D=100 (from [24])
Fig. 8. The convergence history of PICA for test 2 with D=100
Fig. 9. The Stability chart of test 2 with D=100
Fig. 10. The convergence history of case 1 (from [3])
Fig. 11. The convergence history of case 1 (from [24])
Fig. 12. The convergence history of case 1 with PICA
Fig. 13. The stability chart of case 1 with PICA
Fig. 14. The stability chart of case 1 with PICA
Fig. 15. The convergence history of case 2 (from [24])
Fig. 16. The convergence history of case 2 with PICA
Fig. 17. The stability chart of case 2 with PICA
Fig. 18. The convergence history of case 3 (from [24])
Fig. 19. The convergence history of case 3 with PICA
Fig. 20. The stability chart of case 3 with PICA
TABLE II. RESULTS OF TEST 1 WITH MO ET AL. (FROM [2])

Variable   Initial    After 200   After 400   After 600   After 800   After 1000
x1          0.1431    -0.0001     -0.0007      0.0001     -0.0000     -0.0000
x2          2.1983    -0.0001      0.0000      0.0001      0.0001      0.0001
x3          1.9401     0.0000      0.0000      0.0001     -0.0000      0.0001
x4         -1.7080    -0.0002     -0.0001      0.0000     -0.0000     -0.0000
x5          0.2261    -0.9950     -0.9962     -0.9948      0.0001      0.0001
x6          0.9392     0.9950      0.9941      0.9949      0.9950      0.9949
x7         -0.1129     0.9949      0.9949      0.0001     -0.0001      0.0000
x8         -0.1516     0.9950      0.9949      0.9949      0.9949     -0.0000
x9         -2.1893    -0.0001      0.0000      0.0001     -0.0000     -0.0000
x10         4.9798     0.9950      0.0000      0.0001     -0.0000     -0.0000
TABLE III. RESULTS OF TEST 1 WITH ICA (FROM [24])

Variable   Initial       After 200      After 400     After 600     After 800     After 1000
x1         -2.883527     -0.1084e-007   -0.1404e-008  -0.1404e-008  -0.1404e-008  -0.1404e-008
x2         -2.111072      0.1710e-007    0.0275e-008   0.0275e-008   0.0275e-008   0.0275e-008
x3         -0.869045     -0.0048e-007   -0.0656e-008  -0.0656e-008  -0.0656e-008  -0.0656e-008
x4          1.985114      0.6947e-007    0.0855e-008   0.0855e-008   0.0855e-008   0.0855e-008
x5         -1.156667      0.0328e-007   -0.1015e-008  -0.1015e-008  -0.1015e-008  -0.1015e-008
x6          3.083374      0.0356e-007    0.0899e-008   0.0899e-008   0.0899e-008   0.0899e-008
x7          3.093877     -0.0948e-007    0.0349e-008   0.0349e-008   0.0349e-008   0.0349e-008
x8         -2.020172      0.1528e-007    0.1610e-008   0.1610e-008   0.1610e-008   0.1610e-008
x9          2.832951     -0.2304e-007   -0.0180e-008  -0.0180e-008  -0.0180e-008  -0.0180e-008
x10        -2.208695     -0.2454e-007    0.0147e-008   0.0147e-008   0.0147e-008   0.0147e-008
f(x)       83.041615      1.3287e-012    0             0             0             0
TABLE IV. RESULTS OF TEST 1 WITH PICA (PRESENT STUDY)

Variable   Initial          After 200        After 400        After 600        After 800        After 1000
x1          1.928441        -4.716237e-011   -4.716237e-011   -4.716237e-011   -4.716237e-011   -4.716237e-011
x2         -2.248101         2.157231e-011    2.157231e-011    2.157231e-011    2.157231e-011    2.157231e-011
x3          1.341671        -1.001342e-011   -1.001342e-011   -1.001342e-011   -1.001342e-011   -1.001342e-011
x4          0.728811         1.713127e-011    1.713127e-011    1.713127e-011    1.713127e-011    1.713127e-011
x5          1.728128        -7.887191e-011   -7.887191e-011   -7.887191e-011   -7.887191e-011   -7.887191e-011
x6         -2.839121        -2.837190e-012   -2.837190e-012   -2.837190e-012   -2.837190e-012   -2.837190e-012
x7          1.871831         1.238291e-011    1.238291e-011    1.238291e-011    1.238291e-011    1.238291e-011
x8          1.934281        -6.348271e-011   -6.348271e-011   -6.348271e-011   -6.348271e-011   -6.348271e-011
x9          1.409124         8.119381e-011    8.119381e-011    8.119381e-011    8.119381e-011    8.119381e-011
x10         0.365281         1.981381e-011    1.981381e-011    1.981381e-011    1.981381e-011    1.981381e-011
f(x)        1.241803e+002    0                0                0                0                0
TABLE V. COMPARISON RESULTS OF TEST 2 WITH D=100

Iteration   ICA [24]     PICA
Initial     29.786871    -20.0000
After 100   -78.6467     -95.0017
After 200   -103.3897    -121.5982
After 300   -113.0125    -121.5982
After 400   -120.5298    -121.5982
After 500   -121.5923    -121.5982
After 600   -121.5979    -121.5982
After 700   -121.5982    -121.5982
After 800   -121.5982    -121.5982
After 900   -121.5982    -121.5982
TABLE VI. COMPARISON RESULTS OF PICA FOR CASE 1 WITH [3], [11], [24] AND [25]

Method                      x1                         x2                         f(x)
PPSO [3] and Gyurhan [11]   -0.29051455550725          1.08421508149135           4.686326815078573e-029
PPSO [3] and Gyurhan [11]   -0.793700525984100         -0.793700525984100         1.577721810442024e-030
COA [25]                    1.08421508149135           -0.29051455550725          4.686326815078573e-029
COA [25]                    -0.29051455550725          1.08421508149135           4.686326815078573e-029
ICA [24]                    1.084215081491351          -0.290514555507251         3.562200025138631e-030
ICA [24]                    -0.793700525984100         -0.793700525984100         1.577721810442024e-030
ICA [24]                    -0.290514555507251         1.084215081491351          3.562200025138631e-030
PICA (present study)        1.0842150814913511         -0.2905145555072514        4.9303806576313238e-032
PICA (present study)        -0.79370052598409995582    -0.79370052598409995582    3.9443045261050590e-031
PICA (present study)        -0.2905145555072514        1.0842150814913511         4.9303806576313238e-032
TABLE VII. STATISTICAL RESULTS

Problem  N    Mean                      Std. Deviation           Std. Error Mean          Worst                      Best
Test 1   30   0.0                       0.0                      0.0                      0.0                        0.0
Test 2   30   -1.215981999999999e+002   7.226896453227138e-014   1.319444736062194e-014   -1.215982000000000e+002    -1.215982000000000e+002
Case 1   30   1.988586000000001e-031    1.739458967563944e-031   3.175803047964731e-032   3.944300000000000e-031     4.930400000000000e-032
Case 2   30   1.046312443884771e-026    1.511543708264576e-026   2.759688618905053e-027   4.414500000000001e-026     0.0
Case 3   30   6.898049999999997e-037    1.150107106181705e-037   2.099798685342363e-038   9.039099999999999e-037     5.800000000000000e-037
TABLE VIII. COMPARISON RESULTS OF CASE 2

Method                 x (variable values)       f (function values)          F(x)
The best in [8]        0.500432850000000         -0.000238520000000           7.693745216994211e-008
                       3.141863170000000         0.000141590000000
The best in [13]       0.299450000000000         6.139739265609290e-007       1.014347133848949e-012
                       2.836930000000000         -7.983627943186633e-007
                       0.500000000000000         2.111655261760603e-007       5.316365008296489e-012
                       3.141590000000000         -2.296034435467220e-006
The best in COA [25]   0.299300000000000         -7.128922385554737e-005      5.792081721117691e-009
                       2.836600000000000         2.664447941302939e-005
The best in ICA [24]   0.299448692495720         1.305289210051797e-012       5.631272867601562e-024
                       2.836927770471037         2.284838984678572e-013
                       0.500000000000000         0                            0
                       3.141592653589794         0
The best of PICA       0.29944869249092598       -1.387778780781446e-016      6.856310602018560e-032
                       2.8369277704589400        2.220446049250313e-016
                       0.500000000000000         0                            0
                       3.141592653589794         0
TABLE IX. COMPARISON RESULTS OF CASE 3

Method                 x (variable values)        f (function values)
The best in [8]        0.67154465                 -0.00000375
                       0.74097111                 0.00001537
                       0.95189459                 0.00000899
                       -0.30643725                0.00001084
                       0.96381470                 0.00001039
                       -0.26657405                0.00000709
                       0.40463693                 0.00000049
                       0.91447470                 -0.00000498
The best in [13]       0.1644                     -8.8531e-005
                       -0.9864                    3.5894e-005
                       -0.9471                    6.6216e-006
                       -0.3210                    2.1560e-005
                       -0.9982                    1.2320e-005
                       -0.0594                    3.9410e-005
                       0.4110                     -6.8400e-005
                       0.9116                     -6.4440e-005
The best of ICA [24]   0.164431665854327          2.775557561562891e-016
                       -0.986388476850967         -1.110223024625157e-016
                       0.718452601027603          -1.110223024625157e-016
                       0.718452601027603          1.734723475976807e-018
                       0.997964383970433          0
                       0.063773727557003          0
                       -0.527809105283546         0
                       -0.849363025083964         0
The best of PICA       0.164431665854327405       5.368529659036217811e-019
                       -0.986388476850967110      2.548307417523678423e-019
                       0.718452601027603350       -3.378192205891815512e-019
                       -0.695575919707310931      3.389211820587187123e-019
                       0.997964383970432520       0
                       0.063773727557002571       0
                       -0.527809105283546241      0
                       -0.849363025083964123      0
TABLE X. THE COMPARISON STATISTICAL RESULTS OF SERIAL ICA [24] AND PICA

Problem  Speedup  Efficiency  Serial ICA time  PICA time  #processors  Super-linear performance?
Test 1   4.02     2.01        3.38             0.84       2            Yes
Test 2   7.22     3.61        11.82            1.63       2            Yes
Case 1   2.82     1.41        0.0341           0.012      2            Yes
Case 2   5.1      2.55        2.1              0.411      2            Yes
Case 3   6.24     3.12        6.78             1.08       2            Yes
VII. CONCLUSION AND FUTURE WORKS
In this paper, the parallel imperialist competitive algorithm based on MPI (multi-population) was utilized to solve systems of nonlinear equations. The PICA was compared with the serial ICA and some of the other proposed methods. According to the obtained results, the PICA is suitable for solving different kinds of complex problems, and it is faster and more efficient than the other methods. The figures indicate that the answers of our algorithm are stable and that the convergence of the PICA to the best solution is faster than that of the other methods, with a lower number of iterations and a better run time. As a result, we claim that the proposed PICA is a faster and more accurate method that can be employed to solve complex problems. In the future, we plan to use the proposed parallel algorithm to solve more practical optimization problems, like constrained engineering optimization problems.
REFERENCES
[1] Y.Z. Luo, G.J. Tang, L.N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Appl. Soft Comput. 8 (2008) 1068-1073.
[2] Y. Mo, H. Liu, Q. Wang, Conjugate direction particle swarm optimization
solving systems of nonlinear equations, Comput. Math. Appl. 57 (2009)
1877-1882.
[3] M. Jaberipour, E. Khorram, B. Karimi, Particle swarm algorithm for
solving systems of nonlinear equations, Comput. Math. Appl. 62 (2011)
566-576.
[4] E. Cantu-Paz, A Survey of Parallel Genetic Algorithms, Department of
Computer Science and Illinois Genetic Algorithms Laboratory University
of Illinois at Urbana-Champaign, 1997.
[5] H. Liu, P. Li, and Y. Wen, Parallel Ant Colony Optimization Algorithm,
World Congress on Intelligent Control and Automation, China, June,
2006.
[6] R. Parpinelli, C. Benitez, and S. Lopes, Parallel Approaches for the
Artificial Bee Colony Algorithm, Handbook of Swarm Intelligence,
8(2010) 329-345.
[7] C. Grosan, A. Abraham, A New Approach for Solving Nonlinear
Equations Systems, IEEE Trans. Syst. Man Cybern. A 38 (3) (2008)
Senior Member, IEEE.
[8] C. Wang, R. Luo, k. Wu, B. Han, A new filled function method for an
unconstrained nonlinear equation, Comput. Appl. Math. 235 (2011) 1689-
1699.
[9] L. Vanneschi, D. Codecasa, and G. Mauri, A Comparative Study of Four
Parallel and Distributed PSO Methods, New Generat. Comput. 29(2011)
129-161.
[10] J. Digalakis, and K. Margaritis, A Parallel Memetic Algorithm for Solving
Optimization Problems, 4th Metaheuristics International Conference,
Parallel Distributed Processing Laboratory, Greece, 2001.
[11] Gyurhan H. Nedzhibov, A family of multi-point iterative methods for
solving systems of nonlinear equations, J. Comput. Appl. Math.
222 (2008) 244-250.
[12] E. C. G. Wille, E. Y. H. S. Lopes, Discrete Capacity Assignment in IP
networks using Particle Swarm Optimization, Appl. Math. Comput. 217
(2011) 5338-5346.
[13] C. A. Floudas, P. M. Pardalos, C. S. Adjiman, W. R. Esposito, Z. H.
Gumus, S. T. Harding, J. L. Klepeis, C. A. Meyer, C. A. Schweiger,
Handbook of Test Problems in Local and Global Optimization, Kluwer
Academic Publishers, Dordrecht, the Netherlands, 1999.
[14] A. Mousa, W. Wahed, R. Allah, A Hybrid Ant Colony Optimization
Approach Based Local Search Scheme for Multi Objective Design
Optimizations, Electr. Pow. Syst. Res. 81 (2011) 1014-1023.
[15] Ibrahiem M. M. El-Emary and Mona M. Abd El-Kareem, Toward Using
Genetic Algorithm for Solving Nonlinear Equation Systems, World Appl.
Sci. J. 5 (2008) 282-289.
[16] M. Abdollahi, A. Isazadeh, D. Abdollahi, Solving systems of nonlinear
equations using imperialist competitive algorithm, The 8th International
Industrial Engineering Conference, 8 (2012) 1-6.
[17] Nikos E. Mastorakis, Solving Non-linear Equations via Genetic
Algorithms, Proceedings of the 6th WSEAS Int. Conf. on Evolutionary
Computing, Lisbon, Portugal, June 16-18 (2005) 24-28.
[18] G. Li, Zh. Zeng, A neural-network algorithm for solving nonlinear
equation systems, IEEE International Conference on Computational
Intelligence and Security, CIS08 (2008) 20-23.
[19] G. Huan-Tong, S. Yi-Jie, S. Qing-Xi, W. Ting-Ting, Research of Ranking
Method in Evolution Strategy for Solving Nonlinear System of Equations,
IEEE International Conference on Information Science and Engineering,
ICISE09 (2009) 348-351.
[20] A. Ouyang, Y. Zhou, Q. Luo, Hybrid Particle Swarm Optimization
Algorithm for Solving Systems of Nonlinear Equations, IEEE
International Conference on Granular Computing, GRC09 (2009) 460-
465.
[21] J. Wu, Zh. Cui, J. Liu, Using Hybrid Social Emotional Optimization
Algorithm with Metropolis Rule to Solve Nonlinear equations, IEEE
International Conference on Cognitive Informatics & Cognitive
Computing, ICCI*CC’11 (2011) 405-411.
[22] N. Henderson, W. F. Sacco, G. Mendes Platt, Finding more than one root
of nonlinear equations via a polarization technique: An application to
double retrograde vaporization, Chem. Eng. Res. Des. 88 (2010) 551-561.
[23] E. Pourjafari, H. Mojallali, Solving nonlinear equations systems with a
new approach based on invasive weed optimization algorithm and
clustering, Swarm Evol. Comput. 4 (2012) 33-43.
[24] M. Abdollahi, A. Isazadeh, D. Abdollahi, Imperialist competitive
algorithm for solving systems of nonlinear equations, Comput. Math.
Appl. 65 (2013) 1894-1908.
[25] M. Abdollahi, Sh. Lotfi, D. Abdollahi, Solving systems of nonlinear
equations using cuckoo optimization algorithm, The 3rd International
conference on The Contemporary Issues in Computer Sciences and
Information Technology (CICIS), 3 (2012) 191-194.
[26] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an
algorithm for optimization inspired by imperialistic competition, in: IEEE
Congress on Evolutionary Computation, 2007, pp. 4661-4667.
[27] A. Majd, Sh. Lotfi and G. Sahebi, “Review on Parallel Evolutionary
Computing and Introduce Three General Framework to Parallelize All EC
Algorithms,” 5th Conference on Information and Knowledge Technology
(IKT), 2013.
[28] A. Majd, Sh. Lotfi, G. Sahebi, M. Daneshtalab and J. Plosila, “PICA:
Multi-Population Implementation of Parallel Imperialist Competitive
Algorithms,” 24th Euromicro International Conferences on Parallel,
Distributed and Network-Based Processing, PDP 2016.
[29] M. Abdollahi, A. Bouyer, D. Abdollahi, Improved cuckoo optimization
algorithm for solving systems of nonlinear equations, J. Supercomput. 72
(2016) 1246-1269.