Computers and Mathematics with Applications 65 (2013) 1894–1908
Imperialist competitive algorithm for solving systems of
nonlinear equations
Mahdi Abdollahi a,∗, Ayaz Isazadeh b, Davoud Abdollahi c
aUniversity of Tabriz, Aras International Campus, Department of Computer Sciences, P. O. Box 51666-16471, Islamic Republic of Iran
bUniversity of Tabriz, Department of Computer Sciences, Islamic Republic of Iran
cUniversity College of Daneshvaran, Tabriz, Islamic Republic of Iran
article info
Article history:
Received 3 September 2012
Received in revised form 29 January 2013
Accepted 6 April 2013
Keywords:
Nonlinear equations
Root solvers
Evolutionary multi-objective optimization
Solving systems of nonlinear equations is a relatively complicated problem which arises in a diverse range of
sciences. A number of different approaches have been proposed. In this paper, we employ the imperialist
competitive algorithm (ICA) for solving systems of nonlinear equations. Some well-known problems are presented
to demonstrate the efficiency of this new robust optimization method in comparison to other known methods.
©2013 Elsevier Ltd. All rights reserved.
1. Introduction
Systems of nonlinear equations arise in a diverse range of sciences such as economics, engineering, chemistry, mechanics,
medicine and robotics. The problem is nondeterministic polynomial-time hard when the equations in the system do not
exhibit nice linear or polynomial properties. However, a number of different approaches have been proposed: Luo
et al. [1] used a combination of chaos search and quasi-Newton methods, while Mo et al. [2] combined particle swarm
optimization with the conjugate direction method (CD). In the same way, Jaberipour et al. [3] used a particle swarm
algorithm, but there still exist some obstacles in solving systems of nonlinear equations. The most widely used algorithms are Newton-type methods,
though their convergence and effective performance can be highly sensitive to the initial guess of the solution supplied to
the methods; thus the algorithm may fail given an improper initial guess. For this reason, it is necessary to find an efficient
algorithm for solving systems of nonlinear equations. Let the system of nonlinear equations have the form

f_i(x_1, x_2, ..., x_n) = 0,  i = 1, 2, ..., n.  (1)

In order to transform (1) into an optimization problem, we use the auxiliary function

min f(x) = Σ_{i=1}^{n} f_i^2(x),  x = (x_1, x_2, ..., x_n).  (2)
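The sum-of-squares transformation of Eq. (2) can be sketched in a few lines; the two-equation system below is a hypothetical illustration (not one of the paper's test cases), chosen only because its root is known in closed form:

```python
import math

# Eq. (2): a system f_i(x) = 0, i = 1..n, becomes the scalar objective
# min f(x) = sum_i f_i(x)^2, whose global minimum value 0 is attained
# exactly at a root of the system.

def system(x):
    x1, x2 = x
    return [x1**2 + x2**2 - 1.0,   # f1(x) = 0: unit circle
            x1 - x2]               # f2(x) = 0: diagonal

def objective(x):
    return sum(fi**2 for fi in system(x))

root = [math.sqrt(0.5), math.sqrt(0.5)]  # exact root of the sample system
print(objective(root))                    # ~0 at a root, positive elsewhere
```

Any minimizer driving this objective to zero has found a simultaneous root of all equations, which is exactly how the ICA is applied in the remainder of the paper.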
Corresponding author. Tel.: +98 914 116 2612; fax: +98 411 669 6012.
The equations system is reduced to the same form in the approach used in [3]. In Section 2, we describe the imperialist
competitive algorithm (ICA). In Section 3, some well-known systems are presented to demonstrate the effectiveness and
robustness of the proposed ICA. Then, in Section 4, we study some numerical tests. At the end, the conclusion is given in
Section 5.
2. Imperialist competitive algorithm
In this paper, we employ the imperialist competitive algorithm (ICA) to solve systems of nonlinear equations. Recently, a
number of methods have been proposed for solving systems of nonlinear equations, such as genetic algorithms [4] and the
particle swarm algorithm [3]. ICA is a new evolutionary optimization algorithm inspired by imperialistic competition [5].
It is good to mention that ICA is a robust method based on imperialism which is the policy of extending the power and rule of
a government beyond its own borders [6]. In this algorithm, we start with an initial population as initial countries. Some of
the best countries among the population are selected to be the imperialists. The rest of the population is divided among the
mentioned imperialists as colonies. Then, the imperialistic competition begins among all the empires. The weakest empire
which cannot increase its power and is not able to succeed in this competition, will be eliminated from the competition. As a
result, all colonies move toward their relevant imperialists along with the competition among empires. Finally, the collapse
mechanism will hopefully cause all the countries to converge to a state in which there exists just one empire in the world
(in the domain of the problem), and all the other countries are colonies of that one empire. The most robust empire would then be our solution.
2.1. Generating initial empires
Finding an optimal solution is the goal of optimization. We generate our countries, which are randomized solutions,
as the population [5]. In an N-dimensional problem, a country is a 1 × N array defined as follows:

country = (x_1, x_2, ..., x_N),  x_i ∈ R,  1 ≤ i ≤ N.  (3)
We should generate N_pop of them. The cost of each country is the value of f(x) at the variables (x_1, x_2, ..., x_n). Then

cost = f(country) = f(x_1, x_2, ..., x_n).  (4)
We select Nimp of the most powerful countries to form the empires. The remaining Ncol of the population will be the
colonies. As a result, we will have two types of countries: imperialist and colony. Now, we divide the Ncol colonies among
N_imp imperialists. We define the normalized cost of an imperialist by

C_n = c_n − max_i {c_i}  (5)

where c_n is the cost of the nth imperialist and C_n is its normalized cost.
The normalized power of each imperialist is defined by

p_n = | C_n / Σ_{i=1}^{N_imp} C_i |.  (6)

So, the initial number of colonies of an empire will be

No.C_n = round{p_n · N_col}  (7)

where No.C_n is the initial number of colonies of the nth empire and N_col is the number of all colonies. To divide the colonies among the
imperialists, we randomly choose No.C_n of the colonies and give them to the nth empire.
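The colony allocation of Eqs. (5)–(7) can be sketched directly; the imperialist costs below are illustrative values, not data from the paper:

```python
# Eq. (5): C_n = c_n - max_i{c_i};  Eq. (6): p_n = |C_n / sum_i C_i|;
# Eq. (7): No.C_n = round(p_n * N_col). Lower cost means a stronger imperialist.

costs = [1.0, 2.5, 4.0]                 # imperialist costs c_n (illustrative)
N_col = 27                              # number of colonies to distribute

C = [c - max(costs) for c in costs]     # all <= 0; the best is most negative
total = sum(C)
p = [abs(cn / total) for cn in C]       # normalized powers, sum to 1
n_colonies = [round(pn * N_col) for pn in p]
print(n_colonies)                       # strongest imperialist gets the most
```

Note that with this formula the weakest imperialist may receive zero colonies, which is consistent with Eq. (7) as stated.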
2.2. Moving the colonies of an empire toward the imperialist
Each colony moves toward its imperialist by x units along the vector from the colony to the imperialist, where x is
a random variable with uniform distribution. Then

x ~ U(0, β × d),  β > 1  (8)

where d is the distance between the colony and the imperialist. β > 1 causes the colony to get closer to the imperialist. We have set
β = 2 for all of our problems. (See Figs. 1 and 2.)
To get different points around the imperialist, we add a random amount of deviation θ to the direction of movement;
θ is equal to 0.5 in this paper.
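A minimal two-dimensional sketch of this assimilation step, assuming Euclidean distance; the helper name `assimilate` is ours, not from the paper:

```python
# Eq. (8) plus angular deviation: the colony moves step ~ U(0, beta*d) along
# the colony-to-imperialist direction, deviated by a random angle in
# (-theta, theta) so that points around the imperialist are explored.
import math
import random

beta, theta = 2.0, 0.5   # values used in this paper

def assimilate(colony, imperialist):
    dx = imperialist[0] - colony[0]
    dy = imperialist[1] - colony[1]
    d = math.hypot(dx, dy)                          # distance d
    step = random.uniform(0.0, beta * d)            # x ~ U(0, beta*d)
    angle = math.atan2(dy, dx) + random.uniform(-theta, theta)
    return (colony[0] + step * math.cos(angle),
            colony[1] + step * math.sin(angle))

new_pos = assimilate((0.0, 0.0), (1.0, 1.0))
```

Because the step length is bounded by β·d, the moved colony can overshoot the imperialist but never leaves a ball of radius β·d around its old position.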
Fig. 1. Moving colonies toward their relevant imperialist.
Source: From [5].
Fig. 2. Moving colonies toward their relevant imperialist in a randomly deviated direction.
Source: From [5].
2.3. Revolution
In each iteration, a number of colonies in an empire are replaced with the same number of newly generated countries. We
do this by generating some new countries and randomly replacing some colonies of that empire with them. This
operation is called revolution, and it plays a sensitive role in this paper. The number of colonies of the empire that is
to be replaced with the same number of newly generated countries is:

N.R.C = round{RevolutionRate × No.(colonies of empire_n)}  (9)
where N.R.C is the number of revolutionary colonies. This improves the global convergence of the ICA and prevents it
from sticking to a local minimum [7].
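Eq. (9) amounts to regenerating a fixed fraction of each empire's colonies; a sketch with illustrative bounds (the function name `revolve` is ours):

```python
# Eq. (9): N.R.C = round(RevolutionRate * number_of_colonies); that many
# randomly chosen colonies are replaced with freshly randomized countries.
import random

def revolve(colonies, revolution_rate, bounds):
    n_rc = round(revolution_rate * len(colonies))          # Eq. (9)
    for idx in random.sample(range(len(colonies)), n_rc):  # random colonies
        colonies[idx] = [random.uniform(lo, hi) for lo, hi in bounds]
    return n_rc

colonies = [[0.0, 0.0] for _ in range(100)]
replaced = revolve(colonies, 0.02, [(-5.2, 5.2)] * 2)
print(replaced)   # round(0.02 * 100) = 2
```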
2.4. Exchanging positions of the imperialist and a colony
While moving, a colony may reach a position with a lower cost than that of the imperialist. In that case, the imperialist
moves to the position of that colony and vice versa.
2.5. Total power of an empire
The total power of an empire depends on the power of its imperialist together with all of its colonies, as follows:

T.C_n = cost(imperialist_n) + ξ · mean{cost(colonies of empire_n)}  (10)

where ξ is a positive coefficient. We have used the value 0.02 in all of our problems.
2.6. Imperialistic competition
All empires are in competition with each other to take possession of colonies of other empires and control them. As a
result, the power of the weaker empires gradually decreases and the power of the more powerful ones increases. To
reach this goal, we find the possession probability of each empire based on its total power. The normalized total cost is

N.T.C_n = T.C_n − max_i {T.C_i}  (11)

where T.C_n and N.T.C_n are respectively the total cost and the normalized total cost of the nth empire. Now we are able
to calculate the possession probability of each empire by

p_{p_n} = | N.T.C_n / Σ_{i=1}^{N_imp} N.T.C_i |.  (12)
Table 1
Parameters used in ICA for the tests and cases.
Parameter Value
Empires 10
RevolutionRate 0.02
Divide the mentioned colonies among the empires based on their possession probabilities. The vector P is formed as

P = [p_{p_1}, p_{p_2}, p_{p_3}, ..., p_{p_{N_imp}}]  (13)

and also the vector R with uniformly distributed elements

R = [r_1, r_2, r_3, ..., r_{N_imp}],  r_1, r_2, ..., r_{N_imp} ~ U(0, 1).  (14)

Finally, we obtain the vector D by

D = P − R = [p_{p_1} − r_1, p_{p_2} − r_2, p_{p_3} − r_3, ..., p_{p_{N_imp}} − r_{N_imp}].  (15)

The mentioned colonies are handed to the empire whose corresponding element of D is maximum.
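The stochastic selection of Eqs. (13)–(15) is a few lines of code; the possession probabilities below are illustrative values:

```python
# Eqs. (13)-(15): the contested colony goes to the empire whose element of
# D = P - R is largest; R adds uniform noise so weaker empires occasionally win.
import random

P = [0.5, 0.3, 0.2]                          # possession probabilities p_p_n
R = [random.uniform(0.0, 1.0) for _ in P]    # Eq. (14)
D = [p - r for p, r in zip(P, R)]            # Eq. (15)
winner = max(range(len(D)), key=lambda n: D[n])  # empire taking the colony
```

On average the empire with the highest possession probability wins most often, which is exactly the pressure that drives the imperialistic competition.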
2.7. The eliminated empire
When an empire loses all of its colonies, it collapses, and its imperialist becomes a colony of one of the remaining empires.
2.8. Convergence
At the end, we will have the most powerful empire with no competitor, and all colonies will be under the control of
this unique empire. All the colonies will then have the same cost as the unique empire; there is no difference
between the colonies and their unique empire. In this ideal world, we put an end to the algorithm.
3. Proposed method
Since the RevolutionRate proposed in [7] is fixed during each run, in some problems, especially in systems
of nonlinear equations, ICA falls into a local optimum. In this paper, to improve the efficiency of the algorithm, a
behavior similar to mutation in GA is simulated. In each step, a random number on (0, 1) is produced. If it is less
than or equal to RevolutionRate, the position of a colony is changed randomly; otherwise, it does not change. In each
iteration, this is applied to each colony of every empire. This method raises the efficiency of ICA significantly.
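The modified revolution described above can be sketched as follows; the function and variable names are ours, and the bounds are illustrative:

```python
# Proposed revolution: instead of regenerating a fixed fraction of colonies
# per empire (Eq. (9)), each colony independently mutates with probability
# RevolutionRate, mimicking GA mutation.
import random

def mutate_colonies(colonies, revolution_rate, bounds):
    mutated = 0
    for i, _ in enumerate(colonies):
        if random.random() <= revolution_rate:   # per-colony coin flip
            colonies[i] = [random.uniform(lo, hi) for lo, hi in bounds]
            mutated += 1
    return mutated

colonies = [[0.0, 0.0] for _ in range(1000)]
m = mutate_colonies(colonies, 0.02, [(-5.2, 5.2)] * 2)  # expect ~20 mutations
```

The expected number of mutated colonies equals Eq. (9)'s count, but the binomial variability injects extra diversity, which is the mechanism the paper credits for escaping local optima.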
4. Experiment and results
In this section, we have investigated the performance of ICA with four benchmark functions.
Test 1: 10-dimensional Rastrigin function

min f(x) = Σ_{i=1}^{10} [ x_i^2 − 10 cos(2πx_i) + 10 ],  |x_i| ≤ 5.2.
The solution is f(0, 0, ..., 0) = 0. We apply ICA with 1000 iterations; the parameters used are shown in
Table 1, with the same 300 countries as in [2].
The results of Mo et al. [2] and our final optimal results are given in Tables 2 and 3, respectively. Fig. 3 shows the
convergence history of the ICA.
The results of ICA are better and we reached the optimized solution before 250 iterations with the same population size
used in [2].
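The Test 1 objective is the standard Rastrigin function; a direct sketch of it, useful for reproducing the benchmark:

```python
# 10-dimensional Rastrigin: f(x) = sum_i [x_i^2 - 10 cos(2*pi*x_i) + 10],
# |x_i| <= 5.2, global minimum f(0, ..., 0) = 0, which is the value the ICA
# converges to in Table 3.
import math

def rastrigin(x):
    return sum(xi**2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

print(rastrigin([0.0] * 10))   # 0.0 at the global optimum
```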
Test 2: Hartman's function [2]

min f(x) = − Σ_{i=1}^{4} c_i exp( − Σ_{j=1}^{6} a_ij (x_j − p_ij)^2 ),  where 0 ≤ x_j ≤ 1,  c = (1, 1.2, 3, 3.2),

p_ij =
0.1312 0.1696 0.5569 0.0124 0.8283 0.5886
0.2329 0.4135 0.8307 0.3736 0.1004 0.9991
0.2348 0.1415 0.3522 0.2883 0.3047 0.6650
0.4047 0.8828 0.8732 0.5743 0.1091 0.0381
,  a_ij =
10 3 17 3.5 1.7 8
0.05 10 17 0.1 8 14
3 3.5 1.7 10 17 8
17 8 0.05 10 0.1 14
Table 2
Results of Mo et al.
Source: From [2].
Variables Initial iteration After 200 iterations After 400 iterations After 600 iterations After 800 iterations After 1000 iterations
x10.1431 0.0001 0.0007 0.0001 0.0000 0.0000
x22.1983 0.0001 0.0000 0.0001 0.0001 0.0001
x31.9401 0.0000 0.0000 0.0001 0.0000 0.0001
x41.7080 0.0002 0.0001 0.0000 0.0000 0.0000
x50.2261 0.9950 0.9962 0.9948 0.0001 0.0001
x60.9392 0.9950 0.9941 0.9949 0.9950 0.9949
x70.1129 0.9949 0.9949 0.0001 0.0001 0.0000
x80.1516 0.9950 0.9949 0.9949 0.9949 0.0000
x92.1893 0.0001 0.0000 0.0001 0.0000 0.0000
x10 4.9798 0.9950 0.0000 0.0001 0.0000 0.0000
Table 3
Results of Test 1 with ICA.
Variables Initial iteration After 200 iterations After 400 iterations After 600 iterations After 800 iterations After 1000 iterations
x1 2.883527 0.1084e−007 0.1404e−008 0.1404e−008 0.1404e−008 0.1404e−008
x2 2.111072 0.1710e−007 0.0275e−008 0.0275e−008 0.0275e−008 0.0275e−008
x3 0.869045 0.0048e−007 0.0656e−008 0.0656e−008 0.0656e−008 0.0656e−008
x4 1.985114 0.6947e−007 0.0855e−008 0.0855e−008 0.0855e−008 0.0855e−008
x5 1.156667 0.0328e−007 0.1015e−008 0.1015e−008 0.1015e−008 0.1015e−008
x6 3.083374 0.0356e−007 0.0899e−008 0.0899e−008 0.0899e−008 0.0899e−008
x7 3.093877 0.0948e−007 0.0349e−008 0.0349e−008 0.0349e−008 0.0349e−008
x8 2.020172 0.1528e−007 0.1610e−008 0.1610e−008 0.1610e−008 0.1610e−008
x9 2.832951 0.2304e−007 0.0180e−008 0.0180e−008 0.0180e−008 0.0180e−008
x10 2.208695 0.2454e−007 0.0147e−008 0.0147e−008 0.0147e−008 0.0147e−008
f(x) 83.041615 1.3287e−012 0 0 0 0
Fig. 3. The convergence history of Rastrigin function (Test 1).
where min f(x) = −3.3220. The ICA was run 10 times with the same parameters as in Test 1 and 300 iterations. The
results of Mo et al. [2] and ours are shown in Tables 4 and 5, respectively, with the same parameters.
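The Hartman objective can be assembled directly from the c, p_ij and a_ij arrays printed above; a sketch evaluating it near the optimum reported in Table 5:

```python
# 6-d Hartman function: f(x) = -sum_i c_i * exp(-sum_j a_ij (x_j - p_ij)^2);
# its known minimum is about -3.3220.
import math

c = [1.0, 1.2, 3.0, 3.2]
p = [[0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886],
     [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991],
     [0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650],
     [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381]]
a = [[10.0, 3.0, 17.0, 3.5, 1.7, 8.0],
     [0.05, 10.0, 17.0, 0.1, 8.0, 14.0],
     [3.0, 3.5, 1.7, 10.0, 17.0, 8.0],
     [17.0, 8.0, 0.05, 10.0, 0.1, 14.0]]

def hartman6(x):
    return -sum(c[i] * math.exp(-sum(a[i][j] * (x[j] - p[i][j])**2
                                     for j in range(6)))
                for i in range(4))

# Value near the optimum reported in Table 5:
print(hartman6([0.2017, 0.1467, 0.4767, 0.2753, 0.3117, 0.6573]))
```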
Test 3: Six-Hump camelback.
The Six-Hump camelback function has six local optima, two of which are global:

min f(x) = 4x_1^2 − 2.1x_1^4 + x_1^6/3 + x_1x_2 − 4x_2^2 + 4x_2^4.

The global solutions in [3] were
f(−0.08984, 0.71266) = f(0.08984, −0.71266) = −1.0316285
with the convergence history shown in Fig. 4.
The ICA reached the best result with the same parameters as in Table 1 quicker than the PSO in [3], as follows:
f(x_1, x_2) = f(0.089842012773979, −0.712656402251958) = −1.031628453489878
and the convergence history is shown in Fig. 5 with the same 50 iterations.
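The standard six-hump camelback expression is easy to verify at the two reported global minimizers; a sketch:

```python
# Six-hump camelback: f(x1, x2) = 4x1^2 - 2.1x1^4 + x1^6/3
#                                + x1*x2 - 4x2^2 + 4x2^4,
# with two global minima of about -1.0316285 at (-0.08984, 0.71266)
# and (0.08984, -0.71266).
def camelback(x1, x2):
    return (4.0 * x1**2 - 2.1 * x1**4 + x1**6 / 3.0
            + x1 * x2 - 4.0 * x2**2 + 4.0 * x2**4)

print(camelback(-0.08984, 0.71266))   # ~ -1.03163
print(camelback(0.08984, -0.71266))   # same value by symmetry
```

The function is invariant under (x1, x2) → (−x1, −x2), which is why the two global minima come as a symmetric pair.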
Table 4
Results of Mo et al.
Source: From [2].
Ordinal number Optimal solution (x1, x2, x3, x4, x5, x6) Optimal value Iterations Mean iterations of 10 runs
1 (0.2031, 0.1479, 0.4767, 0.2753, 0.3116, 0.6573) 3.3220 79
2 (0.2030, 0.1469, 0.4758, 0.2756, 0.3120, 0.6572) 3.3220 74
3 (0.2019, 0.1455, 0.4766, 0.2754, 0.3112, 0.6573) 3.3220 227
4 (0.2022, 0.1475, 0.4772, 0.2752, 0.3115, 0.6568) 3.3220 86
5 (0.2018, 0.1468, 0.4774, 0.2755, 0.3122, 0.6582) 3.3220 221
6 (0.2031, 0.1479, 0.4767,0.2755, 0.3116, 0.6573) 3.3220 79
7 (0.2030, 0.1469, 0.4758, 0.2756, 0.3120, 0.6572) 3.3220 74
8 (0.2019, 0.5455, 0.4766, 0.2754, 0.3112, 0.6573) 3.3220 227
9 (0.2022, 0.1475, 0.4772, 0.2752, 0.3115, 0.6568) 3.3220 86
10 (0.2018, 0.1468, 0.4774, 0.2755, 0.3122, 0.6582) 3.3220 220
Table 5
Results of ICA.
Ordinal number Optimal solution (x1, x2, x3, x4, x5, x6) Optimal value Iterations Mean iterations of 10 runs
1 (0.2023, 0.1458, 0.4753, 0.2754, 0.3118, 0.6574) 3.3220 47
2 (0.2021, 0.1475, 0.4756, 0.2760, 0.3115, 0.6574) 3.3220 88
3 (0.2012, 0.1467, 0.4785, 0.2755, 0.3119, 0.6570) 3.3220 89
4 (0.2017, 0.1467, 0.4784, 0.2752, 0.3117, 0.6573) 3.3220 97
5 (0.2014, 0.1455, 0.4771, 0.2750, 0.3112, 0.6573) 3.3220 72
6 (0.2017, 0.1472, 0.4763, 0.2746, 0.3116, 0.6572) 3.3220 96
7 (0.2030, 0.1469, 0.4774, 0.2758, 0.3115, 0.6573) 3.3220 85
8 (0.2004, 0.1470, 0.4759, 0.2748, 0.3119, 0.6575) 3.3220 73
9 (0.2026, 0.1471, 0.4754, 0.2750, 0.3117, 0.6572) 3.3220 87
10 (0.2016, 0.1468, 0.4785, 0.2756, 0.3112, 0.6574) 3.3220 99
Fig. 4. The convergence history of Six-Hump.
Source: From [3].
Test 4: This example was given in [3]
min f(x) = Σ_{i=1}^{D} [ sin(x_i) + sin(2x_i/3) ].

The minimum value of this function is −1.21598·D. The results of [3] and of ICA for D = 10 and D = 100 are compared in
Tables 6–8, respectively. The variables in both algorithms were in (3, 13), and our number of countries is 300, as in [3].
The ICA found the optimal solution for D = 10 before approximately 70 iterations and the optimal solution for D = 100
before approximately 700 iterations, much better than the PPSO in [3].
See Figs. 6–8 too.
Fig. 5. The convergence history of Six-Hump with ICA (Test 3).
Table 6
Results of Test 4.
Source: From [3].
Variables Initial iteration After 100 iterations After 200 iterations After 300 iterations After 400 iterations After 500 iterations
x14.9203 5.3737 5.3667 5.3656 5.3626 5.3623
x24.6815 5.3564 5.3601 5.3618 5.3628 5.3624
x34.9207 5.3522 5.3651 5.3658 5.3627 5.3621
x45.5048 5.3846 5.3656 5.3648 5.3636 5.3633
x56.3685 5.3597 5.3628 5.3630 5.3607 5.3627
x66.7112 5.3520 5.3669 5.3640 5.3631 5.3625
x75.6790 5.3369 5.3621 5.3626 5.3613 5.3624
x86.3557 5.3420 5.3626 5.3629 5.3623 5.3622
x911.7889 5.3705 5.3574 5.3647 5.3627 5.3627
x10 10.3531 5.3515 5.3580 5.3592 5.3622 5.3616
f(x) 7.690599 −12.158781 −12.159769 −12.159797 −12.15981 −12.15982
Table 7
The results of ICA for Test 4 with D=10.
Variables Initial iteration After 100 iterations After 200 iterations After 300 iterations After 400 iterations After 500 iterations
x17.648336 5.362271 5.362271 5.362271 5.362271 5.362271
x26.285073 5.362749 5.362749 5.362749 5.362749 5.362749
x36.441411 5.362276 5.362276 5.362276 5.362276 5.362276
x45.521101 5.362543 5.362543 5.362543 5.362543 5.362543
x56.255337 5.363662 5.363662 5.363662 5.363662 5.363662
x69.860261 5.362470 5.362470 5.362470 5.362470 5.362470
x79.630791 5.362061 5.362061 5.362061 5.362061 5.362061
x84.882258 5.362417 5.362417 5.362417 5.362417 5.362417
x95.152085 5.363256 5.363256 5.363256 5.363256 5.363256
x10 5.451308 5.361964 5.361964 5.361964 5.361964 5.361964
f(x) 7.36443399 −12.15982 −12.15982 −12.15982 −12.15982 −12.15982
Table 8
The results of Test 4 with D = 100.

Method Initial iteration After 1000 iterations After 2000 iterations After 3000 iterations After 4000 iterations After 5000 iterations After 6000 iterations
PPSO [3] 54.103342 −121.208321 −121.554754 −121.593659 −121.596941 −121.598050 −121.598204
ICA 29.786871 −121.598200 −121.598200 −121.598200 −121.598200 −121.598200 −121.598200
5. Case study
Six standard systems are selected from the literature to demonstrate the efficiency of the ICA for solving systems of
nonlinear equations.
Case 1: Geometry size of a thin-wall rectangle girder section

f1(x) = bh − (b − 2t)(h − 2t) = 165,  b = the width of the section
Fig. 6. The convergence history of Test 4 with D=10.
Source: From [3].
Fig. 7. The convergence history of ICA for Test 4 with D=10.
Fig. 8. The convergence history of ICA for Test 4 with D=100.
Table 9
Results of Case 1.
Source: From [3].
Methods b h t f1(x)f2(x)f3(x)
PPSO (present study) 43.155566052654329 10.128950202278199 12.944048457756352 165 9369 6835
PPSO (present study) 7.602995198463455 24.541982377674739 11.576715672202731 165 9369 6835
Mo et al. [2] 8.943089 23.271482 12.912774 251.2378 9369 6835
Luo et al. [1] 12.5655 22.8949 2.7898 408.6488 9369 6835
Luo et al. [1]12.5655 22.8949 2.7898 408.6488 9369 6835
Luo et al. [1] 8.943089 23.271482 12.912774 251.2378 9369 6835
Luo et al. [1]8.943089 23.271482 12.912774 251.2378 9369 6835
Luo et al. [1]2.3637 35.7564 3.0151 334.0376 9369 6835
Luo et al. [1] 2.3637 35.7564 3.0151 334.0376 9369 6835
Table 10
Comparison results of ICA for Case 1 with [1–3].
Methods b h t f1(x)f2(x)f3(x)
ICA (present study) 8.943088778747601 23.271481879207862 12.912774291361677 165 9369 6835
PPSO [3] 43.155566052654329 10.128950202278199 12.944048457756352 709.2412 9369 6835
PPSO [3]7.602995198463455 24.541982377674739 11.576715672202731 208.1851 9369 6835
Mo et al. [2] 8.943089 23.271482 12.912774 165 9369 6835
Luo et al. [1] 12.5655 22.8949 2.7898 166.7229 9369 6835
Luo et al. [1]12.5655 22.8949 2.7898 166.7229 9369 6835
Luo et al. [1] 8.943089 23.271482 12.912774 165 9369 6835
Luo et al. [1]8.943089 23.271482 12.912774 165 9369 6835
Luo et al. [1]2.3637 35.7564 3.0151 165 9369 6835
Luo et al. [1] 2.3637 35.7564 3.0151 165 9369 6835
f2(x) = bh^3/12 − (b − 2t)(h − 2t)^3/12 = 9369,  h = the height of the section
f3(x) = 2t(h − t)^2(b − t)^2/(h + b − 2t) = 6835,  t = the thickness of the section.
The results in [3] were printed incorrectly, as shown in Table 9. The best solutions obtained by the ICA method are
listed in Table 10, which compares them with the correct results reported by Mo et al. [2] and Luo et al. [1]. It is obvious
from Table 10 that the ICA method outperforms the other three methods, with the same 300 iterations, 250 countries as
the population, and the other parameters as shown in Table 1.
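The three girder equations can be checked numerically at the solution reported for ICA and Mo et al. in Table 10; the sketch below uses the standard form of this benchmark from [1–3] (the exact f3 expression is our reading of the garbled original):

```python
# Case 1 residuals at (b, h, t): area = 165, bending term = 9369,
# torsion term = 6835. All three should be ~0 at a valid solution.
def girder_residuals(b, h, t):
    f1 = b * h - (b - 2*t) * (h - 2*t)                          # = 165
    f2 = (b * h**3) / 12.0 - ((b - 2*t) * (h - 2*t)**3) / 12.0  # = 9369
    f3 = 2*t * (h - t)**2 * (b - t)**2 / (h + b - 2*t)          # = 6835
    return f1 - 165.0, f2 - 9369.0, f3 - 6835.0

# Solution reported in Table 10 (ICA and Mo et al. [2]):
res = girder_residuals(8.943089, 23.271482, 12.912774)
```

All three residuals vanish to within the rounding of the printed digits, confirming the tabulated solution.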
Case 2:
The solution in [2,3] was (4, 3, 1). The ICA method obtained the same result, but the convergence history of ICA is
better: 300 iterations with 250 countries, while [3] reached the answer with 1000 iterations and a population of 250.
See Figs. 9 and 10.
Case 3:
The solutions in [3,8] were
f(0.29051455550725, 1.08421508149135) = 4.686326815078573e−029
f(0.793700525984100, 0.793700525984100) = 1.577721810442024e−030
with 120 iterations and an unknown population size. The results of the ICA method are
f(1.084215081491351, 0.290514555507251) = 3.562200025138631e−030
f(0.793700525984100, 0.793700525984100) = 1.577721810442024e−030
f(0.290514555507251, 1.084215081491351) = 3.562200025138631e−030
with 50 iterations and 250 countries. Figs. 11 and 12 show the convergence history of Case 3.
Fig. 9. The convergence history of Case 2.
Source: From [3].
Fig. 10. The convergence history of Case 2 with ICA.
Case 4: Neurophysiology Application
−10 ≤ x_i ≤ 10,  1 ≤ i ≤ 6.
We considered the example proposed in [9,10]. The best known solution in [9] among 12 different solutions is
shown in Table 11 beside the exact solution of ICA, obtained with the same 300 countries and 200 iterations as in [9].
The convergence history of Case 4 is shown in Fig. 13.
Case 5: (Problem 2 in [11] and Test Problem 14.1.4 in [12])
0.5 sin(x_1x_2) − 0.25x_2/π − 0.5x_1 = 0
(1 − 0.25/π)(e^{2x_1} − e) + e·x_2/π − 2e·x_1 = 0
0.25 ≤ x_1 ≤ 1,  1.5 ≤ x_2 ≤ 2π.
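Case 5 corresponds to Test Problem 14.1.4 in [12]; a quick numerical sketch checks its two residuals at the known root (0.29945, 2.83693):

```python
# Case 5 (Test Problem 14.1.4 in [12]):
#   g1 = 0.5*sin(x1*x2) - 0.25*x2/pi - 0.5*x1
#   g2 = (1 - 0.25/pi)*(exp(2*x1) - e) + e*x2/pi - 2*e*x1
# Both residuals should vanish at a root of the system.
import math

def case5(x1, x2):
    e = math.e
    g1 = 0.5 * math.sin(x1 * x2) - 0.25 * x2 / math.pi - 0.5 * x1
    g2 = ((1.0 - 0.25 / math.pi) * (math.exp(2.0 * x1) - e)
          + e * x2 / math.pi - 2.0 * e * x1)
    return g1, g2

g1, g2 = case5(0.29945, 2.83693)   # both residuals ~ 0 at this known root
```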
Fig. 11. The convergence history of Case 3.
Source: From [3].
Fig. 12. The convergence history of Case 3 with ICA.
Table 11
Comparison results of Case 4.
The best results of [9] The results of ICA
Variables values Functions values Variables values Functions values
0.8078668904 0.0050092197 0.041096050919063 0
0.9560562726 0.0366973076 0.041096050919063 0
0.5850998782 0.0124852708 0.999155200456294 0
0.2219439027 0.0276342907 0.999155200456294 0
0.0620152964 0.0168784849 0.098733550533454 0
0.0057942792 0.0248569233 0.098733550533454 0
The known solution of Case 5 in [11] is
f(0.00023852, 0.00014159) = 7.693745216994211e−008.
The known solutions of Case 5 in [12] are (0.29945, 2.83693) and (0.5, 3.14159).
The results of the ICA method with 250 iterations and 250 countries, as in [11], are
f(1.305289210051797e−012, 2.284838984678572e−013) = 5.631272867601562e−024
Fig. 13. The convergence history of Case 4 with ICA.
Fig. 14. The convergence history of Case 5 with ICA.
Fig. 14 shows the convergence history of Case 5.
Case 6: (Problem 6 in [11] and Test Problem 14.1.6 in [12]).
This problem was solved by the filled function method in [11] and is a proposed test problem in [12]:

4.731 × 10^−3 x_1x_3 − 0.3578x_2x_3 − 0.1238x_1 + x_7 − 1.637 × 10^−3 x_2 − 0.9338x_4 − 0.3571 = 0
0.2238x_1x_3 + 0.7623x_2x_3 + 0.2638x_1 − x_7 − 0.07745x_2 − 0.6734x_4 − 0.6022 = 0
x_6x_8 + 0.3578x_1 + 4.731 × 10^−3 x_2 = 0
−0.7623x_1 + 0.2238x_2 + 0.3461 = 0
x_1^2 + x_2^2 − 1 = 0,  x_3^2 + x_4^2 − 1 = 0,  x_5^2 + x_6^2 − 1 = 0,  x_7^2 + x_8^2 − 1 = 0.
The known solutions of Case 6 in [11,12] and our results are shown in Table 12, obtained with 1000 iterations and
300 countries. (See Fig. 15.)
Table 12
Comparison results of Case 6.
Method xVariables values fFunctions values
The best in [11]
x10.67154465 f10.00000375
x20.74097111 f20.00001537
x30.95189459 f30.00000899
x40.30643725 f40.00001084
x50.96381470 f50.00001039
x60.26657405 f60.00000709
x70.40463693 f70.00000049
x80.91447470 f80.00000498
The best in [12]
x1 0.1644 f1 8.8531e−005
x2 0.9864 f2 3.5894e−005
x3 0.9471 f3 6.6216e−006
x4 0.3210 f4 2.1560e−005
x5 0.9982 f5 1.2320e−005
x6 0.0594 f6 3.9410e−005
x7 0.4110 f7 6.8400e−005
x8 0.9116 f8 6.4440e−005
The best of ICA
x1 0.164431665854327 f1 2.775557561562891e−016
x2 0.986388476850967 f2 1.110223024625157e−016
x3 0.718452601027603 f3 1.734723475976807e−018
x4 0.695575919707312 f4 1.665334536937735e−016
x50.997964383970433 f50
x60.063773727557003 f60
x70.527809105283546 f70
x80.849363025083964 f80
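In the full formulation of Test Problem 14.1.6 [12] the eight unknowns represent sines and cosines of four joint angles, so they satisfy x_{2k−1}^2 + x_{2k}^2 = 1 for k = 1, ..., 4. Since squares discard sign, these constraints can be checked directly against the ICA magnitudes in Table 12; a sketch:

```python
# Unit-circle constraints of Test Problem 14.1.6, checked with the eight
# ICA values from Table 12 (magnitudes as printed there).
x = [0.164431665854327, 0.986388476850967, 0.718452601027603,
     0.695575919707312, 0.997964383970433, 0.063773727557003,
     0.527809105283546, 0.849363025083964]

pair_residuals = [x[2*k]**2 + x[2*k + 1]**2 - 1.0 for k in range(4)]
```

All four residuals are at the level of the printed precision, an independent sanity check on the tabulated ICA solution.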
Fig. 15. The convergence history of Case 6 with ICA.
5.1. Discussion
There is a diverse range of mathematical methods and evolutionary algorithms for optimization problems, especially
for solving systems of nonlinear equations. In this paper, the efficiency of ICA on different examples
is compared to that of different methods, such as the Hybrid Approach with Chaos Optimization and Quasi-Newton [1], Conjugate
Direction Particle Swarm Optimization (CDPSO) [2], Proposed Particle Swarm Optimization (PPSO) [3], the Genetic Algorithm
(GA) [9], A New Filled Function Method [11] and Homotopies Exploiting Newton Polytopes [10]. In all results, ICA
outperforms the other mentioned methods with fewer iterations. For example, we reached the
exact solution of Test 1 within 400 iterations, in comparison to [2] with 1000 iterations. Table 8 shows results for a large-scale
problem on which ICA performs well.
The efficiency of the proposed method is due to manipulation of the revolution policy of ICA. We implement a
strategy similar to mutation as the revolution in ICA [13]. This significantly improves the performance of ICA. The statistical
results of the tests and cases over 30 independent runs in Table 13 show the stability and convergence of our proposed method.
We use a one-sample t-test to compare the averages of the cases (observed averages) against the expected average of zero, with
an adjustment for our five cases in the sample and the standard deviation of the average (see Table 14).
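The one-sample t statistic behind Table 14 is t = x̄ / (s / √N); a sketch with the Case 1 statistics (magnitudes as listed in Table 13, N = 30 runs):

```python
# One-sample t-test for H0: mean residual = 0, using Case 1 from Table 13.
# The resulting t ~ 1.000 matches the tabulated value, so H0 is not rejected
# at alpha = 0.05 (p ~ 0.326 with 29 degrees of freedom).
import math

N = 30
mean = 3.301176194526734e-14    # mean residual (Table 13)
std = 1.808127293357038e-13     # standard deviation (Table 13)

t = mean / (std / math.sqrt(N))
print(round(t, 3))   # ~ 1.0
```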
Table 13
Statistical results.
Problem NMean Std. deviation Std. error mean Worst Best
Test 1 30 0.0 0.0 0.0 0.0 0.0
Test 2 30 −3.305370000000002E0 5.172567660929779e−2 9.443773293709911e−3 −3.1299 −3.3220
Test 3 30 −1.031628453489877E0 4.903400625422253e−16 8.952343770095692e−17 −1.031628453489877 −1.031628453489878
Test 4 (D=10) 30 −1.2159820457168102E1 4.351616938118602e−07 7.944929195430403e−08 −1.2159821619352311E1 −1.2159820006065519E1
Test 4 (D=100) 30 −1.215982007433365E2 1.131969231964622e−6 2.066683609162738e−7 −1.215982058451825E2 −1.215982000134926E2
Case 1 30 3.301176194526734e−14 1.808127293357038e−13 3.301173684708036e−14 9.903521305104092e−013 2.252128464646329e−024
Case 2 30 8.948499236529537e−18 4.580698621477245e−17 8.363173213719686e−18 2.513443969185863e−016 0.0
Case 3 30 0.0 0.0 0.0 0.0 0.0
Case 4 30 2.970867386475955e−18 1.615503654126465e−17 2.949492643656963e−18 8.850441823038988e−017 0.0
Case 5 30 1.145605502924358e−15 6.269216037417460e−15 1.144597013855565e−15 3.433890687251408e−014 0.0
Case 6 30 5.560518602264908e−25 2.527736286739731e−24 4.614993945572325e−25 1.378113375386532e−023 1.170995498842820e−031
Table 14
One-sample t-test results.
95% Confidence interval of the difference
Problems tdf Sig. (2-tailed) Mean difference Lower Upper H0in level α=0.05
Case 1 1.000 29 0.326 3.301176194526734e−14 −3.450482079265911e−14 1.005283446831938e−13 Accepted
Case 2 1.070 29 0.293 8.948499236529537e−18 −8.156110522458490e−18 2.605310899551756e−17 Accepted
Case 4 1.007 29 0.322 2.970867386475955e−18 −3.061522397583018e−18 9.003257170534929e−18 Accepted
Case 5 1.001 29 0.325 1.145605502924358e−15 −1.195358238109387e−15 3.486569243958104e−15 Accepted
Case 6 1.205 29 0.238 5.560518602264908e−25 −3.878203813481636e−25 1.499924101801145e−24 Accepted
6. Conclusions and future works
This paper proposes a new efficient approach for solving systems of nonlinear equations. The system of nonlinear
equations was transformed into a multi-objective optimization problem. The goal was to obtain values as close to zero
as possible for each of the involved objectives. Some well-known problems were presented to demonstrate the efficiency of
the Imperialist Competitive Algorithm (ICA) in comparison with other algorithms, such as PPSO, CDPSO, GA, the Filled Function
Method and Homotopies Exploiting Newton Polytopes. This paper improved the revolution policy of ICA, as described
in Section 3; as a result, the proposed method reached more accurate solutions than the other methods. As future work,
we plan to extend ICA to boundary value problems such as the Harmonic and Biharmonic equations.
Furthermore, the normal distribution could be used instead of the uniform distribution to achieve better results. It is noteworthy
that the convergence speed could be raised by the use of chaos theory for θ [14].
Acknowledgments

The authors would like to acknowledge Mr. E. Atashpaz Gargari for the ICA package used in this work, and specially
thank Miss S. Seifollahi for her help with the statistical data.
References

[1] Y.Z. Luo, G.J. Tang, L.N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Appl.
Soft Comput. 8 (2008) 1068–1073.
[2] Y. Mo, H. Liu, Q. Wang, Conjugate direction particle swarm optimization solving systems of nonlinear equations, Comput. Math. Appl. 57 (2009)
[3] M. Jaberipour, E. Khorram, B. Karimi, Particle swarm algorithm for solving systems of nonlinear equations, Comput. Math. Appl. 62 (2011) 566–576.
[4] M. Melanie, An Introduction to Genetic Algorithms, MIT Press, Massachusetts, 1999.
[5] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in: IEEE Congress
on Evolutionary Computation, 2007, pp. 4661–4667.
[6] The Hutchinson Dictionary of World History, Helicon Publishing, Oxford, 1999.
[7] S. Nazari-Shirkouhi, H. Eivazy, R. Ghodsi, K. Rezaie, E. Atashpaz-Gargari, Solving the integrated product mix-outsourcing problem using the Imperialist
Competitive Algorithm, Expert Syst. Appl. 37 (2010) 7615–7626.
[8] Gyurhan H. Nedzhibov, A family of multi-point iterative methods for solving systems of nonlinear equations, J. Comput. Appl. Math. 222 (2008)
[9] C. Grosan, A. Abraham, A new approach for solving nonlinear equations systems, IEEE Trans. Syst. Man Cybern. A 38 (3) (2008)
[10] J. Verschelde, P. Verlinden, R. Cools, Homotopies exploiting Newton polytopes for solving sparse polynomial systems, SIAM J. Numer. Anal. 31 (3)
(1994) 915–930.
[11] C. Wang, R. Luo, K. Wu, B. Han, A new filled function method for an unconstrained nonlinear equation, J. Comput. Appl. Math. 235 (2011) 1689–1699.
[12] C.A. Floudas, P.M. Pardalos, C.S. Adjiman, W.R. Esposito, Z.H. Gumus, S.T. Harding, J.L. Klepeis, C.A. Meyer, C.A. Schweiger, Handbook of Test Problems
in Local and Global Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
[13] L. Hansheng, K. Lishan, Balance between exploration and exploitation in genetic search, Wuhan Univ. J. Nat. Sci. 4 (1999) 28–32.
[14] H. Bahrami, K. Faez, M. Abdechiri, Imperialistic competitive algorithm using chaos theory for optimization, in: 12th International Conference on
Computer Modelling and Simulation, 2010.
... The number of successes of obtaining the exact solution is considered in this system by 47%, 33% and 34% for our proposed algorithm, EFO and DMOA, respectively. The convergence history and violin plot for case 3 are shown in Fig. 8. Case study 4. Neurophysiology Application example is proposed in Abdollahi et al. (2013) and Tawhid and Ibrahim (2021). This example is utilized to test the effectiveness of our algorithm. ...
... In addition, as seen from Fig. 11, one should check the convergence curve and the violin/dot plot to see the CEFO performance. Case study 7. Geometry size of thin wall rectangle girder section (Turguta et al. 2014;Abdollahi et al. 2013;Jaberipour et al. 2011) is considered as the benchmark problem. The problem can be defined as Where b is the width of the section, h is the height of the section, and t is the thickness of the section. ...
... A = 165, I_y = 9369 and I_n = 6835, with 0 ≤ x_i ≤ 25. There are multiple solutions for this non-linear system (Abdollahi et al. 2013; Jaberipour et al. 2011). Table 18 shows the obtained optimum results for CEFO and other compared algorithms. ...
The search process in population-based metaheuristic algorithms (MAs) can be classified into two primary behaviours: diversification and intensification. In diversification behaviour, the search space is explored broadly through randomization, whereas intensification refers to searching a promising region locally. The success of MAs relies on the balance between these two behaviours. Nonetheless, it is difficult to strike the right balance due to the stochastic nature of MAs. Chaotic maps have proven to be an excellent tool for enhancing both behaviours. This work incorporates the logistic chaotic map into the recently proposed population-based MA called Electromagnetic Field Optimization (EFO). The suggested algorithm is named chaotic EFO (CEFO). An improved diversification step with chaos in EFO is presented to efficiently control the global search and convergence to the global best solution. CEFO is tested on different case studies: 40 unconstrained CEC 2014 and CEC 2019 benchmark functions, seven real-world nonlinear systems and three mechanical engineering design frameworks. All experiments are compared with other recent and improved algorithms in the literature to show the performance and effectiveness of the proposed algorithm. Two nonparametric statistical tests, the Wilcoxon rank-sum and the Friedman test, are performed on CEFO and the compared algorithms to determine the significance of the results and show the efficiency of CEFO over the other algorithms.
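The logistic map at the heart of CEFO's diversification step can be sketched in a few lines. How EFO consumes the chaotic values is not specified in the summary above, so the sketch below only shows the map itself standing in for a uniform random stream; the function name and parameters are illustrative:

```python
def logistic_map(x0, n, r=4.0):
    """Return n iterates of the logistic map x_{k+1} = r * x_k * (1 - x_k).

    For r = 4 and a generic x0 in (0, 1), the sequence is chaotic and
    stays inside [0, 1], so it can replace a uniform random-number
    stream in a metaheuristic's diversification step.
    """
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_map(0.7, 1000)
```

Unlike pseudo-random draws, the sequence is fully reproducible from `x0`, which is one reason chaotic maps are popular for tuning the exploration behaviour of such algorithms.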
... This method starts with an initial population of countries; several of the best countries among the current population are regarded as imperialists. The remaining countries are considered colonies and are divided among those imperialists [47]. After that, imperialistic competition starts between all the empires. ...
... After that, imperialistic competition starts between all the empires. The weakest empire, which cannot raise its strength and is unable to excel in the competition, is eliminated [47]. Meanwhile, all colonies gravitate towards their imperialists as a result of the rivalry among empires. ...
... All the remaining countries are colonies of that single empire. The most powerful empire provides the solution [47]. The detailed features of ICA's parameters are summarized in Table 4. ...
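The assimilation and competition steps paraphrased in the excerpts above can be sketched as follows; `beta` and `xi` are the usual ICA control parameters, and the function names are illustrative rather than taken from any reference implementation:

```python
import random

def assimilate(colony, imperialist, beta=2.0):
    """Assimilation: move a colony toward its imperialist by a random
    fraction (up to beta) of the distance along each coordinate."""
    return [c + beta * random.random() * (i - c)
            for c, i in zip(colony, imperialist)]

def empire_total_cost(imperialist_cost, colony_costs, xi=0.1):
    """Power of an empire, expressed as a total cost: the imperialist's
    cost plus a small share (xi) of the mean colony cost.  In the
    imperialistic competition, the empire with the highest total cost
    is the weakest; it loses colonies and is eventually eliminated."""
    mean_colony = sum(colony_costs) / len(colony_costs) if colony_costs else 0.0
    return imperialist_cost + xi * mean_colony
```

With `beta` greater than 1, a colony can overshoot its imperialist, which lets it explore the region on the far side of the imperialist rather than only converging onto it.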
Underground natural gas storage is a promising solution to lowering greenhouse gas emissions and attaining sustainable development goals. However, several issues prevent the application of storage projects on a global scale. An accurate estimation of the amount of natural gas delivered from each storage site can support supply and demand management. For this reason, this study proposed hybrid intelligent models integrating the least square support vector machine (LSSVM), differential evolution (DE), imperialist competitive algorithm (ICA), cultural algorithm (CA), teaching learning-based optimization (TLBO), genetic algorithm (GA), and particle swarm optimization (PSO) for approximating the deliverability of underground natural gas storage in different geological formations. We employed vast data sets of 782 reservoirs from depleted fields to train and validate the proposed intelligent models to predict underground natural gas storage deliverability in the USA. Visual and analytical assessments were used to investigate the performance of the developed intelligent systems. The predicted results showed that all of the intelligent models agreed with the recorded data. Moreover, the statistical indicators revealed that the LSSVM coupled with the TLBO model shows the highest accuracy in predicting the deliverability of natural gas storage in the depleted field among the intelligent models. Also, the optimal intelligent model accurately predicts 880 and 600 data measurements of saline aquifers and salt domes, respectively. The optimal intelligent model yields a root mean square error (RMSE) value of less than 0.022. The correlation factor (R²) is over 0.998, 0.999, and 0.906 for the depleted field, saline aquifers, and salt domes, respectively. The results highlight the importance of combining smart approaches with nature-inspired strategies in forecasting storage site deliverability.
In light of these findings, researchers are better equipped to reduce petroleum energy usage and increase community acceptability of natural gas as part of the transition to green energy.
... Example 4. The Neurophysiology Application example is studied in [1,41]. This example is utilized to test the effectiveness and robustness of our algorithm. ...
... This system is known as a robot kinematics application. This real-world example, which consists of eight nonlinear equations, has been widely employed in the literature [1,31] and is defined by ...
... Example 9. The geometry size of a thin-wall rectangle girder section [1,36,70] is considered as the benchmark problem. The problem can be defined as ...
A recently developed metaheuristic optimization algorithm, the Salp Swarm Algorithm (SSA), has manifested its capability in solving various optimization problems and many real-life applications. SSA is based on salps' swarming behaviour when finding their way and searching for food in the oceans. Nonetheless, like most metaheuristic algorithms, SSA suffers from stagnation in local optima and a low convergence rate. There is a need to enhance SSA to speed up its convergence and improve its effectiveness on complex problems. In the present study, we introduce chaos into SSA (CSSA) to increase its global search mobility for robust global optimization. Detailed studies are carried out on real-world nonlinear benchmark systems and CEC 2013 benchmark functions with a chaotic (Tent) map. Here, the algorithm utilizes a Tent map to tune the salp leaders' attractive movement around food sources. The experimental results, considering both convergence and accuracy simultaneously, demonstrate the effectiveness of CSSA on 12 nonlinear systems and 28 unconstrained CEC 2013 optimization problems. Two nonparametric statistical tests, the Friedman test and the Wilcoxon Signed-Rank Test, are conducted to show the superiority of CSSA over other state-of-the-art algorithms and the significance of our results.
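A Tent-map sequence like the one CSSA uses to perturb the salp leaders can be generated as below. The exact way the iterates enter the leader-update equation is not given in the summary, so this shows only the map itself; the default for `mu` reflects a common numerical precaution, noted in the comment:

```python
def tent_map(x0, n, mu=1.99):
    """Return n iterates of the tent map on [0, 1]:

        x_{k+1} = mu * x_k        if x_k < 0.5
                = mu * (1 - x_k)  otherwise.

    mu is kept slightly below 2 because at mu = 2.0 exactly the map
    acts as a binary shift on the mantissa, and double-precision
    orbits collapse to 0 after roughly 50 steps.
    """
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs
```

The tent map's iterates are close to uniformly distributed on the unit interval, which is why chaos-enhanced metaheuristics often prefer it over the logistic map, whose invariant density piles up near 0 and 1.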
... In general, it can be concluded that the combination of machine learning algorithms increases detection precision. Table 6 compares IPSO with the genetic algorithm (GA), artificial bee colony (ABC) [48], firefly algorithm (FA) [49], and imperialist competitive algorithm (ICA) [50]. The parameters of the algorithms were set as follows: the maximum number of iterations is 500, and the population size is 50. ...
This paper employs machine learning algorithms to detect tax evasion and analyzes tax data. With the development of commercial businesses, traditional algorithms are not appropriate for solving the tax evasion detection problem. Hence, other algorithms with acceptable speed, precision, analysis, and data decisions must be used. In the case of assets and tax assessment, integrating machine learning models with meta-heuristic algorithms increases accuracy thanks to optimal parameters. In this paper, intelligent machine learning algorithms are used for tax evasion detection. This research uses an improved particle swarm optimization (IPSO) algorithm to improve a multilayer perceptron neural network by finding optimal weights, and to improve support vector machine (SVM) classifiers with optimal parameters. The IPSO-MLP and IPSO-SVM models using the IPSO algorithm are proposed as new models for tax evasion detection. Our proposed system applies a dataset of 1500 samples, collected from the general administration of tax affairs of West Azerbaijan province of Iran, to the tax evasion detection problem. The evaluations show that the IPSO-MLP model has a higher accuracy rate than the IPSO-SVM model and logistic regression. Moreover, the IPSO-MLP model has higher accuracy than SVM, Naive Bayes, k-nearest neighbor, C5.0 decision tree, and AdaBoost. The accuracies of the IPSO-MLP and IPSO-SVM models are 93.68% and 92.24%, respectively.
... Many other metaphor-based metaheuristics have been used to solve SNEs, including invasive weed optimization [33,90], glowworm swarm optimization [161], hybrid artificial bee colony algorithm [162], imperialist competitive algorithms [163], modified firefly algorithm [164], harmony search [165,91], and a soccer league competition algorithm [166] among others. ...
This paper presents a comprehensive survey of methods which can be utilized to search for solutions to systems of nonlinear equations (SNEs). Our objectives with this survey are to synthesize pertinent literature in this field by presenting a thorough description and analysis of the known methods capable of finding one or many solutions to SNEs, and to assist interested readers seeking to identify solution techniques which are well suited for solving the various classes of SNEs which one may encounter in real world applications. To accomplish these objectives, we present a multi-part survey. In part one, we focused on root-finding approaches which can be used to search for solutions to a SNE without transforming it into an optimization problem. In part two, we introduce the various transformations which have been utilized to transform a SNE into an optimization problem, and we discuss optimization algorithms which can then be used to search for solutions. We emphasize the important characteristics of each method, and we discuss promising directions for future research. In part three, we will present a robust quantitative comparative analysis of methods capable of searching for solutions to SNEs.
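The transformation discussed in part two of the survey is commonly the least-squares merit function: a system F(x) = 0 becomes the minimization of the summed squared residuals, whose global minima with value zero are exactly the roots. A minimal sketch with a toy two-equation system (the system is illustrative, not one of the survey's benchmarks):

```python
def residual_norm(F, x):
    """Merit function f(x) = sum_i F_i(x)^2.  Any root of the system
    F(x) = 0 is a global minimizer with f(x) = 0, so a metaheuristic
    (ICA, DE, PSO, ...) can search for roots by minimizing f."""
    return sum(fi(x) ** 2 for fi in F)

# Toy system: x^2 + y^2 - 1 = 0 and x - y = 0,
# with roots at +/- (1/sqrt(2), 1/sqrt(2)).
F = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
     lambda x: x[0] - x[1]]
```

One caveat the survey's framing implies: a local minimizer of the merit function with a strictly positive value is not a root, so optimization-based solvers must distinguish "converged to zero residual" from merely "converged".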
Hydrogen is the primary carrier of renewable energy stored underground. Understanding the solubility of hydrogen in water is critical for subsurface storage. Accurately measuring the hydrogen solubility in water has implications for monitoring, control, and storage optimization. In this study, two intelligent systems, Radial Basis Function (RBF) and Least Square Support Vector Machine (LSSVM), were used to precisely predict hydrogen solubility in water. These models were optimized using metaheuristic algorithms, namely biogeography-based optimization (BBO), cultural algorithm (CA), imperialist competitive algorithm (ICA), and teaching-learning-based optimization (TLBO). Quantitative and illustrative evaluations revealed that the RBF paradigm optimized using the CA algorithm, with a root mean square error of 0.000176 and a correlation coefficient of 0.9792, is the best model for predicting hydrogen solubility in water. Also, to estimate hydrogen solubility in water, the four well-known equations of state (EoSs) of Soave-Redlich-Kwong (SRK), Peng-Robinson (PR), Redlich-Kwong (RK), and Zudkevitch-Joffe (ZJ) were utilized. The results indicated that the SRK has the best performance among EoSs. However, the intelligent models outperformed the EoSs in terms of accuracy. Among the independent factors, pressure and temperature had the greatest effects on hydrogen solubility in water, in that order. The Leverage technique confirmed that the RBF + CA model has a good degree of validity for forecasting hydrogen solubility in pure and saline water. Finally, the findings of this investigation demonstrated that the RBF + CA model can have industrial applications and accurately predicts the solubility of hydrogen in pure water and saline water under underground storage conditions (high pressure and temperature).
The arithmetic optimization algorithm is a recently proposed metaheuristic algorithm. In this paper, an improved arithmetic optimization algorithm (IAOA) based on a population control strategy is introduced to solve numerical optimization problems. By classifying the population and adaptively controlling the number of individuals in each subpopulation, the information of each individual can be used effectively, which speeds up the search for the optimal value, avoids falling into local optima, and improves the accuracy of the solution. The performance of the proposed IAOA is evaluated on six systems of nonlinear equations, ten integrations, and engineering problems. The results show that the proposed algorithm outperforms other algorithms in terms of convergence speed, convergence accuracy, stability, and robustness.
In a variety of engineering applications and numerical computations, systems of nonlinear equations (SNLEs) are among the most remarkable problems. Among successful metaheuristic algorithms, particle swarm optimization (PSO) and differential evolution (DE) have been effectively employed in different optimization areas due to their powerful search capacity and simple structure. However, in solving complex optimization problems they still have shortcomings, such as premature convergence and low search efficiency. An innovative hybrid algorithm of PSO and DE (named ihPSODE) is presented in this paper for finding the solutions of SNLEs. A novel inertia weight, acceleration factor and position-update structure are adopted in nPSO to increase population diversity, and a novel mutation approach and crossover rate are implemented in nDE to help particles escape from local optima. After evaluating the population according to the fitness function, the top half of the members are retained, the rest are discarded, and nPSO is applied, which helps sustain the exploration and exploitation competency of the algorithm. Furthermore, to achieve rapid convergence and good stability, nDE is applied to the offspring created by nPSO. The populations resulting from nPSO and nDE are combined for the next iteration. The proficiency of the presented algorithms (nPSO, nDE and ihPSODE) is examined on 23 basic unconstrained benchmark functions and 19 scalable high-dimensional continuous functions (200 and 500 dimensions), and then on 7 multifaceted SNLEs. The simulation and comparative results indicate that the presented algorithms offer significant and reasonable performance.
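The DE half of such a hybrid is easiest to see in the classic DE/rand/1/bin step, sketched below. ihPSODE's modified mutation and crossover-rate rules are not detailed in the summary, so this shows only the textbook operator it starts from; the function name is illustrative:

```python
import random

def de_rand_1_bin(pop, i, f=0.5, cr=0.9):
    """One DE/rand/1/bin trial vector for target pop[i]:
    mutate three distinct donors as a + f*(b - c), then binomially
    cross the mutant with the target.  The forced index jrand
    guarantees the trial inherits at least one mutant gene.
    """
    donors = [p for j, p in enumerate(pop) if j != i]
    a, b, c = random.sample(donors, 3)
    mutant = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    jrand = random.randrange(len(pop[i]))
    return [m if (random.random() < cr or j == jrand) else t
            for j, (m, t) in enumerate(zip(mutant, pop[i]))]
```

In a full DE loop the trial vector replaces `pop[i]` only if its fitness is at least as good, which is the greedy selection that gives DE its stability.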
Although the growing number of synthetic aperture radar (SAR) satellites has increased their application in flood-extent mapping, predictive models for the analysis of flood dynamics that are independent of sensor characteristics must be developed to fully extract information from SAR images for flood mitigation. This study aimed to develop hybrid machine-learning models for flood mapping in the Ahvaz region, Iran, based on SAR data. Each hybrid model consists of a support vector machine (SVM) algorithm coupled with one of the following metaheuristic optimization procedures: grey wolf optimization (GWO), differential evolution, and the imperialist competitive algorithm. Sentinel-1 acquired SAR images before and during flooding between 20 March and 26 May of 2019. The goodness-of-fit level and predictive capability of each model were scrutinized using overall accuracy, producer accuracy, and user accuracy. The SVM-GWO approach yielded the highest accuracy with overall accuracies of 96.07% and 93.39% in the training and validation steps, respectively. Furthermore, this hybrid model provided the most accurate classification of water-inundation class based on producer accuracy (96.67%) and user accuracy (95.05%). The results highlight that wetland is the last land-use/land-cover type to return to normal conditions due to the many previously dry oxbow lakes that could trap water for a long time. Furthermore, the nine most suitable sites for flood-protection structures (e.g., embankments and levees) were identified based on floodwater distribution analysis. This work describes a robust, data-parsimonious approach that will benefit flood mitigation studies seeking to identify the most suitable locations for embankments based on spatio-temporal flood dynamics.
Microarray or gene expression profiling is conducted in a single experiment with different types of cells or tissue samples to evaluate and compare the extent of gene expression. Classifying the sample in question is challenging because the microarray dataset suffers from high dimensionality, a limited number of samples, and incorrect and noisy genes. This is most important for the screening and diagnosis of breast cancer samples. Gene selection methods are formulated to identify a small number of significant genes that correlate with a highly predictive process in the classification field. To carry out this method, a gene selection algorithm, minimal Redundancy Maximal Significance (mRMR), is devised and integrated with the Gene Weight Imperialist Competitive Algorithm (GWICA), as mRMR-GWICA, to select insightful genes from the microarray profile. The approach relies on the parallel Progressive Inductive Subspace Ensemble Clustering (PPISEC) algorithm to measure the classification precision of the selected genes. The PPISEC-WICA algorithm has three key steps: an Improved Support Vector Machine (ISVM) classifier, Incremental Ensemble Member Chosen (IEMC) by GWICA to pick the centroid values, and a structured cut algorithm to perform the gene expression data clustering. Experimental findings reveal that the formulated PPISEC system performs well on breast cancer gene expression data relative to conventional clustering ensemble approaches. The design is applied to a gene expression dataset (GSE45827) collected from breast-cancer-gene-expression-cumida. The outputs of the clustering method are evaluated with metrics such as precision, recall, F-measure, accuracy, Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI). The clustering design is implemented in the MATLAB R2014a simulation environment.
The Imperialist Competitive Algorithm (ICA) that was recently introduced has shown good performance in optimization problems. This novel optimization algorithm is inspired by the socio-political process of imperialistic competition in the real world. In this paper a new Imperialist Competitive Algorithm using chaotic maps (CICA) is proposed. In the proposed algorithm, chaotic maps are used to adapt the angle of colonies' movement towards the imperialist's position, enhancing the capability of escaping from a local optimum trap. ICA easily gets stuck in a local optimum when solving high-dimensional multi-modal numerical optimization problems. To overcome this shortcoming, we incorporate four different chaotic maps into ICA to enhance its exploration capability. Some well-known unconstrained benchmark functions are used to test CICA's performance. Simulation results show this variant can improve performance significantly.
Genetic search plays an important role in Evolutionary Computation (EC). There are two important issues in the evolution process of the genetic search: exploration and exploitation. Exploration is the creation of population diversity by exploring the search space; exploitation is the reduction of the diversity by focusing on the individuals of higher fitness, or exploiting the fitness information (or knowledge) represented within the population. We theoretically analyze the impact of the genetic operators on this balance. To further explain the impact, some results of our research on ESs are shown. Finally we conclude that, to make the algorithm more efficient, it is important to strike a balance between these two factors.
This paper is concerned with the problem of finding all isolated solutions of a polynomial system. The BKK bound, defined as the mixed volume of the Newton polytopes of the polynomials in the system, is a sharp upper bound for the number of isolated solutions in $\mathbb{C}_0^n$, $\mathbb{C}_0 = \mathbb{C} \setminus \{0\}$, of a polynomial system with a sparse monomial structure. First an algorithm is described for computing the BKK bound. Following the lines of Bernshtein's proof, the algorithmic construction of the cheater's homotopy or the coefficient homotopy is obtained. The mixed homotopy methods can be combined with the random product start systems based on a generalized Bezout number. Applications illustrate the effectiveness of the new approach.
Preface. 1. Introduction. 2. Quadratic Programming Problems. 3. Quadratically Constrained Problems. 4. Univariate Polynomial Problems. 5. Bilinear Problems. 6. Biconvex and (D.C.) Problems. 7. Generalized Geometric Programming. 8. Twice Continuously Differentiable NLPs. 9. Bilevel Programming Problems. 10. Complementarity Problems. 11. Semidefinite Programming Problems. 12. Mixed-Integer Nonlinear Problems. 13. Combinatorial Optimization Problems. 14. Nonlinear Systems of Equations. 15. Dynamic Optimization Problems.
This paper proposes an algorithm for optimization inspired by imperialistic competition. Like other evolutionary algorithms, the proposed algorithm starts with an initial population. Population individuals, called countries, are of two types, colonies and imperialists, which together form empires. Imperialistic competition among these empires forms the basis of the proposed evolutionary algorithm. During this competition, weak empires collapse and powerful ones take possession of their colonies. Imperialistic competition hopefully converges to a state in which there exists only one empire and its colonies are in the same position and have the same cost as the imperialist. Applying the proposed algorithm to several benchmark cost functions shows its ability to deal with different types of optimization problems.
The integrated product mix-outsourcing optimization is a major problem in manufacturing enterprises. Generally, heuristic or meta-heuristic approaches are used to optimize such problems. Heuristic approaches for these problems include the Theory of Constraints (TOC) and Standard Accounting. Heuristic approaches are sometimes inefficient, especially on large problems; in these cases meta-heuristic algorithms have been applied extensively. In this paper a novel meta-heuristic algorithm, the "Imperialist Competitive Algorithm" (ICA), is applied to solve the integrated product mix-outsourcing optimization problem. The results obtained from ICA are also compared with the results of the TOC and Standard Accounting approaches.
We extend to the n-dimensional case a known multi-point family of iterative methods for solving nonlinear equations. This family includes as particular cases some well-known and also some new methods. The main advantage of these methods is that they have order three or four and do not require the evaluation of any second- or higher-order Fréchet derivatives. A local convergence analysis and numerical examples are provided.
Solving systems of nonlinear equations is one of the most difficult numerical computation problems. The convergence of classical solvers such as Newton-type methods is highly sensitive to the initial guess of the solution, yet it is very difficult to select good initial solutions for most systems of nonlinear equations. By combining the global search capability of chaos optimization with the high local convergence rate of the quasi-Newton method, a hybrid approach for solving systems of nonlinear equations is proposed. Three systems of nonlinear equations, including the "Combustion of Propane" problem, are used to test the proposed approach. The results show that the hybrid approach has a high success rate and a quick convergence rate. Besides, the hybrid approach guarantees the location of a solution with physical meaning, whereas the quasi-Newton method alone cannot achieve this.
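A minimal version of that hybrid idea, a chaotic scan of the search box to pick a good seed followed by a plain Newton refinement, can be sketched for a toy 2-by-2 system. The system, box and iteration counts here are illustrative and are not the "Combustion of Propane" benchmark, and plain Newton stands in for the paper's quasi-Newton step:

```python
def F(x, y):
    # toy system: x^2 + y^2 = 4 and x*y = 1
    return (x * x + y * y - 4.0, x * y - 1.0)

def J(x, y):
    # analytic Jacobian of F, row-major: [[2x, 2y], [y, x]]
    return (2.0 * x, 2.0 * y, y, x)

def chaos_seed(F, lo, hi, n=300, u=0.7):
    """Global phase: scan the box [lo, hi]^2 with pairs of logistic-map
    iterates and return the sample with the smallest residual norm."""
    best, best_r = (lo, lo), float("inf")
    for _ in range(n):
        u = 4.0 * u * (1.0 - u)
        x = lo + (hi - lo) * u
        u = 4.0 * u * (1.0 - u)
        y = lo + (hi - lo) * u
        r = sum(f * f for f in F(x, y))
        if r < best_r:
            best_r, best = r, (x, y)
    return best

def newton2(F, J, x, y, iters=30):
    """Local phase: Newton iteration for a 2x2 system, solving the
    linear step with the explicit 2x2 inverse."""
    for _ in range(iters):
        f1, f2 = F(x, y)
        j11, j12, j21, j22 = J(x, y)
        det = j11 * j22 - j12 * j21
        if abs(det) < 1e-14:   # near-singular Jacobian: stop refining
            break
        x -= (j22 * f1 - j12 * f2) / det
        y -= (j11 * f2 - j21 * f1) / det
    return x, y
```

Chained together as `newton2(F, J, *chaos_seed(F, 0.1, 3.0))`, the global scan supplies the initial guess that Newton-type methods are so sensitive to, which is exactly the division of labour the hybrid approach above exploits.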