
Computers and Mathematics with Applications 65 (2013) 1894–1908


Imperialist competitive algorithm for solving systems of

nonlinear equations

Mahdi Abdollahi a,∗, Ayaz Isazadeh b, Davoud Abdollahi c

aUniversity of Tabriz, Aras International Campus, Department of Computer Sciences, P. O. Box 51666-16471, Islamic Republic of Iran

bUniversity of Tabriz, Department of Computer Sciences, Islamic Republic of Iran

cUniversity College of Daneshvaran, Tabriz, Islamic Republic of Iran

article info

Article history:

Received 3 September 2012

Received in revised form 29 January 2013

Accepted 6 April 2013

Keywords:

ICA

Nonlinear equations

Root solvers

Evolutionary multi-objective optimization

Meta-heuristics

abstract

Solving systems of nonlinear equations is a relatively complicated problem that arises in a diverse range of sciences. A number of different approaches have been proposed. In this paper, we employ the imperialist competitive algorithm (ICA) for solving systems of nonlinear equations. Some well-known problems are presented to demonstrate the efficiency of this robust optimization method in comparison with other known methods.

©2013 Elsevier Ltd. All rights reserved.

1. Introduction

Systems of nonlinear equations arise in a diverse range of sciences such as economics, engineering, chemistry, mechanics, medicine and robotics. The problem is nondeterministic polynomial-time hard when the equations in the system do not exhibit nice linear or polynomial properties. Nevertheless, a number of different approaches have been proposed: Luo et al. [1] used a combination of chaos search and Newton-type methods, Mo et al. [2] used a combination with the conjugate direction method (CD), and Jaberipour et al. [3] used a particle swarm algorithm, but there still exist some obstacles in solving systems of nonlinear equations. The most widely used algorithms are Newton-type methods, though their convergence and effective performance can be highly sensitive to the initial guess of the solution supplied to the methods, and such algorithms fail with an improper initial guess. For this reason, it is necessary to find an efficient algorithm for solving systems of nonlinear equations. Let a system of nonlinear equations have the form

f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...
fn(x1, x2, ..., xn) = 0.    (1)

In order to transform (1) into an optimization problem, we use the auxiliary function

min f(x) = Σ_{i=1}^{n} f_i^2(x),    x = (x1, x2, ..., xn).    (2)
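The transformation from (1) to (2) can be sketched in a few lines of Python (illustrative helper names, not from the paper): the scalar objective is zero exactly at the roots of the system and positive everywhere else.

```python
# Transform a system f_i(x) = 0, i = 1..n, into the scalar
# objective of Eq. (2): f(x) = sum_i f_i(x)^2. Its global minimum
# value 0 is attained exactly at the roots of the system.

def make_objective(equations):
    """equations: list of callables, each mapping x (a tuple) to its residual."""
    def objective(x):
        return sum(f(x) ** 2 for f in equations)
    return objective

# Example: x1 + x2 - 3 = 0 and x1 * x2 - 2 = 0 have the root (1, 2).
system = [lambda x: x[0] + x[1] - 3,
          lambda x: x[0] * x[1] - 2]
f = make_objective(system)
```

Evaluating `f((1, 2))` returns 0, while any non-root gives a strictly positive value, so any minimizer driving f to zero has found a root of (1).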

∗Corresponding author. Tel.: +98 914 116 2612; fax: +98 411 669 6012.

E-mail addresses: abdollahi_mm@yahoo.com,m.abdollahi89@ms.tabrizu.ac.ir (M. Abdollahi), isazadeh@tabrizu.ac.ir (A. Isazadeh),

abdollahi_d@daneshvaran.ac.ir (D. Abdollahi).


http://dx.doi.org/10.1016/j.camwa.2013.04.018


The equations system is reduced to the same form in the approach used in [3]. In Section 2, we describe the imperialist

competitive algorithm (ICA). In Section 3, some well-known systems are presented to demonstrate the effectiveness and

robustness of the proposed ICA. Then, in Section 4, we study some numerical tests. At the end, the conclusion is given in

Section 5.

2. Imperialist competitive algorithm

In this paper, we employ the imperialist competitive algorithm (ICA) to solve systems of nonlinear equations. Recently, a number of methods have been proposed for solving systems of nonlinear equations, such as genetic algorithms [4] and the particle swarm algorithm [3]. ICA is a new evolutionary optimization algorithm inspired by imperialistic competition [5]. It is worth mentioning that ICA is a robust method based on imperialism, which is the policy of extending the power and rule of a government beyond its own borders [6]. In this algorithm, we start with an initial population of countries. Some of the best countries among the population are selected to be the imperialists, and the rest of the population is divided among these imperialists as colonies. Then, the imperialistic competition begins among all the empires. The weakest empire, which cannot increase its power and does not succeed in this competition, is eliminated from the competition. Meanwhile, all colonies move toward their relevant imperialists. Finally, the collapse mechanism will hopefully cause all the countries to converge to a state in which there exists just one empire in the world (the domain of the problem), and all the other countries are colonies of that empire. The remaining robust empire is our solution.

2.1. Generating initial empires

Finding an optimal solution is the goal of optimization. We generate our countries, which are randomized solutions, as the population [5]. In an N-dimensional problem, a country is a 1 × N array defined as follows:

country = (x1, x2, ..., xN),    xi ∈ R, 1 ≤ i ≤ N.    (3)

We generate Npop of them. The cost of each country is the value of f(x) at the variables (x1, x2, ..., xN). Then

cost = f(country) = f(x1, x2, ..., xN).    (4)

We select the Nimp most powerful countries to form the empires. The remaining Ncol countries of the population will be the colonies. As a result, we have two types of countries: imperialists and colonies. Now, we divide the Ncol colonies among the Nimp imperialists. We define the normalized cost of an imperialist by

Cn = cn − max_i {ci}    (5)

where cn is the cost of the nth imperialist and Cn is its normalized cost.

The normalized power of each imperialist is defined by

pn = Cn / Σ_{i=1}^{Nimp} Ci.    (6)

So, the initial number of colonies of an empire will be

No.Cn = round(pn · Ncol)    (7)

where No.Cn is the initial number of colonies of the nth empire and Ncol is the number of all colonies. To divide the colonies among the imperialists, we randomly choose No.Cn of the colonies and give them to the nth empire.
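Eqs. (5)–(7) can be sketched as follows. This is a minimal illustration with an assumed function name: for minimization, a lower cost yields a more negative normalized cost and hence a larger share of colonies; since the costs are assumed not all equal, the denominator in Eq. (6) is nonzero, and any rounding leftovers would in practice be assigned to one of the empires.

```python
def initial_colony_counts(imperialist_costs, n_col):
    """Eqs. (5)-(7): normalized costs, normalized powers, colony counts."""
    c_max = max(imperialist_costs)
    C = [c - c_max for c in imperialist_costs]   # Eq. (5): all values <= 0
    total = sum(C)                               # negative (costs not all equal)
    p = [Cn / total for Cn in C]                 # Eq. (6): fractions summing to 1
    return [round(pn * n_col) for pn in p]       # Eq. (7)
```

For three imperialists with costs (1, 2, 5) and 20 colonies, the strongest imperialist receives the largest share (11 colonies here) and the weakest receives none.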

2.2. Moving the colonies of an empire toward the imperialist

Each colony moves toward its imperialist by x units along the vector from the colony to the imperialist, where x is a random variable with uniform distribution:

x ∼ U(0, β × d),    β > 1    (8)

where d is the distance between the colony and the imperialist. A β greater than 1 causes the colony to get closer to the imperialist. We have set β = 2 for all of our problems. (See Figs. 1 and 2.) To get different points around the imperialist, we add a random amount of deviation θ to the direction of movement; θ is equal to 0.5 in this paper.
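The assimilation move of Eq. (8) can be sketched as below (an illustrative function name; the θ-deviation of the direction is omitted for brevity):

```python
import math
import random

def move_colony(colony, imperialist, beta=2.0):
    """Eq. (8): move the colony x ~ U(0, beta * d) units toward its
    imperialist, where d is the distance between them."""
    d = math.dist(colony, imperialist)
    if d == 0:
        return tuple(colony)          # colony already sits on the imperialist
    x = random.uniform(0, beta * d)   # Eq. (8)
    # unit vector pointing from the colony toward the imperialist
    u = [(i - c) / d for c, i in zip(colony, imperialist)]
    return tuple(c + x * ui for c, ui in zip(colony, u))
```

With β = 2 the colony can overshoot the imperialist by up to the original distance, which lets the population probe both sides of the imperialist's position.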

1896 M. Abdollahi et al. / Computers and Mathematics with Applications 65 (2013) 1894–1908

Fig. 1. Moving colonies toward their relevant imperialist.

Source: From [5].

Fig. 2. Moving colonies toward their relevant imperialist in a randomly deviated direction.

Source: From [5].

2.3. Revolution

In each iteration, a number of colonies in an empire are replaced with the same number of newly generated countries. We do this by generating some new countries and randomly replacing some colonies of that empire with them. This action is called revolution, and it plays a sensitive role in this paper. The number of colonies of an empire that are replaced with newly generated countries is

N.R.C = round{RevolutionRate × No.(the colonies of empire_n)}    (9)

where N.R.C is the number of revolutionary colonies. This improves the global convergence of the ICA and prevents it from sticking to a local minimum [7].
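Eq. (9) and the replacement step can be sketched as follows (`new_country` is an illustrative callable producing one fresh random country, not a name from the paper):

```python
import random

def revolve(colonies, revolution_rate, new_country):
    """Eq. (9): replace N.R.C = round(rate * #colonies) randomly chosen
    colonies of an empire with freshly generated countries."""
    n_rc = round(revolution_rate * len(colonies))
    colonies = list(colonies)
    for idx in random.sample(range(len(colonies)), n_rc):
        colonies[idx] = new_country()
    return colonies
```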

2.4. Exchanging positions of the imperialist and a colony

While moving, a colony may reach a position with a lower cost than that of its imperialist. In that case, the imperialist moves to the position of that colony and vice versa.

2.5. Total power of an empire

The total power of an empire depends on the power of the imperialist country together with that of all its colonies, as follows:

T.Cn = cost(imperialist_n) + ξ · mean(cost(colonies of empire_n))    (10)

where ξ is a positive coefficient. We have used the value 0.02 in all of our problems.

2.6. Imperialistic competition

All empires compete with each other to take possession of the colonies of other empires and control them. As a result, the power of the weaker empires gradually decreases while the power of the more powerful ones increases. To this end, we find the possession probability of each empire based on its total power. The normalized total cost is

N.T.Cn = T.Cn − max_i {T.Ci}    (11)

where T.Cn and N.T.Cn are respectively the total cost and the normalized total cost of the nth empire. Now we can calculate the possession probability of each empire by

pp_n = N.T.Cn / Σ_{i=1}^{Nimp} N.T.Ci.    (12)

M. Abdollahi et al. / Computers and Mathematics with Applications 65 (2013) 1894–1908 1897

Table 1

Used parameters in ICA for tests and cases.

Parameter        Value
Empires          10
RevolutionRate   0.02
ξ                0.02
θ                0.5
β                2

The contested colonies are divided among the empires based on the possession probabilities. The vector P is formed as

P = [pp1, pp2, pp3, ..., pp_Nimp]    (13)

and the vector R has uniformly distributed elements:

R = [r1, r2, r3, ..., r_Nimp],    r1, r2, r3, ..., r_Nimp ∼ U(0, 1).    (14)

Finally, we form the vector D by

D = P − R = [pp1 − r1, pp2 − r2, pp3 − r3, ..., pp_Nimp − r_Nimp].    (15)

The contested colonies are handed to the empire whose corresponding element of D is maximum.
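Eqs. (11)–(15) amount to a stochastic argmax over the empires' normalized total costs, which can be sketched as follows (an illustrative function name; as in Eq. (6), the empires' total costs are assumed not all equal):

```python
import random

def winning_empire(total_costs):
    """Eqs. (11)-(15): return the index of the empire whose element of
    D = P - R is maximum; that empire takes the contested colonies."""
    ntc = [tc - max(total_costs) for tc in total_costs]  # Eq. (11), all <= 0
    denom = sum(ntc)                                      # negative
    pp = [c / denom for c in ntc]                         # Eq. (12)
    r = [random.uniform(0, 1) for _ in pp]                # Eq. (14)
    d = [p_i - r_i for p_i, r_i in zip(pp, r)]            # Eq. (15)
    return d.index(max(d))
```

Subtracting the uniform vector R keeps the selection stochastic: stronger empires win more often, but weaker ones still occasionally take possession, which preserves diversity in the competition.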

2.7. The eliminated empire

When an empire loses all of its colonies, it collapses, and its imperialist becomes one of the remaining colonies.

2.8. Convergence

In the end, we have the most powerful empire without any competitor, and all colonies are under the control of this unique empire. All the colonies then have the same cost as the unique empire, which means there is no difference between the colonies and their unique empire. In this ideal world, we terminate the algorithm.

3. Proposed method

Since the RevolutionRate proposed in [7] is fixed during the whole process, in some problems, especially in systems of nonlinear equations, ICA falls into a local optimum. In this paper, to improve the efficiency of the algorithm, a behavior similar to mutation in GA is simulated. In each iteration, for each colony of an empire, a random number in (0, 1) is produced. If it is less than or equal to RevolutionRate, the position of the colony is changed randomly; otherwise, it does not change. This method raises the efficiency of ICA significantly.
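The modified revolution of this section can be sketched as a per-colony mutation (`random_country` is an illustrative generator of one new random country, not a name from the paper):

```python
import random

def mutate_colonies(colonies, revolution_rate, random_country):
    """Section 3 modification: each colony independently gets a fresh
    random position with probability RevolutionRate, mimicking the
    mutation operator of a GA."""
    return [random_country() if random.uniform(0, 1) <= revolution_rate else c
            for c in colonies]
```

Unlike Eq. (9), which replaces a fixed fraction of the colonies, this variant makes an independent random decision for every colony, so the amount of revolution fluctuates from iteration to iteration.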

4. Experiment and results

In this section, we have investigated the performance of ICA with four benchmark functions.

Test 1: 10-dimensional Rastrigin function

f(x) = Σ_{i=1}^{10} [x_i^2 − 10 cos(2πx_i) + 10],    |x_i| ≤ 5.2.

The solution is f(0, 0, ..., 0) = 0. We applied ICA with 1000 iterations; the parameters are shown in Table 1, with the same population of 300 countries as in [2].
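For reference, the benchmark of Test 1 can be written in a few lines of Python (illustrative code, not part of the original experiments):

```python
import math

def rastrigin(x):
    """The Rastrigin benchmark: global minimum f(0, ..., 0) = 0 at the
    origin, surrounded by a regular grid of local minima, which makes
    it a standard multimodality stress test."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```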

The results of Mo et al. [2] and our final optimal results are given in Tables 2 and 3, respectively. Fig. 3 shows the convergence history of the ICA. The results of ICA are better, and we reached the optimized solution before 250 iterations with the same population size used in [2].

Test 2: Hartman's function [2]

f(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij (x_j − p_ij)^2),    where 0 ≤ x_j ≤ 1, c = (1, 1.2, 3, 3.2),

p_ij =
0.1312 0.1696 0.5569 0.0124 0.8283 0.5886
0.2329 0.4135 0.8307 0.3736 0.1004 0.9991
0.2348 0.1415 0.3522 0.2883 0.3047 0.6650
0.4047 0.8828 0.8732 0.5743 0.1091 0.0381

a_ij =
10   3    17   3.5  1.7  8
0.05 10   17   0.1  8    14
3    3.5  1.7  10   17   8
17   8    0.05 10   0.1  14


Table 2

Results of Mo et al.

Source: From [2].

Variables Initial iteration After 200 iterations After 400 iterations After 600 iterations After 800 iterations After 1000 iterations

x1    0.1431   −0.0001  −0.0007   0.0001  −0.0000  −0.0000
x2    2.1983   −0.0001   0.0000   0.0001   0.0001   0.0001
x3    1.9401    0.0000   0.0000   0.0001  −0.0000   0.0001
x4   −1.7080   −0.0002  −0.0001   0.0000  −0.0000  −0.0000
x5    0.2261   −0.9950  −0.9962  −0.9948   0.0001   0.0001
x6    0.9392    0.9950   0.9941   0.9949   0.9950   0.9949
x7   −0.1129    0.9949   0.9949   0.0001  −0.0001   0.0000
x8   −0.1516    0.9950   0.9949   0.9949   0.9949  −0.0000
x9   −2.1893   −0.0001   0.0000   0.0001  −0.0000  −0.0000
x10   4.9798    0.9950   0.0000   0.0001  −0.0000  −0.0000

Table 3

Results of Test 1 with ICA.

Variables Initial iteration After 200 iterations After 400 iterations After 600 iterations After 800 iterations After 1000 iterations

x1   −2.883527   −0.1084e−007  −0.1404e−008  −0.1404e−008  −0.1404e−008  −0.1404e−008
x2   −2.111072    0.1710e−007   0.0275e−008   0.0275e−008   0.0275e−008   0.0275e−008
x3   −0.869045   −0.0048e−007  −0.0656e−008  −0.0656e−008  −0.0656e−008  −0.0656e−008
x4    1.985114    0.6947e−007   0.0855e−008   0.0855e−008   0.0855e−008   0.0855e−008
x5   −1.156667    0.0328e−007  −0.1015e−008  −0.1015e−008  −0.1015e−008  −0.1015e−008
x6    3.083374    0.0356e−007   0.0899e−008   0.0899e−008   0.0899e−008   0.0899e−008
x7    3.093877   −0.0948e−007   0.0349e−008   0.0349e−008   0.0349e−008   0.0349e−008
x8   −2.020172    0.1528e−007   0.1610e−008   0.1610e−008   0.1610e−008   0.1610e−008
x9    2.832951   −0.2304e−007  −0.0180e−008  −0.0180e−008  −0.0180e−008  −0.0180e−008
x10  −2.208695   −0.2454e−007   0.0147e−008   0.0147e−008   0.0147e−008   0.0147e−008
f(x) 83.041615    1.3287e−012   0             0             0             0

Fig. 3. The convergence history of Rastrigin function (Test 1).

where min f(x) = −3.3220. The ICA was run 10 times with 300 iterations and the same parameters as in Test 1. The results of Mo et al. [2] and ours are shown in Tables 4 and 5, respectively, with the same parameters.

Test 3: Six-Hump camelback.

The Six-Hump camelback function has six local optima, two of which are global:

min f(x) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1x2 − 4x2^2 + 4x2^4.

The global solutions in [3] were

f(0.08984,−0.71266)=f(−0.08984,0.71266)= −1.0316285

with the convergence history shown in Fig. 4.

With the same parameters as in Table 1, the ICA reached the best result quicker than the PSO in [3]:

f(x1, x2) = f(0.089842012773979, −0.712656402251958) = −1.031628453489878

and the convergence history over the same 50 iterations is shown in Fig. 5.
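The Test 3 benchmark and its reported global value can be checked directly (illustrative code, not part of the original experiments):

```python
def six_hump(x1, x2):
    """The Six-Hump camelback function. Its two global minima lie at
    +/-(0.08984, -0.71266) with value approximately -1.0316285."""
    return (4 * x1**2 - 2.1 * x1**4 + x1**6 / 3
            + x1 * x2 - 4 * x2**2 + 4 * x2**4)
```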


Table 4

Results of Mo et al.

Source: From [2].

Ordinal number    Optimal solution (x1, x2, x3, x4, x5, x6)    Optimal value    Iterations    Mean iterations of 10 runs

1 (0.2031, 0.1479, 0.4767, 0.2753, 0.3116, 0.6573) −3.3220 79

137.5

2 (0.2030, 0.1469, 0.4758, 0.2756, 0.3120, 0.6572) −3.3220 74

3 (0.2019, 0.1455, 0.4766, 0.2754, 0.3112, 0.6573) −3.3220 227

4 (0.2022, 0.1475, 0.4772, 0.2752, 0.3115, 0.6568) −3.3220 86

5 (0.2018, 0.1468, 0.4774, 0.2755, 0.3122, 0.6582) −3.3220 221

6 (0.2031, 0.1479, 0.4767,0.2755, 0.3116, 0.6573) −3.3220 79

7 (0.2030, 0.1469, 0.4758, 0.2756, 0.3120, 0.6572) −3.3220 74

8 (0.2019, 0.5455, 0.4766, 0.2754, 0.3112, 0.6573) −3.3220 227

9 (0.2022, 0.1475, 0.4772, 0.2752, 0.3115, 0.6568) −3.3220 86

10 (0.2018, 0.1468, 0.4774, 0.2755, 0.3122, 0.6582) −3.3220 220

Table 5

Results of ICA.

Ordinal number    Optimal solution (x1, x2, x3, x4, x5, x6)    Optimal value    Iterations    Mean iterations of 10 runs

1 (0.2023, 0.1458, 0.4753, 0.2754, 0.3118, 0.6574) −3.3220 47

83.3

2 (0.2021, 0.1475, 0.4756, 0.2760, 0.3115, 0.6574) −3.3220 88

3 (0.2012, 0.1467, 0.4785, 0.2755, 0.3119, 0.6570) −3.3220 89

4 (0.2017, 0.1467, 0.4784, 0.2752, 0.3117, 0.6573) −3.3220 97

5 (0.2014, 0.1455, 0.4771, 0.2750, 0.3112, 0.6573) −3.3220 72

6 (0.2017, 0.1472, 0.4763, 0.2746, 0.3116, 0.6572) −3.3220 96

7 (0.2030, 0.1469, 0.4774, 0.2758, 0.3115, 0.6573) −3.3220 85

8 (0.2004, 0.1470, 0.4759, 0.2748, 0.3119, 0.6575) −3.3220 73

9 (0.2026, 0.1471, 0.4754, 0.2750, 0.3117, 0.6572) −3.3220 87

10 (0.2016, 0.1468, 0.4785, 0.2756, 0.3112, 0.6574) −3.3220 99

Fig. 4. The convergence history of Six-Hump.

Source: From [3].

Test 4: This example was given in [3]:

min f(x) = Σ_{i=1}^{D} [sin(x_i) + sin(2x_i/3)].

The solution of this function is 1.21598D. The results of [3] and ICA for D = 10 and D = 100 are compared in Tables 6–8. The variables in both algorithms were in (3, 13), and our number of countries is 300, as in [3]. The ICA found the optimal solution for D = 10 before approximately 70 iterations and for D = 100 before approximately 700 iterations, much better than the PPSO in [3]. See also Figs. 6–8.


Fig. 5. The convergence history of Six-Hump with ICA (Test 3).

Table 6

Results of Test 4.

Source: From [3].

Variables Initial iteration After 100 iterations After 200 iterations After 300 iterations After 400 iterations After 500 iterations

x1    4.9203     5.3737      5.3667      5.3656      5.3626    5.3623
x2    4.6815     5.3564      5.3601      5.3618      5.3628    5.3624
x3    4.9207     5.3522      5.3651      5.3658      5.3627    5.3621
x4    5.5048     5.3846      5.3656      5.3648      5.3636    5.3633
x5    6.3685     5.3597      5.3628      5.3630      5.3607    5.3627
x6    6.7112     5.3520      5.3669      5.3640      5.3631    5.3625
x7    5.6790     5.3369      5.3621      5.3626      5.3613    5.3624
x8    6.3557     5.3420      5.3626      5.3629      5.3623    5.3622
x9   11.7889     5.3705      5.3574      5.3647      5.3627    5.3627
x10  10.3531     5.3515      5.3580      5.3592      5.3622    5.3616
f(x) −7.690599  −12.158781  −12.159769  −12.159797  −12.15981 −12.15982

Table 7

The results of ICA for Test 4 with D=10.

Variables Initial iteration After 100 iterations After 200 iterations After 300 iterations After 400 iterations After 500 iterations

x1    7.648336    5.362271   5.362271   5.362271   5.362271   5.362271
x2    6.285073    5.362749   5.362749   5.362749   5.362749   5.362749
x3    6.441411    5.362276   5.362276   5.362276   5.362276   5.362276
x4    5.521101    5.362543   5.362543   5.362543   5.362543   5.362543
x5    6.255337    5.363662   5.363662   5.363662   5.363662   5.363662
x6    9.860261    5.362470   5.362470   5.362470   5.362470   5.362470
x7    9.630791    5.362061   5.362061   5.362061   5.362061   5.362061
x8    4.882258    5.362417   5.362417   5.362417   5.362417   5.362417
x9    5.152085    5.363256   5.363256   5.363256   5.363256   5.363256
x10   5.451308    5.361964   5.361964   5.361964   5.361964   5.361964
f(x) −7.36443399 −12.15982  −12.15982  −12.15982  −12.15982  −12.15982

Table 8

The results of Test 4 with D=100.

f(x)      Initial iteration  After 1000 iter.  After 2000 iter.  After 3000 iter.  After 4000 iter.  After 5000 iter.  After 6000 iter.
PPSO [3]  54.103342          121.208321        121.554754        121.593659        121.596941        121.598050        121.598204
ICA       29.786871          121.598200        121.598200        121.598200        121.598200        121.598200        121.598200

5. Case study

Six standard systems are selected from the literature to demonstrate the efficiency of the ICA for solving systems of

nonlinear equations.

Case 1: Geometry size of a thin-wall rectangle girder section

f1(x) = bh − (b − 2t)(h − 2t) = 165,    b = the width of the section


Table 9

Results of Case 1.

Source: From [3].

Methods b h t f1(x)f2(x)f3(x)

PPSO (present study) 43.155566052654329 10.128950202278199 12.944048457756352 165 9369 6835

PPSO (present study) −7.602995198463455 −24.541982377674739 −11.576715672202731 165 9369 6835

Mo et al. [2] 8.943089 23.271482 12.912774 251.2378 9369 6835

Luo et al. [1] 12.5655 22.8949 2.7898 408.6488 9369 6835

Luo et al. [1]−12.5655 −22.8949 −2.7898 408.6488 9369 6835

Luo et al. [1] 8.943089 23.271482 12.912774 251.2378 9369 6835

Luo et al. [1]−8.943089 −23.271482 −12.912774 251.2378 9369 6835

Luo et al. [1]−2.3637 35.7564 3.0151 −334.0376 9369 6835

Luo et al. [1] 2.3637 −35.7564 −3.0151 −334.0376 9369 6835

Table 10

Comparison results of ICA for Case 1 with [1–3].

Methods b h t f1(x)f2(x)f3(x)

ICA (present study) 8.943088778747601 23.271481879207862 12.912774291361677 165 9369 6835

PPSO [3] 43.155566052654329 10.128950202278199 12.944048457756352 709.2412 9369 6835

PPSO [3]−7.602995198463455 −24.541982377674739 −11.576715672202731 208.1851 9369 6835

Mo et al. [2] 8.943089 23.271482 12.912774 165 9369 6835

Luo et al. [1] 12.5655 22.8949 2.7898 166.7229 9369 6835

Luo et al. [1]−12.5655 −22.8949 −2.7898 166.7229 9369 6835

Luo et al. [1] 8.943089 23.271482 12.912774 165 9369 6835

Luo et al. [1]−8.943089 −23.271482 −12.912774 165 9369 6835

Luo et al. [1]−2.3637 35.7564 3.0151 165 9369 6835

Luo et al. [1] 2.3637 −35.7564 −3.0151 165 9369 6835

f2(x) = bh^3/12 − (b − 2t)(h − 2t)^3/12 = 9369,    h = the height of the section

f3(x) = 2t(h − t)^2(b − t)^2/(h + b − 2t) = 6835,    t = the thickness of the section.

The results in [3] were printed incorrectly, as shown in Table 9. The best solutions obtained by the ICA method are listed in Table 10, which compares them with the correct results reported by Mo et al. [2] and Luo et al. [1]. It is obvious from Table 10 that the ICA results outperform the other three methods with the same 300 iterations, 250 countries as the population, and the other parameters as shown in Table 1.

Case 2:

x1^x2 + x2^x1 − 5x1x2x3 = 85
x1^3 − x2^x3 − x3^x2 = 60
x1^x3 + x3^x1 − x2 = 2
3 ≤ x1 ≤ 5,    2 ≤ x2 ≤ 4,    0.5 ≤ x3 ≤ 2.
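A quick residual check (a verification sketch, not the solver) confirms that (x1, x2, x3) = (4, 3, 1) satisfies the Case 2 system exactly:

```python
def case2_residuals(x1, x2, x3):
    """Residuals of the three Case 2 equations; all zero at a root."""
    return (x1**x2 + x2**x1 - 5 * x1 * x2 * x3 - 85,   # 64 + 81 - 60 - 85
            x1**3 - x2**x3 - x3**x2 - 60,              # 64 - 3 - 1 - 60
            x1**x3 + x3**x1 - x2 - 2)                  # 4 + 1 - 3 - 2
```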

The solution in [2,3] was (4, 3, 1). The ICA method obtained the same result, but the convergence history of ICA is better: 300 iterations with 250 countries, while [3] reached the answer with 1000 iterations and a population of 250. See Figs. 9 and 10.

Case 3:

x1^3 − 3x1x2^2 − 1 = 0
3x1^2x2 − x2^3 + 1 = 0.

The solutions in [3,8] were

f(−0.29051455550725, 1.08421508149135) = 4.686326815078573e−029
f(−0.793700525984100, −0.793700525984100) = 1.577721810442024e−030

with 120 iterations and an unknown population size. The results of the ICA method are

f(1.084215081491351, −0.290514555507251) = 3.562200025138631e−030
f(−0.793700525984100, −0.793700525984100) = 1.577721810442024e−030
f(−0.290514555507251, 1.084215081491351) = 3.562200025138631e−030

with 50 iterations and 250 countries. Figs. 11 and 12 show the convergence history of Case 3.


Fig. 9. The convergence history of Case 2.

Source: From [3].

Fig. 10. The convergence history of Case 2 with ICA.

Case 4: Neurophysiology Application

x1^2 + x3^2 = 1
x2^2 + x4^2 = 1
x5x3^3 + x6x4^3 = 0
x5x1^3 + x6x2^3 = 0
x5x1x3^2 + x6x4^2x2 = 0
x5x1^2x3 + x6x2^2x4 = 0
−10 ≤ xi ≤ 10,    1 ≤ i ≤ 6.
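The ICA solution reported in Table 11 can be verified by substitution (a checking sketch, not the solver itself); its symmetric structure (x2 = −x1, x4 = −x3, x5 = x6) makes the last four residuals vanish:

```python
def case4_residuals(x):
    """Residuals of the six neurophysiology equations of Case 4."""
    x1, x2, x3, x4, x5, x6 = x
    return (x1**2 + x3**2 - 1,
            x2**2 + x4**2 - 1,
            x5 * x3**3 + x6 * x4**3,
            x5 * x1**3 + x6 * x2**3,
            x5 * x1 * x3**2 + x6 * x4**2 * x2,
            x5 * x1**2 * x3 + x6 * x2**2 * x4)

# ICA solution from Table 11
ica_solution = (-0.041096050919063, 0.041096050919063,
                0.999155200456294, -0.999155200456294,
                0.098733550533454, 0.098733550533454)
```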

We considered the example proposed in [9,10]. The best known solution in [9], among 12 different solutions, is shown in Table 11 beside the exact solution found by ICA with the same 300 countries and 200 iterations as in [9].

The convergence history of Case 4 is shown in Fig. 13.

Case 5: (Problem 2 in [11] and Test Problem 14.1.4 in [12])

0.5 sin(x1x2) − 0.25x2/π − 0.5x1 = 0
(1 − 0.25/π)(exp(2x1) − e) + e·x2/π − 2e·x1 = 0
0.25 ≤ x1 ≤ 1,    1.5 ≤ x2 ≤ 2π.
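That x = (1/2, π) is an exact root of this system can be confirmed by substitution (a verification sketch, matching the second ICA solution reported for this case):

```python
import math

def case5_residuals(x1, x2):
    """Residuals of the two Case 5 equations; both vanish at (1/2, pi)."""
    e = math.e
    return (0.5 * math.sin(x1 * x2) - 0.25 * x2 / math.pi - 0.5 * x1,
            (1 - 0.25 / math.pi) * (math.exp(2 * x1) - e)
            + e * x2 / math.pi - 2 * e * x1)
```

At (1/2, π) the first equation reduces to 0.5 − 0.25 − 0.25 = 0 and the second to 0 + e − e = 0.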


Fig. 11. The convergence history of Case 3.

Source: From [3].

Fig. 12. The convergence history of Case 3 with ICA.

Table 11

Comparison results of Case 4.

The best results of [9] The results of ICA

Variables values Functions values Variables values Functions values

−0.8078668904 0.0050092197 −0.041096050919063 0

−0.9560562726 0.0366973076 0.041096050919063 0

0.5850998782 0.0124852708 0.999155200456294 0

−0.2219439027 0.0276342907 −0.999155200456294 0

0.0620152964 0.0168784849 0.098733550533454 0

−0.0057942792 0.0248569233 0.098733550533454 0

The known solution of Case 5 in [11] is

x = (0.50043285, 3.14186317)
f = (−0.00023852, 0.00014159) = 7.693745216994211e−008.

The known solutions of Case 5 in [12] are (0.29945, 2.83693) and (0.5, 3.14159).

The results of the ICA method with 250 iterations and 250 countries, as in [11], are

x = (0.299448692495720, 2.836927770471037)
f = (1.305289210051797e−012, 2.284838984678572e−013) = 5.631272867601562e−024


Fig. 13. The convergence history of Case 4 with ICA.

Fig. 14. The convergence history of Case 5 with ICA.

and

x = (0.500000000000000, 3.141592653589794) = (1/2, π)
f = (0, 0) = 0.

Fig. 14 shows the convergence history of Case 5.

Case 6: (Problem 6 in [11] and Test Problem 14.1.6 in [12]).

This problem was solved by the filled function method in [11] and proposed as a test problem in [12].

4.731 × 10^−3 x1x3 − 0.3578x2x3 − 0.1238x1 + x7 − 1.637 × 10^−3 x2 − 0.9338x4 − 0.3571 = 0
0.2238x1x3 + 0.7623x2x3 + 0.2638x1 − x7 − 0.07745x2 − 0.6734x4 − 0.6022 = 0
x6x8 + 0.3578x1 + 4.731 × 10^−3 x2 = 0
−0.7623x1 + 0.2238x2 + 0.3461 = 0
x1^2 + x2^2 − 1 = 0
x3^2 + x4^2 − 1 = 0
x5^2 + x6^2 − 1 = 0
x7^2 + x8^2 − 1 = 0
−1 ≤ xi ≤ 1,    i = 1, ..., 8.

The known solution of Case 6 in [11,12] and our results are shown in Table 12 with 1000 iterations and 300 countries.

(See Fig. 15.)


Table 12

Comparison results of Case 6.

Method            x     Variables values      f     Functions values

The best in [11]
                  x1     0.67154465           f1   −0.00000375
                  x2     0.74097111           f2    0.00001537
                  x3     0.95189459           f3    0.00000899
                  x4    −0.30643725           f4    0.00001084
                  x5     0.96381470           f5    0.00001039
                  x6    −0.26657405           f6    0.00000709
                  x7     0.40463693           f7    0.00000049
                  x8     0.91447470           f8   −0.00000498

The best in [12]
                  x1     0.1644               f1   −8.8531e−005
                  x2    −0.9864               f2    3.5894e−005
                  x3    −0.9471               f3    6.6216e−006
                  x4    −0.3210               f4    2.1560e−005
                  x5    −0.9982               f5    1.2320e−005
                  x6    −0.0594               f6    3.9410e−005
                  x7     0.4110               f7   −6.8400e−005
                  x8     0.9116               f8   −6.4440e−005

The best of ICA
                  x1     0.164431665854327    f1    2.775557561562891e−016
                  x2    −0.986388476850967    f2   −1.110223024625157e−016
                  x3     0.718452601027603    f3    1.734723475976807e−018
                  x4    −0.695575919707312    f4    1.665334536937735e−016
                  x5     0.997964383970433    f5    0
                  x6     0.063773727557003    f6    0
                  x7    −0.527809105283546    f7    0
                  x8    −0.849363025083964    f8    0

Fig. 15. The convergence history of Case 6 with ICA.

5.1. Discussion

There is a diverse range of mathematical methods and evolutionary algorithms for optimization problems, especially for solving systems of nonlinear equations. In this paper, the efficiency of ICA on different examples is compared with different methods such as the Hybrid Approach with Chaos Optimization and Quasi-Newton [1], Conjugate Direction Particle Swarm Optimization (CDPSO) [2], the Proposed Particle Swarm Optimization (PPSO) [3], the Genetic Algorithm (GA) [9], a New Filled Function Method [11] and Homotopies Exploiting Newton Polytopes [10]. In all results, ICA outperforms the other mentioned methods with fewer iterations. For example, we reached the exact solution of Test 1 within 400 iterations, in comparison to 1000 iterations in [2]. Table 8 shows results for a large-scale problem on which ICA performs well.

The efficiency of the proposed method is due to the manipulation of the revolution policy of ICA: we implement a strategy similar to mutation as the revolution in ICA [13]. This significantly improves the performance of ICA. The statistical results of the tests and cases over 30 independent runs in Table 13 show the stability and convergence of our proposed method. We use a one-sample t-test to compare the observed averages of the cases with the expected averages, with an adjustment for the five cases in the sample and the standard deviation of the average (see Table 14).
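The one-sample t-statistic underlying Table 14 can be sketched as follows (a generic textbook formula, not the authors' statistics package):

```python
import math

def one_sample_t(samples, mu0=0.0):
    """One-sample t-test statistic t = (mean - mu0) / (s / sqrt(n)),
    returned together with the degrees of freedom n - 1."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n), n - 1
```

The t value is then compared against the Student t distribution with n − 1 degrees of freedom (df = 29 for the 30 runs per case in Table 14) to obtain the two-tailed significance.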


Table 13

Statistical results.

Problem    N    Mean    Std. deviation    Std. error mean    Worst    Best

Test 1 30 0.0 0.0 0.0 0.0 0.0

Test 2 30 −3.305370000000002E0 5.172567660929779e−2 9.443773293709911e−3−3.1299 −3.3220

Test 3 30 −1.031628453489877E0 4.903400625422253e−16 8.952343770095692e−17 −1.031628453489877 −1.031628453489878

Test 4 (D=10) 30 1.2159820457168102E1 4.351616938118602e−07 7.944929195430403e−08 1.2159821619352311E1 1.2159820006065519E1

Test 4 (D=100) 30 1.215982007433365E2 1.131969231964622e−6 2.066683609162738e−7 1.215982058451825E2 1.215982000134926E2

Case 1 30 3.301176194526734e−14 1.808127293357038e−13 3.301173684708036e−14 9.903521305104092e−013 2.252128464646329e−024

Case 2 30 8.948499236529537e−18 4.580698621477245e−17 8.363173213719686e−18 2.513443969185863e−016 0.0

Case 3 30 0.0 0.0 0.0 0.0 0.0

Case 4 30 2.970867386475955e−18 1.615503654126465e−17 2.949492643656963e−18 8.850441823038988e−017 0.0

Case 5 30 1.145605502924358e−15 6.269216037417460e−15 1.144597013855565e−15 3.433890687251408e−014 0.0

Case 6 30 5.560518602264908e−25 2.527736286739731e−24 4.614993945572325e−25 1.378113375386532e−023 1.170995498842820e−031

Table 14

One-sample t-test results.

95% Confidence interval of the difference

Problems    t    df    Sig. (2-tailed)    Mean difference    Lower    Upper    H0 at level α = 0.05

Case 1 1.000 29 0.326 3.301176194526734e−14 −3.450482079265911e−14 1.005283446831938e−13 Accepted

Case 2 1.070 29 0.293 8.948499236529537e−18 −8.156110522458490e−18 2.605310899551756e−17 Accepted

Case 4 1.007 29 0.322 2.970867386475955e−18 −3.061522397583018e−18 9.003257170534929e−18 Accepted

Case 5 1.001 29 0.325 1.145605502924358e−15 −1.195358238109387e−15 3.486569243958104e−15 Accepted

Case 6 1.205 29 0.238 5.560518602264908e−25 −3.878203813481636e−25 1.499924101801145e−24 Accepted


6. Conclusions and future works

This paper proposes a new efficient approach for solving systems of nonlinear equations. The system of nonlinear equations was transformed into a multi-objective optimization problem, with the goal of obtaining values as close to zero as possible for each of the involved objectives. Some well-known problems were presented to demonstrate the efficiency of the Imperialist Competitive Algorithm (ICA) in comparison with other algorithms such as PPSO, CDPSO, GA, the Filled Function Method and Homotopies Exploiting Newton Polytopes. This paper improves the revolution policy of ICA, as described in Section 3; as a result, the proposed method reached more accurate solutions than the other methods. As future work, we plan to extend ICA to boundary value problems such as the harmonic and biharmonic equations. Furthermore, the normal distribution could be used instead of the uniform distribution to achieve better results. It is noteworthy that the convergence speed could be raised by the use of chaos theory for θ [14].

Acknowledgments

The authors would like to thank Mr. E. Atashpaz Gargari for the ICA package used in this work, and especially thank Ms. S. Seifollahi for helping to prepare the statistical data.

References

[1] Y.Z. Luo, G.J. Tang, L.N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Appl.

Soft Comput. 8 (2008) 1068–1073.

[2] Y. Mo, H. Liu, Q. Wang, Conjugate direction particle swarm optimization solving systems of nonlinear equations, Comput. Math. Appl. 57 (2009)

1877–1882.

[3] M. Jaberipour, E. Khorram, B. Karimi, Particle swarm algorithm for solving systems of nonlinear equations, Comput. Math. Appl. 62 (2011) 566–576.

[4] M. Melanie, An Introduction to Genetic Algorithms, MIT Press, Massachusetts, 1999.

[5] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in: IEEE Congress

on Evolutionary Computation, 2007, pp. 4661–4667.

[6] The Hutchinson Dictionary of World History, Helicon Publishing, Oxford, 1999.

[7] S. Nazari-Shirkouhi, H. Eivazy, R. Ghodsi, K. Rezaie, E. Atashpaz-Gargari, Solving the integrated product mix-outsourcing problem using the Imperialist

Competitive Algorithm, Expert Syst. Appl. 37 (2010) 7615–7626.

[8] Gyurhan H. Nedzhibov, A family of multi-point iterative methods for solving systems of nonlinear equations, J. Comput. Appl. Math. 222 (2008)

244–250.

[9] C. Grosan, A. Abraham, A new approach for solving nonlinear equations systems, IEEE Trans. Syst. Man Cybern. A 38 (3) (2008).

[10] J. Verschelde, P. Verlinden, R. Cools, Homotopies exploiting Newton polytopes for solving sparse polynomial systems, SIAM J. Numer. Anal. 31 (3)

(1994) 915–930.

[11] C. Wang, R. Luo, K. Wu, B. Han, A new filled function method for an unconstrained nonlinear equation, Comput. Appl. Math. 235 (2011) 1689–1699.

[12] C.A. Floudas, P.M. Pardalos, C.S. Adjiman, W.R. Esposito, Z.H. Gumus, S.T. Harding, J.L. Klepeis, C.A. Meyer, C.A. Schweiger, Handbook of Test Problems

in Local and Global Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.

[13] L. Hansheng, K. Lishan, Balance between exploration and exploitation in genetic search, Wuhan Univ. J. Nat. Sci. 4 (1999) 28–32.

[14] H. Bahrami, K. Faez, M. Abdechiri, Imperialistic competitive algorithm using chaos theory for optimization, in: 12th International Conference on Computer Modeling and Simulation, 2010.