

J Math Model Algor (2009) 8:425–443

DOI 10.1007/s10852-009-9117-1

A Hybrid of the Newton-GMRES and Electromagnetic Meta-Heuristic Methods for Solving Systems of Nonlinear Equations

F. Toutounian · J. Saberi-Nadjafi · S. H. Taheri

Received: 13 October 2008 / Accepted: 22 July 2009 / Published online: 24 September 2009

© Springer Science + Business Media B.V. 2009

Abstract Solving systems of nonlinear equations is perhaps one of the most difficult

problems in all numerical computation. Although numerous methods have been

developed to attack this class of numerical problems, one of the simplest and oldest

methods, Newton’s method is arguably the most commonly used. As is well known,

the convergence and performance characteristics of Newton’s method can be highly

sensitive to the initial guess of the solution supplied to the method. In this paper

a hybrid scheme is proposed, in which the Electromagnetic Meta-Heuristic method

(EM) is used to supply a good initial guess of the solution to the finite difference

version of the Newton-GMRES method (NG) for solving a system of nonlinear

equations. Numerical examples are given in order to compare the performance of

the hybrid of the EM and NG methods. Empirical results show that the proposed

method is an efficient approach for solving systems of nonlinear equations.

Keywords Systems of nonlinear equations · Electromagnetism Meta-Heuristic method · Newton-GMRES method

AMS Subject Classifications 34A34 · 58C15 · 65H10 · 90C59

F. Toutounian · J. Saberi-Nadjafi · S. H. Taheri (B )

School of Mathematical Sciences, Ferdowsi University of Mashhad,

P.O. Box. 1159-91775, Mashhad, Iran

e-mail: taheri@math.um.ac.ir

F. Toutounian

e-mail: toutouni@math.um.ac.ir

J. Saberi-Nadjafi

e-mail: saberinajafi@math.um.ac.ir

S. H. Taheri

Department of Mathematics, University of Khayyam, P.O. Box. 9189-747178, Mashhad, Iran


1 Introduction

A nonlinear system of equations is defined as

F(u) = 0,   (1)

where F = (f1, f2, ..., fn)^T is a nonlinear map from a domain in R^n that contains the solution u*, into R^n, and u ∈ R^n.

Such systems often arise in applied areas of physics, biology, engineering, geo-

physics, chemistry and industry. Numerous examples from all branches of the

sciences are given in [1–3].

Research on systems of nonlinear equations has widely expanded over the last

few decades, and reviews can be found in Broyden [4], Martinez [5], and Hribar

[6]. As is well known, Newton's method and its variations [7, 8], coupled with some direct solution technique such as Gaussian elimination, are powerful solvers for these nonlinear systems when one has a sufficiently good initial guess u0 and n is not too large. When the Jacobian is large and sparse, inexact Newton methods [9–13] or some kind of nonlinear block-iterative method [14–16] may be used.

An inexact Newton method is a two-stage iterative method with the following general form:

Algorithm 1 Inexact Newton method
I. Choose an initial approximation u0.
II. for k = 1, 2, ... until convergence do:
      find xk satisfying

        F'(uk)xk = −F(uk) − rk,   (2)

      with

        ||rk||2 ≤ ηk ||F(uk)||2;   (3)

      update uk+1 = uk + xk.

In Eq. 2, rk = −F(uk) − F'(uk)xk is the residual vector associated with xk. In Eq. 3, the sequence {ηk} controls the level of accuracy required of the approximate solution xk.

The inner iteration is an iterative method for solving the Newton equations F'(uk)xk = −F(uk) approximately, with residual rk. The relative residual stopping control ||rk||2 ≤ ηk ||F(uk)||2 guarantees the local convergence of the method under the usual assumptions for Newton's method [9].

Recently, with the development of Krylov subspace projection methods, methods of this class such as Arnoldi's method [17] and the generalized minimal residual method (GMRES) [18] have been widely used as the inner iteration of inexact Newton methods [10, 11]. The combined methods are called inexact Newton-Krylov methods or nonlinear Krylov subspace projection methods. The Krylov methods have the virtue of requiring almost no matrix storage, a distinct advantage over direct methods for solving the large Newton equations. In particular, a Krylov method for solving F'(uk)xk = −F(uk) only uses the product of the Jacobian with some fixed vector (F'(u)x), and this product can be approximated by the difference quotient

F'(uk)x ≈ (F(uk + σx) − F(uk))/σ,   (4)
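The difference quotient in Eq. 4 is easy to check numerically. The sketch below uses a small toy system of our own choosing (not one from the paper) and compares the finite difference product against the analytic Jacobian-vector product:

```python
import numpy as np

def jacvec_fd(F, u, x, sigma=1e-7):
    """Approximate the Jacobian-vector product F'(u) x by the
    forward difference of Eq. 4: (F(u + sigma*x) - F(u)) / sigma."""
    return (F(u + sigma * x) - F(u)) / sigma

# Toy system (illustrative only): F(u) = (u1^2 - u2, u1 + u2).
F = lambda u: np.array([u[0] ** 2 - u[1], u[0] + u[1]])
u = np.array([1.0, 2.0])
x = np.array([0.5, -0.5])
exact = np.array([[2.0 * u[0], -1.0], [1.0, 1.0]]) @ x  # analytic Jacobian times x
approx = jacvec_fd(F, u, x)
```

The approximation error is O(σ), so with σ = 1e−7 the two products agree to roughly single precision, which is accurate enough for the inner GMRES iteration.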


where σ is a scalar. Thus the Jacobian need not be computed explicitly. In [10], Peter N. Brown gave local convergence results for inexact Newton-Krylov methods with difference approximations of the Jacobian.

For solving the nonlinear system (1), we consider the finite difference version of the Newton-GMRES method given in [19], which can be described as follows:

Algorithm 2 Newton-GMRES method (NG)
I. Choose an initial approximate solution u0 of the nonlinear system; set k = 0; choose a tolerance ε0.
II. Solve the linear system Ax = b, where A = F'(uk) and b = −F(uk).
    (1) Initialization:
        choose an initial approximate solution x̂0;
        compute q0 = (F(uk + σ0 x̂0) − F(uk))/σ0;
        r̂0 = b − q0; β̂ = ||r̂0||2; b̂1 = r̂0/β̂; q1 = b̂1.
    (2) Arnoldi process:
        for j = 1 to m do:
          (a) qj+1 = (F(uk + σj b̂j) − F(uk))/σj; ω̂ = qj+1;
              for i = 1 to j do:
                ĥij = (b̂i, ω̂); ω̂ = ω̂ − ĥij b̂i;
              end for;
              ĥj+1,j = ||ω̂||2; b̂j+1 = ω̂/ĥj+1,j;
          (b) compute an estimate of ρj = ||b − (F(uk + σ x̂j) − F(uk))/σ||2;
              if ρj ≤ εk set m = j and go to (3);
        end for.
    (3) Update the solution x̂m:
        compute d̂m as the solution of min over d ∈ R^m of ||β̂ e1^(m+1) − H̃m d||2;
        compute x̂m = x̂0 + B̂m d̂m.
    (4) GMRES restart:
        compute an estimate of ρm = ||b − (F(uk + σ x̂m) − F(uk))/σ||2;
        if ρm > εk set x̂0 = x̂m and go to (1).
III. Compute uk+1 = uk + x̂m.
IV. If ||F(uk+1)||2 is small enough or k ≥ kmax then stop; else set k = k + 1, choose a new tolerance εk, and go to II.

The local convergence of this algorithm has been studied in [19]. The performance and convergence characteristics of this algorithm are highly dependent on the initial guess with which it begins, so it is very important to have a good starting value u0. Several techniques exist to remedy the difficulties associated with the choice of the initial guess when solving nonlinear systems of equations. Most of these techniques fall into two categories: line search methods and trust region methods. For these two categories, there are several theoretical results when they are combined with Newton-type methods that make them robust and hence attractive [20–22].
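A compact, matrix-free sketch of the Newton-GMRES iteration of Algorithm 2 follows. It is a simplification under stated assumptions: the restart logic and the εk tolerance schedule are dropped, and a dense least-squares solve stands in for the Givens-rotation update that practical GMRES codes use, so this illustrates the structure of the method rather than the paper's implementation:

```python
import numpy as np

def fd_matvec(F, u, x, sigma=1e-7):
    # Eq. 4: difference quotient in place of an explicit Jacobian product
    return (F(u + sigma * x) - F(u)) / sigma

def gmres_fd(F, u, b, m, tol=1e-10):
    """One GMRES(m) cycle for F'(u) x = b, matrix-free via fd_matvec."""
    n = b.size
    beta = np.linalg.norm(b)
    if beta == 0.0:
        return np.zeros(n)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    for j in range(m):
        w = fd_matvec(F, u, Q[:, j])
        for i in range(j + 1):            # Arnoldi orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:             # (near) happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # least-squares solve of min ||beta*e1 - H y||_2
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

def newton_gmres(F, u0, kmax=60, tol=1e-10):
    """Outer Newton loop: u <- u + x, with x from the inner GMRES cycle."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(kmax):
        fu = F(u)
        if np.linalg.norm(fu) < tol:
            break
        u = u + gmres_fd(F, u, -fu, m=min(20, fu.size))
    return u
```

On a smooth system with a nonsingular Jacobian at the root, for example F(u) = u³ − 1 componentwise, the iteration converges to the vector of ones from a nearby start; the finite difference products limit the attainable accuracy to roughly the square root of machine precision times the curvature, which is ample here.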

In this paper, we show how, by using the Electromagnetic Meta-Heuristic (EM) method [23], one can obtain sufficiently good initial guesses u0. The results of a comparative study of the hybrid of the EM algorithm and NG method (called the EM-NG method) and the trust region method based on the smooth CGS algorithm (called the QCGS algorithm) [26] show that the EM-NG method is effective and represents an efficient approach for solving nonlinear systems of equations.

This paper is organized as follows. In Section 2 we give a brief description of the Electromagnetic Meta-Heuristic method [23]. In Section 3, we present a hybrid of the Newton-GMRES method and the Electromagnetic Meta-Heuristic method. Numerical experiments are given in Section 4. Finally, we give some concluding remarks in Section 5.

2 Electromagnetic Meta-Heuristic Method

In this section we will briefly review the EM method of Birbil and Fang [23] and

discuss its main properties. Consider a special class of problems with bounded

variables in the form of

Min f(x)
s.t. x ∈ [l̃, ũ],   (5)

where [l̃, ũ] = {x ∈ R^n : l̃k ≤ xk ≤ ũk, k = 1, 2, ..., n}.

In a multi-dimensional solution space where each point represents a solution, a charge is associated with each point. This charge is related to the objective function value of the solution point. As in evolutionary search algorithms, a population, or set of solutions of size NS, is created, in which each solution point exerts attraction or repulsion on the other points; the magnitude of this force is proportional to the product of the charges and inversely proportional to the distance between the points. The charge of point i is calculated according to the relative efficiency of the objective function values in the current population, i.e.,

qi = exp( −n (f(xi) − f(xbest)) / Σ_{k=1}^{NS} (f(xk) − f(xbest)) ),   i = 1, ..., NS,   (6)

where n is the dimension of the problem and xbest represents the point with the best objective function value among all points at the current iteration. In this way, points with better objective function values possess higher charges. Note that, unlike electrical charges, no signs are attached to the charge of an individual point in Eq. 6; instead, the direction of the force between two particular points is determined by comparing their objective function values. The principle behind this algorithm is that inferior solution points discourage moves in their direction by repelling other points in the population, while attractive points facilitate moves in their direction. This can be seen as a form of local search in Euclidean space in a population-based framework.

Euclidian space in a population-based framework. The main difference of these

existing methods is that the moves are governed by forces that obey the rules

Page 5

J Math Model Algor (2009) 8:425–443429

of electromagnetism. Birbil and Fang provide a generic pseudo-code for the EM

algorithm:

Algorithm 3 Electromagnetic Meta-Heuristic method (EM)
I. Initialize( )
II. While termination criteria are not satisfied do
      Local( )
      CalcF( )
      Move( )
    end while

The first procedure, Initialize, samples NS points randomly from the feasible region and assigns them their initial function values. Each coordinate of a point is assumed to be uniformly distributed between the corresponding lower and upper bounds. Local is a neighborhood search procedure, which can be applied to one or many points for local refinement at each iteration. As mentioned in [23], the selection of these two procedures does not affect the convergence result of the EM method. The total force vector exerted on each point by all other points is calculated in the CalcF procedure, and the total force Fi exerted on point i is computed by the following equation:

Fi = Σ_{j=1, j≠i}^{NS}  { (xj − xi) qi qj / ||xj − xi||²   if f(xj) < f(xi)   (Attraction)
                        { (xi − xj) qi qj / ||xj − xi||²   if f(xi) ≤ f(xj)   (Repulsion),    i = 1, 2, ..., NS.   (7)
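Eqs. 6 and 7 can be transcribed almost literally. In the sketch below, the handling of a degenerate population (all objective values equal) and of coincident points is our own convention, not specified in [23]:

```python
import numpy as np

def em_charges(fvals, n):
    # Eq. 6: better points (smaller f) receive larger charges
    fbest = fvals.min()
    denom = np.sum(fvals - fbest)
    if denom == 0.0:          # degenerate population; our own convention
        return np.ones_like(fvals)
    return np.exp(-n * (fvals - fbest) / denom)

def em_forces(X, fvals):
    # Eq. 7: superposition of pairwise attraction/repulsion on each point
    NS, n = X.shape
    q = em_charges(fvals, n)
    forces = np.zeros_like(X)
    for i in range(NS):
        for j in range(NS):
            if i == j:
                continue
            d = X[j] - X[i]
            dist2 = d @ d
            if dist2 == 0.0:  # coincident points; our own convention
                continue
            mag = q[i] * q[j] / dist2
            # attraction toward better points, repulsion from worse ones
            forces[i] += mag * d if fvals[j] < fvals[i] else -mag * d
    return forces
```

For a two-point population in one dimension with f(x0) < f(x1), the worse point is pushed toward the better one and the better point is pushed away from the worse one, exactly the attraction/repulsion split of Eq. 7.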

As explained in [23], between two points, the point with the better objective function value attracts the other one; conversely, the point with the worse objective function value repels the other. Thus xbest, which has the minimum objective function value, attracts all other points in the population. After evaluating the total force vector Fi in the CalcF procedure, point i is moved in the direction of the force by a random step in the Move procedure.

Finally, Birbil and Fang showed that when the number of iterations is large enough, one of the points in the current population moves into the ε-neighborhood of the global optimum. More details of the EM algorithm can be found in [23].

In Section 3, we propose an efficient algorithm for solving systems of nonlinear equations in which the EM method is used to supply good initial guesses to the NG method.

3 A Hybrid Method of Newton-GMRES and EM Methods

In this section, we present a hybrid method for solving systems of nonlinear equations. The idea of the method is to transform the system of nonlinear equations (1) into an unconstrained minimization problem and, at each iteration of the EM method, to use the current best point as the initial guess for the NG Algorithm.

For solving the system of nonlinear equations (1), we consider the minimization problem (5) with the objective function f(x) = ||F(x)||2 and use the following hybrid method, which we name the EM-NG method.

Algorithm 4 EM-NG method
I. Initialize( )
II. While termination criteria are not satisfied do
      Local( )
      CalcF( )
      Move( )
III.  Apply the NG method to the nonlinear system (1) using the current best solution as an initial guess. If the computed solution is not better than the current best solution, apply the NG method to the nonlinear system (1) using the second best solution as an initial guess and set Length = Length × α with α > 1.
    end while

In the EM-NG Algorithm, the procedures CalcF and Move are the same as those of the EM Algorithm [23]. The Local procedure, a neighborhood search procedure whose selection does not affect the convergence result of the EM method, is defined as follows:

Algorithm 5 Local(Length, α)
I. Length = Length × α
II. for i = 1 to NS do
      for l = 1 to LSITER do
        y = xi
        for k = 1 to n do
          z = y(k)
          λ1 = U(0, 1)
          λ2 = U(0, 1)
          if λ1 > 0.5 then
            y(k) = y(k) + λ2(Length)
          else
            y(k) = y(k) − λ2(Length)
          end if
          if abs(y(k)) > abs(z) then
            y(k) = z
          end if
        end for
        if f(y) < f(xi) then
          xi = y
        end if
      end for
    end for
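The Local procedure translates directly into Python. In the sketch below the random number generator and its seed are illustrative choices, and the acceptance test `f(y) < f(x_i)` guarantees the objective value of a population point never increases:

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative generator and seed

def local_search(X, f, length, alpha=10.0, lsiter=2):
    """Sketch of Algorithm 5 (Local): perturb each coordinate by a random
    step, keep a coordinate only if its magnitude does not grow, and
    accept the trial point only if the objective f improves."""
    length *= alpha
    NS, n = X.shape
    for i in range(NS):
        for _ in range(lsiter):
            y = X[i].copy()
            for k in range(n):
                z = y[k]
                lam1, lam2 = rng.random(), rng.random()
                y[k] += lam2 * length if lam1 > 0.5 else -lam2 * length
                if abs(y[k]) > abs(z):   # keep the smaller-magnitude value
                    y[k] = z
            if f(y) < f(X[i]):           # accept only improving points
                X[i] = y
    return X, length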


In the Local procedure, in the neighborhood of each solution point xi we generate a point with a norm smaller than that of xi, and replace xi with this new point if it is better than xi. In this manner, an optimum solution with minimum norm is sought in the given interval. Here the parameters Length, α, and LSITER passed to this procedure represent the maximum feasible step length, the multiplier, and the number of iterations for the neighborhood search, respectively.

In step III of Algorithm 4, we apply the NG method to the nonlinear system (1) using the current best solution as an initial guess. Since the NG algorithm is locally convergent [19], we replace xbest with the solution obtained by the NG algorithm whenever it is better than xbest. In the other case, when the NG method does not converge and the solution it produces is not better than xbest, we conclude that xbest is not close enough to the exact solution and another initial guess must be chosen. In this case, we use the second best solution as an initial guess and set Length = Length × α, where the multiplier α > 1 is defined by the user (for example α = 10), in order to enlarge a very small step length and create a situation in which the Local procedure can give a substantial reduction of f(x) = ||F(x)||2. Our experiments show that, in many problems, this choice and this change prevent the norm of the residuals from oscillating and stagnating (see Example 2).

In Section 4, the numerical results show that with the EM-NG algorithm it is possible, in a given interval, to obtain the solution of nonlinear systems with the desired accuracy, at a computational cost comparable with those of the QCGS algorithm [26] and the NG method when the latter converges.
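The control flow of the hybrid can be caricatured in a few lines. The sketch below is ours, with loudly-labeled simplifications: the CalcF/Move machinery is replaced by a simple random pull of every point toward the current best, and the NG inner solver is replaced by a dense finite difference Newton iteration, so it only illustrates how steps II and III of Algorithm 4 interact, not the cost profile of the actual method:

```python
import numpy as np

rng = np.random.default_rng(1)   # illustrative seed

def fd_jacobian(F, u, delta=1e-8):
    # dense forward-difference Jacobian; stand-in for the matrix-free NG solver
    f0 = F(u)
    J = np.empty((f0.size, u.size))
    for i in range(u.size):
        e = np.zeros(u.size); e[i] = delta
        J[:, i] = (F(u + e) - f0) / delta
    return J

def newton_polish(F, u0, kmax=15, tol=1e-12):
    # local Newton iteration playing the role of the NG procedure
    u = u0.copy()
    for _ in range(kmax):
        fu = F(u)
        if np.linalg.norm(fu) < tol:
            break
        try:
            u = u + np.linalg.solve(fd_jacobian(F, u), -fu)
        except np.linalg.LinAlgError:
            break
    return u

def em_ng(F, lo, hi, NS=6, iters=15):
    obj = lambda x: np.linalg.norm(F(x))       # f(x) = ||F(x)||_2
    X = rng.uniform(lo, hi, size=(NS, lo.size))
    best = min(X, key=obj).copy()
    for _ in range(iters):
        for i in range(NS):
            # crude EM surrogate: a random step toward the best point
            X[i] += rng.random() * (best - X[i])
        cand = newton_polish(F, best)          # step III of Algorithm 4
        if obj(cand) < obj(best):
            best = cand
        else:
            # fall back on the second best point, as in Algorithm 4
            cand = newton_polish(F, sorted(X, key=obj)[1].copy())
            if obj(cand) < obj(best):
                best = cand
    return best
```

On the small system of Example 1 below, with the population sampled in [0, 1]², this toy hybrid reliably reaches a residual near machine precision, because the Newton polish converges from essentially any point the EM surrogate proposes.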

4 Computational Results

In this section, we compare the performance of the EM-NG method with those of the Newton, NG, Evolutionary Method for NSEs (called the EMO method) [24], Effati [25], and QCGS [26] methods. The algorithms were written in MATLAB and were tested on the examples given in [26] and [27–32]. All the problems were run on a PC with a 2.80 GHz Pentium IV processor and 512 MB of RAM. The algorithm used for random-number generation is an implementation of the Mersenne Twister algorithm described in [33]. In the NG Algorithm, as in [19], we used the tolerance εk = ηk ||F(uk)||2, with ηk = (0.5)^k, for stopping the GMRES method. A maximal value of m (mmax) is used; if m = mmax but ρm > εk, we restart GMRES once. A maximum number of 60 iterations of stage II (kmax = 60) was allowed. In the EM-NG Algorithm, we used Algorithm 2 (the NG method) as a procedure (in step III) with the above parameters and kmax = 15. In addition, the parameters NS = 3, 6, 12, Length = max(ũk − l̃k)/2, δ = 0.5, LSITER = 2, and α = 10 were used. A maximum number of 15 iterations was allowed in the EM-NG Algorithm. As a stopping criterion to determine whether uk solves the nonlinear problem F(u) = 0, we used ||F(uk)||2 < ε ||F(u0)||2, where ε is defined for each problem.

Example 1 We consider the nonlinear system of equations

f1(u1, u2) = e^{u1} + u1 u2 − 1 = 0,
f2(u1, u2) = sin(u1 u2) + u1 + u2 − 1 = 0,


Table 1 The results obtained for Example 1

Method            Solution                                   Function value
Newton's method   (0.026, 0.862)                             (0.0256, 0.0565)
Effati's method   (0.0096, 0.9976)                           (0.019223, 0.016776)
EMO method        (0.00138, 1.0027)                          (0.00276, 0.0000637)
NG method         (−0.000000000000195, 0.999999999999899)    (−0.389E−12, 0.490E−12)
EM-NG method      (0.000000000000000, 1.000000000000000)     (0.0, 0.0)

which was described in [24]. In order to compare our results with those of other methods, we collect in Table 1 the results presented in reference [24] (which were obtained by the methods listed in Table 1 with the initial guess u0 = (0.09, 0.09)^T), the results of the NG method with this initial guess, and the results of the EM-NG method with mmax = 2, NS = 3, and ε = 10E−10. For the latter method, the points of the population were chosen randomly in the interval [0, 1]. As we observe, the best result, which is the exact solution, is obtained by the EM-NG method.
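The exact solution reported for the EM-NG method can be checked by direct substitution, since e^0 + 0·1 − 1 = 0 and sin(0) + 0 + 1 − 1 = 0:

```python
import numpy as np

def F(u):
    # the Example 1 system
    return np.array([np.exp(u[0]) + u[0] * u[1] - 1.0,
                     np.sin(u[0] * u[1]) + u[0] + u[1] - 1.0])

residual = F(np.array([0.0, 1.0]))  # EM-NG's reported solution (0, 1)
```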

Example 2 (Generalized Rosenbrock function) This example is given in [19]:

f1(u1, u2, ..., un) = −4ζ(u2 − u1²)u1 − 2(1 − u1) = 0,
fi(u1, u2, ..., un) = 2ζ(ui − u_{i−1}²) − 4ζ(u_{i+1} − ui²)ui − 2(1 − ui) = 0,   i = 2, 3, ..., n − 1,
fn(u1, u2, ..., un) = 2ζ(un − u_{n−1}²) = 0.

The Jacobian matrix F'(u) is tridiagonal. The nonlinear system F(u) = 0 has the unique solution u* = (1, 1, ..., 1)^T. We consider a system of size n = 5000 and take ζ = 10. We have considered two intervals, [−4, 4] and [−8, 8], and three initial approximate solutions u0, denoted by rand1, rand2, and rand3. For the NG method, each component of the initial guess u0 was chosen randomly in these intervals. For the EM-NG method, we considered the initial guess u0 as the first point of the population; the other points of the population were also chosen randomly in these intervals. The results obtained with mmax = 10 and ε = 10E−8 are presented in Table 2. For each initial guess u0, the final iteration number of the EM-NG method (It_EM), the

Table 2 Results obtained for Example 2 with NG and EM-NG methods and different initial intervals

                   [l̃j, ũj] = [−4, 4]                   [l̃j, ũj] = [−8, 8]
Algorithm     It_EM  NFE   ||F(u)||2  CPU      It_EM  NFE   ||F(u)||2  CPU
u0 = rand1
  NG           –     125   1.02E−4    2.94     –      1244  2.09       28.75
  EM-NG(3)     3     126   2.02E−6    3.36     3      144   4.22E−4    3.69
  EM-NG(6)     3     151   2.42E−5    4.10     3      146   5.34E−4    4.01
  EM-NG(12)    3     225   6.99E−5    6.69     3      214   6.15E−4    6.52
u0 = rand2
  NG           –     171   4.18E−7    3.88     –      1237  2.02       28.88
  EM-NG(3)     3     141   2.71E−5    3.80     7      678   3.43E−4    15.70
  EM-NG(6)     3     156   1.00E−4    4.28     3      146   2.35E−4    3.99
  EM-NG(12)    5     603   3.82E−5    16.00    5      503   5.81E−4    14.00
u0 = rand3
  NG           –     1260  2.09       27.91    –      1242  2.01       27.93
  EM-NG(3)     3     134   3.56E−5    3.64     5      353   7.59E−5    8.71
  EM-NG(6)     3     152   6.92E−5    4.16     5      400   5.58E−4    9.98
  EM-NG(12)    5     588   5.87E−5    15.60    4      472   4.71E−4    12.65


number of function evaluations NFE, the final norm ||F(u)||2, and the CPU time needed to obtain the solution are given for different values of NS = 3, 6, 12.

In this table EM-NG(s) denotes the EM-NG algorithm with NS = s. The results show that, in all cases, the EM-NG method obtained the solution with the desired accuracy, but there are cases in which the NG method did not converge and the norm of the residual after 60 iterations is more than 2. When the two methods converge (the cases u0 = rand1 and u0 = rand2 with [l̃j, ũj] = [−4, 4]), the convergence behavior of the NG method and the EM-NG(3) method is similar with respect to the number of function evaluations NFE and CPU time. In addition, we observe that, in all cases except u0 = rand2 with [l̃j, ũj] = [−8, 8], the results of the EM-NG method with NS = 3 (EM-NG(3)) are better than those of the others. So we can conclude that, for this problem, a population of size NS = 3 is sufficient for obtaining good results.
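The residual of Example 2 is straightforward to write down. In the sketch below, the system is assumed to be the gradient of the classical generalized Rosenbrock function with parameter ζ; treat the exact signs of the middle equations as an assumption, since only the fact that u* = (1, ..., 1)^T is the unique root is needed here:

```python
import numpy as np

def rosenbrock_system(u, zeta=10.0):
    """Residual of the generalized Rosenbrock system (Example 2),
    assuming the gradient form of the classical test function."""
    n = u.size
    f = np.empty(n)
    f[0] = -4.0 * zeta * (u[1] - u[0] ** 2) * u[0] - 2.0 * (1.0 - u[0])
    for i in range(1, n - 1):
        f[i] = (2.0 * zeta * (u[i] - u[i - 1] ** 2)
                - 4.0 * zeta * (u[i + 1] - u[i] ** 2) * u[i]
                - 2.0 * (1.0 - u[i]))
    f[n - 1] = 2.0 * zeta * (u[n - 1] - u[n - 2] ** 2)
    return f
```

Every term vanishes at the vector of ones, which is the unique solution quoted in the example.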

We also plot the values of ||F(u)||2 as a function of the number of updates of uk. Figure 1 (right) shows a case in which the NG method is not able to reduce the residuals and the norm of the residuals oscillates. Figure 1 (left) shows that when the NG procedure is not able to improve the solution and the norm of the residuals oscillates, the EM-NG method furnishes another initial guess (the second best solution of the population) for the NG procedure and prevents the norm of the residuals from oscillating. Finally, the convergence cases are plotted in Fig. 2.

Example 3 (Bratu test [19]) The nonlinear problem is obtained after discretization (by 5-point finite differencing) of the following nonlinear partial differential equation over the unit square of R^2 with Dirichlet boundary conditions:

−Δu + α ux + λ e^u = f.

Fig. 1 Plots for Example 2 with the EM-NG method (left) and NG method (right) for [l̃j, ũj] = [−8, 8], u0 = rand1, and NS = 3


Fig. 2 Plots for Example 2 with the EM-NG method (left) and NG method (right) when [l̃j, ũj] = [−4, 4], u0 = rand1, and NS = 3

The size of the nonlinear system is n = nx ny, where nx + 2 and ny + 2 are the numbers of mesh points in each direction, including the mesh points on the boundary. The function f is chosen so that the solution of the discretized problem is known to be the constant vector of ones, e = (1, 1, ..., 1)^T. For this problem, it is known that for λ ≥ 0 there is always a unique solution, while this is not the case when λ < 0. As in [19], in our experiments we took nx = ny = 50 (n = 2500), α = 100, and λ = −10. We considered two intervals, [−2, 2] and [−6, 6], and each component of u0 and of the other points of the population was chosen randomly in these intervals. The results obtained with ε = 10E−11, mmax = 5, 10, 15, NS = 3, and three initial guesses rand1, rand2, rand3 are presented in Table 3. In this table EM-NG(s) denotes the EM-NG algorithm with mmax = s. The results show that the two methods converge and the solution was obtained with the desired accuracy. The convergence rates of the NG method and the EM-NG method with

Table 3 Results obtained for Example 3 with NG and EM-NG methods and different initial intervals

                   [l̃j, ũj] = [−2, 2]                   [l̃j, ũj] = [−6, 6]
Algorithm     It_EM  NFE   ||F(u)||2  CPU      It_EM  NFE   ||F(u)||2  CPU
u0 = rand1
  NG           –     251   3.50E−7    7.12     –      229   4.09E−6    6.39
  EM-NG(5)     3     267   1.11E−6    7.65     3      259   8.45E−7    7.57
  EM-NG(10)    2     259   1.67E−6    7.29     2      249   2.54E−6    7.01
  EM-NG(15)    1     259   4.98E−7    7.24     1      237   5.38E−6    6.61
u0 = rand2
  NG           –     273   1.19E−6    7.63     –      277   9.14E−6    7.74
  EM-NG(5)     3     264   9.87E−7    7.72     3      270   6.84E−6    7.66
  EM-NG(10)    2     244   3.09E−6    6.85     2      249   9.15E−6    7.01
  EM-NG(15)    1     235   3.68E−6    6.59     1      239   1.50E−5    6.70
u0 = rand3
  NG           –     251   4.71E−6    7.10     –      273   4.26E−6    7.06
  EM-NG(5)     3     268   3.74E−7    7.73     3      266   5.49E−6    7.57
  EM-NG(10)    2     248   3.04E−6    7.01     2      274   1.75E−6    7.72
  EM-NG(15)    1     242   2.59E−6    6.83     1      259   2.29E−6    7.31


different values of mmax, in terms of CPU time and the number of function evaluations, are close. The results of this example and Example 2 show that if the NG method converges, then the EM-NG method converges too, and its convergence rate is close to that of the NG method. In addition, we observe that, by increasing the parameter mmax, the number of function evaluations and the CPU time decrease a little, except in the case of u0 = rand3, [l̃j, ũj] = [−6, 6], and mmax = 10. Our experiments showed that the choice mmax = 10 furnishes good results.

4.1 A Set of Problems

In this section, we present the results of a comparative study of three methods, the NG and EM-NG methods and the trust region method based on the smooth CGS algorithm (the QCGS algorithm) [26], for large sparse systems of nonlinear equations. All test results were obtained by means of the 20 problems given in the Appendix. All problems were considered with 100 variables, except problem 19, which has 99 variables, and problem 20, which has 10 variables. The QCGS algorithm contains several parameters. As in [26], we used the values

β1 = 0.05, β2 = 0.75, γ1 = 2, γ2 = 10^6, ρ1 = 0.1, ρ2 = 0.9,
τ0 = 10^−3, ω0 = 0.4, ? = 10^3, ε = 10^−16, k = 1000, l = 20

in all numerical experiments. The elements of the Jacobian matrix are computed by the formula

Jji = (fj(x + δ ei) − fj(x))/δ,   (8)

where ei is the ith column of the unit matrix and δ = 10E−8. If the Jacobian matrix is sparse, only the nonzero elements are computed by formula (8).

In the EM-NG and NG methods, for computing the vector qj in stage II (step (2), (a)), we also used the Jacobian matrix. The parameter mmax = 10 is used for the NG and EM-NG methods. The stopping criterion ||F(uk)||2 < ε ||F(u0)||2 with ε = 10E−8 was used for the three methods. The parameters kmax = 100 and kmax = 15 were used for the NG and EM-NG methods, respectively. Finally, a maximum number of 50 iterations was allowed in the EM-NG Algorithm.

First, we applied the three methods to these problems with the initial guess x0 given for each problem in the Appendix and with an initial guess u0 whose components were chosen randomly in the interval [−2, 2]. The results obtained are presented in Table 4. The rows of this table correspond to the individual problems; the columns contain the number of function evaluations (NFE) of the EM-NG, QCGS, and NG methods. The symbol '*' indicates that the method did not converge within the allowable iterations. In the last line, Nsuc denotes the number of problems successfully solved by each method. As expected, in many cases the results of the three methods obtained with the good initial guess x0 are better, in terms of the number of function evaluations, than those obtained with the random initial guess u0. The last line shows that the EM-NG method is better, measured by the number of successfully solved problems, than the QCGS and NG methods, although in the case of convergence it does not have an advantage over the other methods.
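Formula (8) fills the Jacobian one column at a time. A dense sketch follows (the paper evaluates only the nonzero entries when the Jacobian is sparse, which this simple version does not attempt); the check against an analytic Jacobian uses a small system of our own choosing:

```python
import numpy as np

def fd_jacobian(F, x, delta=1e-8):
    """Formula (8): J[j, i] = (f_j(x + delta*e_i) - f_j(x)) / delta."""
    f0 = F(x)
    n = x.size
    J = np.empty((f0.size, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta                 # delta times the i-th unit column
        J[:, i] = (F(x + e) - f0) / delta
    return J

# check against an analytic Jacobian on a small illustrative system
F = lambda x: np.array([x[0] ** 2 + x[1], x[0] * x[1]])
x = np.array([2.0, 3.0])
J_exact = np.array([[2.0 * x[0], 1.0], [x[1], x[0]]])
J_fd = fd_jacobian(F, x)
```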


Table 4 The results of three methods for 20 test problems with initial guesses x0 and u0

         Given x0 (NFE)             u0 ∈ [−2, 2] (NFE)
Prob   EM-NG   QCGS   NG         EM-NG   QCGS   NG
P1     141     67     *          *       *      *
P2     53      314    43         *       *      *
P3     29      19     19         78      73     151
P4     39      34     29         44      58     37
P5     71      65     61         904     *      *
P6     115     178    93         140     57     473
P7     199     99     151        362     91     265
P8     123     194    265        148     298    1137
P9     74      82     64         1233    *      *
P10    35      123    25         738     *      *
P11    109     46     46         116     49     100
P12    79      89     58         *       *      *
P13    43      37     33         *       *      *
P14    59      37     43         60      67     49
P15    735     65     805        320     33     537
P16    39      25     29         *       *      *
P17    32      312    34         36      149    28
P18    35      *      25         576     *      *
P19    907     *      117        1492    *      465
P20    334     *      883        390     123    *
Nsuc   20      17     19         15      10     10

Next, we tested each problem with 100 initial guesses. Each component of each initial guess was chosen randomly in the interval [−2, 2], and NS = 3 was taken for all tests of the EM-NG method. The results are given in Table 5. For each

Table 5 The results of three methods for 20 test problems with 100 random initial guesses u0

        EM-NG             QCGS     NG
Prob    mNFE    perc      mNFE     mNFE
P1      671     0         254      826
P2      1499    1         3240     598
P3      78      100       61       85
P4      40      100       41       33
P5      852     15        4296     793
P6      80      100       57       85
P7      102     100       85       115
P8      132     100       137      161
P9      596     30        82       73
P10     287     29        3129     601
P11     93      100       46       4
P12     75      45        3065     520
P13     44      40        37       801
P14     60      100       55       55
P15     280     100       33       457
P16     656     13        4305     797
P17     33      100       82       28
P18     87      96        3179     262
P19     172     90        4314     185
P20     386     100       111      111


Table 6 EM-NG method with larger NS

        NS = 25          NS = 50          NS = 100
Prob    NFE    perc      NFE    perc      NFE    perc
P1      773    41        978    77        1388   79
P2      1473   26        1763   45        4123   80
P5      616    60        665    100       1003   100
P9      773    43        978    93        1388   95
P10     336    87        568    95        565    98
P12     157    63        251    73        460    81
P13     132    46        232    61        436    73
P16     326    36        617    100       460    100
P18     172    99        257    100       448    100
P19     318    93        505    95        885    97

method we reported the minimum number of function evaluations needed for solving
the problem among the 100 randomly generated guesses (mNFE) and the number of
successes of the method in these tests (perc). Here, a "success" means
convergence to the exact solution of the system of equations. Table 5 shows
that, for all problems, the EM-NG method is much better, in terms of the number of
successes, than the QCGS and NG methods.
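The two statistics reported per problem can be computed directly from the run records. A minimal sketch, assuming each of the 100 runs is stored as a `(converged, nfe)` pair (the names `summarize` and `runs` are ours, not from the paper):

```python
def summarize(runs):
    """Return (mNFE, perc): the minimum NFE over the successful runs and
    the number of successes; mNFE is None if every run failed."""
    successes = [nfe for converged, nfe in runs if converged]
    perc = len(successes)
    mnfe = min(successes) if successes else None
    return mnfe, perc

# Example: 3 successful runs out of 5.
runs = [(True, 120), (False, 5000), (True, 87), (True, 240), (False, 5000)]
print(summarize(runs))  # (87, 3)
```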

Finally, we have again applied the EM-NG method with larger NS (NS =
25, 50, 100) to the problems for which the number of successes of the EM-NG method
is less than 100 (Problems 1, 2, 5, 9, 10, 12, 13, 16, 18, 19). The results are given in
Table 6. As we observe, by increasing NS (the size of the population in the EM-NG
method), the number of successes of the EM-NG method increases. Table 6 shows that
with NS = 100 (which is equal to the dimension of the problems), the percentage of
successes of the EM-NG method is at least 95% for 16 of the 20 problems. From these
results, we can conclude that the EM-NG method is an efficient approach for solving
systems of nonlinear equations.

5 Conclusion

We have proposed a hybrid of the Newton-GMRES and Electromagnetism meta-
heuristic methods for solving a system of nonlinear equations. In the proposed
method, the Electromagnetic Meta-Heuristic method (EM) is used to supply
good initial guesses of the solution to the finite difference version of the Newton-
GMRES method (NG). We observed that the EM-NG method is able to obtain
the solution with the desired accuracy in a reasonable number of iterations. The
experiments showed that, in the case of convergence of the NG method, the
convergence behaviors of the EM-NG and NG methods are similar with respect
to the number of function evaluations (NFE) and the CPU time. The advantage
of the EM-NG algorithm is that when procedure NG is not able to improve the
solution and the norm of the residuals oscillates, step III of the algorithm furnishes
another initial guess (the second best solution of the population) for procedure
NG and prevents the norm of residuals from oscillation


and stagnation. The experiments show that the EM-NG method is much better,
measured in the number of successfully solved problems, than the QCGS and NG
methods. In addition, we observe that the percentage of success of the EM-NG method
increases when NS (the size of the population in the EM-NG method) increases. Conse-
quently, the EM-NG method is an efficient approach for solving systems of nonlinear
equations.
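The finite-difference NG procedure mentioned above is matrix-free: GMRES never forms the Jacobian explicitly, it only needs Jacobian-vector products, and each of these can be approximated by one extra residual evaluation. A minimal sketch of this standard forward-difference approximation (a generic illustration, not the paper's exact implementation; the name `jac_vec` and the step `eps` are ours):

```python
import numpy as np

def jac_vec(F, x, v, eps=1e-7):
    """Approximate J(x) @ v by a forward difference of the residual F."""
    return (F(x + eps * v) - F(x)) / eps

# Check on a function with a known Jacobian: F(x) = (x1^2, x1*x2),
# so J(x) @ v = (2*x1*v1, x2*v1 + x1*v2).
F = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
print(jac_vec(F, x, v))  # close to [2.0, 3.0]
```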

Acknowledgements The authors are grateful to the anonymous referees and editor Professor
J. MacGregor Smith for their comments which substantially improved the quality of this paper.

Appendix

Our test problems consist of searching for a solution to the system of nonlinear

equations

$$f_k(x) = 0, \qquad 1 \le k \le n.$$

For each problem an initial guess $\bar{x}_l$, $1 \le l \le n$, is given. We use the functions div

(integer division) and mod (remainder after integer division). These problems are

given in [26–32].

Problem 1 Countercurrent Reactor Problem 1 [27]: $\alpha = 0.5$,

$$
f_k(x) = \begin{cases}
\alpha - (1-\alpha)x_{k+2} - x_k(1+4x_{k+1}), & k = 1,\\
-(2-\alpha)x_{k+2} - x_k(1+4x_{k-1}), & k = 2,\\
\alpha x_{k-2} - (1-\alpha)x_{k+2} - x_k(1+4x_{k+1}), & \mathrm{mod}(k,2)=1,\ 2 < k < n-1,\\
\alpha x_{k-2} - (2-\alpha)x_{k+2} - x_k(1+4x_{k-1}), & \mathrm{mod}(k,2)=0,\ 2 < k < n-1,\\
\alpha x_{k-2} - x_k(1+4x_{k+1}), & k = n-1,\\
\alpha x_{k-2} - (2-\alpha) - x_k(1+4x_{k-1}), & k = n,
\end{cases}
$$

$$
\bar{x}_l = \begin{cases}
0.1, & \mathrm{mod}(l,8)=1,\\
0.2, & \mathrm{mod}(l,8)=2 \ \text{or}\ \mathrm{mod}(l,8)=0,\\
0.3, & \mathrm{mod}(l,8)=3 \ \text{or}\ \mathrm{mod}(l,8)=7,\\
0.4, & \mathrm{mod}(l,8)=4 \ \text{or}\ \mathrm{mod}(l,8)=6,\\
0.5, & \mathrm{mod}(l,8)=5.
\end{cases}
$$

Problem 2 Extended Powell Badly Scaled Function [28]:

$$
f_k(x) = \begin{cases}
10000\, x_k x_{k+1} - 1, & \mathrm{mod}(k,2)=1,\\
\exp(-x_{k-1}) + \exp(-x_k) - 1.0001, & \mathrm{mod}(k,2)=0,
\end{cases}
\qquad
\bar{x}_l = \begin{cases}
0, & \mathrm{mod}(l,2)=1,\\
1, & \mathrm{mod}(l,2)=0.
\end{cases}
$$
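As an illustration of how these test problems translate into code, the residual and initial guess of Problem 2 can be sketched as follows, with the 1-based indices k, l mapped to 0-based array slices (the helper names `powell_badly_scaled` and `initial_guess` are ours, not from the paper):

```python
import numpy as np

def powell_badly_scaled(x):
    """Residual of the Extended Powell Badly Scaled function; n must be even."""
    n = len(x)
    f = np.empty(n)
    f[0::2] = 10000.0 * x[0::2] * x[1::2] - 1.0             # mod(k, 2) = 1
    f[1::2] = np.exp(-x[0::2]) + np.exp(-x[1::2]) - 1.0001  # mod(k, 2) = 0
    return f

def initial_guess(n):
    x = np.empty(n)
    x[0::2] = 0.0  # mod(l, 2) = 1
    x[1::2] = 1.0  # mod(l, 2) = 0
    return x

# At the standard guess, the odd equations give 10000*0*1 - 1 = -1 and the
# even equations give exp(0) + exp(-1) - 1.0001 ≈ 0.3678.
print(powell_badly_scaled(initial_guess(4)))
```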

Page 15

J Math Model Algor (2009) 8:425–443439

Problem 3 Trigonometric System [29]:

$$
i = \mathrm{div}(k-1, 5), \qquad
f_k(x) = 5 - (i+1)(1 - \cos x_k) - \sin x_k - \sum_{j=5i+1}^{5i+5} \cos x_j,
\qquad \bar{x}_l = 1/n, \quad l \ge 1.
$$

Problem 4 Trigonometric-Exponential System, Trigexp 1 [29]:

$$
f_k(x) = \begin{cases}
3x_k^3 + 2x_{k+1} - 5 + \sin(x_k - x_{k+1})\sin(x_k + x_{k+1}), & k = 1,\\
3x_k^3 + 2x_{k+1} - 5 + \sin(x_k - x_{k+1})\sin(x_k + x_{k+1}) + 4x_k - x_{k-1}\exp(x_{k-1} - x_k) - 3, & 1 < k < n,\\
4x_k - x_{k-1}\exp(x_{k-1} - x_k) - 3, & k = n,
\end{cases}
\qquad \bar{x}_l = 0, \quad l \ge 1.
$$

Problem 5 Singular Broyden Problem [30]:

$$
f_k(x) = \begin{cases}
((3 - 2x_k)x_k - 2x_{k+1} + 1)^2, & k = 1,\\
((3 - 2x_k)x_k - x_{k-1} - 2x_{k+1} + 1)^2, & 1 < k < n,\\
((3 - 2x_k)x_k - x_{k-1} + 1)^2, & k = n,
\end{cases}
\qquad \bar{x}_l = -1, \quad l \ge 1.
$$
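The banded structure of Problem 5 makes its residual easy to vectorize; a minimal sketch with 0-based indices (the helper name `singular_broyden` is ours, not from the paper):

```python
import numpy as np

def singular_broyden(x):
    """Residual of the Singular Broyden problem, n >= 2."""
    n = len(x)
    f = np.empty(n)
    f[0] = ((3 - 2 * x[0]) * x[0] - 2 * x[1] + 1) ** 2
    f[1:-1] = ((3 - 2 * x[1:-1]) * x[1:-1] - x[:-2] - 2 * x[2:] + 1) ** 2
    f[-1] = ((3 - 2 * x[-1]) * x[-1] - x[-2] + 1) ** 2
    return f

# At the standard guess x̄ = (-1, ..., -1) the components are
# f_1 = (-5+2+1)^2 = 4, interior f_k = (-5+1+2+1)^2 = 1, f_n = (-5+1+1)^2 = 9.
print(singular_broyden(-np.ones(5)))  # [4. 1. 1. 1. 9.]
```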

Problem 6 Tridiagonal System [31]:

$$
f_k(x) = \begin{cases}
4(x_k - x_{k+1}^2), & k = 1,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k) + 4(x_k - x_{k+1}^2), & 1 < k < n,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k), & k = n,
\end{cases}
\qquad \bar{x}_l = 12, \quad l \ge 1.
$$

Problem 7 Five-Diagonal System [31]:

$$
f_k(x) = \begin{cases}
4(x_k - x_{k+1}^2) + x_{k+1} - x_{k+2}^2, & k = 1,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k) + 4(x_k - x_{k+1}^2) + x_{k+1} - x_{k+2}^2, & k = 2,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k) + 4(x_k - x_{k+1}^2) + x_{k-1}^2 - x_{k-2} + x_{k+1} - x_{k+2}^2, & 2 < k < n-1,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k) + 4(x_k - x_{k+1}^2) + x_{k-1}^2 - x_{k-2}, & k = n-1,\\
8x_k(x_k^2 - x_{k-1}) - 2(1 - x_k) + x_{k-1}^2 - x_{k-2}, & k = n,
\end{cases}
\qquad \bar{x}_l = -2, \quad l \ge 1.
$$