
ISSN 1686-0209

Thai Journal of Mathematics

Vol. 18, No. 1 (2020),

Pages 211 - 231

DERIVATIVE-FREE RMIL CONJUGATE GRADIENT ALGORITHM FOR CONVEX CONSTRAINED EQUATIONS

Abdulkarim Hassan Ibrahim^{1,2,*}, Garba Abor Isa^{3}, Halima Usman^{3}, Jamilu Abubakar^{1,2,3}, Auwal Bala Abubakar^{1,2,4}

^1 Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand

^2 KMUTT Fixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand

^3 Department of Mathematics, Faculty of Science, Usmanu Danfodiyo University Sokoto, PMB 2346, Nigeria

^4 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano, Nigeria

Email addresses: ibrahimkarym@gmail.com (A. H. Ibrahim), garba.isa@udusok.edu.ng (G. A. Isa), halima.usman@udusok.edu.ng (H. Usman), abubakar.jamilu@udusok.edu.ng (J. Abubakar), ababubakar.mth@buk.edu.ng (A. B. Abubakar)

Abstract. An efficient method for solving large-scale unconstrained optimization problems is the conjugate gradient method. Based on the conjugate gradient algorithm proposed by Rivaie et al. ("A new class of nonlinear conjugate gradient coefficients with global convergence properties," Applied Mathematics and Computation 218(22) (2012), 11323-11332), we propose a spectral conjugate gradient algorithm for solving nonlinear equations with convex constraints which generates a sufficient descent direction at each iteration. Under the Lipschitz continuity assumption, the global convergence of the algorithm is established. Furthermore, the proposed algorithm is shown to be linearly convergent under some appropriate conditions. Numerical experiments are reported to show the efficiency of the algorithm.

MSC: 47H05; 47J20; 47J25; 65K15

Keywords: Projection Method, Subgradient extragradient method, Inertial type algorithm, Monotone

operator, Variational inequality.

Submission date: 30.11.2019 / Acceptance date: 18.01.2020

1. Introduction

In the last decade, several articles have been written on iterative methods for solving nonlinear systems of equations. This is due to the numerous problems in science and engineering in which nonlinear equations appear. For instance, they arise as the subproblem in generalized proximal algorithms with Bregman distances [1]. Also, real-world applications such as the Nash equilibrium problem in economics [2] and the signal processing problem in [3] need to be reformulated as nonlinear systems of equations. It is therefore essential to develop efficient algorithms for solving the nonlinear equations arising in these fields.

*Corresponding author. Published by The Mathematical Society of Thailand.

Let $C$ be a nonempty closed and convex subset of $\mathbb{R}^n$ and let $F:\mathbb{R}^n\to\mathbb{R}^n$ be a monotone mapping, that is,
$$(x-y)^T\big(F(x)-F(y)\big)\ \ge\ 0,\qquad \forall x,y\in\mathbb{R}^n. \qquad (1.1)$$
The focus of this work is on the nonlinear equation
$$F(x)=0,\qquad x\in C. \qquad (1.2)$$
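As a quick numerical illustration of the monotonicity condition (1.1), the componentwise map $F(x)_i = e^{x_i}-1$ (the gradient of the strictly convex separable function $\sum_i (e^{x_i}-x_i)$, which also appears as a test problem in Section 4) is monotone. The following Python sketch, not part of the paper, checks the inequality on random pairs:

```python
import numpy as np

# Componentwise map F(x)_i = exp(x_i) - 1: the gradient of the strictly
# convex separable function sum_i (exp(x_i) - x_i), hence monotone on R^n.
def F(x):
    return np.exp(x) - 1.0

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert (x - y) @ (F(x) - F(y)) >= 0.0  # inequality (1.1)
print("monotonicity (1.1) holds on all sampled pairs")
```

Each term $(x_i-y_i)(e^{x_i}-e^{y_i})$ is nonnegative because $e^t$ is increasing, so the inner product in (1.1) is a sum of nonnegative terms.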

Several iterative algorithms have been proposed for solving the nonlinear problem (1.2). A few of them include the trust-region method [4], the Levenberg-Marquardt method [5], the TPRP method [6] and the Gauss-Newton methods [7, 8]. However, the methods mentioned above are typically unsuitable for handling large-scale nonlinear equations because the computation and storage of a matrix are required at each iteration. One of the preferred methods for solving this problem is the conjugate gradient (CG) method. The CG method is a popular iterative method developed with the sole aim of solving large-scale unconstrained optimization problems. For an excellent survey of CG methods, see [9].

Following the well-known projection scheme of Solodov and Svaiter [10], the CG method has been extended by many authors to solve (1.2). One of these extensions is the method of Cheng [11], who extended the PRP method [12] to solve unconstrained monotone equations. Recently, the spectral gradient projection (SP) method [13] was extended to solve monotone nonlinear convex constrained equations; numerical experiments indicate that the resulting method is suitable for large-scale problems. Thus, CG methods for solving unconstrained optimization problems have been extended by various authors to solve convex constrained monotone nonlinear equations. For more related articles, we refer the reader to [14-34] and the references therein.

Motivated by the results of Rivaie et al. [35], we propose a derivative-free spectral gradient-type iterative projection method for solving (1.2). The global convergence of the method is proved under some conditions. Furthermore, the linear convergence rate of the proposed method is established under some assumptions.

The remaining part of this paper is organized as follows. In Section 2, we introduce our algorithm and the method for unconstrained optimization problems posed in [35]. We establish the global convergence of the method in Section 3. We report the results of numerical experiments conducted on benchmark test problems in Section 4. Finally, we conclude in Section 5.

2. Algorithm

We begin this section by presenting our proposed algorithm for solving (1.2). We

assume that the readers are familiar with the conjugate gradient method. Motivated

by the RMIL conjugate gradient algorithm proposed by Rivaie et al. [35] for solving

large-scale unconstrained optimization problems, we propose an eﬃcient derivative-free

algorithm for solving nonlinear monotone equations with convex constraints (1.2) by using

the projection technique in Solodov and Svaiter [10]. Firstly, we deﬁne the search direction

as follows:
$$d_k=\begin{cases}-v_k F(x_k)+\beta_k^{ERMIL}\,d_{k-1}, & \text{if } k>0,\\ -F(x_k), & \text{if } k=0,\end{cases} \qquad (2.1)$$
where
$$\beta_k^{ERMIL}=\frac{F(x_k)^T\big(F(x_k)-F(x_{k-1})\big)}{\|d_{k-1}\|^2}. \qquad (2.2)$$

For convenience, we refer to (2.1) and (2.2) as the MRMIL algorithm. We note that if $F$ is the gradient of a real-valued function $f:\mathbb{R}^n\to\mathbb{R}$, then the sufficient descent condition
$$d_k^T F(x_k)\ \le\ -c\,\|F(x_k)\|^2, \qquad (2.3)$$
where $c$ is a positive constant, means that $d_k$ is a direction of sufficient descent of $f$ at $x_k$. We choose $v_k$ so that (2.3) is satisfied. In what follows, we abbreviate $F(x_k)$ as $F_k$ and write $y_{k-1}:=F(x_k)-F(x_{k-1})$.

For $k=0$, (2.3) obviously holds. For $k\in\mathbb{N}$, we have
$$d_k^T F_k\ \le\ -\Big(v_k-\frac{\|y_{k-1}\|}{\|d_{k-1}\|}\Big)\|F_k\|^2.$$
Hence, to satisfy (2.3) it suffices that
$$v_k\ \ge\ c+\frac{\|y_{k-1}\|}{\|d_{k-1}\|}.$$
Without loss of generality, in this paper we choose $v_k$ as
$$v_k\ =\ c+\frac{\|y_{k-1}\|}{\|d_{k-1}\|}. \qquad (2.4)$$
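As a sanity check on this choice, the following Python sketch, not part of the paper and using arbitrary sample vectors, computes $d_k$ from (2.1)-(2.2) with $v_k$ as in (2.4) and verifies the sufficient descent condition (2.3):

```python
import numpy as np

def direction(Fk, Fk_prev, d_prev, c=1.0):
    """Search direction (2.1) for k > 0, with beta from (2.2) and v_k from (2.4)."""
    y = Fk - Fk_prev                                    # y_{k-1} = F_k - F_{k-1}
    beta = Fk @ y / (d_prev @ d_prev)                   # (2.2)
    v = c + np.linalg.norm(y) / np.linalg.norm(d_prev)  # (2.4)
    return -v * Fk + beta * d_prev                      # (2.1)

rng = np.random.default_rng(1)
Fk, Fk_prev, d_prev = (rng.normal(size=4) for _ in range(3))
d = direction(Fk, Fk_prev, d_prev, c=1.0)
# sufficient descent (2.3): d_k^T F_k <= -c ||F_k||^2
assert d @ Fk <= -1.0 * (Fk @ Fk) + 1e-12
```

The assertion holds for any inputs: by the Cauchy-Schwarz inequality the $\beta$-term contributes at most $(\|y_{k-1}\|/\|d_{k-1}\|)\|F_k\|^2$, which the choice (2.4) cancels, leaving $-c\|F_k\|^2$.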

Next, we recall the projection operator, defined as the mapping $P_C:\mathbb{R}^n\to C$,
$$P_C(x)=\arg\min\{\|x-y\|\ :\ y\in C\}, \qquad (2.5)$$
where $C$ is a nonempty closed convex set. Throughout this article, $\|\cdot\|$ denotes the Euclidean norm. A well-known characterization of the projection operator is its nonexpansive property, that is, for any $x,y\in\mathbb{R}^n$,
$$\|P_C(x)-P_C(y)\|\ \le\ \|x-y\|.$$
Consequently,
$$\|P_C(x)-y\|\ \le\ \|x-y\|,\qquad \forall y\in C. \qquad (2.6)$$
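For example, when $C=\mathbb{R}^n_+$ (the constraint set of several test problems in Section 4), the projection (2.5) reduces to componentwise clipping at zero. A short Python sketch, not part of the paper, checks nonexpansiveness and (2.6) numerically:

```python
import numpy as np

# For C = R^n_+, the projection (2.5) is componentwise clipping at zero.
def proj(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(2)
x, y = rng.normal(size=6), rng.normal(size=6)
# nonexpansiveness: ||P_C(x) - P_C(y)|| <= ||x - y||
assert np.linalg.norm(proj(x) - proj(y)) <= np.linalg.norm(x - y) + 1e-12
# (2.6): ||P_C(x) - y|| <= ||x - y|| for every y already in C
y_in_C = np.abs(y)
assert np.linalg.norm(proj(x) - y_in_C) <= np.linalg.norm(x - y_in_C) + 1e-12
```

For general closed convex $C$ the projection requires solving the minimization in (2.5); the simple box-like sets used in Section 4 all admit such closed-form projections.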

In the remainder of this paper, we always assume that $F$ satisfies the following assumptions.

Assumption 2.1. The mapping $F:\mathbb{R}^n\to\mathbb{R}^n$ is Lipschitz continuous, that is, there exists a positive constant $L$ such that
$$\|F(x)-F(y)\|\ \le\ L\,\|x-y\|,\qquad \forall x,y\in\mathbb{R}^n. \qquad (2.7)$$

Assumption 2.2. Let $C^*$ be the solution set of (1.2). For any solution $x^*\in C^*$, there exists a nonnegative constant $\gamma$ satisfying
$$\gamma\,\mathrm{dist}(x,C^*)\ \le\ \|F(x)\|^2,\qquad \forall x\in N(x^*,\gamma), \qquad (2.8)$$
where $\mathrm{dist}(x,C^*)$ is the distance from $x$ to $C^*$ and $N(x^*,\gamma):=\{x\in\mathbb{R}^n\ :\ \|x-x^*\|\le\gamma\}$.

We state the steps of the algorithm as follows.

Algorithm 2.3 (MRMIL).
Input. Choose an initial point $x_0\in\mathbb{R}^n$ and positive constants $Tol>0$, $\varpi\in(0,2)$, $\rho\in(0,1)$, $\kappa>0$, $\sigma>0$. Set $k=0$.
Step 0. If $\|F_k\|\le Tol$, stop. Otherwise, generate the search direction by
$$d_k=\begin{cases}-v_k F(x_k)+\beta_k^{ERMIL}\,d_{k-1}, & \text{if } k>0,\\ -F(x_k), & \text{if } k=0.\end{cases} \qquad (2.9)$$
Step 1. Determine the step size $t_k=\max\{\kappa\rho^i\ :\ i=0,1,2,\dots\}$ and set $z_k=x_k+t_k d_k$ such that
$$-F(z_k)^T d_k\ \ge\ \sigma t_k\|d_k\|^2. \qquad (2.10)$$
Step 2. If $z_k\in C$ and $\|F(z_k)\|=0$, stop. Otherwise, compute the next iterate by
$$x_{k+1}=P_C\big[x_k-\varpi\,\xi_k F(z_k)\big], \qquad (2.11)$$
where
$$\xi_k=\frac{F(z_k)^T(x_k-z_k)}{\|F(z_k)\|^2}.$$
Step 3. Set $k=k+1$ and return to Step 0.
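The steps above can be sketched in a few lines of Python (the paper's experiments use Matlab). This is a minimal rendering, not the authors' implementation: the value of $\kappa$ is not specified in the paper and is assumed to be 1 here, the exact stopping test $\|F(z_k)\|=0$ of Step 2 is relaxed to $\|F(z_k)\|\le Tol$, and the map and projection are taken from Problem 3 of Section 4:

```python
import numpy as np

def mrmil(F, proj, x0, tol=1e-6, c=1.0, kappa=1.0, rho=0.5,
          sigma=1e-3, varpi=1.8, max_iter=1000):
    """Minimal sketch of Algorithm 2.3 (MRMIL); parameter names follow the paper."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                      # k = 0 branch of (2.1)
    for k in range(max_iter):
        if np.linalg.norm(Fx) <= tol:            # Step 0 stopping test
            return x, k
        # Step 1: backtracking t_k = kappa * rho^i until (2.10) holds
        t = kappa
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= rho
        z = x + t * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:            # relaxed form of Step 2's test
            return z, k
        # Step 2: projection step (2.11)
        xi = Fz @ (x - z) / (Fz @ Fz)
        x_new = proj(x - varpi * xi * Fz)
        F_new = F(x_new)
        # next search direction from (2.1), (2.2) and (2.4)
        y = F_new - Fx
        beta = F_new @ y / (d @ d)                      # (2.2)
        v = c + np.linalg.norm(y) / np.linalg.norm(d)   # (2.4)
        d = -v * F_new + beta * d                       # (2.1)
        x, Fx = x_new, F_new
    return x, max_iter

# Problem 3 of Section 4: F_i(x) = 2 x_i - sin|x_i|, C = R^n_+, solution x* = 0
F = lambda x: 2.0 * x - np.sin(np.abs(x))
proj = lambda x: np.maximum(x, 0.0)
x, iters = mrmil(F, proj, 0.5 * np.ones(100))
print(iters, np.linalg.norm(F(x)))
```

On this toy instance the iterates contract toward the solution $x^*=0$ within a handful of iterations; the residual at return is below the $10^{-6}$ tolerance used in the experiments.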

Lemma 2.4. Let $d_k$ be a search direction generated by Algorithm 2.3. Then $d_k$ always satisfies (2.3).

Proof. The proof follows from (2.4).

3. Convergence Analysis

In order to establish the convergence of Algorithm 2.3, we need the following lemmas.

Lemma 3.1. Let $\{d_k\}$ and $\{x_k\}$ be two sequences generated by Algorithm 2.3. Then there exists a step size $t_k$ satisfying the line search (2.10) for all $k\ge 0$.

Proof. Suppose (2.10) does not hold at the $k_0$-th iterate; then for any $i\ge 0$,
$$-F(x_{k_0}+\kappa\rho^i d_{k_0})^T d_{k_0}\ <\ \sigma\kappa\rho^i\|d_{k_0}\|^2.$$
Since $0<\rho<1$, letting $i\to\infty$ and using the continuity of $F$, it follows that
$$-F(x_{k_0})^T d_{k_0}\ \le\ 0,$$
which contradicts (2.3).

Lemma 3.2. Suppose that Assumption 2.1 holds, and let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 2.3. Then
$$t_k\ \ge\ \min\Big\{\kappa,\ \frac{\rho c\|F_k\|^2}{(L+\sigma)\|d_k\|^2}\Big\}. \qquad (3.1)$$

Proof. From the line search (2.10), if $t_k\ne\kappa$, then $t_k^*=t_k/\rho$ does not satisfy (2.10), that is,
$$-F\Big(x_k+\frac{t_k}{\rho}d_k\Big)^T d_k\ <\ \sigma\,\frac{t_k}{\rho}\,\|d_k\|^2.$$
It follows from (2.3) and (2.7) that
$$c\|F_k\|^2\ \le\ -F_k^T d_k\ =\ \Big(F\Big(x_k+\frac{t_k}{\rho}d_k\Big)-F_k\Big)^T d_k-F\Big(x_k+\frac{t_k}{\rho}d_k\Big)^T d_k\ \le\ \frac{t_k}{\rho}(L+\sigma)\|d_k\|^2.$$
This gives the desired inequality (3.1).

Lemma 3.3. Suppose that Assumption 2.1 holds. Let $\{x_k\}$ and $\{z_k\}$ be sequences generated by Algorithm 2.3. Then for any $x^*\in C^*$ the inequality
$$\|x_{k+1}-x^*\|^2\ \le\ \|x_k-x^*\|^2-\varpi(2-\varpi)\sigma^2\,\frac{\|x_k-z_k\|^4}{\|F(z_k)\|^2} \qquad (3.2)$$
holds. In addition, $\{x_k\}$ is bounded and
$$\sum_{k=0}^{\infty}\|x_k-z_k\|^4\ <\ +\infty. \qquad (3.3)$$

Proof. First, by the monotonicity of the mapping $F$, for any solution $x^*\in C^*$,
$$F(z_k)^T(x_k-x^*)\ \ge\ F(z_k)^T(x_k-z_k).$$
This inequality together with (2.10) gives
$$F(x_k+t_k d_k)^T(x_k-z_k)\ \ge\ \sigma t_k^2\|d_k\|^2\ \ge\ 0. \qquad (3.4)$$
From (2.6) and (3.4) we have
$$\begin{aligned}
\|x_{k+1}-x^*\|^2 &= \|P_C\big(x_k-\varpi\xi_k F(x_k+t_k d_k)\big)-x^*\|^2 \qquad (3.5)\\
&\le \|x_k-\varpi\xi_k F(x_k+t_k d_k)-x^*\|^2\\
&= \|x_k-x^*\|^2-2\varpi\xi_k F(x_k+t_k d_k)^T(x_k-x^*)+\|\varpi\xi_k F(x_k+t_k d_k)\|^2\\
&\le \|x_k-x^*\|^2-2\varpi\xi_k F(x_k+t_k d_k)^T(x_k-z_k)+\|\varpi\xi_k F(x_k+t_k d_k)\|^2\\
&= \|x_k-x^*\|^2-\varpi(2-\varpi)\,\frac{\big(F(z_k)^T(x_k-z_k)\big)^2}{\|F(z_k)\|^2}\\
&\le \|x_k-x^*\|^2-\varpi(2-\varpi)\sigma^2\,\frac{\|x_k-z_k\|^4}{\|F(z_k)\|^2}. \qquad (3.6)
\end{aligned}$$
Thus the sequence $\{\|x_k-x^*\|\}$ is decreasing, which implies that $\{x_k\}$ is bounded, that is,
$$\|x_k\|\ \le\ \varsigma,\qquad \forall k\ge 0. \qquad (3.7)$$
Furthermore, by the continuity of $F$, there exists a constant $I_1>0$ such that
$$\|F(x_k)\|\ \le\ I_1,\qquad \forall k\ge 0.$$
Since $(F(x_k)-F(z_k))^T(x_k-z_k)\ge 0$, by the Cauchy-Schwarz inequality and the line search (2.10) we have
$$\|F(x_k)\|\,\|x_k-z_k\|\ \ge\ F(x_k)^T(x_k-z_k)\ \ge\ F(z_k)^T(x_k-z_k)\ \ge\ \sigma\|x_k-z_k\|^2.$$
So
$$\sigma\|x_k-z_k\|\ \le\ \|F(x_k)\|\ \le\ I_1,$$
which implies that $\{z_k\}$ is bounded. By the continuity of $F$, there exists a constant $I_2>0$ such that
$$\|F(z_k)\|\ \le\ I_2,\qquad \forall k\ge 0.$$
The above combined with (3.6) yields
$$\frac{\varpi(2-\varpi)\sigma^2}{I_2^2}\,\|x_k-z_k\|^4\ \le\ \|x_k-x^*\|^2-\|x_{k+1}-x^*\|^2. \qquad (3.8)$$

Now, summing (3.8) over $k\ge 0$, we have
$$\frac{\varpi(2-\varpi)\sigma^2}{I_2^2}\sum_{k=0}^{\infty}\|x_k-z_k\|^4\ \le\ \sum_{k=0}^{\infty}\big(\|x_k-x^*\|^2-\|x_{k+1}-x^*\|^2\big)\ \le\ \|x_0-x^*\|^2\ <\ \infty. \qquad (3.9)$$
Inequality (3.9) implies that
$$\lim_{k\to\infty}\|x_k-z_k\|=0. \qquad (3.10)$$
The proof is complete.

Theorem 3.4. Suppose that Assumption 2.1 holds, and let $\{x_k\}$ be the sequence generated by Algorithm 2.3. Then we have
$$\liminf_{k\to\infty}\|F_k\|=0. \qquad (3.11)$$

Proof. Suppose (3.11) is not valid; then there exists a constant $r>0$ such that $\|F_k\|\ge r$ for all $k\ge 0$. This together with (2.3) implies that
$$\|d_k\|\ \ge\ cr,\qquad \forall k\ge 0. \qquad (3.12)$$

Since $\{\|F_k\|\}$ and $\{\|F(z_k)\|\}$ are bounded, it follows from (2.1)-(2.4) that for all $k\ge 1$,
$$\begin{aligned}
\|d_k\| &\le c\|F_k\|+\frac{\|F_k\|\,\|y_{k-1}\|}{\|d_{k-1}\|}+\frac{\|F_k\|\,\|y_{k-1}\|}{\|d_{k-1}\|^2}\,\|d_{k-1}\|\\
&= c\|F_k\|+\frac{2\|F_k\|\,\|y_{k-1}\|}{\|d_{k-1}\|}\\
&\le c\|F_k\|+\frac{2L\|F_k\|\,\|x_k-x_{k-1}\|}{\|d_{k-1}\|}\\
&\le cI_1+\frac{4I_1 L\varsigma}{cr}\ \triangleq\ \Gamma.
\end{aligned}$$
Note that the first inequality follows from the Cauchy-Schwarz inequality, and the third from (2.7) and (3.12). Now, from (3.1), we have
$$t_k\|d_k\|\ \ge\ \min\Big\{\kappa,\ \frac{\rho c\|F_k\|^2}{(L+\sigma)\|d_k\|^2}\Big\}\|d_k\|\ \ge\ \min\Big\{\kappa cr,\ \frac{\rho c r^2}{(L+\sigma)\Gamma}\Big\}\ >\ 0,$$
which contradicts (3.10). Hence (3.11) is valid.

Theorem 3.5. Let $\{x_k\}$ be the sequence generated by Algorithm 2.3 under Assumptions 2.1-2.2. Then the sequence $\{\mathrm{dist}(x_k,C^*)\}$ converges $Q$-linearly to zero.

Proof. Set $\mu_k=\arg\min\{\|x_k-h\|\ :\ h\in C^*\}$, so that
$$\|x_k-\mu_k\|=\mathrm{dist}(x_k,C^*).$$
From (3.2), for $\mu_k\in C^*$ we obtain
$$\begin{aligned}
\mathrm{dist}(x_{k+1},C^*)^2 &\le \|x_{k+1}-\mu_k\|^2\\
&\le \mathrm{dist}(x_k,C^*)^2-\sigma^2\|t_k d_k\|^4\\
&\le \mathrm{dist}(x_k,C^*)^2-\sigma^2 c^4 t_k^4\|F_k\|^4\\
&\le \mathrm{dist}(x_k,C^*)^2-\sigma^2\gamma^2 c^4 t_k^4\,\mathrm{dist}(x_k,C^*)^2\\
&= \big(1-\sigma^2\gamma^2 c^4 t_k^4\big)\,\mathrm{dist}(x_k,C^*)^2,
\end{aligned}$$
where the fourth inequality follows from Assumption 2.2. If the parameters satisfy $\frac{1}{\gamma\sigma}\ge c^2$, then $1-\sigma^2\gamma^2 c^4 t_k^4\in(0,1)$. Hence $\mathrm{dist}(x_k,C^*)$ converges $Q$-linearly to zero.

4. Numerical Experiments

An insight of the proposed algorithm is presented in this section. We test the com-

putational performance of Algorithm 2.3 with existing method in literature using some

benchmark test problems. Precisely, we compare our algorithm with the PDY algorithm

[36] designed for solving same problem (1.2). The numerical experiments are carried out

on a set of seven diﬀerent problems with dimension ranging from n= 5000 to 100,000

and initial points set as follow:

x1= (0.1,0.1,· · · ,0.1)T, x2= (0.2,0.2,· · · ,0.2)T, x3= (0.5,0.5,· · · ,0.5)T, x4= (1.2,1.2,· · · ,1.2)T,

x5= (1.5,1.5,· · · 1.5)T, x6= (2,2,· · · ,2)T, x7=rand(n, 1).

Throughout, we set parameters for PDY algorithm as in [36]. For Algorithm 1, the values

of our parameters were set as follows: c= 1, ρ = 0.5, σ = 0.001. ϖ = 1.8. For each test

problem, the iterative process is stopped when the inequality

∥Fk∥ ≤ 10−6

is satisﬁed. Again, failure is declared after a thousand iteration. All algorithms were

written in Matlab and run on a HP personal computer with system speciﬁcations as

follows Intel(R) Core (TM) i3-7100U CPU 2.40GHZ, 8GB memory and Windows 10

operating system.

We now list the benchmark test problems used in our experiments. Throughout, the mapping $F$ is taken as $F(x)=(f_1(x),f_2(x),\dots,f_n(x))^T$.

Problem 1. Exponential function [37] with constraint set $C=\mathbb{R}^n_+$, that is,
$$f_1(x)=e^{x_1}-1,\qquad f_i(x)=e^{x_i}+x_i-1,\quad i=2,3,\dots,n.$$

Problem 2. Modified logarithmic function [15] with constraint set $C=\{x\in\mathbb{R}^n:\ \sum_{i=1}^n x_i\le n,\ x_i>-1,\ i=1,2,\dots,n\}$, that is,
$$f_i(x)=\ln(x_i+1)-\frac{x_i}{n},\quad i=1,2,\dots,n.$$

Problem 3. Nonsmooth function [38] with constraint set $C=\mathbb{R}^n_+$, that is,
$$f_i(x)=2x_i-\sin|x_i|,\quad i=1,2,\dots,n.$$

Problem 4. Strictly convex function [39] with constraint set $C=\mathbb{R}^n_+$, that is,
$$f_i(x)=e^{x_i}-1,\quad i=1,2,\dots,n.$$

Problem 5. Tridiagonal exponential function [40] with constraint set $C=\mathbb{R}^n_+$, that is,
$$f_1(x)=x_1-e^{\cos(h(x_1+x_2))},$$
$$f_i(x)=x_i-e^{\cos(h(x_{i-1}+x_i+x_{i+1}))},\quad 2\le i\le n-1,$$
$$f_n(x)=x_n-e^{\cos(h(x_{n-1}+x_n))},\qquad \text{where } h=\frac{1}{n+1}.$$

Problem 6. Nonsmooth function [41] with constraint set $C=\{x\in\mathbb{R}^n:\ \sum_{i=1}^n x_i\le n,\ x_i\ge -1,\ 1\le i\le n\}$, that is,
$$f_i(x)=x_i-\sin|x_i-1|,\quad i=1,2,\dots,n.$$

Problem 7. Trig exp function [37] with constraint set $C=\mathbb{R}^n_+$, that is,
$$f_1(x)=3x_1^3+2x_2-5+\sin(x_1-x_2)\sin(x_1+x_2),$$
$$f_i(x)=3x_i^3+2x_{i+1}-5+\sin(x_i-x_{i+1})\sin(x_i+x_{i+1})+4x_i-x_{i-1}e^{x_{i-1}-x_i}-3,\quad i=2,3,\dots,n-1,$$
$$f_n(x)=x_{n-1}e^{x_{n-1}-x_n}-4x_n-3.$$

In order to visualize the behavior of Algorithm 2.3, we adopt the performance profiles proposed by Dolan and Moré [42] to compare the performance of the tested methods. A performance profile measures how well each solver performs relative to the other solvers on a set of problems, based on the total number of iterations, the total number of function evaluations, and the running time of each method. The details of our numerical tests are presented in the Appendix. We denote by "Iter." the number of iterations, "Fval." the number of function evaluations and "Time" the CPU time in seconds.
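The Dolan-Moré construction can be sketched as follows; this Python fragment, not part of the paper and using made-up iteration counts rather than the paper's data, computes the performance ratios $r_{p,s}=t_{p,s}/\min_s t_{p,s}$ and the profile value $\rho_s(\tau)$, the fraction of problems on which solver $s$ is within a factor $\tau$ of the best:

```python
import numpy as np

def performance_profile(T):
    """Dolan-More performance ratios for a cost matrix T (problems x solvers).

    Returns r[p, s] = T[p, s] / min_s T[p, s]; failures can be encoded as inf.
    """
    T = np.asarray(T, dtype=float)
    return T / T.min(axis=1, keepdims=True)

def rho(r, s, tau):
    """Fraction of problems on which solver s has ratio at most tau."""
    return np.mean(r[:, s] <= tau)

# toy iteration counts for 4 problems and 2 solvers
r = performance_profile([[5, 16], [8, 17], [2, 18], [7, 5]])
print(rho(r, 0, 1.0))  # solver 0 is fastest on 3 of 4 problems -> 0.75
```

Plotting $\rho_s(\tau)$ against $\tau$ for each solver gives curves like those in Figures 1-3; the value at $\tau=1$ is the probability that a solver is the best one, and the curve that sits higher is the better performer overall.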


Figure 1. Performance proﬁles with respect to the number of iterates

The figures in this section show the performance profiles of our method versus a recent existing method. The performance of the methods is measured by the number of iterations, the number of function evaluations and the CPU time. It is not difficult to see that both methods solved all the test problems successfully. However, on the whole, the MRMIL algorithm performs better than the PDY algorithm on these measures.

In detail, Figure 1 illustrates the performance profile of our method when the performance index is the total number of iterations. It can be seen that the MRMIL algorithm is the best solver with probability around 79%, while the probability of the compared method solving a problem as the best solver is around 31%. Figures 2 and 3 illustrate the performance profiles for the total number of function evaluations and the CPU time; conclusions similar to those for Figure 1 can be drawn from these figures.


Figure 2. Performance profiles with respect to the number of function evaluations


Figure 3. Performance proﬁles with respect to CPU time

5. Conclusion

In this article, the authors proposed a modiﬁed conjugate gradient algorithm for solving

monotone nonlinear equations with convex constraints. This work can be regarded as an

extension of the method in [35]. Using some technical conditions, we established the

global convergence of the proposed method. We present numerical results to illustrate

that our method is stable and eﬃcient for the monotone nonlinear equations, especially

for the large-scale problems with convex constraints.

6. Acknowledgements

This project was supported by Theoretical and Computational Science (TaCS) Cen-

ter under Computational and Applied Science for Smart research Innovation Cluster

(CLASSIC), Faculty of Science, KMUTT. The ﬁrst author was supported by the Petchra

Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant

No.16/2561).


Appendix

Table 1. Numerical results for problem 1

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1   5   19   0.017596  0.00E+00    16   64   0.040997  3.45E-07
x2   5   19   0.020114  0.00E+00    16   64   0.033587  7.03E-07
x3   8   32   0.014192  2.11E-07    17   68   0.011419  6.22E-07
x4   2    7   0.006337  0.00E+00    18   72   0.02491   4.54E-07
x5   2    7   0.007319  0.00E+00    18   72   0.038672  3.65E-07
x6   5   19   0.008786  0.00E+00    18   72   0.013774  3.80E-07
x7   8   32   0.006044  4.80E-07    17   68   0.019283  7.05E-07
n = 5000
x1   5   19   0.012849  0.00E+00    16   64   0.076967  7.61E-07
x2   5   19   0.011326  0.00E+00    17   68   0.07326   5.15E-07
x3  10   40   0.03188   1.49E-07    18   72   0.051817  4.63E-07
x4   2    7   0.00821   0.00E+00    19   76   0.059926  3.38E-07
x5   2    7   0.007023  0.00E+00    18   72   0.078845  8.12E-07
x6   7   27   0.020875  0.00E+00    18   72   0.072337  8.10E-07
x7   8   32   0.01868   7.97E-07    18   72   0.062592  5.38E-07
n = 10000
x1  12   48   0.048563  1.28E-08    17   68   0.097567  3.55E-07
x2   6   24   0.027456  2.05E-07    17   68   0.085576  7.27E-07
x3   8   32   0.02695   1.85E-07    18   72   0.10317   6.55E-07
x4   2    7   0.007457  0.00E+00    19   76   0.092351  4.77E-07
x5   2    7   0.012742  0.00E+00    20   80   0.13829   4.52E-07
x6   6   23   0.032861  0.00E+00    19   76   0.093402  5.51E-07
x7   9   36   0.033609  9.08E-08    18   72   0.084444  7.55E-07
n = 50000
x1  10   40   0.18505   2.60E-07    17   68   0.43588   7.93E-07
x2   8   32   0.11334   7.28E-07    18   72   0.32887   5.44E-07
x3   8   32   0.13543   7.35E-08    19   76   0.36732   4.86E-07
x4   2    7   0.033502  0.00E+00    20   80   0.41552   9.70E-07
x5   2    7   0.059006  0.00E+00    22   88   0.57589   8.63E-07
x6   6   23   0.1       0.00E+00    23   92   0.49481   8.62E-07
x7   9   36   0.17921   1.92E-07    19   76   0.43672   5.62E-07
n = 100000
x1  17   68   0.46137   5.26E-09    18   72   0.61655   3.76E-07
x2  17   68   0.46751   6.68E-07    18   72   0.81072   7.69E-07
x3   8   31   0.20832   0.00E+00    19   76   0.64764   6.88E-07
x4   2    7   0.080694  0.00E+00    23   92   1.0145    3.63E-07
x5   2    7   0.090344  0.00E+00    23   92   1.043     9.61E-07
x6  11   44   0.27679   8.73E-08    26  104   1.0696    3.39E-07
x7   9   36   0.27043   2.48E-07    20   80   0.9056    7.78E-07

Table 2. Numerical results for problem 2

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1   7   23   0.004347  1.42E-08    13   51   0.077345  7.68E-07
x2   7   23   0.00593   1.44E-08    15   59   0.013322  3.49E-07
x3   8   26   0.008602  1.11E-08    16   63   0.010509  6.98E-07
x4   8   26   0.00346   6.52E-09    18   71   0.029102  3.52E-07
x5   7   23   0.005348  1.18E-08    18   71   0.014308  5.13E-07
x6   8   26   0.00715   5.03E-09    18   71   0.01597   8.59E-07
x7  16   63   0.015263  5.75E-07    17   67   0.051946  4.52E-07
n = 5000
x1   7   24   0.014743  5.75E-07    14   55   0.050839  5.44E-07
x2   7   24   0.016902  5.75E-07    15   59   0.036741  7.63E-07
x3   8   27   0.018674  4.90E-07    17   67   0.10101   5.12E-07
x4   8   27   0.016456  3.19E-07    18   71   0.073292  7.73E-07
x5   7   24   0.019151  4.59E-07    19   75   0.074561  3.75E-07
x6   8   26   0.028861  4.98E-10    19   75   0.045617  6.27E-07
x7  15   59   0.042091  8.23E-07    17   67   0.17854   9.89E-07
n = 10000
x1   9   35   0.043477  4.65E-07    14   55   0.096961  7.66E-07
x2   9   34   0.034886  4.65E-07    16   63   0.068215  3.55E-07
x3  10   38   0.048411  4.04E-07    17   67   0.097321  7.23E-07
x4  10   38   0.051665  2.72E-07    19   75   0.17663   3.63E-07
x5   9   35   0.038634  3.75E-07    19   75   0.1389    5.29E-07
x6  10   38   0.048532  1.63E-07    19   76   0.084686  9.51E-07
x7  16   63   0.066321  5.89E-07    18   71   0.18208   4.65E-07
n = 50000
x1  10   39   0.1595    1.04E-07    15   59   0.60225   5.78E-07
x2  10   39   0.13852   1.03E-07    16   63   0.38193   7.92E-07
x3  10   38   0.25958   9.06E-07    18   71   1.1323    5.36E-07
x4  10   38   0.15314   6.13E-07    21   84   0.48105   3.43E-07
x5   9   35   0.13068   8.30E-07    21   84   0.65056   4.72E-07
x6  10   38   0.15521   3.60E-07    21   84   0.49099   4.77E-07
x7  16   63   0.38185   8.50E-07    19   75   0.46664   3.46E-07
n = 100000
x1  10   39   0.34962   1.46E-07    15   59   0.79437   8.17E-07
x2  10   39   0.43655   1.46E-07    17   67   0.86905   3.76E-07
x3  11   42   0.31355   1.28E-07    18   72   0.92806   9.65E-07
x4  10   38   0.38263   8.68E-07    22   88   1.0076    8.28E-07
x5  10   39   0.28794   1.17E-07    22   88   1.542     8.18E-07
x6  10   38   0.2989    5.08E-07    22   88   1.3244    7.87E-07
x7  17   67   0.68179   5.25E-07    20   80   1.0409    5.45E-07

Table 3. Numerical results for problem 3

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM
n = 1000
x1  11   44   0.006867  9.73E-07    15   60   0.077797  4.96E-07
x2  12   48   0.008871  9.29E-07    16   64   0.017838  3.39E-07
x3  11   44   0.005343  6.21E-07    16   64   0.012914  9.24E-07
x4  13   52   0.008538  7.43E-07    17   68   0.010386  8.94E-07
x5  10   40   0.006854  7.08E-07    18   72   0.013949  3.60E-07
x6  14   56   0.009688  4.75E-07    18   72   0.026721  3.47E-07
x7  17   68   0.018595  3.94E-07
n = 5000
x1  13   52   0.023164  5.44E-07    16   64   0.036111  3.74E-07
x2  14   56   0.023803  5.19E-07    16   64   0.045275  7.58E-07
x3  12   48   0.024052  6.95E-07    17   68   0.060382  6.84E-07
x4  14   56   0.02763   4.15E-07    18   72   0.11091   6.68E-07
x5  11   44   0.014771  3.96E-07    18   72   0.049381  8.05E-07
x6  15   60   0.025569  2.66E-07    18   72   0.065425  7.46E-07
x7  17   68   0.06995   8.75E-07
n = 10000
x1  13   52   0.037055  7.69E-07    16   64   0.12557   5.28E-07
x2  14   56   0.034567  7.34E-07    17   68   0.092694  3.55E-07
x3  12   48   0.056047  9.82E-07    17   68   0.079822  9.67E-07
x4  14   56   0.055368  5.87E-07    18   72   0.24877   9.44E-07
x5  11   44   0.038703  5.60E-07    20   80   0.080159  3.38E-07
x6  15   60   0.037159  3.76E-07    19   76   0.084531  3.50E-07
x7  18   72   0.1757    4.10E-07
n = 50000
x1  14   56   0.23139   8.60E-07    17   68   0.37534   3.91E-07
x2  15   60   0.16087   8.21E-07    17   68   0.24801   7.93E-07
x3  14   56   0.17539   5.49E-07    18   72   0.26549   7.25E-07
x4  15   60   0.18294   3.28E-07    20   80   0.46666   6.42E-07
x5  12   48   0.15805   3.13E-07    21   84   0.32816   5.20E-07
x6  15   60   0.23569   8.40E-07    21   84   0.48755   3.51E-07
x7  18   72   0.50034   9.18E-07
n = 100000
x1  15   60   0.28424   6.08E-07    17   68   0.73834   5.53E-07
x2  16   64   0.30566   5.80E-07    18   72   0.75733   3.76E-07
x3  14   56   0.30675   7.77E-07    19   76   0.54971   3.40E-07
x4  15   60   0.46637   4.64E-07    22   88   1.353     6.92E-07
x5  12   48   0.26464   4.43E-07    22   88   0.69186   6.17E-07
x6  16   64   0.31211   2.97E-07    22   88   1.0329    5.81E-07
x7  20   80   1.1918    4.62E-07

Table 4. Numerical results for problem 4

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1  11   44   0.005463  5.14E-07    15   60   0.008008  5.13E-07
x2  10   40   0.006904  7.26E-07    16   64   0.013881  3.59E-07
x3   2    7   0.002464  0.00E+00    16   64   0.024003  9.42E-07
x4   2    7   0.002327  0.00E+00    15   60   0.008609  6.44E-07
x5   2    7   0.00394   0.00E+00    17   68   0.016199  3.91E-07
x6   2    7   0.003647  0.00E+00    17   68   0.073791  7.89E-07
x7  10   40   0.004219  3.71E-07    17   68   0.017215  4.89E-07
n = 5000
x1  12   48   0.019223  5.75E-07    16   64   0.038264  3.86E-07
x2  11   44   0.015049  8.12E-07    16   64   0.032691  8.02E-07
x3   2    7   0.006792  0.00E+00    17   68   0.030878  7.00E-07
x4   2    7   0.006842  0.00E+00    16   64   0.026864  4.74E-07
x5   2    7   0.006989  0.00E+00    17   68   0.067797  8.74E-07
x6   2    7   0.006434  0.00E+00    19   76   0.031626  5.11E-07
x7  10   40   0.017007  1.66E-07    18   72   0.030529  3.71E-07
n = 10000
x1  12   48   0.026133  8.13E-07    16   64   0.046997  5.46E-07
x2  12   48   0.029647  5.74E-07    17   68   0.07771   3.76E-07
x3   2    7   0.008574  0.00E+00    17   68   0.0702    9.90E-07
x4   2    7   0.011381  0.00E+00    19   76   0.058083  3.70E-07
x5   2    7   0.008156  0.00E+00    18   72   0.097611  4.15E-07
x6   2    7   0.01211   0.00E+00    19   76   0.13803   7.22E-07
x7  13   52   0.065447  5.08E-07    18   72   0.075793  5.07E-07
n = 50000
x1  13   52   0.11185   9.09E-07    17   68   0.18421   4.04E-07
x2  13   52   0.17585   6.42E-07    17   68   0.19435   8.40E-07
x3   2    7   0.028215  0.00E+00    18   72   0.22118   7.39E-07
x4   2    7   0.040046  0.00E+00    20   80   0.29846   6.25E-07
x5   2    7   0.036865  0.00E+00    20   80   0.24516   8.13E-07
x6   2    7   0.034726  0.00E+00    22   88   0.45415   9.65E-07
x7  13   52   0.15831   3.24E-07    19   76   0.32983   6.75E-07
n = 100000
x1  14   56   0.26401   6.43E-07    17   68   0.56127   5.71E-07
x2  13   52   0.32171   9.08E-07    18   72   0.5503    3.98E-07
x3   2    7   0.060471  0.00E+00    19   76   0.42901   9.57E-07
x4   2    7   0.084635  0.00E+00    22   88   0.53544   3.99E-07
x5   2    7   0.081715  0.00E+00    24   96   0.95585   3.66E-07
x6   2    7   0.059423  0.00E+00    26  104   0.7676    3.55E-07
x7  13   52   0.2707    4.50E-07    19   76   0.56768   9.53E-07

Table 5. Numerical results for problem 5

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1  28  112   0.02166   6.16E-07    18   72   0.064493  4.82E-07
x2  28  112   0.036568  5.92E-07    18   72   0.017572  4.64E-07
x3  28  112   0.021776  5.22E-07    18   72   0.025552  4.08E-07
x4  27  108   0.017528  7.14E-07    17   68   0.027679  8.34E-07
x5  27  108   0.01623   5.73E-07    17   68   0.02813   6.69E-07
x6  26  104   0.048145  6.76E-07    17   68   0.013746  3.94E-07
x7  28  112   0.019109  5.26E-07    18   72   0.016604  4.11E-07
n = 5000
x1  29  116   0.10404   6.90E-07    19   76   0.15354   3.58E-07
x2  29  116   0.077534  6.63E-07    19   76   0.1347    3.44E-07
x3  29  116   0.075795  5.84E-07    18   72   0.061833  9.14E-07
x4  28  112   0.077236  8.00E-07    18   72   0.14442   6.26E-07
x5  28  112   0.086837  6.42E-07    18   72   0.058368  5.02E-07
x6  27  108   0.074944  7.57E-07    17   68   0.080337  8.83E-07
x7  29  116   0.088769  5.90E-07    18   72   0.060071  9.21E-07
n = 10000
x1  29  116   0.13172   9.75E-07    21   84   0.13537   4.00E-07
x2  29  116   0.13083   9.38E-07    21   84   0.13596   3.85E-07
x3  29  116   0.18213   8.26E-07    20   80   0.2194    5.83E-07
x4  29  116   0.12873   5.66E-07    18   72   0.14363   8.85E-07
x5  28  112   0.15234   9.08E-07    18   72   0.16376   7.10E-07
x6  28  112   0.15848   5.35E-07    18   72   0.099046  4.19E-07
x7  29  116   0.13706   8.34E-07    20   80   0.20036   5.88E-07
n = 50000
x1  31  124   0.64489   5.45E-07    24   96   0.73376   7.08E-07
x2  31  124   0.70682   5.24E-07    24   96   0.81236   6.81E-07
x3  30  120   0.55822   9.24E-07    23   92   0.6838    7.26E-07
x4  30  120   0.53198   6.32E-07    21   84   0.57411   5.18E-07
x5  30  120   0.54466   5.07E-07    21   84   0.66594   4.16E-07
x6  29  116   0.53353   5.98E-07    18   72   0.47458   9.36E-07
x7  30  120   0.53253   9.32E-07    23   92   0.78547   7.33E-07
n = 100000
x1  31  124   1.2364    7.71E-07    29  116   3.4129    5.93E-07
x2  31  124   1.5374    7.42E-07    28  112   2.232     6.09E-07
x3  31  124   1.3392    6.53E-07    26  104   1.9924    6.39E-07
x4  30  120   1.2903    8.94E-07    23   92   1.6393    7.03E-07
x5  30  120   1.3408    7.18E-07    22   88   1.4593    3.66E-07
x6  29  116   1.3172    8.46E-07    20   80   1.5262    5.97E-07
x7  31  124   1.3756    6.59E-07    26  104   2.0768    6.44E-07

Table 6. Numerical results for problem 6

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1   8   32   0.007545  2.09E-07    17   68   0.047472  6.92E-07
x2   8   32   0.007651  1.30E-07    17   68   0.01164   4.34E-07
x3   7   28   0.0044    4.82E-07     5   20   0.035276  4.50E-08
x4   9   36   0.006288  7.10E-08    18   72   0.031358  8.82E-07
x5   9   36   0.004769  2.98E-07    19   76   0.016064  8.09E-07
x6   9   35   0.005865  4.30E-07    18   71   0.019573  5.23E-07
x7  15   60   0.011421  2.47E-07    19   76   0.034046  4.32E-07
n = 5000
x1   8   32   0.10747   4.68E-07    18   72   0.060315  5.59E-07
x2   8   32   0.013586  2.90E-07    17   68   0.043677  9.70E-07
x3   8   32   0.02763   6.88E-08     5   20   0.020451  1.01E-07
x4   9   36   0.013456  1.59E-07    19   76   0.067458  7.14E-07
x5   9   36   0.021637  6.67E-07    20   80   0.048031  6.56E-07
x6   9   35   0.01803   9.62E-07    19   75   0.072431  4.22E-07
x7  17   68   0.067433  2.62E-07    19   76   0.072684  9.09E-07
n = 10000
x1   8   32   0.035531  6.62E-07    18   72   0.17816   7.90E-07
x2   8   32   0.024332  4.10E-07    18   72   0.12132   4.95E-07
x3   8   32   0.025852  9.73E-08     5   20   0.017535  1.42E-07
x4   9   36   0.022198  2.24E-07    20   80   0.14969   3.66E-07
x5   9   36   0.023586  9.43E-07    20   80   0.20198   9.28E-07
x6  10   39   0.027458  8.69E-08    21   84   0.09774   4.36E-07
x7  15   60   0.061535  7.77E-07    20   80   0.15572   4.75E-07
n = 50000
x1   9   36   0.091915  9.46E-08    19   76   0.30923   6.42E-07
x2   8   32   0.086014  9.17E-07    19   76   0.33924   4.02E-07
x3   8   32   0.10021   2.18E-07     5   20   0.073014  3.18E-07
x4   9   36   0.10806   5.02E-07    21   84   0.41239   8.23E-07
x5  10   40   0.10572   1.35E-07    21   84   0.57454   7.14E-07
x6  10   39   0.17226   1.94E-07    21   84   0.3827    9.75E-07
x7  18   72   0.2558    6.43E-07    21   84   0.93238   3.82E-07
n = 100000
x1   9   36   0.26898   1.34E-07    20   80   0.79946   7.45E-07
x2   9   36   0.17345   8.28E-08    19   76   1.0298    5.69E-07
x3   8   32   0.25039   3.08E-07     5   20   0.14119   4.50E-07
x4   9   36   0.21676   7.10E-07    22   88   1.0177    4.22E-07
x5  10   40   0.27836   1.91E-07    22   88   0.81176   7.50E-07
x6  10   39   0.19144   2.75E-07    22   88   0.93483   5.00E-07
x7  20   80   0.54314   3.45E-07    20   80   0.73771   6.67E-07

Table 7. Numerical results for problem 7

MRMIL PDY

DIM INP ITER FVAL TIME NORM ITER FVAL TIME NORM

n = 1000
x1  25  100   0.073028  4.63E-07    36  144   0.20315   6.34E-07
x2  25  100   0.11462   9.07E-07    35  140   0.2928    9.13E-07
x3  22   88   0.059525  9.01E-07    35  140   0.18604   7.34E-07
x4  25  100   0.085671  4.88E-07    33  132   0.16095   2.30E-07
x5  25  100   0.060418  1.67E-07    31  124   0.13438   8.06E-07
x6  25  100   0.080534  8.31E-07    24   96   0.10379   9.72E-07
x7  26  104   0.088063  5.63E-07    29  116   0.1692    3.15E-07
n = 5000
x1  28  112   0.35075   7.48E-07    34  136   0.71146   8.36E-07
x2  24   96   0.38934   8.05E-07    34  136   0.69158   7.93E-07
x3  25  100   0.30709   7.27E-07    34  136   0.63571   6.18E-07
x4  25  100   0.337     5.12E-07    31  124   0.66455   3.90E-07
x5  25  100   0.39557   5.84E-07    30  120   0.59363   8.11E-07
x6  23   92   0.27766   5.03E-07    24   96   0.54085   7.51E-07
x7  28  112   0.50805   6.90E-07    25  100   0.79827   2.93E-07
n = 10000
x1  30  120   0.71688   6.74E-07    34  136   1.8057    6.78E-07
x2  30  120   0.66154   7.68E-07    34  136   1.3939    6.42E-07
x3  25  100   0.55277   5.63E-07    33  132   1.3301    7.57E-07
x4  29  113   0.7388    9.13E-07    30  120   1.3051    3.94E-07
x5  25  100   0.57022   7.39E-07    30  120   1.1445    5.57E-07
x6  25  100   0.56951   8.67E-07    24   96   0.8758    7.21E-07
x7  29  116   0.72454   6.65E-07    25  100   0.89229   4.07E-07
n = 50000
x1  28  112   2.8081    8.07E-07    34  136   7.9299    6.35E-07
x2  30  120   2.9855    7.96E-07    33  132   6.6438    6.12E-07
x3  26  104   2.6057    9.25E-07    32  128   7.4126    7.22E-07
x4   5   17   0.40473   NaN         24   96   5.4526    3.36E-07
x5   7   25   0.6025    NaN         29  116   6.8103    5.83E-07
x6  28  112   2.9107    4.55E-07    31  124   6.0871    7.91E-07
x7  29  116   3.0354    2.75E-07    27  108   5.43      3.65E-07
n = 100000
x1  30  119   6.2617    4.81E-07    33  132   19.5575   8.00E-07
x2  28  112   5.7996    8.27E-07    33  132   17.3005   7.49E-07
x3  29  116   6.1519    8.53E-07    40  160   21.0229   9.75E-07
x4   5   17   0.83758   NaN         30  120   12.1478   9.85E-07
x5  26  104   5.5499    7.56E-07    28  112   10.8844   9.46E-07
x6  33  131   7.1703    6.89E-07    26  104   9.8098    9.05E-07
x7  31  124   6.7745    5.21E-07    27  108   9.8646    4.03E-07

References

[1] A.N. Iusem, M.V. Solodov, Newton-type methods with generalized distances for constrained optimization, Optimization, 41(3) (1997), 257-278.

[2] S. Huang, Z. Wan, A new nonmonotone spectral residual method for nonsmooth non-

linear equations, Journal of Computational and Applied Mathematics, 313 (2017),

82–101.

[3] Z. Wan, J. Guo, J. Liu, W. Liu, A modiﬁed spectral conjugate gradient projection

method for signal recovery, Signal, Image and Video Processing, 12(8) (2018), 1455–

1462.

[4] X.J. Tong, L. Qi, On the convergence of a trust-region method for solving constrained

nonlinear equations with degenerate solutions, Journal of optimization theory and

applications, 123(1) (2004), 187–211.

[5] C. Kanzow, N. Yamashita, M. Fukushima, Levenberg–marquardt methods with

strong local convergence properties for solving nonlinear equations with convex con-

straints, Journal of Computational and Applied Mathematics, 172(2) (2004), 375–

397.

[6] Q. Li, D.H. Li, A class of derivative-free methods for large-scale nonlinear monotone

equations, IMA Journal of Numerical Analysis, 31(4) (2011), 1625–1635.

[7] D. Li, M. Fukushima, A globally and superlinearly convergent gauss–newton-based

bfgs method for symmetric nonlinear equations, SIAM Journal on Numerical Anal-

ysis, 37(1) (1999), 152–172.

[8] D. Li, M, Fukushima, A globally and superlinearly convergent gauss-newton based

bfgs method for symmetric equations, Toclmical Report, (1998), 98006.

[9] W.W. Hager, H. Zhang, A survey of nonlinear conjugate gradient methods, Paciﬁc

journal of Optimization, 2(1) (2006), 35–58.

[10] M.V. Solodov, B.F. Svaiter, A globally convergent inexact newton method for sys-

tems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth,

Semismooth and Smoothing Methods, Springer, (1998), 355–369.

[11] W. Cheng, A prp type method for systems of monotone equations, Mathematical

and Computer Modelling, 50(1-2) (2009), 15–20.

[12] B.T. Polyak, The conjugate gradient method in extremal problems, USSR Compu-

tational Mathematics and Mathematical Physics, 9(4) (1969), 94–112.

[13] Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, Z. Li, Spectral gradient projection method

for monotone nonlinear equations with convex constraints, Applied numerical math-

ematics, 59(10) (2009), 2416–2423.

[14] W.L. Cruz, M. Raydan, Nonmonotone spectral methods for large-scale nonlinear

systems, Optimization Methods and Software, 18(5) ( 2003), 583–599.

[15] W.L. Cruz, J. Mart´ınez, M. Raydan, Spectral residual method without gradient

information for solving large-scale nonlinear systems of equations, Mathematics of

Computation, 75(255) (2006), 1429–1448.

[16] L. Zhang, W. Zhou, Spectral gradient projection method for solving nonlinear mono-

tone equations, Journal of Computational and Applied Mathematics, 196(2) (2006),

478–484.

[17] A.B. Abubakar, P. Kumam, A descent dai-liao conjugate gradient method for non-

linear equations, Numerical Algorithms, 81(1) (2019), 197–210.

230

[18] A.B. Abubakar, P. Kumam, A.M. Awwal, P. Thounthong, A modiﬁed self-adaptive

conjugate gradient method for solving convex constrained monotone nonlinear equa-

tions for signal reovery problems, Mathematics, 7(8) (2019), 693.

[19] A. Padcharoen, P. Sukprasert, Nonlinear Operators as Concerns Convex Program-

ming and Applied to Signal Processing, Mathematics, 7(9) (2019), 866.

[20] A. Padcharoen, P. Kumam, Y.J. Cho, Split common ﬁxed point problems for demi-

contractive operators, Numerical Algorithms 82(1) (2019), 297–320.

[21] D. Kitkuan, P. Kumam, A. Padcharoen, W. Kumam, P. Thounthong, Algorithms

for zeros of two accretive operators for solving convex minimization problems and its

application to image restoration problems, Journal of Computational and Applied

Mathematics, 354 (2019), 471–495.

[22] A. Padcharoen, P. Kumam, J. Mart´ınez-Moreno, Augmented Lagrangian method

for TV-l1-l2based colour image restoration, Journal of Computational and Applied

Mathematics, 354 (2019), 507–519.

[23] Adaptive algorithm for solving the SCFPP of demicontractive operators without a

priori knowledge of operator norms, Analele Universitatii “Ovidius” Constanta-Seria

Matematica, 27(3) (2019), 153–175.

[24] A. Padcharoen, D. Kitkuan, P. Kumam, J. Rilwan, W. Kumam, Accelerated alter-

nating minimization algorithm for poisson noisy image recovery, Inverse Problems in

Science and Engineering, (2020).

[25] A.B. Abubakar, P. Kumam, H. Mohammad, A.M. Awwal, K. Sitthithakerngkiet, A

modiﬁed ﬂetcher–reeves conjugate gradient method for monotone nonlinear equations

with some applications, Mathematics, 7(8) (2019), 745.

[26] A.B. Abubakar, P. Kumam, H. Mohammad, and A.M. Awwal, An eﬃcient con-

jugate gradient method for convex constrained monotone nonlinear equations with

applications, Mathematics, 7(9) (2019), 767.

[27] H. Mohammad and A.B. Abubakar, A positive spectral gradient-like method for

large-scale nonlinear monotone equations, Bulletin of Computational and Applied

Mathematics, 5(1) (2017), 99–115.

[28] A.M. Awwal, P. Kumam, A.B. Abubakar, A. Wakili, A projection hestenes-stiefel-

like method for monotone nonlinear equations with convex constraints, Thai Journal

of Mathematics, (2019), 181–199.

[29] A.B. Abubakar, P. Kumam, A.M. Awwal, A descent dai-liao projection method for

convex constrained nonlinear monotone equations with applications, Thai Journal of

Mathematics, (2019), 128–152.

[30] A.B. Abubakar, P. Kumam, A.M. Awwal, Global convergence via descent modi-

ﬁed three-term conjugate gradient projection algorithm with applications to signal

recovery, Results in Applied Mathematics, 4 (2019), 100069.

[31] A.B. Abubakar, P. Kumam, A.M. Awwal, An inexact conjugate gradient method for

symmetric nonlinear equations, Computational and Mathematical Methods, (2019).

[32] A.M. Awwal, P. Kumam, A.B. Abubakar, Spectral modiﬁed polak–ribi´ere–polyak

projection conjugate gradient method for solving monotone systems of nonlinear

equations, Applied Mathematics and Computation, 362 (2019), 124514.

[33] A.M. Awwal, P. Kumam, A.B. Abubakar, A modiﬁed conjugate gradient method for

monotone nonlinear equations with convex constraints, Applied Numerical Mathe-

matics, (2019).

231

[34] A.B. Abubakar, P. Kumam, An improved three-term derivative-free method for solv-

ing nonlinear equations, Computational and Applied Mathematics, 37(5) (2018),

6760–6773.

[35] M. Rivaie, M. Mamat, L.W. June, I. Mohd, A new class of nonlinear conjugate

gradient coeﬃcients with global convergence properties, Applied Mathematics and

Computation, 218(22) (2012), 11323–11332.

[36] J. Liu, Y. Feng, A derivative-free iterative method for nonlinear monotone equations

with convex constraints, Numerical Algorithms, (2018), 1–18.

[37] W.L. Cruz, J. Mart´ınez, M. Raydan, Spectral residual method without gradient

information for solving large-scale nonlinear systems of equations, Mathematics of

Computation, 75(255) (2006), 1429–1448.

[38] W. Zhou, D. Li, Limited memory bfgs method for nonlinear monotone equations,

Journal of Computational Mathematics, (2007), 89–96.

[39] C. Wang, Y. Wang, C. Xu, A projection method for a system of nonlinear monotone

equations with convex constraints, Mathematical Methods of Operations Research,

66(1) (2007), 33–46.

[40] Y. Bing, G. Lin, An eﬃcient implementation of merrills method for sparse or par-

tially separable systems of nonlinear equations, SIAM Journal on Optimization, 1(2)

(1991), 206–221.

[41] G. Yu, S. Niu, J. Ma, Multivariate spectral gradient projection method for nonlinear

monotone equations with convex constraints, Journal of Industrial and Management

Optimization, 9(1) (2013), 117–129.

[42] E.D. Dolan, J.J. Mor´e, Benchmarking optimization software with performance pro-

ﬁles, Mathematical programming, 91(2) (2002), 201–213.