ISSN 1686-0209
Thai Journal of Mathematics
Vol. 18, No. 1 (2020), Pages 211-231
DERIVATIVE-FREE RMIL CONJUGATE GRADIENT
ALGORITHM FOR CONVEX CONSTRAINED
EQUATIONS
Abdulkarim Hassan Ibrahim1,2, Garba Abor Isa3, Halima Usman3, Jamilu Abubakar1,2,3, Auwal Bala Abubakar1,2,4
1 Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
2 KMUTT Fixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
3 Department of Mathematics, Faculty of Science, Usmanu Danfodiyo University Sokoto, PMB 2346, Nigeria
4 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano, Nigeria
Email addresses: ibrahimkarym@gmail.com (A. H. Ibrahim), garba.isa@udusok.edu.ng (G. A. Isa), halima.usman@udusok.edu.ng (H. Usman), abubakar.jamilu@udusok.edu.ng (J. Abubakar), ababubakar.mth@buk.edu.ng (A. B. Abubakar)
Abstract An efficient method for solving large-scale unconstrained optimization problems is the conjugate gradient method. Based on the conjugate gradient algorithm proposed by Rivaie et al. ("A new class of nonlinear conjugate gradient coefficients with global convergence properties," Applied Mathematics and Computation 218(22) (2012), 11323-11332), we propose a spectral conjugate gradient algorithm for solving nonlinear equations with convex constraints which generates a sufficient descent direction at each iteration. Under the Lipschitz continuity assumption, the global convergence of the algorithm is established. Furthermore, the proposed algorithm is shown to be linearly convergent under some appropriate conditions. Numerical experiments are reported to show the efficiency of the algorithm.
MSC: 47H05; 47J20; 47J25; 65K15
Keywords: Projection Method, Subgradient extragradient method, Inertial type algorithm, Monotone
operator, Variational inequality.
Submission date: 30.11.2019 / Acceptance date: 18.01.2020
1. Introduction
In the last decade, several articles have been written on the subject of iterative methods. These articles have focused on methods for solving nonlinear systems of equations. This is due to the numerous problems encountered in the fields of science and engineering in which nonlinear equations appear in a vast range of applications. For instance, the subproblem in generalized proximal algorithms with Bregman distances [1] takes this form. Also, in real-world applications such as the Nash equilibrium problem in economics [2] and the signal
processing problem in [3], the problems need to be reformulated as a nonlinear system of equations. It is therefore essential to develop efficient algorithms for solving the nonlinear equations arising in these fields.
Let $C$ be a nonempty closed and convex subset of $\mathbb{R}^n$ and $F:\mathbb{R}^n \to \mathbb{R}^n$ be a monotone mapping, that is,
$$(x-y)^T(F(x)-F(y)) \ge 0, \quad \forall x, y \in \mathbb{R}^n. \tag{1.1}$$
The focus of this work is on the nonlinear equation
$$F(x) = 0, \quad x \in C. \tag{1.2}$$
Several iterative algorithms have been proposed for solving the nonlinear problem (1.2). A few include the trust-region method [4], the Levenberg-Marquardt method [5], the TPRP method [6] and the Gauss-Newton methods [7, 8]. However, the methods mentioned are typically unsuitable for handling large-scale nonlinear equations because the computation and storage of a matrix are required at each iteration. One of the preferable methods for solving this problem is the conjugate gradient (CG) method. The CG method is a popular iterative method developed with the sole aim of solving large-scale unconstrained optimization problems. For an excellent survey on CG methods, see [9].
Following the well-known projection scheme of Solodov and Svaiter in [10], the CG method has been extended by many authors to solve (1.2). One of many such extensions is the method of Cheng et al. in [11], where they extended the PRP method [12] to solve unconstrained monotone equations. Recently, the spectral gradient projection (SP) method [13] was extended to solve monotone nonlinear convex constrained equations. Numerical experiments indicate that the proposed method is suitable for large-scale problems. Thus, CG methods for solving unconstrained optimization problems have been extended by various authors to solve convex constrained monotone nonlinear equations. For more related articles, we refer the reader to [14-34] and the references therein.
Motivated by the results of Rivaie et al. [35], we propose a derivative-free spectral gradient-type iterative projection method for solving (1.2). The global convergence of the method is proved under some conditions. Furthermore, the linear convergence rate of the proposed method is established under some assumptions.
The remaining part of this paper is organized as follows. In section 2, we introduce our algorithm and recall the method for unconstrained optimization problems posed in [35]. We establish the global convergence of the method in section 3. We report the results of numerical experiments conducted on benchmark test problems in section 4. Finally, we conclude the paper in section 5.
2. Algorithm
We begin this section by presenting our proposed algorithm for solving (1.2). We assume that the reader is familiar with the conjugate gradient method. Motivated by the RMIL conjugate gradient algorithm proposed by Rivaie et al. [35] for solving large-scale unconstrained optimization problems, we propose an efficient derivative-free algorithm for solving nonlinear monotone equations with convex constraints (1.2) by using the projection technique of Solodov and Svaiter [10]. Firstly, we define the search direction as follows:
$$d_k = \begin{cases} -v_k F(x_k) + \beta_k^{ERMIL} d_{k-1}, & \text{if } k > 0,\\[2pt] -F(x_k), & \text{if } k = 0, \end{cases} \tag{2.1}$$
where
$$\beta_k^{ERMIL} = \frac{F(x_k)^T\big(F(x_k) - F(x_{k-1})\big)}{\|d_{k-1}\|^2}. \tag{2.2}$$
For convenience, we refer to (2.1) and (2.2) as the MRMIL algorithm. We note that if $F$ is the gradient of a real-valued function $f:\mathbb{R}^n \to \mathbb{R}$, then the sufficient descent condition
$$d_k^T F(x_k) \le -c\|F(x_k)\|^2, \tag{2.3}$$
where $c$ is a positive constant, means that $d_k$ is a direction of sufficient descent for $f$ at $x_k$. We obtain $v_k$ so as to satisfy (2.3). In the following, we abbreviate $F(x_k)$ as $F_k$ and write $y_{k-1} = F(x_k) - F(x_{k-1})$.
For $k = 0$, (2.3) obviously holds. For $k \in \mathbb{N}$, using (2.1), (2.2) and the Cauchy-Schwarz inequality, we have
$$d_k^T F_k \le \left(-v_k + \frac{\|y_{k-1}\|}{\|d_{k-1}\|}\right)\|F_k\|^2.$$
To satisfy (2.3), it is only needed that
$$v_k \ge c + \frac{\|y_{k-1}\|}{\|d_{k-1}\|}.$$
Without loss of generality, in this paper we choose $v_k$ as
$$v_k = c + \frac{\|y_{k-1}\|}{\|d_{k-1}\|}. \tag{2.4}$$
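As a quick numerical sanity check of this derivation, one can verify that the direction built from (2.1), (2.2) and (2.4) satisfies (2.3) for arbitrary vectors. The following small NumPy sketch (with names of our own choosing; not part of the original paper) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0
F_prev = rng.normal(size=5)   # stand-in for F(x_{k-1})
F_k = rng.normal(size=5)      # stand-in for F(x_k)
d_prev = rng.normal(size=5)   # stand-in for d_{k-1}

y = F_k - F_prev                                      # y_{k-1}
v = c + np.linalg.norm(y) / np.linalg.norm(d_prev)    # v_k from (2.4)
beta = (F_k @ y) / (d_prev @ d_prev)                  # beta_k^{ERMIL} from (2.2)
d = -v * F_k + beta * d_prev                          # d_k from (2.1)

# Sufficient descent (2.3): d_k^T F_k <= -c ||F_k||^2 (up to rounding).
assert d @ F_k <= -c * (F_k @ F_k) + 1e-12
```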
Next, we recall the projection operator, which is defined as the mapping $P_C:\mathbb{R}^n \to C$, where $C$ is a nonempty closed convex set, such that
$$P_C(x) = \arg\min\{\|x - y\| \mid y \in C\}. \tag{2.5}$$
Throughout this article, $\|\cdot\|$ denotes the Euclidean norm. A well-known characterization of the projection operator is its nonexpansive property; that is, for any $x, y \in \mathbb{R}^n$,
$$\|P_C(x) - P_C(y)\| \le \|x - y\|.$$
Consequently,
$$\|P_C(x) - y\| \le \|x - y\|, \quad \forall y \in C. \tag{2.6}$$
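For the constraint sets used later in Section 4 (for instance $C = \mathbb{R}^n_+$ or a box), the projection (2.5) has a simple closed form. A minimal NumPy sketch, with function names of our own choosing, is:

```python
import numpy as np

def proj_nonneg(x):
    """Projection onto C = R^n_+ : clip negative components to zero."""
    return np.maximum(x, 0.0)

def proj_box(x, lo, hi):
    """Projection onto the box C = {x : lo <= x <= hi}, componentwise."""
    return np.minimum(np.maximum(x, lo), hi)
```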
In the remainder of this paper, we always assume that $F$ satisfies the following assumptions.
Assumption 2.1. The mapping $F:\mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous; that is, there exists a positive constant $L$ such that
$$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{2.7}$$
Assumption 2.2. Let $C^*$ be the solution set of (1.2). For any solution $x^* \in C^*$, there exists a nonnegative constant $\gamma$ satisfying
$$\gamma\,\mathrm{dist}(x, C^*) \le \|F(x)\|^2, \quad \forall x \in N(x^*, \gamma), \tag{2.8}$$
where $\mathrm{dist}(x, C^*)$ is the distance from $x$ to $C^*$ and $N(x^*, \gamma) := \{x \in \mathbb{R}^n \mid \|x - x^*\| \le \gamma\}$.
We state the steps of the algorithm as follows.
Algorithm 2.3 (MRMIL).
Input. Choose an initial point $x_0 \in \mathbb{R}^n$ and positive constants $Tol > 0$, $\varpi \in (0, 2)$, $\rho \in (0, 1)$, $\kappa > 0$, $\sigma > 0$. Set $k = 0$.
Step 0. If $\|F_k\| \le Tol$, stop. Otherwise, generate the search direction by
$$d_k = \begin{cases} -v_k F(x_k) + \beta_k^{ERMIL} d_{k-1}, & \text{if } k > 0,\\[2pt] -F(x_k), & \text{if } k = 0. \end{cases} \tag{2.9}$$
Step 1. Compute $t_k = \max\{\kappa\rho^i \mid i = 0, 1, 2, \dots\}$ and set $z_k = x_k + t_k d_k$ so as to satisfy
$$-F(z_k)^T d_k \ge \sigma t_k \|d_k\|^2. \tag{2.10}$$
Step 2. If $z_k \in C$ and $F(z_k) = 0$, stop. Otherwise, compute the next iterate by
$$x_{k+1} = P_C[x_k - \varpi\xi_k F(z_k)], \tag{2.11}$$
where
$$\xi_k = \frac{F(z_k)^T(x_k - z_k)}{\|F(z_k)\|^2}.$$
Step 3. Finally, set $k = k + 1$ and return to Step 0.
Lemma 2.4. Let $d_k$ be the search direction generated by Algorithm 2.3. Then $d_k$ always satisfies (2.3).
Proof. The proof follows from (2.4).
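To make the steps above concrete, the following NumPy sketch implements Algorithm 2.3 under our reading of the scheme. The names (`mrmil`, `proj`, `F`) are ours, the default parameter values follow those used in the experiments of Section 4, and the default $\kappa = 1$ is an assumption (the paper does not report its value); this is a sketch, not the authors' Matlab code.

```python
import numpy as np

def mrmil(F, proj, x0, c=1.0, kappa=1.0, rho=0.5, sigma=0.001,
          varpi=1.8, tol=1e-6, max_iter=1000):
    """Sketch of Algorithm 2.3 (MRMIL) for F(x) = 0, x in C.

    F    : monotone mapping from R^n to R^n
    proj : projection P_C onto the closed convex set C
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                        # d_0 = -F(x_0)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:              # Step 0: stopping test
            return x
        # Step 1: backtracking line search (2.10) with t_k = kappa * rho^i
        t = kappa
        z = x + t * d
        while -(F(z) @ d) < sigma * t * (d @ d):
            t *= rho
            z = x + t * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:              # Step 2: z_k solves (1.2)
            return z
        xi = (Fz @ (x - z)) / (Fz @ Fz)            # xi_k
        x_new = proj(x - varpi * xi * Fz)          # projection step (2.11)
        # Next direction from (2.1)-(2.2) with the spectral parameter (2.4)
        F_new = F(x_new)
        y = F_new - Fx                             # y_k = F(x_{k+1}) - F(x_k)
        v = c + np.linalg.norm(y) / np.linalg.norm(d)
        beta = (F_new @ y) / (d @ d)
        d = -v * F_new + beta * d
        x, Fx = x_new, F_new
    return x
```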
3. Convergence Analysis
In order to establish the convergence of Algorithm 2.3, we need the following lemmas.
Lemma 3.1. Let $\{d_k\}$ and $\{x_k\}$ be the two sequences generated by Algorithm 2.3. Then there exists a step size $t_k$ satisfying the line search (2.10) for all $k \ge 0$.
Proof. Suppose that (2.10) does not hold at the $k_0$th iterate; then for any $i \ge 0$,
$$-\big\langle F(x_{k_0} + \kappa\rho^i d_{k_0}),\, d_{k_0}\big\rangle < \sigma\kappa\rho^i\|d_{k_0}\|^2.$$
Thus, by the continuity of $F$ and $0 < \rho < 1$, letting $i \to \infty$ yields
$$-F(x_{k_0})^T d_{k_0} \le 0,$$
which contradicts (2.3).
Lemma 3.2. Suppose that Assumption 2.1 holds. Let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 2.3; then
$$t_k \ge \min\left\{\kappa,\ \frac{\rho c\|F_k\|^2}{(L+\sigma)\|d_k\|^2}\right\}. \tag{3.1}$$
Proof. From the line search (2.10), if $t_k \neq \kappa$, then $t'_k = t_k/\rho$ does not satisfy (2.10); that is,
$$-F\Big(x_k + \tfrac{t_k}{\rho}d_k\Big)^T d_k < \sigma\,\tfrac{t_k}{\rho}\,\|d_k\|^2.$$
It follows from (2.3) and (2.7) that
$$c\|F_k\|^2 \le -F_k^T d_k = \Big(F\Big(x_k + \tfrac{t_k}{\rho}d_k\Big) - F_k\Big)^T d_k - F\Big(x_k + \tfrac{t_k}{\rho}d_k\Big)^T d_k \le \tfrac{t_k}{\rho}(L + \sigma)\|d_k\|^2.$$
This gives the desired inequality (3.1).
Lemma 3.3. Suppose that Assumption 2.1 holds. Let $\{x_k\}$ and $\{z_k\}$ be the sequences generated by Algorithm 2.3; then for any $x^* \in C^*$ the inequality
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \varpi(2 - \varpi)\sigma^2\frac{\|x_k - z_k\|^4}{\|F(z_k)\|^2} \tag{3.2}$$
holds. In addition, $\{x_k\}$ is bounded and
$$\sum_{k=0}^{\infty}\|x_k - z_k\|^4 < +\infty. \tag{3.3}$$
Proof. First, we begin by using the monotonicity of the mapping $F$. Thus, for any solution $x^* \in C^*$,
$$F(z_k)^T(x_k - x^*) \ge F(z_k)^T(x_k - z_k).$$
The above inequality together with (2.10) gives
$$F(x_k + t_k d_k)^T(x_k - z_k) \ge \sigma t_k^2\|d_k\|^2 \ge 0. \tag{3.4}$$
We have the following from (2.6) and (3.4):
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \|P_C(x_k - \varpi\xi_k F(x_k + t_k d_k)) - x^*\|^2\\
&\le \|x_k - \varpi\xi_k F(x_k + t_k d_k) - x^*\|^2\\
&= \|x_k - x^*\|^2 - 2\varpi\xi_k F(x_k + t_k d_k)^T(x_k - x^*) + \|\varpi\xi_k F(x_k + t_k d_k)\|^2\\
&\le \|x_k - x^*\|^2 - 2\varpi\xi_k F(x_k + t_k d_k)^T(x_k - z_k) + \|\varpi\xi_k F(x_k + t_k d_k)\|^2\\
&= \|x_k - x^*\|^2 - \varpi(2 - \varpi)\frac{\big(F(z_k)^T(x_k - z_k)\big)^2}{\|F(z_k)\|^2}\\
&\le \|x_k - x^*\|^2 - \varpi(2 - \varpi)\sigma^2\frac{\|x_k - z_k\|^4}{\|F(z_k)\|^2}.
\end{aligned} \tag{3.6}$$
Thus, the sequence $\{\|x_k - x^*\|\}$ is decreasing, which implies that $\{x_k\}$ is bounded; that is,
$$\|x_k\| \le \varsigma, \quad \forall k \ge 0. \tag{3.7}$$
Furthermore, using the continuity of $F$, we know that there exists a constant $I_1 > 0$ such that
$$\|F(x_k)\| \le I_1, \quad \forall k \ge 0.$$
Since $(F(x_k) - F(z_k))^T(x_k - z_k) \ge 0$ by monotonicity, the Cauchy-Schwarz inequality and the line search (2.10) give
$$\|F(x_k)\|\,\|x_k - z_k\| \ge F(x_k)^T(x_k - z_k) \ge F(z_k)^T(x_k - z_k) \ge \sigma\|x_k - z_k\|^2.$$
So we have
$$\sigma\|x_k - z_k\| \le \|F(x_k)\| \le I_1,$$
which implies that $\{z_k\}$ is bounded. By the continuity of $F$, we know that there exists a constant $I_2 > 0$ such that
$$\|F(z_k)\| \le I_2, \quad \forall k \ge 0.$$
The above combined with (3.6) yields
$$\frac{\varpi(2 - \varpi)\sigma^2}{I_2^2}\|x_k - z_k\|^4 \le \|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2. \tag{3.8}$$
Now, by taking the summation of (3.8) over $k \ge 0$, we have
$$\frac{\varpi(2 - \varpi)\sigma^2}{I_2^2}\sum_{k=0}^{\infty}\|x_k - z_k\|^4 \le \sum_{k=0}^{\infty}\big(\|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2\big) \le \|x_0 - x^*\|^2 < \infty. \tag{3.9}$$
In particular, (3.9) implies that
$$\lim_{k\to\infty}\|x_k - z_k\| = 0. \tag{3.10}$$
The proof is complete.
Theorem 3.4. Suppose that Assumption 2.1 holds and let $\{x_k\}$ be the sequence generated by Algorithm 2.3. Then we have
$$\liminf_{k\to\infty}\|F_k\| = 0. \tag{3.11}$$
Proof. Suppose (3.11) is not valid; that is, there exists a constant $r > 0$ such that
$$\|F_k\| \ge r, \quad \forall k \ge 0.$$
This along with (2.3) implies that
$$\|d_k\| \ge cr, \quad \forall k \ge 0. \tag{3.12}$$
Since $\{\|F_k\|\}$ and $\{\|F(z_k)\|\}$ are bounded, it follows from (2.1)-(2.4) that for all $k \ge 1$,
$$\begin{aligned}
\|d_k\| &\le c\|F_k\| + \|F_k\|\cdot\frac{\|y_{k-1}\|}{\|d_{k-1}\|} + \|F_k\|\cdot\frac{\|y_{k-1}\|}{\|d_{k-1}\|^2}\,\|d_{k-1}\|\\
&= c\|F_k\| + \frac{2\|F_k\|\,\|y_{k-1}\|}{\|d_{k-1}\|}\\
&\le c\|F_k\| + \frac{2L\|F_k\|\,\|x_k - x_{k-1}\|}{\|d_{k-1}\|}\\
&\le cI_1 + \frac{4L I_1\varsigma}{cr} =: \Gamma.
\end{aligned}$$
Note that the first inequality follows from (2.1), (2.2), (2.4) and the Cauchy-Schwarz inequality; the second follows from (2.7); and the last uses (3.7) and (3.12). Now, from (3.1), we have
$$t_k\|d_k\| \ge \min\left\{\kappa,\ \frac{\rho c\|F_k\|^2}{(L+\sigma)\|d_k\|^2}\right\}\|d_k\| \ge \min\left\{\kappa cr,\ \frac{\rho c r^2}{(L+\sigma)\Gamma}\right\} > 0,$$
which, since $\|x_k - z_k\| = t_k\|d_k\|$, contradicts (3.10). Hence (3.11) is valid.
Theorem 3.5. Let $\{x_k\}$ be the sequence generated by Algorithm 2.3 under Assumptions 2.1-2.2. Then the sequence $\{\mathrm{dist}(x_k, C^*)\}$ Q-linearly converges to zero.
Proof. Let us set $\mu_k = \arg\min\{\|x_k - h\| \mid h \in C^*\}$. This implies that
$$\|x_k - \mu_k\| = \mathrm{dist}(x_k, C^*).$$
From (3.2), for $\mu_k \in C^*$ we obtain
$$\begin{aligned}
\mathrm{dist}(x_{k+1}, C^*)^2 &\le \|x_{k+1} - \mu_k\|^2\\
&\le \mathrm{dist}(x_k, C^*)^2 - \sigma^2\|t_k d_k\|^4\\
&\le \mathrm{dist}(x_k, C^*)^2 - \sigma^2 c^4 t_k^4\|F_k\|^4\\
&\le \mathrm{dist}(x_k, C^*)^2 - \sigma^2\gamma^2 c^4 t_k^4\,\mathrm{dist}(x_k, C^*)^2\\
&= \big(1 - \sigma^2\gamma^2 c^4 t_k^4\big)\,\mathrm{dist}(x_k, C^*)^2.
\end{aligned}$$
Note that the fourth inequality follows from the inequality in Assumption 2.2. If the parameters are chosen so that $t_k^2 < \frac{1}{\gamma\sigma c^2}$, then $1 - \sigma^2\gamma^2 c^4 t_k^4 \in (0, 1)$ holds. Finally, we see that $\mathrm{dist}(x_k, C^*)$ Q-linearly converges to zero.
4. Numerical Experiments
This section provides an insight into the computational behavior of the proposed algorithm. We test the performance of Algorithm 2.3 against an existing method from the literature using some benchmark test problems. Precisely, we compare our algorithm with the PDY algorithm [36] designed for solving the same problem (1.2). The numerical experiments are carried out on a set of seven different problems with dimensions ranging from $n = 1000$ to $n = 100{,}000$ and initial points set as follows:
$$x_1 = (0.1, 0.1, \dots, 0.1)^T,\quad x_2 = (0.2, 0.2, \dots, 0.2)^T,\quad x_3 = (0.5, 0.5, \dots, 0.5)^T,\quad x_4 = (1.2, 1.2, \dots, 1.2)^T,$$
$$x_5 = (1.5, 1.5, \dots, 1.5)^T,\quad x_6 = (2, 2, \dots, 2)^T,\quad x_7 = \mathrm{rand}(n, 1).$$
Throughout, we set the parameters for the PDY algorithm as in [36]. For Algorithm 2.3, the values of our parameters were set as follows: $c = 1$, $\rho = 0.5$, $\sigma = 0.001$, $\varpi = 1.8$. For each test problem, the iterative process is stopped when the inequality
$$\|F_k\| \le 10^{-6}$$
is satisfied. Again, failure is declared after a thousand iterations. All algorithms were written in Matlab and run on an HP personal computer with the following specifications: Intel(R) Core(TM) i3-7100U CPU 2.40 GHz, 8 GB memory and Windows 10 operating system.
We give a list of the benchmark test problems used in our experiment. Note that in this article, we take the mapping $F$ as $F(x) = (f_1(x), f_2(x), \dots, f_n(x))^T$.
Problem 1. The Exponential function [37] with constraint set $C = \mathbb{R}^n_+$; that is,
$$f_1(x) = e^{x_1} - 1,$$
$$f_i(x) = e^{x_i} + x_i - 1, \quad \text{for } i = 2, 3, \dots, n.$$
Problem 2. The Modified Logarithmic function [15] with constraint set $C = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i > -1,\ i = 1, 2, \dots, n\}$; that is,
$$f_i(x) = \ln(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, \dots, n.$$
Problem 3. The Nonsmooth function [38] with constraint set $C = \mathbb{R}^n_+$; that is,
$$f_i(x) = 2x_i - \sin|x_i|, \quad i = 1, 2, 3, \dots, n.$$
Problem 4. The Strictly Convex function [39] with constraint set $C = \mathbb{R}^n_+$; that is,
$$f_i(x) = e^{x_i} - 1, \quad i = 1, 2, \dots, n.$$
Problem 5. The Tridiagonal Exponential function [40] with constraint set $C = \mathbb{R}^n_+$; that is,
$$f_1(x) = x_1 - e^{\cos(h(x_1 + x_2))},$$
$$f_i(x) = x_i - e^{\cos(h(x_{i-1} + x_i + x_{i+1}))}, \quad \text{for } 2 \le i \le n - 1,$$
$$f_n(x) = x_n - e^{\cos(h(x_{n-1} + x_n))}, \quad \text{where } h = \frac{1}{n+1}.$$
Problem 6. The Nonsmooth function [41] with constraint set $C = \{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i \ge -1,\ 1 \le i \le n\}$; that is,
$$f_i(x) = x_i - \sin|x_i - 1|, \quad i = 1, 2, \dots, n.$$
Problem 7. The Trig exp function [37] with constraint set $C = \mathbb{R}^n_+$; that is,
$$f_1(x) = 3x_1^3 + 2x_2 - 5 + \sin(x_1 - x_2)\sin(x_1 + x_2),$$
$$f_i(x) = 3x_i^3 + 2x_{i+1} - 5 + \sin(x_i - x_{i+1})\sin(x_i + x_{i+1}) + 4x_i - x_{i-1}e^{x_{i-1} - x_i} - 3, \quad \text{for } i = 2, 3, \dots, n - 1,$$
$$f_n(x) = x_{n-1}e^{x_{n-1} - x_n} - 4x_n - 3.$$
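As an illustration of how such test problems are coded, Problem 1 together with the projection onto $C = \mathbb{R}^n_+$ can be written as follows; this is a sketch with our own naming, reusing the `mrmil` and `proj_nonneg` sketches given earlier:

```python
import numpy as np

def F_exponential(x):
    """Problem 1: f_1(x) = e^{x_1} - 1, f_i(x) = e^{x_i} + x_i - 1 (i >= 2)."""
    f = np.exp(x) + x - 1.0
    f[0] = np.exp(x[0]) - 1.0
    return f

# Example run with n = 1000 and the initial point x1 = (0.1, ..., 0.1)^T.
x0 = 0.1 * np.ones(1000)
sol = mrmil(F_exponential, proj_nonneg, x0)
print(np.linalg.norm(F_exponential(sol)))  # should be <= 1e-6 at termination
```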
In order to visualize the behavior of Algorithm 2.3, we adopt the performance profiles proposed by Dolan and Moré in [42] to compare the performance of the tested methods. The performance profile shows how well each solver performs relative to the other solvers on a set of problems, based on the total number of iterations, the total number of function evaluations, and the running time of each method. The details of our numerical tests are presented in the Appendix. We denote by "Iter." the number of iterations, "Fval." the number of function evaluations and "Time." the CPU time in seconds.
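For readers who wish to reproduce such plots, the Dolan-Moré profile of a solver $s$ is the fraction of problems on which its performance ratio $r_{p,s} = t_{p,s}/\min_s t_{p,s}$ does not exceed a threshold $\tau$. A compact sketch (our own naming, with failures marked by `np.inf`) is:

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost (Iter., Fval. or Time) of solver s on problem p,
    with np.inf marking a failure. Returns rho[s, i], the fraction of
    problems whose performance ratio r_{p,s} is at most taus[i]."""
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,s} >= 1
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```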
[Figure 1. Performance profiles with respect to the number of iterations (MRMIL vs. PDY).]
The figures in this section show the performance profiles of our method versus the other recent existing method. The performance of the methods is measured by the number of iterations, the number of function evaluations and the CPU time. It is not difficult to see that both methods solved all the test problems successfully. However, the MRMIL algorithm performs better overall on these measures compared to the PDY algorithm.
In detail, Figure 1 illustrates the performance profile of our method when the performance index is the total number of iterations. It can be seen that the MRMIL algorithm is the best solver with probability around 79%, while the probability of the compared method solving the same problems as the best solver is around 31%. Figures 2 and 3 illustrate the performance profiles for the total number of function evaluations and the CPU time, respectively. Conclusions similar to those for Figure 1 can be drawn from these figures.
[Figure 2. Performance profiles with respect to the number of function evaluations (MRMIL vs. PDY).]
[Figure 3. Performance profiles with respect to CPU time (MRMIL vs. PDY).]
5. Conclusion
In this article, the authors proposed a modified conjugate gradient algorithm for solving monotone nonlinear equations with convex constraints. This work can be regarded as an extension of the method in [35]. Under some technical conditions, we established the global convergence of the proposed method. We presented numerical results to illustrate that our method is stable and efficient for monotone nonlinear equations, especially for large-scale problems with convex constraints.
6. Acknowledgements
This project was supported by the Theoretical and Computational Science (TaCS) Center under the Computational and Applied Science for Smart Research Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant No. 16/2561).
Appendix
Table 1. Numerical results for problem 1

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    5     19    0.017596  0.00E+00   16    64    0.040997  3.45E-07
        x2    5     19    0.020114  0.00E+00   16    64    0.033587  7.03E-07
        x3    8     32    0.014192  2.11E-07   17    68    0.011419  6.22E-07
        x4    2     7     0.006337  0.00E+00   18    72    0.02491   4.54E-07
        x5    2     7     0.007319  0.00E+00   18    72    0.038672  3.65E-07
        x6    5     19    0.008786  0.00E+00   18    72    0.013774  3.80E-07
        x7    8     32    0.006044  4.80E-07   17    68    0.019283  7.05E-07
5000    x1    5     19    0.012849  0.00E+00   16    64    0.076967  7.61E-07
        x2    5     19    0.011326  0.00E+00   17    68    0.07326   5.15E-07
        x3    10    40    0.03188   1.49E-07   18    72    0.051817  4.63E-07
        x4    2     7     0.00821   0.00E+00   19    76    0.059926  3.38E-07
        x5    2     7     0.007023  0.00E+00   18    72    0.078845  8.12E-07
        x6    7     27    0.020875  0.00E+00   18    72    0.072337  8.10E-07
        x7    8     32    0.01868   7.97E-07   18    72    0.062592  5.38E-07
10000   x1    12    48    0.048563  1.28E-08   17    68    0.097567  3.55E-07
        x2    6     24    0.027456  2.05E-07   17    68    0.085576  7.27E-07
        x3    8     32    0.02695   1.85E-07   18    72    0.10317   6.55E-07
        x4    2     7     0.007457  0.00E+00   19    76    0.092351  4.77E-07
        x5    2     7     0.012742  0.00E+00   20    80    0.13829   4.52E-07
        x6    6     23    0.032861  0.00E+00   19    76    0.093402  5.51E-07
        x7    9     36    0.033609  9.08E-08   18    72    0.084444  7.55E-07
50000   x1    10    40    0.18505   2.60E-07   17    68    0.43588   7.93E-07
        x2    8     32    0.11334   7.28E-07   18    72    0.32887   5.44E-07
        x3    8     32    0.13543   7.35E-08   19    76    0.36732   4.86E-07
        x4    2     7     0.033502  0.00E+00   20    80    0.41552   9.70E-07
        x5    2     7     0.059006  0.00E+00   22    88    0.57589   8.63E-07
        x6    6     23    0.1       0.00E+00   23    92    0.49481   8.62E-07
        x7    9     36    0.17921   1.92E-07   19    76    0.43672   5.62E-07
100000  x1    17    68    0.46137   5.26E-09   18    72    0.61655   3.76E-07
        x2    17    68    0.46751   6.68E-07   18    72    0.81072   7.69E-07
        x3    8     31    0.20832   0.00E+00   19    76    0.64764   6.88E-07
        x4    2     7     0.080694  0.00E+00   23    92    1.0145    3.63E-07
        x5    2     7     0.090344  0.00E+00   23    92    1.043     9.61E-07
        x6    11    44    0.27679   8.73E-08   26    104   1.0696    3.39E-07
        x7    9     36    0.27043   2.48E-07   20    80    0.9056    7.78E-07
Table 2. Numerical results for problem 2

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    7     23    0.004347  1.42E-08   13    51    0.077345  7.68E-07
        x2    7     23    0.00593   1.44E-08   15    59    0.013322  3.49E-07
        x3    8     26    0.008602  1.11E-08   16    63    0.010509  6.98E-07
        x4    8     26    0.00346   6.52E-09   18    71    0.029102  3.52E-07
        x5    7     23    0.005348  1.18E-08   18    71    0.014308  5.13E-07
        x6    8     26    0.00715   5.03E-09   18    71    0.01597   8.59E-07
        x7    16    63    0.015263  5.75E-07   17    67    0.051946  4.52E-07
5000    x1    7     24    0.014743  5.75E-07   14    55    0.050839  5.44E-07
        x2    7     24    0.016902  5.75E-07   15    59    0.036741  7.63E-07
        x3    8     27    0.018674  4.90E-07   17    67    0.10101   5.12E-07
        x4    8     27    0.016456  3.19E-07   18    71    0.073292  7.73E-07
        x5    7     24    0.019151  4.59E-07   19    75    0.074561  3.75E-07
        x6    8     26    0.028861  4.98E-10   19    75    0.045617  6.27E-07
        x7    15    59    0.042091  8.23E-07   17    67    0.17854   9.89E-07
10000   x1    9     35    0.043477  4.65E-07   14    55    0.096961  7.66E-07
        x2    9     34    0.034886  4.65E-07   16    63    0.068215  3.55E-07
        x3    10    38    0.048411  4.04E-07   17    67    0.097321  7.23E-07
        x4    10    38    0.051665  2.72E-07   19    75    0.17663   3.63E-07
        x5    9     35    0.038634  3.75E-07   19    75    0.1389    5.29E-07
        x6    10    38    0.048532  1.63E-07   19    76    0.084686  9.51E-07
        x7    16    63    0.066321  5.89E-07   18    71    0.18208   4.65E-07
50000   x1    10    39    0.1595    1.04E-07   15    59    0.60225   5.78E-07
        x2    10    39    0.13852   1.03E-07   16    63    0.38193   7.92E-07
        x3    10    38    0.25958   9.06E-07   18    71    1.1323    5.36E-07
        x4    10    38    0.15314   6.13E-07   21    84    0.48105   3.43E-07
        x5    9     35    0.13068   8.30E-07   21    84    0.65056   4.72E-07
        x6    10    38    0.15521   3.60E-07   21    84    0.49099   4.77E-07
        x7    16    63    0.38185   8.50E-07   19    75    0.46664   3.46E-07
100000  x1    10    39    0.34962   1.46E-07   15    59    0.79437   8.17E-07
        x2    10    39    0.43655   1.46E-07   17    67    0.86905   3.76E-07
        x3    11    42    0.31355   1.28E-07   18    72    0.92806   9.65E-07
        x4    10    38    0.38263   8.68E-07   22    88    1.0076    8.28E-07
        x5    10    39    0.28794   1.17E-07   22    88    1.542     8.18E-07
        x6    10    38    0.2989    5.08E-07   22    88    1.3244    7.87E-07
        x7    17    67    0.68179   5.25E-07   20    80    1.0409    5.45E-07
Table 3. Numerical results for problem 3

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    11    44    0.006867  9.73E-07   15    60    0.077797  4.96E-07
        x2    12    48    0.008871  9.29E-07   16    64    0.017838  3.39E-07
        x3    11    44    0.005343  6.21E-07   16    64    0.012914  9.24E-07
        x4    13    52    0.008538  7.43E-07   17    68    0.010386  8.94E-07
        x5    10    40    0.006854  7.08E-07   18    72    0.013949  3.60E-07
        x6    14    56    0.009688  4.75E-07   18    72    0.026721  3.47E-07
        x7    17    68    0.018595  3.94E-07   -     -     -         -
5000    x1    13    52    0.023164  5.44E-07   16    64    0.036111  3.74E-07
        x2    14    56    0.023803  5.19E-07   16    64    0.045275  7.58E-07
        x3    12    48    0.024052  6.95E-07   17    68    0.060382  6.84E-07
        x4    14    56    0.02763   4.15E-07   18    72    0.11091   6.68E-07
        x5    11    44    0.014771  3.96E-07   18    72    0.049381  8.05E-07
        x6    15    60    0.025569  2.66E-07   18    72    0.065425  7.46E-07
        x7    17    68    0.06995   8.75E-07   -     -     -         -
10000   x1    13    52    0.037055  7.69E-07   16    64    0.12557   5.28E-07
        x2    14    56    0.034567  7.34E-07   17    68    0.092694  3.55E-07
        x3    12    48    0.056047  9.82E-07   17    68    0.079822  9.67E-07
        x4    14    56    0.055368  5.87E-07   18    72    0.24877   9.44E-07
        x5    11    44    0.038703  5.60E-07   20    80    0.080159  3.38E-07
        x6    15    60    0.037159  3.76E-07   19    76    0.084531  3.50E-07
        x7    18    72    0.1757    4.10E-07   -     -     -         -
50000   x1    14    56    0.23139   8.60E-07   17    68    0.37534   3.91E-07
        x2    15    60    0.16087   8.21E-07   17    68    0.24801   7.93E-07
        x3    14    56    0.17539   5.49E-07   18    72    0.26549   7.25E-07
        x4    15    60    0.18294   3.28E-07   20    80    0.46666   6.42E-07
        x5    12    48    0.15805   3.13E-07   21    84    0.32816   5.20E-07
        x6    15    60    0.23569   8.40E-07   21    84    0.48755   3.51E-07
        x7    18    72    0.50034   9.18E-07   -     -     -         -
100000  x1    15    60    0.28424   6.08E-07   17    68    0.73834   5.53E-07
        x2    16    64    0.30566   5.80E-07   18    72    0.75733   3.76E-07
        x3    14    56    0.30675   7.77E-07   19    76    0.54971   3.40E-07
        x4    15    60    0.46637   4.64E-07   22    88    1.353     6.92E-07
        x5    12    48    0.26464   4.43E-07   22    88    0.69186   6.17E-07
        x6    16    64    0.31211   2.97E-07   22    88    1.0329    5.81E-07
        x7    20    80    1.1918    4.62E-07   -     -     -         -
Table 4. Numerical results for problem 4

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    11    44    0.005463  5.14E-07   15    60    0.008008  5.13E-07
        x2    10    40    0.006904  7.26E-07   16    64    0.013881  3.59E-07
        x3    2     7     0.002464  0.00E+00   16    64    0.024003  9.42E-07
        x4    2     7     0.002327  0.00E+00   15    60    0.008609  6.44E-07
        x5    2     7     0.00394   0.00E+00   17    68    0.016199  3.91E-07
        x6    2     7     0.003647  0.00E+00   17    68    0.073791  7.89E-07
        x7    10    40    0.004219  3.71E-07   17    68    0.017215  4.89E-07
5000    x1    12    48    0.019223  5.75E-07   16    64    0.038264  3.86E-07
        x2    11    44    0.015049  8.12E-07   16    64    0.032691  8.02E-07
        x3    2     7     0.006792  0.00E+00   17    68    0.030878  7.00E-07
        x4    2     7     0.006842  0.00E+00   16    64    0.026864  4.74E-07
        x5    2     7     0.006989  0.00E+00   17    68    0.067797  8.74E-07
        x6    2     7     0.006434  0.00E+00   19    76    0.031626  5.11E-07
        x7    10    40    0.017007  1.66E-07   18    72    0.030529  3.71E-07
10000   x1    12    48    0.026133  8.13E-07   16    64    0.046997  5.46E-07
        x2    12    48    0.029647  5.74E-07   17    68    0.07771   3.76E-07
        x3    2     7     0.008574  0.00E+00   17    68    0.0702    9.90E-07
        x4    2     7     0.011381  0.00E+00   19    76    0.058083  3.70E-07
        x5    2     7     0.008156  0.00E+00   18    72    0.097611  4.15E-07
        x6    2     7     0.01211   0.00E+00   19    76    0.13803   7.22E-07
        x7    13    52    0.065447  5.08E-07   18    72    0.075793  5.07E-07
50000   x1    13    52    0.11185   9.09E-07   17    68    0.18421   4.04E-07
        x2    13    52    0.17585   6.42E-07   17    68    0.19435   8.40E-07
        x3    2     7     0.028215  0.00E+00   18    72    0.22118   7.39E-07
        x4    2     7     0.040046  0.00E+00   20    80    0.29846   6.25E-07
        x5    2     7     0.036865  0.00E+00   20    80    0.24516   8.13E-07
        x6    2     7     0.034726  0.00E+00   22    88    0.45415   9.65E-07
        x7    13    52    0.15831   3.24E-07   19    76    0.32983   6.75E-07
100000  x1    14    56    0.26401   6.43E-07   17    68    0.56127   5.71E-07
        x2    13    52    0.32171   9.08E-07   18    72    0.5503    3.98E-07
        x3    2     7     0.060471  0.00E+00   19    76    0.42901   9.57E-07
        x4    2     7     0.084635  0.00E+00   22    88    0.53544   3.99E-07
        x5    2     7     0.081715  0.00E+00   24    96    0.95585   3.66E-07
        x6    2     7     0.059423  0.00E+00   26    104   0.7676    3.55E-07
        x7    13    52    0.2707    4.50E-07   19    76    0.56768   9.53E-07
Table 5. Numerical results for problem 5

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    28    112   0.02166   6.16E-07   18    72    0.064493  4.82E-07
        x2    28    112   0.036568  5.92E-07   18    72    0.017572  4.64E-07
        x3    28    112   0.021776  5.22E-07   18    72    0.025552  4.08E-07
        x4    27    108   0.017528  7.14E-07   17    68    0.027679  8.34E-07
        x5    27    108   0.01623   5.73E-07   17    68    0.02813   6.69E-07
        x6    26    104   0.048145  6.76E-07   17    68    0.013746  3.94E-07
        x7    28    112   0.019109  5.26E-07   18    72    0.016604  4.11E-07
5000    x1    29    116   0.10404   6.90E-07   19    76    0.15354   3.58E-07
        x2    29    116   0.077534  6.63E-07   19    76    0.1347    3.44E-07
        x3    29    116   0.075795  5.84E-07   18    72    0.061833  9.14E-07
        x4    28    112   0.077236  8.00E-07   18    72    0.14442   6.26E-07
        x5    28    112   0.086837  6.42E-07   18    72    0.058368  5.02E-07
        x6    27    108   0.074944  7.57E-07   17    68    0.080337  8.83E-07
        x7    29    116   0.088769  5.90E-07   18    72    0.060071  9.21E-07
10000   x1    29    116   0.13172   9.75E-07   21    84    0.13537   4.00E-07
        x2    29    116   0.13083   9.38E-07   21    84    0.13596   3.85E-07
        x3    29    116   0.18213   8.26E-07   20    80    0.2194    5.83E-07
        x4    29    116   0.12873   5.66E-07   18    72    0.14363   8.85E-07
        x5    28    112   0.15234   9.08E-07   18    72    0.16376   7.10E-07
        x6    28    112   0.15848   5.35E-07   18    72    0.099046  4.19E-07
        x7    29    116   0.13706   8.34E-07   20    80    0.20036   5.88E-07
50000   x1    31    124   0.64489   5.45E-07   24    96    0.73376   7.08E-07
        x2    31    124   0.70682   5.24E-07   24    96    0.81236   6.81E-07
        x3    30    120   0.55822   9.24E-07   23    92    0.6838    7.26E-07
        x4    30    120   0.53198   6.32E-07   21    84    0.57411   5.18E-07
        x5    30    120   0.54466   5.07E-07   21    84    0.66594   4.16E-07
        x6    29    116   0.53353   5.98E-07   18    72    0.47458   9.36E-07
        x7    30    120   0.53253   9.32E-07   23    92    0.78547   7.33E-07
100000  x1    31    124   1.2364    7.71E-07   29    116   3.4129    5.93E-07
        x2    31    124   1.5374    7.42E-07   28    112   2.232     6.09E-07
        x3    31    124   1.3392    6.53E-07   26    104   1.9924    6.39E-07
        x4    30    120   1.2903    8.94E-07   23    92    1.6393    7.03E-07
        x5    30    120   1.3408    7.18E-07   22    88    1.4593    3.66E-07
        x6    29    116   1.3172    8.46E-07   20    80    1.5262    5.97E-07
        x7    31    124   1.3756    6.59E-07   26    104   2.0768    6.44E-07
Table 6. Numerical results for problem 6

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    8     32    0.007545  2.09E-07   17    68    0.047472  6.92E-07
        x2    8     32    0.007651  1.30E-07   17    68    0.01164   4.34E-07
        x3    7     28    0.0044    4.82E-07   5     20    0.035276  4.50E-08
        x4    9     36    0.006288  7.10E-08   18    72    0.031358  8.82E-07
        x5    9     36    0.004769  2.98E-07   19    76    0.016064  8.09E-07
        x6    9     35    0.005865  4.30E-07   18    71    0.019573  5.23E-07
        x7    15    60    0.011421  2.47E-07   19    76    0.034046  4.32E-07
5000    x1    8     32    0.10747   4.68E-07   18    72    0.060315  5.59E-07
        x2    8     32    0.013586  2.90E-07   17    68    0.043677  9.70E-07
        x3    8     32    0.02763   6.88E-08   5     20    0.020451  1.01E-07
        x4    9     36    0.013456  1.59E-07   19    76    0.067458  7.14E-07
        x5    9     36    0.021637  6.67E-07   20    80    0.048031  6.56E-07
        x6    9     35    0.01803   9.62E-07   19    75    0.072431  4.22E-07
        x7    17    68    0.067433  2.62E-07   19    76    0.072684  9.09E-07
10000   x1    8     32    0.035531  6.62E-07   18    72    0.17816   7.90E-07
        x2    8     32    0.024332  4.10E-07   18    72    0.12132   4.95E-07
        x3    8     32    0.025852  9.73E-08   5     20    0.017535  1.42E-07
        x4    9     36    0.022198  2.24E-07   20    80    0.14969   3.66E-07
        x5    9     36    0.023586  9.43E-07   20    80    0.20198   9.28E-07
        x6    10    39    0.027458  8.69E-08   21    84    0.09774   4.36E-07
        x7    15    60    0.061535  7.77E-07   20    80    0.15572   4.75E-07
50000   x1    9     36    0.091915  9.46E-08   19    76    0.30923   6.42E-07
        x2    8     32    0.086014  9.17E-07   19    76    0.33924   4.02E-07
        x3    8     32    0.10021   2.18E-07   5     20    0.073014  3.18E-07
        x4    9     36    0.10806   5.02E-07   21    84    0.41239   8.23E-07
        x5    10    40    0.10572   1.35E-07   21    84    0.57454   7.14E-07
        x6    10    39    0.17226   1.94E-07   21    84    0.3827    9.75E-07
        x7    18    72    0.2558    6.43E-07   21    84    0.93238   3.82E-07
100000  x1    9     36    0.26898   1.34E-07   20    80    0.79946   7.45E-07
        x2    9     36    0.17345   8.28E-08   19    76    1.0298    5.69E-07
        x3    8     32    0.25039   3.08E-07   5     20    0.14119   4.50E-07
        x4    9     36    0.21676   7.10E-07   22    88    1.0177    4.22E-07
        x5    10    40    0.27836   1.91E-07   22    88    0.81176   7.50E-07
        x6    10    39    0.19144   2.75E-07   22    88    0.93483   5.00E-07
        x7    20    80    0.54314   3.45E-07   20    80    0.73771   6.67E-07
Table 7. Numerical results for problem 7

                     MRMIL                            PDY
DIM     INP   ITER  FVAL  TIME      NORM       ITER  FVAL  TIME      NORM
1000    x1    25    100   0.073028  4.63E-07   36    144   0.20315   6.34E-07
        x2    25    100   0.11462   9.07E-07   35    140   0.2928    9.13E-07
        x3    22    88    0.059525  9.01E-07   35    140   0.18604   7.34E-07
        x4    25    100   0.085671  4.88E-07   33    132   0.16095   2.30E-07
        x5    25    100   0.060418  1.67E-07   31    124   0.13438   8.06E-07
        x6    25    100   0.080534  8.31E-07   24    96    0.10379   9.72E-07
        x7    26    104   0.088063  5.63E-07   29    116   0.1692    3.15E-07
5000    x1    28    112   0.35075   7.48E-07   34    136   0.71146   8.36E-07
        x2    24    96    0.38934   8.05E-07   34    136   0.69158   7.93E-07
        x3    25    100   0.30709   7.27E-07   34    136   0.63571   6.18E-07
        x4    25    100   0.337     5.12E-07   31    124   0.66455   3.90E-07
        x5    25    100   0.39557   5.84E-07   30    120   0.59363   8.11E-07
        x6    23    92    0.27766   5.03E-07   24    96    0.54085   7.51E-07
        x7    28    112   0.50805   6.90E-07   25    100   0.79827   2.93E-07
10000   x1    30    120   0.71688   6.74E-07   34    136   1.8057    6.78E-07
        x2    30    120   0.66154   7.68E-07   34    136   1.3939    6.42E-07
        x3    25    100   0.55277   5.63E-07   33    132   1.3301    7.57E-07
        x4    29    113   0.7388    9.13E-07   30    120   1.3051    3.94E-07
        x5    25    100   0.57022   7.39E-07   30    120   1.1445    5.57E-07
        x6    25    100   0.56951   8.67E-07   24    96    0.8758    7.21E-07
        x7    29    116   0.72454   6.65E-07   25    100   0.89229   4.07E-07
50000   x1    28    112   2.8081    8.07E-07   34    136   7.9299    6.35E-07
        x2    30    120   2.9855    7.96E-07   33    132   6.6438    6.12E-07
        x3    26    104   2.6057    9.25E-07   32    128   7.4126    7.22E-07
        x4    5     17    0.40473   NaN        24    96    5.4526    3.36E-07
        x5    7     25    0.6025    NaN        29    116   6.8103    5.83E-07
        x6    28    112   2.9107    4.55E-07   31    124   6.0871    7.91E-07
        x7    29    116   3.0354    2.75E-07   27    108   5.43      3.65E-07
100000  x1    30    119   6.2617    4.81E-07   33    132   19.5575   8.00E-07
        x2    28    112   5.7996    8.27E-07   33    132   17.3005   7.49E-07
        x3    29    116   6.1519    8.53E-07   40    160   21.0229   9.75E-07
        x4    5     17    0.83758   NaN        30    120   12.1478   9.85E-07
        x5    26    104   5.5499    7.56E-07   28    112   10.8844   9.46E-07
        x6    33    131   7.1703    6.89E-07   26    104   9.8098    9.05E-07
        x7    31    124   6.7745    5.21E-07   27    108   9.8646    4.03E-07
References
[1] A.N. Iusem, M.V. Solodov, Newton-type methods with generalized distances for constrained optimization, Optimization, 41(3) (1997), 257-278.
[2] S. Huang, Z. Wan, A new nonmonotone spectral residual method for nonsmooth nonlinear equations, Journal of Computational and Applied Mathematics, 313 (2017), 82-101.
[3] Z. Wan, J. Guo, J. Liu, W. Liu, A modified spectral conjugate gradient projection method for signal recovery, Signal, Image and Video Processing, 12(8) (2018), 1455-1462.
[4] X.J. Tong, L. Qi, On the convergence of a trust-region method for solving constrained nonlinear equations with degenerate solutions, Journal of Optimization Theory and Applications, 123(1) (2004), 187-211.
[5] C. Kanzow, N. Yamashita, M. Fukushima, Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints, Journal of Computational and Applied Mathematics, 172(2) (2004), 375-397.
[6] Q. Li, D.H. Li, A class of derivative-free methods for large-scale nonlinear monotone equations, IMA Journal of Numerical Analysis, 31(4) (2011), 1625-1635.
[7] D. Li, M. Fukushima, A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations, SIAM Journal on Numerical Analysis, 37(1) (1999), 152-172.
[8] D. Li, M. Fukushima, A globally and superlinearly convergent Gauss-Newton based BFGS method for symmetric equations, Technical Report, (1998), 98006.
[9] W.W. Hager, H. Zhang, A survey of nonlinear conjugate gradient methods, Pacific Journal of Optimization, 2(1) (2006), 35-58.
[10] M.V. Solodov, B.F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, in: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Springer, (1998), 355-369.
[11] W. Cheng, A PRP type method for systems of monotone equations, Mathematical and Computer Modelling, 50(1-2) (2009), 15-20.
[12] B.T. Polyak, The conjugate gradient method in extremal problems, USSR Computational Mathematics and Mathematical Physics, 9(4) (1969), 94-112.
[13] Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, Z. Li, Spectral gradient projection method for monotone nonlinear equations with convex constraints, Applied Numerical Mathematics, 59(10) (2009), 2416-2423.
[14] W.L. Cruz, M. Raydan, Nonmonotone spectral methods for large-scale nonlinear systems, Optimization Methods and Software, 18(5) (2003), 583-599.
[15] W.L. Cruz, J. Martínez, M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Mathematics of Computation, 75(255) (2006), 1429-1448.
[16] L. Zhang, W. Zhou, Spectral gradient projection method for solving nonlinear monotone equations, Journal of Computational and Applied Mathematics, 196(2) (2006), 478-484.
[17] A.B. Abubakar, P. Kumam, A descent Dai-Liao conjugate gradient method for nonlinear equations, Numerical Algorithms, 81(1) (2019), 197-210.
[18] A.B. Abubakar, P. Kumam, A.M. Awwal, P. Thounthong, A modified self-adaptive conjugate gradient method for solving convex constrained monotone nonlinear equations for signal recovery problems, Mathematics, 7(8) (2019), 693.
[19] A. Padcharoen, P. Sukprasert, Nonlinear operators as concerns convex programming and applied to signal processing, Mathematics, 7(9) (2019), 866.
[20] A. Padcharoen, P. Kumam, Y.J. Cho, Split common fixed point problems for demicontractive operators, Numerical Algorithms, 82(1) (2019), 297-320.
[21] D. Kitkuan, P. Kumam, A. Padcharoen, W. Kumam, P. Thounthong, Algorithms for zeros of two accretive operators for solving convex minimization problems and its application to image restoration problems, Journal of Computational and Applied Mathematics, 354 (2019), 471-495.
[22] A. Padcharoen, P. Kumam, J. Martínez-Moreno, Augmented Lagrangian method for TV-l1-l2 based colour image restoration, Journal of Computational and Applied Mathematics, 354 (2019), 507-519.
[23] Adaptive algorithm for solving the SCFPP of demicontractive operators without a priori knowledge of operator norms, Analele Universitatii "Ovidius" Constanta - Seria Matematica, 27(3) (2019), 153-175.
[24] A. Padcharoen, D. Kitkuan, P. Kumam, J. Rilwan, W. Kumam, Accelerated alternating minimization algorithm for Poisson noisy image recovery, Inverse Problems in Science and Engineering, (2020).
[25] A.B. Abubakar, P. Kumam, H. Mohammad, A.M. Awwal, K. Sitthithakerngkiet, A modified Fletcher-Reeves conjugate gradient method for monotone nonlinear equations with some applications, Mathematics, 7(8) (2019), 745.
[26] A.B. Abubakar, P. Kumam, H. Mohammad, A.M. Awwal, An efficient conjugate gradient method for convex constrained monotone nonlinear equations with applications, Mathematics, 7(9) (2019), 767.
[27] H. Mohammad, A.B. Abubakar, A positive spectral gradient-like method for large-scale nonlinear monotone equations, Bulletin of Computational and Applied Mathematics, 5(1) (2017), 99-115.
[28] A.M. Awwal, P. Kumam, A.B. Abubakar, A. Wakili, A projection Hestenes-Stiefel-like method for monotone nonlinear equations with convex constraints, Thai Journal of Mathematics, (2019), 181-199.
[29] A.B. Abubakar, P. Kumam, A.M. Awwal, A descent Dai-Liao projection method for convex constrained nonlinear monotone equations with applications, Thai Journal of Mathematics, (2019), 128-152.
[30] A.B. Abubakar, P. Kumam, A.M. Awwal, Global convergence via descent modified three-term conjugate gradient projection algorithm with applications to signal recovery, Results in Applied Mathematics, 4 (2019), 100069.
[31] A.B. Abubakar, P. Kumam, A.M. Awwal, An inexact conjugate gradient method for symmetric nonlinear equations, Computational and Mathematical Methods, (2019).
[32] A.M. Awwal, P. Kumam, A.B. Abubakar, Spectral modified Polak-Ribière-Polyak projection conjugate gradient method for solving monotone systems of nonlinear equations, Applied Mathematics and Computation, 362 (2019), 124514.
[33] A.M. Awwal, P. Kumam, A.B. Abubakar, A modified conjugate gradient method for monotone nonlinear equations with convex constraints, Applied Numerical Mathematics, (2019).
[34] A.B. Abubakar, P. Kumam, An improved three-term derivative-free method for solving nonlinear equations, Computational and Applied Mathematics, 37(5) (2018), 6760-6773.
[35] M. Rivaie, M. Mamat, L.W. June, I. Mohd, A new class of nonlinear conjugate gradient coefficients with global convergence properties, Applied Mathematics and Computation, 218(22) (2012), 11323-11332.
[36] J. Liu, Y. Feng, A derivative-free iterative method for nonlinear monotone equations with convex constraints, Numerical Algorithms, (2018), 1-18.
[37] W.L. Cruz, J. Martínez, M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Mathematics of Computation, 75(255) (2006), 1429-1448.
[38] W. Zhou, D. Li, Limited memory BFGS method for nonlinear monotone equations, Journal of Computational Mathematics, (2007), 89-96.
[39] C. Wang, Y. Wang, C. Xu, A projection method for a system of nonlinear monotone equations with convex constraints, Mathematical Methods of Operations Research, 66(1) (2007), 33-46.
[40] Y. Bing, G. Lin, An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations, SIAM Journal on Optimization, 1(2) (1991), 206-221.
[41] G. Yu, S. Niu, J. Ma, Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints, Journal of Industrial and Management Optimization, 9(1) (2013), 117-129.
[42] E.D. Dolan, J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, 91(2) (2002), 201-213.