Figure - uploaded by Auwal Bala Abubakar


Source publication

The conjugate gradient method is an efficient method for solving large-scale unconstrained optimization problems. Based on the conjugate gradient algorithm proposed by Rivaie, Mohd, et al. ("A new class of nonlinear conjugate gradient coefficients with global convergence properties." Applied Mathematics and Computation 218.22 (2012): 11323-11332), we propo...

## Similar publications

In this paper, we present a family of Perry conjugate gradient methods for solving large-scale systems of monotone nonlinear equations. The methods are developed by combining modified versions of Perry (Oper. Res. Tech. Notes 26(6), 1073–1078, 1978) conjugate gradient method with the hyperplane projection technique of Solodov and Svaiter (1998). Gl...

Conjugate gradient methods are the most famous methods for solving nonlinear unconstrained optimization problems, especially large-scale problems, owing to their simplicity and low memory requirements. The strong Wolfe line search is usually used in practice for the analysis and implementation of conjugate gradient methods. In this paper, we pre...
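As a concrete illustration of the line search mentioned above, a minimal check of the strong Wolfe conditions might look as follows. This is a sketch, not code from any of the cited papers; the function `satisfies_strong_wolfe` and the constants `c1`, `c2` are illustrative assumptions:

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the strong Wolfe conditions for step size alpha along a
    descent direction d (illustrative helper, not from the papers)."""
    slope0 = grad(x) @ d                 # directional derivative at x
    # (i) sufficient decrease (Armijo) condition
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * slope0
    # (ii) strong curvature condition on the new directional derivative
    curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(slope0)
    return sufficient_decrease and curvature
```

Typical choices in the CG literature are c1 = 1e-4 and c2 in (c1, 1), with smaller c2 forcing a more accurate line search.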

This paper introduces a new notion of a Fenchel conjugate, which generalizes the classical Fenchel conjugation to functions defined on Riemannian manifolds. We investigate its properties, e.g., the Fenchel–Young inequality and the characterization of the convex subdifferential using the analogue of the Fenchel–Moreau Theorem. These properties of th...

In this paper, we propose a hybrid conjugate gradient (CG) method based on a convex combination of the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP) parameters and a quasi-Newton update. This is made possible by using a self-scaling memoryless Broyden update together with a hybrid direction consisting of two CG parameters. Howeve...
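A hybrid CG parameter built as a convex combination of the FR and PRP parameters, as described above, can be sketched like this. The mixing weight `theta` is taken as a free input here; the paper's actual rule for choosing it is not reproduced:

```python
import numpy as np

def hybrid_beta(g_new, g_old, theta):
    """Convex combination of the FR and PRP conjugate gradient
    parameters: beta = (1 - theta) * beta_FR + theta * beta_PRP,
    with theta in [0, 1] (illustrative sketch)."""
    denom = g_old @ g_old
    beta_fr = (g_new @ g_new) / denom              # Fletcher-Reeves
    beta_prp = (g_new @ (g_new - g_old)) / denom   # Polak-Ribiere-Polyak
    return (1.0 - theta) * beta_fr + theta * beta_prp
```

Setting theta = 0 recovers FR and theta = 1 recovers PRP; intermediate values interpolate between the two.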

The conjugate gradient method is among the efficient methods for solving unconstrained optimization problems. In this paper, we propose a new formula for the conjugate gradient method based on a modification of the NPRP formula (Zhang, 2009). The proposed method satisfies the sufficient descent condition, and a global convergence proof was establish...

## Citations

... Compared with existing algorithms, they proved to be more efficient. For more on such algorithms, see [1,2,9,[23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39]. ...

In this paper, a hybrid derivative-free conjugate gradient method that inherits the structures of two conjugate gradient methods is introduced to recover sparse signal in compressive sensing by solving the nonlinear convex constrained equations. The global convergence of the proposed method is proved, under some appropriate assumptions. Numerical experiments and comparisons suggest that the proposed algorithm is an efficient approach for sparse signal and image reconstruction in compressive sensing.

... Interested readers can refer to the following articles and the references therein: [24][25][26][27][28][29][30][31][32][33][34][35]. Our interest in this article is to propose, analyze, and test a derivative-free hybrid iterative method for solving (1). The proposed method also uses a convex combination of the HS and the DY methods, as in Abubakar et al. [20] However, a different hybridization parameter is chosen in this paper. ...

... The inequality above contradicts (28) and hence (29) holds. ...

This paper presents a hybrid conjugate gradient (CG) approach for solving nonlinear equations and signal reconstruction. The CG parameter of the approach is a convex combination of the Dai‐Yuan (DY)‐like and Hestenes‐Stiefel (HS)‐like parameters. Independent of any line search, the search direction is descent and bounded. Under some reasonable assumptions, the global convergence of the hybrid approach is proved. Numerical experiments on some benchmark test problems show that the proposed approach is efficient compared with some existing algorithms. Finally, the proposed approach is applied in signal reconstruction.

... The wide range of applications of (1) inspired several researchers to solve problem (1). As such, several derivative-free methods were proposed [1,2,7,18,20,21,[23][24][25][26]28,39,44,45] and are suitable for solving large-scale problems. ...

In this article, a hybrid approach incorporating a three-term conjugate gradient (CG) method is proposed to solve constrained nonlinear monotone operator equations. The search direction is defined such that it is close to the one obtained by the memoryless Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. Independent of the line search, the search direction possesses the sufficient descent and trust region properties. Furthermore, the sequence of iterates generated converges globally under some appropriate assumptions. In addition, numerical experiments are carried out to test the efficiency of the proposed method in contrast with existing methods. Finally, the applicability of the proposed method in compressive sensing is shown.
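The memoryless BFGS direction that the search direction above is designed to stay close to can be written down explicitly. The sketch below uses the standard memoryless BFGS inverse-Hessian formula built from the last step and gradient difference; it is illustrative only, not the paper's three-term direction:

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Search direction d = -H g, where H is the memoryless BFGS
    inverse-Hessian approximation built from the last step
    s = x_k - x_{k-1} and y = g_k - g_{k-1} (standard formula)."""
    sy = s @ y                                   # curvature, assumed > 0
    I = np.eye(g.size)
    H = ((I - np.outer(s, y) / sy) @ (I - np.outer(y, s) / sy)
         + np.outer(s, s) / sy)
    return -H @ g
```

When s = y (e.g., a quadratic with identity Hessian), H reduces to the identity and the direction is simply the steepest descent direction -g.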

... We note that (1.6) is a monotone system of equations [37,38]. Exploiting the simplicity and low storage requirements of the conjugate gradient method [1,2], several authors have in recent times extended many conjugate gradient algorithms designed for unconstrained optimization problems to solve large-scale nonlinear equations (1.6) (see [3][4][5][6][7][8][9][10][11][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][36]). For instance, using the projection scheme of Solodov and Svaiter [35], Xiao and Zhu [38] extended the Hager and Zhang conjugate descent (CG DESCENT) method to solve (1.6). ...
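The projection scheme of Solodov and Svaiter mentioned above admits a compact sketch: once a line search produces a trial point z_k with F(z_k) available, the next iterate is obtained by projecting the current iterate onto the hyperplane through z_k with normal F(z_k), which separates x_k from the solution set. Variable names below are assumptions for illustration:

```python
import numpy as np

def projection_step(x, z, Fz):
    """Hyperplane projection step in the spirit of Solodov and Svaiter:
    project x onto {u : Fz^T (u - z) = 0}, the hyperplane through the
    trial point z with normal Fz = F(z) (illustrative sketch)."""
    xi = (Fz @ (x - z)) / (Fz @ Fz)   # signed distance coefficient
    return x - xi * Fz
```

By construction the returned point p satisfies Fz^T (p - z) = 0, i.e., it lies exactly on the separating hyperplane, which is the key to the global convergence arguments in these methods.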

Many real-world phenomena in engineering, economics, statistical inference, compressed sensing and machine learning involve finding sparse solutions to under-determined or ill-conditioned equations. Our interest in this paper is to introduce a derivative-free method for recovering sparse signal and blurred image arising in compressed sensing by solving a nonlinear equation involving a monotone operator. The global convergence of the proposed method is established under the assumptions of monotonicity and Lipschitz continuity of the underlying operator. Numerical experiments are performed to illustrate the efficiency of the proposed method in the reconstruction of sparse signals and blurred images.

... Theorem 4.7 Suppose that the conditions of Assumption 1 hold. If {x_k} is the sequence generated by (11) in Algorithm 1, then ...

In recent times, various algorithms have incorporated an inertial extrapolation step to speed up the convergence of the sequences they generate. As far as we know, very few results exist on inertial derivative-free projection methods for solving convex constrained monotone nonlinear equations. In this article, the convergence analysis of a derivative-free iterative algorithm (Liu and Feng in Numer. Algorithms 82(1):245–262, 2019) with an inertial extrapolation step for solving large-scale convex constrained monotone nonlinear equations is studied. The proposed method generates a sufficient descent direction at each iteration. Under some mild assumptions, the global convergence of the sequence generated by the proposed method is established. Furthermore, some experimental results are presented to support the theoretical analysis of the proposed method.
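The inertial extrapolation step discussed above is itself a one-line update: before the main iteration, the method extrapolates along the direction of the last two iterates. A hedged sketch with a fixed inertial factor (the cited works typically choose the factor adaptively):

```python
import numpy as np

def inertial_point(x_curr, x_prev, alpha=0.5):
    """Inertial extrapolation: w_k = x_k + alpha * (x_k - x_{k-1}).
    The main update of the underlying algorithm is then applied at
    w_k instead of x_k (fixed alpha here, for illustration only)."""
    return x_curr + alpha * (x_curr - x_prev)
```

The extrapolated point w_k overshoots in the direction of recent progress, which is what gives inertial methods their acceleration effect.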

... This method inherits the stability of the DY method, and greatly improves its computing performance. For more algorithms for solving (1.1), see [3,4,5,6,7,8,9,16,17,18,19,20,21,22,23,24,25,26,27,28]. ...

Let R^n be a Euclidean space and g : R^n → R^n a monotone and continuous mapping. Suppose the convex constrained nonlinear monotone equation problem x ∈ C s.t. g(x) = 0 has a solution. In this paper, we construct an inertial-type algorithm based on the three-term derivative-free projection method (TTMDY) for convex constrained monotone nonlinear equations. Under some standard assumptions, we establish its global convergence to a solution of the convex constrained nonlinear monotone equation. Furthermore, the proposed algorithm converges much faster than the existing non-inertial algorithm (TTMDY) for convex constrained monotone equations.

... Step 5. Find u_k = z_k + t_k p_k, where t_k = m_k with m_k being the smallest nonnegative integer such that (10) holds. Step 6. If u_k ∈ C and ‖g(u_k)‖ ≤ Tol, stop. ...

In optimization theory, the inertial extrapolation method is often used to speed up the convergence of iterative procedures. In this article, based on the three-term derivative-free method for solving monotone nonlinear equations with convex constraints [Calcolo, 2016;53(2):133-145], we design an inertial algorithm for finding the solutions of nonlinear equations with a monotone and Lipschitz continuous operator. The convergence analysis is established under some mild conditions. Furthermore, numerical experiments are implemented to illustrate the behavior of the new algorithm. The numerical results show the effectiveness and fast convergence of the proposed inertial algorithm over the existing algorithm. Moreover, as an application, we extend this method to solve the LASSO problem to decode a sparse signal in compressive sensing. Performance comparisons illustrate the effectiveness and competitiveness of the method.
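For context on the LASSO application mentioned above: the problem min_x 0.5‖Ax − b‖² + λ‖x‖₁ can also be solved by plain iterative soft-thresholding. The sketch below is this standard baseline, not the paper's inertial method; the step size t is assumed to satisfy t ≤ 1/‖AᵀA‖:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, t, iters=500):
    """Baseline iterative shrinkage-thresholding (ISTA) for the LASSO
    problem min 0.5*||Ax - b||^2 + lam*||x||_1. Shown only to make the
    LASSO connection concrete; not the cited paper's method."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the smooth term, then shrinkage on the l1 term
        x = soft_threshold(x - t * (A.T @ (A @ x - b)), t * lam)
    return x
```

The shrinkage step is what produces exactly-zero entries, i.e., the sparsity exploited in compressive sensing.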

... where ϑ_k = ϑ(y_k) and z_{k−1} = ϑ_k − ϑ_{k−1}. For more on CG methods, we refer the reader to [1][2][3][4][8][9][11][22][23][24][25][26][36]. Several algorithms have been proposed using the above parameters or their modified versions. ...

... and from (19), (22) and the Cauchy-Schwartz inequality, ...

In this paper, we present a new hybrid spectral-conjugate gradient (SCG) algorithm for finding approximate solutions to nonlinear monotone operator equations. The hybrid conjugate gradient parameter has the Polak–Ribière–Polyak (PRP), Dai-Yuan (DY), Hestenes-Stiefel (HS) and Fletcher-Reeves (FR) as special cases. Moreover, the spectral parameter is selected such that the search direction has the descent property. Also, the search directions are bounded and the sequence of iterates generated by the new hybrid algorithm converge globally. Furthermore, numerical experiments were conducted on some benchmark nonlinear monotone operator equations to assess the efficiency of the proposed algorithm. Finally, the algorithm is shown to have the ability to recover disturbed signals.
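The four classical CG parameters that the hybrid above recovers as special cases have simple closed forms. The sketch below lists the standard formulas, with g the current and previous gradient (or residual) and d the previous search direction:

```python
import numpy as np

def classical_cg_betas(g_new, g_old, d_old):
    """The four classical conjugate gradient parameters:
    Fletcher-Reeves (FR), Polak-Ribiere-Polyak (PRP),
    Hestenes-Stiefel (HS), and Dai-Yuan (DY)."""
    y = g_new - g_old                    # gradient difference
    return {
        "FR":  (g_new @ g_new) / (g_old @ g_old),
        "PRP": (g_new @ y) / (g_old @ g_old),
        "HS":  (g_new @ y) / (d_old @ y),
        "DY":  (g_new @ g_new) / (d_old @ y),
    }
```

Note the shared structure: FR and PRP normalize by ‖g_old‖², while HS and DY normalize by d_oldᵀy, which is why hybrid and spectral schemes can interpolate among them with a single pair of mixing choices.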

... Derivative-free methods based on the conjugate gradient (CG) method have been studied by several authors (see [8][9][10][11][12][13][14][15]). These methods are derived by incorporating conjugate gradient methods for solving an unconstrained optimization problem with the projection technique of Solodov and Svaiter [16]. ...

... The following lemmas are useful in the proof of the main theorem. ... (2), (7) and (8), respectively, then (i) there exists α_k = κρ^i satisfying (6) for some i ∈ N ∪ {0} and for all k ≥ 0. ...

This article introduces a derivative-free method for solving convex constrained nonlinear equations involving a monotone operator with a Lipschitz condition imposed on the underlying operator. The proposed method incorporates the projection technique with the three-term Polak-Ribière-Polyak conjugate gradient method for the unconstrained optimization problem proposed by Min Li [J. Ind. Manag. Optim. 16.1 (2020): 245]. Under some standard assumptions, we establish the global convergence of the proposed method. Furthermore, we provide some numerical examples and an application to an image deblurring problem to illustrate the effectiveness and competitiveness of the proposed method. Numerical results indicate that the proposed method is remarkably promising.

... The parameter β k is called the CG parameter. For more on derivative-free methods for solving (1), interested readers can refer to [14]- [35] and references therein. ...

... Also, using (14) and the monotonicity of T , ...

In this paper, we propose an inertial derivative-free projection method for solving convex constrained nonlinear monotone operator equations (CNME). The method incorporates an inertial step into an existing derivative-free projection (DFPI) method for solving CNME. The aim is to improve the convergence speed of DFPI, as it has been shown in several works that the inertial step can indeed speed up convergence. The global convergence of the proposed method is proved under some mild assumptions. Finally, the numerical results reported clearly show that the proposed method is more efficient than DFPI.