Article

Globally convergent modified Perry’s conjugate gradient method


Abstract

Conjugate gradient methods are among the most widely used iterative methods for solving large-scale optimization problems in scientific and engineering computation, owing to the simplicity of their iteration and their low memory requirements. In this paper, we propose a new conjugate gradient method which is based on the MBFGS secant condition, obtained by modifying Perry's method. Our proposed method ensures sufficient descent independent of the accuracy of the line search, and it is globally convergent under some assumptions. Numerical experiments are also presented.
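As a rough illustration of the class of methods the abstract describes, the sketch below runs a generic nonlinear CG loop with a Perry-style parameter and a steepest-descent restart that safeguards descent. It is a minimal sketch under an Armijo backtracking line search, not the paper's exact method; the name `cg_minimize`, the quadratic test problem, and the restart safeguard are illustrative assumptions.

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    """Generic nonlinear CG loop with a Perry-style beta and a
    steepest-descent restart that safeguards descent (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (d is always a descent direction)
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = d @ y
        # Perry-style parameter: beta = g_{k+1}^T (y_k - s_k) / (d_k^T y_k)
        beta = (g_new @ (y - s)) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        if g_new @ d >= 0:  # descent lost: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Small convex quadratic: minimize 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                     lambda x: A @ x - b, np.zeros(2))
```

On the quadratic the loop recovers the solution of A x = b; the safeguard matters only for general nonconvex objectives, where an unmodified Perry direction can fail to be a descent direction.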


... Recently, some research articles have been published aiming at developing spectral CG methods for solving large-scale nonlinear systems of monotone equations. Dai et al. [7] combined the modified Perry's CG method [8] and the hyperplane projection method of Solodov and Svaiter [9] and proposed a derivative-free Perry-type method for solving nonlinear monotone equations. Liu and Li [10] incorporated the Dai-Yuan (DY) [11] CG method with the projection technique and proposed a spectral DY-type projection method for solving nonlinear monotone equations. ...
... > 0, then d_k will be computed by (8); else, d_k = −F(x_k). ...
... Remark 2. The search direction defined by (8) satisfies the sufficient descent condition if ...
Article
Full-text available
We present a new approach for constructing a spectral conjugate gradient‐type method for solving nonlinear equations. The proposed method uses an approximate optimal step size together with the memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) formula to generate a new choice of the spectral conjugate gradient‐type direction that satisfies the sufficient descent condition without line search requirement. The global convergence of the method is achieved under some mild assumptions. Numerical experiments on both nonlinear monotone equations and signal reconstruction problems reveal the efficiency of the new approach.
... An attractive property of this class is that g_k^T d_k = −‖g_k‖² always holds independent of the choice of the parameter β_k. Furthermore, if β_k in (6) is specified by an existing conjugate gradient formula, we obtain the corresponding modified conjugate gradient method [9,10,22,32-35]. Recently, researchers [7,22-25,30] paid special attention to hybridizing the above two approaches. More specifically, new descent conjugate gradient methods have been proposed which possess global convergence and are theoretically superior to classical methods by utilizing new modified secant equations. ...
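The identity g_k^T d_k = −‖g_k‖², which the snippet notes holds independent of β_k, is achieved by one standard three-term construction (in the style of Zhang et al.; whether it matches Equation (6) of the cited work is an assumption). A minimal numerical check with arbitrary vectors and an arbitrary β:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(5)        # current gradient g_k
d_prev = rng.standard_normal(5)   # previous direction d_{k-1}
beta = 0.7                        # arbitrary: the identity holds for ANY beta

# Three-term descent modification:
# d_k = -g_k + beta*d_{k-1} - beta*(g_k^T d_{k-1} / ||g_k||^2) g_k
d = -g + beta * d_prev - beta * (g @ d_prev / (g @ g)) * g

print(np.isclose(g @ d, -(g @ g)))  # True: g_k^T d_k = -||g_k||^2
```

The third term cancels the β_k g_k^T d_{k-1} contribution exactly, which is why the sufficient descent property is independent of how β_k is chosen.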
... Furthermore, when orthogonality is restored, they exploit the second-order information obtained from the previous step and develop an L-BFGS based preconditioner in order to accelerate the convergence of their method. Motivated by the previous works, we present a limited memory descent conjugate gradient method which consists of a preconditioned version of the descent Perry method [22]. An important property of our proposed method is that it corrects the loss of orthogonality that can occur in ill-conditioned optimization problems. ...
Article
Full-text available
In this work, we present a new limited memory conjugate gradient method which is based on the study of Perry’s method. An attractive property of the proposed method is that it corrects the loss of orthogonality that can occur in ill-conditioned optimization problems, which can decelerate the convergence of the method. Moreover, an additional advantage is that the memory is only used to monitor the orthogonality relatively cheaply; and when orthogonality is lost, the memory is used to generate a new orthogonal search direction. Under mild conditions, we establish the global convergence of the proposed method provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate the efficiency and robustness of the proposed method.
... However, the global convergence result for general functions was not established then. Livieris and Pintelas [18] proposed a modified Perry's conjugate gradient method based on the modified secant condition. An important property of their method is that it satisfies the sufficient descent condition independent of the line search strategy used. ...
... Recently, CG algorithms for solving unconstrained optimization have been extended to address monotone nonlinear systems of equations. For instance, Dai et al. [23] extended the modified Perry CG method in [18] to solve the unconstrained version of problem (4) based on the hyperplane projection method proposed by Solodov and Svaiter [24]. Preliminary numerical results presented show that their method works well. ...
... Step 2. Compute d_k using Equations (12), (18) and (19). ...
Article
Full-text available
In this paper, we propose a Perry-type derivative-free algorithm for solving systems of nonlinear equations. The algorithm is based on the well-known BFGS quasi-Newton method with a modified Perry's parameter. The global convergence of the algorithm is established without assumption on the regularity or boundedness of the solution set. Meanwhile, the sequence of iterates generated by the algorithm converges globally to the solution of the problem provided that the function is Lipschitz continuous and monotone. Preliminary numerical experiments on some collection of general nonlinear equations and convex constrained nonlinear monotone equations demonstrate the efficiency of the algorithm. Moreover, we successfully apply the proposed algorithm to solve signal recovery problem.
... where y_k = g_{k+1} − g_k is the gradient change and s_k = x_{k+1} − x_k, which was considered one of the most efficient and robust conjugate gradient methods [7-12]. An advantage of β_k^P is that the direction generated by (3) and (6) has the quasi-Newton form: ...
... with t = 1. In [13], based on the mean value theorem and the quasi-Newton equation, Dai and Liao proposed the above Dai-Liao conjugacy condition (9). By using condition (9), Dai and Liao obtained the following new formula for β_k ...
... In this paper, we consider the combination of the Perry update matrix (8) and the Dai-Liao conjugacy condition (9). Instead of ensuring that the Dai-Liao conjugacy condition (9) holds exactly, we construct a modified Perry update matrix to guarantee the symmetric property and discuss the adaptive choice for the parameter in the model. ...
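The Dai-Liao conjugacy condition mentioned in these snippets can be verified directly: with β chosen by the Dai-Liao formula, the new direction satisfies d_{k+1}^T y_k = −t g_{k+1}^T s_k identically. A small numerical check (random vectors and an arbitrary t, purely illustrative; the equation numbers follow the snippet, not this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
g_next = rng.standard_normal(4)  # g_{k+1}
d = rng.standard_normal(4)       # d_k
y = rng.standard_normal(4)       # y_k = g_{k+1} - g_k
s = rng.standard_normal(4)       # s_k = x_{k+1} - x_k
t = 0.3                          # Dai-Liao parameter

# Dai-Liao formula: beta = (g_{k+1}^T y_k - t g_{k+1}^T s_k) / (d_k^T y_k)
beta_dl = (g_next @ y - t * (g_next @ s)) / (d @ y)
d_next = -g_next + beta_dl * d

# Dai-Liao conjugacy condition: d_{k+1}^T y_k = -t g_{k+1}^T s_k
print(np.isclose(d_next @ y, -t * (g_next @ s)))  # True
```

Substituting β into d_{k+1}^T y_k = −g_{k+1}^T y_k + β d_k^T y_k makes the g_{k+1}^T y_k terms cancel, leaving exactly −t g_{k+1}^T s_k.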
Article
Full-text available
In this paper, we present a conjugate gradient method for solving unconstrained optimization problems. Motivated by Perry conjugate gradient method and Dai-Liao method, an improved Perry update matrix is proposed to overcome the non-symmetric positive definite property of the Perry matrix. The parameter in the update matrix is determined by minimizing the condition number of the iterative matrix which can ensure the positive definite property. The obtained method can also be considered as a modified form of CG-DESCENT method with an adjusted term. Under some mild conditions, the presented method is global convergent. Numerical experiments under CUTEst environment show that the proposed algorithm is promising.
... Conjugate gradient (CG) methods for unconstrained optimization problems are the preferable choice and the most appropriate alternative to the aforementioned schemes when dealing with problems of large dimension. This is due to the fact that the scheme requires less memory to implement and possesses strong convergence properties [7,49]. The CG iterative schemes are mostly applied to solve the following minimization problem: min_{x ∈ R^n} f(x), (1.3) with f : R^n → R representing a real-valued nonlinear mapping whose gradient is attainable. ...
... As with typical CG methods, the scheme is derivative-free and utilizes the line search procedure, modified by Grippo et al. [27] and Li-Fukushima [44]. Furthermore, Dai et al. [16] also proposed a derivative-free method for solving monotone nonlinear equations by combining the modified Perry method [49] with the projection method [64]. The scheme converges globally and is considered as an improvement of the classical Perry method [49]. ...
... By employing a modified secant equation and carrying out an eigenvalue study of a modified Dai-Liao search direction matrix, Waziri et al. [72] proposed an effective CG method, which converges globally for nonlinear systems of equations. ...
Article
Full-text available
Notwithstanding its efficiency and nice attributes, most research on the iterative scheme by Hager and Zhang [Pac. J. Optim. 2(1) (2006) 35-58] is focused on unconstrained minimization problems. Inspired by this and recent works by Waziri et al. [Appl. Math. Comput. 361 (2019) 645-660], Sabi’u et al. [Appl. Numer. Math. 153 (2020) 217-233], and Sabi’u et al. [Int. J. Comput. Meth., doi:10.1142/S0219876220500437], this paper extends the Hager-Zhang (HZ) approach to nonlinear monotone systems with convex constraints. Two new HZ-type iterative methods are developed by combining the prominent projection method of Solodov and Svaiter [Springer, pp 355-369, 1998] with HZ-type search directions, which are obtained by developing two new parameter choices for the Hager-Zhang scheme. The first choice is obtained by minimizing the condition number of a modified HZ direction matrix, while the second is realized using singular value analysis and minimizing the spectral condition number of the nonsingular HZ search direction matrix. Interesting properties of the schemes include handling non-smooth functions and generating descent directions. Under standard assumptions, the methods' global convergence is obtained, and numerical experiments with recent methods in the literature indicate that the proposed methods are promising. The schemes' effectiveness is further demonstrated by their application to sparse signal and image reconstruction problems, where they outperform some recent schemes in the literature.
... where s_{k−1} = x_k − x_{k−1} and y_{k−1} = g_k − g_{k−1}. This conjugate gradient method is based on a quasi-Newton philosophy and has been considered one of the most efficient conjugate gradient methods in the context of unconstrained minimization [1-3,5,6,11,29,37]. Over the previous decade, much effort has been devoted to developing new conjugate gradient methods which have good computational efficiency and also possess strong convergence properties. ...
... Recently, researchers [13,29,30,32,38] paid special attention to hybridizing the above two approaches in order to attain good numerical performance and possess strong convergence properties. More specifically, new conjugate gradient methods have been proposed which maintain the attractive feature of generating descent directions, thereby avoiding the usual inefficient restarts. ...
... Moreover, in order to have a convex combination in (10) we restrict the values of λ_k to the interval [0, 1]; namely, if λ_k < 0 then we set λ_k = 0, and if λ_k > 1 then we set λ_k = 1. Next, taking into consideration the theoretical advantages of the modified secant equation (9) and the computational efficiency of Perry's conjugate gradient method [29,30,40,41], we propose a modification of Perry's formula (4) as follows: ...
Article
In this work, we propose a new conjugate gradient method which consists of a modification of Perry’s method and ensures sufficient descent independent of the accuracy of the line search. An important property of our proposed method is that it achieves a high-order accuracy in approximating the second order curvature information of the objective function by utilizing a new modified secant condition. Moreover, we establish that the proposed method is globally convergent for general functions provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate that our proposed method is preferable and in general superior to classical conjugate gradient methods in terms of efficiency and robustness.
... In the recent decade, conjugate gradient (CG) methods have been given much attention by researchers dealing with large-scale optimization problems. This is due to the method's simple implementation, low memory requirement and global convergence properties [5,43]. ...
... An extension of the PRP method [51,52] was proposed by Yu [69,70] to solve large-scale nonlinear systems with monotone line search strategies, which are modifications of the Grippo et al. [21] and Li and Fukushima [38] methods. As an improvement of Perry's CG method for unconstrained optimization, Dai et al. [15] proposed a derivative-free method for solving large-scale nonlinear monotone equations by combining the modified Perry CG method [43] and the hyperplane projection technique of Solodov and Svaiter [56]. By replacing the gradients of the classical PRP method [51,52] with the residuals combined with the hyperplane projection technique, Zhou and Wang [81] presented a derivative-free residual method for large-scale monotone nonlinear equations, which may also be nonsmooth. ...
Article
Full-text available
In this paper, we present two Dai-Yuan type iterative methods for solving large-scale systems of nonlinear monotone equations. The methods can be considered as extensions of the classical Dai-Yuan conjugate gradient method for unconstrained optimization. By employing two different approaches, the Dai-Yuan method is modified to develop two different search directions, which are combined with the hyperplane projection technique of Solodov and Svaiter. The first search direction is obtained by carrying out an eigenvalue study of the search direction matrix of an adaptive DY scheme, while the second is obtained by minimizing the distance between two adaptive versions of the DY method. Global convergence of the methods is established under mild conditions, and preliminary numerical results show that the proposed methods are promising and more effective compared to some existing methods in the literature.
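The hyperplane projection technique of Solodov and Svaiter, which many of the works above combine with CG directions, reduces to a single closed-form step: project the current iterate onto the hyperplane determined by the residual at a trial point. A minimal sketch (the function name and test vectors are illustrative assumptions):

```python
import numpy as np

def projection_step(x, z, Fz):
    """One Solodov-Svaiter step: project x onto the hyperplane
    H = {u : Fz^T (u - z) = 0} that separates x from the solution set."""
    return x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz

rng = np.random.default_rng(2)
x = rng.standard_normal(3)    # current iterate x_k
z = rng.standard_normal(3)    # trial point z_k = x_k + alpha_k d_k
Fz = rng.standard_normal(3)   # residual F(z_k)

x_new = projection_step(x, z, Fz)
print(np.isclose(Fz @ (x_new - z), 0.0))  # True: x_new lies on H
```

For monotone F, every solution lies on the far side of H from x_k, which is what makes the projected iterate strictly closer to the solution set and drives the global convergence arguments cited in these abstracts.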
... For instance, Zhang and Zhou [17] extended the work of Birgin and Martínez [18] for unconstrained optimization problems by combining it with the projection method and proposed a spectral gradient projection-based algorithm for solving (1). Dai et al. [19] extended the modified Perry's CG method [20] for solving unconstrained optimization problems to solve (1) by combining it with the projection method. Liu and Li [21] incorporated the Dai-Yuan (DY) [22] CG method with the projection method and proposed a spectral Dai-Yuan (SDY) projection method for solving nonlinear monotone equations. ...
... since M_2 > M_1. Multiplying (20) by ‖d_k‖, we get ...
Article
Full-text available
This paper proposes a modified scaled spectral-conjugate-based algorithm for finding solutions to monotone operator equations. The algorithm is a modification of the work of Li and Zheng in the sense that the uniformly monotone assumption on the operator is relaxed to mere monotonicity. Furthermore, unlike the work of Li and Zheng, the search directions of the proposed algorithm are shown to be descent and bounded independent of the monotonicity assumption. Moreover, global convergence is established under some appropriate assumptions. Finally, numerical examples on some test problems are provided to show the efficiency of the proposed algorithm compared to that of Li and Zheng.
... We focus our study on conjugate gradient and quasi-Newton methods [4,5,19,38,40,47,57], and all of these algorithms are based on the following three steps. ...
... Notice that a local minimum may coincide with the global one, and in this case we have G ∞ = G * . We select in our analysis the following ten optimization algorithms due to their wide use in practice, good speed of convergence, and general acceptance in the literature: • Perry (P), see [38], [47]; • Dai-Yuan (DY), see [20]; • Liu-Storey (LS), see [37]. ...
Chapter
Full-text available
This chapter presents conditions for which the optimal finite-stage cost, divided by the number of stages, converges to the optimal long-run average cost as the number of stages goes to infinity. The main condition is based on a controllability to the origin property. The discrete-time stochastic system is linear with respect to the system state but the control possess a general structure, possibly nonlinear. To illustrate the effectiveness of the result, an application to the simultaneous state-feedback control problem is considered.
Chapter
Full-text available
In this chapter, we present the finite-time control problem of Markov jump linear systems for the case in which the controller does not have access to the state of the Markov chain. A necessary optimal condition, which is nonlinear with respect to the optimizing variables, is introduced and the corresponding solution is obtained through a variational convergent method. We illustrate the practical usefulness of the derived approach by applying it in the speed control of a real DC motor device subject to abrupt power failures.
... Some iterative methods for solving these problems include Newton and quasi-Newton schemes [9,14,31,54], the Gauss-Newton methods [16,31], the Levenberg-Marquardt methods [25,28,40], the derivative-free methods [55], the subspace methods [64], the tensor methods [8], and the trust-region methods [51,65,69]. Conjugate gradient (CG) methods represent an ideal choice for mathematicians and engineers engaged in large-scale problems because of their low memory requirement and strong global convergence properties [5,37]. Generally, the nonlinear CG method is used to solve large-scale problems in the following form: ...
... Yu [60,61] extended the PRP method [45,46] to solve large-scale nonlinear systems with monotone line search strategies, which are modifications of the Grippo et al. [20] and Li-Fukushima [32] schemes. As further research on the Perry CG method, Dai et al. [13] combined the modified Perry CG method [37] and the hyperplane projection technique of Solodov and Svaiter [48] to propose a derivative-free method for solving large-scale nonlinear monotone equations. Also, by replacing the gradients of the unmodified PRP method [45,46] with the residuals, combined with the hyperplane projection technique, Zhou and Wang [74] presented a derivative-free residual method for large-scale monotone nonlinear equations, which may also be nonsmooth. ...
Article
Full-text available
In this paper, we propose two conjugate gradient methods for solving large-scale monotone nonlinear equations. The methods are developed by combining the hyperplane projection method by Solodov and Svaiter (Reformulation: nonsmooth, piecewise smooth, semismooth and smoothing methods. Springer, pp 355–369, 1998) and two modified search directions of the famous Dai and Liao (Appl Math Optim 43(1): 87–101, 2001) method. It is shown that the proposed schemes satisfy the sufficient descent condition. The global convergence of the methods are established under mild conditions, and computational experiments on some benchmark test problems show that the methods are promising.
Article
Following a recent attempt by Waziri et al. [2019] to find an appropriate choice for the nonnegative parameter of the Hager–Zhang conjugate gradient method, we have proposed two adaptive options for the Hager–Zhang nonnegative parameter by analyzing the search direction matrix. We also used the proposed parameters with the projection technique to solve convex constraint monotone equations. Furthermore, the global convergence of the methods is proved using some proper assumptions. Finally, the efficacy of the proposed methods is demonstrated using a number of numerical examples.
... Hence v_k is well-defined and so is β_k^PMHS by (18). Now taking the inner product of the search direction defined by (10) ...
... The first and second inequalities follow from the triangle inequality and the Cauchy-Schwarz inequality, respectively. The third inequality follows from (18) and (29), while the fourth inequality follows from (26). If we let c : ...
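The two classical bounds invoked in the snippet are easy to sanity-check numerically (vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal(6), rng.standard_normal(6)

# Triangle inequality and Cauchy-Schwarz, the two bounds used in such proofs
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # True
print(abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v))             # True
```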
Article
Full-text available
A number of practical problems in science and engineering can be converted into a system of nonlinear equations, and it is therefore imperative to develop efficient methods for solving such equations. Due to their nice convergence properties and low storage requirements, conjugate gradient methods are considered among the most efficient for solving large-scale nonlinear equations. In this paper, a modified conjugate gradient method is proposed based on a projection technique and a suitable line search strategy. The proposed method is matrix-free and its sequence of search directions satisfies the sufficient descent condition. Under the assumption that the underlying function is monotone and Lipschitz continuous, the global convergence of the proposed method is established. The method is applied to solve some benchmark monotone nonlinear equations and also extended to solve 1-norm regularized problems to reconstruct a sparse signal in compressive sensing. Numerical comparison with some existing methods shows that the proposed method is competitive, efficient and promising.
Chapter
Full-text available
Markov jump linear systems represent a class of stochastic systems able to represent processes subject to abrupt random variations. In this book, we present some recent advances for the control of such a class of systems, in particular when the controller does not have access to the Markovian mode. The book also presents a real-time application for direct current motors, illustrating the practical usefulness of Markov jump linear systems.
... where α_k is the stepsize obtained by some line search, and d_k is the search direction generated by d_{k+1} = −g_{k+1} + β_{k+1} d_k, d_0 = −g_0, in which β_k is known as the CG parameter and has many different choices. We refer to the book [9], the survey paper [14] and some recent references [6,5,7,16,
... This inequality together with (16) and (47) implies that ...
... Moreover, if β_k is specified by an existing conjugate gradient formula, we obtain the corresponding modified conjugate gradient method. Along this line, many related conjugate gradient methods have been extensively studied which possess global convergence for general functions and are also computationally competitive with classical methods [5,20,43,44]. On the basis of this idea, Livieris and Pintelas [19,21,22] proposed some descent conjugate gradient training algorithms with promising results. Based on their numerical experiments, the authors concluded that the sufficient descent property led to a significant improvement in the efficiency of the training process. ...
Article
In this paper, we propose a new class of conjugate gradient algorithms for training neural networks which is based on a new modified nonmonotone scheme proposed by Shi and Wang (2011). The utilization of a nonmonotone strategy enables the training algorithm to overcome the case where the sequence of iterates runs into the bottom of a curved narrow valley, a common occurrence in the neural network training process. Our proposed class of methods ensures sufficient descent, thereby avoiding the usual inefficient restarts, and it is globally convergent under mild conditions. Our experimental results provide evidence that the proposed nonmonotone conjugate gradient training methods are efficient, outperforming classical methods and providing more stable, efficient and reliable learning.
... Numerical results, obtained by using the modified Armijo line search and the strong Wolfe line search, show that the proposed method is more effective than the classical PRP method. Moreover, the new scheme of Zhang et al. has been extensively studied and adopted by many researchers; see [20-25]. In these references, the authors proposed modifications of the classical conjugate gradient methods which ensure sufficient descent d_k^T g_k = −‖g_k‖², possess global convergence for general functions and exhibit very good computational performance. ...
Article
This paper establishes a spectral conjugate gradient method for solving unconstrained optimization problems, where the conjugate parameter and the spectral parameter satisfy a restrictive relationship. The search direction is sufficient descent without restarts in per-iteration. Moreover, this feature is independent of any line searches. Under the standard Wolfe line searches, the global convergence of the proposed method is proved when [Formula presented] holds. The preliminary numerical results are presented to show effectiveness of the proposed method.
... Step size largely affects the convergence speed of the conjugate gradient algorithm. If we choose the Wolfe line search rules, we have the following formula [20]: ...
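The standard Wolfe line search rules referred to here consist of an Armijo sufficient-decrease test plus a curvature test. A minimal checker with an illustrative 1-D quadratic (the constants c1, c2 and the example are generic textbook choices, not the cited paper's settings):

```python
import numpy as np

def wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard Wolfe conditions for a candidate step alpha:
    sufficient decrease (Armijo) plus the curvature condition, 0 < c1 < c2 < 1."""
    g0d = grad(x) @ d
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * g0d
    curvature = grad(x + alpha * d) @ d >= c2 * g0d
    return bool(armijo and curvature)

# 1-D quadratic f(x) = x^2, start at x = 1, direction d = -1
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x, d = np.array([1.0]), np.array([-1.0])

print(wolfe_ok(f, grad, x, d, 1.0))   # True: the step reaches the minimizer
print(wolfe_ok(f, grad, x, d, 0.01))  # False: curvature condition fails
```

The curvature condition is what rules out the tiny steps accepted by Armijo alone, which is why Wolfe-type rules are the usual requirement in CG convergence proofs.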
Article
Full-text available
Our work is devoted to a class of optimal control problems for parabolic partial differential equations. Because of the partial differential equation constraints, the optimization problem is rather difficult to solve. The gradient of the cost function can be found by the adjoint problem approach, and based on this approach the gradient is proved to be Lipschitz continuous. An improved conjugate gradient method is applied to solve this optimization problem, and the algorithm is proved to be convergent. The method is applied to set-point values in the continuous casting secondary cooling zone. Based on real data from a plant, simulation experiments show that the method can ensure the steel billet quality. From these experimental results, it is concluded that the improved conjugate gradient algorithm is convergent and the method is effective for optimal control problems of partial differential equations.
... Conjugate gradient (CG) methods form an important class of algorithms used in solving large-scale unconstrained optimization problems. They represent an ideal choice for mathematicians and engineers engaged in large-scale problems because of their low memory requirement and strong global convergence properties [5,36]. Generally, the nonlinear CG method is used to solve large-scale problems in the following form: ...
Article
Full-text available
In this paper, we present a family of Perry conjugate gradient methods for solving large-scale systems of monotone nonlinear equations. The methods are developed by combining modified versions of the Perry (Oper. Res. Tech. Notes 26(6), 1073-1078, 1978) conjugate gradient method with the hyperplane projection technique of Solodov and Svaiter (1998). Global convergence of the methods is established, and preliminary numerical results show that the proposed methods are promising and more effective compared to some existing methods in the literature.
... Yu [58,59] extended the PRP method [45] to solve large-scale nonlinear systems with monotone line search strategies, which are modifications of the Grippo-Lampariello-Lucidi [29] and Li-Fukushima [35] schemes. As further research on Perry's conjugate gradient method, Dai et al. [21] combined the modified Perry conjugate gradient method [41] and the hyperplane projection technique of Solodov and Svaiter [48] to propose a derivative-free method for solving large-scale nonlinear monotone equations. By combining the descent Dai-Liao CG method of Babaie-Kafaki and Ghanbari [54] and the projection method in [48], Abubakar and Kumam [2] proposed a descent Dai-Liao CG method for nonlinear equations. ...
Article
Full-text available
In this paper, we propose a Dai–Liao (DL) conjugate gradient method for solving large-scale system of nonlinear equations. The method incorporates an extended secant equation developed from modified secant equations proposed by Zhang et al. (J Optim Theory Appl 102(1):147–157, 1999) and Wei et al. (Appl Math Comput 175(2):1156–1188, 2006) in the DL approach. It is shown that the proposed scheme satisfies the sufficient descent condition. The global convergence of the method is established under mild conditions, and computational experiments on some benchmark test problems show that the method is efficient and robust.
... Yu [25,26] extended the PRP method for unconstrained optimization [16,18] to solve large-scale nonlinear systems with monotone line search strategies, which are modifications of the Grippo et al. [20] and Li and Fukushima [9] schemes. As further research on the Perry conjugate gradient method for unconstrained optimization [47], Dai et al. [30] combined the modified Perry conjugate gradient method for unconstrained optimization problems [29] and the hyperplane projection technique of Solodov and Svaiter [28] to propose a derivative-free method for solving large-scale nonlinear monotone equations. By replacing the gradients of the unmodified PRP nonlinear conjugate gradient method [16,18] with the residuals, combined with the hyperplane projection technique, Zhou and Wang [13] presented a derivative-free residual method for large-scale monotone nonlinear equations, which may also be nonsmooth. ...
Article
Full-text available
This paper presents two modified Hager-Zhang Conjugate gradient methods for solving large-scale system of monotone nonlinear equations. The methods were developed by combining modified forms of the one-parameter method by Hager and Zhang (2006) with the hyperplane projection technique. Global convergence and numerical results show that the proposed methods are promising and more efficient compared to the methods presented by Mushtak and Keyvan (2018) and Sun et al. (2017).
... Recently, CG algorithms for solving unconstrained optimization problems have prompted several researchers to extend them to solve the system of monotone nonlinear operator problems (1.4). For instance, Dai et al. [14] extended the modified Perry CG method in [15] to solve the unconstrained version of problem (1.4), based on the hyperplane projection method proposed by Solodov and Svaiter in [16]. The numerical results presented show that their method works well. ...
Article
Full-text available
The convex constrained nonlinear equation problem is to find a point q with the property that q ∈ D, where D is a nonempty closed convex subset of the Euclidean space R^n. The convex constrained problem arises in many practical applications such as chemical equilibrium systems, economic equilibrium problems, and the power flow equations. In this paper, we extend the modified Dai-Yuan nonlinear conjugate gradient method with the sufficient descent property, proposed for large-scale optimization problems, to solve convex constrained nonlinear equations and establish the global convergence of the proposed algorithm under certain mild conditions. Our result is a significant improvement compared with related methods for solving convex constrained nonlinear equations. MSC: 90C30; 65K05
... Another extended algorithm was proposed by Dai et al. [9] for solving problem (1.1). The algorithm is a combination of the projection technique and Perry's conjugate gradient algorithm proposed in [25]. Likewise, an extension of a scaled conjugate gradient (SCG) algorithm by Andrei [7] was proposed by Ou and Li [27]. ...
Preprint
This paper proposes two new derivative-free algorithms for solving convex constrained nonlinear monotone equations and signal recovery problems arising in compressive sensing. The algorithms combine three-term conjugate residual algorithms for unconstrained optimization problems with the projection technique. The search directions generated by both algorithms are bounded and satisfy the sufficient descent condition independently of the line search. Convergence of the algorithms is obtained under some assumptions. Finally, numerical examples are reported to show the performance of the algorithms compared with others.
... Yu (2010) and Yu (2011) extended the PRP method for unconstrained optimization (Polyak 1969) to solve large-scale nonlinear systems of equations, with modifications of the Grippo et al. (1986) and Li and Fukushima (2000) methods. As further research on Perry's CG method for unconstrained optimization, Dai et al. (2015) combined the modified Perry CG method for unconstrained optimization problems (Livieris and Pintelas 2012) and the hyperplane projection technique of Solodov and Svaiter (1998) to propose a derivative-free method for solving large-scale nonlinear equations. Over the years, however, many researchers have also tackled systems of nonlinear equations using memory-less techniques. ...
Article
Full-text available
In this paper, we propose a hybrid conjugate gradient (CG) method based on a convex combination of the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP) parameters, together with a quasi-Newton update. This is made possible by using a self-scaling memory-less Broyden update together with a hybrid direction consisting of two CG parameters. An important property of the new algorithm is that it generates a descent search direction via a non-monotone type line search. The global convergence of the algorithm is established under appropriate conditions. Finally, numerical experiments on some benchmark test problems demonstrate the effectiveness of the proposed algorithm over some existing alternatives.
... Another derivative-free method was proposed by Dai et al. [23] for solving the CCMN problem (1). The method is a combination of the projection technique and Perry's conjugate gradient algorithm proposed in [24]. Likewise, the scaled conjugate gradient (SCG) method by Andrei [25] was combined with the projection technique by Ou and Li [4], resulting in a derivative-free method referred to as SCG. ...
Article
Full-text available
This article introduces a derivative-free method for solving convex constrained nonlinear equations involving a monotone operator with a Lipschitz condition imposed on the underlying operator. The proposed method incorporates the projection technique with the three-term Polak-Ribière-Polyak conjugate gradient method for the unconstrained optimization problem proposed by Min Li [J. Ind. Manag. Optim. 16.1 (2020): 245]. Under some standard assumptions, we establish the global convergence of the proposed method. Furthermore, we provide some numerical examples and an application to an image deblurring problem to illustrate the effectiveness and competitiveness of the proposed method. Numerical results indicate that the proposed method is remarkably promising.
... We focus our study on conjugate gradient and quasi-Newton methods [18][19][20][21][22][23][24], and all of these algorithms are based on the following three steps. ...
Article
Full-text available
The paper formulates the static control problem of Markov jump linear systems, assuming that the controller does not have access to the jump variable. We derive the expression of the gradient for the cost, motivated by the evaluation of 10 gradient-based optimization techniques. The numerical efficiency of these techniques is verified using data obtained from practical experiments. The corresponding solution is used to design a scheme to control the velocity of a real-time DC motor device subject to abrupt power failures. Copyright © 2014 John Wiley & Sons, Ltd.
Article
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to establish fast convergence of the methods. However, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method which always satisfies the sufficient descent condition. This paper makes use of both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.
Article
In this decade, nonlinear conjugate gradient methods have been focused on as effective numerical methods for solving large-scale unconstrained optimization problems. Especially, nonlinear conjugate gradient methods with the sufficient descent property have been studied by many researchers. In this paper, we review sufficient descent nonlinear conjugate gradient methods.
Article
In this paper, we propose a derivative-free method for solving large-scale nonlinear monotone equations. It combines the modified Perry's conjugate gradient method (I.E. Livieris, P. Pintelas, Globally convergent modified Perry's conjugate gradient method, Appl. Math. Comput. 218 (2012) 9197-9207) for unconstrained optimization problems and the hyperplane projection method (M.V. Solodov, B.F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, in: M. Fukushima, L. Qi (Eds.), Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers, 1998, pp. 355-369). We prove that the proposed method converges globally if the equations are monotone and Lipschitz continuous, without any differentiability requirement on the equations, which makes it possible to solve some nonsmooth equations. Another good property of the proposed method is that it is suitable for large-scale nonlinear monotone equations due to its low storage requirement. Preliminary numerical results show that the proposed method is promising.
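The hyperplane projection step that this class of methods borrows from Solodov and Svaiter admits a compact sketch. Below is a minimal Python illustration: the operator F and the step data are hypothetical toy values, and a practical method would choose the step length by a derivative-free line search rather than fix it.

```python
import numpy as np

def projection_step(F, x, d, alpha):
    """One hyperplane projection step (Solodov-Svaiter style).

    The trial point z = x + alpha*d defines the hyperplane
    {u : F(z)^T (u - z) = 0}, which separates x from the solution set
    of F(u) = 0 when F is monotone; x is projected onto it.
    """
    z = x + alpha * d
    Fz = F(z)
    return x - (Fz @ (x - z)) / (Fz @ Fz) * Fz

# Toy monotone operator F(u) = u (unique zero at the origin);
# the step data below are hypothetical.
F = lambda u: u
x = np.array([1.0, 1.0])
d = -F(x)                      # derivative-free residual direction
x_new = projection_step(F, x, d, alpha=0.5)
```

Because the projection never moves the iterate away from the solution set, the distance to a solution is non-increasing, which is the key fact behind the global convergence arguments quoted in these abstracts.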
Article
Conjugate gradient methods stand out as the most ideal iterative algorithms for solving nonlinear systems with large dimensions. This is due to the fact that they are implemented with little memory and because of their ability to converge globally to solutions of the problems considered. One of the most essential iterative methods in this category is the Polak-Ribière-Polyak (PRP) scheme, which is numerically effective, but its search directions are mostly not descent directions. In this paper, based upon the adaptive PRP scheme by Yuan et al. and the projection method, a numerically efficient PRP-type scheme for systems of monotone nonlinear equations is presented, where the solution is restricted to a closed convex set. Apart from the ability to generate descent search directions, which is quite vital for global convergence, a distinct novelty of the new scheme is its application in compressive sensing, where it is applied to restore blurry images. The scheme's global convergence is established with mild assumptions. Preliminary numerical results show that the method proposed is promising.
Article
Two new conjugate residual algorithms are presented and analyzed in this article. Specifically, the main functions in the system considered are continuous and monotone. The methods are adaptations of the scheme presented by Narushima et al. (SIAM J Optim 21: 212–230, 2011). By employing the famous conjugacy condition of Dai and Liao (Appl Math Optim 43(1): 87–101, 2001), two different search directions are obtained and combined with the projection technique. Apart from being suitable for solving smooth monotone nonlinear problems, the schemes are also ideal for non-smooth nonlinear problems. By employing basic conditions, global convergence of the schemes is established. Report of numerical experiments indicates that the methods are promising.
Article
Full-text available
This paper explores the convergence of nonlinear conjugate gradient methods without restarts and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribière method. Numerical experiments are presented.
Article
Full-text available
The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1} s_k = y_k*, where y_k* is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.
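In the notation of this abstract, the modified methods keep the shape of the standard BFGS update but replace y_k by the modified difference vector, so the new secant equation holds by construction. The following is a sketch consistent with the equation quoted above, not a formula copied from the paper:

```latex
B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
\;+\; \frac{y_k^{*}\,(y_k^{*})^{\top}}{(y_k^{*})^{\top} s_k},
\qquad y_k^{*} \;=\; y_k + A_k s_k .
```

Multiplying on the right by s_k, the second term cancels the first, leaving B_{k+1} s_k = y_k*, which is exactly the modified secant equation.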
Article
Full-text available
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which considers an inexact line search scheme but reduces to the old one if the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed. Convergence analysis for the two methods is provided. Our numerical results show that one of the methods is very efficient for the given test problems.
Article
Full-text available
In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the same good features of the Polak-Ribière method, while avoiding pathological situations.
Article
Full-text available
In this paper, we propose a three-parameter family of conjugate gradient methods for unconstrained optimization. The three-parameter family of methods not only includes the already-existing six practical nonlinear conjugate gradient methods, but subsumes some other families of nonlinear conjugate gradient methods as its subfamilies. With Powell's restart criterion, the three-parameter family of methods with the strong Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the three-parameter family of methods. This paper can also be regarded as a brief review of nonlinear conjugate gradient methods. Key words: unconstrained optimization, conjugate gradient methods, line search, global convergence. AMS classification: 65K, 90C.
Article
Full-text available
The development of software for minimization problems is often based on a line search method. We consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the problem of determining a point that satisfies these two conditions in terms of finding a point in a set T(μ). We describe a search algorithm for this problem that produces a sequence of iterates that converge to a point in T(μ) and that, except for pathological cases, terminates in a finite number of steps. Numerical results for an implementation of the search algorithm on a set of test functions show that the algorithm terminates within a small number of iterations.
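The two conditions this abstract refers to are the sufficient-decrease (Armijo) and curvature conditions, i.e. the weak Wolfe conditions. A minimal Python sketch of the acceptance test follows; the quadratic example is hypothetical, and a real line search (as in the paper) would search for an acceptable step rather than merely check one.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the sufficient-decrease and curvature conditions for step alpha."""
    fx, g0 = f(x), grad(x)
    x_new = x + alpha * d
    armijo = f(x_new) <= fx + c1 * alpha * (g0 @ d)       # sufficient decrease
    curvature = grad(x_new) @ d >= c2 * (g0 @ d)          # curvature condition
    return bool(armijo and curvature)

# Hypothetical quadratic f(u) = 0.5||u||^2: a full step along the steepest
# descent direction satisfies both conditions; a tiny step fails curvature.
f = lambda u: 0.5 * (u @ u)
grad = lambda u: u
x = np.array([1.0, 0.0])
d = -grad(x)
```

The curvature condition is what rules out the arbitrarily small steps that the Armijo condition alone would accept.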
Article
Full-text available
The purpose of this paper is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools are available by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognising that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools already provide a link between the SIF and a number of existing packages, including MINOS and OSL. In ad...
Article
Full-text available
In this paper, by the use of Gram-Schmidt orthogonalization, we propose a class of modified conjugate gradient methods. The methods are modifications of the well-known conjugate gradient methods including the PRP, the HS, the FR and the DY methods. A common property of the modified methods is that the direction generated by any member of the class satisfies a sufficient descent condition. Moreover, if the line search is exact, each modified method reduces to the corresponding standard conjugate gradient method. In particular, we study the modified YT and YT+ methods. Under suitable conditions, we prove the global convergence of these two methods. Extensive numerical experiments show that the proposed methods are efficient for the test problems from the CUTE library.
Article
Full-text available
If an inexact line search which satisfies certain standard conditions is used, then it is proved that the Fletcher-Reeves method has a descent property and is globally convergent in a certain sense.
Article
Full-text available
This paper is concerned with the open problem whether BFGS method with inexact line search converges globally when applied to nonconvex unconstrained optimization problems. We propose a cautious BFGS update and prove that the method with either Wolfe-type or Armijo-type line search converges globally if the function to be minimized has Lipschitz continuous gradients. Key words: unconstrained optimization, BFGS method, global convergence.
Article
Full-text available
Conjugate gradient methods are widely used for unconstrained optimization, especially large-scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, similar to those required by the Zoutendijk condition. Key words: unconstrained optimization, new conjugate gradient method, Wolfe conditions, global convergence. AMS subject classifications: 65K, 90C.
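The method of this abstract is the one that later entries in this list label Dai-Yuan (DY). Its conjugate gradient parameter and direction update can be sketched in a few lines of Python; the vectors below are hypothetical one-step data, not taken from the paper.

```python
import numpy as np

def beta_dy(g_new, g_old, d):
    """Dai-Yuan parameter: beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k))."""
    return (g_new @ g_new) / (d @ (g_new - g_old))

def next_direction(g_new, g_old, d):
    """CG direction update d_{k+1} = -g_{k+1} + beta_k d_k."""
    return -g_new + beta_dy(g_new, g_old, d) * d

# Hypothetical one-step data
g_old = np.array([1.0, 0.0])
g_new = np.array([0.5, 0.0])
d = np.array([-1.0, 0.0])
b = beta_dy(g_new, g_old, d)
d_next = next_direction(g_new, g_old, d)
```

Under the Wolfe curvature condition the denominator d_k^T(g_{k+1} - g_k) is positive, which is what makes the parameter well defined and the analysis work without strong Wolfe conditions.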
Article
We develop a new nonmonotone line search for the PRP (Polak-Ribière-Polyak) conjugate gradient method [E. Polak and G. Ribière, Rev. Franç. Inform. Rech. Opér. 3, No. 16, 35-43 (1969; Zbl 0174.48001); B. T. Polyak, U.S.S.R. Comput. Math. Math. Phys. 9 (1969), No. 4, 94-112 (1971); translation from Zh. Vychisl. Mat. Mat. Fiz. 9, 807-821 (1969; Zbl 0229.49023)] for minimizing functions having Lipschitz continuous partial derivatives. The nonmonotone line search can guarantee the global convergence of the original PRP method under some mild conditions. Numerical experiments show that the PRP method with the new nonmonotone line search is available and efficient in practical computation.
Article
A class of new spectral conjugate gradient methods is proposed in this paper. First, we modify the spectral Perry conjugate gradient method, which is the best spectral conjugate gradient algorithm SCG by Birgin and Martinez [E.G. Birgin and J.M. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001), 117-128], such that it possesses the sufficient descent property for any (inexact) line search. It is shown that, for strongly convex functions, the method is globally convergent. Further, a global convergence result for nonconvex minimization is established when the line search fulfils the Wolfe line search conditions. Some other spectral conjugate gradient methods with guaranteed descent are also presented. Numerical comparisons are given with both the SCG and CG_DESCENT methods using the unconstrained optimization problems in the CUTE library.
Article
We consider the global convergence of conjugate gradient methods without restarts, assuming exact arithmetic and exact line searches, when the objective function is twice continuously differentiable and has bounded level sets. Most of our attention is given to the Polak-Ribière algorithm, and unfortunately we find examples showing that the calculated gradients can remain bounded away from zero. The examples, which have only two variables, also show that some variable metric algorithms for unconstrained optimization need not converge. However, a global convergence theorem is proved for the Fletcher-Reeves version of the conjugate gradient method.
Article
In this paper, we propose two new hybrid nonlinear conjugate gradient methods, which produce sufficient descent search direction at every iteration. This property depends neither on the line search used nor on the convexity of the objective function. Under suitable conditions, we prove that the proposed methods converge globally for general nonconvex functions. The numerical results show that both hybrid methods are efficient for the given test problems from the CUTE library.
Article
Multistep quasi-Newton methods for unconstrained optimization were introduced by the authors in several papers. At each iteration, these methods employ two polynomials, one to define a path interpolating recent iterates in the variable space and the other to approximate the gradient as the path is followed. Numerical experiments strongly indicated that several multistep methods yield substantial computational gains over the standard (one-step) BFGS method. In this paper, we consider how to modify the structure of such methods to provide a more general model of the gradient, with the intention of improving the approximation. The results of numerical experiments on the new methods are reported and compared with those produced by existing methods.
Article
In previous work, the authors (1993, 1994) developed the concept of multi-step quasi-Newton methods, based on the use of interpolating polynomials determined by data from the m most recent steps. Different methods for parametrizing these polynomials were studied by the authors (1993), and several methods were shown (empirically) to yield substantial gains over the standard (one-step) BFGS method for unconstrained optimization. In this paper, we will consider the issue of how to incorporate function-value information within the framework of such multi-step methods. This is achieved, in the case of two-step methods, through the use of a carefully chosen rational form to interpolate the three most recent iterates. The results of numerical experiments on the new methods are reported.
Article
In this article, a new conjugate gradient method based on the MBFGS secant condition is derived, which is regarded as a modified version of the Dai–Liao method or the Yabe–Takano method. This method is shown to be globally convergent under some assumptions. A new feature is that the proof of global convergence of this method is very simple, without proving the so-called Property (*) given by Gilbert and Nocedal for general unconstrained optimization problems. Our numerical results show that this method is efficient for the given test problems.
Article
This paper reviews some of the most successful methods for unconstrained, constrained and nondifferentiable optimization calculations. Particular attention is given to the contribution that theoretical analysis has made to the development of algorithms. It seems that practical considerations provide the main new ideas, and that subsequent theoretical studies give improvements to algorithms, coherence to the subject, and better understanding.
Article
This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
Chapter
Conjugate gradient methods are a class of important methods for solving linear equations and for solving nonlinear optimization. In this article, a review on conjugate gradient methods for unconstrained optimization is given. They are divided into early conjugate gradient methods, descent conjugate gradient methods and sufficient descent conjugate gradient methods. Two general convergence theorems are provided for the conjugate gradient method assuming the descent property of each search direction. Some research issues on conjugate gradient methods are mentioned.
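The generic iteration surveyed in this chapter can be illustrated end to end. The Python sketch below runs nonlinear CG with the Fletcher-Reeves parameter on a hypothetical quadratic; the fixed step length is a deliberate simplification, since every method in the survey pairs the direction update with a line search.

```python
import numpy as np

def cg_fr(grad, x0, alpha=0.1, iters=200):
    """Nonlinear CG with the Fletcher-Reeves parameter and a fixed step
    length (illustrative only; practical methods use a line search)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # initial steepest descent direction
    for _ in range(iters):
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves parameter
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return x

# Minimize the hypothetical quadratic f(u) = 0.5||u||^2 (gradient = u)
x_star = cg_fr(lambda u: u, np.array([1.0, 1.0]))
```

Swapping the `beta` line for another formula (PRP, HS, DY, and so on) reproduces the different members of the class, which is why the surveys organize the methods by their choice of this single scalar.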
Article
The conjugate gradient method is particularly useful for minimizing functions of very many variables because it does not require the storage of any matrices. However the rate of convergence of the algorithm is only linear unless the iterative procedure is restarted occasionally. At present it is usual to restart every n or (n + 1) iterations, where n is the number of variables, but it is known that the frequency of restarts should depend on the objective function. Therefore the main purpose of this paper is to provide an algorithm with a restart procedure that takes account of the objective function automatically. Another purpose is to study a multiplying factor that occurs in the definition of the search direction of each iteration. Various expressions for this factor have been proposed and often it does not matter which one is used. However now some reasons are given in favour of one of these expressions. Several numerical examples are reported in support of the conclusions of this paper.
Article
A family of scaled conjugate gradient algorithms for large-scale unconstrained minimization is defined. The Perry, the Polak-Ribière and the Fletcher-Reeves formulae are compared using a spectral scaling derived from Raydan's spectral gradient optimization method. The best combination of formula, scaling and initial choice of step-length is compared against well-known algorithms using a classical set of problems. An additional comparison involving an ill-conditioned estimation problem in Optics is presented.
Article
Conjugate gradient methods are appealing for large scale nonlinear optimization problems, because they avoid the storage of matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87-101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput. Optim. Appl. 28:203-225, 2004) proposed another conjugate gradient method based on the modified secant condition. In this paper, we make use of a multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357-370, 1993; J. Comput. Appl. Math. 50:305-323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
Article
In this paper, we are concerned with the conjugate gradient methods for solving unconstrained optimization problems. It is well-known that the direction generated by a conjugate gradient method may not be a descent direction of the objective function. In this paper, we take a little modification to the Fletcher–Reeves (FR) method such that the direction generated by the modified method provides a descent direction for the objective function. This property depends neither on the line search used, nor on the convexity of the objective function. Moreover, the modified method reduces to the standard FR method if line search is exact. Under mild conditions, we prove that the modified method with Armijo-type line search is globally convergent even if the objective function is nonconvex. We also present some numerical results to show the efficiency of the proposed method.
Article
In this paper, we propose two modified versions of the Dai-Yuan (DY) nonlinear conjugate gradient method. One is based on the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all nice properties of the DY method. Moreover, this method converges globally for nonconvex functions even if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006), Zhang et al. (Numer Math 104:561–572, 2006) and possesses good performance of the Hestenes-Stiefel method. Numerical results are also reported.
Article
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method, which has good numerical performance but no global convergence under traditional line searches such as Armijo line search, Wolfe line search, and Goldstein line search. In this paper we propose a new nonmonotone line search for Liu-Storey conjugate gradient method (LS in short). The new nonmonotone line search can guarantee the global convergence of LS method and has a good numerical performance. By estimating the Lipschitz constant of the derivative of objective functions in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of functional evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
Article
In this paper, we propose a modification of the BFGS method for unconstrained optimization. A remarkable feature of the proposed method is that it possesses a global convergence property even without convexity assumption on the objective function. Under certain conditions, we also establish superlinear convergence of the method.
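One commonly cited form of the Li–Fukushima modification replaces the gradient difference y_k by a perturbed vector whose inner product with the step s_k is guaranteed positive, so the BFGS update stays well defined without convexity. The sketch below assumes that form; the constants and safeguards in the actual paper may differ:

```python
import numpy as np

def mbfgs_y(g_new, g_old, s):
    """Modified difference vector of the MBFGS secant condition (sketch).
    t is chosen so that y_bar.T @ s >= ||g_old|| * ||s||^2 > 0, which keeps
    the curvature condition satisfied even for nonconvex objectives."""
    y = g_new - g_old
    gnorm = np.linalg.norm(g_old)
    t = 1.0 + max(0.0, -y.dot(s) / s.dot(s)) / gnorm
    return y + t * gnorm * s

# even when the ordinary curvature y.T s is negative ...
g_old = np.array([1.0, 0.0]); g_new = np.array([0.0, -1.0]); s = np.array([0.5, 0.2])
y = g_new - g_old
y_bar = mbfgs_y(g_new, g_old, s)
# ... the modified curvature is safely positive
curvature = y_bar.dot(s)
```

When y.dot(s) < 0, the construction yields exactly y_bar.dot(s) = ||g_old|| * ||s||^2, which is the lower bound the global convergence analysis exploits.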
Article
The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to the simplicity of its iterations and its very low memory requirements. Based on a new quasi-Newton equation proposed in [Z. Wei, G. Li, L. Qi, New quasi-Newton methods for unconstrained optimization, preprint; Z. Wei, G. Yu, G. Yuan, Z. Lian, The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Comput. Optim. Appl. 29(3) (2004) 315–332], we establish a new conjugacy condition for CG methods and propose several new CG methods. An interesting feature of these new CG methods is that they use both gradient and function value information. Under some suitable conditions, global convergence is achieved for these methods. The numerical results show that one of our new CG methods is very encouraging.
Article
In this paper, we propose a variant FR (VFR) formula β_k^VFR and a corresponding spectral-type conjugate gradient method (SVFR) such that the generated direction is always a descent direction for the objective function. We also extend β_k^VFR to β_k^* such that |β_k^*| ⩽ β_k^FR and obtain similar conclusions. Under appropriate conditions, we prove that the proposed method is globally convergent under not only the Wolfe line search but also an Armijo-type line search. Numerical experiments show that the SVFR method performs well.
Article
This paper deals with a new nonlinear modified spectral FR conjugate gradient method for solving large-scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function. Under mild conditions, we prove that the modified spectral FR conjugate gradient method with a Wolfe-type line search is globally convergent. Preliminary numerical results show that the proposed method is very promising.
Article
In this paper, a new Wolfe-type line search is proposed, and the global and superlinear convergence of the Shamanskii method with the new line search is proved under mild assumptions. Furthermore, the iterative scheme of the Shamanskii method is also generalized.
Article
It is well known that global convergence has not been established for the Polak–Ribière–Polyak (PRP) conjugate gradient method using the standard Wolfe conditions. In the convergence analysis of the PRP method with Wolfe line search, the (sufficient) descent condition and the restriction β_k ⩾ 0 are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed. Under some suitable conditions, by using a modified Wolfe line search, global convergence results are established for the PRP method. Some special choices for β_k that ensure the search direction's descent property are also discussed. Preliminary numerical results on a set of large-scale problems are reported to show that the PRP method's computational efficiency is encouraging.
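The restriction β_k ⩾ 0 mentioned above is often written as the "PRP+" rule β_k^+ = max{β_k^PRP, 0}. A minimal sketch, using the standard PRP formula:

```python
import numpy as np

def beta_prp_plus(g_new, g_old):
    """PRP coefficient with the nonnegativity restriction
    beta+ = max{beta_PRP, 0} that the convergence analyses rely on (sketch)."""
    beta = g_new.dot(g_new - g_old) / g_old.dot(g_old)
    return max(beta, 0.0)

# built-in restart property: if the iterates jam and consecutive gradients
# are (nearly) identical, beta_PRP ~ 0 and the direction resets toward -g
g_old = np.array([2.0, -1.0])
beta_jam = beta_prp_plus(g_old.copy(), g_old)        # identical gradients
beta_step = beta_prp_plus(np.array([0.1, 0.2]), g_old)
```

This automatic-restart behavior when consecutive gradients are close is exactly why PRP-type methods tend to perform well in practice despite their harder convergence theory.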
Article
Recently, similar to Hager and Zhang (SIAM J Optim 16:170–192, 2005), Yu (Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. Thesis of Doctors Degree, Sun Yat-Sen University, 2007) and Yuan (Optim Lett 3:11–21, 2009) proposed modified PRP conjugate gradient methods which generate sufficient descent directions without any line searches. In order to obtain the global convergence of their algorithms, they need the assumption that the stepsize is bounded away from zero. In this paper, we take a little modification to these methods such that the modified methods retain sufficient descent property. Without requirement of the positive lower bound of the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.
Article
A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes-Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition g_k^T d_k ≤ −(7/8)‖g_k‖². Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A new line search scheme is developed that is efficient and highly accurate. Efficiency is achieved by exploiting properties of linear interpolants in a neighborhood of a local minimizer. High accuracy is achieved by using a convergence criterion, which we call the "approximate Wolfe" conditions, obtained by replacing the sufficient decrease criterion in the Wolfe conditions with an approximation that can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion. Numerical comparisons are given with both L-BFGS and conjugate gradient methods using the unconstrained optimization problems in the CUTE library.
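The direction of this method (often labeled Hager–Zhang) can be sketched from its published formula; the descent bound g^T d ≤ −(7/8)‖g‖² is a purely algebraic consequence, so it can be checked on arbitrary data:

```python
import numpy as np

def hz_direction(g_new, g_old, d_old):
    """Hager-Zhang CG direction (sketch of the published formula):
    beta_k = (y - 2*d*||y||^2 / (d.T y)).T g_new / (d.T y),  y = g_new - g_old.
    The resulting d_{k+1} = -g_new + beta_k * d_old satisfies
    g.T d <= -(7/8)||g||^2 regardless of the line search."""
    y = g_new - g_old
    dty = d_old.dot(y)
    beta = (y - 2.0 * d_old * y.dot(y) / dty).dot(g_new) / dty
    return -g_new + beta * d_old

# the sufficient-descent bound holds even for random (non-optimization) data
rng = np.random.default_rng(1)
g_old, g_new, d_old = (rng.normal(size=5) for _ in range(3))
d_new = hz_direction(g_new, g_old, d_old)
lhs, rhs = g_new.dot(d_new), -7.0 / 8.0 * g_new.dot(g_new)
```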
Article
In this paper, we propose a modification of the BFGS method for unconstrained optimization. A remarkable feature of the proposed method is that it possesses a global convergence property even without a convexity assumption on the objective function. Under certain conditions, we also establish superlinear convergence of the method.
Article
We propose performance profiles-distribution functions for a performance metric-as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
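The performance-profile construction is simple enough to state in a few lines: for solver s and problem p with cost t_{p,s}, form the ratio r_{p,s} = t_{p,s} / min_s t_{p,s}, and plot ρ_s(τ) = fraction of problems with r_{p,s} ≤ τ. A minimal sketch:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile (sketch).
    T[p, s] = cost of solver s on problem p (np.inf marks a failure).
    Returns rho[s, i] = fraction of problems on which solver s is within
    a factor taus[i] of the best solver."""
    ratios = T / T.min(axis=1, keepdims=True)        # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# two solvers, three problems; solver 1 fails on problem 1 (0-indexed)
T = np.array([[1.0, 2.0],
              [4.0, np.inf],
              [3.0, 3.0]])
rho = performance_profile(T, taus=[1.0, 2.0])
# rho[s, 0] is the fraction of problems solver s wins (or ties)
```

Reading off rho at τ = 1 gives each solver's win rate, while the limit for large τ gives its overall success rate, which is why the profiles "combine the best features" of win counts and robustness statistics.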
E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française d'Informatique et de Recherche Opérationnelle 16 (1969) 35–43.
Z.J. Shi, J. Shen, Convergence of PRP method with new nonmonotone line search, Applied Mathematics and Computation 181 (1) (2006) 423–431.