A globally convergent version of the Polak-Ribière conjugate gradient method

Mathematical Programming (Impact Factor: 1.8). 09/1997; 78(3):375-391. DOI: 10.1007/BF02614362
Source: DBLP


In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient
method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line
search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover,
we define adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction
can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under
strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational
point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the
same good features of the Polak-Ribière method, while avoiding pathological situations.
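The Polak-Ribière iteration discussed in the abstract can be sketched generically. The snippet below is an illustrative PRP implementation with a simple Armijo backtracking line search and the common nonnegative safeguard β_k = max{β_k^PR, 0}; it is an assumption-laden sketch, not the specific adaptive step-size acceptance rules proposed in the paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Polak-Ribiere conjugate gradient sketch.

    Uses Armijo backtracking (a placeholder for the paper's line search)
    and the PRP+ safeguard beta_k = max(beta_k^PR, 0).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # restart when d is not a descent direction
        alpha, c1 = 1.0, 1e-4
        fx = f(x)
        # Armijo backtracking: halve the step until sufficient decrease holds
        while f(x + alpha * d) > fx + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere coefficient with the nonnegative (PRP+) truncation
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic such as f(x) = ½xᵀAx with A = diag(1, 10), this sketch drives the gradient norm below the tolerance in a handful of iterations.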

  • Source
    • "…therefore, during the past few years, many authors have investigated new formulas for β_k [3][4][9][10][19][21]."
    ABSTRACT: In this paper, we suggest a new nonlinear conjugate gradient method for solving large-scale unconstrained optimization problems. We prove that the new conjugate gradient coefficient β_k with exact line search is globally convergent. Preliminary numerical results on a set of 116 unconstrained optimization problems show that β_k is very promising and efficient when compared to the other conjugate gradient coefficients Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP).
    Full-text · Article · Jan 2015
  • Source
    • "Energy minimization and structural optimization was carried out with the conjugate gradient method as implemented in the LAMMPS simulation package [28,29]. For the radial and angular distribution analysis, the obtained structures (which had about 13,000 atoms with sizes ca. "
    ABSTRACT: One of the most interesting questions in solid state theory is the structure of glass, which has eluded researchers since the early 1900s. Since then, two competing models, the random network theory and the crystallite theory, have both gathered experimental support. Here, we present a direct, atomic-level structural analysis during a crystal-to-glass transformation, including all intermediate stages. We introduce disorder on a 2D crystal, graphene, gradually, utilizing the electron beam of a transmission electron microscope, which allows us to capture the atomic structure at each step. The change from a crystal to a glass happens suddenly, and at a surprisingly early stage. Right after the transition, the disorder manifests as a vitreous network separating individual crystallites, similar to the modern version of the crystallite theory. However, upon increasing disorder, the vitreous areas grow at the expense of the crystallites and the structure turns into a random network. Thereby, our results show that, at least in the case of a 2D structure, both of the models can be correct, and can even describe the same material at different degrees of disorder.
    Full-text · Article · Feb 2014 · Scientific Reports
  • Source
    • "The non-negative setting β_k = max{β_k^PR, 0} ensures the descent property, but it is not always efficient [12]. Another significant work on the convergence of the PRP method is due to Grippo and Lucidi [13], in which an efficient line search is developed to ensure that the objective function decreases substantially. In the past few years, two classes of methods have received much attention in the literature. "
    ABSTRACT: For solving large-scale unconstrained minimization problems, the nonlinear conjugate gradient method is welcome due to its simplicity, low storage, efficiency and nice convergence properties. Among all the methods in this framework, the conjugate gradient descent algorithm CG_DESCENT is very popular, in which the generated directions descend automatically, and this nice property is independent of the line search used. In this paper, we generalize CG_DESCENT with two Barzilai–Borwein steplengths reused cyclically. We show that the resulting algorithm enjoys an attractive sufficient descent property and converges globally under some mild conditions. We test the proposed algorithm on a large set of unconstrained problems with high dimensions from the CUTEr library. The numerical comparisons with the state-of-the-art algorithm CG_DESCENT illustrate that the proposed method is effective, competitive, and promising.
    Full-text · Article · Jul 2012 · Journal of Computational and Applied Mathematics
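The last excerpt above refers to Barzilai–Borwein steplengths. As a point of reference, the two standard BB formulas are α₁ = sᵀs / sᵀy and α₂ = sᵀy / yᵀy, where s is the iterate difference and y the gradient difference. The helper below is illustrative only; it does not reproduce the cyclic reuse scheme of the cited paper.

```python
import numpy as np

def bb_steps(s, y):
    """Return the two Barzilai-Borwein steplengths.

    s = x_{k+1} - x_k, y = g_{k+1} - g_k.
    BB1 = s's / s'y,  BB2 = s'y / y'y.
    """
    sy = s.dot(y)
    return s.dot(s) / sy, sy / y.dot(y)
```

For s = (1, 1) and y = (2, 4), this gives BB1 = 2/6 and BB2 = 6/20; for a quadratic with Hessian A and y = As, both steplengths are Rayleigh-quotient approximations of an inverse eigenvalue of A.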