Implementation issues for high-order algorithms



The Newton–Raphson method, which dates back to 1669–1670, is widely used to solve systems of equations and unconstrained optimization problems. Newton–Raphson consists in linearizing the system of equations and provides quadratic local convergence. Quite soon after Newton and Raphson introduced their iterative process, Halley in 1694 proposed a higher-order method with cubic asymptotic convergence order, and Chebyshev in 1838 proposed another high-order variant. In “High-order Newton-penalty algorithms” [J.-P. Dussault, J. Comput. Appl. Math. 182, No. 1, 117–133 (2005; Zbl 1077.65061)], by interpreting Newton’s iteration as a linear extrapolation, formulae were proposed to compute higher-order extrapolations generalizing the Newton–Raphson and Chebyshev methods. In this paper, we provide details on using an automatic differentiation (AD) tool to implement those high-order extrapolations. We present a complexity analysis that allows predicting the efficiency of those high-order strategies.
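As a point of reference for the linearization view mentioned in the abstract, a Newton–Raphson step for a system F(x) = 0 solves F(x_k) + J(x_k)(x_{k+1} − x_k) = 0. A minimal sketch; the circle–line example system and tolerances below are our own illustration, not taken from the paper:

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Solve F(x) = 0 by linearizing at each iterate:
    F(x_k) + J(x_k)(x_{k+1} - x_k) = 0  =>  x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)
    return x

# Hypothetical test system: intersection of a circle with a line.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
```

Starting from (2, 0.5), the iterates converge quadratically to the root (1, 1).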




... We use this complexity bound to assess the overall complexity of our HoC algorithm. Actually, as described in [4], obtaining ∇F costs n times the cost of F for a vector-valued function F : R^n → R^n. But when one considers the scalar quantity ∇F(x)·u, it can be expressed as ∇(F(x)·u), i.e., the derivative of a new scalar-valued function Φ : x ↦ F(x)·u. ...
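The point of the excerpt is that a directional derivative can be obtained from a single augmented evaluation of F, at a small constant multiple of the cost of F, rather than paying the n-fold cost of the full Jacobian. A self-contained dual-number sketch of this forward-mode AD idea (our own illustration, not the AD tool used in the paper):

```python
import numpy as np

class Dual:
    """Forward-mode AD scalar: carries a value and a directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = float(val), float(dot)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def jvp(F, x, u):
    """Directional derivative dF(x)[u] in one augmented pass over F."""
    seeded = [Dual(xi, ui) for xi, ui in zip(x, u)]
    return np.array([out.dot for out in F(seeded)])

# Hypothetical example: F(x) = (x0*x1, x0 + x1), Jacobian [[x1, x0], [1, 1]].
F = lambda x: [x[0] * x[1], x[0] + x[1]]
```

Each call to `jvp` costs one evaluation of F on dual numbers; recovering the full Jacobian this way would take n such calls, matching the n-fold cost cited from [4].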
... Therefore, in this example, c will be equivalent to n times the cost of F_i, which is the cost of four multiplications, one addition, and four subtractions. In [4], the flop counts of these different operations are given for a Sparc system. For more recent computers, one can consult the Intel Optimization Reference Manual [11], for example. ...
... , additional costs are given by [4]:

  …                            25c + 30n
  Solving the linear system    2n² − n
  Vector sum and dot product   2n
  Total                        2n² + 31n + 25c ...
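The component counts in this excerpt do add up to the quoted total; the small tally below just reproduces that arithmetic (the cost model is the snippet's, with c the cost of one evaluation of F; the first row's label is garbled in the excerpt):

```python
def per_iteration_flops(n, c):
    """Tally the per-iteration flop counts quoted in the excerpt."""
    first_row = 25 * c + 30 * n      # first row of the excerpt (label lost)
    linear_solve = 2 * n**2 - n      # solving the linear system
    vector_ops = 2 * n               # vector sum and dot product
    return first_row + linear_solve + vector_ops

# The sum collapses to the quoted total 2n^2 + 31n + 25c.
```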
Newton–Raphson's method, dating back to 1669–1670, is still used to solve systems of equations and unconstrained optimization problems. Since then, other algorithms inspired by Newton's have been proposed: in 1839, Chebyshev developed a high-order algorithm with cubic convergence, and in 1967, Shamanskii proposed an acceleration of Newton's method. By considering Newton-type methods as displacement directions, we introduce in this article new high-order algorithms extending these famous methods. We provide convergence order results and a per-iteration complexity analysis to predict the efficiency of such iterative processes. Preliminary examples confirm the applicability of our analysis.
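Shamanskii's acceleration mentioned in this abstract amounts to evaluating the Jacobian once and reusing it for several Newton-like steps. A rough numpy sketch under our own hypothetical circle–line test system (m is the number of inner steps sharing one Jacobian):

```python
import numpy as np

def shamanskii(F, J, x0, m=2, tol=1e-12, max_outer=50):
    """Newton's method accelerated a la Shamanskii: the Jacobian is
    evaluated once per outer iteration and reused for m inner steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Jx = J(x)                       # one Jacobian evaluation per outer loop
        for _ in range(m):              # m cheap steps with the frozen Jacobian
            fx = F(x)
            if np.linalg.norm(fx) < tol:
                return x
            x = x - np.linalg.solve(Jx, fx)
    return x

# Hypothetical test system: intersection of a circle with a line.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
```

In practice one would reuse an LU factorization of Jx rather than re-solving from scratch, which is where the per-iteration savings come from.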
Some algorithms for unconstrained differentiable optimization problems involve evaluating quantities related to high-order derivatives. The cost of these evaluations depends widely on the technique used to obtain the derivatives and on some characteristics of the objective function: its size, structure, and complexity. Functions with banded Hessians are the special case we study in this paper. Because of their partial separability, the cost of obtaining their high-order derivatives, computed by automatic differentiation, makes high-order Chebyshev methods more attractive for banded systems than for dense functions. These methods are efficient in that their convergence order can be improved without significantly increasing their algorithmic cost. This paper provides an analysis of the per-iteration complexity of high-order Chebyshev methods applied to sparse functions with banded Hessians. The main result can be summarized as follows: the per-iteration complexity of a high-order Chebyshev method is of the same order as that of evaluating the objective function. This theoretical analysis is verified by numerical illustrations.
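One standard way to see why banded Hessians are cheap: a Hessian of half-bandwidth b can be recovered from only 2b + 1 compressed gradient differences, by grouping structurally orthogonal columns (a Curtis–Powell–Reid-style coloring). A sketch under our own assumptions; the test function and finite-difference approximation are illustrative, not the paper's AD-based computation:

```python
import numpy as np

def banded_hessian_fd(grad, x, half_bw=1, h=1e-6):
    """Approximate a banded Hessian from (2*half_bw + 1) seed directions.

    Columns j with equal j mod (2*half_bw + 1) share a seed vector; within
    any row the nonzero columns then have distinct colors, so every band
    entry can be read off a single compressed difference."""
    n = len(x)
    ncolors = 2 * half_bw + 1
    g0 = grad(x)
    H = np.zeros((n, n))
    for color in range(ncolors):
        d = np.zeros(n)
        d[color::ncolors] = 1.0            # sum of unit vectors of this color
        hd = (grad(x + h * d) - g0) / h    # ~ H @ d
        for j in range(color, n, ncolors):
            lo, hi = max(0, j - half_bw), min(n, j + half_bw + 1)
            H[lo:hi, j] = hd[lo:hi]        # band entries of column j
    return H

# Our own partially separable test function: f(x) = sum_i (x_i - x_{i+1})**4,
# whose Hessian is tridiagonal (half-bandwidth 1).
def grad(x):
    d = (x[:-1] - x[1:]) ** 3
    g = np.zeros_like(x)
    g[:-1] += 4.0 * d
    g[1:] -= 4.0 * d
    return g
```

Three gradient evaluations thus suffice for a tridiagonal Hessian of any dimension n, consistent with the abstract's claim that the per-iteration cost stays of the order of the objective function's.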
In this work, we present an introduction to automatic differentiation, its use in optimization software, and some new potential usages. We focus on the potential of this technique in optimization. We do not dive deeply in the intricacies of automatic differentiation, but put forward its key ideas. We sketch a survey, as of today, of automatic differentiation software, but warn the reader that the situation with respect to software evolves rapidly. In the last part of the paper, we present some potential future usage of automatic differentiation, assuming an ideal tool is available, which will become true in some unspecified future.
We present a method for solving nonlinear systems of equations, based on the Chebyshev third-order algorithm and accelerated by a Shamanskii-like process. We show that this new method has quintic convergence order. We also focus on the efficiency of high-order methods, and more precisely on our new Chebyshev–Shamanskii method, and identify the optimal reuse of the same Jacobian in the Shamanskii process applied to the Chebyshev method. Numerical illustrations confirm our theoretical analysis.
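For the scalar case, Chebyshev's third-order iteration adds a curvature correction to the Newton step. A minimal sketch of the classical iteration only (the test equation is our own; the paper's Chebyshev–Shamanskii variant additionally reuses the same derivative information across several such steps):

```python
def chebyshev(f, df, d2f, x, tol=1e-14, max_iter=30):
    """Chebyshev's cubically convergent iteration for a scalar equation f(x) = 0:
    x+ = x - f/f' - f'' * f**2 / (2 * f'**3)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1 = df(x)
        x = x - fx / d1 - d2f(x) * fx**2 / (2.0 * d1**3)
    return x
```

For example, solving x² − 2 = 0 from x = 1.5 reaches √2 to machine precision in a handful of iterations.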