The Newton-Raphson method, which dates back to 1669–1670, is widely used to solve systems of equations and unconstrained optimization problems. Newton-Raphson consists in linearizing the system of equations and provides quadratic local convergence. Quite soon after Newton and Raphson introduced their iterative process, Halley in 1694 proposed a higher-order method with cubic asymptotic convergence; Chebyshev in 1838 proposed another high-order variant. In “High-order Newton-penalty algorithms” [J.-P. Dussault, J. Comput. Appl. Math. 182, No. 1, 117–133 (2005; Zbl 1077.65061)], by interpreting Newton’s iteration as a linear extrapolation, formulae were proposed to compute higher-order extrapolations generalizing the Newton-Raphson and Chebyshev methods. In this paper, we provide details on using an automatic differentiation (AD) tool to implement those high-order extrapolations, and we present a complexity analysis that predicts the efficiency of these high-order strategies.
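As a minimal one-dimensional illustration (not the paper's implementation), the sketch below contrasts Newton's linear step with Chebyshev's cubic correction; hand-coded derivatives stand in for the automatic differentiation tool discussed in the paper, and the function names and test problem are illustrative assumptions.

```python
def newton_step(x, f, df):
    # Newton-Raphson: linearize f at x and solve the linear model
    # (quadratic local convergence).
    return x - f(x) / df(x)

def chebyshev_step(x, f, df, d2f):
    # Chebyshev: add a curvature correction to the Newton step
    # (cubic local convergence).
    t = f(x) / df(x)
    return x - t - 0.5 * (d2f(x) / df(x)) * t * t

# Illustrative test problem: f(x) = x^2 - 2, root sqrt(2).
f   = lambda x: x * x - 2.0
df  = lambda x: 2.0 * x
d2f = lambda x: 2.0

xn = xc = 1.5
for _ in range(3):
    xn = newton_step(xn, f, df)
    xc = chebyshev_step(xc, f, df, d2f)
```

In a full implementation, an AD tool would supply the first and second derivative (or directional derivative) values that the higher-order extrapolation formulae require, instead of the hand-written `df` and `d2f` above.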