Figure - available from: ISRN Applied Mathematics
[Figure: panels labelled "2nd NR"; image not reproduced here]
Source publication
Article
Full-text available
The Newton secant method is a third-order iterative nonlinear solver. It requires two function evaluations and one first-derivative evaluation per iteration. However, it is not optimal, as it does not satisfy the Kung-Traub conjecture. In this work, we derive an optimal fourth-order Newton secant method with the same number of function evaluations using weight functions and...
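The Newton-secant scheme described in the abstract is commonly written as a Newton predictor followed by a secant-like corrector that reuses f'(x). The sketch below is a common textbook formulation, not necessarily the article's exact variant; the function name, tolerance and test equation are illustrative:

```python
# Newton-secant method: a third-order scheme using two function
# evaluations (f(x), f(y)) and one derivative evaluation (f'(x)) per step.
# Common textbook formulation, sketched here for illustration only.

def newton_secant(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / df(x)               # Newton predictor
        fy = f(y)
        # secant-like corrector reusing f'(x)
        x = x - fx / df(x) * fx / (fx - fy)
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.5
root = newton_secant(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Per iteration this costs exactly the two function and one derivative evaluations quoted in the abstract.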

Citations

... We can suppose that the values of A are independent of time, as given in the article [4]. The concentrated CO2 is evaluated from (4.6) as ...
Article
Full-text available
The present article deals with the effect of convexity in the study of the well-known Whittaker iterative method, because an iterative method converges faster to a unique solution t* of the nonlinear equation ψ(t) = 0 when the function's convexity is smaller. Indeed, fractional iterative methods are a simple way to learn more about the dynamic properties of iterative methods, i.e., whether, for a given initial guess, the sequence generated by the iterative method converges to a fixed point or diverges. Often, in the search for a complex root of a nonlinear equation, a real initial guess fails to converge, which can be overcome by fractional iterative methods. So, we have studied a Caputo fractional double convex acceleration Whittaker's method (CFDCAWM) of order at least (1 + 2ζ) and its global convergence in broad terms. Also, the faster-convergent CFDCAWM provides better results than the existing Caputo fractional Newton method (CFNM), which has order of convergence (1 + ζ). Moreover, we have applied both fractional methods to solve nonlinear equations that arise from different real-life problems.
... Many higher order variants of Newton's method have been developed and rediscovered in the last 15 years. Recently, the order of convergence of many variants of Newton's method has been improved using the same number of functional evaluations by means of weight functions (see [1-6] and the references therein). The aim of such research is to develop optimal methods which satisfy Kung-Traub's conjecture. ...
... x^(k+1) = ψ(x^(k), φ_1(x^(k)), ..., φ_i(x^(k)))   (3). Then ψ is called a multipoint I.F. without memory. Kung-Traub's Conjecture [9]: Let ψ be an I.F. ...
Article
Full-text available
Kung-Traub’s conjecture states that an optimal iterative method based on d function evaluations for finding a simple zero of a nonlinear function could achieve a maximum convergence order of 2^{d − 1}. In recent years, many attempts have been made to prove this conjecture or to develop optimal methods which satisfy it. We understand from the conjecture that the maximum order reached by a method with three function evaluations is four, even for quadratic functions. In this paper, we show that the conjecture fails for quadratic functions. In fact, we can find a 2-point method with three function evaluations reaching fifth order convergence. We also develop 2-point 3rd to 8th order methods with one function and two first derivative evaluations using weight functions. Furthermore, we show that with the same number of function evaluations we can develop higher order 2-point methods of order r + 2, where r ≥ 1 is a positive integer. We also show that we can develop a higher order method with the same number of function evaluations if we know the asymptotic error constant of the previous method. We prove the local convergence of these methods, which we term Babajee's Quadratic Iterative Methods, and we extend these methods to systems involving quadratic equations. We test our methods with some numerical experiments including an application to Chandrasekhar's integral equation arising in radiative heat transfer theory. download at http://www.mdpi.com/1999-4893/9/1/1/pdf
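For context, a classical two-point method that is optimal in the Kung-Traub sense (order 4 = 2^(3-1) with three evaluations) is Ostrowski's method. The sketch below illustrates that benchmark; it is not one of the paper's own methods, and the test equation is a standard example chosen by us:

```python
# Ostrowski's method: a classical two-point fourth-order scheme using
# two function evaluations and one derivative evaluation per step,
# optimal in the sense of the Kung-Traub conjecture (order 4 = 2^(3-1)).

def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - fx / dfx                     # Newton predictor
        fy = f(y)
        # Ostrowski corrector, reusing f'(x)
        x = y - fy / dfx * fx / (fx - 2.0 * fy)
    return x

# Example: solve x^3 + 4x^2 - 10 = 0 (simple root near 1.3652)
root = ostrowski(lambda x: x**3 + 4 * x**2 - 10,
                 lambda x: 3 * x**2 + 8 * x, 1.0)
```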
... The family of methods (3) is of order three with three evaluations per full iteration, having EI_3rdPM = 1.442. Recently, some fourth order optimal two-point I.F.s have been developed using weight functions (see [1,2,4,5,10,12] and the references therein). In this work, we have developed a fourth order version of the 3rdPM family with 3 function evaluations using a weight function. ...
Article
Full-text available
In this paper, we have presented a family of two-point fourth order, three-point sixth order and four-point twelfth order iterative methods without memory based on power mean using weight functions. The family of fourth order methods is optimal in the sense of the Kung-Traub hypothesis. From a computational point of view, our methods require three evaluations (one function and two first derivatives) to get fourth order, four evaluations (two functions and two derivatives) to get sixth order and five evaluations (three functions and two derivatives) to get twelfth order. Hence, these methods have high efficiency indices of 1.587, 1.565 and 1.644, respectively. A few known results can be regarded as particular cases of our family of methods. Some numerical examples are tested to demonstrate the efficiency of the methods and verify the theoretical results.
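The efficiency indices quoted above follow Ostrowski's definition EI = p^(1/d), where p is the convergence order and d the number of evaluations per iteration. A quick check of the three figures (rounded to three decimals):

```python
# Efficiency index EI = p**(1/d): convergence order p achieved with
# d function/derivative evaluations per full iteration.

def efficiency_index(p, d):
    return p ** (1.0 / d)

ei4 = efficiency_index(4, 3)    # fourth order, 3 evaluations  -> ~1.587
ei6 = efficiency_index(6, 4)    # sixth order, 4 evaluations   -> ~1.565
ei12 = efficiency_index(12, 5)  # twelfth order, 5 evaluations -> ~1.644
```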
... All members of this family are third order except the Jarratt method, which is fourth order. Use of weight functions is becoming an important technique to improve the order of old methods (see [2,4,5,16-18] and the references therein). ...
... Recently, some fourth order optimal two-point I.F.s have been developed using weight functions (see [2,4,5,16-18] and the references therein). ...
Article
The one-parameter Chebyshev-Halley family is an important family of one-point third order iterative methods which requires one function, one first derivative and one second derivative evaluation. The famous Chebyshev, Halley and Super-Halley methods are its members. Nedzhibov-Hasanov-Petkov (Numer. Alg. 42:127–136, 2006) approximated the second derivative present in the Chebyshev-Halley family to obtain a two-parameter Chebyshev-Halley-like family of two-point iterative methods free from second derivatives. Only one member of this family, the famous Jarratt method, is fourth order and satisfies the Kung-Traub conjecture; the other members are third order. Recent advancements in this field of numerical analysis have made it possible to develop fourth order methods from third order ones with the same number of function evaluations using weight functions. In this work, we develop a two-parameter Chebyshev-Halley-like family of two-point fourth order methods using weight functions. We compare the special members of the new family with the old one through a numerical example to illustrate the efficiency of the new family.
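The one-parameter family referred to above is usually written as x_{n+1} = x_n − (1 + L_n/(2(1 − αL_n))) f(x_n)/f'(x_n) with L_n = f(x_n) f''(x_n)/f'(x_n)^2, where α = 0, 1/2 and 1 recover the Chebyshev, Halley and Super-Halley methods. A minimal sketch of that standard form (function names and test equation are ours):

```python
# One-parameter Chebyshev-Halley family: one function, one first- and one
# second-derivative evaluation per step.  alpha = 0 gives Chebyshev,
# alpha = 1/2 Halley, alpha = 1 Super-Halley.  Illustrative sketch only.

def chebyshev_halley(f, df, d2f, x0, alpha=0.5, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        L = fx * d2f(x) / dfx**2        # degree of logarithmic convexity
        x = x - (1.0 + 0.5 * L / (1.0 - alpha * L)) * fx / dfx
    return x

# Halley's method (alpha = 1/2) on x^2 - 2 = 0
root = chebyshev_halley(lambda x: x * x - 2.0,
                        lambda x: 2.0 * x,
                        lambda x: 2.0,
                        1.5)
```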
... This method is a three-point method requiring 1 function and 3 first derivative evaluations, and has an efficiency index of 3^{1/4} ≈ 1.316, which is lower than the 2^{1/2} ≈ 1.414 of the 1-point Newton method. Recently, the order of many variants of Newton's method has been improved using the same number of functional evaluations by means of weight functions (see [2-7] and the references therein). ...
Article
Full-text available
In this paper, we present three improvements to a three-point third order variant of Newton’s method derived from the Simpson rule. The first one is a fifth order method using the same number of functional evaluations as the third order method, the second one is a four-point 10th order method and the last one is a five-point 20th order method. In terms of computational point of view, our methods require four evaluations (one function and three first derivatives) to get fifth order, five evaluations (two functions and three derivatives) to get 10th order and six evaluations (three functions and three derivatives) to get 20th order. Hence, these methods have efficiency indexes of 1.495, 1.585 and 1.648, respectively which are better than the efficiency index of 1.316 of the third order method. We test the methods through some numerical experiments which show that the 20th order method is very efficient