Modified Efficient Families of Two and Three-Step Predictor-Corrector Iterative Methods for Solving Nonlinear Equations

In this paper, we present and analyze modified families of predictor-corrector iterative methods for finding simple zeros of univariate nonlinear equations, permitting f'(x) = 0 at points near the root. The main advantage of our methods is that they perform better than, while retaining the same efficiency indices as, existing multipoint iterative methods. Furthermore, the convergence analysis of the new methods is discussed and several examples are given to illustrate their efficiency.


Introduction
One of the most important and challenging problems in computational mathematics is to compute approximate solutions of the nonlinear equation

f(x) = 0. (1)

The design of iterative methods for solving Equation (1) is therefore an interesting and important task in numerical analysis. Assume that Equation (1) has a simple root r which is to be found, and let x_0 be our initial guess to this root. To solve this equation, one can use iterative methods such as Newton's method [1,2] and its variants, namely Halley's method [1-6], Chebyshev's method [1-6], Chebyshev-Halley type methods [6], etc. The requirement that f'(x) ≠ 0 is an essential condition for the convergence of Newton's method. The above-mentioned variants of Newton's method also have two problems which severely restrict their practical applications. The first is that these methods require the computation of the second-order derivative. The second is that, like Newton's method, they require f'(x) ≠ 0 in the vicinity of the root.
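For concreteness, Newton's iteration and its breakdown when f'(x) vanishes can be sketched as follows (a minimal Python sketch; the test function f(x) = x^2 - 2 and the explicit zero-derivative guard are illustrative assumptions, not part of the methods studied in this paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0.0:
            # The essential condition f'(x) != 0 is violated.
            raise ZeroDivisionError("f'(x) vanished; Newton's method fails")
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x**2 - 2 has the simple root sqrt(2); Newton converges quickly.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)

# Starting where f'(x0) = 0 breaks the iteration immediately.
failed = False
try:
    newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=0.0)
except ZeroDivisionError:
    failed = True
```

The second call illustrates exactly the failure situation that the methods proposed later in the paper are designed to avoid.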
For the first problem, Nedzhibov et al. [5] derived several families of multipoint iterative methods by discretizing the second-order derivative involved in the Chebyshev-Halley type methods [6]. We mention only one root-finding family from [5], namely (2), where λ ∈ ℝ is a parameter. For different values of λ, various multipoint iterative methods result from (2). For λ = 1, one obtains the famous Traub-Ostrowski formula (3) [1,2,4,5,7,8], which is a fourth-order formula. This method requires one evaluation of the function and two evaluations of its derivative per iteration. Thus the efficiency index [2] of this method is 4^(1/3) ≈ 1.587, which is better than that of Newton's method, 2^(1/2) ≈ 1.414. Furthermore, Sharma and Guha [8] have developed a variant (4) of Traub-Ostrowski's method (3), where a ∈ ℝ is a parameter. This family requires an additional evaluation of the function f at the point produced by Traub-Ostrowski's method (3); consequently, the local order of convergence improves from four to six. For a = 0, we obtain the method (5) developed by Grau and Díaz-Barrero [7]. All these multipoint iterative methods are variants of Newton's method. Therefore, they require a sufficiently good initial approximation and fail, just as Newton's method does, if at any stage of the computation the derivative of the function is zero or very small in the vicinity of the root.
Recently, Kanwar and Tomar [3,4] proposed an alternative for the failure situations of Newton's method and its various variants. They also derived modifications of the different families of multipoint iterative methods of Nedzhibov et al. [5]. Unfortunately, the various families introduced by Kanwar and Tomar [3] produce only multipoint iterative methods of order three.
Recently, Mir et al. [9] have proposed a new predictor-corrector method (designated the Simpson-Mamta method (SM)), defined by (6), where p = ±1 is chosen so as to make the denominator largest in magnitude. This method is obtained by combining the quadratically convergent method due to Mamta et al. [10] and the cubically convergent method due to Hasnov et al. [11]. It will not fail like the existing methods if f'(x) is very small, or even zero, in the vicinity of the root. The method requires one evaluation of the function and three evaluations of its derivative per iteration, so its efficiency index is 3^(1/4) ≈ 1.316, which is worse than that of Newton's method, 2^(1/2) ≈ 1.414, or of Traub-Ostrowski's method, 4^(1/3) ≈ 1.587. More recently, Gupta et al. [12] have developed a family of ellipse methods (7), where 0 < p < ∞ and in which f'(x) = 0 is permitted at some points in the vicinity of the root. The beauty of this method is that it converges quadratically and, moreover, has the same error equation as Newton's method. Therefore, it is an efficient alternative to Newton's method.
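The comparisons above all use the classical Ostrowski-Traub efficiency index E = p^(1/d), where p is the order of convergence and d is the number of function (or derivative) evaluations per iteration. A one-line sketch:

```python
def efficiency_index(order, evals_per_iter):
    """Ostrowski-Traub efficiency index E = order**(1/evals_per_iter)."""
    return order ** (1.0 / evals_per_iter)

newton_ei = efficiency_index(2, 2)           # 2**(1/2) ~ 1.414
traub_ostrowski_ei = efficiency_index(4, 3)  # 4**(1/3) ~ 1.587
simpson_mamta_ei = efficiency_index(3, 4)    # 3**(1/4) ~ 1.316
```

This makes explicit why SM, despite being robust near f'(x) = 0, ranks below even Newton's method on efficiency.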
In this paper, we present two families of predictor-corrector iterative methods based on the quadratically convergent ellipse method (7), the Nedzhibov et al. family (2) and the well-known Traub-Ostrowski formula (3).

Two-Step Iterative Method and its Order of Convergence
Our aim is to develop a scheme that retains the order of convergence of the Nedzhibov et al. family (2) and which can be used as an alternative to existing techniques, or in cases where existing techniques are not successful. Thus we begin with the predictor-corrector iterative scheme (8), where the positive or negative sign is chosen so as to make the denominator largest in magnitude. It is interesting to note that, on ignoring the term in p, the proposed family (8) reduces to the Nedzhibov et al. family (2).
For λ = 1, this is the modification (9) of the Traub-Ostrowski formula (3) [2,5,7], and it is also a fourth-order formula. The method requires the same number of evaluations of the function and its derivative per iteration as Traub-Ostrowski's method. Thus its efficiency index [2] is 4^(1/3) ≈ 1.587, which is better than that of Newton's method, 2^(1/2) ≈ 1.414, or of the SM method, 3^(1/4) ≈ 1.316. More importantly, this method will not fail even if the derivative of the function is small, or even zero, in the vicinity of the root.
The asymptotic order of this method is presented in the following theorem.
Theorem 1. Suppose f(x) is a sufficiently differentiable function in a neighborhood of a simple root r and that x_0 is close to r. Then the iteration scheme (8) has 1) third-order convergence for λ ≠ 1 and 2) fourth-order convergence for λ = 1.

Proof. Using Equations (10) and (11), we obtain (12) and therefore (13). Using Equations (12), (13) and (15), we obtain (16). Using Equations (13)-(16) in Equation (8), we obtain the error Equation (18), while for λ = 1 the leading third-order term in (18) vanishes. Thus Equation (18) establishes the maximum order of convergence, equal to four, for the iteration scheme (8). This completes the proof of the theorem.
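The theoretical orders in Theorem 1 can be verified numerically with the computational order of convergence, ρ ≈ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|, computed from three successive errors. The sketch below applies it to Newton's method on x^2 - 2 as a stand-in check (an assumption, since scheme (8) itself is not reproduced in this extract); the estimate comes out close to the theoretical order two:

```python
import math

def coc(errors):
    """Computational order of convergence from the last three errors."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method on f(x) = x**2 - 2, whose simple root sqrt(2) is known.
root = math.sqrt(2.0)
x, errors = 1.5, []
for _ in range(4):
    errors.append(abs(x - root))
    x -= (x * x - 2.0) / (2.0 * x)

order = coc(errors)  # close to 2 for Newton's method
```

The same three-error formula applied to iterates of (8) would exhibit ρ ≈ 3 for λ ≠ 1 and ρ ≈ 4 for λ = 1.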

Three-Step Iterative Method and its Order of Convergence
On similar lines, we also propose a modification of the Formula (4) of Sharma and Guha [8]. Mir and Zaman [13] have considered three-step quadrature-based iterative methods with sixth-, seventh- and eighth-order convergence for finding simple zeros of nonlinear equations. Milovanović and Cvetković [14] further presented modifications of the three-step iterative methods considered by Mir and Zaman [13]. Rafiq et al. [15] have also presented a similar three-step iterative method, based on Newton's method, with sixth-order convergence. All these modifications aim at increasing the local order of convergence with a view to increasing the efficiency index. But all these methods are variants of Newton's method and will not work if f'(x) is very small or zero in the vicinity of the root. To overcome this problem, we begin with the predictor-corrector iterative scheme (19), where a and b are parameters to be determined from the following convergence theorem.
Theorem 2. Let f : I → ℝ denote a real-valued function defined on I, where I is a neighborhood of a simple root r of f(x). Assume that f(x) is a sufficiently differentiable function in I. Then the iteration scheme (19) defines a one-parameter (i.e., a) family of sixth-order convergence, provided b is chosen suitably in terms of a, and satisfies the following error equation:

Proof: follows along the same lines as in the previous theorem.
With this choice of b, the proposed scheme (19) becomes (21), where a ∈ ℝ. Note that for p = 0, we obtain the method (4) of Sharma and Guha [8], and for (p, a) = (0, 0), we obtain the method (5) developed by Grau and Díaz-Barrero [7].
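For reference, the sixth-order method of Grau and Díaz-Barrero composes an Ostrowski step with one further corrected step reusing f'(x_n). The sketch below uses the commonly cited form of that method (an assumption, since the paper's equations (4), (5) and (21) are not reproduced in this extract):

```python
def grau_diaz_barrero(f, fprime, x0, tol=1e-12, max_iter=25):
    """Commonly cited sixth-order composition:
       y     = x - f(x)/f'(x)                            (Newton predictor)
       z     = y - f(y)/f'(x) * f(x)/(f(x) - 2*f(y))     (Ostrowski step)
       x_new = z - f(z)/f'(x) * f(x)/(f(x) - 2*f(y))     (repeated corrector)
    Four evaluations per iteration: f(x), f(y), f(z), f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx, dx = f(x), fprime(x)
        if fx == 0.0:            # already at a root
            return x
        y = x - fx / dx
        fy = f(y)
        w = fx / (fx - 2.0 * fy)         # weight shared by both correctors
        z = y - (fy / dx) * w
        x_new = z - (f(z) / dx) * w
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root6 = grau_diaz_barrero(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
```

With four evaluations per iteration and order six, its efficiency index is 6^(1/4) ≈ 1.565; like the other Newton-type compositions, it still divides by f'(x_n).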

Numerical Results
In this section, we present the numerical results obtained by employing the iterative methods, namely Newton's method (NM), Traub-Ostrowski's method (3) (TOM), the Simpson-Mamta method (6) (SM), the modified Traub-Ostrowski method (9) (MTOM), method (4) (M3) and the modified method (21) (MM3). The usual stopping criteria, based on the size of the correction |x_{n+1} - x_n| and of the residual |f(x_{n+1})|, are used for the computer programs. The behaviors of the existing multipoint iterative schemes and of the proposed modifications can be compared through their correction factors. The correction factor f(x_n)/f'(x_n), which appears in the existing multipoint iterative schemes, is modified to f(x_n)/(±sqrt(f'(x_n)^2 + p^2 f(x_n)^2)), where 0 < p < ∞. This is always well defined, even if f'(x_n) = 0.
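A minimal sketch of a one-point Newton-type iteration built on this modified correction factor is given below (the sign choice s = sign(f'(x_n)), the value p = 0.5 and the test problem are illustrative assumptions; the paper's two- and three-step families (8) and (21) add predictor-corrector structure on top of this factor):

```python
import math

def modified_newton(f, fprime, x0, p=0.5, tol=1e-12, max_iter=100):
    """Newton-type iteration with the modified correction factor
       f(x) / (s * sqrt(f'(x)**2 + p**2 * f(x)**2)),
    where s = +1 or -1 follows the sign of f'(x) (an assumption standing in
    for the paper's rule of making the denominator largest in magnitude).
    The denominator is nonzero whenever f(x) != 0, even if f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx, dx = f(x), fprime(x)
        s = 1.0 if dx >= 0.0 else -1.0
        denom = s * math.sqrt(dx * dx + p * p * fx * fx)
        step = fx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# Survives a starting point where f'(x0) = 0 (plain Newton divides by zero):
root_m = modified_newton(lambda x: x * x - 1.0, lambda x: 2.0 * x, x0=0.0)
```

Because p^2 f(x)^2 → 0 as the iterates approach the root, the factor reduces to Newton's f/f' there, which is why the quadratic error equation is preserved.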
It is observed that formulas (8) and (21) give very good approximations to the root when p is taken in the range 0 < p ≤ 1. This is because, for small values of p, the ellipse shrinks in the vertical direction and extends along the horizontal direction, so that the next approximation moves faster towards the desired root. For p > 1 but not very large, the formulas work only if the initial guess is very close to the required root. For larger values of p, the formulas do not work, perhaps due to numerical instability in the computation. Example 8. sin x = 0. This equation has an infinite number of roots. Newton's method and Traub-Ostrowski's method with initial guess x_0 = 1.5 converge to −4π, far away from the required root zero. Method (4) (M3) converges to −6π. Our methods and the SM method do not exhibit this type of behavior and converge to the nearest root, zero.
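The remote convergence of Newton's method on sin x = 0 is easy to reproduce: the Newton step for f(x) = sin x is x − tan x, and from x_0 = 1.5 the large value of tan(1.5) throws the iterate far from the root zero:

```python
import math

# Newton's method on f(x) = sin(x): x_{n+1} = x_n - tan(x_n).
x = 1.5
for _ in range(10):
    x -= math.tan(x)
# The iteration leaves the basin of the nearest root 0 and settles
# instead on the root -4*pi (about -12.566).
```

The first step already lands near −12.60, after which the iteration locks onto −4π rather than the nearest root 0.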

Conclusions
The presented results indicate that the newly proposed methods are more efficient and perform better than the classical existing methods. The computational results in Table 2 show that the modified Traub-Ostrowski method (MTOM) (9) requires a smaller number of function evaluations than Newton's method (NM) and Traub-Ostrowski's method (3) (TOM). The computational results in Table 2 also show that the modified method (21) (MM3) requires a smaller number of function evaluations than method (4) (M3). On similar lines, we can also modify the three-step iterative methods of Mir and Zaman [13] and of Milovanović and Cvetković [14]. A reasonably close starting value x_0 is no longer required for these methods to converge; this condition, however, applies to practically all existing iterative methods for solving equations. Moreover, the new methods have the same efficiency indices as the existing methods and do not fail if the derivative of the function is either zero or very small in the vicinity of the root. Therefore, these techniques have definite practical utility.

Table 1. Summary of the numerical results for the test examples.

Table 2. Comparison of the number of function evaluations.