1. Introduction
Finding the zeros of a scalar function f is among the most important problems in both theory and practice, not only in mathematics but also in many other fields such as engineering, physics, computer science, and finance. These problems draw on a rich mixture of mathematics, numerical analysis, and computing science. Finding the root of a non-linear equation is one of the central challenges in science and engineering. The idea common to all root-finding methods is the recurrence of successive approximations. Indeed, an analytical solution of the non-linear Equation (1.1) is hard to obtain or simply does not exist.
f(x) = 0.  (1.1)
In recent years, researchers have been interested in modifying the Newton method, which is the foundation of all algorithms for solving these problems. Recent designs tend to reduce the number of function evaluations and to avoid derivative evaluations. In this study, we consider the non-linear Equation (1.1) and present a new modification of the Newton method. The convergence analysis shows that the new method is cubically convergent. Per iteration, our method requires one evaluation of the function and one of its derivative.
2. Preliminaries
The idea of an iterative method is to make some initial estimate of the solution and repeatedly improve that estimate by well-defined operations, until we end up with an approximation that is sufficiently close to the true answer.
Let $f: I \subseteq \mathbb{R} \to \mathbb{R}$ be an $r$-times Fréchet differentiable function on an open interval $I$, and let $\alpha$ be a real zero of the non-linear equation
f(x) = 0.  (2.1)
As is well known, the roots of Equation (2.1) can be found analytically only in some special cases. Most commonly we solve (2.1) approximately; that is, we find an approximation to the zero $\alpha$ by applying some iterative method of the form:
x_{n+1} = \phi(x_n), \quad n = 0, 1, 2, \ldots  (2.2)
where $x_n$ is an approximation to the zero $\alpha$. The function $\phi$ is called the iteration function.
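As a concrete illustration of scheme (2.2), here is a minimal fixed-point iteration in Python (the particular iteration function, tolerance, and iteration cap are illustrative choices, not taken from the paper):

```python
def fixed_point(phi, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = phi(x_n) until successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: phi(x) = (x + 2/x)/2 is the Newton iteration function
# for f(x) = x^2 - 2, whose positive zero is sqrt(2).
root = fixed_point(lambda x: (x + 2.0 / x) / 2.0, x0=1.0)
```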
Definition 2.1. Let $f$ be a real-valued function with root $\alpha$, and let $(x_n)$ be a sequence of real numbers produced by an iterative method (a sequence of iterates) that converges toward $\alpha$. If there exist a real number $p$ and a nonzero constant $C$ such that:
\lim_{n \to \infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^{p}} = C,
then $p$ is called the order of convergence and $C$ the factor of convergence or the asymptotic error constant.
Definition 2.2. Let $e_n = x_n - \alpha$ be the error of the approximation at the $n$th iteration. Then
e_{n+1} = C e_n^{p} + O(e_n^{p+1})  (2.3)
is the error equation. If the error equation exists, then $p$ is the order of convergence of the iterative method.
Theorem 2.3 (Schröder-Traub 1964). Let $\phi$ be an iteration function such that $\phi^{(p)}$ is continuous in a neighborhood of $\alpha$. Then $\phi$ is of order $p$ if and only if
\phi(\alpha) = \alpha, \quad \phi'(\alpha) = \cdots = \phi^{(p-1)}(\alpha) = 0, \quad \phi^{(p)}(\alpha) \neq 0.  (2.4)
The asymptotic error constant is given by:
C = \frac{|\phi^{(p)}(\alpha)|}{p!}.  (2.5)
Theorem 2.4 (Traub 1964 [3]). Let $\alpha$ be a simple zero of a function $f$ and let $\phi$ define an iterative method of order $p$. Then the composite iteration function obtained by appending a Newton step with frozen derivative,
\psi(x) = \phi(x) - \frac{f(\phi(x))}{f'(x)},  (2.6)
defines an iterative method of order $p + 1$.
Theorem 2.5 (Traub 1964, p. 28 [3]). Let $\phi_1, \phi_2$ be iteration functions of orders $p_1$ and $p_2$ respectively. Then the composition
\phi(x) = \phi_2(\phi_1(x))  (2.7)
defines an iterative method of order $p_1 p_2$.
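A quick numerical sanity check of this composition result: composing Newton's method (order 2) with itself should give a method of order 2 × 2 = 4. The test function and starting point below are illustrative choices:

```python
import math

def newton_step(x):
    """One Newton step for f(x) = x^2 - 2."""
    return x - (x * x - 2.0) / (2.0 * x)

def composed(x):
    """Composition of two Newton steps: expected order 2 * 2 = 4."""
    return newton_step(newton_step(x))

alpha = math.sqrt(2.0)
xs = [1.0, composed(1.0), composed(composed(1.0))]
e0, e1, e2 = (abs(x - alpha) for x in xs)
# Estimate the order from three successive errors; comes out close to 4.
order_estimate = math.log(e2 / e1) / math.log(e1 / e0)
```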
Definition 2.6. Let $r$ be the number of function evaluations per iteration of the method. The efficiency index of the method is defined by:
E = p^{1/r},  (2.8)
where $p$ is the order of convergence of the method.
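For instance, Newton's method ($p = 2$, $r = 2$) has efficiency index $2^{1/2} \approx 1.414$, while a third-order method needing only two evaluations per iteration reaches $3^{1/2} \approx 1.732$. A quick check (the method labels are illustrative):

```python
def efficiency_index(p, r):
    """Efficiency index p**(1/r): order p, r evaluations per iteration."""
    return p ** (1.0 / r)

methods = {
    "Newton (p=2, r=2)": efficiency_index(2, 2),        # ~1.414
    "Secant (p~1.618, r=1)": efficiency_index(1.618, 1),  # ~1.618
    "Third order, r=2": efficiency_index(3, 2),          # ~1.732
}
for name, ei in methods.items():
    print(f"{name}: {ei:.3f}")
```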
Definition 2.7. Suppose that $x_{n-1}$, $x_n$, and $x_{n+1}$ are three successive iterates close to the root $\alpha$. Then the computational order of convergence may be approximated by:
\rho \approx \frac{\ln\left| (x_{n+1} - \alpha)/(x_n - \alpha) \right|}{\ln\left| (x_n - \alpha)/(x_{n-1} - \alpha) \right|},  (2.9)
where $e_n = x_n - \alpha$ denotes the error of the $n$th iterate.
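A sketch of estimator (2.9), assuming the exact root is known (as in the test problems of Section 5); the test function is illustrative:

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    """Computational order of convergence from three successive iterates."""
    e_prev, e_curr, e_next = x_prev - alpha, x_curr - alpha, x_next - alpha
    return math.log(abs(e_next / e_curr)) / math.log(abs(e_curr / e_prev))

# Newton iterates for f(x) = x^2 - 2 approach alpha = sqrt(2) quadratically.
alpha = math.sqrt(2.0)
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = coc(xs[1], xs[2], xs[3], alpha)  # close to 2 for Newton's method
```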
3. Construction of the Methods and Convergence Analysis
In this section, we recall the modified Newton method that was proposed by Gentian Zavalani ([4]). To determine the order of convergence of the sequence $(x_n)$, let us consider the Taylor expansion of $g$, where the iterative method is $x_{n+1} = g(x_n)$ and the function $g$ satisfies:
1) There exist
such that
for all
2) There exist
such that
for all
(3.1)
Definition 3.1. The mapping $F: \mathbb{R}^n \to \mathbb{R}^m$ is (totally or Fréchet) differentiable at $x$ if the Jacobian matrix $F'(x)$ exists at $x$ and
\lim_{h \to 0} \frac{\| F(x + h) - F(x) - F'(x) h \|}{\| h \|} = 0.  (3.2)
If $n = m = 1$, this definition reduces to the usual definition of differentiability.
Definition 3.2. For a mapping $F: \mathbb{R}^n \to \mathbb{R}^n$, a solution $\alpha$ of $F(x) = 0$ is simple if $F$ is differentiable at $\alpha$ and $F'(\alpha)$ is non-singular.
In this work, we assume that the equation f(x) = 0 admits a unique, simple solution.
Iterative Methods
For any $x, x_n \in I$ we may write Taylor's expansion for $f$ as follows:
f(x) = f(x_n) + f'(x_n)(x - x_n) + \frac{f''(x_n)}{2!}(x - x_n)^2 + \cdots  (3.3)
For $x, x_n \in I$, we also have:
f(x) = f(x_n) + \int_{x_n}^{x} f'(t)\, dt.  (3.4)
Approximating the integral in (3.4) by $(x - x_n) f'(x_n)$, we have:
f(x) \approx f(x_n) + (x - x_n) f'(x_n).  (3.5)
By using f(x) = 0, we have
0 = f(x_n) + (x - x_n) f'(x_n).  (3.6)
Then
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.  (3.7)
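A minimal Python sketch of iteration (3.7); the stopping rule, tolerance, and test function are illustrative choices:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

# Root of f(x) = x^3 - 2x - 5 (a classical test problem) near x0 = 2.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
```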
This is known as the Newton method for non-linear equations; it has quadratic convergence when the initial guess is sufficiently close to $\alpha$. If we approximate the integral in (3.4) by using the closed-open quadrature formula ([5]):
(3.8)
· nodes lying in the interval of integration;
· weights satisfying the consistency conditions of the quadrature rule.
Then
(3.9)
Thus, by using f(x) = 0, we have:
(3.10)
(3.11)
Algorithm. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following steps:
· Predictor step:
(3.12)
· Correction step:
(3.13)
This is another iterative method for solving the non-linear Equation (1.1). These modifications of the Newton method are important and interesting because, per iteration, they require one evaluation of the function and two evaluations of the derivative; they do not require the second derivative, yet they converge cubically.
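As a representative example of such a predictor-corrector scheme, here is the well-known arithmetic-mean (trapezoidal) Newton variant, which matches the evaluation count described above (one evaluation of f and two of f' per iteration) and is cubically convergent; it is given purely as an illustration and may differ in detail from scheme (3.13):

```python
def trapezoidal_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Predictor: a Newton step. Corrector: average the derivative at x_n
    and at the predicted point (one f and two f' evaluations per step)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / df(x)                  # predictor (Newton step)
        x = x - 2.0 * fx / (df(x) + df(y))  # corrector (arithmetic mean)
    return x

root = trapezoidal_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
```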
Theorem 3.3. Let $f: I \subseteq \mathbb{R} \to \mathbb{R}$ be an $r$-times Fréchet differentiable function on an open interval $I$, and let $\alpha$ be a real zero of the non-linear equation $f(x) = 0$. The iterative method defined, for a given $x_0$, by:
(3.14)
has cubic convergence and satisfies the error equation:
(3.15)
4. New Modified Newton Method
We consider the predictor-corrector method (3.14):
(4.1)
We replace $f'(x_n)$ in the predictor step by the finite-difference approximation
f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}},  (4.2)
which is a suitable approximation that requires no new information; the predictor step becomes the secant scheme. The scheme (3.14) becomes:
(4.3)
Our choice is motivated by the fact that evaluating two derivatives per iteration may slow the method down. The proposed method requires only one evaluation of the function and one of its derivative per iteration, which improves the efficiency index of the method compared with those proposed in many other works. It has convergence of order $p = 3$. More precisely, the algorithm of our iterative method is the following:
1) For a given
,
2) compute
(4.4)
3) For
,
(4.5)
and
(4.6)
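Since Equations (4.4)-(4.6) are not reproduced above, the following sketch rests on an explicit assumption: a secant-slope predictor built from (4.2), followed by a corrector that averages that slope with f' at the predicted point, so that each iteration costs one evaluation of f and one of f'. The corrector form is hypothetical, not taken verbatim from scheme (4.3):

```python
def secant_predictor_method(f, df, x0, x1, tol=1e-12, max_iter=50):
    """Hypothetical sketch: secant-slope predictor, then a corrector
    averaging the secant slope with f' at the predicted point.
    Per iteration: one evaluation of f and one of f'."""
    f_prev, x_prev, x = f(x0), x0, x1
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        s = (fx - f_prev) / (x - x_prev)  # secant slope, cf. (4.2)
        y = x - fx / s                    # predictor: secant step
        x_prev, f_prev = x, fx
        x = x - 2.0 * fx / (s + df(y))    # corrector (assumed form)
    return x

root = secant_predictor_method(lambda x: x**3 - 2*x - 5,
                               lambda x: 3*x**2 - 2, 2.0, 2.2)
```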
Theorem 4.1. Let $f: I \subseteq \mathbb{R} \to \mathbb{R}$ be an $r$-times Fréchet differentiable function on an open interval $I$, and let $\alpha$ be a real zero of the non-linear equation $f(x) = 0$. The iterative method defined by (4.3) has order of convergence $p = 3$, and the error equation is:
(4.7)
Proof. Let $\alpha$ be the unique simple root of Equation (1.1), let $x_{n+1}$ be an approximation of $\alpha$ obtained by the Scheme (4.6), and let $e_n = x_n - \alpha$ be the approximation error.
(4.8)
(4.9)
(4.10)
(4.11)
By expansion, we have:
(4.12)
Then
(4.13)
By replacing
by (4.2)
(4.14)
(4.15)
By using the predictor step, we have:
(4.16)
(4.17)
Taylor expansion of
at
gives:
(4.18)
(4.19)
(4.20)
(4.21)
(4.22)
By using (4.14), we have:
(4.23)
By using (4.22) and (4.23),
(4.24)
Then
(4.25)
Thus
(4.26)
5. Numerical Examples and Comparison
We compare the performance of our method with some existing methods used in [1]. We therefore give numerical results for several functions and initial values.
From the analysis of Table 1, we notice that the efficiency index of the proposed method is higher than that of the secant method, which was already better than the Newton method and the others developed in [1]. This is explained by the fact that our method, although of third order of convergence, requires only two evaluations of the function per iteration, whereas most third-order methods in the literature use three evaluations (assuming that evaluating the function and its derivative have the same numerical cost). Our method, defined by (4.3), is therefore preferable.
Table 1. Efficiency index of different numerical methods.
Table 2.
Table 3.
Table 4.
Table 5.
6. Conclusion
We have presented in this paper a modified Newton method with third-order convergence that is more efficient than most methods known in the literature. The efficiency analysis shows that this method is preferable for solving non-linear equations. A comparative study of the number of function evaluations and the number of iterations to convergence for several methods is reported in Tables 2-5. The results in these tables confirm the theory.