Hybrid Steffensen’s Method for Solving Nonlinear Equation

Abstract

In this paper, we present a class of methods for solving a nonlinear equation. Steffensen's method is a simple method for solving a nonlinear equation. By combining the forward- and backward-difference variants of Steffensen's method, we obtain a new method. Because it does not use the derivative of the function, this method is a good alternative to Newton's method for solving a nonlinear equation. Finally, we will see that Newton's method and the hybrid Steffensen method both have second-order convergence.

Share and Cite:

Eskandari, H. (2022) Hybrid Steffensen’s Method for Solving Nonlinear Equation. Applied Mathematics, 13, 745-752. doi: 10.4236/am.2022.139046.

1. Introduction

In most fields of engineering, the basic sciences, economics, management, and so on, an important problem is solving the nonlinear equation f(x) = 0, that is, finding its root. This is one of the oldest problems in science, and researchers in many different fields still do a lot of research on it.

If a number α satisfies f(α) = 0, then α is called a root of the equation f(x) = 0, or a zero of the function f(x). It is easy to find the zeros of a linear or quadratic function, but if the function is not a first- or second-degree algebraic function, or is not algebraic at all, finding a zero is not easy and we must approximate it by numerical or iterative methods.

2. Simple Methods

There are many methods for finding the root of the nonlinear equation f(x) = 0 in numerical analysis and computing books [1] [2] [3] [4] [5]. One of the simplest is the simple iteration method, or fixed-point method. In this method, we assume that the function f(x) has a unique root α in the interval [a, b], and by manipulation we rewrite the equation f(x) = 0 in the form x = g(x). Since f(α) = 0, it follows that α = g(α). So the root α of the equation f(x) = 0 is also a solution of x = g(x); we call it a fixed point of the function g.

Now, given an approximation x_0 of α, we create the iteration sequence {x_n} by x_{n+1} = g(x_n) for n = 0, 1, 2, 3, …. Under appropriate conditions on the function g, stated below, this iteration sequence converges to the fixed point of g, and hence to the root of f. We call the iteration formula x_{n+1} = g(x_n), n = 0, 1, 2, 3, …, the simple iteration method.

Determining the conditions on the function g and the initial value x_0 under which the iteration sequence {x_n} converges requires the following theorems, which are proved in calculus and numerical analysis texts.

Theorem 1

If the function g is continuous on [a, b], maps [a, b] into [a, b], and there is a positive constant L such that |g'(x)| ≤ L < 1 for all x in [a, b], then the equation x = g(x) has exactly one root in the interval [a, b].

Theorem 2

Under the conditions of Theorem 1, for every x_0 in [a, b] the sequence {x_n} defined by x_{n+1} = g(x_n) converges to the unique solution of x = g(x).

Theorem 3

If the sequence {x_n}, obtained by the simple iteration method from the equation x = g(x), converges to the root α, and g has derivatives up to order k with g'(α) = g''(α) = ⋯ = g^(k-1)(α) = 0 and g^(k)(α) ≠ 0, then the order of convergence of the sequence {x_n} is equal to k.
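As an illustrative sketch (not from the paper), the order k in Theorem 3 can be estimated numerically from the errors e_n = |x_n - α| using the standard ratio log|e_{n+1}/e_n| / log|e_n/e_{n-1}|. The example functions and constants below are our own:

```python
import math

def convergence_order(g, x0, alpha, steps=6):
    """Estimate the order of the iteration x_{n+1} = g(x_n) from its errors."""
    xs = [x0]
    for _ in range(steps):
        xs.append(g(xs[-1]))
    e = [abs(x - alpha) for x in xs]
    # ratio of successive error logarithms, using the last three errors
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# g(x) = cos(x): g'(alpha) != 0 at the fixed point, so Theorem 3 gives k = 1
alpha = 0.7390851332151607  # fixed point of cos(x)
p1 = convergence_order(math.cos, 1.0, alpha)

# g(x) = (x + 2/x)/2 (the Newton map for x^2 - 2): g'(sqrt(2)) = 0, so k = 2
p2 = convergence_order(lambda x: (x + 2.0 / x) / 2.0, 1.0, math.sqrt(2), steps=4)
```

Running this gives p1 close to 1 and p2 close to 2, matching the theorem.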

Stop Criteria

We use the following conditions to stop the computation of the iterates x_n.

If ε is a very small positive number:

1) We compute the iterates x_n until |f(x_n)| < ε; that is, as soon as |f(x_n)| < ε holds we stop the operation and accept x_n as an approximation of α.

2) We stop the operation as soon as |x_n - x_{n-1}| < ε and accept x_n as an approximation of the root.
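The simple iteration method with the two stopping criteria above can be sketched in Python (the paper's experiments use Maple; this sketch and its example function are our own):

```python
import math

def fixed_point(g, f, x0, eps=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n); stop when |f(x_n)| < eps or |x_n - x_{n-1}| < eps."""
    x_prev = x0
    for n in range(1, max_iter + 1):
        x = g(x_prev)
        if abs(f(x)) < eps or abs(x - x_prev) < eps:
            return x, n  # x_n is accepted as the approximation of alpha
        x_prev = x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = x - cos(x) = 0, rewritten as x = g(x) = cos(x)
root, n_iter = fixed_point(math.cos, lambda x: x - math.cos(x), 1.0)
```

Since |g'(α)| = sin(α) ≈ 0.67 < 1 here, Theorems 1 and 2 guarantee convergence, though only at first order.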

3. Preliminary Results

One of the fastest methods, and the one found in most numerical analysis books, is Newton's method [1]. This is an iterative method in which it is assumed that f: R → R is a smooth nonlinear function with a simple root x*, that is, f(x*) = 0 and the first derivative f'(x*) ≠ 0. Starting from a point x_0 ∈ R that is a relatively close estimate of the desired root, we obtain the values x_n for n = 0, 1, 2, … using the iteration formula

x_{n+1} = x_n - f(x_n)/f'(x_n)   (1)

This iteration formula is known as the Newton iteration formula. This method is a special case of the simple iteration method.

We use the previous theorems to discuss the convergence of Newton's method. We write the equation f(x) = 0 in the form x = x - f(x)/f'(x), that is

g(x) = x - f(x)/f'(x)   (2)

and the iteration formula of Newton's method will be of the form x_{n+1} = g(x_n), that is, x_{n+1} = x_n - f(x_n)/f'(x_n).

To discuss the convergence of Newton's method, we need to examine the convergence conditions of the simple iteration method for the g(x) defined in (2). For this purpose, we calculate g'(x), assuming that the function f has a continuous second derivative. So we have

g'(x) = f(x) f''(x) / (f'(x))^2   (3)

If α is a simple root, then f(α) = 0 and f'(α) ≠ 0, and therefore g'(α) = 0.

Also, g''(α) = f''(α)/f'(α) is obtained, from which in general g''(α) ≠ 0; therefore, by Theorem 3, Newton's method has second-order convergence.
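A minimal Python sketch of the Newton iteration (1) follows (our own example; the paper's computations were done in Maple):

```python
def newton(f, df, x0, eps=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n); stop when |f(x_n)| < eps."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) < eps:
            return x, n
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x_n) = 0: the Newton step is undefined")
        x -= fx / dfx
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = x^2 - 2 with root sqrt(2); note f'(x) = 2x must be supplied
root, n_iter = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The need to supply df explicitly is exactly the drawback discussed next.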

One of the problems with Newton's method is that f'(x) must exist and be evaluated at the points x_n, and that f'(x_n) ≠ 0 must always hold. Sometimes the function f does not have a derivative, so the Newton formula cannot be used, or the expression for f'(x) and its evaluation are complicated.

To solve these problems and avoid calculating the first derivative, one can use Steffensen's method [1] [2] [4] [6].

In this method, the first derivative of the function f(x) at the point x with step length h is approximated by one of the three main difference formulas: the forward difference formula

f'(x) ≈ (f(x + h) - f(x)) / h   (4)

or the backward difference formula

f'(x) ≈ (f(x) - f(x - h)) / h   (5)

or the central difference formula

f'(x) ≈ (f(x + h) - f(x - h)) / (2h)   (6)

If we approximate the first derivative of the function f(x) at the point x_n with step length h = f(x_n) using the forward difference formula (4), we have

f'(x_n) ≈ (f(x_n + f(x_n)) - f(x_n)) / f(x_n)

Inserting this into the Newton iteration formula (1), we arrive at the Steffensen iteration formula

x_{n+1} = x_n - (f(x_n))^2 / (f(x_n + f(x_n)) - f(x_n))   (7)

If we approximate the first derivative of the function f(x) at the point x_n with step length h = f(x_n) using the backward difference formula (5), we have

f'(x_n) ≈ (f(x_n) - f(x_n - f(x_n))) / f(x_n)

Inserting this into the Newton iteration formula (1), we have

x_{n+1} = x_n - (f(x_n))^2 / (f(x_n) - f(x_n - f(x_n)))   (8)

If we approximate the first derivative of the function f(x) at the point x_n with step length h = f(x_n) using the central difference formula (6), we have

f'(x_n) ≈ (f(x_n + f(x_n)) - f(x_n - f(x_n))) / (2 f(x_n))

Inserting this into the Newton iteration formula (1), we have

x_{n+1} = x_n - 2(f(x_n))^2 / (f(x_n + f(x_n)) - f(x_n - f(x_n)))   (9)

Each of the iteration formulas (7), (8) and (9) has convergence of order at least two [1] [2] [4] [6], as has been proved in many articles and books.
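The three derivative-free iterations (7), (8) and (9) can be sketched in one Python routine (an illustration with our own naming, not the paper's Maple code):

```python
def steffensen(f, x0, variant="forward", eps=1e-12, max_iter=100):
    """Derivative-free Steffensen iteration using formula (7), (8) or (9)."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) < eps:
            return x, n
        if variant == "forward":       # formula (7)
            x -= fx * fx / (f(x + fx) - fx)
        elif variant == "backward":    # formula (8)
            x -= fx * fx / (fx - f(x - fx))
        else:                          # "central", formula (9)
            x -= 2.0 * fx * fx / (f(x + fx) - f(x - fx))
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = x^2 - 2; all three variants need only evaluations of f itself
f = lambda x: x * x - 2.0
roots = {v: steffensen(f, 1.0, v)[0] for v in ("forward", "backward", "central")}
```

Unlike the Newton sketch above, no derivative function is supplied; the step length h = f(x_n) shrinks automatically as the iterates approach the root.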

We want to combine the iteration formulas (7) and (8). We do this by introducing a parameter w. If 0 ≤ w ≤ 1 is a real constant, we write the convex combination of these two iteration formulas as follows. We know that

x_{n+1} = w x_{n+1} + (1 - w) x_{n+1}

Now, instead of each x_{n+1} on the right-hand side, we substitute the iteration formulas (7) and (8):

= w ( x_n - (f(x_n))^2 / (f(x_n + f(x_n)) - f(x_n)) ) + (1 - w) ( x_n - (f(x_n))^2 / (f(x_n) - f(x_n - f(x_n))) )

Simplifying, we obtain

= x_n - (f(x_n))^2 ( w / (f(x_n + f(x_n)) - f(x_n)) + (1 - w) / (f(x_n) - f(x_n - f(x_n))) )

So the hybrid Steffensen iteration formula will be as follows:

x_{n+1} = x_n - (f(x_n))^2 ( w / (f(x_n + f(x_n)) - f(x_n)) + (1 - w) / (f(x_n) - f(x_n - f(x_n))) )

where w is a parameter and 0 ≤ w ≤ 1.

By changing the value of w with 0 ≤ w ≤ 1, we can obtain the zero of the nonlinear equation with a relatively good approximation, even compared to Newton's method. Since the new iteration formula, i.e. the hybrid Steffensen method, is built from Steffensen's method, it has the same order of convergence as Steffensen's method.
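A Python sketch of the hybrid iteration above (our own code; w = 1 recovers formula (7) and w = 0 recovers formula (8)):

```python
def hybrid_steffensen(f, x0, w=0.5, eps=1e-12, max_iter=100):
    """Convex combination of the forward (7) and backward (8) Steffensen steps."""
    assert 0.0 <= w <= 1.0
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) < eps:
            return x, n
        fwd = f(x + fx) - fx           # forward-difference denominator of (7)
        bwd = fx - f(x - fx)           # backward-difference denominator of (8)
        x -= fx * fx * (w / fwd + (1.0 - w) / bwd)
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = x^2 - 2, trying several values of the parameter w
f = lambda x: x * x - 2.0
results = {w: hybrid_steffensen(f, 1.0, w) for w in (0.25, 0.5, 0.85)}
```

Each entry of `results` holds the approximate root and the iteration count for one w, so the effect of the parameter on convergence speed can be compared directly.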

In order to see this better, we give some examples in the next section, where we will see that the speed of convergence changes with w for different functions.

4. Numerical Experiments

Here, in order to compare different values of w in this method, we solve several examples of nonlinear equations and compare the results. The test functions and their zeros are given in Table 1.

The results of this computation in Maple for these functions using Newton's method are shown below. In all these functions ε = 10^(-1000) and x_0 = 1.000000000 are assumed. The iterations of this method are listed in Table 2.

The results of this computation in Maple for these functions using Steffensen's method are shown below. In all these functions ε = 10^(-1000) and x_0 = 1.000000000 are assumed. The iterations of this method are listed in Table 3.

The results of this computation in Maple for these functions using the hybrid Steffensen method are shown below, with approximate values obtained for several values of w. In all these functions ε = 10^(-1000) and x_0 = 1.000000000 are assumed. The iterations of this method are listed in Tables 4-6 for w = 0.5, 0.25 and 0.85, respectively.

Table 1. Some examples of the function with its zero.

Table 2. Solution by using Newton’s method.

Table 3. Solution by using Steffensen’s method.

Table 4. Solution by using Steffensen’s hybrid method w = 0.5.

Table 5. Solution by using Steffensen’s hybrid method w = 0.25.

Table 6. Solution by using Steffensen’s hybrid method w = 0.85.

When we compare the number of iterations for different values of w in Tables 4-6, we get interesting results. It can be seen that with this simple method it is possible to converge quickly to the answer.

5. Conclusion

In this article, a class of methods for solving a nonlinear equation was presented. Steffensen's method is a simple method for solving a nonlinear equation; using the parameter w, we obtained a class of iterations. The method presented in this article, called the hybrid Steffensen method, is a relatively good and fast method for solving nonlinear equations. One of its good features is that it does not use the derivative of the function, and in this sense it can be said to be superior to Newton's method; moreover, by changing w, better results can be obtained and the method can converge faster.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Kincaid, D. and Cheney, W. (1996) Numerical Analysis. 2nd Edition, Brooks/Cole, Pacific Grove.
[2] Jaiswal, J.P. (2013) A New Third-Order Derivative Free Method for Solving Nonlinear Equations. Universal Journal of Applied Mathematics, 1, 131-135.
https://doi.org/10.13189/ujam.2013.010215
[3] Eskandari, H. (2010) Solution of Equation by Using Difference Formulas. Proceedings of the 15th WSEAS International Conference on Applied Mathematics, Athens, 29-31 December 2010, 17-19.
[4] Sharma, J.R. (2005) A Composite Third Order Newton–Steffensen Method for Solving Nonlinear Equations. Applied Mathematics and Computation, 169, 242-246.
https://doi.org/10.1016/j.amc.2004.10.040
[5] Eskandari, H. (2014) Generalized Difference Formula for a Nonlinear Equation. Applied and Computational Mathematics, 3, 130-136.
https://doi.org/10.11648/j.acm.20140304.14
[6] Eftekhari, T. (2014) A New Sixth-Order Steffensen-Type Iterative Method for Solving Nonlinear Equations. International Journal of Analysis, 2014, Article ID: 685796.
https://doi.org/10.1155/2014/685796

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.