Two Families of Multipoint Root-Solvers Using Inverse Interpolation with Memory ()
1. Introduction
Newton’s method is a well-known iterative method to solve nonlinear problems in scientific computation. For a nonlinear equation
, Newton’s method is as the following (see [1] ):
Furthermore, Steffensen’s method is a derivative-free iterative method, and a self-accelerating Steffensen’s method is introduced in Traub’s book ( [2] ) as the following:
(1)
where
and
by using
recursively the memory on the previous step without any new functional evaluation.
The efficiency index of an iterative method (IM) is defined as
where p is the order of IM and d is the number of function evaluations of IM per step. Kung and Traub conjectured in 1974 that a multipoint iteration based on n + 1 evaluations without memory has optimal order 2n of convergence [3]. Thus, Newton’s method and Steffensen’s method are methods of optimal second-order, and their efficiency indices are both
. Self-accelerating Steffensen’s method achieves super convergence of order
with memory, and its efficiency index is
. A one-parameter multipoint iteration of optimal order 16 in [4] [5] consists of successively a Newton substep, a modified Newton substep and two substeps of inverse interpolation, requires four evaluations of f and only one evaluation of
per step, and reaches the efficiency index
. General multipoint iterations of optimal order have been constructed by using inverse interpolation in [3] [6] [7] and direct
interpolation in [5] [7] - [12], and reach the efficiency index
by n + 1
evaluations without memory. Furthermore, self-accelerations of general multipoint iterations with memory from the current and previous iterations can achieve better convergence and efficiency [5] [7] [10] [11] [12].
Recently, the family of n + 1-point iterative methods of optimal order 2n with n + 1 self-accelerating parameters was proposed by using Newton’s interpolation in [12] as follows:
(2)
where
. This family generalized the two-point
two-parameter Steffensen’s method in [13] and the general parametric families in [7] [9] [10] [11] by using n + 1 parameters, and achieved the convergence of
order
by using the parameters with the memory on all points in the previous step as the following:
(3)
where
is a Newton’s interpolating polynomial of order
for
, such that
and
. When
, the efficiency index of (2) without memory is also
and the efficiency index of (2) and (3) with memory is
. The topic on basins of attraction was addressed in [14] - [20] for derivative-free methods and various other methods.
In this paper, a general family of n + 1-point iterative methods derivative-free and another general family of n-point iterative methods using a first derivative are constructed by the inverse interpolation with the memory on points in the previous step in Sections 2 and 3 respectively, the proposed families are tested by numerical examples for solving nonlinear equations and the basins of attraction of the methods are illustrated in Section 4, and finally conclusions are made.
2. General n + 1-Point Iteration Derivative-Free with Memory
Let
be an approximation of the simple root of
and
be a further approximation. Let us recognize the mapping f previously from the independent variable to the dependent variable inversely as a mapping
now from the dependent variable to the independent variable in the obtained discrete information. We can have an inverse Newton’s interpolating polynomial of degree one (see, e.g., [2] [3] ):
such that
and
, where
. The next approximation of the root could be obtained by
as the following:
which can give Steffensen’s method and self-accelerating Steffensen’s method obviously.
However, by using the most information up to the previous step, i.e., using the known discrete information of
in Table 1 when
.
We have the inverse Newton’s interpolating polynomial of degree three as the following:
![]()
Table 1. The known discrete information of
for
and
.
such that
,
,
and
, where
and
and so forth.
could be better than
to approximate the root of
. We suggest that
and propose a derivative-free iteration as the following:
Furthermore, by the inverse Newton’s interpolating polynomial of degree
satisfying Table 1, we construct an optimal family of n + 1-point iterations with memory as the following:
(4)
where
is a constant.
Theorem 1. Let
be a sufficiently differentiable function with a simple root
,
be an open set,
be close enough to a, then the family (4) satisfies the error equation
(5)
where
and
, and achieves the convergence of order at least
.
Proof. Supposed that
and
. Then,
Noticing the definition of divided difference, for
, we have
So,
Thus,
, and
. 
The parameter in the multipoint iteration (4) should be expressed by using memory as good as possible. According to the asymptotic convergence constant in (5), besides others such as
we choose the expression of the parameter here to be the following:
(6)
Theorem 2. Let
be a sufficiently differentiable function with a simple root
,
be an open set,
be close enough to a, then the family (4) with the self-acceleration (6) satisfies the error equation:
(7)
where
and
, and achieves convergence of order at least
.
Proof. By the proof of Theorem 1, for
, we have
For
, we have
So,
Thus,
, and
. 
3. General n-Point Iteration with Memory and a First Derivative
Let
be an approximation of the simple root of
. By an inverse interpolation on this one point of degree one, we have
where
as usual and so forth. Therefore, by using
to approximate the root, we have
which is Newton’s method. However, by using the most information up to the previous step, we have the inverse interpolation of degree two:
We suggest a one-point method with memory by
as follows:
(8)
Furthermore, we construct a family of n-point iterations with the memory on the whole previous step by using the inverse interpolation as follows:
(9)
Theorem 3. Let
be a sufficiently differentiable function with a simple root
,
be an open set,
be close enough to a, then the family of n-point iterations (9) satisfies the error equation:
(10)
where
and
, and achieves convergence of order at least
.
Proof. Denote
and
. Supposed that
and
. Then,
When
, we have
So, we have
Thus,
and
.
When
, for
, we have
So,
Thus,
, and
. 
4. Numerical Examples
The proposed families (4), (4) with (6), (9), as well as the existing family (2) with and without memory are demonstrated to solve the nonlinear equations in the examples. For general families of biparametric multipoint iterations with and without memory as well as other related discussions, please refer to, e.g., [10] [11]. The computational order of convergence is defined by:
Example 1. The numerical results on
in Table 2 agree with the convergence rates in Theorems 1 - 3.
Example 2. The numerical results in Table 3 are for nonlinear functions:
Example 3. The basins of attraction of the existing family (2) with (3) by using direct Newton interpolation, the family (4) with (6) and family (9) by using inverse interpolation to solve
with the criterion
in
are shown in Figure 1. The colors “r”, “g”, “b”, “c”, “w”, “y”, “m”, “k” are assigned for the number of iteration {0, 1, 2}, 3, 4, {5, 6}, {7, 8}, {9, 10, 11}, {12, 13, 14, 15} and default. The basin of attraction of the method based on inverse interpolation may be a little smaller, but not too much worse than that based on direct interpolation, since the first substep is a Steffensen’s method for (4) with (6) and a Newton’s method for (9) respectively in the first step when
.
![]()
Figure 1. (2) with (3), (4) with (6), (9) in 1st, 2nd, 3rd rows (left n = 1, middle n = 2 and right n = 3).
![]()
Table 2. The numerical results of
.
![]()
Table 3. Numerical results for
.
5. Conclusion
In this paper, we construct a family (4) of n + 1-point iterations derivative-free and another family (9) of n-point iterations using a first derivative by the inverse interpolatory polynomial with memory to solve the simple root of a nonlinear equation. The general families (4) and (9) use n + 1 functional evaluations with the memory in the previous step to achieve the super convergence of order
and
respectively. The general family (4) with (6) achieves the super convergence of order
, which is the same as that of the existing family (2) with (3). When
, as special case, both of them achieve the super convergence of order
and have the efficiency index
. The application of the memory is more handy in
the proposed families than that of (3) in (2). The basins of attraction of the related multipoint iterations with memory are also demonstrated. The advantage of effectiveness and convenience in practice of the proposed families is confirmed by numerical examples.