Multistep Quadrature Based Methods for Nonlinear System of Equations with Singular Jacobian

Abstract

Methods for approximating the solution of nonlinear systems of equations often fail when the Jacobians of the systems are singular at iteration points. In this paper, multi-step families of quadrature based iterative methods for approximating the solution of nonlinear systems of equations with singular Jacobian are developed using a decomposition technique. The methods proposed in this study are of convergence order up to four and require only the evaluation of the first-order Fréchet derivative per iteration. Comparison of the approximate solutions generated by the proposed iterative methods with those of some existing contemporary methods in the literature shows that the methods developed herein are efficient and adequate for approximating the solution of nonlinear systems of equations whose Jacobians are singular or non-singular at iteration points.


Ogbereyivwe, O. and Muka, K. (2019) Multistep Quadrature Based Methods for Nonlinear System of Equations with Singular Jacobian. Journal of Applied Mathematics and Physics, 7, 702-725. doi: 10.4236/jamp.2019.73049.

1. Introduction

Systems of equations used in describing real-life phenomena are often nonlinear in nature. Examples of mathematical models formulated using nonlinear systems of equations (NLSE) include models that describe kinematics, combustion, chemical equilibrium, economic and neurophysiology problems [1] [2]; the reactor steering problem [3] [4]; and the transportation problem [5]. Indeed, most real-life problems are best described using NLSE.

Consider the NLSE,

$$G(X) = 0 \quad (1)$$

where $X \in \mathbb{R}^m$, $0$ is a null vector of dimension $m$, and $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is a functional defined by

$$G(X_1, X_2, \ldots, X_m) = \left[ G_i(X_1, X_2, \ldots, X_m) \right]^T,$$

where $G_i$, $i = 1, 2, \ldots, m$, are the coordinate functions of $G$ and $D$ is an open domain in $\mathbb{R}^m$.

The Newton method in $m$ dimensions is a popular iterative method for approximating the solution of the NLSE (1). The sequence of approximations $\{X_k\}_{k \geq 1}$ generated by the Newton method converges to the solution $\Phi$ of the NLSE (1) with convergence order $\rho = 2$, under the condition that $\det(G'(X_k)) \neq 0$ [6]. One setback of the Newton method is that it fails if, at any iteration stage of the computation, the Jacobian matrix $G'(X_k)$ is singular [7] [8].

The development of new iterative methods for approximating the solution of (1) has attracted the attention of researchers in recent years, as is evident in the literature [9] [10] [11] [12]. One objective of developing iterative methods for approximating the solutions of the NLSE (1) is to obtain methods with a better convergence rate or computational efficiency, or methods modified to solve certain classes of problems. Recently, a plethora of iterative methods for approximating the solution of (1) have been developed via diverse techniques. These techniques include Taylor series and homotopy [13] [14] [15] [16], decomposition techniques [3] [17] [18], and quadrature formula techniques [19] [20] [21] [22].

Quadrature formulas are veritable tools for the evaluation of integrals [23] [24] [25]. The idea used in developing quadrature based iterative methods is the approximation of the integral in the Taylor expansion of a vector function using quadrature formulas [26] [27]. Quadrature based iterative functions are of implicit type [19] [20] [21] [28] [29]. To implement an implicit iterative formula derived via a quadrature formula, the predictor-corrector technique is utilized, with the Newton method often used as predictor and the iterative function derived from the quadrature formula as corrector. Quadrature based methods break down in implementation when the Jacobian $G'(X)$ of the NLSE (1) is singular at iteration points. The presence of a singular Jacobian $G'(X)$ within the domain of evaluation does not, in practice, suggest the absence of a solution to (1). In order to circumvent the problem of having a singular Jacobian at a point in the vicinity of the solution $\Phi$ of (1), the Newton method was modified in [30] by introducing a perturbation term (a diagonal matrix) into the Jacobian of its corrector factor. In [31], the idea of the perturbation term introduced in [30] is utilized to develop a two-step iterative method for approximating the solution of (1) when the Jacobian $G'(X)$ is singular at some iteration points. In [8], a similar perturbation term was introduced at every step of the corrector factor in a three-step frozen Jacobian iterative method for approximating the solution of (1). Other articles, such as [32] [33], have also developed several iterative methods for solving (1) with the help of the same perturbation term applied to the Jacobian of (1). It is worthy of note that the diagonal matrix used as the perturbation term in the literature has not been significantly modified since it was introduced in [30]. Also, its application has not been extended to quadrature based iterative methods in the literature. Motivated and inspired by the work in [30] [31] [32] and [33], families of multi-step quadrature based iterative methods with a perturbation term infused into the Jacobian are developed in this paper. It is important to note that the perturbation term developed and used in this work is different from the diagonal matrix formed by the coordinate functions of the target NLSE (1) used in the literature. To achieve this target, a continuous and differentiable auxiliary function is directly infused into (1). The resulting NLSE is thereafter expressed as coupled equations involving a generic quadrature formula. The decomposition technique is used to resolve the coupled equations, from which some iterative schemes that can be utilized in developing iterative methods for approximating the solution of (1) whose Jacobian is singular are proposed.

2. The Proposed Iterative Methods

Let $\beta$ be an initial approximation close to $\Phi$, a solution of the NLSE (1), and let $\Omega(X)$ be a function such that

$$\Omega(X) = \psi(X) \odot G(X) = 0 \quad (2)$$

where $\psi(X)$ is a differentiable nonzero scalar function. The notation $\odot$ denotes the component-wise (element-wise) product, so that

$$\psi(X) \odot G(X) = \left[ \psi(X)G_1(X), \psi(X)G_2(X), \ldots, \psi(X)G_m(X) \right]^T \quad (3)$$

The solutions of $\Omega(X) = 0$ and $G(X) = 0$ are the same because $\psi(X) \neq 0$ for all values of $X$.

With the aid of the Taylor series expansion of a multi-dimensional function about $\beta$ up to the second term, and using a generic quadrature formula to approximate the integral involving $G'$, (2) can be rewritten as

$$G(\beta) + \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X - \beta)\big) \right](X - \beta) + H(X) = 0 \quad (4)$$

where $H(X)$ denotes the higher order terms of the Taylor expansion, the division in $\frac{\psi'(X)^T}{\psi(X)}$ is element-wise, and $\theta_i$ and $\mu_i$, $i = 1, 2, \ldots, q$, are the knots and weights, respectively, of the quadrature formula, such that

$$\sum_{i=1}^{q} \mu_i = 1, \qquad \sum_{i=1}^{q} \mu_i \theta_i = \frac{1}{2}, \quad (5)$$

Equation (5) gives the consistency conditions [34].
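As a quick illustration (not part of the original derivation), a chosen set of weights and knots can be checked against the consistency conditions (5); the values below are the Simpson-type choices that appear later in Algorithms 6 and 10.

```python
# Sketch: verify that a set of quadrature weights/knots satisfies the
# consistency conditions (5): sum(mu_i) = 1 and sum(mu_i * theta_i) = 1/2.
mu = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]   # weights used in Algorithms 6 and 10
theta = [0.0, 0.5, 1.0]                  # corresponding knots

sum_mu = sum(mu)
sum_mu_theta = sum(m * t for m, t in zip(mu, theta))

print(sum_mu, sum_mu_theta)              # expected: 1.0 and 0.5
assert abs(sum_mu - 1.0) < 1e-12 and abs(sum_mu_theta - 0.5) < 1e-12
```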

Equation (4) is expressed as the coupled equations given in (6) and (7).

$$G(\beta) + \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X - \beta)\big) \right](X - \beta) + H(X) = 0 \quad (6)$$

$$H(X) = G(X) - G(\beta) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X - \beta)\big) \right](X - \beta) \quad (7)$$

In compact form, (6) can be expressed as

$$X = \beta + N(X) \quad (8)$$

where

$$N(X) = -\left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X - \beta)\big) \right]^{-1}\big( G(\beta) + H(X) \big) \quad (9)$$

is a nonlinear function.

The decomposition technique due to [17] is applied to decompose the nonlinear function (9) as

$$N(X) = N(X_0) + \sum_{i=1}^{\infty}\left[ N\!\left(\sum_{j=0}^{i} X_j\right) - N\!\left(\sum_{j=0}^{i-1} X_j\right) \right] \quad (10)$$

where $X_0$ is the initial guess.

The idea here is to find the solution vector $X$ of the NLSE (1) in series form through an iterative scheme, such that the solution $X$ is the sum of the initial guess $\beta$ and the consecutive differences between successive partial-sum approximations of $X$; that is,

$$X = \sum_{i=0}^{\infty} X_i = \beta + N(X_0) + \sum_{i=1}^{\infty}\left[ N\!\left(\sum_{j=0}^{i} X_j\right) - N\!\left(\sum_{j=0}^{i-1} X_j\right) \right] \quad (11)$$

Hence the following scheme from (11) is obtained:

$$X_0 = \beta, \qquad X_1 = N(X_0), \qquad X_{i+1} = N\!\left(\sum_{j=0}^{i} X_j\right) - N\!\left(\sum_{j=0}^{i-1} X_j\right), \quad i = 1, 2, \ldots \quad (12)$$

The sum of the respective sides of (12) is

$$\sum_{i=0}^{s+1} X_i = \beta + N\!\left(\sum_{i=0}^{s} X_i\right), \quad s = 1, 2, \ldots \quad (13)$$

From (2) and (12), the solution $X$ of (1) is approximated as:

$$X \approx \sum_{i=0}^{s} X_i = \beta + N\!\left(\sum_{i=0}^{s-1} X_i\right), \quad s = 1, 2, \ldots \quad (14)$$

As $s$ becomes large, the approximation of the solution $X$ gets closer to the exact solution of (1).

From Equation (12)

$$X_0 = \beta \quad (15)$$

Since $X_0$ is the initial guess, setting $X = X_0$ in (7) yields

$$H(X_0) = 0 \quad (16)$$

From (9) and (15),

$$X_1 = N(X_0) = -\left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \left(\sum_{i=1}^{q} \mu_i\right) G'(\beta) \right]^{-1} G(\beta) \quad (17)$$

For s = 1 in (14), and using (17), the following is obtained.

$$X \approx X_0 + X_1 = \beta + N(X_0) = \beta - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) \quad (18)$$

Using the formulation in (18), a one-step family of iterative schemes for approximating the solution of (1) is proposed as Scheme 1.

Scheme 1. Assume $X_0$ is an initial guess; approximate the solution $\Phi$ of (1) using the iterative scheme:

$$X_{k+1} = X_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + G'(X_k) \right]^{-1} G(X_k), \quad k = 0, 1, 2, \ldots \quad (19)$$

Scheme 1 is a family of one-step iterative schemes from which iterative methods for solving (1) can be proposed for suitable choices of the function $\psi(X_k)$.

For s = 2 in (14), the solution X of (1) can be approximated as:

$$X \approx X_0 + X_1 + X_2 = X_0 + N(X_0 + X_1) = \beta - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]^{-1}\big( G(\beta) + H(X_0 + X_1) \big) \quad (20)$$

Setting $X = X_0 + X_1$ in (7) implies

$$H(X_0 + X_1) = G(X_0 + X_1) - G(\beta) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right](X_0 + X_1 - \beta) \quad (21)$$

From (18),

$$X_0 + X_1 - \beta = -\left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) \quad (22)$$

Substituting (22) into (21) yields

$$H(X_0 + X_1) = G(X_0 + X_1) - G(\beta) + \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]\left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) \quad (23)$$

Inserting (23) into (20) gives the equation

$$X \approx X_0 + X_1 + X_2 = \beta - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]^{-1} G(X_0 + X_1) \quad (24)$$

Using (24), a two-step iterative scheme for approximating the solution $\Phi$ of (1) is proposed as Scheme 2.

Scheme 2 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative scheme:

$$\begin{aligned} \nu_k &= X_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + G'(X_k) \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (\nu_k - X_k)\big) \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (25)$$

Scheme 2 is used to propose Two-step iterative methods for approximating the solution Φ of the NLSE (1).

For s = 3 in (14), the solution of (1) can be approximated as follows:

$$X \approx X_0 + X_1 + X_2 + X_3 = X_0 + N(X_0 + X_1 + X_2) = \beta - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 + X_2 - \beta)\big) \right]^{-1} \big( G(\beta) + H(X_0 + X_1 + X_2) \big) \quad (26)$$

Setting $X = X_0 + X_1 + X_2$ in (7) yields

$$H(X_0 + X_1 + X_2) = G(X_0 + X_1 + X_2) - G(\beta) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 + X_2 - \beta)\big) \right](X_0 + X_1 + X_2 - \beta) \quad (27)$$

From (24), (28) is obtained.

$$X_0 + X_1 + X_2 - \beta = -\left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]^{-1} G(X_0 + X_1) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) \quad (28)$$

Substituting (28) into (27) yields

$$\begin{aligned} H(X_0 + X_1 + X_2) = {} & G(X_0 + X_1 + X_2) - G(\beta) \\ & + \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 + X_2 - \beta)\big) \right] \\ & \times \left( \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]^{-1} G(X_0 + X_1) + \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) \right) \quad (29) \end{aligned}$$

Substituting (29) into (26) gives

$$\begin{aligned} X \approx {} & X_0 + X_1 + X_2 + X_3 \\ = {} & \beta - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + G'(\beta) \right]^{-1} G(\beta) - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 - \beta)\big) \right]^{-1} G(X_0 + X_1) \\ & - \left[ G(\beta)\,\frac{\psi'(X)^T}{\psi(X)} + \sum_{i=1}^{q} \mu_i\, G'\big(\beta + \theta_i (X_0 + X_1 + X_2 - \beta)\big) \right]^{-1} G(X_0 + X_1 + X_2) \quad (30) \end{aligned}$$

The formulation in (30) enables the proposal of a three-step iterative scheme for the solution of (1).

Scheme 3 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative scheme:

$$\begin{aligned} \nu_k &= X_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + G'(X_k) \right]^{-1} G(X_k), \\ W_k &= \nu_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (\nu_k - X_k)\big) \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - \left[ G(X_k)\,\frac{\psi'(X_k)^T}{\psi(X_k)} + \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (W_k - X_k)\big) \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (31)$$

Scheme 3 is used to propose Three-step iterative methods for approximating the solution Φ of the NLSE (1).

A suitable choice of the function $\psi(X_k)$ in the proposed Scheme 1, Scheme 2 and Scheme 3 yields families of quadrature based iterative methods for approximating the solution $\Phi$ of (1). It is worthy of note that for $\psi(X_k) = 1$, Scheme 1 reduces to the classical Newton method, while Scheme 2 and Scheme 3 reduce to the family of approximation methods in [12]. One major purpose of proposing $\Omega(X)$ in (2) is to introduce the perturbation function $\psi(X_k)$ while retaining the solution $\Phi$ of the target NLSE (1). Recall that $\psi(X_k)$ must be chosen as a nonzero scalar function whose first derivative $\psi'(X_k)$ does not vanish. This way, the solution of (1) is unperturbed. One function whose value and first derivative are both nonzero is the exponential function [30] [33] [35]. Suppose $\psi(X_k)$ is replaced by $\exp\psi(X_k)$; then

$$\frac{\psi'(X_k)^T}{\psi(X_k)} = \psi'(X_k)^T \quad (32)$$

From (32), a generalization can be made as

$$-\psi'(X_k)^T = \lambda(X_k) \quad (33)$$

where $\lambda(X_k) = [\lambda_1(X_k), \lambda_2(X_k), \ldots, \lambda_m(X_k)]^T$. Consequently, the following iterative algorithms are obtained from Scheme 1, Scheme 2 and Scheme 3, respectively.

Algorithm 1 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$X_{k+1} = X_k - \left[ G'(X_k) - G(X_k)\lambda(X_k) \right]^{-1} G(X_k), \quad k = 0, 1, 2, \ldots \quad (34)$$

If the parameter $\lambda(X_k) = 0$, Algorithm 1 reduces to the classical $m$-dimensional Newton method for (1). The major difference between Algorithm 1 and the Wu method in [30] is the introduction of a dense matrix $G(X_k)\psi'(X_k)^T$ in place of the diagonal matrix $\mathrm{diag}(\sigma_i G_i(X_k))$, $i = 1, 2, \ldots, m$, in the Jacobian $G'(X_k)$ of the target system (1).
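For illustration, a minimal sketch of one iteration of Algorithm 1 is given below, assuming the sign convention of (34) as reconstructed above and treating $G(X_k)\lambda(X_k)$ as the outer product that forms the dense perturbation matrix just mentioned; the function and argument names are illustrative only and not from the paper.

```python
import numpy as np

def algorithm1_step(G, Jac, x, lam):
    """One step of the perturbed Newton iteration (34):
    x_new = x - [G'(x) - G(x) lam^T]^{-1} G(x).
    G   : callable returning the residual vector G(x) (shape (m,))
    Jac : callable returning the Jacobian G'(x) (shape (m, m))
    lam : perturbation vector lambda (shape (m,)), elements of magnitude < 1
    """
    g = np.asarray(G(x), dtype=float)
    J = np.asarray(Jac(x), dtype=float)
    A = J - np.outer(g, lam)          # dense perturbation of the Jacobian
    return x - np.linalg.solve(A, g)
```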

Algorithm 2 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda(X_k) \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - \left[ \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (\nu_k - X_k)\big) - G(X_k)\lambda(X_k) \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (35)$$

Algorithm 2 is a two-step family of iterative methods for approximating the solution of (1).

Algorithm 3 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda(X_k) \right]^{-1} G(X_k), \\ W_k &= \nu_k - \left[ \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (\nu_k - X_k)\big) - G(X_k)\lambda(X_k) \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - \left[ \sum_{i=1}^{q} \mu_i\, G'\big(X_k + \theta_i (W_k - X_k)\big) - G(X_k)\lambda(X_k) \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (36)$$

Remark 1

For numerical implementation, the choice of $\lambda(X_k)$ is subjective; however, specific values of $\lambda$ (for reference purposes, $\lambda(X_k)$ is denoted by $\lambda$) are used such that the magnitude of their elements is less than one, in order to achieve a better convergence rate and accuracy. Similarly, the choice of $\theta_i$ is also subjective, but it must satisfy the consistency conditions in (5).
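As a sketch of how the three-step family (36) can be organized in code (the two-step family (35) is the same with the last step omitted), the routine below follows the formulas as reconstructed above; the weights mu, knots theta and the perturbation vector lam are user-supplied and must respect Remark 1 and the consistency conditions (5). The helper names are not from the paper.

```python
import numpy as np

def quadrature_matrix(Jac, x, y, mu, theta, g, lam):
    """Form sum_i mu_i * G'(x + theta_i (y - x)) - G(x) lam^T  (cf. (35), (36))."""
    m = len(x)
    A = np.zeros((m, m))
    for mu_i, th_i in zip(mu, theta):
        A += mu_i * np.asarray(Jac(x + th_i * (y - x)), dtype=float)
    return A - np.outer(g, lam)

def algorithm3(G, Jac, x0, lam, mu, theta, tol=1e-15, max_iter=100):
    """Three-step method (36); returns the approximate solution and iteration count."""
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam, dtype=float)
    for k in range(max_iter):
        g = np.asarray(G(x), dtype=float)
        if np.linalg.norm(g) < tol:
            return x, k
        J = np.asarray(Jac(x), dtype=float)
        nu = x - np.linalg.solve(J - np.outer(g, lam), g)
        A1 = quadrature_matrix(Jac, x, nu, mu, theta, g, lam)
        w = nu - np.linalg.solve(A1, np.asarray(G(nu), dtype=float))
        A2 = quadrature_matrix(Jac, x, w, mu, theta, g, lam)
        x = w - np.linalg.solve(A2, np.asarray(G(w), dtype=float))
    return x, max_iter
```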

2.1. Convergence Analysis of the Proposed Iterative Methods

In this section, the convergence of the iterative methods (Algorithm 1, Algorithm 2 and Algorithm 3) is established using the Taylor series approach [11] [12] [36]. In all the proofs, it is assumed that the function $G(\cdot)$ is thrice Fréchet differentiable.

2.2. Convergence Analysis of Algorithm 1

To establish the convergence of Algorithm 1, the proof of Theorem 1 is considered.

Theorem 1. Suppose the function $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is continuously differentiable in some neighborhood $D \subseteq \mathbb{R}^m$ of $\Phi$. If $X_0$ is an initial guess in the neighborhood of $\Phi$, then the sequence of approximations $\{X_k\}_{k \geq 0}$, $X_k \in D$, generated by (34) converges to $\Phi$ with convergence order $\rho = 2$.

Proof. Let $E_k = X_k - \Phi$ be the error at the $k$th iteration point. Using the Taylor series expansions of $G(X)$ and $G'(X)$ about $\Phi$, the following equations are obtained.

$$G(X) = G(\Phi) + G'(\Phi)(X - \Phi) + \frac{1}{2!} G''(\Phi)(X - \Phi)^2 + \frac{1}{3!} G'''(\Phi)(X - \Phi)^3 + \cdots \quad (37)$$

$$G'(X) = G'(\Phi) + G''(\Phi)(X - \Phi) + \frac{1}{2!} G'''(\Phi)(X - \Phi)^2 + \frac{1}{3!} G^{(4)}(\Phi)(X - \Phi)^3 + \cdots \quad (38)$$

Setting $X = X_k$ in (37) and (38) implies

$$G(X_k) = G(\Phi + E_k) = G'(\Phi)\left[ E_k + \sum_{n=2}^{4} C_n E_k^n + O(E_k^5) \right], \quad k = 0, 1, 2, \ldots \quad (39)$$

$$G'(X_k) = G'(\Phi + E_k) = G'(\Phi)\left[ I + \sum_{n=2}^{5} n C_n E_k^{n-1} + O(E_k^5) \right], \quad k = 0, 1, 2, \ldots \quad (40)$$

where $I$ is an $m \times m$ identity matrix and $C_n = \frac{1}{n!} (G'(\Phi))^{-1} G^{(n)}(\Phi)$, $n \geq 2$.

Using (39) and (40)

$$\begin{aligned} \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} = (G'(\Phi))^{-1} \Big[ & I + (2C_2 + \lambda)E_k + (4C_2^2 - 3C_3 - 3C_2\lambda + \lambda^2)E_k^2 \\ & - (8C_2^3 + 12C_2C_3 + 4C_4 + 8C_2^2 - 5C_3\lambda + 4C_2\lambda^2 + \lambda^3)E_k^3 \\ & + \big(16C_2^4 + 9C_3^2 - 5C_5 - 20C_2^3 - 20C_2^3\lambda - 7C_4\lambda - 7C_3\lambda^2 + \lambda^4 \\ & \quad + C_2(13\lambda^2 - 36C_3) + C_2(16C_4 + 26C_3\lambda - 5\lambda^3)\big)E_k^4 + O(E_k^5) \Big] \quad (41) \end{aligned}$$

Multiplying (41) by (39) yields

$$\begin{aligned} \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k) = {} & E_k + (\lambda - C_2)E_k^2 + (2C_2^2 - 2C_3 - 2C_2\lambda + \lambda^2)E_k^3 \\ & - (4C_2^3 + 7C_2C_3 + 4C_4 - 3C_4 + 5C_2^2 - 4C_3\lambda - 3C_2\lambda^2 + \lambda^3)E_k^4 + O(E_k^5) \quad (42) \end{aligned}$$

Substituting (42) into (34), the following equation is obtained.

$$\begin{aligned} X_{k+1} = {} & \Phi + (\lambda - C_2)E_k^2 + (2C_2^2 + 2C_3 + 2C_2\lambda - \lambda^2)E_k^3 \\ & - (4C_2^3 + 7C_2C_3 + 4C_4 - 3C_4 + 5C_2^2 - 4C_3\lambda - 3C_2\lambda^2 + \lambda^3)E_k^4 + O(E_k^5) \quad (43) \end{aligned}$$

Equation (43) implies that the sequence of approximations generated by the iterative method (34) converges to the solution $\Phi$ of (1) with convergence order $\rho = 2$.

2.3. Convergence Analysis of the Proposed Algorithm 2

Similar to the proof of Theorem 1, the convergence of Algorithm 2 is established in the proof of Theorem 2.

Theorem 2. Suppose the function $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is continuously differentiable in some neighborhood $D \subseteq \mathbb{R}^m$ of $\Phi$. If $X_0$ is an initial guess in the neighborhood of $\Phi$, then the sequence of approximations $\{X_k\}_{k \geq 0}$, $X_k \in D$, generated by (35) converges to $\Phi$ with convergence order $\rho = 3$.

Proof. The first step of (35) defines $\nu_k$. Setting $X = \nu_k$ in (37) leads to the following equation.

$$G(\nu_k) = G'(\Phi)\left[ (C_2 + \lambda)E_k^2 + (2C_2^2 + 2C_3 + 2C_2\lambda - \lambda^2)E_k^3 + O(E_k^4) \right] \quad (44)$$

Similarly, setting $X = X_k + \theta_i(\nu_k - X_k)$ in (38) gives

$$\begin{aligned} \sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(\nu_k - X_k)\big) = G'(\Phi)\Big[ & I + 2C_2\Big(\sum_{i=1}^{q}\mu_i(1-\theta_i)\Big)E_k + \Big(\sum_{i=1}^{q}\mu_i\big(3C_3(\theta_i - 1)^2 + 2C_2(C_2 - \lambda)\theta_i\big)\Big)E_k^2 \\ & + \Big(\sum_{i=1}^{q}\mu_i\big(4C_4(\theta_i - 1)^3 + 2C_2(2C_2^2 + 2C_3 + 2C_2\lambda - \lambda^2)\theta_i + 6C_3(C_2 - \lambda)(1-\theta_i)\theta_i\big)\Big)E_k^3 \\ & + \Big(\sum_{i=1}^{q}\mu_i\big(2C_2(4C_2^3 - 7C_2C_3 + 3C_4 - 5C_2^2\lambda + 4C_3\lambda + 3C_2\lambda^2 - \lambda^3)\theta_i \\ & \quad + 12C_4(C_2 - \lambda)(\theta_i - 1)^2\theta_i + 3C_3\big(2(2C_2^2 + 2C_3 + 2C_2\lambda - \lambda^2)(1-\theta_i)\theta_i + (C_2 - \lambda)^2\big)\theta_i\big)\Big)E_k^4 + O(E_k^5) \Big] \quad (45) \end{aligned}$$

Using (39) and (45),

$$\begin{aligned} \sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(\nu_k - X_k)\big) - G(X_k)\lambda = G'(\Phi)\Big[ & I + \Big(\sum_{i=1}^{q}\mu_i\big(\lambda - 2C_2(\theta_i - 1)\big)\Big)E_k \\ & + \Big(C_2\lambda + \sum_{i=1}^{q}\mu_i\big(3C_3(\theta_i - 1)^2 + 2C_2(C_2 - \lambda)\theta_i\big)\Big)E_k^2 \\ & + \Big(C_3\lambda + \sum_{i=1}^{q}\mu_i\big(4C_4(\theta_i - 1)^3 - 2C_2(2C_2^2 - 2C_3 - 2C_2\lambda + \lambda^2)\theta_i - 6C_3(C_2 - \lambda)(1-\theta_i)\theta_i\big)\Big)E_k^3 \\ & + \Big(C_4\lambda + \sum_{i=1}^{q}\mu_i\big(2C_2(4C_2^3 - 7C_2C_3 + 3C_4 - 5C_2^2\lambda + 4C_3\lambda + 3C_2\lambda^2 - \lambda^3) \\ & \quad + 12C_4(C_2 - \lambda)(\theta_i - 1)^2 + 3C_3(2C_2^2 - 2C_3 - 2C_2\lambda + \lambda^2)(1-\theta_i) + (C_2 - \lambda)^2\theta_i\big)\Big)E_k^4 + O(E_k^5) \Big] \quad (46) \end{aligned}$$

From (46) and (44),

$$\begin{aligned} \Big[\sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(\nu_k - X_k)\big) - G(X_k)\lambda\Big]^{-1} G(\nu_k) = {} & (C_2 - \lambda)E_k^2 + \Big((2C_2^2 - 2C_3 - 2C_2\lambda + \lambda^2) + (C_2 - \lambda)\Big(\lambda + \sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big)\Big)E_k^3 \\ & + \Big((5C_2^3 - 7C_2C_3 + 3C_4 - 7C_2^2\lambda + 4C_3\lambda + 4C_2\lambda^2 - \lambda^3) \\ & \quad + (2C_2^2\lambda + 2C_3 + 2C_2\lambda - \lambda^2)\Big(\lambda + \sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big) + (C_2 - \lambda)\Big(\lambda + 2\sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big)^2 \\ & \quad - \Big(\lambda C_3 + 3\sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)^2 + 2C_2(C_2 - \lambda)\theta_i\Big)\Big)E_k^4 + O(E_k^5) \quad (47) \end{aligned}$$

Using (47) in the second step of (35), with the expansion of ν k as in (43), the following equation is obtained.

$$X_{k+1} = \Phi + (C_2 - \lambda)\Big(\lambda + 2\sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big)E_k^3 + O(E_k^4) \quad (48)$$

Equation (48) implies that the sequence of approximations generated by the iterative method (35) converges to Φ with convergence order ρ = 3 .

2.4. Convergence Analysis of the Proposed Algorithm 3

The convergence of the proposed Algorithm 3 is established by the proof of Theorem 3.

Theorem 3. Suppose the function $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is continuously differentiable in some neighborhood $D \subseteq \mathbb{R}^m$ of its solution $\Phi$. If $X_0$ is an initial guess in the neighborhood of $\Phi$, then the sequence of approximations $\{X_k\}_{k \geq 0}$, $X_k \in D$, generated by (36) converges to $\Phi$ with convergence order $\rho = 4$.

Proof. Set $X = W_k$ and $X = X_k + \theta_i(W_k - X_k)$ in (37) and (38), respectively, where $W_k$ is given by the second step of (36); then

$$\begin{aligned} G(W_k) = G'(\Phi)\Big[ & (C_2 - \lambda)\Big(\lambda + \sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big)E_k^3 \\ & + \Big((C_2^3 + 2C_2^2 - C_2\lambda^2 + \lambda^3) + (2C_2^2\lambda + 2C_3 + 2C_2\lambda - \lambda^2)\Big(\lambda + \sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big) \\ & \quad + (C_2 - \lambda)\Big(\lambda + 2\sum_{i=1}^{q}\mu_i C_2(\theta_i - 1)\Big)^2 - \Big(\lambda C_2 + 3\sum_{i=1}^{q}\mu_i C_3(\theta_i - 1)^2 + 2C_2(C_2 - \lambda)\theta_i\Big)\Big)E_k^4 + O(E_k^5) \Big] \quad (49) \end{aligned}$$

and

$$\begin{aligned} \sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(W_k - X_k)\big) = G'(\Phi)\Big[ & I + C_2 E_k + \Big(3C_3\sum_{i=1}^{q}\mu_i\theta_i^2\Big)E_k^2 \\ & + \Big(4C_4\sum_{i=1}^{q}\mu_i(1-\theta_i)^3 - 2C_2(C_2 - \lambda)\Big(\lambda + 2C_2\Big(\sum_{i=1}^{q}\mu_i\theta_i^2 - \tfrac{1}{2}\Big)\Big)\Big)E_k^3 + O(E_k^4) \Big] \quad (50) \end{aligned}$$

From (50)

$$\begin{aligned} \Big[\sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(W_k - X_k)\big) - G(X_k)\lambda\Big]^{-1} = (G'(\Phi))^{-1}\Big[ & I - (\lambda - C_2)E_k + \Big(\lambda^2 + 4C_2^2\Big(\sum_{i=1}^{q}\mu_i\theta_i^2 - \tfrac{1}{2}\Big) - 3C_3\Big(\sum_{i=1}^{q}\mu_i\theta_i^2 - \tfrac{1}{2}\Big) - C_2\lambda\Big)E_k^2 \\ & + \Big(4C_4\sum_{i=1}^{q}\mu_i(\theta_i - 1)^3 + 2C_2^2\Big(\tfrac{1}{2} + 4\sum_{i=1}^{q}\mu_i\theta_i^2\Big) \\ & \quad + 2C_3^2\Big(\tfrac{1}{2} - 5\sum_{i=1}^{q}\mu_i\theta_i^2 + 2\sum_{i=1}^{q}\mu_i\theta_i^3\Big) + \lambda\Big(\lambda^2 + C_3\Big(1 - 6\sum_{i=1}^{q}\mu_i\theta_i^2\Big)\Big)\Big)E_k^3 + O(E_k^4) \Big] \quad (51) \end{aligned}$$

By multiplying (51) by (49), the following equation is obtained.

$$\begin{aligned} \Big[\sum_{i=1}^{q} \mu_i G'\big(X_k + \theta_i(W_k - X_k)\big) - G(X_k)\lambda\Big]^{-1} G(W_k) = {} & (C_2 - \lambda)^2 E_k^3 + \Big(C_2^3\Big(2 - 8\sum_{i=1}^{q}\mu_i\theta_i^2\Big) + C_2^2\lambda\Big(7 + 8\sum_{i=1}^{q}\mu_i\theta_i^2\Big) \\ & + \lambda\Big(3\lambda^2 + C_2\Big(2 - 3\sum_{i=1}^{q}\mu_i\theta_i^2\Big)\Big) + C_2\Big(8\lambda^2 - C_3\Big(2 + 3\sum_{i=1}^{q}\mu_i\theta_i^2\Big)\Big)\Big)E_k^4 + O(E_k^5) \quad (52) \end{aligned}$$

Using (48) and (52) in the third step of (36) yields

$$X_{k+1} = \Phi + (C_2 - \lambda)^3 E_k^4 + O(E_k^5) \quad (53)$$

Equation (53) implies that the sequence of approximations generated by the iterative method (36) converges to the solution $\Phi$ of (1) with convergence order $\rho = 4$.

2.5. Particular Forms of the Proposed Iterative Methods

Here, some particular forms of the iterative methods in Algorithm 2 and Algorithm 3 are developed by assigning values to the parameters $\mu_i$ and $\theta_i$, $i = 1, 2, \ldots, q$, satisfying the conditions given in (5).

2.6. Particular Forms of Algorithm 2

Setting $q = 1$, $\mu_1 = 1$, $\theta_1 = \frac{1}{2}$ in Algorithm 2 gives rise to the following iterative method for approximating $\Phi$ of (1).

Algorithm 4 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - \left[ G'\!\left(\frac{X_k + \nu_k}{2}\right) - G(X_k)\lambda \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (54)$$

Algorithm 4 is an iterative method for approximating the solution Φ of (1) with convergence order ρ = 3 and error equation satisfying

$$E_{k+1} = (C_2 - \lambda)^2 E_k^3 + O(E_k^4) \quad (55)$$

Setting $q = 2$, $\mu_1 = \frac{1}{4}$, $\mu_2 = \frac{3}{4}$, $\theta_1 = 0$, $\theta_2 = \frac{2}{3}$ in Algorithm 2, the following new iterative method for approximating $\Phi$ of (1) is obtained.

Algorithm 5 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - 4\left[ G'(X_k) + 3G'\!\left(\frac{X_k + 2\nu_k}{3}\right) - 4G(X_k)\lambda \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (56)$$

Algorithm 5 is an iterative method of convergence order $\rho = 3$ for approximating the solution $\Phi$ of (1), with error equation satisfying

$$E_{k+1} = (C_2 - \lambda)^2 E_k^3 + O(E_k^4) \quad (57)$$

Setting $q = 3$, $\mu_1 = \frac{1}{6}$, $\mu_2 = \frac{2}{3}$, $\mu_3 = \frac{1}{6}$, $\theta_1 = 0$, $\theta_2 = \frac{1}{2}$ and $\theta_3 = 1$ in Algorithm 2, it reduces to the following new iterative method.

Algorithm 6 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - 6\left[ G'(X_k) + 4G'\!\left(\frac{X_k + \nu_k}{2}\right) + G'(\nu_k) - 6G(X_k)\lambda \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (58)$$

Algorithm 6 is an iterative method of convergence order $\rho = 3$ for approximating the solution $\Phi$ of (1), with error equation satisfying

$$E_{k+1} = (C_2 - \lambda)^2 E_k^3 + O(E_k^4) \quad (59)$$

Setting $q = 3$, $\mu_1 = \frac{1}{4}$, $\mu_2 = \frac{1}{2}$, $\mu_3 = \frac{1}{4}$, $\theta_1 = 0$, $\theta_2 = \frac{1}{2}$ and $\theta_3 = 1$ in Algorithm 2, it reduces to the following new iterative method.

Algorithm 7 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ X_{k+1} &= \nu_k - 4\left[ G'(X_k) + 2G'\!\left(\frac{X_k + \nu_k}{2}\right) + G'(\nu_k) - 4G(X_k)\lambda \right]^{-1} G(\nu_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (60)$$

Algorithm 7 is an iterative method of convergence order $\rho = 3$ for approximating $\Phi$ of (1), with error equation satisfying

$$E_{k+1} = (C_2 - \lambda)^2 E_k^3 + O(E_k^4) \quad (61)$$

2.7. Particular Forms of Algorithm 3

Consider now some particular forms of Algorithm 3. Setting $q = 1$, $\mu_1 = 1$, $\theta_1 = \frac{1}{2}$ in Algorithm 3 leads to the following iterative method.

Algorithm 8 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ W_k &= \nu_k - \left[ G'\!\left(\frac{X_k + \nu_k}{2}\right) - G(X_k)\lambda \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - \left[ G'\!\left(\frac{X_k + W_k}{2}\right) - G(X_k)\lambda \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (62)$$

Algorithm 8 is an iterative method for approximating the solution Φ of (1) with convergence order ρ = 4 and error equation satisfying

$$E_{k+1} = (\lambda - C_2)^3 E_k^4 + O(E_k^5) \quad (63)$$
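A compact, self-contained sketch of one iteration of Algorithm 8, as reconstructed in (62) and under the sign convention of (34)-(36) above, is shown below; the function names are illustrative only.

```python
import numpy as np

def algorithm8_step(G, Jac, x, lam):
    """One iteration of Algorithm 8 (62), the midpoint member of the family (36)."""
    g = np.asarray(G(x), dtype=float)
    A0 = np.asarray(Jac(x), dtype=float) - np.outer(g, lam)
    nu = x - np.linalg.solve(A0, g)
    A1 = np.asarray(Jac((x + nu) / 2.0), dtype=float) - np.outer(g, lam)
    w = nu - np.linalg.solve(A1, np.asarray(G(nu), dtype=float))
    A2 = np.asarray(Jac((x + w) / 2.0), dtype=float) - np.outer(g, lam)
    return w - np.linalg.solve(A2, np.asarray(G(w), dtype=float))
```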

Setting $q = 2$, $\mu_1 = \frac{1}{4}$, $\mu_2 = \frac{3}{4}$, $\theta_1 = 0$, $\theta_2 = \frac{2}{3}$ in Algorithm 3 leads to the following iterative method.

Algorithm 9 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ W_k &= \nu_k - 4\left[ G'(X_k) + 3G'\!\left(\frac{X_k + 2\nu_k}{3}\right) - 4G(X_k)\lambda \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - 4\left[ G'(X_k) + 3G'\!\left(\frac{X_k + 2W_k}{3}\right) - 4G(X_k)\lambda \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (64)$$

Algorithm 9 is an iterative method for approximating the solution Φ of (1) with convergence order ρ = 4 . The error equation of Algorithm 9 is

$$E_{k+1} = (\lambda - C_2)^2 E_k^4 + O(E_k^5) \quad (65)$$

For $q = 3$, $\mu_1 = \frac{1}{6}$, $\mu_2 = \frac{2}{3}$, $\mu_3 = \frac{1}{6}$, $\theta_1 = 0$, $\theta_2 = \frac{1}{2}$ and $\theta_3 = 1$ in Algorithm 3, the following iterative method is proposed:

Algorithm 10 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ W_k &= \nu_k - 6\left[ G'(X_k) + 4G'\!\left(\frac{X_k + \nu_k}{2}\right) + G'(\nu_k) - 6G(X_k)\lambda \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - 6\left[ G'(X_k) + 4G'\!\left(\frac{X_k + W_k}{2}\right) + G'(W_k) - 6G(X_k)\lambda \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (66)$$

Algorithm 10 is of convergence order $\rho = 4$ for approximating the solution $\Phi$ of (1). Its error equation is

$$E_{k+1} = (\lambda - C_2)^2 E_k^4 + O(E_k^5) \quad (67)$$

Setting $q = 3$, $\mu_1 = \frac{1}{4}$, $\mu_2 = \frac{1}{2}$, $\mu_3 = \frac{1}{4}$, $\theta_1 = 0$, $\theta_2 = \frac{1}{2}$ and $\theta_3 = 1$ in Algorithm 3, it reduces to the following new iterative method.

Algorithm 11 Assume X 0 is an initial guess, approximate the solution Φ of (1) using the iterative method:

$$\begin{aligned} \nu_k &= X_k - \left[ G'(X_k) - G(X_k)\lambda \right]^{-1} G(X_k), \\ W_k &= \nu_k - 4\left[ G'(X_k) + 2G'\!\left(\frac{X_k + \nu_k}{2}\right) + G'(\nu_k) - 4G(X_k)\lambda \right]^{-1} G(\nu_k), \\ X_{k+1} &= W_k - 4\left[ G'(X_k) + 2G'\!\left(\frac{X_k + W_k}{2}\right) + G'(W_k) - 4G(X_k)\lambda \right]^{-1} G(W_k), \quad k = 0, 1, 2, \ldots \end{aligned} \quad (68)$$

Algorithm 11 is of convergence order $\rho = 4$ for approximating the solution $\Phi$ of (1). Its error equation is

$$E_{k+1} = (C_2 - \lambda)^2 E_k^4 + O(E_k^5) \quad (69)$$

3. Efficiency Index

In this section, the efficiency index (EI) of the proposed iterative methods is established. Let $A_v^\rho$ represent iterative method $v$ with convergence order $\rho$. For reference purposes, the proposed iterative methods are denoted as indicated in Table 1. The formula $EI = \rho^{1/T}$, where $T$ is the total number of scalar functional evaluations per iteration, is adopted to obtain the efficiency index (EI) of the iterative methods [37]. Assume that the costs of evaluating the components of the function $G(\cdot)$ are equal; then for any method the computation of $G(\cdot)$ needs $m$ evaluations of the scalar functions $G_i$, $i = 1, 2, \ldots, m$. Similarly, if the costs of evaluating the entries of the Jacobian $G'(\cdot)$ are equal, then the computation of $G'(\cdot)$ requires $m^2$ evaluations of scalar functions. The method $A_1^2$ requires $m$ evaluations of $G(\cdot)$ and $m^2$ evaluations of $G'(\cdot)$ per iteration, and its efficiency index is therefore $2^{1/(m + m^2)}$, for $m \geq 2$. This is the same as the efficiency index (EI) of the classical Newton method ($N_1^2$) and of the Wu method of convergence order $\rho = 2$ developed in [30]. The performance with respect to the efficiency index (EI) of the proposed iterative methods, compared with the Wu method in [30] denoted by $W_1^2$, is presented in Table 2 for $m = 10$ and $20$, where $m$ is the dimension of (1).
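For instance, the efficiency index of $A_1^2$ (and of the classical Newton method) quoted above can be evaluated directly from $EI = \rho^{1/T}$ with $T = m + m^2$; the short computation below reproduces these values for the dimensions considered in Table 2.

```python
# Efficiency index EI = rho**(1/T) for a second-order method that uses
# m residual evaluations and m^2 Jacobian entries per iteration (T = m + m^2).
for m in (10, 20):
    T = m + m ** 2
    EI = 2.0 ** (1.0 / T)
    print(m, round(EI, 6))   # e.g. m = 10 -> about 1.006321
```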

The Wu method is given as:

$$W_1^2: \quad X_{k+1} = X_k - \left[ G'(X_k) - \mathrm{diag}\big(\sigma_i G_i(X_k)\big) \right]^{-1} G(X_k) \quad (70)$$

where the parameters $\sigma_i \in [-1, 1]$, $i = 1, 2, \ldots, m$.

Table 1. Algorithms and their denotation.

Table 2. Efficiency Index for proposed methods and compared method.

From Table 2, observe that for $m \geq 2$ the EI decreases monotonically as the number of steps of the method and the number of nodes ($q$) of the quadrature formula increase.

4. Numerical Experimentation

The developed iterative methods are tested on three standard problems from the literature, in order to illustrate their performance and confirm the theoretical convergence order ($\rho$). The computational performance of the developed iterative methods is compared with that of the Wu method in [30] and the Haijun method proposed in [31]. The Haijun method is given as

$$H_2^3: \quad X_{k+1} = X_k - \left[ G'(X_k) - \mathrm{diag}\big(\sigma_i G_i(X_k)\big) \right]^{-1} \big( G(X_k) + G(\eta_k) \big) \quad (71)$$

where the parameters $\sigma_i \in [-1, 1]$, $i = 1, 2, \ldots, m$, and $\eta_k$ is approximated using $W_1^2$.

For the implementation, an Intel Celeron(R) 1.6 GHz CPU with 2 GB of RAM is used to execute PYTHON 2.7.12 programs. The stopping criterion used for the computer programs is $\|G(X_{k+1})\| < \epsilon$, where $\epsilon$ is the error tolerance. The metrics used in the comparison are:

number of iterations (IT), central processing unit time or execution time (CPU-Time), the norm of the function at the last iteration ($\|G(X_{k+1})\|$), and the computational order of convergence ($\rho_{coc}$), given in [38] as

$$\rho_{coc} = \frac{\ln\big(\|G(X_{k+1})\| / \|G(X_k)\|\big)}{\ln\big(\|G(X_k)\| / \|G(X_{k-1})\|\big)} \quad (72)$$
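A small helper for computing $\rho_{coc}$ from three successive residual norms, assuming the ratio form of the formula in [38] as written in (72), is sketched below.

```python
import math

def computational_order(norm_prev, norm_curr, norm_next):
    """rho_coc from (72), using ||G(X_{k-1})||, ||G(X_k)|| and ||G(X_{k+1})||."""
    return math.log(norm_next / norm_curr) / math.log(norm_curr / norm_prev)

# Example: residual norms decaying roughly quadratically give rho_coc close to 2.
print(computational_order(1e-2, 1e-4, 1e-8))   # -> 2.0
```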

To test the performance of the proposed methods, the following problems are solved.

Problem 1 [39]

Consider the NLSE

G ( X ) = 0

where

$$G(X_1, X_2) = \begin{bmatrix} X_1^3 + X_1 X_2 \\ X_2 + X_2^2 \end{bmatrix}$$

The solutions of Problem 1 in the domain $D = (-1.5, 1.5) \times (-1.5, 1.5)$ are $\Phi^{(1)} = (0, 0)^T$ and $\Phi^{(2)} = (1, -1)^T$. The initial approximation used is $X_0 = (0.5, 0.5)^T$. The numerical results obtained for each method using different values of the parameters $\lambda_i$ and $\sigma_i$ are presented in Tables 3-7. All computations are carried out with 200-digit precision and $\epsilon = 10^{-15}$.
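A sketch of Problem 1 in code, using the reconstruction of $G$ given above and the one-step method $A_1^2$ with $\lambda_i = 1/2$ (the value used in Table 3), is shown below; standard double precision is used here rather than the 200-digit arithmetic of the reported experiments.

```python
import numpy as np

def G(x):
    x1, x2 = x
    return np.array([x1**3 + x1 * x2, x2 + x2**2])

def Jac(x):
    x1, x2 = x
    return np.array([[3 * x1**2 + x2, x1],
                     [0.0, 1.0 + 2.0 * x2]])

x = np.array([0.5, 0.5])
lam = np.array([0.5, 0.5])
for k in range(50):
    g = G(x)
    if np.linalg.norm(g) < 1e-15:
        break
    x = x - np.linalg.solve(Jac(x) - np.outer(g, lam), g)
print(k, x)   # final iterate of the perturbed Newton method A_1^2
```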

Problem 2 [31]

$$X_1^2 - X_2 + 1 = 0,$$

$$X_1 - \cos\!\left(\frac{\pi X_2}{2}\right) = 0.$$

Table 3. Computational results for Problem 1 using λ i = σ i = 1 / 2 .

Table 4. Computational results for Problem 1 using λ i = σ i = 1 / 3 .

Table 5. Computational results for Problem 1 using λ i = σ i = 1 / 5 .

Table 6. Computational results for Problem 1 using λ i = σ i = 1 / 7 .

Table 7. Computational results for Problem 1 using λ i = σ i = 1 / 9 .

The solutions of Problem 2 within the domain $D = (-1, 0) \times (0, 2)$ are $\Phi^{(1)} = \left(-\frac{\sqrt{2}}{2}, 1.5\right)^T$, $\Phi^{(2)} = (-1, 2)^T$ and $\Phi^{(3)} = (0, 1)^T$. The numerical solutions to Problem 2 are presented in Table 8 for methods of orders $\rho = 2, 3$ and $4$.
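The residual and Jacobian for Problem 2, under the reconstruction of the two equations given above, can be coded as follows (illustrative only); evaluating the residual at the three quoted solutions gives a quick sanity check.

```python
import numpy as np

def G(x):
    x1, x2 = x
    return np.array([x1**2 - x2 + 1.0,
                     x1 - np.cos(np.pi * x2 / 2.0)])

def Jac(x):
    x1, x2 = x
    return np.array([[2.0 * x1, -1.0],
                     [1.0, (np.pi / 2.0) * np.sin(np.pi * x2 / 2.0)]])

# Sanity check: the three solutions quoted above should give (near-)zero residuals.
for root in ([-np.sqrt(2) / 2.0, 1.5], [-1.0, 2.0], [0.0, 1.0]):
    print(root, np.linalg.norm(G(np.array(root))))
```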

Problem 3 [40]

Consider the chemical equilibrium system modeled as the NLSE (1) with

$$X_1 X_2 + X_1 - 3X_5 = 0$$

$$2X_1X_2 + X_1 + X_2X_3^2 + R_8X_2 - RX_5 + R_{10}X_2^2 + R_7X_1X_3 + R_9X_2X_4 = 0$$

$$2X_2X_3^2 + 2R_5X_3^2 - 8X_5 + R_6X_3 + R_7X_2X_3 = 0$$

$$R_9X_2X_4 + 2X_4^2 - 4RX_5 = 0$$

$$X_1(X_2 + 1) + R_{10}X_2^2 + X_2X_3^2 + R_8X_2 + R_5X_3^2 + X_4^2 - 1 + R_6X_3 + R_7X_2X_3 + R_9X_2X_4 = 0$$

where

$$R = 10, \quad R_5 = 0.193, \quad R_6 = \frac{0.002597}{\sqrt{40}}, \quad R_7 = \frac{0.003448}{\sqrt{40}}, \quad R_8 = \frac{0.00001799}{40}, \quad R_9 = \frac{0.0002155}{\sqrt{40}}, \quad R_{10} = \frac{0.00003846}{40}$$

Using $X_0 = (0.6, 33.2, 0.6, 1.5, 0.7)^T$ as the initial starting point, 200-digit floating point arithmetic and $\|G(X_k)\| \leq 10^{-50}$ as the stopping criterion, the solution $\Phi$ in $D = (-1, 1) \times (33.5, 35.5) \times (-1, 1) \times (0.8, 1.8) \times (-1, 1)$, approximated to 20 decimal places, is

$$\Phi = \begin{bmatrix} 0.00311410226598496012 \\ 34.59792453029012391022 \\ 0.06504177869743799154 \\ 0.85937805057794058144 \\ 0.03695185914804602454 \end{bmatrix}$$

The computational results obtained for different methods are presented in Table 9.
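For completeness, a sketch of the residual function for Problem 3 under the reconstruction above (including the assumed $\sqrt{40}$ denominators in $R_6$, $R_7$ and $R_9$) is given below; evaluating it at the quoted solution provides a rough consistency check, and the Jacobian can be formed analytically or by finite differences.

```python
import numpy as np

R = 10.0
R5 = 0.193
R6 = 0.002597 / np.sqrt(40.0)
R7 = 0.003448 / np.sqrt(40.0)
R8 = 0.00001799 / 40.0
R9 = 0.0002155 / np.sqrt(40.0)
R10 = 0.00003846 / 40.0

def G(x):
    x1, x2, x3, x4, x5 = x
    return np.array([
        x1 * x2 + x1 - 3.0 * x5,
        2.0 * x1 * x2 + x1 + x2 * x3**2 + R8 * x2 - R * x5
        + R10 * x2**2 + R7 * x1 * x3 + R9 * x2 * x4,
        2.0 * x2 * x3**2 + 2.0 * R5 * x3**2 - 8.0 * x5 + R6 * x3 + R7 * x2 * x3,
        R9 * x2 * x4 + 2.0 * x4**2 - 4.0 * R * x5,
        x1 * (x2 + 1.0) + R10 * x2**2 + x2 * x3**2 + R8 * x2 + R5 * x3**2
        + x4**2 - 1.0 + R6 * x3 + R7 * x2 * x3 + R9 * x2 * x4,
    ])

phi = np.array([0.00311410226598496012, 34.59792453029012391022,
                0.06504177869743799154, 0.85937805057794058144,
                0.03695185914804602454])
print(np.linalg.norm(G(phi)))   # a large value would indicate a transcription issue
```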

Table 8. Computational results for Problem 2 using λ i = σ i = 1 / 8 .

Table 9. Computational results for Problem 3.

Results Discussion

The numerical results presented in Tables 3-9 lead to the following observations about the effectiveness of the proposed methods in approximating the solution of (1).

・ The numerical results obtained in Tables 3-9 clearly indicate that the proposed methods are effective in approximating the solution of (1).

・ Most of the computational orders of convergence $\rho_{coc}$ of the proposed methods agree with the theoretical values.

・ It is observed that the proposed convergence order $\rho = 2$ method ($A_1^2$) produces better precision compared with the Wu method ($W_1^2$) for small systems. This is expected since $G(X_k)\lambda$ is a dense matrix, so more computational cost is incurred as the system becomes large.

・ Observe from Tables 3-8 that the Haijun method ($H_2^3$) failed on Problems 1 and 2, while the proposed methods converged to solutions in a small number of iterations.

・ Regarding the choice of $\lambda$, the magnitude of its elements should be less than 1 to obtain better precision and convergence.

5. Conclusion

In this paper, multistep quadrature based methods for approximating the solution of NLSE are proposed. The proposed methods require only the first-order Fréchet derivative to attain convergence order up to $\rho = 4$ and effectively approximate the solution of NLSE with singular Jacobian. The proposed methods are applied to three standard problems from the literature so as to demonstrate their effectiveness. Judging from the computational results obtained and presented in the tables, the proposed methods are competitive compared with some existing methods.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Moré, J.J. (1990) A Collection of Nonlinear Model Problems. In: Allgower, E.L. and George, K., Eds., Computational Solution of Nonlinear Systems of Equations. Lectures in Applied Mathematics, Vol. 26, American Mathematical Society, Providence, RI, 723-762.
[2] Grosan, G. and Abraham, A. (2008) A New Approach for Solving Nonlinear Equation Systems. IEEE Transactions on Systems, Man, and Cybernetics. Part A: Systems and Humans, 38, 698-714.
https://doi.org/10.1109/TSMCA.2008.918599
[3] Awawdeh, F. (2010) On New Iterative Method for Solving Systems of Nonlinear Equations. Numerical Algorithms, 54, 395-409.
https://doi.org/10.1007/s11075-009-9342-8
[4] Tsoulos, I.G. and Staurakoudis, A. (2010) On Locating All Roots of Systems of Nonlinear Equations inside Bounded Domain Using Global Optimization Methods. Nonlinear Analysis: Real World Applications, 11, 2465-2471.
https://doi.org/10.1016/j.nonrwa.2009.08.003
[5] Lin, Y., Bao, L. and Jia, X. (2010) Convergence Analysis of a Variant of the Newton Method for Solving Nonlinear Equations. Computers & Mathematics with Applications, 59, 2121-2127.
https://doi.org/10.1016/j.camwa.2009.12.017
[6] Ortega, J.M. and Rheinboldt, W.C. (1970) Iterative Solution of Nonlinear Equation in Several Variables. Academic Press, Cambridge, MA.
[7] Babajee, D.K.R., Kalyanasundaram, M. and Jayakumar, J. (2015) On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations. Algorithms, 8, 895-909.
https://doi.org/10.3390/a8040895
[8] Ahmadabadi, M.N., Ahmad, F., Yuan, G. and Li, X. (2016) Solving Systems of Nonlinear Equations Using Decomposition Techniques. Journal of Linear and Topological Algebra, 5, 187-198.
[9] Montazeri, H., Soleymani, F., Shateyi, S. and Motsa, S.S. (2012) On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equation. Journal of Applied Mathematics, 2012, Article ID: 751975.
https://doi.org/10.1155/2012/751975
[10] Xiao, X. and Yin, H. (2015) A New Class of Methods with Higher Order of Convergence for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 264, 300-309.
https://doi.org/10.1016/j.amc.2015.04.094
[11] Noor, M.A., Waseem, M. and Noor, K.I. (2015) New Iterative Technique for Solving a Nonlinear Equations. Applied Mathematics and Computation, 265, 1115-1125.
https://doi.org/10.1016/j.amc.2015.05.129
[12] Noor, M.A., Waseem, M. and Noor, K.I. (2015) New Iterative Technique for Solving a System of Nonlinear Equations. Applied Mathematics and Computation, 271, 446-466.
https://doi.org/10.1016/j.amc.2015.08.125
[13] Chun, C. (2005) Iterative Methods Improving Newton’s Method by the Decomposition Method. Computers & Mathematics with Applications, 50, 1559-1568.
https://doi.org/10.1016/j.camwa.2005.08.022
[14] Park, C.H. and Shim, H.T. (2005) What Is the Homotopy Method for a System of Nonlinear Equations (Survey)? Journal of Applied Mathematics and Computing, 17, 689-700.
[15] Golbabai, A. and Javidi, M. (2007) A New Family of Iterative Methods for Solving System of Nonlinear Algebraic Equations. Applied Mathematics and Computation, 190, 1717-1722.
https://doi.org/10.1016/j.amc.2007.02.055
[16] Golbabai, A. and Javidi, M. (2007) Newton-Like Iterative Methods for Solving System of Nonlinear Equations. Applied Mathematics and Computation, 192, 546-551.
https://doi.org/10.1016/j.amc.2007.03.035
[17] Jafari, H. and Daftardar-Gejji, V. (2006) Revised Adomian Decomposition Method for Solving System of Nonlinear Equations. Applied Mathematics and Computation, 175, 1-7.
https://doi.org/10.1016/j.amc.2005.07.010
[18] Noor, M.A., Noor, K.I. and Waseem, M. (2013) Decomposition Method for Solving System of Nonlinear Equations. Engineering Mathematics Letters, 2, 34-41.
[19] Cordero, A. and Torregrosa, J.R. (2007) Variants of Newton Method Using Fifth-Order Quadrature Formulas. Applied Mathematics and Computation, 190, 686-698.
https://doi.org/10.1016/j.amc.2007.01.062
[20] Cordero, A., Hueso, J.L., Martinez, E. and Terregrosa, J.R. (2009) Iterative Methods of Order Four and Five for Systems of Nonlinear Equations. Journal of Computational and Applied Mathematics, 231, 541-551.
https://doi.org/10.1016/j.cam.2009.04.015
[21] Liu, Z. (2015) A New Cubic Convergence Method for Solving Systems of Nonlinear Equations. International Journal of Applied Science and Mathematics, 2, 2394-2894.
[22] Liu, Z. and Fang, Q. (2015) A New Newton-Type Method with Third-Order for Solving Systems of Nonlinear Equations. Journal of Applied Mathematics and Physics, 3, 1256-1261.
https://doi.org/10.4236/jamp.2015.310154
[23] Noor, M.A. (2007) New Family of Iterative Methods for Nonlinear Equations. Applied Mathematics and Computation, 190, 553-558.
https://doi.org/10.1016/j.amc.2007.01.045
[24] Biazar, J. and Ghanbari, B. (2008) A New Technique for Solving Systems of Nonlinear Equations. Applied Mathematical Sciences, 2, 2699-2703.
[25] Podisuk, M., Chundong, U. and Sanprasert, W. (2007) Single-Step Formulas and Multi-Step Formulas of the Integration Method for Solving the IVP of Ordinary Differential Equation. Applied Mathematics and Computation, 190, 1438-1444.
https://doi.org/10.1016/j.amc.2007.02.024
[26] Weerakoon, S. and Fernando, T.G.I. (2000) A Variant of Newton’s Method with Accelerated Third Order Convergence. Applied Mathematics Letters, 13, 87-93.
https://doi.org/10.1016/S0893-9659(00)00100-2
[27] Frontini, M. and Sormani, E. (2004) Third-Order Methods from Quadrature Formulae for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 149, 771-782.
https://doi.org/10.1016/S0096-3003(03)00178-4
[28] Khirallah, M.Q. and Hafiz, M.A. (2012) Novel Three Order Methods for Solving a System of Nonlinear Equations. Bulletin of Mathematical Sciences and Applications, 2, 1-12.
https://doi.org/10.18052/www.scipress.com/BMSA.2.1
[29] Hafiz, M.A. and Bahgat, M.S.M. (2012) An Efficient Two-Step Iterative Method for Solving System of Nonlinear Equation. Journal of Mathematical Research, 4, 28-34.
[30] Wu, X. (2007) Note on the Improvement of Newton Method for System of Nonlinear Equations. Applied Mathematics and Computation, 189, 1476-1479.
https://doi.org/10.1016/j.amc.2006.12.035
[31] Haijun, W. (2009) New Third-Order Method for Solving Systems of Nonlinear Equations. Numerical Algorithms, 50, 271-282.
https://doi.org/10.1007/s11075-008-9227-2
[32] Singh, S. (2013) A System of Nonlinear Equations with Singular Jacobian. International Journal of Innovative Research in Science, Engineering and Technology, 2, 2650-2653.
[33] Ahmad, F., Ullah, M.Z., Ahmad, S., Alshomrani, A.S., Alqahtani, M.A. and Alzaben, L. (2017) Multi-Step Preconditioned Newton Methods for Solving Systems of Nonlinear Equations. SeMA Journal, 75, 127-137.
https://doi.org/10.1007/s40324-017-0120-6
[34] Argyros, I.K. (2017) Ball Convergence for a Family of Quadrature-Based Methods for Solving Equations in Banach Space. International Journal of Computational Methods, 14, Article ID: 1750017.
https://doi.org/10.1142/S0219876217500177
[35] Hueso, J.L., Martínez, E. and Torregrossa, J.R. (2009) Modified Newton’s Method for Systems of Nonlinear Equations with Singular Jacobian. Journal of Computational and Applied Mathematics, 224, 77-83.
https://doi.org/10.1016/j.cam.2008.04.013
[36] Sharma, J.R., Sharma, R. and Bahl, A. (2016) An Improved Newton-Traub Composition for Solving Systems of Nonlinear Equations. Applied Mathematics and Computation, 290, 98-100.
https://doi.org/10.1016/j.amc.2016.05.051
[37] Ostrowski, A.M. (1966) Solution of Equations and Systems of Equations. Academic Press, New York.
[38] Grau-Sanchez, M., Grau, A. and Noguera, M. (2012) On the Computational Efficiency Index and Some Iterative Methods for Solving Systems of Nonlinear Equations. Journal of Computational and Applied Mathematics, 236, 1259-1266.
https://doi.org/10.1016/j.cam.2011.08.008
[39] Decker, D. and Keller, C. (1980) Newton’s Method at Singular Points. SIAM Journal on Numerical Analysis, 17, 465-471.
https://doi.org/10.1137/0717039
[40] Meintjes, K. and Morgan, A.P. (1990) Chemical Equilibrium Systems as Numerical Test Problems. ACM Transactions on Mathematical Software, 16, 143-151.
https://doi.org/10.1145/78928.78930
