Multistep Quadrature Based Methods for Nonlinear System of Equations with Singular Jacobian
1. Introduction
Systems of equations used to describe real-life phenomena are often nonlinear in nature. Examples of mathematical models formulated as nonlinear systems of equations (NLSE) include models describing kinematics, combustion, chemical equilibrium, economic and neurophysiology problems [1] [2]; the reactor steering problem [3] [4]; and the transportation problem [5]. Indeed, many real-life problems are best described using NLSE.
Consider the NLSE,
(1)
where
, 0 is a null vector of dimension m,
is a function defined by
are the coordinate functions of G, and D is an open domain in
.
The Newton method in m dimensions is a popular iterative method for approximating the solution of the NLSE (1). The sequence of approximations
generated by the Newton method converges to the solution
of the NLSE (1) with convergence order
, under the condition that det
, [6]. One setback of the Newton method is that it fails if, at any stage of the computation, the Jacobian matrix
is singular [7] [8].
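For reference, a minimal numerical sketch of the classical Newton iteration just described is given below (illustrative only; the names G, JG, the tolerance and the iteration cap are choices made for the sketch):

import numpy as np

def newton_system(G, JG, x0, tol=1e-12, max_iter=100):
    # Classical Newton method for G(X) = 0: solve J_G(x_k) d = -G(x_k) and set x_{k+1} = x_k + d.
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        d = np.linalg.solve(JG(x), -G(x))  # raises LinAlgError if J_G(x_k) is singular
        x = x + d
        if np.linalg.norm(G(x)) < tol:
            break
    return x, k

The failure mentioned above corresponds to np.linalg.solve raising an error whenever the Jacobian J_G(x_k) is singular.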
The development of new iterative methods for approximating the solution of (1) has attracted the attention of researchers in recent years, as is evident in the literature; see, for example, [9] [10] [11] [12]. One objective of developing iterative methods for approximating the solutions of the NLSE (1) is to obtain methods with a better convergence rate or computational efficiency, or methods modified to solve certain classes of problems. Recently, a plethora of iterative methods for approximating the solution of (1) have been developed via diverse techniques. These techniques include Taylor series and homotopy [13] [14] [15] [16]; decomposition techniques [3] [17] [18]; and quadrature formula techniques [19] [20] [21] [22].
Quadrature formulas are veritable tools for the evaluation of integrals, [23] [24] [25]. The idea used in developing quadrature based iterative methods is the approximation of the integral in the Taylor expansion of a vector function using quadrature formulas, [26] [27]. The quadrature based iterative functions are of implicit type, [19] [20] [21] [28] [29]. To implement an implicit iterative formula derived via a quadrature formula, the predictor-corrector technique is utilized, with the Newton method often used as predictor and the iterative function derived from the quadrature formula as corrector. Quadrature based methods break down in implementation when the Jacobian
of the NLSE (1) is singular at iteration points. The presence of a singular Jacobian
within the domain of evaluation does not, in practice, suggest the absence of a solution to (1). In order to circumvent the problem of a singular Jacobian at a point in the vicinity of the solution
of (1), the Newton method is modified by introducing a perturbation term (a diagonal matrix) into the Jacobian of its corrector factor [30]. In [31], the idea of the perturbation term introduced in [30] is utilized to develop a Two-step iterative method for approximating the solution of (1), where the Jacobian
is singular at some iteration points. In [8], a similar perturbation term was introduced at every step of the corrector factor in a three-step frozen Jacobian iterative method for approximating the solution of (1). Other articles, such as [32] [33], have also developed several iterative methods for solving (1) with the help of the same perturbation term applied to the Jacobian of (1). It is worthy of note that the diagonal matrix used as the perturbation term in the literature has not been significantly modified since it was introduced in [30]. Also, its application has not been extended to quadrature based iterative methods. Motivated and inspired by the work in [30] [31] [32] and [33], families of multi-step quadrature based iterative methods with a perturbation term infused into the Jacobian are developed in this paper. It is important to note that the perturbation term developed and used in this work is different from the diagonal matrix formed by the coordinate functions of the target NLSE (1) that is used in the literature. To achieve this, a continuous and differentiable auxiliary function is infused directly into (1). The resulting NLSE is thereafter expressed as coupled equations using a generic quadrature formula. The decomposition technique is used to resolve the coupled equations, and from these, iterative schemes for approximating the solution of (1) when its Jacobian is singular are proposed.
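To make the predictor-corrector strategy concrete, the sketch below shows a well-known quadrature based variant (the two-node trapezoidal rule applied to the integral in the Taylor expansion); it is included only as an illustration of the technique and is not one of the methods proposed in this paper:

import numpy as np

def trapezoidal_newton_step(G, JG, x):
    # Predictor: classical Newton step.
    y = x - np.linalg.solve(JG(x), G(x))
    # Corrector: the integral is approximated by the trapezoidal rule, which
    # replaces J_G(x) by the average of the Jacobians at x and at the predictor y.
    return x - np.linalg.solve(0.5 * (JG(x) + JG(y)), G(x))

Like the Newton predictor itself, this corrector fails whenever either Jacobian evaluation is singular, which is precisely the situation addressed in this work.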
2. The Proposed Iterative Methods
Let
be the initial approximation close to
a solution of the NLSE (1) and
a function such that
(2)
where
is a differentiable nonzero scalar function. The notation
denotes the component-wise (element-wise) operator such that
(3)
The solutions of
and
are the same because
for all values of X.
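In other words, since \varphi(X) \neq 0 for every X, (2) and (3) give (with each component of G scaled by the nonzero scalar \varphi(X), as assumed in this remark)

G^{*}(\xi) = 0 \iff \varphi(\xi)\, g_i(\xi) = 0 \ \text{for } i = 1, \dots, m \iff G(\xi) = 0,

so the transformed system and the original NLSE (1) share the same solution set.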
With the aid of Taylor series expansion of a multi-dimensional function about
up to the second term and using the generic quadrature formula to approximate
, (2) can be rewritten as
(4)
where
denotes the higher order terms of the Taylor expansion, the division operator in
is element-wise,
and
are the knots and weights, respectively, such that
(5)
Equation (5) gives the consistency conditions, [34].
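For illustration, for a q-node quadrature formula on [0, 1] with knots \tau_j and weights A_j (notation assumed for this remark), a consistency condition of the kind in (5) typically takes the form

\sum_{j=1}^{q} A_j = 1, \qquad 0 \le \tau_j \le 1,

which ensures that the quadrature rule integrates constants exactly.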
Equation (4) is expressed as the coupled equations given in (6) and (7).
(6)
(7)
In compact form, (6) can be expressed as
(8)
where
(9)
is a nonlinear function.
The decomposition technique due to [17] is applied to decompose the nonlinear function (9) as
(10)
where
is the initial guess.
The idea here is to find the solution vector X of the NLSE (1) in series form through an iterative scheme, such that the solution X is the sum of the initial guess
and the sum of the consecutive differences between successive approximations of X, that is:
(11)
Hence, the following scheme is obtained from (11):
(12)
The sum of the respective sides of (12) is
(13)
From (2) and (12), the solution X of (1) is approximated as:
(14)
As s becomes large, the approximations of the solution X get closer to the exact solution of (1).
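Written out, the telescoping structure behind (11) and (14) is (iterate labels assumed)

x_0 + \sum_{i=0}^{s-1} \left( x_{i+1} - x_i \right) = x_s,

so the s-th partial sum of the series is simply the s-th iterate, and letting s grow recovers the solution X.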
From Equation (12)
(15)
Since
is the initial guess, setting
in (7) yields
(16)
From (9) and (15),
(17)
For
in (14), and using (17), the following is obtained.
(18)
Using the formulation in (18), a One-step family of iterative schemes for approximating the solution of (1) is proposed, as given in Scheme 1.
Scheme 1 Assume
is an initial guess, approximate the solution
of (1) using the iterative scheme:
(19)
Scheme 1 is a family of One-step iterative schemes that can be used to propose iterative methods for solving (1) for some choice of the function
.
For
in (14), the solution X of (1) can be approximated as:
(20)
Setting
in (7) implies
(21)
From (18),
(22)
Substituting (22) into (21) yields
(23)
Inserting (23) into (20) gives the equation
(24)
Using (24), a Two-step iterative scheme for the approximation of the solution
of (1) is proposed, as given in Scheme 2.
Scheme 2 Assume
is an initial guess, approximate the solution
of (1) using the iterative scheme:
(25)
Scheme 2 is used to propose Two-step iterative methods for approximating the solution
of the NLSE (1).
For
in (14), the solution of (1) can be approximated as follows:
(26)
Setting
in (7) yields
(27)
From (26), (28) is obtained.
(28)
Substituting (28) into (27) yields
(29)
Substituting (29) into (26) gives
(30)
The formulation in (30) enables the proposal of a Three-step iterative scheme for the solution of (1).
Scheme 3 Assume
is an initial guess, approximate the solution
of (1) using the iterative scheme:
(31)
Scheme 3 is used to propose Three-step iterative methods for approximating the solution
of the NLSE (1).
A suitable choice of the function
in the proposed Scheme 1, Scheme 2 and Scheme 3 yields families of quadrature based iterative methods for approximating the solution
of (1). It is worthy of note that for
, Scheme 1 reduces to the classical Newton method, while Scheme 2 and Scheme 3 reduce to the family of approximation methods in [12]. One major aim of introducing
in (2) is to obtain the perturbation function
while retaining the solution
of the target NLSE (1). Recall that
must be chosen such that it is a nonzero scalar function and its first derivative
does not vanish. This way, the solution of (1) is unperturbed. One function whose value and first derivative are both nonzero is the exponential function, [30] [33] [35]. Suppose
is replaced by
, then
(32)
From (32), a generalization can be made as
(33)
where
. Consequently, the following iterative algorithms are obtained from Scheme 1, Scheme 2 and Scheme 3 respectively.
Algorithm 1 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(34)
If the parameter
, Algorithm 1 reduces to the m-dimensional classical Newton method for solving (1). The major difference between Algorithm 1 and the Wu method in [30] is the introduction of a dense matrix
in place of the diagonal matrix
in the Jacobian
of the target (1).
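As an illustrative sketch (not the exact method, which is defined by (33) and (34)), assume the perturbed Jacobian takes the dense rank-one form J_G(x_k) + G(x_k) beta^T, so that beta = 0 recovers the classical Newton step, consistent with the remark above. One Algorithm 1-type step could then be coded as:

import numpy as np

def algorithm1_like_step(G, JG, x, beta):
    # Assumed perturbation: a dense rank-one term G(x) * beta^T in place of the
    # diagonal perturbation of the Wu method; with beta = 0 this is exactly the
    # classical Newton step.
    g = G(x)
    M = JG(x) + np.outer(g, beta)
    return x - np.linalg.solve(M, g)

This is a sketch of the idea only; the precise perturbation used in Algorithm 1 is the one given by (33) and (34).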
Algorithm 2 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(35)
Algorithm 2 is a Two-step family of iterative methods for approximating the solution of (1).
Algorithm 3 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(36)
Remark 1
For numerical implementation, the choice of
is subjective; however, specific values of
(for reference purposes,
is denoted as
) are used such that the magnitude of their elements is less than one, in order to achieve a better convergence rate and accuracy. Similarly, the choice of
is also subjective but must satisfy the consistency condition in (5).
2.1. Convergence Analysis of the Proposed Iterative Methods
In this section, the convergence of the iterative methods (Algorithm 1, Algorithm 2 and Algorithm 3) is established using the Taylor series approach, [11] [12] [36]. In all the proofs, it is assumed that the function
is thrice Fréchet differentiable.
2.2. Convergence Analysis of Algorithm 1
To establish the convergence of Algorithm 1, the proof of Theorem 1 is considered.
Theorem 1 Suppose the function
is continuous and differentiable in some neighborhood
of
. If
is an initial guess in the neighborhood of
, then the sequence of approximations
generated by (34) converges to
with convergence order
.
Proof. Let
be the error in the kth iteration point. Using the Taylor series expansion of
and
about
, the following equations are obtained.
(37)
(38)
Setting
in (37) and (38), implies
(39)
(40)
where I is an
identity matrix and
.
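For orientation, proofs of this kind typically rest on expansions of the form (standard notation assumed here)

G(x_k) = J_G(\xi)\left[ e_k + C_2 e_k^{2} + C_3 e_k^{3} + O(e_k^{4}) \right],
J_G(x_k) = J_G(\xi)\left[ I + 2 C_2 e_k + 3 C_3 e_k^{2} + O(e_k^{3}) \right],

with e_k the error at the k-th iterate and C_j = \frac{1}{j!} J_G(\xi)^{-1} G^{(j)}(\xi); the exact forms used in this proof, including the contribution of the perturbation term, are those in (37)-(40).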
Using (39) and (40)
(41)
Multiplying (41) by (39) yields
(42)
Substituting (42) into (34), the following equation is obtained.
(43)
Equation (43) implies that the sequence of approximations generated by the iterative method (34) converges to the solution
of (1) with convergence order
.
2.3. Convergence Analysis of the Proposed Algorithm 2
Similar to the proof of Theorem 1, the convergence of Algorithm 2 is established in the proof of Theorem 2.
Theorem 2 Suppose the function
is continuously differentiable in some neighborhood
of
. If
is an initial guess in the neighborhood of
, then for
the sequence of approximations
generated by (35) converges to
with convergence order
.
Proof. From Equation (35),
is defined. Setting
in (37) leads to the following equation.
(44)
Similarly, setting
in (38) yields
(45)
Using (39) and (45),
(46)
From (46) and (44),
(47)
Using (47) in the second step of (35), with the expansion of
as in (43), the following equation is obtained.
(48)
Equation (48) implies that the sequence of approximations generated by the iterative method (35) converges to
with convergence order
.
2.4. Convergence Analysis of the Proposed Algorithm 3
The convergence of the proposed Algorithm 3 is established by the proof of Theorem 3.
Theorem 3 Suppose the function
is continuously differentiable in some neighborhood
of its solution
. If
is an initial guess in the neighborhood of
, then the sequence of approximations
generated by (36) converges to
with convergence order
.
Proof. Set
and
in (37) and (38) respectively, where
is the second step of (35); then,
(49)
and
(50)
From (50)
(51)
By multiplying (51) by (49), the following equation is obtained.
(52)
Using (48) and (50) in the third step of (36) yields
(53)
Equation (53) implies that the sequence of approximations generated by the iterative method (36) converges to the solution
of (1) with convergence order
.
2.5. Particular Forms of the Proposed Iterative Methods
Here, some particular forms of the iterative methods in Algorithm 2 and Algorithm 3 are developed by assigning arbitrary values to the parameters
and
satisfying the conditions given in (5).
2.6. Particular Forms of Algorithm 2
For
, in Algorithm 2 gives rise to the following iterative method for approximating
of (1).
Algorithm 4 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(54)
Algorithm 4 is an iterative method for approximating the solution
of (1) with convergence order
and error equation satisfying
(55)
For
, in Algorithm 2, the following new iterative method for approximating
of (1) is obtained.
Algorithm 5 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(56)
Algorithm 5 is an iterative method of convergence order
for approximating the solution
of (1), with error equation satisfying
(57)
For
, and
in Algorithm 2, the following new iterative method is obtained.
Algorithm 6 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(58)
Algorithm 6 is a convergence order
iterative method for approximating the solution
of (1), with error equation satisfying
(59)
For
, and
in Algorithm 2, the following new iterative method is obtained.
Algorithm 7 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(60)
Algorithm 7 is a convergence order
iterative method for approximating
of (1), with error equation satisfying
(61)
2.7. Particular Forms of Algorithm 3
Consider some particular forms of Algorithm 3. Setting
in Algorithm 3 leads to the following iterative method.
Algorithm 8 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(62)
Algorithm 8 is an iterative method for approximating the solution
of (1) with convergence order
and error equation satisfying
(63)
For
in Algorithm 3, the following iterative method is obtained.
Algorithm 9 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(64)
Algorithm 9 is an iterative method for approximating the solution
of (1) with convergence order
. The error equation of Algorithm 9 is
(65)
For
, and
in Algorithm 3 the following iterative method is proposed:
Algorithm 10 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(66)
Algorithm 10 is an iterative method of convergence order
for approximating the solution
of (1). Its error equation is
(67)
For
, and
in Algorithm 3, the following new iterative method is obtained.
Algorithm 11 Assume
is an initial guess, approximate the solution
of (1) using the iterative method:
(68)
Algorithm 11 is an iterative method of convergence order
for approximating the solution
of (1). Its error equation is
(69)
3. Efficiency Index
In this section, the efficiency index (EI) of the proposed iterative methods is established. Let
represents iterative method v with convergence order
. For reference purposes, the proposed iterative methods are denoted as indicated in Table 1. The formula
, is adopted to obtain the efficiency index (EI)
of the iterative methods, [37]. Assuming that the costs of evaluating the components of the function
are equal, for any method the computation of
needs m functional evaluations of the scalar functions
. Similarly, if the costs of evaluating the entries of the Jacobian
are equal, then the computation of
requires
evaluations of the scalar functions. The method
requires m evaluations of
and
evaluations of
per iteration, and its efficiency index is
, for
. This is the same as the efficiency index (EI) of the classical
Newton method
and the Wu method with convergence order
developed in [30]. The performance, with respect to the efficiency index (EI), of the proposed iterative methods compared with the Wu method in [30], denoted as
is presented in Table 2, for
and 20, where m is the dimension of (1).
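For instance, with the evaluation counts stated above (m scalar evaluations for G and m^2 for the Jacobian per iteration), efficiency indices of the kind reported in Table 2 can be computed with a small helper (illustrative only; the per-iteration costs of the multi-step methods are method specific):

def efficiency_index(p, cost):
    # EI = p ** (1 / cost), where cost is the number of scalar function
    # evaluations per iteration (e.g. m + m**2 for one G and one Jacobian evaluation).
    return p ** (1.0 / cost)

m = 20
print(efficiency_index(2, m + m ** 2))  # order-2 method with one G and one Jacobian evaluation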
The Wu method is given as:
(70)
where the parameter
.
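For orientation, a Wu-type correction with a diagonal perturbation built from the coordinate functions of (1) is commonly written as

x_{k+1} = x_k - \left[ J_G(x_k) + \operatorname{diag}\left( \lambda_1 g_1(x_k), \dots, \lambda_m g_m(x_k) \right) \right]^{-1} G(x_k),

with nonzero parameters \lambda_i; this indicative form is based on the description of the perturbation term in the Introduction, and the exact statement of the method is (70).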
Table 1. Algorithms and their denotation.
Table 2. Efficiency Index for proposed methods and compared method.
From Table 2, observe that for
, the EI decreases monotonically as the number of steps of the method and the number of nodes (q) of the quadrature formula used in the iterative method increase.
4. Numerical Experimentation
The developed iterative methods are tested on three standard problems in the literature, in order to illustrate their performance and confirm the theoretical convergence order (
). The computational performance of the developed iterative methods is compared with that of the Wu method in [30] and the Haijun method proposed in [31]. The Haijun method is given as
(71)
where the parameter
and
is approximated using
.
For the implementation, an Intel Celeron(R) 1.6 GHz CPU with 2 GB of RAM is used to execute Python 2.7.12 programs. The stopping criterion used for the computer programs is
, where
is the error tolerance. The metrics used in the comparison are:
number of iterations (IT), central processing unit time or execution time (CPU-Time), the norm of the function at the last iteration
, and Computational order of convergence (
) given in [38] as
(72)
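One common reading of (72), computed from the last four iterates, can be implemented as follows (the exact formula of [38] may differ in detail):

import numpy as np

def computational_order(xs):
    # Approximate computational order of convergence from successive differences
    # of the last four iterates x_{k-2}, x_{k-1}, x_k, x_{k+1}.
    d1 = np.linalg.norm(xs[-1] - xs[-2])
    d2 = np.linalg.norm(xs[-2] - xs[-3])
    d3 = np.linalg.norm(xs[-3] - xs[-4])
    return np.log(d1 / d2) / np.log(d2 / d3)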
To test the performance of the proposed methods, the following problems are solved.
Problem 1 [39]
Consider the NLSE
where
The solutions of Problem 1 in the domain
are
and
. The initial approximation used is
. The numerical results obtained for each method using different values of the parameters
and
are presented in Tables 3-7. All computations are carried out with 200 digit precision and
.
Problem 2 [31]
,
.
Table 3. Computational results for Problem 1 using
.
Table 4. Computational results for Problem 1 using
.
Table 5. Computational results for Problem 1 using
.
Table 6. Computational results for Problem 1 using
.
Table 7. Computational results for Problem 1 using
.
The solutions of Problem 2 within the domain
are
and
. The numerical solutions to Problem 2 are presented in Table 8 for methods of orders
and 4.
Problem 3 [40]
Consider the chemical equilibrium system modeled as an NLSE of the form (1) with
where
Using
as the initial starting point, 200-digit floating-point arithmetic, and
as the stopping criterion, the solution
in
approximated to 20 decimal places is
The computational results obtained for different methods are presented in Table 9.
Table 8. Computational results for Problem 2 using
.
Table 9. Computational results for Problem 3.
Results Discussion
The numerical results presented in Tables 3-9 lead to the following observations about the effectiveness of the proposed methods in approximating the solution of (1).
・ The numerical results in Tables 3-9 clearly indicate that the proposed methods are effective in approximating the solution of (1).
・ Most of the computational orders of convergence
of the proposed methods agree with the theoretical values.
・ It is observed that the proposed convergence order
method (
) produces better precision compared with the Wu method (
) for small systems. This is justifiable: since
is a dense matrix, more computational cost is incurred as the system becomes large.
・ Observe from Tables 3-8 that the Haijun method (
) failed on Problems 1 and 2, while the proposed methods converged to solutions in a few iterations.
・ Regarding the choice of
, its magnitude should be less than 1 to obtain better precision and convergence.
5. Conclusion
In this paper, multistep quadrature based methods for approximating the solution of NLSE are proposed. The proposed methods require only the first order Fréchet derivative to attain convergence order
and effectively approximate the solution of NLSE with a singular Jacobian. The proposed methods are applied to three standard problems in the literature in order to demonstrate their effectiveness. Judging from the computational results presented in the tables, the proposed methods are competitive compared with some existing methods.