Explicit Iterative Methods of Second Order and Approximate Inverse Preconditioners for Solving Complex Computational Problems

Explicit Exact and Approximate Inverse Preconditioners for solving complex linear systems are introduced. A class of general iterative methods of second order is presented and the selection of iterative parameters is discussed. The second order iterative methods behave quite similarly to first order methods, and the development of efficient preconditioners for solving the original linear system is a decisive factor for making the second order iterative methods superior to the first order iterative methods. Adaptive preconditioned Conjugate Gradient methods using explicit approximate preconditioners for efficiently solving large sparse systems of algebraic equations are also presented. The generalized Approximate Inverse Matrix techniques can be efficiently used in conjunction with explicit iterative schemes, leading to effective composite semi-direct solution methods for solving large linear systems of algebraic equations.


Introduction
During the last decades, considerable research effort has been directed to the solution of complex linear and nonlinear systems of algebraic equations by using a class of iterative methods. This class includes the conjugate gradient method and its hybrid multi-variants. The conjugate gradient method, originally introduced by Hestenes and Stiefel [1], was a direct solution method, but has since been extensively used as an iterative method for efficiently solving large sparse linear systems.
Certain theoretical issues on this subject, such as 1) the stability and the conditions of correctness of the approximation of the system matrix in the form of a product of triangular matrix factors, and 2) the convergence analysis of the iterative methods and the quantitative evaluation of the convergence rate of the iterative schemes [4], are of particular interest for the choice of the most efficient methods for systems with predetermined properties.
In the framework of this research work, and in order to substantiate the main motivation for the efficient usage of second order iterative methods for solving nonlinear systems, the following basic related questions will be considered: 1) Are the second order iterative methods comparable (or even superior) to first order iterative methods for solving nonlinear systems? A survey of related research work will be given. 2) Are second order iterative schemes preferable to first order iterative schemes for solving very large complex computational problems? The computational complexity per iterative step will also be examined.
The structure of this research paper is as follows: in Section 2, several advantages of second order iterative methods in comparison with the first order iterative methods for solving nonlinear systems are presented. In Section 3, a class of general iterative methods of second order is described, while in Section 4, certain explicit iterative schemes and approximate inverse preconditioners are introduced. In Section 5, exact and approximate inverse matrix algorithmic techniques are introduced, while in Section 6, some aspects of stability and correctness of incomplete factorization methods are presented. Finally, in Section 7, the convergence analysis and quantitative evaluation of convergence rate of incomplete factorization methods are discussed.

General Iterative Methods of Second Order: Part I
In recent years, considerable research effort has been focused on the topic of second order iterative methods. In order to substantiate the motivation of our research study, we present a synoptic survey of related computational methods developed on the subject. Several early iterative methods of second order with fixed or variable parameters have been extensively studied [15] [16] [17] [18].
A class of (optimal) second degree iterative methods for accelerating basic linear stationary methods of first degree with real eigenvalues has been presented [18] [19] and has been extended, through the application of conformal mapping and summation techniques, to the case where the eigenvalues are contained in elliptical regions of the complex plane [20] [21] [22]. Another similar contribution on optimal second degree iterative methods has also been reported [23]. A nonstationary iterative method of second order for solving nonlinear equations without requiring the use of any derivative has been presented; for algebraic equations this method coincides with Newton's method and is more efficient [24]. Note that Newton's methods and higher order iterative schemes (Householder's iterative methods), under some conditions of regularity of the given function and its derivatives, have been used for the numerical treatment of single nonlinear equations [25].
A three-step iterative method for solving nonlinear equations, using Steffensen's method in the third step and having eighth-order convergence, has recently been presented; this method requires a small number of calculations and does not require the calculation of a derivative in the third iterative step. A two-step iterative method for solving nonlinear equations, a modified Noor's method without computing the second derivatives and with fourth order convergence, has been presented [26]. A second order iterative method for solving quasi-variational inequalities has been introduced and sufficient conditions for the convergence rate have been given [27]. Two iterative methods, of order four and five respectively, using modified homotopy perturbation techniques for solving nonlinear equations have been presented together with their convergence analysis [28]. An efficient second order iterative method for IR drop analysis in power grids has been presented [29]. In the latter study, a first order iterative method is accelerated by a second order scheme of the form

u_{n+1} = u*_{n+1} + a(u*_{n+1} − u_n) + b(u_n − u_{n−1}),

where u*_{n+1} denotes the first-order iterate and a and b are real accelerating parameters affecting the convergence rate. For the consistency and convergence of the second order iterative methods the following statements hold:

Proposition 1: If the 1st-order iterative method converges to the exact solution, then the 2nd-order method will converge to the same solution for any values of a ≠ 0 and b ≠ 0.

Proposition 2: Let G denote the iteration matrix of the 2nd-order method. A necessary and sufficient condition for the iterative method to converge for all initial conditions is that the spectral radius of G satisfies ρ(G) < 1; minimizing the spectral radius of G maximizes the asymptotic convergence rate [29].
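To illustrate the construction above, the following Python sketch (an assumed, simplified illustration, not the scheme of [29] itself) wraps a generic convergent first-order step in the second order acceleration; the function names and the stopping rule are hypothetical.

    import numpy as np

    def second_order_accelerate(first_order_step, x0, a, b, tol=1e-10, max_iter=1000):
        # Second order acceleration of a generic first-order iteration:
        #   x_{n+1} = x*_{n+1} + a*(x*_{n+1} - x_n) + b*(x_n - x_{n-1}),
        # where x*_{n+1} = first_order_step(x_n) is the first-order iterate
        # and a, b are real accelerating parameters (Proposition 1: the fixed
        # point of the first-order scheme is preserved for any a, b).
        x_prev = np.asarray(x0, dtype=float)
        x = first_order_step(x_prev)          # plain first-order step to start
        for _ in range(max_iter):
            x_star = first_order_step(x)      # first-order iterate x*_{n+1}
            x_next = x_star + a * (x_star - x) + b * (x - x_prev)
            if np.linalg.norm(x_next - x) < tol:
                return x_next
            x_prev, x = x, x_next
        return x

    # Example: accelerating a Jacobi sweep for a small SPD system
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    rhs = np.array([1.0, 2.0])
    jacobi = lambda x: x + (rhs - A @ x) / np.diag(A)
    x = second_order_accelerate(jacobi, np.zeros(2), a=0.1, b=0.05)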

Some Problems in Solving Very Large Complex Computational Problems
It is known that, due to the extremely large sizes of power grids, IR drop analysis has become a computationally challenging problem in terms of: 1) memory usage; 2) memory inefficiency (a power grid with a million nodes leads to a matrix with on the order of a trillion elements); 3) the trade-off between runtime and predetermined accuracy; 4) power delivery issues (increased complexity of VLSI circuits, increased power (current) consumption, decreasing supply voltage, reduced noise margin, increased gate delay); 5) the requirement that modelling and analysis of the power grid network be accurate, while power grid networks tend to be very large [22].
Typical applications also include a large class of initial-boundary value problems of general form in three space dimensions with strong nonlinearities, where the positive perturbation parameter tends to zero [30].
The discrete analogues of Equations (1)-(2) lead to the solution of a general linear system Au = s, where the coefficient matrix A is a large sparse unsymmetric real (n × n) matrix of irregular structure.

Computational Complexity per Iterative Step
Computational complexity of algorithms is an important subject on which considerable research has been focused in the last decades [6] [31] [32] [33] [34]. An interesting topic is the comparison of the number of flops per iteration to be performed for several classes of solution methods, i.e. 1) direct second order methods (based on direct matrix decomposition), 2) iterative first order methods and 3) iterative second order methods. For the cases of dense and sparse problems, this comparison reveals the following: 1) In first order iterative methods, the number of flops per iteration is generally much smaller than in second order methods, while the former generally need many iterations to converge (perhaps a few hundred up to a few thousand) and also have difficulty obtaining highly accurate optimal solutions.
2) The second order iterative methods usually converge quickly (within a few tens of iterations) to highly accurate solutions.
3) Overall, the first order iterative methods are considered to be much superior to second order methods based on direct factorization for solving large scale SDP problems. The latter methods, however, may be prohibitively expensive (even impossible) when solving large linear systems, due to the high number of flops required and the large amount of memory needed for storing the coefficient matrix. Note that a logarithmic barrier term, which emphasizes the improvement of poor quality elements, can be used to solve the constrained optimization problem using the gradient of the objective function [35] [36].
Conclusively, the second order iterative methods, as far as the inner iterations are concerned, behave quite similarly to first order methods. Furthermore, the development of efficient preconditioners for solving the original linear system is a decisive factor for making the second order iterative methods superior to the first order iterative methods.

General Iterative Methods of Second Order: Part II
In this section, a class of iterative methods of second order for solving large sparse linear systems of the form Au = b is presented and explicit preconditioned methods for approximating the solution of complex computational problems are given. Apart from the extensive research work that has been done on solving linear systems by second order iterative methods, as indicated in Section 2, it is worthwhile to mention that the efficient usage of second order iterative schemes for solving nonlinear systems has been reported by Lipitakis (1990); these iterative schemes originate from the E-algorithm, a modified version of the well-known algorithm of Euclid, written in the form of a general second order iterative scheme [37]. This general iterative scheme is synoptically described in the following. Let us consider a class of non-stationary iterative methods of second order of the form

u_{i+1} = π_i u_i + τ_i u_{i−1} + δ_i,   (3)

where π_i, τ_i are real parameters, the so-called E-parameters [37], and δ_i is a "correction" term. The iterative scheme (3) is known as the E-iterative method [37].
These iterative schemes, with appropriate selection of the parameters, can be used in conjunction with various explicit preconditioned methods, such as explicit preconditioned Richardson, Chebychev and conjugate gradient type schemes. Let us consider an unsymmetric large sparse linear system Au = b. Then, the choice of E-parameters π_i = 1, τ_i = 0 and δ_i = H^{−1}(b − Au_i), where the non-singular matrix H is chosen such that the computational work required for solving the linear system Hu = s (where s is given) is considerably small compared to that required to solve the system Au = b, leads to the first order iterative procedure

u_{i+1} = u_i + H^{−1}(b − Au_i).

Note that several choices of matrix H lead to well-known iterative methods, i.e. H = diag(A) yields the Jacobi method, while choosing H as the lower triangular part of A yields the Gauss-Seidel method.
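As a minimal sketch of this procedure (with H supplied through a user-provided solve routine, a hypothetical interface), the Jacobi choice H = diag(A) looks as follows:

    import numpy as np

    def preconditioned_first_order(A, b, u0, H_solve, tol=1e-10, max_iter=500):
        # Implements u_{i+1} = u_i + H^{-1}(b - A u_i); H_solve(r) returns
        # the solution of H d = r, which must be cheap for H to be useful.
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            r = b - A @ u                     # residual r_i = b - A u_i
            if np.linalg.norm(r) < tol:
                break
            u = u + H_solve(r)
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    u = preconditioned_first_order(A, b, np.zeros(2),
                                   H_solve=lambda r: r / np.diag(A))  # Jacobi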
Consider the unsymmetric linear system Au = b and a family of ellipses centred at d with foci d ± c, assumed to contain the eigenvalues of A. Assuming that the E-parameters are selected as π_i = 1 + α_i, τ_i = −α_i and δ_i = β_i r_i, where the acceleration parameters α_i, β_i are defined by the Chebychev recurrence relations associated with d and c, the iterative method (3) can be equivalently written as a second order non-stationary iterative scheme (known as the Chebychev iterative method):

u_{i+1} = u_i + α_i(u_i − u_{i−1}) + β_i r_i,

where r is the residual vector r = b − Au, and u_0, u_1 are arbitrarily chosen initial values.
The selection of fixed E-parameter values leads to the second-order iterative scheme (known as the second-order Richardson method):

u_{i+1} = u_i + α(u_i − u_{i−1}) + β r_i,

where the parameters α, β are the asymptotic values of α_i, β_i [19].
Let us consider an approximate factorization A ≈ L_r U_r of the coefficient matrix, where L_r and U_r are sparse triangular factors retaining a limited number of fill-in terms. Note that the effectiveness of explicit iterative methods is mainly related to the fact that the exact inverse of the sparse matrix A, although full, exhibits a similar "fuzzy" structure to A, i.e. the largest elements are clustered around the principal diagonal and the main diagonals [13].
The selection of the E-parameters π_i = 1 + α, τ_i = −α, δ_i = β M_r r_i and π_i = 1 + α_i, τ_i = −α_i, δ_i = β_i M_r r_i, where M_r is an approximate form of the inverse of A and α, β and α_i, β_i are preconditioned parameters and sequences of parameters respectively, leads to the following explicit iterative schemes:

u_{i+1} = u_i + α(u_i − u_{i−1}) + β M_r r_i,
u_{i+1} = u_i + α_i(u_i − u_{i−1}) + β_i M_r r_i,

which are known as the explicit second order Richardson and explicit Chebychev methods respectively [13].
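A rough Python sketch of the explicit second order Richardson scheme above (with the approximate inverse M_r passed as a dense array for simplicity; the parameter values are illustrative assumptions):

    import numpy as np

    def explicit_second_order_richardson(A, b, M, alpha, beta, u0, num_iters=200):
        # u_{i+1} = u_i + alpha*(u_i - u_{i-1}) + beta * M (b - A u_i),
        # where M is an explicit approximate inverse of A and alpha, beta
        # are fixed acceleration parameters; replacing alpha, beta by
        # sequences alpha_i, beta_i gives the explicit Chebychev variant.
        u_prev = np.asarray(u0, dtype=float)
        u = u_prev + M @ (b - A @ u_prev)     # plain preconditioned first step
        for _ in range(num_iters):
            r = b - A @ u                     # residual r_i
            u_next = u + alpha * (u - u_prev) + beta * (M @ r)
            u_prev, u = u, u_next
        return u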
In these explicit iterative schemes, several approximate forms of the inverse can be used, as discussed in the following.

A Class of Optimized Approximate Inverse Variants
A class of optimized approximate inverse variants can be obtained by considering a (near) optimized choice of the approximate inverse M, which depends on the selection of related parameters, i.e. the fill-in parameters r_1, r_2, the retention parameters δl_1, δl_2 and the entropy-adaptivity-uncertainty (EAU) parameters [30] (Figure 1). Note that the selection of retention parameter values as multiples of the corresponding semi-bandwidths of the original matrix leads to improved numerical results [40].
Then, the following sub-classes of approximate inverses can be derived, depending on the accuracy, storage and computational work requirements.

Explicit Iterative Schemes
Analogous relationships can be derived for various explicit CG-type methods, as indicated in the following. The generalized alternating group explicit (AGE) iterative methods are based on the known ADI methods [12] and on a splitting A = G_1 + G_2, where D is a non-negative diagonal matrix and D, G_1, G_2 satisfy the condition that systems of the form (G_1 + ωD)v = z and (G_2 + ωD)v = z can be easily solved explicitly for any vectors v and z. Note that this statement holds only for certain model problems. Then, the following generalized AGE scheme can be obtained:

(G_1 + ωD) u_{n+1/2} = b − (G_2 − ωD) u_n,
(G_2 + ωD) u_{n+1} = b − (G_1 − ωD) u_{n+1/2},

where ω is a non-negative acceleration parameter [1].
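A compact sketch of one generalized AGE iteration (taking D = I for simplicity and using dense solves; in practice the half-step systems decompose into small groups that are solved explicitly):

    import numpy as np

    def age_iteration(G1, G2, b, u, omega):
        # One sweep of the generalized AGE scheme for A = G1 + G2:
        #   (G1 + omega*I) u_{n+1/2} = b - (G2 - omega*I) u_n
        #   (G2 + omega*I) u_{n+1}   = b - (G1 - omega*I) u_{n+1/2}
        # where omega is the non-negative acceleration parameter.
        I = np.eye(G1.shape[0])
        u_half = np.linalg.solve(G1 + omega * I, b - (G2 - omega * I) @ u)
        return np.linalg.solve(G2 + omega * I, b - (G1 - omega * I) @ u_half)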
In an analogous manner, the explicit preconditioned methods 1) Richardson + AGE and 2) Chebychev semi-iterative + AGE can be derived [43]. Note that the AGE method, which is based on the combination of certain elementary first order difference processes permitting the reduction of a given complicated problem into a sequence of simpler problems, can be considered as a fractional (splitting-up) method [44] [45].
The multiple explicit Jacobi (μ-EJ) method and its several parametric forms, provided that their spectral radius ρ satisfies the corresponding convergence condition, in combination with the Lanczos economized Chebychev polynomials of degree μ, have proved to be effective for solving elliptic boundary-value problems on parallel processors [39]. The multiple explicit Jacobi method in conjunction with economized Chebychev polynomials and Neumann series of certain degree can be effectively applied for solving explicitly large sparse linear systems resulting from the discretization of initial boundary-value problems [39], as sketched below.
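The Neumann-series idea can be illustrated as follows (a generic truncated series for A^{−1} built from the Jacobi splitting A = D − B; the function name and the degree μ are illustrative assumptions):

    import numpy as np

    def neumann_approximate_inverse(A, mu):
        # Truncated Neumann series for A^{-1} with A = D - B, D = diag(A):
        #   A^{-1} = (I - D^{-1}B)^{-1} D^{-1} ~= sum_{k=0}^{mu} (D^{-1}B)^k D^{-1},
        # valid when the spectral radius of the Jacobi matrix D^{-1}B is < 1.
        # Only matrix products are needed, which suits parallel execution.
        D = np.diag(np.diag(A))
        D_inv = np.diag(1.0 / np.diag(A))
        J = D_inv @ (D - A)                   # Jacobi iteration matrix D^{-1}B
        M = D_inv.copy()
        term = D_inv.copy()
        for _ in range(mu):
            term = J @ term                   # next term (D^{-1}B)^k D^{-1}
            M = M + term
        return M                              # explicit approximate inverse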

Explicit Preconditioned Conjugate Gradients Method of Second Order
Let us assume that the E-parameters are selected in accordance with the conjugate gradient recurrence coefficients; an equivalent second order explicit preconditioned CG scheme can then be obtained. In the following, the selection of E-parameters for the various explicit preconditioned methods is presented (see Table 1).
The proposed explicit iterative methods can be used for solving large sparse linear systems, and the E-iterative schemes, with suitable selection of E-parameters, can generate useful explicit iterative schemes of higher order for solving a wide class of very large sparse linear systems on multiprocessor systems.
Typical applications include a large class of initial-boundary value problems of general form in three space dimensions with strong nonlinearities, where the positive perturbation parameter tends to zero [46]. The discrete analogues of Equation (25a) lead to the solution of a general linear system whose coefficient matrix A is a large sparse unsymmetric real (n × n) matrix of irregular structure.

Selection of Explicit Iterative Solver and Algorithmic Implementation
In this section, an iterative solver of second order for solving linear and nonlinear systems, together with its algorithmic implementation, is considered.

On the Selection of Approximate Inverse Preconditioners
The selection of an efficient approximate inverse preconditioner for solving explicitly complex computational problems is an interesting research topic of critical importance. Let us assume that a non-singular large sparse unsymmetric matrix of irregular structure can be factorized as A = LU (Figure 2), where L and U are lower and upper triangular matrices respectively, and consider the linear system

Au = s,

where the coefficient matrix A is a large sparse real (n × n) matrix of irregular structure. The elements of the decomposition factors L_s and U_s can be obtained from the algorithmic procedure FELUBOT [39].

A class of exact and approximate inverse matrix techniques can be considered, containing several sub-classes of approximate inverses classified according to memory requirements, computational work and accuracy.

The Exact and Approximate Inverse Preconditioners
The elements of the exact and approximate inverse of a given unsymmetric matrix of irregular structure can be obtained as follows, and the explicit banded approximate inverse algorithm can be described by the following algorithmic procedure in pseudocode form:

Input: given matrix A; n, the order of A; submatrices F, H, Γ, Z; parameter εEM (indicating the exact inverse or approximate inverse matrix); semi-bandwidths m, p; l_1 and l_2, the numbers of diagonals retained in the semi-bandwidths m and p respectively; δl_1 and δl_2, the widths of bands in A.
Output: elements μ_{i,j} of the exact (or approximate) inverse M.
Computational Procedure:
Step 0: Read the value of the adaptive parameter εEM //for the appropriate value of the adaptive parameter εEM the algorithm computes the exact inverse matrix or approximate inverse matrices//
Call module exactmode-1(εEM) //If the module exactmode-1 is activated, then the algorithm EBAIM-1 computes the exact inverse of a given unsymmetric matrix of irregular structure using an exact LU factorization; otherwise the algorithm can be used for computing an approximate inverse matrix of the given coefficient matrix//
Steps 1 - 40: compute the elements μ_{i,j} of the inverse.

The computation of the exact inverse requires a considerable number of multiplications, while the memory requirements are (n × n) words. In the case of very large systems, the memory requirements could be prohibitively high, and the usage of memory-efficient approximate inverse matrices is desirable.
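The banded-retention idea underlying this procedure can be illustrated by the following simplified sketch (this is not the FELUBOT/EBAIM-1 procedure itself: columns of the exact inverse are computed from a dense LU factorization and only the entries within δl diagonals of the main diagonal are retained):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    def banded_approximate_inverse(A, delta_l):
        # Retain only the entries of A^{-1} lying within delta_l diagonals
        # of the main diagonal, mimicking the retention-parameter idea.
        n = A.shape[0]
        P, L, U = lu(A)                       # A = P L U
        M = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = 1.0
            y = solve_triangular(L, P.T @ e, lower=True)
            col = solve_triangular(U, y)      # j-th column of A^{-1}
            lo, hi = max(0, j - delta_l), min(n, j + delta_l + 1)
            M[lo:hi, j] = col[lo:hi]          # keep only the banded part
        return M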

An Adaptive Preconditioned Conjugate Gradient Method Using the Explicit Approximate Preconditioner
The preconditioned PCG method can be used to solve the normal equations, where R is the sparse, non-singular QR factor, while the preconditioned CGLS method can be used to solve the corresponding least squares equations. Note that the factor Q cannot be stored, while the only additional computational work is the solution of the two triangular systems R^T w = v and Rz = w. All the factorization processes are numerically stable. It should be pointed out that the sparse preconditioner M* used in the modified PCG method is the approximate inverse of the factor R, which is closely related to the so-called mrTIGO method.
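The preconditioning step amounts to two triangular solves with R, as in the following sketch (assuming R is available as an upper triangular array; the function name is hypothetical):

    import numpy as np
    from scipy.linalg import solve_triangular

    def apply_rtr_preconditioner(R, v):
        # Applies (R^T R)^{-1} to v via the two triangular solves
        # R^T w = v and R z = w; the factor Q is never needed or stored.
        w = solve_triangular(R, v, trans='T')   # solve R^T w = v
        return solve_triangular(R, w)           # solve R z = w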
In order to compute the solution of the linear system Ax = b efficiently, a modified Preconditioned Conjugate Gradient (mPCG) method in conjunction with the modified rTIGO method [47] is applied in the following algorithmic form.
Algorithm mPCG (A, b, tol, x_0, M*, x)
Purpose: a modified PCG method is used for solving a given system of linear equations.
Input: A, a symmetric and positive definite coefficient matrix; b, the right hand side vector; tol, the predetermined tolerance; x_0, the initial guess; M*, the required preconditioner.
Output: x, the solution vector.
Computational Procedure:
Step 1: Given x_0 and the preconditioner M*
Step 2: Set r_0 = Ax_0 − b
Step 3: Solve M* y_0 = r_0
Step 4: Set p_0 = −y_0, k = 0
Step 5: While r_k ≠ 0:
  α_k = (r_k^T y_k) / (p_k^T A p_k)
  x_{k+1} = x_k + α_k p_k
  r_{k+1} = r_k + α_k A p_k
  Solve M* y_{k+1} = r_{k+1}
  β_{k+1} = (r_{k+1}^T y_{k+1}) / (r_k^T y_k)
  p_{k+1} = −y_{k+1} + β_{k+1} p_k
  k = k + 1
Note that the system M* y = r has to be solved once per iteration; therefore, the preconditioner M* should be chosen such that this can be done easily and efficiently.
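A runnable Python version of the mPCG pseudocode above (the loop body follows the standard preconditioned CG recurrences with the r = Ax − b, p = −y conventions used in the steps; M_solve applies the preconditioner, e.g. multiplication by an explicit approximate inverse M*):

    import numpy as np

    def mpcg(A, b, x0, M_solve, tol=1e-10, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        r = A @ x - b                          # Step 2
        y = M_solve(r)                         # Step 3
        p = -y                                 # Step 4
        for _ in range(max_iter):              # Step 5
            if np.linalg.norm(r) <= tol:
                break
            Ap = A @ p
            alpha = (r @ y) / (p @ Ap)
            x = x + alpha * p
            r_new = r + alpha * Ap
            y_new = M_solve(r_new)             # one preconditioner solve per iteration
            beta = (r_new @ y_new) / (r @ y)
            p = -y_new + beta * p
            r, y = r_new, y_new
        return x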
The preconditioner M* = G results in minimal memory use; in this implementation, the storage requirements consist of the vectors r, x, y, p and the upper triangular matrix G. The convergence rate of the preconditioned CG method is independent of the ordering of the equations, and the matrix-vector products are orthogonal and independent. The preconditioned CG method is not self-correcting and the numerical errors accumulate in every round; therefore, to minimize the numerical errors in the PCG method, double precision variables were used at the cost of increased memory use. The explicit PCG method of second order can alternatively be used in conjunction with the explicit approximate inverse M_μ* for solving complex computational problems with the appropriate selection of the E-parameters of Table 1. Note that the topics of stability and correctness of incomplete factorization methods have been discussed in a recent research work [48].
The presented second order iterative schemes can be efficiently used for solving 2D and 3D initial and boundary value problems.

Conclusion
A class of general iterative methods of second order is described. Explicit adaptive iterative schemes and exact/approximate inverse preconditioners have been introduced and the selection of iterative parameters has been discussed. Exact and Approximate Inverse Matrix Algorithmic Techniques have been presented.
Exact and Approximate Inverse Preconditioners have been described in adaptive algorithmic form, and explicit preconditioned Conjugate Gradient methods of second order are given; Table 1 summarizes certain cases of the E-iterative scheme, which are no more than various explicit preconditioned methods. An Adaptive Preconditioned Conjugate Gradient Method using explicit approximate preconditioners has also been presented. Future research work will be focused on the implementation of the presented methods in parallel computer environments and related applications.