Improving AOR Method for a Class of Two-by-Two Linear Systems

In this paper, the preconditioned accelerated overrelaxation (AOR) method for solving a class of two-by-two linear systems is presented. A new preconditioner is proposed, following the idea of Wu and Huang [1]. The spectral radii of the iteration matrices of the preconditioned and the original methods are compared. The comparison shows that, under certain conditions, the convergence rate of the preconditioned AOR method is indeed better than that of the original AOR method whenever the original AOR method is convergent. Finally, a numerical example is presented to confirm our results.


Introduction
Sometimes we have to solve the following linear system:
$$Hx = f, \qquad H = \begin{pmatrix} I & B \\ C & I \end{pmatrix} \in \mathbb{R}^{n \times n}, \qquad x, f \in \mathbb{R}^{n}, \eqno(1.1)$$
where $B \in \mathbb{R}^{p \times (n-p)}$, $C \in \mathbb{R}^{(n-p) \times p}$, and $I$ denotes an identity matrix of appropriate order.
Systems such as (1.1) are important and appear in many different applications of scientific computing. For example, systems of the form (1.1) arise when the following generalized least-squares problem is considered:
$$\min_{x} \,(Ax - b)^{T} W^{-1} (Ax - b),$$
where $W$ is the variance-covariance matrix. One can see [2-5] for details.
As is known, the linear system (1.1) can be solved by direct methods or iterative methods. Direct methods are widely employed when the order of the coefficient matrix H is not too large, and are often regarded as robust. However, the memory and computational requirements of large linear systems may seriously challenge even the most efficient direct methods available today. The alternative is to use iterative methods, and it is therefore natural to employ iterative methods instead of direct methods for large sparse linear systems. Moreover, iterative methods are easier to implement efficiently on high-performance computers than direct methods.
As is known, there exist three well-known classical iterative methods, i.e., the Jacobi, Gauss-Seidel and successive overrelaxation (SOR) methods, which are fully covered in the excellent books by Varga [6] and Young [7]. To improve the convergence rate of the SOR method, the accelerated overrelaxation (AOR) method was proposed by Hadjidimos [8].
To solve the linear system (1.1) with the AOR iterative method, based on the structure of the matrix $H$, the matrix $H$ is split as follows:
$$H = D - L - U, \qquad D = I, \quad L = \begin{pmatrix} 0 & 0 \\ -C & 0 \end{pmatrix}, \quad U = \begin{pmatrix} 0 & -B \\ 0 & 0 \end{pmatrix}.$$
The AOR iterative method for solving (1.1) is established as follows:
$$x^{(k+1)} = T_{r,w} x^{(k)} + w (D - rL)^{-1} f, \qquad k = 0, 1, \ldots,$$
where $T_{r,w}$ is the iteration matrix and is of the following form:
$$T_{r,w} = (D - rL)^{-1}\left[(1-w)D + (w-r)L + wU\right].$$
Obviously, if $w = r$, then the AOR method reduces to the SOR method.
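As a concrete illustration of the splitting and of the iteration matrix $T_{r,w}$, the following is a minimal NumPy sketch. The blocks $B$ and $C$ are small matrices chosen only for illustration; they are not taken from this paper.

```python
import numpy as np

def aor_iteration_matrix(H, r, w):
    """AOR iteration matrix T_{r,w} = (D - rL)^{-1}[(1-w)D + (w-r)L + wU]
    for the splitting H = D - L - U (D diagonal, L strictly lower
    triangular, U strictly upper triangular)."""
    D = np.diag(np.diag(H))
    L = -np.tril(H, -1)
    U = -np.triu(H, 1)
    return np.linalg.solve(D - r * L, (1 - w) * D + (w - r) * L + w * U)

# Illustrative two-by-two block matrix H = [[I, B], [C, I]] with p = 2, n = 4.
B = np.array([[0.2, 0.1],
              [0.1, 0.3]])
C = np.array([[0.1, 0.2],
              [0.3, 0.1]])
H = np.block([[np.eye(2), B],
              [C, np.eye(2)]])

T = aor_iteration_matrix(H, r=0.5, w=0.8)
rho = max(abs(np.linalg.eigvals(T)))  # spectral radius rho(T_{r,w})

# For w = r the AOR method reduces to the SOR iteration matrix.
T_sor = aor_iteration_matrix(H, r=0.8, w=0.8)
```

For this particular choice of blocks the spectral radius is below 1, so the iteration converges.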
The spectral radius of the iteration matrix is decisive for the convergence and stability of the method: when the spectral radius is smaller than 1, the smaller it is, the faster the method converges. To accelerate the convergence rate of an iterative method for solving the linear system (1.1), preconditioned methods are often used. That is,

$$PHx = Pf,$$
where the preconditioner P is a non-singular matrix.
If the matrix $PH$ is expressed as
$$PH = \tilde{D} - \tilde{L} - \tilde{U},$$
then the preconditioned AOR method can be defined by
$$x^{(k+1)} = \tilde{T}_{r,w} x^{(k)} + w (\tilde{D} - r\tilde{L})^{-1} P f, \qquad k = 0, 1, \ldots,$$
where
$$\tilde{T}_{r,w} = (\tilde{D} - r\tilde{L})^{-1}\left[(1-w)\tilde{D} + (w-r)\tilde{L} + w\tilde{U}\right].$$
In this paper, following the idea of Wu and Huang [1], a new preconditioner is proposed to improve the convergence rate of the AOR method. Similar to the work of [1] and [9], we compare the spectral radii of the iteration matrices of the preconditioned and the original methods. The comparison results show that the convergence rate of the preconditioned AOR methods is indeed superior to that of the original AOR methods whenever the original AOR methods are convergent (see the next section).
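To show how a preconditioned splitting of $PH$ is used in practice, here is a minimal NumPy sketch. The blocks $B$, $C$ and the auxiliary matrix $S$ below are hypothetical, chosen only for illustration (the paper's particular choice of $S$ is introduced in Section 2); the check at the end verifies that the exact solution is a fixed point of the preconditioned iteration.

```python
import numpy as np

def aor_operators(A, r, w):
    """Return (T, M) with T the AOR iteration matrix of A = D - L - U
    and M = w (D - rL)^{-1}, so that one sweep is x <- T x + M b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    Minv = D - r * L
    T = np.linalg.solve(Minv, (1 - w) * D + (w - r) * L + w * U)
    M = w * np.linalg.inv(Minv)
    return T, M

# Illustrative system H x = f with H = [[I, B], [C, I]] (not from the paper).
B = np.array([[0.2, 0.1], [0.1, 0.3]])
C = np.array([[0.1, 0.2], [0.3, 0.1]])
H = np.block([[np.eye(2), B], [C, np.eye(2)]])
f = np.ones(4)

# Hypothetical preconditioner P = I + S with a single nonzero entry in S.
S = np.zeros((4, 4))
S[0, 2] = 0.2
P = np.eye(4) + S

T_pre, M_pre = aor_operators(P @ H, r=0.5, w=0.8)   # preconditioned AOR
x_exact = np.linalg.solve(H, f)
x_next = T_pre @ x_exact + M_pre @ (P @ f)          # one sweep at the solution
```

Because $PHx = Pf$ has the same solution as $Hx = f$, one AOR sweep applied to the exact solution must return it unchanged.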
For convenience, we now briefly explain some terminology and lemmas. A matrix $A = (a_{ij})$ is called nonnegative, denoted $A \ge 0$, if $a_{ij} \ge 0$ for all $i, j$; and $A \ge B$ if and only if $A - B \ge 0$. These definitions carry over immediately to vectors by identifying them with $n \times 1$ matrices. $\rho(\cdot)$ denotes the spectral radius of a matrix.
Lemma 1.1 [6]. Let $A$ be a nonnegative and irreducible $n \times n$ matrix. Then: 1) $A$ has a positive real eigenvalue equal to its spectral radius $\rho(A)$; 2) to $\rho(A)$ there corresponds a positive eigenvector $x > 0$.

Lemma 1.2. Let $A$ be a nonnegative matrix. Then: 1) if $\alpha x \le Ax$ for some nonnegative vector $x \ne 0$, then $\alpha \le \rho(A)$; 2) if $Ax \le \beta x$ for some positive vector $x$, then $\rho(A) \le \beta$. Moreover, if $A$ is irreducible and $0 \ne \alpha x \le Ax \le \beta x$ with $\alpha x \ne Ax$ and $Ax \ne \beta x$ for some nonnegative vector $x$, then $\alpha < \rho(A) < \beta$ and $x$ is a positive vector.

The outline of this paper is as follows. In Section 2, the spectral radii of the iteration matrices of the original and the preconditioned methods are compared. In Section 3, a numerical example is presented to illustrate our results.
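Both lemmas can be checked numerically on a small case. The matrix below is an arbitrary nonnegative irreducible matrix (a weighted cycle) chosen only for illustration.

```python
import numpy as np

# A nonnegative, irreducible 3x3 matrix: a weighted cycle 1 -> 2 -> 3 -> 1.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [3.0, 0.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
rho = max(abs(eigvals))                       # spectral radius rho(A)

# Lemma 1.1: rho(A) is itself a positive real eigenvalue of A,
# with a corresponding positive eigenvector.
k = int(np.argmax(np.isclose(eigvals, rho)))  # index of the Perron eigenvalue
v = np.real(eigvecs[:, k])
v = v if v[0] > 0 else -v                     # fix the sign of the eigenvector

# Lemma 1.2: for a positive vector x, the ratios (Ax)_i / x_i
# bracket the spectral radius.
x = np.ones(3)
ratios = (A @ x) / x
```

Here $A^3 = 6I$, so all eigenvalues have modulus $6^{1/3}$, and the Perron root is the real one.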

Preconditioned AOR Methods and Comparisons
Now, let us consider the preconditioned linear system
$$\tilde{H} x = \tilde{f}, \qquad \tilde{H} = (I + S)H, \quad \tilde{f} = (I + S)f. \eqno(2.1)$$
According to [1], the matrix $S$ is constructed from the entries of the off-diagonal blocks of $H$. Naturally, we assume that at least one element of $S$ is nonzero.
By simple computations, $\tilde{H}$ can be expressed as
$$\tilde{H} = \tilde{D} - \tilde{L} - \tilde{U},$$
where $\tilde{D}$, $\tilde{L}$ and $\tilde{U}$ are, respectively, the diagonal, strictly lower triangular and strictly upper triangular parts of $\tilde{H}$. Then the preconditioned AOR method for (2.1) is defined as follows:
$$x^{(k+1)} = \tilde{T}_{r,w} x^{(k)} + w (\tilde{D} - r\tilde{L})^{-1} \tilde{f}, \qquad k = 0, 1, \ldots,$$
where

$$\tilde{T}_{r,w} = (\tilde{D} - r\tilde{L})^{-1}\left[(1-w)\tilde{D} + (w-r)\tilde{L} + w\tilde{U}\right]. \eqno(2.3)$$
The following theorem is obtained by comparing the spectral radii of the iteration matrices $\tilde{T}_{r,w}$ and $T_{r,w}$.

Theorem 2.1 Let the coefficient matrix $H$ be irreducible and let $0 \le r \le w \le 1$ ($w \ne 0$). Then: 1) if $\rho(T_{r,w}) < 1$, then $\rho(\tilde{T}_{r,w}) < \rho(T_{r,w}) < 1$; 2) if $\rho(T_{r,w}) > 1$, then $\rho(\tilde{T}_{r,w}) > \rho(T_{r,w}) > 1$.

Proof. Since the matrix $H$ is irreducible, by observing the structure of (2.3), it is not difficult to see that $T_{r,w}$ is nonnegative and irreducible; the case $T_{r,w} = 0$ is impossible, since otherwise the matrix $H$ would become singular. So we mainly discuss two cases: $\rho(T_{r,w}) < 1$ and $\rho(T_{r,w}) > 1$. Let $\lambda = \rho(T_{r,w})$. By Lemma 1.1, there is a positive vector $x$ such that $T_{r,w} x = \lambda x$. Since $S \ge 0$ and $S \ne 0$, for $\lambda < 1$ we obtain $\tilde{T}_{r,w} x - \lambda x \le 0$, but not equal to the zero vector. By Lemma 1.2, we get $\rho(\tilde{T}_{r,w}) < \lambda = \rho(T_{r,w}) < 1$, so 1) holds. Similarly, 2) holds with $\lambda > 1$, which completes the proof. □

It is well known that when $w = r$, the AOR method reduces to the SOR method. The following corollary is easily obtained.
Corollary 2.1 Let the coefficient matrix $H$ be irreducible and let $w = r$ satisfy the conditions of Theorem 2.1. Then: 1) if $\rho(T_{w,w}) < 1$, then $\rho(\tilde{T}_{w,w}) < \rho(T_{w,w}) < 1$; 2) if $\rho(T_{w,w}) > 1$, then $\rho(\tilde{T}_{w,w}) > \rho(T_{w,w}) > 1$.

Next, we consider another preconditioner built from an auxiliary matrix $\tilde{S}$, which admits three forms according to whether $n - p$ is less than, equal to, or greater than $p$. Naturally, we assume that at least one element of $\tilde{S}$ is nonzero. For the sake of simplicity, we assume that $n - p = p$. Then $\tilde{H}$ can again be expressed as $\tilde{H} = \tilde{D} - \tilde{L} - \tilde{U}$, and the preconditioned AOR method for (2.1) takes the corresponding form. Similarly, the following theorem and corollary are obtained by comparing the spectral radii of the iteration matrices $\tilde{T}_{r,w}$ and $T_{r,w}$.

Theorem 2.2 Let the coefficient matrix $H$ be irreducible and let $0 \le r \le w \le 1$ ($w \ne 0$). Then the conclusions of Theorem 2.1 remain valid for the preconditioner built from $\tilde{S}$.

A Numerical Example
Now let us consider the following example to illustrate the results. Example 3.1 Tables 1 and 2 display the spectral radii of the corresponding iteration matrices with different parameters $w$, $r$ and $p$. These computations were performed using Matlab 7.1.
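The coefficient matrix of Example 3.1 is not reproduced here, so the sketch below only shows how such tables can be generated: for an illustrative matrix $H$ (hypothetical data, not the matrix of Example 3.1) it computes the spectral radius of the AOR iteration matrix for several pairs $(w, r)$, which is exactly the quantity reported in Tables 1 and 2.

```python
import numpy as np

def spectral_radius_aor(H, r, w):
    """Spectral radius of T_{r,w} = (D - rL)^{-1}[(1-w)D + (w-r)L + wU]."""
    D = np.diag(np.diag(H))
    L = -np.tril(H, -1)
    U = -np.triu(H, 1)
    T = np.linalg.solve(D - r * L, (1 - w) * D + (w - r) * L + w * U)
    return max(abs(np.linalg.eigvals(T)))

# Illustrative H = [[I, B], [C, I]] (not the matrix of Example 3.1).
B = np.array([[0.2, 0.1], [0.1, 0.3]])
C = np.array([[0.1, 0.2], [0.3, 0.1]])
H = np.block([[np.eye(2), B], [C, np.eye(2)]])

radii = {(w, r): spectral_radius_aor(H, r, w)
         for (w, r) in [(0.8, 0.5), (0.9, 0.7), (1.0, 1.0)]}
for (w, r), rho in sorted(radii.items()):
    print(f"w = {w:.1f}, r = {r:.1f}: rho(T) = {rho:.4f}")
```

Note that for $w = r = 1$ the AOR method becomes the Gauss-Seidel method, whose spectral radius for this block structure equals $\rho(BC)$.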
Obviously, from Tables 1 and 2, it is easy to see that the spectral radii of the preconditioned iteration matrices are smaller than those of the corresponding original iteration matrices whenever the latter are smaller than 1, which is in accordance with the theoretical results in Section 2.