Journal of Applied Mathematics and Physics (JAMP), ISSN 2327-4352, Scientific Research Publishing
DOI: 10.4236/jamp.2016.44067

A Note on Parameterized Preconditioned Method for Singular Saddle Point Problems

Yueyan Lv, Naimin Zhang
School of Mathematics and Information Science, Wenzhou University, Wenzhou, China

Received 3 December 2015; accepted 6 April 2016; published 13 April 2016

Copyright © 2016 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract

Recently, some authors (Li, Yang and Wu, 2014) studied the parameterized preconditioned HSS (PPHSS) method for solving saddle point problems. In this short note, we further discuss the PPHSS method for solving singular saddle point problems. We prove the semi-convergence of the PPHSS method under some conditions. Numerical experiments are given to illustrate the efficiency of the method with appropriate parameters.

Keywords: Singular Saddle Point Problems; Hermitian and Skew-Hermitian Splitting; Preconditioning; Iteration Methods; Semi-Convergence
1. Introduction

We consider the iterative solution of the following linear system:

$$\mathcal{A}x \equiv \begin{pmatrix} B & E \\ -E^{*} & 0 \end{pmatrix}\begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \qquad (1)$$

where $B \in \mathbb{C}^{p\times p}$ is Hermitian positive definite, $E \in \mathbb{C}^{p\times q}$ is rank-deficient, i.e., $\operatorname{rank}(E) = r < q$, $E^{*}$ denotes the conjugate transpose of $E$, and $p \ge q$. Linear systems of the form (1) are called saddle point problems. They arise in many application areas, including computational fluid dynamics, constrained optimization and weighted least-squares problems; see, e.g., [1] [2].

We review the Hermitian and skew-Hermitian splitting (HSS) [3] of the coefficient matrix $\mathcal{A}$:

$$\mathcal{A} = H + S,$$

where

$$H = \frac{1}{2}\left(\mathcal{A} + \mathcal{A}^{*}\right) = \begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}, \qquad S = \frac{1}{2}\left(\mathcal{A} - \mathcal{A}^{*}\right) = \begin{pmatrix} 0 & E \\ -E^{*} & 0 \end{pmatrix}.$$
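To make the setup concrete, the saddle point matrix and its HSS splitting can be checked numerically. The following is an illustrative sketch only: the small random $B$, the deliberately rank-deficient $E$, and all sizes are assumptions for demonstration, not data from the paper.

```python
import numpy as np

# Small saddle point matrix A = [[B, E], [-E^*, 0]] with B Hermitian
# positive definite and E rank-deficient, and its HSS splitting.
rng = np.random.default_rng(0)
p, q = 6, 4
M = rng.standard_normal((p, p))
B = M @ M.T + p * np.eye(p)          # Hermitian positive definite
E = rng.standard_normal((p, q - 1))
E = np.hstack([E, E[:, :1]])         # duplicate a column: rank(E) = q - 1 < q

A = np.block([[B, E], [-E.T, np.zeros((q, q))]])
H = (A + A.T) / 2                    # Hermitian part: diag(B, 0)
S = (A - A.T) / 2                    # skew-Hermitian part: [[0, E], [-E^T, 0]]

assert np.linalg.matrix_rank(E) == q - 1
assert np.allclose(A, H + S)
assert np.allclose(H[:p, :p], B) and np.allclose(H[p:, :], 0)
```

Note that the Hermitian part $H$ is only positive semidefinite here, which is exactly why plain HSS theory must be adapted for saddle point problems.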

The PPHSS Iteration Method ([4]). Denote $P = \operatorname{diag}(B, C)$. Let $x^{(0)}$ be an arbitrary initial guess, and for $k = 0, 1, 2, \ldots$ compute $x^{(k+1)}$ by the following iteration scheme until $\{x^{(k)}\}$ converges:

$$\begin{cases} (\omega P + H)\,x^{(k+1/2)} = (\omega P - S)\,x^{(k)} + b, \\ (\tau P + S)\,x^{(k+1)} = (\tau P - H)\,x^{(k+1/2)} + b, \end{cases} \qquad (2)$$

where $\omega$, $\tau$ are given positive constants and the matrix $C$ is Hermitian positive definite.
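The precise PPHSS formulas are given in the paper of Li, Yang and Wu. For orientation, the classical unpreconditioned HSS iteration of Bai, Golub and Ng, which the PPHSS method preconditions and parameterizes, can be sketched as follows. This is a minimal sketch: the generic test matrix with positive definite Hermitian part is an assumption chosen so that HSS convergence is guaranteed, and is not the saddle point matrix of (1).

```python
import numpy as np

# A minimal sketch of the classical HSS iteration:
#   (alpha*I + H) x^{k+1/2} = (alpha*I - S) x^k       + b,
#   (alpha*I + S) x^{k+1}   = (alpha*I - H) x^{k+1/2} + b,
# where H and S are the Hermitian and skew-Hermitian parts of A.
def hss_solve(A, b, alpha=1.0, maxit=1000, tol=1e-10):
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for k in range(maxit):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# Demonstration on a generic matrix whose Hermitian part is positive definite.
rng = np.random.default_rng(3)
n = 8
W = rng.standard_normal((n, n))
A = W @ W.T + n * np.eye(n) + (W - W.T)
b = A @ np.ones(n)
x, steps = hss_solve(A, b)
assert np.linalg.norm(x - np.ones(n)) < 1e-6
```

The PPHSS scheme (2) has the same alternating two-step structure, with the shifts $\alpha I$ replaced by $\omega P$ and $\tau P$.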

Evidently, the iteration scheme (2) of the PPHSS method can be rewritten as

$$x^{(k+1)} = T(\omega,\tau)\,x^{(k)} + c, \qquad (4)$$

here, $T(\omega,\tau) = (\tau P + S)^{-1}(\tau P - H)(\omega P + H)^{-1}(\omega P - S)$ is the iteration matrix of the PPHSS method. In fact, Equation (4) may also result from the splitting

$$\mathcal{A} = M(\omega,\tau) - N(\omega,\tau),$$

with $T(\omega,\tau) = M(\omega,\tau)^{-1}N(\omega,\tau)$ and $c = M(\omega,\tau)^{-1}b$.

Evidently, the matrix $M(\omega,\tau)$ can act as a preconditioner for solving the linear system (1), which is called the PPHSS preconditioner. The PPHSS method is a special case of the generalized preconditioned HSS (GHSS) method [5]. When $\omega = \tau$, we obtain a special case of the PPHSS (SPPHSS) method. In order to analyze the semi-convergence of the PPHSS iteration, we transform $T(\omega,\tau)$ by a block-diagonal similarity transformation built from $B$, $C$ and the identity matrix $I_p$ of order $p$, and denote the resulting matrix by $\hat{T}(\omega,\tau)$. Owing to the similarity of the matrices $T(\omega,\tau)$ and $\hat{T}(\omega,\tau)$, we only need to study the spectral properties of $\hat{T}(\omega,\tau)$ in order to analyze the semi-convergence of the PPHSS iteration.

2. The Semi-Convergence of the PPHSS Method

As the coefficient matrix $\mathcal{A}$ is singular, the iteration matrix $T$ has eigenvalue 1, and the spectral radius of $T$ cannot be smaller than 1. For the iteration matrix $T$ of a singular linear system, we introduce its pseudo-spectral radius as follows:

$$\nu(T) = \max\{\,|\lambda| : \lambda \in \sigma(T),\ \lambda \neq 1\,\},$$

where $\sigma(T)$ is the set of eigenvalues of $T$.
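For small matrices, the pseudo-spectral radius is straightforward to compute directly from the spectrum. A minimal sketch (the helper name `pseudo_spectral_radius` and the diagonal test matrix are illustrative assumptions):

```python
import numpy as np

# nu(T) = max{ |lambda| : lambda an eigenvalue of T, lambda != 1 }.
def pseudo_spectral_radius(T, tol=1e-10):
    eigs = np.linalg.eigvals(T)
    other = [abs(lam) for lam in eigs if abs(lam - 1.0) > tol]
    return max(other) if other else 0.0

# Eigenvalue 1 (coming from the singularity) is excluded, so the
# pseudo-spectral radius is 0.5 even though the spectral radius is 1.
T = np.diag([1.0, 0.5, -0.3, 0.1])
assert abs(pseudo_spectral_radius(T) - 0.5) < 1e-12
```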

For a matrix $K$, the smallest nonnegative integer $i$ such that $\operatorname{rank}(K^{i+1}) = \operatorname{rank}(K^{i})$ is called the index of $K$, and we denote it by $\operatorname{index}(K)$. In fact, $\operatorname{index}(K)$ is the size of the largest Jordan block corresponding to the zero eigenvalue of $K$.
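This rank-based definition can be checked directly on small examples. A minimal sketch (the helper `matrix_index` is an illustrative assumption):

```python
import numpy as np

# index(K) = smallest nonnegative i with rank(K^{i+1}) = rank(K^i).
def matrix_index(K):
    i, Ki = 0, np.eye(K.shape[0])   # Ki holds K^i, starting from K^0 = I
    r_prev = np.linalg.matrix_rank(Ki)
    while True:
        Ki = Ki @ K
        r = np.linalg.matrix_rank(Ki)
        if r == r_prev:
            return i
        i, r_prev = i + 1, r

# A 2x2 nilpotent Jordan block (eigenvalue 0) has index 2, matching
# the size of its largest Jordan block for the zero eigenvalue.
J = np.array([[0.0, 1.0], [0.0, 0.0]])
assert matrix_index(J) == 2
assert matrix_index(np.eye(3)) == 0   # nonsingular matrices have index 0
```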

Lemma 2.1 ([6]). The iterative method (4) is semi-convergent if and only if

$$\operatorname{index}(I - T) = 1 \quad\text{and}\quad \nu(T) < 1.$$

Lemma 2.2 ([7]). $\operatorname{index}(I - T) = 1$ if and only if, for any $x$, $(I - T)^{2}x = 0$ implies $(I - T)x = 0$.

Theorem 2.3. Assume that $B$ and $C$ are Hermitian positive definite and that $E$ is rank-deficient. Then $\operatorname{index}(I - T(\omega,\tau)) = 1$.

Proof. The proof is similar to that of Lemma 2.8 in [5] and is omitted here.

Lemma 2.4 ([5]). Let $B$ and $C$ be Hermitian positive definite and let $E$ be rank-deficient with $\operatorname{rank}(E) = r$. Then we can partition $\hat{T}(\omega,\tau)$ in Equation (7) into a block form in which the part associated with the zero singular values of $E$ separates from the rest.

Let $E = U \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} V^{*}$ be the singular value decomposition [9] of $E$, where $U \in \mathbb{C}^{p\times p}$ and $V \in \mathbb{C}^{q\times q}$ are unitary matrices, and

$$\Sigma_r = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r), \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$$

are the singular values of $E$.
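A small numerical illustration of this fact (the sizes and random construction are assumptions): a rank-$r$ matrix has exactly $r$ positive singular values, and the remaining ones vanish.

```python
import numpy as np

# A rank-r matrix E in R^{p x q} (p >= q) has singular values
# sigma_1 >= ... >= sigma_r > 0 = sigma_{r+1} = ... = sigma_q.
rng = np.random.default_rng(2)
p, q, r = 7, 5, 3
E = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # rank r

U, sigma, Vh = np.linalg.svd(E)      # sigma is sorted in decreasing order
assert sigma.shape == (q,)
assert np.all(sigma[:r] > 1e-8)      # exactly r positive singular values
assert np.all(sigma[r:] < 1e-8)      # the rest vanish (up to roundoff)
assert np.allclose((U[:, :q] * sigma) @ Vh, E)   # E = U Sigma V^*
```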

Lemma 2.5. The eigenvalues of the iteration matrix $\hat{T}(\omega,\tau)$ of the PPHSS iteration method are a fixed value determined by $\omega$ and $\tau$, with appropriate multiplicity, together with the roots of the quadratic equations

$$\lambda^{2} - b_i\,\lambda + c_i = 0, \qquad i = 1, 2, \ldots, q, \qquad (8)$$

where the real coefficients $b_i$ and $c_i$ depend on $\omega$, $\tau$ and the singular value $\sigma_i$ of $E$ (with $\sigma_i = 0$ for $i > r$).

Proof. Notice the similarity of the matrices $T(\omega,\tau)$ and $\hat{T}(\omega,\tau)$. The proof is essentially analogous to that of Lemma 2.3 in [5], with only technical modifications, and is therefore omitted.

Lemma 2.6. If $\sigma_i \neq 0$, then the corresponding eigenvalues $\lambda$ of the iteration matrix satisfy $\lambda \neq 1$; if $\sigma_i = 0$, then $\lambda = 1$ or $\lambda$ is a fixed value determined by $\omega$ and $\tau$.

Proof. If $\sigma_i \neq 0$, we give the proof by contradiction. By Lemma 2.5, suppose that $\lambda = 1$; substituting $\lambda = 1$ into Equation (8) and simplifying, the resulting identity forces $\sigma_i = 0$, which is in contradiction with $\sigma_i \neq 0$.

If $\sigma_i = 0$, substituting $\sigma_i = 0$ into Equation (8) yields the stated eigenvalues directly, which finishes the proof.

Lemma 2.7 ([10]). Both roots of the real quadratic equation $x^{2} - bx + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
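This classical root condition can be verified numerically by comparing the moduli of the computed roots with the coefficient inequalities over a grid of $(b, c)$ pairs; the particular grid below is an arbitrary illustrative choice that stays away from the boundary cases, where floating-point root-finding is ill-conditioned.

```python
import numpy as np

# Compare |root| < 1 (via np.roots) with the condition |c| < 1 and
# |b| < 1 + c for the quadratic x^2 - b*x + c = 0.
def roots_inside_unit_circle(b, c):
    return all(abs(x) < 1 for x in np.roots([1.0, -b, c]))

def young_condition(b, c):
    return abs(c) < 1 and abs(b) < 1 + c

for b in np.linspace(-2.4, 2.4, 17):
    for c in np.linspace(-2.4, 2.4, 17):
        assert roots_inside_unit_circle(b, c) == young_condition(b, c)
```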

Theorem 2.8. If the iteration parameters $\omega$ and $\tau$ are positive and satisfy the condition (9) (an upper bound on $\tau$ in terms of $\omega$ and the nonzero singular values of $E$), then the pseudo-spectral radius of the PPHSS method satisfies $\nu(\hat{T}(\omega,\tau)) < 1$, i.e., the PPHSS iteration is semi-convergent.

Proof. Using condition (9), it follows that, whenever $\sigma_i \neq 0$, the coefficients of the quadratic Equation (8) satisfy the root condition of Lemma 2.7. According to Lemma 2.5, if $\sigma_i \neq 0$, we can thus obtain that both roots of Equation (8) are less than one in modulus, and hence, by Lemma 2.7, the corresponding eigenvalues $\lambda$ of $\hat{T}(\omega,\tau)$ satisfy $|\lambda| < 1$.

If $\sigma_i = 0$, by Lemma 2.6, the eigenvalues of $\hat{T}(\omega,\tau)$, except 1, also have modulus less than one under condition (9). According to the definition of the pseudo-spectral radius, we get $\nu(\hat{T}(\omega,\tau)) < 1$.

Theorem 2.9. Let $B$ and $C$ be Hermitian positive definite, and let $\sigma_1$ and $\sigma_r$ be the largest and the smallest nonzero singular values of $E$, respectively. Then the optimal value $\alpha^{*}$ of the iteration parameter for the SPPHSS iteration method is the value of $\alpha$ minimizing the pseudo-spectral radius $\nu(\hat{T}(\alpha,\alpha))$, and it is determined by $\sigma_1$ and $\sigma_r$, with the correspondingly optimal pseudo-spectral radius $\nu(\hat{T}(\alpha^{*},\alpha^{*}))$.

Proof. According to Lemma 2.5 and Lemma 2.6, we know that the eigenvalues of the iteration matrix are a fixed value with multiplicity $p$, together with the roots

$$\lambda_i^{\pm}, \qquad i = 1, 2, \ldots, q, \qquad (11)$$

of the quadratic Equation (8). If $\sigma_i = 0$, the eigenvalues of the form (11) are equal to 1, which cannot affect the value of $\nu(\hat{T})$. Therefore, without loss of generality, we only need to discuss the case $\sigma_i \neq 0$. The rest is similar to the proof of Theorem 3.1 in [4], and is omitted here.

3. Numerical Results

In this section, we use an example to demonstrate the numerical results of the PPHSS method as a solver, comparing its iteration steps (IT), elapsed CPU time in seconds (CPU) and relative residual error (RES) with those of other methods. The iteration is terminated once the current iterate $x^{(k)}$ satisfies

$$\mathrm{RES} = \frac{\|b - \mathcal{A}x^{(k)}\|_2}{\|b\|_2} < 10^{-6},$$

or the prescribed number of iteration steps is exceeded. All computations are implemented in MATLAB on a PC with an Intel(R) Celeron(R) CPU 1000M @ 1.80 GHz and 2.00 GB of memory.

Example 3.1 ([11]). Consider the saddle point problem (1), with the following block form of the coefficient matrix:

$$B = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix} \in \mathbb{R}^{2l^{2} \times 2l^{2}}, \qquad E = \begin{pmatrix} \hat{E} & \hat{e} \end{pmatrix} \in \mathbb{R}^{2l^{2} \times (l^{2}+1)},$$

where the symbol $\otimes$ denotes the Kronecker product, and

$$T = \frac{1}{h^{2}}\operatorname{tridiag}(-1, 2, -1) \in \mathbb{R}^{l \times l}, \qquad F = \frac{1}{h}\operatorname{tridiag}(-1, 1, 0) \in \mathbb{R}^{l \times l}, \qquad \hat{E} = \begin{pmatrix} I \otimes F \\ F \otimes I \end{pmatrix}, \qquad h = \frac{1}{l+1},$$

and $\hat{e}$ is a column vector in the range of $\hat{E}$, so that $E$ is rank-deficient; the right-hand side vector $b$ is chosen as $b = \mathcal{A}x^{*}$, where $x^{*}$ is a prescribed exact solution.
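Under the assumption that this is the standard Stokes-type test problem, the $(1,1)$ block can be assembled with Kronecker products as follows. The helper name `stokes_blocks` and the $1/h^{2}$ scaling are assumptions for illustration, not taken from the source.

```python
import numpy as np

# Assumed construction: T = (1/h^2) tridiag(-1, 2, -1), h = 1/(l+1),
# and the (1,1) block B = diag(I (x) T + T (x) I, I (x) T + T (x) I).
def stokes_blocks(l):
    h = 1.0 / (l + 1)
    T = (2.0 * np.eye(l)
         - np.diag(np.ones(l - 1), 1)
         - np.diag(np.ones(l - 1), -1)) / h**2
    L = np.kron(np.eye(l), T) + np.kron(T, np.eye(l))   # l^2 x l^2 Laplacian
    Z = np.zeros((l * l, l * l))
    B = np.block([[L, Z], [Z, L]])                      # 2 l^2 x 2 l^2
    return T, B

T, B = stokes_blocks(4)
assert B.shape == (32, 32)
assert np.allclose(B, B.T)
assert np.all(np.linalg.eigvalsh(B) > 0)   # symmetric positive definite
```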

For Example 3.1, we choose the matrix $C$ based on $\hat{B}$, the block diagonal part of $B$. From Table 1, it is clear to see that the pseudo-spectral radii of the PPHSS and the SPPHSS methods are much smaller than that of the PHSS method when the optimal parameters are employed. In Table 2, we list numerical results with respect to

Table 1. The optimal iteration parameters and pseudo-spectral radii.

Method            l = 8     l = 16    l = 24    l = 32
PHSS      α*      1.6328    2.1999    2.6511    3.0367
          ν       0.6756    0.8112    0.8667    0.8969
SPPHSS    α*      1.0216    1.0059    1.0027    1.0015
          ν       0.4947    0.4985    0.4993    0.4996
PPHSS     ω*      1.9815    2.6990    2.5976    2.9953
          τ*      0.6853    0.7845    1.0336    1.1234
          ν       0.5209    0.5258    0.5340    0.5452
Table 2. IT, CPU and RES of the tested methods with different problem sizes l.

Method                        l = 8     l = 16    l = 24    l = 32
PHSS          IT              26        37        47        54
              CPU             0.399     1.548     7.286     27.178
              RES (×10⁻⁷)     6.6914    7.2250    7.3711    9.3294
SPPHSS        IT              26        26        26        26
              CPU             1.075     1.583     4.879     8.610
              RES (×10⁻⁷)     5.3781    6.7198    7.0689    7.2124
PPHSS         IT              16        16        16        16
              CPU             0.220     1.008     3.620     12.558
              RES (×10⁻⁷)     7.8783    7.6704    7.7535    8.1446
GMRES         IT              883       2560      5450      10376
              CPU             0.524     3.179     12.572    16.264
              RES (×10⁻⁷)     9.8243    9.9925    9.9903    9.9950
PHSS-GMRES    IT              22        35        44        50
              CPU             0.111     0.467     1.649     3.751
              RES (×10⁻⁷)     9.6710    8.0874    9.3990    9.8588
SPPHSS-GMRES  IT              10        11        13        14
              CPU             0.082     0.372     1.831     3.892
              RES (×10⁻⁷)     4.7178    6.7487    6.2498    4.2212
PPHSS-GMRES   IT              11        13        13        16
              CPU             0.070     0.428     1.564     3.366
              RES (×10⁻⁷)     5.4534    2.3975    9.8449    9.8690

IT, CPU and RES of the tested methods with different problem sizes $l$. We see that the PPHSS and SPPHSS methods with appropriate parameters always outperform the PHSS method, both as solvers and as preconditioners for GMRES, in iteration steps and CPU times. Notice that applying the PPHSS preconditioner requires solving linear systems with its block factors at each step. To compute the corresponding matrix-vector products, we use incomplete LU factorizations of $B$ and $C$ with drop tolerance 0.001. In the two tables, we use restarted GMRES(18) and preconditioned restarted GMRES(18).

Cite this paper

Lv, Y. and Zhang, N. (2016) A Note on Parameterized Preconditioned Method for Singular Saddle Point Problems. Journal of Applied Mathematics and Physics, 4, 608-613. doi: 10.4236/jamp.2016.44067

References

[1] Elman, H.C., Ramage, A. and Silvester, D.J. (2007) Algorithm 866: IFISS, a MATLAB Toolbox for Modelling Incompressible Flow. ACM Trans. Math. Softw., 33, 1-18.
[2] Bjorck, A. (1996) Numerical Methods for Least Squares Problems. SIAM, Philadelphia. http://dx.doi.org/10.1137/1.9781611971484
[3] Bai, Z.Z., Golub, G.H. and Ng, M.K. (2003) Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Definite Linear Systems. SIAM J. Matrix Anal. Appl., 24, 603-626. http://dx.doi.org/10.1137/S0895479801395458
[4] Li, X., Yang, A.L. and Wu, Y.J. (2014) Parameterized Preconditioned Hermitian and Skew-Hermitian Splitting Iteration Method for Saddle-Point Problems. Int. J. Comput. Math., 91, 1224-1238. http://dx.doi.org/10.1080/00207160.2013.829216
[5] Chao, Z. and Zhang, N.M. (2014) A Generalized Preconditioned HSS Method for Singular Saddle Point Problems. Numer. Algorithms, 66, 203-221. http://dx.doi.org/10.1007/s11075-013-9730-y
[6] Berman, A. and Plemmons, R. (1979) Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York.
[7] Zhang, N.M. and Wei, Y.M. (2010) On the Convergence of General Stationary Iterative Methods for Range-Hermitian Singular Linear Systems. Numer. Linear Algebra Appl., 17, 139-154. http://dx.doi.org/10.1002/nla.663
[8] Chen, Y. and Zhang, N.M. (2014) A Note on the Generalization of Parameterized Inexact Uzawa Method for Singular Saddle Point Problems. Appl. Math. Comput., 235, 318-322. http://dx.doi.org/10.1016/j.amc.2014.02.089
[9] Golub, G.H. and Van Loan, C.F. (1996) Matrix Computations. 3rd Edition, The Johns Hopkins University Press, Baltimore.
[10] Young, D.M. (1971) Iterative Solution of Large Linear Systems. Academic Press, New York.
[11] Zheng, B., Bai, Z.Z. and Yang, X. (2009) On Semi-Convergence of Parameterized Uzawa Methods for Singular Saddle Point Problems. Linear Algebra Appl., 431, 808-817. http://dx.doi.org/10.1016/j.laa.2009.03.033