A Note on Parameterized Preconditioned Method for Singular Saddle Point Problems

1. Introduction
We consider the iterative solution of the following linear system:
$$\mathcal{A}z \equiv \begin{pmatrix} B & E \\ -E^{*} & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \qquad (1)$$
where $B \in \mathbb{C}^{p \times p}$ is Hermitian positive definite, $E \in \mathbb{C}^{p \times q}$ ($p \geq q$) is rank-deficient, i.e., $\operatorname{rank}(E) < q$, $E^{*}$ denotes the conjugate transpose of $E$, and $x, f \in \mathbb{C}^{p}$, $y, g \in \mathbb{C}^{q}$. Linear systems of the form (1) are called saddle point problems. They arise in many application areas, including computational fluid dynamics, constrained optimization and weighted least-squares problems; see, e.g., [1] [2].
We review the Hermitian and skew-Hermitian splitting (HSS) [3] of the coefficient matrix $\mathcal{A}$:
$$\mathcal{A} = \mathcal{H} + \mathcal{S},$$
where
$$\mathcal{H} = \frac{1}{2}\left(\mathcal{A} + \mathcal{A}^{*}\right) = \begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}, \qquad \mathcal{S} = \frac{1}{2}\left(\mathcal{A} - \mathcal{A}^{*}\right) = \begin{pmatrix} 0 & E \\ -E^{*} & 0 \end{pmatrix}.$$
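As a quick numerical illustration of this splitting (a minimal sketch only: the blocks B and E below are small random stand-ins, not the test matrices of Section 3), one can assemble the matrices and verify that A = H + S with H Hermitian and S skew-Hermitian:

import numpy as np

rng = np.random.default_rng(0)
p, q = 6, 3

# Small stand-in blocks: B Hermitian positive definite, E rank-deficient (rank(E) < q).
G = rng.standard_normal((p, p))
B = G @ G.T + p * np.eye(p)                                      # symmetric (hence Hermitian) positive definite
E = rng.standard_normal((p, 1)) @ rng.standard_normal((1, q))    # rank one, so rank(E) < q

# Saddle point matrix A = [[B, E], [-E^*, 0]] as in (1).
A = np.block([[B, E], [-E.T, np.zeros((q, q))]])

# HSS splitting: H = (A + A^*)/2 = diag(B, 0), S = (A - A^*)/2 = [[0, E], [-E^*, 0]].
H = 0.5 * (A + A.T)
S = 0.5 * (A - A.T)

print(np.allclose(A, H + S), np.allclose(S, -S.T))
print("rank(E) =", np.linalg.matrix_rank(E), "< q =", q)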
The PPHSS iteration method ([4]): Let $z^{(0)}$ be an arbitrary initial guess and compute $z^{(k+1)}$ for $k = 0, 1, 2, \ldots$ by the following iteration scheme until $\{z^{(k)}\}$ converges:
(2)
where the two iteration parameters are given positive constants and the matrix $C$ appearing in
(3)
is Hermitian positive definite.
Evidently, the iteration scheme (2) of the PPHSS method can be rewritten as
$$z^{(k+1)} = T z^{(k)} + c, \qquad (4)$$
where $T$ is the iteration matrix of the PPHSS method. In fact, Equation (4) may also result from the splitting
$$\mathcal{A} = M - N, \qquad (5)$$
with
(6)
so that $T = M^{-1}N$ and $c = M^{-1}b$. Evidently, the matrix $M$ can act as a preconditioner for solving the linear system (1), which is called the PPHSS preconditioner. The PPHSS method is a special case of the generalized preconditioned HSS (GHSS) method [5]. For a particular choice of the iteration parameters we obtain a special case of the PPHSS method, the SPPHSS method. In order to analyze the semi-convergence of the PPHSS iteration, we introduce a block diagonal similarity transformation whose (1,1) block is the identity matrix $I_p$ of order $p$, and use it to define
(7)
Owing to the similarity of the matrices $T$ and $\hat{T}$ defined in (7), we only need to study the spectral properties of $\hat{T}$ in order to analyze the semi-convergence of the PPHSS iteration.
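To make the connection between the splitting (5) and the fixed-point form (4) concrete, the following sketch runs the induced iteration $z^{(k+1)} = M^{-1}(N z^{(k)} + b)$ for a generic invertible splitting matrix M (a toy shifted matrix, not the PPHSS matrix of (6)); the same matrix M is what would be used as a preconditioner:

import numpy as np

def stationary_iteration(A, M, b, z0, maxit=500, tol=1e-8):
    # Fixed-point iteration z_{k+1} = M^{-1}(N z_k + b) induced by the splitting A = M - N.
    N = M - A
    z = z0.copy()
    for k in range(1, maxit + 1):
        z = np.linalg.solve(M, N @ z + b)
        if np.linalg.norm(b - A @ z) <= tol * np.linalg.norm(b):
            return z, k
    return z, maxit

# Toy illustration with a generic shifted splitting M = A + 2*I (illustrative only,
# NOT the PPHSS splitting): for this A the spectral radius of M^{-1}N is below 1,
# so the iteration converges.
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 6.0)) + 0.1 * rng.standard_normal((5, 5))
b = rng.standard_normal(5)
M = A + 2.0 * np.eye(5)
z, its = stationary_iteration(A, M, b, np.zeros(5))
print(its, np.linalg.norm(b - A @ z) / np.linalg.norm(b))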
2. The Semi-Convergence of the PPHSS Method
As the coefficient matrix $\mathcal{A}$ is singular, the iteration matrix $T$ has the eigenvalue 1, and hence the spectral radius of $T$ cannot be smaller than 1. For the iteration matrix $T$ of the singular linear system, we therefore introduce its pseudo-spectral radius $\nu(T)$ as follows:
$$\nu(T) = \max\{|\lambda| : \lambda \in \sigma(T),\ \lambda \neq 1\},$$
where $\sigma(T)$ is the set of eigenvalues of $T$.
For a matrix $K$, the smallest nonnegative integer $i$ such that $\operatorname{rank}(K^{i+1}) = \operatorname{rank}(K^{i})$ is called the index of $K$, and we denote it by $\operatorname{index}(K)$. In fact, $\operatorname{index}(K)$ is the size of the largest Jordan block corresponding to the zero eigenvalue of $K$.
Lemma 2.1 ([6]). The iterative method (4) is semi-convergent if and only if $\operatorname{index}(I - T) = 1$ and $\nu(T) < 1$.
Lemma 2.2 ([7]). $\operatorname{index}(I - T) = 1$ if and only if, for any $x$ with $(I - T)^{2} x = 0$, it holds that $(I - T) x = 0$.
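As a small numerical companion to Lemma 2.1 (a sketch only; the matrix T below is an arbitrary illustrative example, not the PPHSS iteration matrix), both quantities can be checked directly, $\nu(T)$ from the spectrum and $\operatorname{index}(I - T)$ from ranks of powers:

import numpy as np

def pseudo_spectral_radius(T, tol=1e-10):
    # nu(T) = max{ |lambda| : lambda an eigenvalue of T, lambda != 1 }.
    eigs = np.linalg.eigvals(T)
    other = [abs(lam) for lam in eigs if abs(lam - 1.0) > tol]
    return max(other) if other else 0.0

def matrix_index(K):
    # Smallest nonnegative integer i with rank(K^(i+1)) == rank(K^i).
    n = K.shape[0]
    P = np.eye(n)
    r_prev = n                                  # rank(K^0) = n
    for i in range(n + 1):
        P = P @ K
        r = np.linalg.matrix_rank(P)
        if r == r_prev:
            return i
        r_prev = r
    return n

# Illustrative T with eigenvalue 1 (coming from the singular part) and a
# contractive remainder; Lemma 2.1 then predicts semi-convergence.
T = np.diag([1.0, 0.5, -0.3])
nu, idx = pseudo_spectral_radius(T), matrix_index(np.eye(3) - T)
print("nu(T) =", nu, " index(I - T) =", idx, " semi-convergent:", nu < 1 and idx == 1)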
Theorem 2.3. Assume that $B$ and $C$ are Hermitian positive definite and that $E$ is rank-deficient. Then $\operatorname{index}(I - T) = 1$.
Proof. The proof is similar to that of Lemma 2.8 in [8] and is omitted here.
Lemma 2.4 ([4]). Let $B$ and $C$ be Hermitian positive definite and let $E$ be rank-deficient with $\operatorname{rank}(E) = r < q$. Then the matrix $\hat{T}$ in Equation (7) can be partitioned into the block form given in [4].
Let $E = U \Sigma V^{*}$ be the singular value decomposition [9] of $E$, where $U \in \mathbb{C}^{p \times p}$ and $V \in \mathbb{C}^{q \times q}$ are unitary matrices, and $\sigma_{1} \geq \sigma_{2} \geq \cdots \geq \sigma_{r} > 0$ are the nonzero singular values of $E$ (with $r = \operatorname{rank}(E)$).
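A minimal sketch of this decomposition (E is again a small rank-deficient stand-in; only the call to the SVD and the reading-off of the singular values are illustrated):

import numpy as np

rng = np.random.default_rng(2)
p, q = 6, 3
E = rng.standard_normal((p, 2)) @ rng.standard_normal((2, q))    # rank(E) = 2 < q

# Full SVD: E = U @ Sigma @ V^*, with U (p x p) and V (q x q) unitary.
U, s, Vh = np.linalg.svd(E, full_matrices=True)
r = int(np.sum(s > 1e-12 * s[0]))               # numerical rank = number of nonzero singular values

Sigma = np.zeros((p, q))
Sigma[:q, :q] = np.diag(s)
print(np.allclose(E, U @ Sigma @ Vh))
print("singular values:", np.round(s, 6), " rank(E) =", r)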
Lemma 2.5. The eigenvalues of the iteration matrix $T$ of the PPHSS iteration method consist of one eigenvalue with a certain multiplicity, together with the roots of the quadratic equation
(8)
Proof. Notice the similarity of the matrices $T$ and $\hat{T}$. The proof is essentially analogous to that of Lemma 2.3 in [4], with only technical modifications, and is therefore omitted.
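The similarity argument used here (and already in Section 1 for $T$ and $\hat{T}$) rests only on the fact that similar matrices share the same spectrum; a quick numerical sanity check with an arbitrary matrix and an arbitrary invertible transformation (purely illustrative):

import numpy as np

rng = np.random.default_rng(3)
n = 5
T = rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + n * np.eye(n)     # diagonally shifted, hence invertible here

That = np.linalg.solve(P, T @ P)                    # That = P^{-1} T P is similar to T

eig_T = np.sort_complex(np.linalg.eigvals(T))
eig_That = np.sort_complex(np.linalg.eigvals(That))
print(np.allclose(eig_T, eig_That))                 # similar matrices have identical eigenvalues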
Lemma 2.6. If
, then the eigenvalue $\lambda$ of the iteration matrix $T$ satisfies
; if
, then
or
.
Proof. In the first case, we argue by contradiction. By Lemma 2.5, when
, the eigenvalue obviously cannot be equal to 1. We assume
; by some algebra, this can be reduced to
,
where
and
. This is equivalent to
, so
, which contradicts
.
In the second case, we have
and
, which finishes the proof.
Lemma 2.7 ([10]). Both roots of the real quadratic equation $x^{2} - bx + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
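A brute-force numerical confirmation of this classical root criterion (a sketch; the quadratic is taken in the form $x^{2} - bx + c = 0$ as above, and grid points on the boundary of the criterion are skipped to avoid rounding ambiguities):

import numpy as np

def roots_inside_unit_disk(b, c):
    # True iff both roots of x^2 - b*x + c = 0 have modulus strictly less than 1.
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0))

def criterion(b, c):
    # The condition of Lemma 2.7: |c| < 1 and |b| < 1 + c.
    return abs(c) < 1.0 and abs(b) < 1.0 + c

mismatches = []
for b in np.linspace(-2.5, 2.5, 41):
    for c in np.linspace(-1.5, 1.5, 31):
        if abs(abs(c) - 1.0) < 1e-9 or abs(abs(b) - (1.0 + c)) < 1e-9:
            continue                      # skip boundary points of the criterion
        if roots_inside_unit_disk(b, c) != criterion(b, c):
            mismatches.append((b, c))
print("mismatches:", mismatches)          # expected: an empty list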
Theorem 2.8. If the iteration parameters satisfy
(9)
then the pseudo-spectral radius of the PPHSS method satisfies $\nu(T) < 1$.
Proof. Using condition (9), it follows that
. According to Lemma 2.5, if
, we can obtain that
,
and
.
By Lemma 2.7, the eigenvalues $\lambda$ given by the roots of the quadratic equation (8) then satisfy $|\lambda| < 1$.
If
, then by Lemma 2.6 the eigenvalues of $T$ other than 1 are
. According to the definition of the pseudo-spectral radius, we get $\nu(T) < 1$.
Theorem 2.9. Let
and
. Then the optimal value of the iteration parameter for the SPPHSS iteration method is given by
,
and correspondingly,
. (10)
Proof. According to Lemmas 2.5 and 2.6, we know that the eigenvalues of the iteration matrix are
with multiplicity $p$, together with
(11)
If
, the eigenvalues of the form (11) are equal to 1, which cannot affect the value of $\nu(T)$. Therefore, without loss of generality, we only need to discuss the case
. The rest is similar to the proof of Theorem 3.1 in [4] and is omitted here.
3. Numerical Results
In this section, we use an example to demonstrate the numerical results of the PPHSS method as a solver, comparing its iteration steps (IT), elapsed CPU time in seconds (CPU) and relative residual error (RES) with those of other methods. The iteration is terminated once the relative residual of the current iterate satisfies the prescribed tolerance or the prescribed maximum number of iteration steps is exceeded. All the computations are implemented in MATLAB on a PC with an Intel(R) Celeron(R) CPU 1000M @ 1.80 GHz and 2.00 GB of memory.
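A sketch of this kind of residual-based stopping test (the tolerance tol and the iteration cap maxit below are placeholder values, since the actual settings used in the experiments are not stated above):

import numpy as np

def relative_residual(A, z, b):
    # One common definition of RES: ||b - A z|| / ||b|| in the 2-norm.
    return np.linalg.norm(b - A @ z) / np.linalg.norm(b)

def run_until_converged(step, A, b, z0, tol=1e-6, maxit=1000):
    # Apply one sweep of an iterative method (the callable `step`) until
    # RES <= tol or the iteration cap maxit is reached.
    z = z0.copy()
    for k in range(1, maxit + 1):
        z = step(z)
        if relative_residual(A, z, b) <= tol:
            return z, k
    return z, maxit

# Tiny usage example with a Richardson step on an SPD toy matrix (illustration only).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
z, its = run_until_converged(lambda z: z + 0.2 * (b - A @ z), A, b, np.zeros(2))
print(its, relative_residual(A, z, b))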
Example 3.1 ([11]). Consider the saddle point problem (1) with the coefficient matrix in the block form given in [11]: the blocks $B$ and $E$ are built from Kronecker products (the symbol $\otimes$ denotes the Kronecker product), and the right-hand side vector $b$ is chosen correspondingly.
For Example 3.1, we choose
where
is the block diagonal matrix of $B$. From Table 1 it is clear that the pseudo-spectral radii of the PPHSS and SPPHSS methods are much smaller than that of the PHSS method when the optimal parameters are employed.
Table 1. The optimal iteration parameters and pseudo-spectral radius.
In Table 2, we list the numerical results with respect to IT, CPU and RES of the tested methods with different problem sizes $l$. We see that the PPHSS and SPPHSS methods with appropriate parameters always outperform the PHSS method, both as solvers and as preconditioners for GMRES, in iteration steps and CPU time. Notice that
where
. To compute the matrix-vector products with
, we compute incomplete LU factorizations of $B$ and
with drop tolerance 0.001. In the two tables, we use restarted GMRES(18) and preconditioned GMRES(18).
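For reference, a minimal SciPy sketch of a restarted GMRES(18) setup with an ILU-based preconditioner in this spirit (drop tolerance 0.001); the blocks B and E below are small stand-ins rather than the matrices of Example 3.1, and the block-diagonal preconditioner built here is a generic illustrative choice, not the PPHSS preconditioner of (6):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small stand-in saddle point system (NOT the test problem of Example 3.1).
p, q = 100, 40
B = sp.diags([2.0 * np.ones(p), -np.ones(p - 1), -np.ones(p - 1)], [0, 1, -1], format="csc")
E = sp.random(p, q, density=0.05, random_state=4, format="csc")   # not necessarily rank-deficient
A = sp.bmat([[B, E], [-E.T, None]], format="csc")                 # zero (2,2) block
b = np.ones(p + q)

# Incomplete LU factorization of B with drop tolerance 0.001, used inside a
# simple block-diagonal preconditioner diag(B, I) (illustrative choice only).
ilu_B = spla.spilu(B, drop_tol=1e-3)

def apply_prec(v):
    w = v.copy()
    w[:p] = ilu_B.solve(v[:p])
    return w

M = spla.LinearOperator((p + q, p + q), matvec=apply_prec)

# Restarted GMRES(18), without and with the preconditioner; info == 0 signals convergence.
x0, info0 = spla.gmres(A, b, restart=18, maxiter=500)
x1, info1 = spla.gmres(A, b, M=M, restart=18, maxiter=500)
print(info0, info1)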
NOTES
*Corresponding author. This author is supported by the National Natural Science Foundation of China under Grant No. 61572018 and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY15A010016.