Generalized Shift-Splitting Preconditioner for Saddle Point Problems with Block Three-by-Three Structure

Linna Wang, Ke Zhang^{*}

Department of Mathematics, Shanghai Maritime University, Shanghai, China.

**DOI:** 10.4236/oalib.1105968

We propose a generalized shift-splitting iteration method for saddle point problems with block three-by-three structure. As a new iteration method, it converges to the unique solution of the saddle point problem unconditionally. When the splitting is exploited as a preconditioner, the spectral distribution of the preconditioned matrix is investigated. Numerical experiments show that the new variant is efficient in speeding up GMRES for solving the block three-by-three saddle point problem.

Wang, L. and Zhang, K. (2019) Generalized Shift-Splitting Preconditioner for Saddle Point Problems with Block Three-by-Three Structure. *Open Access Library Journal*, **6**, 1-14. doi: 10.4236/oalib.1105968.

1. Introduction

We consider the saddle point problems with the following block three-by-three structure

$\left[\begin{array}{ccc}A& {B}^{\text{T}}& 0\\ -B& 0& -{C}^{\text{T}}\\ 0& C& 0\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=\left[\begin{array}{c}f\\ g\\ h\end{array}\right]\quad \text{or}\quad \mathcal{A}u=b,$ (1)

where $A\in {\mathbb{R}}^{n\times n}$ is a symmetric positive definite matrix, $B\in {\mathbb{R}}^{m\times n}$ and $C\in {\mathbb{R}}^{s\times m}$ are matrices of full row rank, and $f\in {\mathbb{R}}^{n}$, $g\in {\mathbb{R}}^{m}$ and $h\in {\mathbb{R}}^{s}$ are given vectors. The linear system (1) arises in many applications in scientific computing and engineering, including constrained quadratic optimization, constrained least squares problems and computational fluid dynamics [1].

In practice, the coefficient matrix $\mathcal{A}$ in (1) is generally sparse and of large size. Therefore, iterative methods are usually preferred for their low memory requirement per iteration. Among these candidates, the Uzawa-type methods [2], the matrix splitting methods [3] and the Krylov subspace methods [4] have received much attention during the past decades. For instance, the Uzawa method [2] is a classical approach known for its low computational overhead and easy implementation. To improve the numerical performance, several Uzawa-type variants have been proposed recently, including the inexact Uzawa [5] [6] [7], Uzawa-SOR [8] and Uzawa-PSS [9] methods. Another way to solve saddle point problems is to exploit a matrix splitting. By using the Hermitian and skew-Hermitian splitting (HSS), Bai et al. [3] proposed the HSS iteration method. Following this reasoning, efficient variants such as the accelerated Hermitian and skew-Hermitian splitting (AHSS) [10], the modified Hermitian and skew-Hermitian splitting (MHSS) [11] and the relaxed Hermitian and skew-Hermitian splitting (RHSS) [12] have been proposed since then. Recently, sophisticated Krylov subspace methods have been used to solve saddle point problems. Due to the structure of saddle point matrices, however, slow convergence or even stagnation is often witnessed in actual computation. This drawback can be alleviated by employing an appropriate preconditioner. In the context of saddle point problems, many efficient preconditioners, to name just a few, the block diagonal preconditioners, the constrained preconditioners and the HSS-based preconditioners, have been developed; see [13] [14] [15] [16] and the more recent monograph [17] for details.

In [18], Bai et al. proposed a class of shift-splitting iteration methods for solving the non-Hermitian positive definite linear system $Au=b$. It can be recast in the following way. Suppose that $A\in {\mathbb{R}}^{n\times n}$ is a non-Hermitian positive definite matrix. Then the shift-splitting of the coefficient matrix A is given by

$A=M\left(\alpha \right)-N\left(\alpha \right)=\frac{1}{2}\left(\alpha I+A\right)-\frac{1}{2}\left(\alpha I-A\right)\mathrm{,}$

which leads naturally to the shift-splitting iteration scheme

${u}^{k+1}={M}^{-1}\left(\alpha \right)N\left(\alpha \right){u}^{k}+{M}^{-1}\left(\alpha \right)b,\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,2,\cdots $
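As an illustration of this scheme, a minimal sketch follows on a small non-Hermitian positive definite system; the matrix, right-hand side and the value of $\alpha$ are illustrative choices, not data from the paper.

```python
import numpy as np

# Small non-Hermitian positive definite matrix: its symmetric part is SPD.
A = np.array([[4.0, 1.0],
              [-1.0, 3.0]])
b = np.array([1.0, 2.0])
alpha = 1.0
n = A.shape[0]

# Shift-splitting A = M(alpha) - N(alpha).
M = 0.5 * (alpha * np.eye(n) + A)
N = 0.5 * (alpha * np.eye(n) - A)

# Stationary iteration u^{k+1} = M^{-1}(alpha) N(alpha) u^k + M^{-1}(alpha) b.
u = np.zeros(n)
for _ in range(200):
    u = np.linalg.solve(M, N @ u + b)

rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))
assert rho < 1                     # unconditional convergence: rho(M^{-1}N) < 1
assert np.allclose(A @ u, b)       # the fixed point solves Au = b
```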

Theoretical analysis shows that the shift-splitting iteration enjoys unconditional convergence. Exploiting this convergence property, Cao et al. applied the shift-splitting technique to saddle point problems and proposed a shift-splitting preconditioner together with a local shift-splitting variant [19]. Numerical experiments show that the two proposed shift-splitting-type preconditioners are superior to the HSS preconditioner in terms of CPU time and iteration steps. The shift-splitting technique has also been generalized by incorporating two relaxation parameters [20]. Furthermore, Salkuyeh et al. extended it to generalized saddle point problems and proposed a modified generalized shift-splitting preconditioner [21].

By taking into account the special structure of (1), Cao [22] proposes shift-splitting preconditioners for solving (1). Motivated by [20] and [22], we devise a new generalized shift-splitting (GSS) iteration scheme for the structured linear system (1). A corresponding GSS preconditioner is induced to accelerate the convergence of GMRES. Theoretical results on the convergence of the GSS iteration method and the eigenvalue distribution of the preconditioned matrix are also given.

The organization of this paper is as follows. In Section 2, we introduce the generalized shift-splitting iteration method for solving the block three-by-three saddle point problem and look into its convergence property. In Section 3, we obtain a GSS preconditioner derived from the GSS iteration and analyze the spectrum of the preconditioned matrix. In Section 4, some numerical experiments are carried out to illustrate the effectiveness of GSS by comparing it with other shift-splitting preconditioners given in [22]. Finally, some conclusions are given in Section 5.

2. Generalized Shift-Splitting Iteration Method and Its Convergence

The proposed generalized shift-splitting (GSS) iteration is based on the work of [22]. Therefore, it is reasonable for us to recap briefly the shift splitting used in [22] before introducing the new iteration method in this section.

In its simplest form, the shift splitting (SS) of the coefficient matrix $\mathcal{A}$ in (1) goes as follows

$\begin{array}{c}\mathcal{A}=\frac{1}{2}\left(\alpha I+\mathcal{A}\right)-\frac{1}{2}\left(\alpha I-\mathcal{A}\right)\\ =\frac{1}{2}\left[\begin{array}{ccc}\alpha I+A& {B}^{\text{T}}& 0\\ -B& \alpha I& -{C}^{\text{T}}\\ 0& C& \alpha I\end{array}\right]-\frac{1}{2}\left[\begin{array}{ccc}\alpha I-A& -{B}^{\text{T}}& 0\\ B& \alpha I& {C}^{\text{T}}\\ 0& -C& \alpha I\end{array}\right]\mathrm{,}\end{array}$ (2)

where I is the identity matrix of conforming size. The parameter $\alpha $ in (2) monitors the convergence of the SS iteration method and hence the spectrum of the preconditioned matrix. Note that the single parameter $\alpha $ shifts both of the block diagonal matrices in (2) in the same way, i.e.,

$\left[\begin{array}{cc}A& {B}^{\text{T}}\\ -B& 0\end{array}\right]\in {\mathbb{R}}^{\left(m+n\right)\times \left(m+n\right)}\quad \text{and}\quad 0\in {\mathbb{R}}^{s\times s}.$ (3)

However, the rationale for choosing a single parameter to control these two block diagonal matrices simultaneously remains elusive. Instead, it seems more appealing to investigate the convergence when the two block diagonal matrices are governed by two different parameters. This is the main motivation of our work. To this end, we refine the splitting (2) by embedding an extra parameter $\beta $ in the $\left(\mathrm{3,3}\right)$-block, which yields the following generalized shift-splitting

$\begin{array}{c}\mathcal{A}=M\left(\alpha \mathrm{,}\beta \right)-N\left(\alpha \mathrm{,}\beta \right)=\frac{1}{2}\left(\Omega +\mathcal{A}\right)-\frac{1}{2}\left(\Omega -\mathcal{A}\right)\\ =\frac{1}{2}\left[\begin{array}{ccc}\alpha I+A& {B}^{\text{T}}& 0\\ -B& \alpha I& -{C}^{\text{T}}\\ 0& C& \beta I\end{array}\right]-\frac{1}{2}\left[\begin{array}{ccc}\alpha I-A& -{B}^{\text{T}}& 0\\ B& \alpha I& {C}^{\text{T}}\\ 0& -C& \beta I\end{array}\right]\mathrm{.}\end{array}$ (4)

Here $\alpha >0$, $\beta >0$ and $\Omega =\left[\begin{array}{ccc}\alpha I& & \\ & \alpha I& \\ & & \beta I\end{array}\right]$. Then it is trivial to find that the generalized shift-splitting (4) reduces to the shift-splitting (2) if $\alpha =\beta $.

Using the splitting (4), we obtain the following stationary iteration

${u}^{k+1}={\Gamma}_{GSS}{u}^{k}+c\mathrm{,}$ (5)

where ${\Gamma}_{GSS}={M}^{-1}\left(\alpha \mathrm{,}\beta \right)N\left(\alpha \mathrm{,}\beta \right)$, $c={M}^{-1}\left(\alpha \mathrm{,}\beta \right)b$ and $k=\mathrm{0,1,}\cdots $. To put it another way, the generalized shift-splitting (GSS) method for the block three-by-three saddle point problem (1) can be stated as follows. Given an initial vector ${u}^{0}$, compute

$\frac{1}{2}\left[\begin{array}{ccc}\alpha I+A& {B}^{\text{T}}& 0\\ -B& \alpha I& -{C}^{\text{T}}\\ 0& C& \beta I\end{array}\right]{u}^{k+1}=\frac{1}{2}\left[\begin{array}{ccc}\alpha I-A& -{B}^{\text{T}}& 0\\ B& \alpha I& {C}^{\text{T}}\\ 0& -C& \beta I\end{array}\right]{u}^{k}+\left[\begin{array}{c}f\\ g\\ h\end{array}\right]$ (6)

until ${u}^{k}$ converges, where $k=\mathrm{0,1,}\cdots $.
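The iteration (6) can be checked numerically on a toy instance of (1); the matrices and parameter values below are illustrative assumptions chosen only to satisfy the stated hypotheses (A symmetric positive definite, B and C of full row rank).

```python
import numpy as np

# Toy data satisfying the assumptions of (1).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                 # 2x2 SPD
B = np.eye(2)                              # 2x2, full row rank
C = np.array([[1.0, 0.0]])                 # 1x2, full row rank
n, m, s = 2, 2, 1
Z = np.zeros

# Saddle point matrix of (1) and splitting (4) with Omega = diag(aI, aI, bI).
Acal = np.block([[A,         B.T,        Z((n, s))],
                 [-B,        Z((m, m)),  -C.T],
                 [Z((s, n)), C,          Z((s, s))]])
b = np.arange(1.0, n + m + s + 1)
alpha, beta = 1.0, 0.5
Omega = np.diag([alpha] * (n + m) + [beta] * s)
M = 0.5 * (Omega + Acal)
N = 0.5 * (Omega - Acal)

# Theorem 1 predicts rho(Gamma_GSS) < 1 for any alpha, beta > 0.
rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))
assert rho < 1

# GSS iteration (6): M u^{k+1} = N u^k + b, started from the zero vector.
u = np.zeros(n + m + s)
for _ in range(10000):
    u = np.linalg.solve(M, N @ u + b)
    if np.linalg.norm(Acal @ u - b) <= 1e-10 * np.linalg.norm(b):
        break
assert np.allclose(Acal @ u, b)            # converged to the solution of (1)
```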

The GSS iteration method (5) converges for any initial guess if and only if the spectral radius of the iteration matrix ${\Gamma}_{GSS}$ is less than 1. The following results, adapted from [22] and [23], are instrumental in proving this unconditional convergence.

Lemma 1. Suppose that $A\in {\mathbb{R}}^{n\times n}$ is a symmetric positive definite matrix, $B\in {\mathbb{R}}^{m\times n}$ and $C\in {\mathbb{R}}^{s\times m}$ are of full row rank. Let $\lambda $ be an eigenvalue of the saddle point matrix $\mathcal{A}$, then

$Re\left(\lambda \right)>0.$

Proof. The proof can be found in ( [22], Lemma 2.2).

Lemma 2. Let $W\in {\mathbb{R}}^{n\times n}$ be a non-Hermitian positive semidefinite matrix and $E\left(W\mathrm{,}\delta \right)={\left(\delta I+W\right)}^{-1}\left(\delta I-W\right)$ be the extrapolated Cayley transform with $\delta >0$. Then

1) the spectral radius of $E\left(W\mathrm{,}\delta \right)$ is bounded by 1, i.e., $\rho \left(E\left(W\mathrm{,}\delta \right)\right)\le 1$ ;

2) $\rho \left(E\left(W\mathrm{,}\delta \right)\right)=1$ if and only if the matrix W has a (reducing) eigenvalue of the form $\iota \xi $ with $\xi \in \mathbb{R}$ and $\iota $ the imaginary unit.

Proof. See ( [23], Theorem 2.1) for details.

The following theorem on the convergence of GSS iteration method comes out as a result of Lemma 1 and Lemma 2.

Theorem 1. Suppose that $A\in {\mathbb{R}}^{n\times n}$ is a symmetric positive definite matrix, $B\in {\mathbb{R}}^{m\times n}$ and $C\in {\mathbb{R}}^{s\times m}$ are matrices of full row rank. Let $\alpha >0$ and $\beta >0$. Then the spectral radius of ${\Gamma}_{GSS}$ defined in (5) satisfies

$\rho \left({\Gamma}_{GSS}\right)<\mathrm{1,}$

i.e., the GSS iteration method converges to the unique solution of (1) unconditionally.

Proof. By (4) and (5), we rewrite the iteration matrix ${\Gamma}_{GSS}$ as

$\begin{array}{l}{\Gamma}_{GSS}={\left(\Omega +\mathcal{A}\right)}^{-1}\left(\Omega -\mathcal{A}\right)={\Omega}^{-\frac{1}{2}}{\left(I+{\Omega}^{-\frac{1}{2}}\mathcal{A}{\Omega}^{-\frac{1}{2}}\right)}^{-1}\left(I-{\Omega}^{-\frac{1}{2}}\mathcal{A}{\Omega}^{-\frac{1}{2}}\right){\Omega}^{\frac{1}{2}}\mathrm{.}\hfill \end{array}$

Let $\stackrel{^}{\mathcal{A}}={\Omega}^{-\frac{1}{2}}\mathcal{A}{\Omega}^{-\frac{1}{2}}$. It is easy to see that ${\Gamma}_{GSS}$ is similar to the matrix ${\stackrel{^}{\Gamma}}_{GSS}={\left(I+\stackrel{^}{\mathcal{A}}\right)}^{-1}\left(I-\stackrel{^}{\mathcal{A}}\right)$. Therefore, it suffices to prove $\rho \left({\stackrel{^}{\Gamma}}_{GSS}\right)<1$. By the construction of $\stackrel{^}{\mathcal{A}}$, we have

$\begin{array}{c}\stackrel{^}{\mathcal{A}}={\left[\begin{array}{ccc}\alpha I& & \\ & \alpha I& \\ & & \beta I\end{array}\right]}^{-\frac{1}{2}}\left[\begin{array}{ccc}A& {B}^{\text{T}}& 0\\ -B& 0& -{C}^{\text{T}}\\ 0& C& 0\end{array}\right]{\left[\begin{array}{ccc}\alpha I& & \\ & \alpha I& \\ & & \beta I\end{array}\right]}^{-\frac{1}{2}}\\ =\left[\begin{array}{ccc}{\alpha}^{-1}A& {\alpha}^{-1}{B}^{\text{T}}& 0\\ -{\alpha}^{-1}B& 0& -{\left(\alpha \beta \right)}^{-\frac{1}{2}}{C}^{\text{T}}\\ 0& {\left(\alpha \beta \right)}^{-\frac{1}{2}}C& 0\end{array}\right]\mathrm{.}\end{array}$

Since $\alpha $ and $\beta $ are positive, ${\alpha}^{-1}A$ is a symmetric positive definite matrix, and ${\alpha}^{-1}B$ and ${\left(\alpha \beta \right)}^{-\frac{1}{2}}C$ are matrices of full row rank. Thus $\stackrel{^}{\mathcal{A}}$ is a nonsymmetric positive semidefinite matrix. Moreover, it follows from Lemma 1 that $Re\left(\lambda \left(\stackrel{^}{\mathcal{A}}\right)\right)>0$. Therefore, Lemma 2 implies $\rho \left({\stackrel{^}{\Gamma}}_{GSS}\right)\le 1$. The equality $\rho \left({\stackrel{^}{\Gamma}}_{GSS}\right)=1$ holds if and only if $\stackrel{^}{\mathcal{A}}$ has a (reducing) eigenvalue of the form $\iota \xi $, which is impossible because $Re\left(\lambda \left(\stackrel{^}{\mathcal{A}}\right)\right)>0$. It follows that $\rho \left({\stackrel{^}{\Gamma}}_{GSS}\right)<1$, and hence $\rho \left({\Gamma}_{GSS}\right)<1$, which completes the proof.

3. GSS Preconditioner and Its Properties

Though the GSS iteration method is proved to converge unconditionally, its convergence can be rather slow compared with Krylov subspace methods [24]. In the context of linear system solvers, it is therefore more interesting to exploit the generalized shift-splitting as a preconditioner to accelerate Krylov subspace methods such as GMRES.

In this section, we present the following generalized shift-splitting preconditioner (GSS)

${\mathcal{P}}_{GSS}=\frac{1}{2}\left[\begin{array}{ccc}\alpha I+A& {B}^{\text{T}}& 0\\ -B& \alpha I& -{C}^{\text{T}}\\ 0& C& \beta I\end{array}\right]\mathrm{.}$ (7)

When applying a preconditioner, the convergence of the preconditioned linear system is closely related to the eigenvalue distribution. The following result shows that the eigenvalues of the GSS-preconditioned matrix $\mathcal{A}{\mathcal{P}}_{GSS}^{-1}$ lie in a disk centered at the point $\left(\mathrm{1,0}\right)$ with radius less than 1. We adopt right preconditioning here so that the residual of the original saddle point problem coincides with that of its right-preconditioned counterpart.

Theorem 2. Suppose that $A\in {\mathbb{R}}^{n\times n}$ is a symmetric positive definite matrix, $B\in {\mathbb{R}}^{m\times n}$ and $C\in {\mathbb{R}}^{s\times m}$ are matrices of full row rank. Let $\theta $ be an eigenvalue of $\mathcal{A}{\mathcal{P}}_{GSS}^{-1}$, then

$\left|1-\theta \right|<1.$

Proof. Since the preconditioned matrix $\mathcal{A}{\mathcal{P}}_{GSS}^{-1}$ is similar to ${\mathcal{P}}_{GSS}^{-1}\mathcal{A}$, it suffices to investigate the spectrum of ${\mathcal{P}}_{GSS}^{-1}\mathcal{A}$. Since ${\mathcal{P}}_{GSS}=M\left(\alpha \mathrm{,}\beta \right)$, it follows from (4) that

$\begin{array}{c}{\mathcal{P}}_{GSS}^{-1}\mathcal{A}={\mathcal{P}}_{GSS}^{-1}\left(M\left(\alpha \mathrm{,}\beta \right)-N\left(\alpha \mathrm{,}\beta \right)\right)\\ =M{\left(\alpha \mathrm{,}\beta \right)}^{-1}\left(M\left(\alpha \mathrm{,}\beta \right)-N\left(\alpha \mathrm{,}\beta \right)\right)\\ =I-{\Gamma}_{GSS}\mathrm{.}\end{array}$

From Theorem 1, we know that $\left|1-\theta \right|<1$, where $\theta $ is an eigenvalue of ${\mathcal{P}}_{GSS}^{-1}\mathcal{A}$.
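The clustering result of Theorem 2 is easy to verify numerically on a toy instance of (1); the data below are illustrative and merely satisfy the stated assumptions.

```python
import numpy as np

# Toy instance of (1): A SPD, B and C of full row rank (illustrative data).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.eye(2)
C = np.array([[1.0, 0.0]])
n, m, s = 2, 2, 1
Z = np.zeros
Acal = np.block([[A,         B.T,        Z((n, s))],
                 [-B,        Z((m, m)),  -C.T],
                 [Z((s, n)), C,          Z((s, s))]])

# GSS preconditioner (7) for a few parameter pairs (alpha, beta).
for alpha, beta in [(0.01, 0.001), (0.1, 0.1), (1.0, 0.5)]:
    P = 0.5 * np.block(
        [[alpha * np.eye(n) + A, B.T,               Z((n, s))],
         [-B,                    alpha * np.eye(m), -C.T],
         [Z((s, n)),             C,                 beta * np.eye(s)]])
    theta = np.linalg.eigvals(Acal @ np.linalg.inv(P))
    assert np.all(np.abs(1.0 - theta) < 1.0)   # Theorem 2: |1 - theta| < 1
```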

Before ending this section, we give some implementation details for applying the GSS preconditioner. In actual computations, one often needs the matrix-vector product ${\mathcal{P}}_{GSS}^{-1}r$, which amounts to solving the linear system ${\mathcal{P}}_{GSS}z=r$. Note that the preconditioner ${\mathcal{P}}_{GSS}$ can be decomposed as

$\begin{array}{c}{\mathcal{P}}_{GSS}=\frac{1}{2}\left[\begin{array}{ccc}I& {B}^{\text{T}}{\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right)}^{-1}& 0\\ 0& I& -\frac{1}{\beta}{C}^{\text{T}}\\ 0& 0& I\end{array}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot \left[\begin{array}{ccc}\alpha I+A+{B}^{\text{T}}{\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right)}^{-1}B& 0& 0\\ 0& \alpha I+\frac{1}{\beta}{C}^{\text{T}}C& 0\\ 0& 0& \beta I\end{array}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot \left[\begin{array}{ccc}I& 0& 0\\ -{\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right)}^{-1}B& I& 0\\ 0& \frac{1}{\beta}C& I\end{array}\right]\mathrm{.}\end{array}$ (8)

Let $r={\left[{r}_{1}^{\text{T}}\mathrm{,}{r}_{2}^{\text{T}}\mathrm{,}{r}_{3}^{\text{T}}\right]}^{\text{T}}$ and $z={\left[{z}_{1}^{\text{T}}\mathrm{,}{z}_{2}^{\text{T}}\mathrm{,}{z}_{3}^{\text{T}}\right]}^{\text{T}}$, where ${r}_{1}\mathrm{,}{z}_{1}\in {\mathbb{R}}^{n}$, ${r}_{2}\mathrm{,}{z}_{2}\in {\mathbb{R}}^{m}$ and ${r}_{3}\mathrm{,}{z}_{3}\in {\mathbb{R}}^{s}$. By using the decomposition (8), we obtain the following subroutine for solving $z={\mathcal{P}}_{GSS}^{-1}r$.

1) Solve ${w}_{1}$ from $\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right){w}_{1}=2{r}_{2}+\frac{2}{\beta}{C}^{\text{T}}{r}_{3}$ ;

2) Solve ${z}_{1}$ from $\left(\alpha I+A+{B}^{\text{T}}{\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right)}^{-1}B\right){z}_{1}=2{r}_{1}-{B}^{\text{T}}{w}_{1}$ ;

3) Solve ${w}_{2}$ from $\left(\alpha I+\frac{1}{\beta}{C}^{\text{T}}C\right){w}_{2}=B{z}_{1}$ ;

4) Update ${z}_{2}={w}_{1}+{w}_{2}$ ;

5) Update ${z}_{3}=\frac{1}{\beta}\left(2{r}_{3}-C{z}_{2}\right)$.
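The five steps above can be sketched as follows and checked against a direct solve with ${\mathcal{P}}_{GSS}$ from (7); the random data and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 6, 4, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD
B = rng.standard_normal((m, n))        # full row rank (almost surely)
C = rng.standard_normal((s, m))        # full row rank (almost surely)
alpha, beta = 0.1, 0.01
r1, r2, r3 = rng.standard_normal(n), rng.standard_normal(m), rng.standard_normal(s)

S = alpha * np.eye(m) + (1.0 / beta) * C.T @ C   # alpha*I + (1/beta)*C^T C

# Steps 1)-5) for z = P_GSS^{-1} r.
w1 = np.linalg.solve(S, 2.0 * r2 + (2.0 / beta) * C.T @ r3)       # step 1
z1 = np.linalg.solve(alpha * np.eye(n) + A + B.T @ np.linalg.solve(S, B),
                     2.0 * r1 - B.T @ w1)                         # step 2
w2 = np.linalg.solve(S, B @ z1)                                   # step 3
z2 = w1 + w2                                                      # step 4
z3 = (2.0 * r3 - C @ z2) / beta                                   # step 5

# Direct check: assemble P_GSS from (7) and solve P_GSS z = r.
Z = np.zeros
P = 0.5 * np.block([[alpha * np.eye(n) + A, B.T,               Z((n, s))],
                    [-B,                    alpha * np.eye(m), -C.T],
                    [Z((s, n)),             C,                 beta * np.eye(s)]])
z = np.linalg.solve(P, np.concatenate([r1, r2, r3]))
assert np.allclose(np.concatenate([z1, z2, z3]), z)
```

In practice only two coefficient matrices, $\alpha I+\frac{1}{\beta}C^{\text{T}}C$ and the Schur-type matrix in step 2, need to be factorized once and reused across all preconditioner applications.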

In computing ${\mathcal{P}}_{GSS}^{-1}r$, the computational overhead of GSS is the same as that of SS [22]. As indicated in the numerical examples, however, an appropriate choice of $\left(\alpha \mathrm{,}\beta \right)$ generally yields better numerical performance (in terms of CPU time and iteration steps) than using a single parameter $\alpha $ (as in SS). In this sense, GSS can be regarded as a more promising variant than its predecessor SS.

4. Numerical Experiments

In this section, we present some numerical experiments to illustrate the effectiveness of the new preconditioner GSS in terms of the number of iteration steps (iter), CPU time in seconds (time) and the relative residual norm (res). For completeness, we display both the number of restarts and the number of inner steps in the last restart; see, for instance, Table 3. The unpreconditioned GMRES(k), the shift-splitting-preconditioned GMRES(k) (SS) [22] and the relaxed shift-splitting-preconditioned GMRES(k) (RSS) [22] are employed for comparison, where k is the restarting frequency. In what follows, the right preconditioning is used for SS, RSS and GSS, for the reason given in Section 3. We set the restarting frequency k to 5 and the initial guess ${u}^{0}$ to the zero vector. The right-hand side vector b in (1) is given by $10\cdot rand\left(m+n+s\mathrm{,1}\right)$. All algorithms terminate if ${\Vert b-\mathcal{A}{u}^{i}\Vert}_{2}/{\Vert b\Vert}_{2}<{10}^{-6}$ within 1500 restarts, where ${u}^{i}$ is the ith iterate. Numerical experiments are carried out on a laptop with an Intel Core processor and 8 GB RAM, using MATLAB 2014b under Windows 10.

The testing problem is adapted from [25]. It is a block three-by-three saddle point problem with A, B and C of the following structure

$\begin{array}{l}A=\left[\begin{array}{cc}I\otimes T+T\otimes I& 0\\ 0& I\otimes T+T\otimes I\end{array}\right]\in {\mathbb{R}}^{2{p}^{2}\times 2{p}^{2}},\\ B=\left[\begin{array}{cc}I\otimes F& F\otimes I\end{array}\right]\in {\mathbb{R}}^{{p}^{2}\times 2{p}^{2}},\\ C=\begin{array}{c}E\otimes F\end{array}\in {\mathbb{R}}^{{p}^{2}\times {p}^{2}},\end{array}$

where $T=\frac{1}{{h}^{2}}\cdot \text{tridiag}\left(-\mathrm{1,2,}-1\right)\in {\mathbb{R}}^{p\times p}$, $F=\frac{1}{h}\cdot \text{tridiag}\left(\mathrm{0,1,}-1\right)\in {\mathbb{R}}^{p\times p}$, $E=\text{diag}\left(\mathrm{1,}p+\mathrm{1,}\cdots \mathrm{,}{p}^{2}-p+1\right)$ with $\otimes $ being the Kronecker product and $h=\frac{1}{p+1}$ the grid size.
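The assembly of this test problem can be sketched for a small grid size; p = 4 below is an illustrative value (the experiments in this paper use p = 16 and p = 32).

```python
import numpy as np

p = 4
h = 1.0 / (p + 1)
I = np.eye(p)

# T = tridiag(-1, 2, -1)/h^2 and F = tridiag(0, 1, -1)/h
# (entries listed as sub-, main and superdiagonal).
T = (2.0 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)) / h**2
F = (np.eye(p) - np.eye(p, k=1)) / h
E = np.diag(np.arange(1.0, p**2 - p + 2.0, p))    # diag(1, p+1, ..., p^2-p+1)

K = np.kron(I, T) + np.kron(T, I)
A = np.block([[K, np.zeros((p**2, p**2))],
              [np.zeros((p**2, p**2)), K]])        # 2p^2 x 2p^2, SPD
B = np.hstack([np.kron(I, F), np.kron(F, I)])      # p^2 x 2p^2
C = np.kron(E, F)                                  # p^2 x p^2

assert A.shape == (2 * p**2, 2 * p**2) and B.shape == (p**2, 2 * p**2)
assert np.linalg.matrix_rank(B) == p**2            # full row rank
assert np.linalg.matrix_rank(C) == p**2
assert np.allclose(A, A.T) and np.all(np.linalg.eigvalsh(A) > 0)
```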

4.1. Choice of the Parameter Pair $\left(\alpha ,\beta \right)$

As mentioned in Section 2, the main motivation for introducing an extra parameter $\beta $ in GSS is to strike a balance in controlling the sub-matrices in (3). A proper choice of $\alpha $ and $\beta $ makes the GSS preconditioner more appealing. As shown in (7), the GSS preconditioner ${\mathcal{P}}_{GSS}$ becomes closer to (half of) the saddle point matrix $\mathcal{A}$ as $\alpha $ and $\beta $ decrease, which suggests that $\alpha =\beta =0$ would be optimal. However, we do not recommend extremely small values of $\alpha $ and $\beta $ for reasons of numerical stability. Instead, we use experimentally optimal values of $\alpha $ and $\beta $ in the following experiments.

To this end, we first tabulate the CPU time of the GSS preconditioner with varying $\alpha $ and $\beta $ for $p=16$ and $p=32$ in Table 1 and Table 2, respectively. The parameter pair $\left(\alpha \mathrm{,}\beta \right)=\left(\mathrm{0.01,0.001}\right)$ yields the shortest CPU time in both tables. To further verify this finding, we carry out more experiments in Figure 1 and Figure 2 by depicting the number of total iteration steps^{1} and the CPU time of GSS with varying $\beta $ when $\alpha $ is chosen to be 0.01, 0.1 and 1, respectively. Some remarks are in order. Loosely speaking, for a fixed value of $\alpha $, the number of total iteration steps and the CPU time increase as $\beta $ goes up, although there are some zigzags; see Figure 1 and Figure 2. Moreover, a similar pattern is observed for $\alpha $ when the value of $\beta $ is fixed while $\alpha $ varies; for instance, when $\beta =50$, the total iteration steps increase as $\alpha $ changes from 0.01 to 1, which can be read readily by comparing subplots (a), (c) and (e) in Figure 1. This justifies our choice of $\left(\alpha \mathrm{,}\beta \right)=\left(\mathrm{0.01,0.001}\right)$ as the experimental optimal value.

Based on the above experiments, we use $\left(\alpha \mathrm{,}\beta \right)=\left(\mathrm{0.01,0.001}\right)$ as the experimental optimal choice for $\alpha $ and $\beta $ in GSS. As for the optimal value of $\alpha $ in SS and RSS, we follow the choice $\alpha =0.01$ used in [22] and regard it as the empirical optimal value.

Table 1. CPU time of the GSS preconditioner with different values of $\alpha $ and $\beta $ ( $p=16$ ).

4.2. Comparison with Other SS-Type Methods

In this subsection, we compare the numerical performance of GSS with SS and RSS in terms of CPU time and iteration steps. It should be noted that all sub-linear systems involved in computing the matrix-vector product ${\mathcal{P}}^{-1}r$ are solved by direct methods.

Table 2. CPU time of the GSS preconditioner with different values of $\alpha $ and $\beta $ ( $p=32$ ).

Figure 1. Total iteration steps (left) and CPU time (right) of the GSS preconditioner for different values of $\alpha $ and $\beta $ ( $p=16$ ).

Figure 2. Total iteration steps (left) and CPU time (right) of the GSS preconditioner for different values of $\alpha $ and $\beta $ ( $p=32$ ).

It is known that the spectrum has a great effect on the convergence of the preconditioned linear system. To begin the discussion, we plot the spectral distributions of the original saddle point matrix $\mathcal{A}$, the GSS-preconditioned matrix $\mathcal{A}{\mathcal{P}}_{GSS}^{-1}$, the SS-preconditioned matrix $\mathcal{A}{\mathcal{P}}_{SS}^{-1}$, and the RSS-preconditioned matrix $\mathcal{A}{\mathcal{P}}_{RSS}^{-1}$ in Figure 3. As illustrated, the eigenvalues of the original saddle point matrix $\mathcal{A}$ are widely spread, which hampers the unpreconditioned GMRES method. After employing the SS-type preconditioners, however, the eigenvalues of the preconditioned matrices become much more clustered. In particular, the eigenvalues of the GSS-preconditioned matrix are more clustered than those of SS and RSS; see Figure 3. Thus it can be expected that GSS will bring about an improvement over SS and RSS in solving (1). This intuition is echoed by Table 3, where the saddle point problem (1) is considered for four different sizes. In [22], it is stated that the iteration steps of both the SS- and RSS-preconditioned iteration methods remain constant even as the problem size $4{p}^{2}$ grows. Fortunately, this desirable property carries over to GSS. Furthermore, GSS outperforms SS and RSS in that it requires the shortest CPU time and the fewest iteration steps in all test problems, as observed in Table 3. In light of this, we conclude that GSS is a competitive shift-splitting variant for solving the structured saddle point problem (1).

Figure 3. Eigenvalue distributions of the unpreconditioned saddle point matrix and of three SS-type preconditioned matrices ( $p=32$ ).

Table 3. Numerical results for three preconditioned GMRES methods with different grid sizes.

5. Conclusion

The shift-splitting iteration method/preconditioner for solving the block three-by-three saddle point problem (1) uses a single parameter to control the convergence. Being aware of the different roles of the block diagonal matrices in the saddle point matrix, we introduce an additional parameter to monitor the $\left(\mathrm{3,3}\right)$-block, which yields a generalized shift-splitting iteration method and a corresponding preconditioner. By examining the spectrum of the iteration matrix, we prove the unconditional convergence of the proposed method. We also apply the induced generalized shift-splitting preconditioner to speed up the convergence of GMRES. It is proved that the preconditioned matrix has a well-clustered spectrum, which is verified by the numerical examples. Future work includes finding the theoretically optimal choice of the parameters in the new scheme and developing other efficient shift-splitting preconditioners.

Acknowledgements

This work is supported by the National Natural Science Foundation (grant number: 11601323) and the Key Discipline Fund (2018) of College of Arts and Sciences in Shanghai Maritime University. The authors would like to thank the referees for their constructive suggestions.

NOTES

^{1}By “total iteration steps”, we mean the total number of matrix-vector products associated with $\mathcal{A}{\mathcal{P}}_{GSS}^{-1}$; for instance, if “iter”, as in Table 3, is denoted by 10(3), then the number of total iteration steps will be $\left(10-1\right)\times 5+3=48$. In general, if “iter” is denoted by $i\left(j\right)$, then the number of total iteration steps will be $\left(i-1\right)\times 5+j$.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Benzi, M., Golub, G.H. and Liesen, J. (2005) Numerical Solution of Saddle Point Problems. Acta Numerica, 14, 1-137. https://doi.org/10.1017/S0962492904000212

[2] Arrow, K.J., Hurwicz, L. and Uzawa, H. (1958) Studies in Linear and Non-Linear Programming. Stanford University Press, Redwood City, CA.

[3] Bai, Z.-Z., Golub, G.H. and Ng, M.K. (2003) Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Definite Linear Systems. SIAM Journal on Matrix Analysis and Applications, 24, 603-626. https://doi.org/10.1137/S0895479801395458

[4] Bai, Z.-Z. (2015) Motivations and Realizations of Krylov Subspace Methods for Large Sparse Linear Systems. Journal of Computational and Applied Mathematics, 283, 71-78. https://doi.org/10.1016/j.cam.2015.01.025

[5] Bramble, J.H., Pasciak, J.E. and Vassilev, A.T. (1997) Analysis of the Inexact Uzawa Algorithm for Saddle Point Problems. SIAM Journal on Numerical Analysis, 34, 1072-1092. https://doi.org/10.1137/S0036142994273343

[6] Cao, Z.-H. (2003) Fast Uzawa Algorithm for Generalized Saddle Point Problems. Applied Numerical Mathematics, 46, 157-171. https://doi.org/10.1016/S0168-9274(03)00023-0

[7] Bai, Z.-Z. and Wang, Z.-Q. (2008) On Parameterized Inexact Uzawa Methods for Generalized Saddle Point Problems. Linear Algebra and Its Applications, 428, 2900-2932. https://doi.org/10.1016/j.laa.2008.01.018

[8] Zhang, J. and Shang, J. (2010) A Class of Uzawa-SOR Methods for Saddle Point Problems. Applied Mathematics and Computation, 216, 2163-2168. https://doi.org/10.1016/j.amc.2010.03.051

[9] Cao, Y. and Yi, S.-C. (2016) A Class of Uzawa-PSS Iteration Methods for Nonsingular and Singular Non-Hermitian Saddle Point Problems. Applied Mathematics and Computation, 275, 41-49. https://doi.org/10.1016/j.amc.2015.11.049

[10] Bai, Z.-Z. and Golub, G.H. (2007) Accelerated Hermitian and Skew-Hermitian Splitting Iteration Methods for Saddle-Point Problems. IMA Journal of Numerical Analysis, 27, 1-23. https://doi.org/10.1093/imanum/drl017

[11] Bai, Z.-Z., Benzi, M. and Chen, F. (2010) Modified HSS Iteration Methods for a Class of Complex Symmetric Linear Systems. Computing, 87, 93-111. https://doi.org/10.1007/s00607-010-0077-0

[12] Cao, Y., Yao, L., Jiang, M. and Niu, Q. (2013) A Relaxed HSS Preconditioner for Saddle Point Problems from Meshfree Discretization. Journal of Computational Mathematics, 31, 398-421. https://doi.org/10.4208/jcm.1304-m4209

[13] Pan, J.-Y., Ng, M.K. and Bai, Z.-Z. (2006) New Preconditioners for Saddle Point Problems. Applied Mathematics and Computation, 172, 762-771. https://doi.org/10.1016/j.amc.2004.11.016

[14] Benzi, M. and Guo, X.-P. (2011) A Dimensional Split Preconditioner for Stokes and Linearized Navier-Stokes Equations. Applied Numerical Mathematics, 61, 66-76. https://doi.org/10.1016/j.apnum.2010.08.005

[15] Zhang, G.-F., Ren, Z.-R. and Zhou, Y.-Y. (2011) On HSS-Based Constraint Preconditioners for Generalized Saddle-Point Problems. Numerical Algorithms, 57, 273-287. https://doi.org/10.1007/s11075-010-9428-3

[16] Wang, N.-N. and Li, J.-C. (2019) A Class of New Extended Shift-Splitting Preconditioners for Saddle Point Problems. Journal of Computational and Applied Mathematics, 357, 123-145. https://doi.org/10.1016/j.cam.2019.02.015

[17] Rozložník, M. (2018) Saddle-Point Problems and Their Iterative Solution. Birkhäuser. https://doi.org/10.1007/978-3-030-01431-5

[18] Bai, Z.-Z., Yin, J.-F. and Su, Y.-F. (2006) A Shift-Splitting Preconditioner for Non-Hermitian Positive Definite Matrices. Journal of Computational Mathematics, 24, 539-552.

[19] Cao, Y., Du, J. and Niu, Q. (2014) Shift-Splitting Preconditioners for Saddle Point Problems. Journal of Computational and Applied Mathematics, 272, 239-250. https://doi.org/10.1016/j.cam.2014.05.017

[20] Cao, Y., Tao, H. and Jiang, M. (2014) Generalized Shift Splitting Preconditioners for Saddle Point Problems. Mathematica Numerica Sinica, 36, 16-26. (In Chinese)

[21] Salkuyeh, D.K., Masoudi, M. and Hezari, D. (2015) On the Generalized Shift-Splitting Preconditioner for Saddle Point Problems. Applied Mathematics Letters, 48, 55-61. https://doi.org/10.1016/j.aml.2015.02.026

[22] Cao, Y. (2019) Shift-Splitting Preconditioners for a Class of Block Three-by-Three Saddle Point Problems. Applied Mathematics Letters, 96, 40-46. https://doi.org/10.1016/j.aml.2019.04.006

[23] Bai, Z.-Z. and Hadjidimos, A. (2014) Optimization of Extrapolated Cayley Transform with Non-Hermitian Positive Definite Matrix. Linear Algebra and Its Applications, 463, 322-339. https://doi.org/10.1016/j.laa.2014.08.021

[24] Zhang, K., Zhang, J.-L. and Gu, C.-Q. (2017) A New Relaxed PSS Preconditioner for Nonsymmetric Saddle Point Problems. Applied Mathematics and Computation, 308, 115-129. https://doi.org/10.1016/j.amc.2017.03.022

[25] Huang, N. and Ma, C.-F. (2019) Spectral Analysis of the Preconditioned System for the 3 × 3 Block Saddle Point Problem. Numerical Algorithms, 81, 421-444. https://doi.org/10.1007/s11075-018-0555-6

