A Relaxed Greedy Block Kaczmarz Method for Solving Large Consistent Linear Systems

Many problems in science and engineering require solving large consistent linear systems. This paper presents a relaxed greedy block Kaczmarz method (RGBK) and an accelerated greedy block Kaczmarz method (AGBK) for solving large consistent linear systems. The RGBK algorithm extends the greedy block Kaczmarz algorithm (GBK) presented by Niu and Zheng in [1] by introducing a relaxation parameter into the iteration formula of GBK, and the AGBK algorithm uses a different iterative update rule to reduce the running time. The convergence of RGBK is proved, and a method to determine an optimal relaxation parameter is provided. Several examples are presented to show the effectiveness of the proposed methods for overdetermined and underdetermined consistent linear systems with dense and sparse coefficient matrices.


Introduction
We are concerned with the solution of the large consistent linear system
$$Ax = b, \qquad (1)$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$. The Kaczmarz method in [2] is possibly one of the oldest and most widely used row-action iterative methods for solving (1). Many extended Kaczmarz algorithms [8]-[16] have been developed to solve (1).
Strohmer and Vershynin in [17] introduced a randomized Kaczmarz method (RK) for consistent overdetermined systems (1). The RK method admits a convergence bound with expected exponential convergence, also called linear convergence. Zhang [18] proposed an improved greedy Kaczmarz (GK) method for solving (1). Bai and Wu in [19] presented a greedy randomized Kaczmarz algorithm (GRK) for (1) when the system is consistent; at each iteration, GRK uses a probability criterion that favors the larger entries of the residual vector. Bai and Wu [20] further developed a relaxed GRK method for large sparse systems (1). Due to its fast convergence, the block method [21] [22] [23] [24] has also been extensively developed for linear and nonlinear optimization problems. Recently, Niu and Zheng in [1] presented a greedy block Kaczmarz algorithm (GBK) with the iteration
$$x_{k+1} = x_k + A_{\mathcal{J}_k}^{\dagger}\left(b_{\mathcal{J}_k} - A_{\mathcal{J}_k} x_k\right), \qquad (2)$$
where the index set $\mathcal{J}_k$ of selected rows is determined greedily with a parameter $\eta \in (0,1]$:
$$\mathcal{J}_k = \left\{\, i \;:\; \left|b_i - A_{(i)} x_k\right|^2 \ge \eta \max_{1 \le j \le m} \left|b_j - A_{(j)} x_k\right|^2 \,\right\}.$$
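The GBK step described above, greedy row selection followed by a pseudoinverse projection onto the selected block, can be sketched in NumPy as follows. The function name `gbk_step` and the fixed-iteration usage are illustrative, not from the paper.

```python
import numpy as np

def gbk_step(A, b, x, eta=0.5):
    """One step of a greedy block Kaczmarz (GBK) iteration (sketch).

    Select the rows whose squared residuals are within a factor eta of
    the largest one, then project x onto the solution set of that block
    via the Moore-Penrose pseudoinverse A_J^+.
    """
    r = b - A @ x                                   # residual vector
    J = np.where(r**2 >= eta * np.max(r**2))[0]     # greedy index set J_k
    # x_{k+1} = x_k + A_J^+ (b_J - A_J x_k)
    return x + np.linalg.pinv(A[J]) @ r[J]

# Usage: iterate on a small consistent system.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = gbk_step(A, b, x)
```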

The Relaxed Greedy Block Kaczmarz Algorithm
Replacing the left-hand side $x_{k+1}$ of (2) by the combination $\lambda x_{k+1} + (1-\lambda) x_k$, i.e., introducing a relaxation parameter $\lambda$, yields the iteration
$$x_{k+1} = x_k + \lambda\, A_{\mathcal{J}_k}^{\dagger}\left(b_{\mathcal{J}_k} - A_{\mathcal{J}_k} x_k\right). \qquad (6)$$
The method defined by (6) is called the relaxed greedy block Kaczmarz algorithm, abbreviated RGBK. Algorithm 1 summarizes the RGBK algorithm.
We remark that the iteration (6) reduces to (2) when $\lambda = 1$; thus the GBK method in [1] is a special case of Algorithm 1.
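The relaxed update (6) differs from the GBK step only by the scalar $\lambda$ multiplying the projection. A minimal NumPy sketch, with illustrative names and a default $\lambda = 1.3$ chosen only for demonstration:

```python
import numpy as np

def rgbk_step(A, b, x, eta=0.5, lam=1.3):
    """One RGBK step: the GBK projection scaled by a relaxation
    parameter lam in (0, 2); lam = 1 recovers the GBK update (2)."""
    r = b - A @ x
    J = np.where(r**2 >= eta * np.max(r**2))[0]
    return x + lam * (np.linalg.pinv(A[J]) @ r[J])

# Over-relaxation (lam > 1) still converges on a consistent system.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = rgbk_step(A, b, x)
```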
The results below establish the convergence of Algorithm 1.
Theorem 1. Let the linear system (1) be consistent and let $\lambda \in (0,2)$. Then the iteration error of Algorithm 1 satisfies the one-step bound (7).
Proof. According to the definition of $\mathcal{J}_k$ at Step 3 in Algorithm 1, the estimates (8)-(10) hold, and (8) together with (9) and (10) implies (7). This completes the proof. □
We derive easily the results below from Theorem 1.
Corollary 1. Under the conditions of Theorem 1, we obtain an upper bound on the error.
Proof. Applying (7) iteratively for $k = 1, 2, \ldots$ gives (11). □
The upper bound on the error below is independent of $\mathcal{J}_k$.
Corollary 2. Under the conditions of Theorem 1, the bound (7) becomes (12).
Proof. The bound (12) follows from (7). □
Remark 1. The RGBK method reduces to the GBK method in [1] when $\lambda = 1$. The examples in Section 3 provide a way to determine the optimal value of the relaxation parameter $\lambda$ that minimizes the CPU time or the number of iterations for both overdetermined and underdetermined systems.
Remark 2. Taking into account the limitation of computer memory, we use the Gaussian Kaczmarz update defined in [22] instead of (6), which avoids the computation of the pseudoinverse $A_{\mathcal{J}_k}^{\dagger}$; this method is abbreviated AGBK.
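Remark 2 replaces the pseudoinverse projection with a Gaussian-Kaczmarz-type update. The exact AGBK rule follows the reference cited in the paper; a common pseudoinverse-free form, sketched here under the assumption that the search direction is $A_{\mathcal{J}_k}^{T} r_{\mathcal{J}_k}$ with an exact line-search step length, is:

```python
import numpy as np

def agbk_step(A, b, x, eta=0.5):
    """Pseudoinverse-free block step in the spirit of AGBK (sketch).

    Instead of applying A_J^+, take one exact line-search step along
    d = A_J^T r_J, which costs only matrix-vector products:
        x_{k+1} = x_k + (||r_J||^2 / ||d||^2) d.
    """
    r = b - A @ x
    J = np.where(r**2 >= eta * np.max(r**2))[0]
    d = A[J].T @ r[J]                  # search direction A_J^T r_J
    step = (r[J] @ r[J]) / (d @ d)     # exact line-search step length
    return x + step * d

# Usage: iterate until the residual is negligible (guard avoids 0/0 at
# the exact solution).
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(500):
    if np.linalg.norm(b - A @ x) < 1e-10:
        break
    x = agbk_step(A, b, x)
```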

Numerical Examples
In this section, we give several examples to show the efficiency of our RGBK and AGBK methods and compare them with the GBK method in [1]. All experiments are carried out in MATLAB 2020b on a computer with a 3.00 GHz processor and 16 GB RAM.
We first compute the solution of the consistent system (1) whose coefficient matrices are dense random matrices generated by the MATLAB function randn, and determine an optimal parameter $\eta$ that minimizes the IT (number of iterations) or the CPU time. Figure 1 plots CPU versus $\eta$; from Figure 1 we choose the optimal parameter $\eta = 0.2$, which minimizes the CPU time of Algorithm 1 for these consistent systems (1). The optimal relaxation parameter $\lambda$ of RGBK is determined in the same way, as shown in Figure 2, and we strictly follow the method of controlled variables: GBK and RGBK are assigned the same $\eta$ by default. Table 3 reports the corresponding results. We also consider the recovery of a CT image [25]: with $b = A x_*$, RGBK and AGBK are used to recover $x_*$ from $b$ and are compared with the GBK method. Table 5 reports the IT, CPU(s) and RSE of RGBK and AGBK compared with GBK for an overdetermined consistent sparse matrix, where $\mathrm{RSE} \le 10^{-6}$. Figure 4 shows the images recovered by GBK, RGBK and AGBK together with the original image.
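The parameter-selection procedure described above, scanning a grid of $\lambda$ values on a randn-type test system and keeping the minimizer of IT, can be sketched as follows. The helper `rgbk_iters`, the grid, the tolerance, and the problem sizes are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def rgbk_iters(A, b, lam, eta=0.2, tol=1e-6, max_it=5000):
    """Count RGBK iterations until the relative residual drops below tol
    (illustrative helper; stopping rule assumed, not from the paper)."""
    x = np.zeros(A.shape[1])
    nb = np.linalg.norm(b)
    for k in range(max_it):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * nb:
            return k
        J = np.where(r**2 >= eta * np.max(r**2))[0]
        x = x + lam * (np.linalg.pinv(A[J]) @ r[J])
    return max_it

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))      # dense randn-style test matrix
b = A @ rng.standard_normal(20)         # consistent right-hand side

# Scan the relaxation parameter over (0, 2) and keep the minimizer of IT.
lams = np.round(np.arange(0.2, 2.0, 0.2), 1)
its = [rgbk_iters(A, b, lam) for lam in lams]
lam_opt = lams[int(np.argmin(its))]
```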
It can be seen from Table 5 that RGBK(1.3) and AGBK require lower IT and CPU(s) than GBK(0.2) for restoring CT images, which shows that RGBK and AGBK converge faster than GBK does when the parameter $\lambda$ is selected appropriately.

Conclusion
We have presented a relaxed GBK algorithm, abbreviated RGBK, for solving large consistent linear systems. The RGBK method extends the GBK method in [1]. Its convergence is proved, and a method is given to determine an optimal relaxation parameter for RGBK. In addition, AGBK effectively reduces the running time of RGBK. The examples for the different cases show the advantage of the proposed RGBK and AGBK methods, provided the optimal relaxation parameter $\lambda_{\mathrm{opt}}$ is determined.