BLU Factorization for Block Tridiagonal Matrices and Its Error Analysis

Chi-Ye Wu

Shenzhen Tourism College, Jinan University, Shenzhen, China.

**DOI:** 10.4236/alamt.2012.24006

A block representation of the *BLU* factorization for block tridiagonal matrices is presented. Some properties of the factors obtained in the course of the factorization are studied, and simpler expressions for the errors incurred during the factorization of block tridiagonal matrices are derived.

Share and Cite:

C. Wu, "BLU Factorization for Block Tridiagonal Matrices and Its Error Analysis," *Advances in Linear Algebra & Matrix Theory*, Vol. 2 No. 4, 2012, pp. 39-42. doi: 10.4236/alamt.2012.24006.

1. Introduction

Tridiagonal matrices are connected with different areas of science and engineering, including telecommunication system analysis [1] and finite difference methods for solving partial differential equations [2-4].

The backward error analysis is one of the most powerful tools for studying the accuracy and stability of numerical algorithms. A backward analysis for the LU factorization and for the solution of the associated triangular linear systems was presented by Amodio and Mazzia [5]. BLU factorization appears to have first been proposed for block tridiagonal matrices, which frequently arise in the discretization of partial differential equations. References relevant to this application include Isaacson and Keller [6], Bank and Rose [7], Mattheij [8], Concus, Golub and Meurant [9], Varah [10], Bank and Rose [11], and Yalamov and Pavlov [12]. For block dense matrices, Demmel and Higham [13] presented an error analysis of BLU factorization, and Demmel, Higham and Schreiber [14] extended this earlier analysis.

This paper is organized as follows. In Section 2 we present the representation of the BLU factorization for block tridiagonal matrices. In Section 3 some properties of the factors associated with the factorization are given. Finally, using level-3 BLAS based on fast matrix multiplication techniques, an error analysis of the factorization is given in Section 4.

Throughout, we use the "standard model" of floating-point arithmetic, in which the evaluation of an expression in floating-point arithmetic is denoted by $fl(\cdot)$, with

$$fl(x \mathbin{\mathrm{op}} y) = (x \mathbin{\mathrm{op}} y)(1 + \delta), \qquad |\delta| \le u, \qquad \mathrm{op} \in \{+, -, \times, /\}$$

(see Higham [15] for details). Here $u$ is the unit roundoff associated with the particular machine being used. Unless otherwise stated, an unsubscripted norm $\|\cdot\|$ denotes the $\infty$-norm.
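As an illustration (not from the paper), the standard model can be checked numerically in IEEE double precision, where the unit roundoff is $u = 2^{-53}$; exact rational arithmetic recovers the relative error $\delta$ of a single floating-point addition:

```python
from fractions import Fraction

# Unit roundoff for IEEE double precision: u = 2^-53.
u = Fraction(1, 2 ** 53)

# A single operation satisfies fl(x + y) = (x + y)(1 + delta), |delta| <= u.
x, y = 0.1, 0.3
exact = Fraction(x) + Fraction(y)   # exact sum of the two binary floats
computed = Fraction(x + y)          # fl(x + y), the rounded result
delta = abs(computed - exact) / exact

print(delta <= u)  # correctly rounded addition obeys the standard model
```

Because binary floats convert exactly to `Fraction`, the comparison against the bound $u$ is exact, not approximate.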

2. Representation of BLU Factorization for Block Tridiagonal Matrices

Consider a nonsingular block tridiagonal matrix

$$A = \begin{pmatrix} A_1 & C_1 & & \\ B_2 & A_2 & C_2 & \\ & \ddots & \ddots & \ddots \\ & & B_N & A_N \end{pmatrix}, \qquad (1)$$

where the diagonal blocks $A_i \in \mathbb{R}^{n_i \times n_i}$, $i = 1, \dots, N$, are nonsingular, and the off-diagonal blocks $B_i$ and $C_{i-1}$ with $2 \le i \le N$ are arbitrary. We present the following factorization of $A$. The first step is represented as follows:

$$A = L_1 A^{(1)}, \qquad L_1 = \begin{pmatrix} I_{n_1} & & & \\ B_2 A_1^{-1} & I_{n_2} & & \\ & & \ddots & \\ & & & I_{n_N} \end{pmatrix}, \qquad A^{(1)} = \begin{pmatrix} A_1 & C_1 & & \\ & \tilde{A}_2 & C_2 & \\ & B_3 & A_3 & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$

where $I_{n_i}$ is the identity matrix of order $n_i$, and $\tilde{A}_2 = A_2 - B_2 A_1^{-1} C_1$.

The second step of the factorization is applied to the trailing submatrix of $A^{(1)}$ in order to obtain a matrix with the sub-block $\tilde{A}_3 = A_3 - B_3 \tilde{A}_2^{-1} C_2$, giving $A = L_1 L_2 A^{(2)}$.

Applying the method recursively, each step eliminates one subdiagonal block and replaces the corresponding diagonal block by the Schur complement $\tilde{A}_i = A_i - B_i \tilde{A}_{i-1}^{-1} C_{i-1}$ (with $\tilde{A}_1 = A_1$). After $N-1$ steps the block $\tilde{A}_N$ is reached and the factorization ends; we obtain

$$A = LU,$$

where $L = L_1 L_2 \cdots L_{N-1}$ is unit block lower bidiagonal and $U$ is block upper bidiagonal with diagonal blocks $\tilde{A}_i$ and superdiagonal blocks $C_i$. From the process of the representation obtained, we get the following results:

1) Taking the second step for example, if $\tilde{A}_2$ is nonsingular then we can factor the trailing submatrix in a similar manner, and this process can be continued recursively to obtain the complete block LU factorization;

2) There are clear differences between the partitioned LU factorization (see [15] for further details), Gaussian elimination, and the block LU factorization considered in this paper.
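The factorization steps above can be sketched in code. The following is a minimal illustration, not from the paper: it assumes diagonal blocks $A_i$, subdiagonal blocks $B_i$, and superdiagonal blocks $C_i$, forms $L_i = B_i \tilde{A}_{i-1}^{-1}$ at each step, and returns the blocks of the two factors. The function name and list-of-blocks interface are illustrative choices.

```python
import numpy as np

def block_tridiag_lu(A_blocks, B_blocks, C_blocks):
    """Sketch of the BLU factorization of a block tridiagonal matrix.

    A_blocks : diagonal blocks A_1, ..., A_N (square, assumed nonsingular)
    B_blocks : subdiagonal blocks B_2, ..., B_N
    C_blocks : superdiagonal blocks C_1, ..., C_{N-1}

    Returns the subdiagonal blocks L_i = B_i * Atilde_{i-1}^{-1} of the unit
    block lower bidiagonal factor L, and the diagonal blocks Atilde_i (the
    Schur complements) of the block upper bidiagonal factor U; the
    superdiagonal blocks of U are the original C_i.
    """
    L_sub, U_diag = [], [A_blocks[0]]              # Atilde_1 = A_1
    for i in range(1, len(A_blocks)):
        # L_i = B_i Atilde_{i-1}^{-1}, computed via a solve, not an inverse
        L_i = np.linalg.solve(U_diag[-1].T, B_blocks[i - 1].T).T
        # Schur complement: Atilde_i = A_i - L_i C_{i-1}
        U_diag.append(A_blocks[i] - L_i @ C_blocks[i - 1])
        L_sub.append(L_i)
    return L_sub, U_diag
```

Assembling $L$ and $U$ from these blocks and forming $LU$ reproduces $A$ up to rounding error.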

3. Some Properties of the Factors of BLU Factorization

For block diagonal dominance by rows, Schur complements under BLU factorization behave much as in the point diagonally dominant case: they inherit the key property of the original matrix. For the factors $L$ and $U$, we have the following theorem.

Theorem 3.1. Let $A$ in (1) be nonsingular and block diagonally dominant by rows (columns). Then the factors $L$ and $U$ preserve the same property.

Proof. By the process of the factorization, it follows that

$$L = L_1 L_2 \cdots L_{N-1}.$$

Since $A$ is block diagonally dominant, by the definition of block diagonal dominance $L_1$ preserves the same property as the matrix $A$. For the later factors, the definition of block diagonal dominance applied to the Schur complements $\tilde{A}_i$ shows that the matrices $L_2, L_3, \dots$ are also block diagonally dominant. The result follows by induction, that is, $L$ also preserves the same property as the matrix $A$. For the matrix $U$, whose diagonal blocks are the Schur complements $\tilde{A}_i$ and whose superdiagonal blocks are the $C_i$, the above argument shows that each $\tilde{A}_i$ inherits the dominance of $A$; it follows that $U$ is also block diagonally dominant. □

The question is whether the matrices arising at every step of the factorization all inherit the same property as the matrix $A$. The answer is negative. This can be seen from a block tridiagonal counterexample: the blocks can be chosen so that the defining inequalities hold and the matrix is block diagonally dominant by rows, so that its first Schur complement is also block diagonally dominant by rows, and yet some of the matrices produced in the later steps fail to be block diagonally dominant by rows. □

Only if the matrix $A$ in (1) is block diagonally dominant by columns do the matrices arising at every step of the factorization preserve the key property of $A$. The reason is as follows.

Based on the definition of block diagonal dominance by columns and the key property of $A$, the blocks of the first reduced matrix satisfy the column-dominance inequalities. Therefore the matrices obtained after the first step are also block diagonally dominant by columns. Similarly, by induction, the reduced matrices at every step are block diagonally dominant by columns, and hence they all preserve the key property of $A$.
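The column-dominance argument can be illustrated numerically. The sketch below is not from the paper: the 1-norm, the block sizes, the test matrix, and the helper name `inv_norm_recip` are all illustrative choices. It checks that a block tridiagonal matrix that is block diagonally dominant by columns remains so after one elimination step.

```python
import numpy as np

def inv_norm_recip(M):
    # "Diagonal strength" used in block diagonal dominance: 1 / ||M^{-1}||_1
    return 1.0 / np.linalg.norm(np.linalg.inv(M), 1)

nb = 3
rng = np.random.default_rng(1)
# Three strongly dominant diagonal blocks and small off-diagonal blocks
Ad = [10 * np.eye(nb) + rng.standard_normal((nb, nb)) for _ in range(3)]
B = [0.2 * rng.standard_normal((nb, nb)) for _ in range(2)]  # B2, B3
C = [0.2 * rng.standard_normal((nb, nb)) for _ in range(2)]  # C1, C2

# Block diagonal dominance by columns: 1/||A_j^{-1}|| bounds the sum of the
# norms of the off-diagonal blocks in block column j.
off = [np.linalg.norm(B[0], 1),
       np.linalg.norm(C[0], 1) + np.linalg.norm(B[1], 1),
       np.linalg.norm(C[1], 1)]
assert all(inv_norm_recip(Ad[j]) >= off[j] for j in range(3))

# One elimination step: Schur complement Atilde_2 = A_2 - B_2 A_1^{-1} C_1.
At2 = Ad[1] - B[0] @ np.linalg.solve(Ad[0], C[0])

# The reduced matrix [[Atilde_2, C_2], [B_3, A_3]] is still block
# diagonally dominant by columns.
assert inv_norm_recip(At2) >= np.linalg.norm(B[1], 1)
assert inv_norm_recip(Ad[2]) >= np.linalg.norm(C[1], 1)
```

One numerical experiment is of course not a proof, but it makes the inheritance claim concrete for a specific dominant matrix.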

4. Error Analysis

The use of level-3 BLAS based on fast matrix multiplication techniques affects stability only insofar as it increases the constant terms in the normwise backward error bounds [13]. We make the following assumptions about the underlying level-3 BLAS (matrix-matrix operations).

If $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$, then the computed approximation $\widehat{C}$ to $C = AB$ satisfies

$$\widehat{C} = AB + \Delta C, \qquad \|\Delta C\| \le c_1(m, n, p)\, u\, \|A\|\, \|B\| + O(u^2), \qquad (2)$$

where $c_1(m, n, p)$ denotes a constant depending on $m$, $n$ and $p$. For conventional BLAS3 implementations, (2) holds with $c_1(m, n, p) = n$ [13,15].

The computed solution $\widehat{X}$ to the triangular systems $TX = B$, where $T \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{m \times p}$, satisfies

$$T\widehat{X} = B + \Delta B, \qquad \|\Delta B\| \le c_2(m, p)\, u\, \|T\|\, \|\widehat{X}\| + O(u^2),$$

where $c_2(m, p)$ denotes a constant depending on $m$ and $p$. In this section, we present a backward error analysis of the block LU factorization computed with level-3 BLAS based on fast matrix multiplication techniques.
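The matrix-multiplication assumption can be probed numerically. The following illustration is not from the paper: it compares a double-precision product with an exact rational reference and checks an elementwise bound of the form $n\,u\,(|A||B|)_{ij}$, with a factor-2 safety margin standing in for the $O(u^2)$ term; the dimensions and seed are arbitrary.

```python
import numpy as np
from fractions import Fraction

u = Fraction(1, 2 ** 53)           # unit roundoff, IEEE double precision
m = n = p = 6
rng = np.random.default_rng(2)
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

C_hat = A @ B                      # computed product fl(AB)

# Exact product via rationals (binary floats convert to Fractions exactly)
AF = [[Fraction(A[i, k]) for k in range(n)] for i in range(m)]
BF = [[Fraction(B[k, j]) for j in range(p)] for k in range(n)]

absAB = np.abs(A) @ np.abs(B)      # |A||B|, the elementwise scaling in (2)
ok = all(
    abs(Fraction(C_hat[i, j]) - sum(AF[i][k] * BF[k][j] for k in range(n)))
    <= 2 * n * u * Fraction(absAB[i, j])
    for i in range(m) for j in range(p)
)
print(ok)
```

Any summation order used by the BLAS satisfies this elementwise bound, so the check passes regardless of how the product is blocked internally.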

Theorem 4.1. Let $\widehat{L}$ and $\widehat{U}$ be the computed BLU factors of $A$ in (1). Then

$$\widehat{L}\widehat{U} = A + \Delta A,$$

where $\Delta A$ satisfies blockwise bounds of the form (2), with constants determined by the block sizes $n_i$.

Proof. Applying the standard analysis of errors, we can obtain the above result; we omit the details. □

Let $\widehat{L} = \widehat{L}_1 \widehat{L}_2 \cdots \widehat{L}_{N-1}$. The multiplications among the factors $\widehat{L}_i$ produce no rounding errors because of their structures: forming the product merely places the computed subdiagonal blocks into position. Thus the errors in the blocks of $\widehat{L}$ and $\widehat{U}$ can be represented block by block, with constants given by the maximum, over the steps of the factorization, of the corresponding constants in the multiplication and triangular-solve assumptions. Although these error bounds are similar to those of Theorem 4.1, the constants here are determined by the structure of the factors; based on that structure, the error bounds for the off-diagonal blocks differ from those of Theorem 4.1, and a bound for the overall backward error can also be obtained.

Since the factors arising in the factorization in this paper are triangular, from (2) each computed product satisfies a bound of the form $\|\Delta C\| \le c_1 u\,\|\widehat{L}\|\,\|\widehat{U}\| + O(u^2)$. Note that the multiplication $\widehat{L}\widehat{U}$ produces no additional error in the blocks that are simply copied, because of the structures of $\widehat{L}$ and $\widehat{U}$. Combining the blockwise bounds then yields a normwise bound on $\widehat{L}\widehat{U} - A$ whose constants depend only on the block sizes $n_i$.

Compared with the standard rounding-error analysis, the derivation here differs considerably in form, and its simpler proof is an advantage. Moreover, the error bound obtained here omits a term required by the standard analysis, which makes the bound easier to compute.

Applying the result of Theorem 4.1, we have the following theorem.

Theorem 4.2. Let $\widehat{L}$ and $\widehat{U}$ be the computed BLU factors of $A$ in (1). Then $\widehat{L}\widehat{U} = A + \Delta A$, where the blocks of $\Delta A$ satisfy first-order bounds obtained from those of Theorem 4.1.

Proof. To save clutter we omit the $O(u^2)$ terms from each bound. For the expression arising in Theorem 4.1, if $u$ is sufficiently small the second-order term is small with respect to the other error matrices, so in a first-order approximation we obtain the bound for the diagonal blocks. Similarly, we obtain the bounds for the off-diagonal blocks. Therefore the result holds. □

From Theorems 4.1 and 4.2, the bounds for the backward errors depend on only one of the computed factors and on the elements of $A$, which makes the computation of the error bounds simpler.
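As a sanity check, not from the paper: in the $1 \times 1$-block case the factorization reduces to the scalar LU recurrence for a tridiagonal matrix, and the backward error $\|\widehat{L}\widehat{U} - A\|$ can be measured directly against a Theorem-4.1-style bound of the form $c\,u\,\|\widehat{L}\|\,\|\widehat{U}\|$. The constant $c = 50$ and the diagonally dominant test matrix are illustrative choices.

```python
import numpy as np

u = 2.0 ** -53
rng = np.random.default_rng(3)
N = 200
a = 10 + 0.5 * rng.standard_normal(N)   # diagonal (strongly dominant)
b = 0.5 * rng.standard_normal(N - 1)    # subdiagonal
c = 0.5 * rng.standard_normal(N - 1)    # superdiagonal

# 1x1-block case of the factorization: utilde_1 = a_1,
# l_i = b_i / utilde_{i-1}, utilde_i = a_i - l_i * c_{i-1}
ud = np.empty(N)
l = np.empty(N - 1)
ud[0] = a[0]
for i in range(1, N):
    l[i - 1] = b[i - 1] / ud[i - 1]
    ud[i] = a[i] - l[i - 1] * c[i - 1]

# Backward error: reconstruct A from the computed factors
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
L = np.eye(N) + np.diag(l, -1)
U = np.diag(ud) + np.diag(c, 1)
res = np.linalg.norm(L @ U - A, np.inf)
bound = 50 * u * np.linalg.norm(L, np.inf) * np.linalg.norm(U, np.inf)
print(res <= bound)
```

The residual is tiny here because each entry of $LU$ involves only a couple of floating-point operations; the interest of the blockwise analysis is that the same behaviour carries over to blocks of any size.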

5. Acknowledgements

The author would like to thank the committee of the CET 2011 Conference very much for their help. Moreover, this research was supported by the Guangdong NSF (32212070) and the Fundamental Research Funds for the Central Universities (12KYFC007).

Conflicts of Interest

The author declares no conflicts of interest.

[1] | L. Z. Lu and W. Sun, “The Minimal Eigenvalues of a Class of Block-Tridiagonal Matrices,” IEEE Transactions on Information Theory, Vol. 43, No. 2, 1997, pp. 787-791. doi:10.1109/18.556141 |

[2] | C. F. Fischer and R. A. Usmani, “Properties of Some Tridiagonal Matrices and Their Application to Boundary Value Problems,” SIAM Journal on Numerical Analysis, Vol. 6, No. 1, 1969, pp. 127-142. doi:10.1137/0706014 |

[3] | G. D. Smith, “Numerical Solution of Partial Differential Equations: Finite Difference Methods,” 2nd Edition, Oxford University Press, New York, 1978. |

[4] | R. M. M. Mattheij and M. D. Smooke, “Estimates for the Inverse of Tridiagonal Matrices Arising in Boundary-Value Problems,” Linear Algebra and Its Applications, Vol. 73, 1986, pp. 33-57. doi:10.1016/0024-3795(86)90232-6 |

[5] | P. Amodio and F. Mazzia, “A New Approach to the Backward Error Analysis in the LU Factorization Algorithm,” BIT Numerical Mathematics, Vol. 39, No. 3, 1999, pp. 385-402. doi:10.1023/A:1022358300517 |

[6] | E. Isaacson and H. B. Keller, “Analysis of Numerical Methods,” Wiley, New York, 1966. |

[7] | R. E. Bank and D. J. Rose, “Marching Algorithms for Elliptic Boundary Value Problems. I: The Constant Coefficient Case,” SIAM Journal on Numerical Analysis, Vol. 14, No. 5, 1977, pp. 792-829. doi:10.1137/0714055 |

[8] | R. M. M. Mattheij, “The Stability of LU-Decompostions of Block Tridiagonal Matrices,” Bulletin of the Australian Mathematical Society, Vol. 29, No. 2, 1984, pp. 177-205. doi:10.1017/S0004972700021432 |

[9] | P. Concus, G. H. Golub and G. Meurant, “Block Preconditioning for the Conjugate Gradient Method,” SIAM Journal on Scientific and Statistical Computing, Vol. 6, No. 1, 1985, pp. 220-252. doi:10.1137/0906018 |

[10] | J. M. Varah, “On the Solution of Block-Tridiagonal Systems Arising from Certain Finite-Difference Equations,” Mathematics of Computation, Vol. 26, No. 120, 1972, pp. 859-868. doi:10.1090/S0025-5718-1972-0323087-4 |

[11] | R. E. Bank and D. J. Rose, “Marching Algorithms for Elliptic Boundary Value Problems. I: The Constant Coefficient Case,” SIAM Journal on Numerical Analysis, Vol. 14, No. 5, 1977, pp. 792-829. doi:10.1137/0714055 |

[12] | P. Yalamov and V. Pavlov, “Stability of the Block Cyclic Reduction,” Linear Algebra and Its Applications, Vol. 249, No. 1-3, 1996, pp. 341-358. doi:10.1016/0024-3795(95)00392-4 |

[13] | J. W. Demmel and N. J. Higham, “Stability of Block Algorithms with Fast Level-3 BLAS,” ACM Transactions on Mathematical Software, Vol. 18, No. 3, 1992, pp. 274-291. doi:10.1145/131766.131769 |

[14] | J. W. Demmel, N. J. Higham and R. S. Schreiber, “Stability of Block LU Factorization,” Numerical Linear Algebra with Applications, Vol. 2, No. 2, 1995, pp. 173-190. doi:10.1002/nla.1680020208 |

[15] | N. J. Higham, “Accuracy and Stability of Numerical Algorithms,” Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1996. |


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.