American Journal of Computational Mathematics, ISSN 2161-1203, Scientific Research Publishing. DOI: 10.4236/ajcm.2015.52010

Comparison of Fixed Point Methods and Krylov Subspace Methods Solving Convection-Diffusion Equations

Xijian Wang, School of Mathematics and Computational Science, Wuyi University, Jiangmen, China. E-mail: wangxj1980426@163.com

Received 2 March 2015; accepted 5 June 2015; published 9 June 2015

Copyright © 2015 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

This paper first introduces a two-dimensional convection-diffusion equation with a boundary value condition, then uses the finite difference method to discretize the equation and analyzes the positive definiteness, diagonal dominance and symmetry of the discretization matrices. Finally, fixed point methods and Krylov subspace methods are applied to the resulting linear system and their convergence speeds are compared.

Keywords: Finite Difference Method, Convection-Diffusion Equation, Discretization Matrix, Iterative Method, Convergence Speed
1. Introduction

In the case of a linear system, the two main classes of iterative methods are the stationary iterative methods (fixed point methods) [1] - [3] and the more general Krylov subspace methods [4] - [13] . When these two classical iterative methods suffer from slow convergence on problems arising from typical applications such as fluid dynamics or electronic device simulation, preconditioning [14] - [16] is a key ingredient for the success of the convergence process.

The goal of this paper is to find an efficient iterative method, combined with preconditioning, for the solution of the linear system related to the following two-dimensional boundary value problem (BVP) [17]:

For parameter γ ≥ 0, we use the finite difference method to discretize Equation (1). Take n points in both the x- and y-directions and number the related degrees of freedom first left to right and then bottom to top; the grid size is then h = 1/(n + 1). For γ = 0, the convection term vanishes; applying the central difference scheme to the diffusion term, we obtain the discretization matrix A0 of Equation (1).

For γ > 0, we use a central difference for the diffusion term and the central difference scheme for the convection term; in this case we obtain the discretization matrix A1 of Equation (1).

For γ > 0, we may instead use a central difference for the diffusion term and the upwind difference scheme for the convection term; in this case we obtain the discretization matrix A2 of Equation (1).
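To make the three discretizations concrete, the sketch below assembles A0, A1 and A2 with Kronecker products. It assumes the model problem -u_xx - u_yy + γ·u_x = f on the unit square with homogeneous Dirichlet boundary conditions; the exact form and scaling of the paper's Equation (1) may differ, so treat this as an illustration of the stencils rather than the paper's precise matrices.

```python
import numpy as np

def convection_diffusion_matrix(n, gamma, scheme="central"):
    """Assemble the n^2 x n^2 finite-difference matrix for
    -u_xx - u_yy + gamma*u_x on the unit square with homogeneous
    Dirichlet boundary conditions (a sketch; the paper's exact
    equation and scaling may differ)."""
    h = 1.0 / (n + 1)
    I = np.eye(n)
    # 1D second-difference operator (diffusion part)
    D = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    if scheme == "central":
        # central difference for u_x: (u_{i+1} - u_{i-1}) / (2h)
        C = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h)
    else:
        # upwind difference for u_x (gamma > 0): (u_i - u_{i-1}) / h
        C = (np.eye(n) - np.eye(n, k=-1)) / h
    # 2D operator via Kronecker products: diffusion acts in both
    # directions, convection acts in the x-direction only
    return np.kron(I, D) + np.kron(D, I) + gamma * np.kron(I, C)
```

With gamma = 0 this produces a symmetric matrix (the A0 case); with gamma > 0 the central scheme gives a nonsymmetric A1 and the upwind scheme a nonsymmetric A2.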

2. Properties of the Discretization Matrix

In this section, we first compute the eigenvalues of the discretization matrices A0, A1 and A2 of Equation (1), and then analyze the positive definiteness, diagonal dominance and symmetry of these matrices.

2.1. Eigenvalues

Using MATLAB, we compute the eigenvalues of the discretization matrices A0, A1 and A2 of Equation (1).
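For the pure-diffusion case the eigenvalues are also known in closed form, which gives a useful cross-check on the numerical computation. The sketch below (NumPy in place of MATLAB's eig) uses the unscaled 5-point matrix; scaling by 1/h^2 only multiplies every eigenvalue by the same factor.

```python
import numpy as np

# Unscaled 5-point Laplacian on an n x n interior grid (the gamma = 0 case)
n = 5
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A0 = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))

# Numerically computed spectrum
computed = np.sort(np.linalg.eigvalsh(A0))

# Closed form: lambda_{ij} = 4 - 2 cos(i*pi/(n+1)) - 2 cos(j*pi/(n+1))
c = 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
analytic = np.sort((4.0 - c[:, None] - c[None, :]).ravel())
```

The two arrays agree to rounding error, and all eigenvalues are positive, consistent with the positive definiteness established below.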

2.2. Positive Definiteness

Matrix A is positive definite if and only if the symmetric part of A, i.e. (A + A^T)/2, is positive definite. Computing the eigenvalues of (Ai + Ai^T)/2 (i = 0, 1, 2), we find that they are all positive.

Therefore (Ai + Ai^T)/2 (i = 0, 1, 2) is positive definite, which means the discretization matrices A0, A1 and A2 of Equation (1) are positive definite.
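This criterion is straightforward to verify numerically. The self-contained sketch below uses small 1D analogues of the discretization matrices (an illustration, not the paper's actual A0, A1, A2): a symmetric second-difference matrix, and a nonsymmetric variant with an upwind convection part added.

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """A (possibly nonsymmetric) real matrix is positive definite iff
    every eigenvalue of its symmetric part (A + A.T)/2 is positive;
    eigvalsh exploits the symmetry of that part."""
    sym = 0.5 * (A + A.T)
    return bool(np.all(np.linalg.eigvalsh(sym) > tol))

n = 8
# 1D second-difference (diffusion) matrix: symmetric positive definite
D = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# Add a nonsymmetric upwind convection part; the sum remains positive
# definite because the symmetric part of each term is positive definite
U = D + (np.eye(n) - np.eye(n, k=-1))
```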

2.3. Diagonal Dominance

For all the discretization matrices A0, A1 and A2 of Equation (1), every row satisfies |a_ii| >= sum_{j != i} |a_ij|.

Therefore, the discretization matrices A0, A1 and A2 of Equation (1) are diagonally dominant.
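The row diagonal-dominance inequality above can likewise be checked programmatically; a minimal sketch:

```python
import numpy as np

def is_diagonally_dominant(A):
    """Row diagonal dominance: |a_ii| >= sum of |a_ij| over j != i,
    for every row i."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d
    return bool(np.all(d >= off))
```

For example, the 5-point Laplacian on a 2 x 2 interior grid passes this check, while a matrix with a large off-diagonal entry fails it.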

2.4. Symmetry

It is easy to see that only A0 is symmetric.

3. Stationary Iteration Methods and Krylov Subspace Methods

The goal of this section is to find an efficient iterative method for the solution of the linear system. Since the discretization matrices are positive definite and diagonally dominant, we use fixed point methods and Krylov subspace methods. We first choose a suitable convergence tolerance and then use numerical experiments to compare the convergence speed of the various iterative methods.

3.1. Convergence Tolerance

Without loss of generality, we measure convergence by the relative residual ||b - A x_k|| / ||b|| and stop the iteration once it drops below a prescribed tolerance. In order to achieve the desired accuracy of the approximate solution in the numerical experiments, we set the convergence tolerance to the Delta (tol) values reported in Table 1.
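As an illustration of the fixed point iterations compared below, here is a minimal NumPy sketch of Jacobi and SOR with a relative-residual stopping criterion. The tolerance 1e-6 and the relaxation parameter omega = 1.5 are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def jacobi(A, b, tol=1e-6, maxit=10_000):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k), stopped
    when the relative residual ||b - A x|| / ||b|| falls below tol."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, maxit + 1):
        x = (b - R @ x) / d
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, maxit

def sor(A, b, omega=1.5, tol=1e-6, maxit=10_000):
    """Forward SOR sweep: each component is relaxed using the already
    updated components (omega = 1 reduces to Gauss-Seidel)."""
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for k in range(1, maxit + 1):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, maxit
```

On a small symmetric positive definite test system, SOR needs far fewer sweeps than Jacobi, mirroring the iteration counts in Table 2.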

3.2. Numerical Experiments

Computational results using fixed point methods such as Jacobi, Gauss-Seidel and SOR, and projection methods such as PCG, BICG, BICGSTAB, CGS, GMRES and QMR, are shown in the figures (Figures 1-21) for all three values of γ. The projection methods are performed with different preconditioners: Jacobi, luinc (incomplete LU) and cholinc (incomplete Cholesky) preconditioning.
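Among the projection methods listed, PCG with Jacobi (diagonal) preconditioning is the simplest to sketch in a self-contained way; below is a NumPy version. MATLAB's pcg with luinc/cholinc preconditioners is not reproduced here, and the test matrix in the usage note is illustrative, not one of the paper's A0, A1, A2.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-6, maxit=1000):
    """Preconditioned conjugate gradients with a diagonal (Jacobi)
    preconditioner: M_inv_diag holds 1 / diag(A). A sketch of the
    kind of PCG run reported in Table 1."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv_diag * r          # preconditioned residual z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

Note that PCG requires a symmetric positive definite matrix, which is consistent with the observation below that PCG fails to converge for the nonsymmetric γ > 0 matrices.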

The tables (Table 1 and Table 2) containing relevant computational details are also given below.

4. Conclusions

From the figures (Figures 1-21) and tables (Table 1 and Table 2), we obtain the following conclusions:

• The convergence speeds of SOR, Backward SOR and SSOR are faster than those of GS and Backward GS, while GS and Backward GS are faster than Jacobi;

• If matrix A is symmetric, the convergence speeds of SOR, Backward SOR and SSOR are the same; otherwise, SSOR is faster than Backward SOR and SOR;

• The convergence speeds of SOR and Backward SOR are the same, as are those of GS and Backward GS;

• From Table 2, the iteration counts and times for all fixed point methods in the case γ = 64 are less than those in the case γ = 16, which in turn are less than those in the case γ = 0;

Table 1. Computational details of different projection methods.

Gamma  Function  Preconditioning  Flag  No. of iterations  Relres       Delta (tol)
0      PCG       jacobi           0     52                 1.31E-06     1.55E-06
0      PCG       luinc            0     23                 1.38E-06
0      PCG       cholinc          0     25                 9.05E-07
0      BICG      jacobi           0     52                 1.31E-06
0      BICG      luinc            0     23                 1.38E-06
0      BICG      cholinc          0     25                 9.05E-07
0      BICGSTAB  jacobi           0     40.5               1.42E-06
0      BICGSTAB  luinc            0     15.5               9.36E-07
0      BICGSTAB  cholinc          0     16                 1.54E-06
0      CGS       jacobi           0     43                 1.20E-06
0      CGS       luinc            0     16                 8.27E-07
0      CGS       cholinc          0     16                 1.18E-06
0      GMRES     jacobi           0     49                 1.32E-06
0      GMRES     luinc            0     23                 9.01E-07
0      GMRES     cholinc          0     24                 1.21E-06
0      QMR       jacobi           0     52                 1.05E-06
0      QMR       luinc            0     23                 1.12E-06
0      QMR       cholinc          0     24                 1.37E-06
16     PCG       jacobi           1     40                 0.483607757  1.14E-05
16     PCG       luinc            1     20                 0.312145391
16     PCG       cholinc          1     20                 0.38563672
16     BICG      jacobi           0     89                 7.33E-06
16     BICG      luinc            0     25                 1.11E-05
16     BICG      cholinc          0     35                 4.37E-06
16     BICGSTAB  jacobi           0     56.5               1.03E-05
16     BICGSTAB  luinc            0     16.5               6.30E-06
16     BICGSTAB  cholinc          0     23.5               3.98E-06
16     CGS       jacobi           0     61                 6.61E-06
16     CGS       luinc            0     20                 1.01E-08
16     CGS       cholinc          0     27                 5.29E-06
16     GMRES     jacobi           0     68                 9.70E-06
16     GMRES     luinc            0     22                 7.08E-06
16     GMRES     cholinc          0     30                 1.10E-05
16     QMR       jacobi           0     89                 6.43E-06
16     QMR       luinc            0     25                 2.64E-06
16     QMR       cholinc          0     33                 9.57E-06
64     PCG       jacobi           1     10                 0.710978037  4.13E-05
64     PCG       luinc            1     10                 0.547110057
64     PCG       cholinc          1     10                 0.660611555
64     BICG      jacobi           0     70                 3.15E-05
64     BICG      luinc            0     21                 1.21E-05
64     BICG      cholinc          0     44                 3.47E-05
64     BICGSTAB  jacobi           0     59.5               2.12E-05
64     BICGSTAB  luinc            0     16.5               5.27E-06
64     BICGSTAB  cholinc          0     27.5               1.60E-05
64     CGS       jacobi           1     87                 0.000123357
64     CGS       luinc            0     19                 6.36E-07
64     CGS       cholinc          0     35                 3.14E-06
64     GMRES     jacobi           0     63                 2.66E-05
64     GMRES     luinc            0     19                 2.50E-05
64     GMRES     cholinc          0     37                 3.11E-05
64     QMR       jacobi           0     69                 3.56E-05
64     QMR       luinc            0     21                 1.25E-05
64     QMR       cholinc          0     45                 3.31E-05

Table 2. Computational details of different fixed point methods.

Gamma  Fixed point method     No. of iterations
0      Jacobi                 2246
0      Gauss-Seidel           1124
0      SOR                    370
0      Backward Gauss-Seidel  1124
0      Backward SOR           370
0      SSOR                   370
16     Jacobi                 459
16     Gauss-Seidel           231
16     SOR                    68
16     Backward Gauss-Seidel  231
16     Backward SOR           68
16     SSOR                   42
64     Jacobi                 191
64     Gauss-Seidel           97
64     SOR                    54
64     Backward Gauss-Seidel  97
64     Backward SOR           54
64     SSOR                   20

• The upwind difference method is more suitable for convection-dominated problems than the central difference method;

• The convergence speeds of the six projection methods (PCG, BICG, BICGSTAB, CGS, GMRES and QMR) are faster under luinc preconditioning than under cholinc preconditioning, and faster under cholinc preconditioning than under Jacobi preconditioning;

• The six projection methods under Jacobi, luinc and cholinc preconditioning are all convergent when γ = 0; however, the PCG method is not convergent for γ = 16 and γ = 64, and the CGS method under Jacobi preconditioning is not convergent when γ = 64.

Acknowledgements

I thank the editor and the referee for their comments. I would like to express deep gratitude to my supervisor Prof. Dr. Mark A. Peletier whose guidance and support were crucial for the successful completion of this paper. This work was completed with the financial support of Foundation of Guangdong Educational Committee (2014KQNCX161, 2014KQNCX162).

References

[1] Saad, Y. (2003) Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia. http://dx.doi.org/10.1137/1.9780898718003
[2] Varga, R.S. (2009) Matrix Iterative Analysis. Volume 27, Springer Science & Business Media, Heidelberg.
[3] Young, D.M. (2014) Iterative Solution of Large Linear Systems. Elsevier, Amsterdam.
[4] Lanczos, C. (1952) Solution of Systems of Linear Equations by Minimized Iterations. Journal of Research of the National Bureau of Standards, 49, 33-53. http://dx.doi.org/10.6028/jres.049.006
[5] Hestenes, M.R. and Stiefel, E. (1952) Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49, 409-436.
[6] Walker, H.F. (1988) Implementation of the GMRES Method Using Householder Transformations. SIAM Journal on Scientific and Statistical Computing, 9, 152-163. http://dx.doi.org/10.1137/0909010
[7] Sonneveld, P. (1989) CGS, a Fast Lanczos-Type Solver for Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 10, 36-52. http://dx.doi.org/10.1137/0910004
[8] Freund, R.W. and Nachtigal, N.M. (1991) QMR: A Quasi-Minimal Residual Method for Non-Hermitian Linear Systems. Numerische Mathematik, 60, 315-339. http://dx.doi.org/10.1007/BF01385726
[9] van der Vorst, H.A. (1992) Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 13, 631-644. http://dx.doi.org/10.1137/0913035
[10] Brezinski, C., Zaglia, M.R. and Sadok, H. (1992) A Breakdown-Free Lanczos Type Algorithm for Solving Linear Systems. Numerische Mathematik, 63, 29-38. http://dx.doi.org/10.1007/BF01385846
[11] Chan, T.F., Gallopoulos, E., Simoncini, V., Szeto, T. and Tong, C.H. (1994) A Quasi-Minimal Residual Variant of the Bi-CGSTAB Algorithm for Nonsymmetric Systems. SIAM Journal on Scientific Computing, 15, 338-347. http://dx.doi.org/10.1137/0915023
[12] Gutknecht, M.H. (1992) A Completed Theory of the Unsymmetric Lanczos Process and Related Algorithms, Part I. SIAM Journal on Matrix Analysis and Applications, 13, 594-639. http://dx.doi.org/10.1137/0613037
[13] Gutknecht, M.H. (1994) A Completed Theory of the Unsymmetric Lanczos Process and Related Algorithms, Part II. SIAM Journal on Matrix Analysis and Applications, 15, 15-58. http://dx.doi.org/10.1137/S0895479890188803
[14] Eisenstat, S.C. (1981) Efficient Implementation of a Class of Preconditioned Conjugate Gradient Methods. SIAM Journal on Scientific and Statistical Computing, 2, 1-4. http://dx.doi.org/10.1137/0902001
[15] Meijerink, J.A. and van der Vorst, H.A. (1977) An Iterative Solution Method for Linear Systems of Which the Coefficient Matrix Is a Symmetric M-Matrix. Mathematics of Computation, 31, 148-162.
[16] Ortega, J.M. (1988) Efficient Implementations of Certain Iterative Methods. SIAM Journal on Scientific and Statistical Computing, 9, 882-891. http://dx.doi.org/10.1137/0909060
[17] Fowler, A.C. (1997) Mathematical Models in the Applied Sciences. Volume 17, Cambridge University Press, Cambridge.