Journal of Applied Mathematics and Physics
Vol.06 No.02(2018), Article ID:82705,17 pages
10.4236/jamp.2018.62040

A Block-Preconditioned Inexact Linear Solver for Computing the Complex Eigenpairs of a Large Sparse Matrix

Richard Olatokunbo Akinola1*, Stephen Yakubu Kutchin1, Ayodeji Sunday Ayodele1, Kingsley Obiajulu Muka2

1Department of Mathematics, Faculty of Natural Sciences, University of Jos, Jos, Nigeria

2Department of Mathematics, University of Benin, Benin, Nigeria

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 21, 2017; Accepted: February 25, 2018; Published: February 28, 2018

ABSTRACT

In computing the eigenpairs of a matrix pencil, one obtains a linear system of equations at each step. In this work, we show how a block tridiagonal preconditioner within GMRES can be used to solve the resulting large sparse unsymmetric system of equations inexactly, with either a fixed or a decreasing tolerance. While the fixed-tolerance solver converged superlinearly to the eigenvalue of interest, the decreasing-tolerance one converged quadratically. This surpasses an earlier result which converged only harmonically.

Keywords:

Preconditioner, Superlinear Convergence, Quadratic Convergence, Eigenvalues

1. Introduction

Let $A$ be a large, sparse, real $n \times n$ nonsymmetric matrix and let $B \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix. In this paper, we consider the problem of computing an eigenpair $(z, \lambda)$ from the following generalized complex eigenvalue problem

$$A z = \lambda B z, \quad z \in \mathbb{C}^n, \quad z \neq 0, \tag{1}$$

where $\lambda$ is an eigenvalue of the pencil $(A, B)$ and $z$ its corresponding complex eigenvector. We assume that the eigenpair of interest $(z, \lambda)$ is algebraically simple, with $\psi^H$ the corresponding left eigenvector, such that [1]

$$\psi^H B z \neq 0. \tag{2}$$

By adding the normalization

$$z^H B z = 1, \tag{3}$$

to (1), the combined system of equations can be expressed in the form $F(z) = 0$ as

$$F(z) = \begin{bmatrix} (A - \lambda B) z \\ -\tfrac{1}{2} z^H B z + \tfrac{1}{2} \end{bmatrix} = 0. \tag{4}$$

Note that $z^H B z$ is real since $B$ is symmetric and positive definite. This results in a nonlinear system of $n$ complex and one real equation for the $n+1$ complex unknowns $v = [z^T, \lambda]^T$. The reason we cannot apply Newton's method directly to (4) is that $\bar{z}$ in the normalization $z^H B z = \bar{z}^T B z$ is not differentiable.

Recall that for a real eigenpair $(z, \lambda)$, (4) is a system of $(n+1)$ real equations in $(n+1)$ real unknowns, and Newton's method for solving (4) involves the solution of the $(n+1)$-dimensional square linear system

$$\begin{bmatrix} A - \lambda^{(k)} B & -B z^{(k)} \\ -\big(B z^{(k)}\big)^T & 0 \end{bmatrix} \begin{bmatrix} \Delta z^{(k)} \\ \Delta \lambda^{(k)} \end{bmatrix} = -\begin{bmatrix} \big(A - \lambda^{(k)} B\big) z^{(k)} \\ -\tfrac{1}{2} z^{(k)T} B z^{(k)} + \tfrac{1}{2} \end{bmatrix}, \tag{5}$$

for the $(n+1)$ real unknowns $\Delta v^{(k)} = [\Delta z^{(k)T}, \Delta \lambda^{(k)}]^T$, followed by the update $v^{(k+1)} = v^{(k)} + \Delta v^{(k)}$. Secondly, if instead of the normalization (3) we add $c^H z = 1$, where $c$ is a fixed complex vector (see, for example, [2]), then (1) and $c^H z = 1$ provide $(n+1)$ complex equations for the $(n+1)$ complex unknowns, and the Jacobian of this new system is

$$\begin{bmatrix} A - \lambda B & -B z \\ c^H & 0 \end{bmatrix}.$$

This Jacobian is square and, using the ABCD Lemma [3], can easily be shown to be nonsingular if the eigenvalue of interest is algebraically simple and $c^H z \neq 0$. Thirdly, if $(z, \lambda)$ is complex, then, as stated earlier, we have $n$ complex and one real equation. Also, if $z$ solves (4), then so does $z e^{i\theta}$ for any $\theta$ with $0 \leq \theta \leq 2\pi$.
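To make the real-eigenpair case concrete, the following is a minimal numpy sketch of one Newton step (5); the function name and the dense-matrix setting are our assumptions, not the paper's implementation.

```python
import numpy as np

def newton_step_real(A, B, z, lam):
    """One Newton step (5) for a real eigenpair of the pencil (A, B).

    Sketch for dense A, B: solve the (n+1)-by-(n+1) bordered system
    for (dz, dlam) and return the updated iterate.
    """
    n = A.shape[0]
    Bz = B @ z
    # Bordered Jacobian of F(z, lam) = [(A - lam*B) z; -z'Bz/2 + 1/2].
    Jac = np.zeros((n + 1, n + 1))
    Jac[:n, :n] = A - lam * B
    Jac[:n, n] = -Bz
    Jac[n, :n] = -Bz
    F = np.concatenate([(A - lam * B) @ z, [-0.5 * (z @ Bz) + 0.5]])
    delta = np.linalg.solve(Jac, -F)
    return z + delta[:n], lam + delta[n]
```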

Our approach for analyzing the solution of (4) for $v$ begins by splitting the eigenpair $(z, \lambda)$ into real and imaginary parts: $z = z_1 + i z_2$, $\lambda = \alpha + i \beta$, where $z_1, z_2 \in \mathbb{R}^n$ and $\alpha, \beta \in \mathbb{R}$. After expanding (4), we obtain a real under-determined system of $2n+1$ nonlinear equations in the $2n+2$ real unknowns $v = [z_1^T, z_2^T, \alpha, \beta]^T$, and it is natural to use the Gauss-Newton method (see, for example, Deuflhard ([4], pp. 222-223)) to obtain a solution (see also [5] [6] [7] [8]). By linearizing the under-determined system of nonlinear equations, we obtain an under-determined system of linear equations involving the Jacobian. This paper is structured as follows: in Section 2, we show that the Jacobian has a unique nullvector at the root. This is followed in Section 3 by two orthogonality results, and Algorithm 1 is given. In Section 4, we present an inexact inverse iteration algorithm with preconditioning for solving the large system of equations encountered.

Algorithm 1. Computing the complex eigenvalues of the pencil $(A, B)$.

The main mathematical tools used in this paper are the LU factorization, inexact inverse iteration and the preconditioned Generalized Minimal Residual method (GMRES) [9]. The main reason for using inexact inverse iteration is that, as mentioned in an earlier paper, we do not solve $M u = B_1 w$ exactly in practice; we use the exact solve only to show that a solution is possible. Theorem 2.1 shows that the Jacobian has a single nullvector at the root, while Theorem 3.1 gives an important orthogonality result. Algorithms 1-3 are presented. We remark that, in the limit, the approximate nullvector converges to the exact one. A numerical example is given which supports the validity of the algorithms presented, though, as usual, they rely on good initial guesses to the desired eigenpair. The classical inverse iteration for the matrix pencil converges slowly for some eigenvalue problems, whereas we present algorithms that converge quadratically. Throughout this paper, unless otherwise stated, all norms are 2-norms.

The following result underpins the validity of the results in this paper.

Lemma 1.1: [10] Let $F_w(w)$ be of full rank. If $F_w(w) \Delta w = -F(w)$ is an under-determined linear system of equations, then its least-squares solution
$$\Delta w = -F_w(w)^T \big[ F_w(w) F_w(w)^T \big]^{-1} F(w)$$
is orthogonal to the nullspace of $F_w(w)$.
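For intuition, Lemma 1.1 is easy to check numerically. A small sketch, using a random full-rank $F_w$ (the data here are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
Fw = rng.standard_normal((3, 5))   # full rank: 3 equations, 5 unknowns
F = rng.standard_normal(3)

# Minimum-norm least-squares solution of Fw * dw = -F (Lemma 1.1).
dw = -Fw.T @ np.linalg.solve(Fw @ Fw.T, F)

# dw is orthogonal to the nullspace of Fw.
N = null_space(Fw)                  # columns span the nullspace of Fw
print(np.allclose(N.T @ dw, 0.0))  # True
```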

Algorithm 2. Inexact inverse iteration algorithm.

Algorithm 3. Complex eigenvalues of the pencil ( A , B ) using Inexact Inverse Iteration with preconditioning.

In the next section, we will write $\lambda = \alpha + i \beta$ and $z = z_1 + i z_2$, convert the nonlinear system (4) to a real under-determined system of nonlinear equations, and prove some important results.

2. Computation of Complex Eigenpairs by Solving an Under-Determined System of Nonlinear Equations

In this section, we expand the system of $n$ complex and one real nonlinear equations in $n+1$ complex unknowns (4) by writing $z = z_1 + i z_2$ and $\lambda = \alpha + i \beta$. The reason we obtain an under-determined rather than a square system is that expanding $z^H B z = 1$ gives only one real equation, since $B$ is symmetric positive definite, while $(A - \lambda B) z = 0$ yields $2n$ real equations. This results in a real under-determined system of $2n+1$ nonlinear equations in $2n+2$ real unknowns. We then present the under-determined system of nonlinear equations and an explicit expression for its Jacobian. Furthermore, in the main result of this section, Theorem 2.1, we show that if the eigenvalue of interest of $(A, B)$ is algebraically simple, then the Jacobian has linearly independent rows. We also find the right nullvector of the Jacobian at the root and prove that it is unique.

If we let $z = z_1 + i z_2$ and $\lambda = \alpha + i \beta$, then the square nonlinear system of Equations (4) can be written as

$$(A - \lambda B) z = [A - (\alpha + i \beta) B](z_1 + i z_2) = (A - \alpha B) z_1 + \beta B z_2 + i \big[ (A - \alpha B) z_2 - \beta B z_1 \big], \tag{6}$$

and

$$z^H B z = z_1^T B z_1 + z_2^T B z_2. \tag{7}$$

Hence,

$$-\tfrac{1}{2} z^H B z + \tfrac{1}{2} = -\tfrac{1}{2} \big( z_1^T B z_1 + z_2^T B z_2 \big) + \tfrac{1}{2} = 0.$$

Since $(A - \lambda B) z = 0$, we equate the real and imaginary parts of (6) to zero and obtain the $2n$ real equations $(A - \alpha B) z_1 + \beta B z_2 = 0$ and $(A - \alpha B) z_2 - \beta B z_1 = 0$. This means that $F(v)$ consists of the $2n$ real equations arising from (6) and the one real equation $-\tfrac{1}{2}(z_1^T B z_1 + z_2^T B z_2) + \tfrac{1}{2} = 0$:

$$F(v) = \begin{bmatrix} (A - \alpha B) z_1 + \beta B z_2 \\ -\beta B z_1 + (A - \alpha B) z_2 \\ -\tfrac{1}{2} \big( z_1^T B z_1 + z_2^T B z_2 \big) + \tfrac{1}{2} \end{bmatrix} = 0, \tag{8}$$

where $F : \mathbb{R}^{2n+2} \to \mathbb{R}^{2n+1}$. The Jacobian $F_v(v)$ of $F(v)$, which has the explicit expression

$$F_v(v) = \begin{bmatrix} (A - \alpha B) & \beta B & -B z_1 & B z_2 \\ -\beta B & (A - \alpha B) & -B z_2 & -B z_1 \\ -(B z_1)^T & -(B z_2)^T & 0 & 0 \end{bmatrix}, \tag{9}$$

is a $(2n+1) \times (2n+2)$ real matrix. From the Jacobian (9) above, we define the real $2n \times 2n$ matrix $M$ as

$$M = \begin{bmatrix} (A - \alpha B) & \beta B \\ -\beta B & (A - \alpha B) \end{bmatrix}. \tag{10}$$

Also, we form the $2n \times 2$ real matrix

$$N = \begin{bmatrix} -B z_1 & B z_2 \\ -B z_2 & -B z_1 \end{bmatrix} = \begin{bmatrix} -B_1 w & B_1 w_1 \end{bmatrix}, \tag{11}$$

consisting of the product of $B_1 = \begin{bmatrix} B & O \\ O & B \end{bmatrix}$ and the matrix of right nullvectors of $M$ at the root, where

$$w = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}, \quad w_1 = \begin{bmatrix} z_2 \\ -z_1 \end{bmatrix}, \tag{12}$$

and $O$ is the $n \times n$ zero matrix. The Jacobian (9) can be rewritten in the following partitioned form

$$F_v(v) = \begin{bmatrix} M & -B_1 w & B_1 w_1 \\ -(B_1 w)^T & 0 & 0 \end{bmatrix} = \begin{bmatrix} M & N \\ -(B_1 w)^T & 0^T \end{bmatrix}, \tag{13}$$

with $M$ and $N$ as defined in (10) and (11), respectively.
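The following numpy sketch assembles $F(v)$ of (8) and the partitioned Jacobian (13) directly from $A$, $B$ and the current iterate; the helper name `blocks` is ours, and the dense layout is for illustration only.

```python
import numpy as np

def blocks(A, B, z1, z2, alpha, beta):
    """Assemble F(v) of (8) and the partitioned Jacobian (13) (dense sketch)."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    M = np.block([[A - alpha * B, beta * B],
                  [-beta * B, A - alpha * B]])               # (10)
    B1 = np.block([[B, Z], [Z, B]])                          # B1 = blkdiag(B, B)
    w = np.concatenate([z1, z2])
    w1 = np.concatenate([z2, -z1])                           # w_1 of (12)
    F = np.concatenate([M @ w,
                        [-0.5 * (w @ (B1 @ w)) + 0.5]])      # (8)
    Fv = np.block([[M, (-B1 @ w)[:, None], (B1 @ w1)[:, None]],
                   [-(B1 @ w)[None, :], np.zeros((1, 2))]])  # (13)
    return F, Fv, M, B1, w, w1
```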

Note that, at the root,
$$\begin{bmatrix} (A - \alpha B) & \beta B \\ -\beta B & (A - \alpha B) \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} (A - \alpha B) z_1 + \beta B z_2 \\ (A - \alpha B) z_2 - \beta B z_1 \end{bmatrix} = 0,$$

which implies that $[z_1^T, z_2^T]^T$, or any nonzero scalar multiple of it, is a right nullvector of $M$. In the same vein, we find

$$\begin{bmatrix} (A - \alpha B) & \beta B \\ -\beta B & (A - \alpha B) \end{bmatrix} \begin{bmatrix} z_2 \\ -z_1 \end{bmatrix} = \begin{bmatrix} (A - \alpha B) z_2 - \beta B z_1 \\ -\{ (A - \alpha B) z_1 + \beta B z_2 \} \end{bmatrix} = 0,$$

so $[z_2^T, -z_1^T]^T$, or any nonzero scalar multiple of it, is also a right nullvector of $M$ at the root. Since the eigenvalue $\lambda$ of $(A, B)$ is algebraically simple by assumption, then in view of (2) we need explicit expressions for the left nullvector of $(A - \lambda B)$ in order to prove that the Jacobian has full row rank at the root. Observe that if we write $\psi = \psi_1 + i \psi_2$, where $\psi_1, \psi_2 \in \mathbb{R}^n$, for $\psi \in \mathcal{N}\big( (A - \lambda B)^H \big) \setminus \{0\}$, then

$$\psi^H (A - \lambda B) = \big( \psi_1^T - i \psi_2^T \big) \big[ (A - \alpha B) - i \beta B \big] = \psi_1^T (A - \alpha B) - \beta \psi_2^T B - i \big[ \beta \psi_1^T B + \psi_2^T (A - \alpha B) \big] = 0^T.$$

Hence, $\psi_1^T (A - \alpha B) - \beta \psi_2^T B = 0^T$ and $\beta \psi_1^T B + \psi_2^T (A - \alpha B) = 0^T$. The implication of this is that

$$[\psi_1^T \;\; \psi_2^T] M = [\psi_1^T \;\; \psi_2^T] \begin{bmatrix} (A - \alpha B) & \beta B \\ -\beta B & (A - \alpha B) \end{bmatrix} = \big[ \psi_1^T (A - \alpha B) - \beta \psi_2^T B \quad \beta \psi_1^T B + \psi_2^T (A - \alpha B) \big] = 0^T,$$

which means that $[\psi_1^T, \psi_2^T]$, or any nonzero scalar multiple of it, is a left nullvector of $M$. Similarly,

$$[-\psi_2^T \;\; \psi_1^T] M = [-\psi_2^T \;\; \psi_1^T] \begin{bmatrix} (A - \alpha B) & \beta B \\ -\beta B & (A - \alpha B) \end{bmatrix} = \big[ -\{ \beta \psi_1^T B + \psi_2^T (A - \alpha B) \} \quad \psi_1^T (A - \alpha B) - \beta \psi_2^T B \big] = 0^T,$$

which shows that $[-\psi_2^T, \psi_1^T]$ is also a left nullvector of $M$.

We therefore form the matrix $C$ whose columns are the two left nullvectors of $M$ at the root (in practice $C$ is not computed):

$$C = \begin{bmatrix} \psi_1 & -\psi_2 \\ \psi_2 & \psi_1 \end{bmatrix}. \tag{14}$$

Now, observe that condition (2) implies

$$\psi^H B z = \big[ \psi_1^T B z_1 + \psi_2^T B z_2 \big] + i \big[ \psi_1^T B z_2 - \psi_2^T B z_1 \big] \neq 0.$$

Therefore, at the root, either $\psi_1^T B z_1 + \psi_2^T B z_2 \neq 0$ and $\psi_1^T B z_2 - \psi_2^T B z_1 = 0$; or $\psi_1^T B z_1 + \psi_2^T B z_2 = 0$ and $\psi_1^T B z_2 - \psi_2^T B z_1 \neq 0$; or both $\psi_1^T B z_1 + \psi_2^T B z_2$ and $\psi_1^T B z_2 - \psi_2^T B z_1$ are nonzero. What is excluded is the possibility that both are zero.

Before we continue with the rest of the analysis, we pause to present the main result of this section, which shows that the Jacobian (9) has a one-dimensional nullspace at the root.

Theorem 2.1 Assume that the eigenpair $(z, \lambda)$ of the pencil $(A, B)$ is algebraically simple. If $z_1$ and $z_2$ are nonzero vectors, then $\phi = \tau [z_2^T, -z_1^T, 0, 0]^T$, $\tau \in \mathbb{R} \setminus \{0\}$, is the unique nonzero nullvector of $F_v(v)$ at the root.

Proof: See [5] .

After linearizing $F(v) = 0$, we have the following under-determined linear system of equations

$$F_v(v^{(k)}) \Delta v^{(k)} = -F(v^{(k)}). \tag{15}$$

Let $n^{(k)}$ be the exact nullvector of the Jacobian $F_v(v^{(k)})$. By adding the extra condition $n^{(k)T} \Delta v^{(k)} = 0$, which stems from Lemma 1.1, to the under-determined linear system of Equations (15), we obtain the following square linear system of equations

$$\begin{bmatrix} F_v(v^{(k)}) \\ n^{(k)T} \end{bmatrix} \Delta v^{(k)} = \begin{bmatrix} -F(v^{(k)}) \\ 0 \end{bmatrix}. \tag{16}$$

3. Square System of Equations for the Numerical Computation of the Complex Eigenvalues of a Matrix

In the preceding section, we saw that by adding an extra equation to the under-determined linear system of Equations (15), we obtained a square system of equations (16). However, in practice we would never compute $n^{(k)}$; instead, Theorem 2.1 guarantees the existence of a unique nullvector $\phi^{(k)}$ at the root. This is the motivation for the discussion in this section. Here, we use $\phi^{(k)}$ defined by $\phi^{(k)} = [z_2^{(k)T}, -z_1^{(k)T}, 0, 0]^T$ as an approximation to $n^{(k)}$ in (16) and show that the solution obtained by solving (15) is equivalent to the solution obtained by solving

$$\begin{bmatrix} F_v(v^{(k)}) \\ \phi^{(k)T} \end{bmatrix} \Delta v^{(k)} = \begin{bmatrix} -F(v^{(k)}) \\ 0 \end{bmatrix}, \tag{17}$$

in the absence of round-off errors. To do this, we show that $\phi^{(k)T} \Delta v^{(k)} = 0$ for each $k$; this is the main result of this section, Theorem 3.1. Algorithm 1 is given for computing an algebraically simple eigenpair of the pencil $(A, B)$. Note that since $M$ was shown in Section 2 to be singular at the root, this section is anchored on the assumption that when $v$ is not at the root, $M$ is nonsingular.
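In numpy terms, one step of the resulting square solve (17) might look as follows; this is a sketch reusing the `blocks` helper above, with `phi` the approximate nullvector of Theorem 2.1 and the function name ours.

```python
import numpy as np

def bordered_step(A, B, z1, z2, alpha, beta):
    """Solve the square bordered system (17) and update v = (z1, z2, alpha, beta)."""
    n = A.shape[0]
    F, Fv, *_ = blocks(A, B, z1, z2, alpha, beta)
    phi = np.concatenate([z2, -z1, [0.0, 0.0]])   # phi^(k) of Theorem 2.1
    K = np.vstack([Fv, phi])                      # (2n+2)-by-(2n+2)
    dv = np.linalg.solve(K, np.concatenate([-F, [0.0]]))
    return (z1 + dv[:n], z2 + dv[n:2 * n],
            alpha + dv[2 * n], beta + dv[2 * n + 1])
```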

First, we define the $2n \times 2n$ matrix $J$ (see also [11]) as

$$J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}, \tag{18}$$

and

$$J w = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} z_2 \\ -z_1 \end{bmatrix}. \tag{19}$$

The matrix $J$ satisfies the following properties (verified numerically in the sketch after this list):

1) $J^T = -J$.

2) $J^T J = I_{2n}$, where $I_{2n}$ is the $2n \times 2n$ identity matrix.

3) $J^2 = -I_{2n}$.

4) The matrix $J$ commutes with $M$, i.e., $J M = M J$.

5) For $w \in \mathbb{R}^{2n}$, $w^T B_1 J w = w^T J B_1 w = 0$.

6) Let $u$ be an unknown vector that solves $M u = B_1 w$. By premultiplying both sides by $J$ we obtain $J M u = J B_1 w$, and hence $M J u = B_1 J w$ because $J$ commutes with both $M$ and $B_1$. Therefore, if $M u = B_1 w$, then $J u$ solves $M (J u) = B_1 J w$.
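These properties are straightforward to confirm numerically; a short sketch with a random matrix of the structure (10) (here `X` stands for $A - \alpha B$ and the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])                      # (18)

X, beta = rng.standard_normal((n, n)), 0.7
Bsym = np.eye(n)                                     # any SPD B works here
M = np.block([[X, beta * Bsym], [-beta * Bsym, X]])  # structure (10)
B1 = np.block([[Bsym, Z], [Z, Bsym]])
w = rng.standard_normal(2 * n)

assert np.allclose(J.T, -J)                  # property 1
assert np.allclose(J.T @ J, np.eye(2 * n))   # property 2
assert np.allclose(J @ J, -np.eye(2 * n))    # property 3
assert np.allclose(J @ M, M @ J)             # property 4
assert np.isclose(w @ B1 @ (J @ w), 0.0)     # property 5
```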

We begin by writing the linear system of Equations (15) explicitly. For ease of notation, we drop the superscripts, write $w_+ = w + \Delta w$ where $w_+ = w^{(k+1)}$, and replace $w^{(k)}$ and $[\Delta z_1^{(k)T}, \Delta z_2^{(k)T}]^T$ by $w$ and $\Delta w$, respectively. This means that (15) can now be rewritten as

$$\begin{bmatrix} M & -B_1 w & B_1 J w \\ -(B_1 w)^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta w \\ \Delta \alpha \\ \Delta \beta \end{bmatrix} = -\begin{bmatrix} M w \\ -\tfrac{1}{2} w^T B_1 w + \tfrac{1}{2} \end{bmatrix}, \tag{20}$$

which is equivalent to the following system of equations

$$M \Delta w - \Delta \alpha B_1 w + \Delta \beta B_1 J w = -M w,$$

$$-w^T B_1 \Delta w = \tfrac{1}{2} w^T B_1 w - \tfrac{1}{2}.$$

After rearrangement, the first $2n$ equations reduce to

$$M w_+ - \Delta \alpha B_1 w + \Delta \beta B_1 J w = 0. \tag{21}$$

By multiplying both sides of the $(2n+1)$th equation by $-2$, we obtain

$$2 w^T B_1 \Delta w + w^T B_1 w = 1.$$

This in turn reduces to

$$w^T B_1 (w + 2 \Delta w) = 1. \tag{22}$$

Observe that since $w_+ = w + \Delta w$, we have $2 \Delta w = 2 w_+ - 2 w$ and $w + 2 \Delta w = 2 w_+ - w$. Now, $w^T B_1 (w + 2 \Delta w) = w^T B_1 (2 w_+ - w) = 2 w^T B_1 w_+ - w^T B_1 w$. Consequently,

$$w^T B_1 w_+ = \tfrac{1}{2} \big( w^T B_1 w + 1 \big). \tag{23}$$

The combined set of Equations (21) and (23), which is the simplified form of (20), can be expressed as:

$$\begin{bmatrix} M & -B_1 w & B_1 J w \\ -(B_1 w)^T & 0 & 0 \end{bmatrix} \begin{bmatrix} w_+ \\ \Delta \alpha \\ \Delta \beta \end{bmatrix} = \begin{bmatrix} 0 \\ -\tfrac{1}{2} \big( w^T B_1 w + 1 \big) \end{bmatrix}. \tag{24}$$

We assume that the $2n \times 2n$ matrix $M$ is nonsingular except at the root. This forms the basis for the following discussion; that is, we want to show that when not at the root, $\phi^{(k)T} \Delta v^{(k)} = 0$.

First of all, let the exact nullvector $n$ of
$$F_v(v) = \begin{bmatrix} M & -B_1 w & B_1 J w \\ -(B_1 w)^T & 0 & 0 \end{bmatrix}$$
be defined as $n = [n_w^T, n_\alpha, n_\beta]^T$, where $n_w \in \mathbb{R}^{2n}$ and $n_\alpha, n_\beta$ are real scalars, with $J w$ and $M$ defined by (19) and (10), respectively. Hence,

$$\begin{bmatrix} M & -B_1 w & B_1 J w \\ -(B_1 w)^T & 0 & 0 \end{bmatrix} \begin{bmatrix} n_w \\ n_\alpha \\ n_\beta \end{bmatrix} = 0,$$

then after expanding the matrix-vector multiplication, we obtain

$$M n_w - n_\alpha B_1 w + n_\beta B_1 J w = 0, \tag{25}$$

$$w^T B_1 n_w = 0. \tag{26}$$

We make clear at this juncture that the nullvector $n = [n_w^T, n_\alpha, n_\beta]^T$ is not exactly the same as $\phi = [(J w)^T, 0, 0]^T$: the latter has the form of the exact nullvector at the root but is evaluated at the $k$th iterate, while the former is the nullvector even when not at the root.

Another way of writing (24) is as follows

$$M w_+ = \Delta \alpha B_1 w - \Delta \beta B_1 J w. \tag{27}$$

This means that we could solve (24) by solving

$$M u = B_1 w, \tag{28}$$

for $u$, after which the solution of (27) is given by

$$w_+ = \Delta \alpha \, u - \Delta \beta \, J u. \tag{29}$$

With this expression for $w_+$, it can be observed that

$$M w_+ = \Delta \alpha M u - \Delta \beta M J u = \Delta \alpha B_1 w - \Delta \beta J M u = \Delta \alpha B_1 w - \Delta \beta J B_1 w = \Delta \alpha B_1 w - \Delta \beta B_1 J w,$$

which means that $w_+$ is well defined. Furthermore, from (25),

$$M n_w = n_\alpha B_1 w - n_\beta B_1 J w,$$

and using the fact that $J$ commutes with $B_1$, together with (28), this gives

$$n_w = n_\alpha u - n_\beta J u. \tag{30}$$

Since $w$ is $B_1$-orthogonal to $n_w$ by virtue of Equation (26), taking the $B_1$-inner product of both sides of the above with $w$ yields

$$w^T B_1 n_w = n_\alpha w^T B_1 u - n_\beta w^T B_1 J u = 0,$$

from which we deduce

$$n_\alpha = w^T B_1 J u \quad \text{and} \quad n_\beta = w^T B_1 u. \tag{31}$$

Consider the problem of solving the under-determined linear system of Equations (20) for the $2n+2$ real unknowns $\Delta v = [\Delta w^T, \Delta \alpha, \Delta \beta]^T$. It was stated in Lemma 1.1 that the minimum-norm solution of an under-determined linear system of equations is orthogonal to the nullspace. It is an application of this result that yields the following important relationship:

$$0 = n^T \Delta v = n_w^T \Delta w + n_\alpha \Delta \alpha + n_\beta \Delta \beta. \tag{32}$$

If we append the nullvector $n$ as a last row to (24), then

$$\begin{bmatrix} M & -B_1 w & B_1 J w \\ -(B_1 w)^T & 0 & 0 \\ n_w^T & n_\alpha & n_\beta \end{bmatrix} \begin{bmatrix} w_+ \\ \Delta \alpha \\ \Delta \beta \end{bmatrix} = \begin{bmatrix} 0 \\ -\tfrac{1}{2} \big( w^T B_1 w + 1 \big) \\ n_w^T w \end{bmatrix}. \tag{33}$$

Expanding the second-to-last row gives $w^T B_1 w_+ = \tfrac{1}{2}(w^T B_1 w + 1)$. But from (29), $w_+ = \Delta \alpha u - \Delta \beta J u$, so taking the $B_1$-inner product of both sides with $w$ yields
$$w^T B_1 w_+ = \Delta \alpha \big( w^T B_1 u \big) - \Delta \beta \big( w^T B_1 J u \big) = \tfrac{1}{2} \big( w^T B_1 w + 1 \big).$$

Using the definitions (31) of $n_\alpha$ and $n_\beta$, we obtain

$$n_\beta \Delta \alpha - n_\alpha \Delta \beta = \tfrac{1}{2} \big( w^T B_1 w + 1 \big), \tag{34}$$

where the unknown quantities $\Delta \alpha$ and $\Delta \beta$ are to be determined, so we need one extra equation to do so. The last row of the matrix-vector multiplication (cf. (33)) above comes from (32), since

$$n_w^T w_+ + n_\alpha \Delta \alpha + n_\beta \Delta \beta = n_w^T (w + \Delta w) + n_\alpha \Delta \alpha + n_\beta \Delta \beta = n_w^T w + \big( n_w^T \Delta w + n_\alpha \Delta \alpha + n_\beta \Delta \beta \big) = n_w^T w.$$

If we substitute the expression (30) for $n_w$ and (29) for $w_+$ into the left-hand side, then one obtains

$$\big[ n_\alpha u^T - n_\beta (J u)^T \big] \big[ \Delta \alpha u - \Delta \beta J u \big] + n_\alpha \Delta \alpha + n_\beta \Delta \beta = n_w^T w.$$

Furthermore, expanding the first term on the left-hand side using the properties of $J$ (the cross terms vanish since $u^T J u = 0$ by the skew-symmetry of $J$) gives

$$\big[ n_\alpha u^T - n_\beta (J u)^T \big] \big( \Delta \alpha u - \Delta \beta J u \big) = n_\alpha \Delta \alpha \, u^T u + n_\beta \Delta \beta \, u^T J^T J u = n_\alpha \Delta \alpha \| u \|^2 + n_\beta \Delta \beta \| u \|^2.$$

Consequently,

$$n_\alpha \Delta \alpha \| u \|^2 + n_\beta \Delta \beta \| u \|^2 + n_\alpha \Delta \alpha + n_\beta \Delta \beta = \big( 1 + \| u \|^2 \big) \big( n_\alpha \Delta \alpha + n_\beta \Delta \beta \big) = n_w^T w.$$

Observe that because $u$ is real, $(1 + \|u\|^2)$ is nonzero. Accordingly, after dividing both sides by $(1 + \|u\|^2)$,

$$n_\alpha \Delta \alpha + n_\beta \Delta \beta = \frac{n_w^T w}{1 + \| u \|^2}. \tag{35}$$

We combine the two equations (34) and (35) below

$$\begin{bmatrix} n_\beta & -n_\alpha \\ n_\alpha & n_\beta \end{bmatrix} \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} \big( w^T B_1 w + 1 \big) \\ \dfrac{n_w^T w}{1 + \| u \|^2} \end{bmatrix}, \tag{36}$$

to compute $\Delta \alpha$ and $\Delta \beta$ simultaneously. The matrix on the left-hand side is always nonsingular except at the root (in which case all its entries are zero), because its determinant is $n_\alpha^2 + n_\beta^2$. Equation (35) can now be applied to simplify

$$w^T B_1 J w_+ = w^T B_1 J \big( \Delta \alpha u - \Delta \beta J u \big) = w^T B_1 \big( \Delta \alpha J u + \Delta \beta u \big) = \Delta \alpha \big( w^T B_1 J u \big) + \Delta \beta \big( w^T B_1 u \big) = n_\alpha \Delta \alpha + n_\beta \Delta \beta = 0. \tag{37}$$

Notice that we have used the property $J^2 = -I_{2n}$ to arrive at the second step above, together with the definition (29) of $w_+$.

Next, we establish the orthogonality of $\phi$ and $\Delta v$ in the next key result. Before we do that, notice from Theorem 2.1 that $\phi$, at the root, is a scalar multiple of $[z_2^T, -z_1^T, 0, 0]^T$, and by the definition of $J$ in (18) we can also write $\phi = [(J w)^T, 0, 0]^T$, with $w = [z_1^T, z_2^T]^T$. This result holds whether or not $v = v^{(k)}$ is at the root, but the analysis used to establish the orthogonality rests on the assumption that $M$ is nonsingular when not at the root. As a result, after presenting Algorithm 1 (to follow shortly), we then show that the same result holds when $M$ is singular at the root.

Theorem 3.1 Let $\phi$ be an approximation to the exact nullvector $n$ of $F_v(v)$. Then $\phi$ is orthogonal to $\Delta v$.

Proof: To prove this, recall that $v = [w^T, \alpha, \beta]^T$, $v_+ = [w_+^T, \alpha_+, \beta_+]^T$ and $\phi = [(J w)^T, 0, 0]^T$. This implies

$$\phi^T \Delta v = \phi^T (v_+ - v) = (J w)^T (w_+ - w) = w^T J^T w_+ - w^T J^T w = w^T J w - w^T J w_+ = -w^T J w_+ = 0, \tag{38}$$

showing that $\phi^T \Delta v = 0$. In arriving at the last step above, we have used the properties of $J$ and the special case of (37) in which $B_1 = I_{2n}$.

We present Algorithm 1, which involves the solution of two linear systems: the first is the $2n \times 2n$ linear system of Equations (28), while the second is the $2 \times 2$ linear system (36).
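A sketch of one such iteration in SciPy, with the sparse LU solve of (28) followed by the $2 \times 2$ solve (36); the function name is ours, and $M$, $B_1$ and $J$ are assumed to have been assembled as scipy.sparse matrices at the current $(\alpha, \beta)$.

```python
import numpy as np
from scipy.sparse.linalg import splu

def algorithm1_step(M, B1, J, w, alpha, beta):
    """One iteration: solve (28) by sparse LU, then (36), then update via (29)."""
    u = splu(M.tocsc()).solve(B1 @ w)      # (28): M u = B1 w
    Ju = J @ u
    n_alpha = w @ (B1 @ Ju)                 # (31)
    n_beta = w @ (B1 @ u)
    n_w = n_alpha * u - n_beta * Ju         # (30)
    lhs = np.array([[n_beta, -n_alpha],
                    [n_alpha, n_beta]])     # (36)
    rhs = np.array([0.5 * (w @ (B1 @ w) + 1.0),
                    (n_w @ w) / (1.0 + u @ u)])
    d_alpha, d_beta = np.linalg.solve(lhs, rhs)
    w_plus = d_alpha * u - d_beta * Ju      # (29)
    return w_plus, alpha + d_alpha, beta + d_beta
```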

Algorithm 1 is stopped as soon as
$$\| \Delta v^{(k)} \| \leq tol,$$
where $\Delta v^{(k)} = [(w_+ - w)^T, \Delta \alpha, \Delta \beta]^T$. The above analysis shows that $\phi^T \Delta v = 0$ when $v^{(k)}$ is not at the root. Next, we want to show that the same result holds at the root.

In a manner analogous to the proof of Lemma 2.1, we premultiply both sides of the first block of Equations (24) by $C^T$, where $C$ is the $2n \times 2$ real matrix defined by (14), whose columns are the left nullvectors of $M$. With $M$ and $N$ as defined in (10) and (11), respectively, this gives

$$C^T M w_+ + C^T N \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} = 0.$$

But by the definition of $C$, the first term on the left-hand side of the equation above is zero, since $C^T M = O$. It can be recalled from the proof of Lemma 2.1 that the $2 \times 2$ real matrix $H = C^T N$ is nonsingular at the root. This implies

$$C^T N \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} = H \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} = \begin{bmatrix} -\big( \psi_1^T B z_1 + \psi_2^T B z_2 \big) & \psi_1^T B z_2 - \psi_2^T B z_1 \\ -\big( \psi_1^T B z_2 - \psi_2^T B z_1 \big) & -\big( \psi_1^T B z_1 + \psi_2^T B z_2 \big) \end{bmatrix} \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} = 0.$$

Accordingly, $\Delta \alpha = \Delta \beta = 0$ because of the nonsingularity of $H$. Therefore,

$$M w_+ = 0. \tag{39}$$

At the root, $M$ has two nonzero nullvectors and is hence singular. The implication of this fact and of (39) is that $w_+$ lies in the nullspace of $M$. But we have already established that the nullspace of $M$ is spanned by $w = [z_1^T, z_2^T]^T$ and $w_1 = J w = [z_2^T, -z_1^T]^T$. Hence, we can write

$$w_+ = \mu w + \tau w_1.$$

Now, from the last equation in (24),

$$\tfrac{1}{2} \big( w^T B_1 w + 1 \big) = w^T B_1 w_+ = \mu w^T B_1 w + \tau w^T B_1 w_1 = \mu w^T B_1 w + \tau \big( z_1^T B z_2 - z_2^T B z_1 \big) = \mu w^T B_1 w.$$

Consequently,

$$\mu = \frac{w^T B_1 w + 1}{2 w^T B_1 w}.$$

But at the root, $w^T B_1 w = z_1^T B z_1 + z_2^T B z_2 = 1$. Therefore, $\mu = 1$, $w_+ = w$, $z_{1+} = z_1$ and $z_{2+} = z_2$. This will now be used to deduce the following corollary at the root.

Corollary 3.1 Let $\phi = [(J w)^T, 0, 0]^T$ and $v_+ = [w_+^T, \alpha_+, \beta_+]^T$. Then $\phi$ is orthogonal to $\Delta v$ at the root.

Proof: This follows from the second-to-last line of the proof of Theorem 3.1 (cf. Equation (38)), where now $w_+ = w$. Hence,

$$\phi^T \Delta v = -w^T J w_+ = -w^T J w = 0.$$

4. Inexact Inverse Iteration with Preconditioning for Solving (28)

In Section 2, we found two nonzero nullvectors of $M$ at the root. Because of this property of $M$ at the root, in this section we describe an inexact inverse iteration technique for solving the large sparse system of Equations (28) in step 2 of Algorithm 1, and we present Algorithm 2 and Algorithm 3. The result of a numerical experiment which supports the theory is given in Section 5.

We give a version of inexact inverse iteration in Algorithm 2, using a fixed tolerance. Note that because of the special nature of $M$ at the root, the choice of preconditioner is crucial if convergence to the desired accuracy is to be achieved. The inexact linear solver that we use is preconditioned GMRES [9], for which we define the following block triangular preconditioner,

$$P \equiv \begin{bmatrix} A - \alpha B & \beta B \\ O & A - \alpha B \end{bmatrix}. \tag{40}$$
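In SciPy terms, the preconditioned inexact solve of (28) might be sketched as follows; applying $P^{-1}$ via a sparse LU factorization of $P$ is our implementation choice, and the paper's may differ. (In older SciPy versions the GMRES tolerance keyword is `tol` rather than `rtol`.)

```python
import numpy as np
from scipy.sparse import bmat
from scipy.sparse.linalg import splu, gmres, LinearOperator

def solve_inexact(A, B, alpha, beta, rhs, tol):
    """Approximately solve M u = rhs by GMRES preconditioned with P of (40)."""
    M = bmat([[A - alpha * B, beta * B],
              [-beta * B, A - alpha * B]], format='csc')   # (10)
    P = bmat([[A - alpha * B, beta * B],
              [None, A - alpha * B]], format='csc')        # (40), block triangular
    P_lu = splu(P)
    Pinv = LinearOperator(M.shape, matvec=P_lu.solve)      # action of P^{-1}
    u, info = gmres(M, rhs, M=Pinv, rtol=tol)              # inexact inner solve
    return u, info
```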

Next, we present Algorithm 3, which is the inexact inverse iteration analogue of Algorithm 1. The stopping criterion for the outer iteration in Algorithm 3 depends on the norms of the eigenvalue residuals, that is,

$$\| r_1^{(k)} \| = \big\| \big( A - \alpha^{(k)} B \big) z_1^{(k)} + \beta^{(k)} B z_2^{(k)} \big\| \leq tol,$$

and

$$\| r_2^{(k)} \| = \big\| \big( A - \alpha^{(k)} B \big) z_2^{(k)} - \beta^{(k)} B z_1^{(k)} \big\| \leq tol,$$

and

$$\left\| \begin{bmatrix} \Delta \alpha \\ \Delta \beta \end{bmatrix} \right\| \leq tol \quad \text{and} \quad \| w_+ - w \| \leq tol.$$

5. Numerical Experiments

As mentioned earlier, the sparse linear system of equations in step 2 of Algorithm 1 is solved with an LU-type factorization of $M$, which is expensive; besides, the $L$ and $U$ factors may have more nonzero elements than $M$. In this section, our main goal is to use preconditioned GMRES with the block triangular preconditioner $P$ in (40) to solve the system $M u = B_1 w$ inexactly. We do this by considering a single numerical experiment with a fixed and with a decreasing tolerance. Results are presented by means of tables and figures.

Example 5.1

Consider the $200 \times 200$ matrix bwm200.mtx from the Matrix Market library [12]. It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem with $B = I$ was also studied in [13], and we are interested in finding the rightmost eigenvalue of $A$, which is closest to the imaginary axis, and its corresponding eigenvector.

In this example, we take $\alpha^{(0)} = 0.0$ and $\beta^{(0)} = 2.5$ in line with [13], and we take $z_1^{(0)}$ and $z_2^{(0)}$ to be scaled multiples of $\mathbf{1}$, the vector of all ones.
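For reproducibility, the test problem can be set up as follows; the local file path and the exact scaling of the initial vectors are our assumptions.

```python
import numpy as np
from scipy.io import mmread
from scipy.sparse import identity

A = mmread('bwm200.mtx').tocsr()   # 200-by-200 Brusselator Jacobian [12]
n = A.shape[0]
B = identity(n, format='csr')      # B = I for this example

alpha, beta = 0.0, 2.5             # initial shift, as in [13]
ones = np.ones(n)
z1 = ones / np.linalg.norm(ones)   # assumed scaling of the all-ones vector
z2 = ones / np.linalg.norm(ones)
w = np.concatenate([z1, z2])
```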

In Table 1 and Table 2, we present results for the computation of the eigenpair $(z, \lambda)$ for the matrix in Example 5.1, stopping once the norms of $[\Delta \alpha, \Delta \beta]^T$ and of the eigenvalue residuals $r_1^{(k)}$ and $r_2^{(k)}$ are smaller than the outer tolerance $\tau_{\text{outer}} = 2.5 \times 10^{-14}$. In Table 1, $f$ denotes the number of inner iterations used by preconditioned GMRES to satisfy the fixed inner tolerance $\tau = 0.6$, while $d$ in Table 2 denotes the number of inner iterations used by preconditioned GMRES to satisfy the decreasing inner tolerance $\tau_{\text{inner}} = \min\{0.6, 0.6 \| r_1^{(k)} \|\}$. Quadratic convergence to $\lambda = 1.81999 \times 10^{-5} + 2.13950 i$ is easily observed from the second up to the seventh iterate in columns five and six of Table 2. However, this quadratic convergence is lost in the last iterate, which could be due to the fact that we are solving a nearly singular system as we approach the root. As shown in columns five and six of Table 1, as we approach the root, more inner iterations were needed to satisfy the decreasing inner tolerance than the fixed one. From Table 1, we observed superlinear convergence in columns five and six.

Table 1. Convergence to the eigenvalue $\lambda = 1.81999 \times 10^{-5} + 2.13950 i$ with a fixed inner tolerance for Example 5.1. The last column shows the number of inner iterations it took to satisfy the fixed inner tolerance $\tau = 0.6$.

Table 2. Quadratic convergence to the eigenvalue $\lambda = 1.81999 \times 10^{-5} + 2.13950 i$ with a decreasing tolerance for Example 5.1. The last column shows the number of inner iterations it took to satisfy the decreasing inner tolerance $\tau = \min\{0.6, 0.6 \| r_1^{(k)} \|\}$.

Figure 1. Convergence history of the eigenvalue residuals for Example 5.1 with a fixed tolerance $\tau_{\text{inner}} = 0.6$.

Figure 1 and Figure 2 show plots of the norms of the eigenvalue residuals against the outer iterations with fixed and decreasing inner tolerances, respectively. While the norms of the eigenvalue residuals decayed almost superlinearly in Figure 1, we observed a quadratic decrease in the norms of the eigenvalue residuals in Figure 2. It is quite surprising that Algorithm 3 works at all, because $M$ is singular at the root, which means we solved a singular system at the root.

Figure 2. Convergence history of the eigenvalue residuals for Example 5.1 with a decreasing tolerance $\tau_{\text{inner}} = \min\{0.6, 0.6 \| r_1^{(k)} \|\}$.

6. Conclusions

While Ruhe ([2], Section 3) used the normalisation $c^H z = 1$ and solved the resulting $(n+1) \times (n+1)$ nonlinear system of equations to obtain $[z^T, \lambda]^T$, we have shown that, with the addition of the non-differentiable normalisation $z^H B z = 1$, it is still possible to convert the resulting under-determined system of nonlinear equations into a square one.

In this work, we obtained Algorithm 1, which consists of a combination of a $2n \times 2n$ system of equations (the same as in [13]) and a $2 \times 2$ system. A numerical example shows that using an LU solver on the one hand, and preconditioned GMRES as an inexact solver on the other, to solve the large sparse $2n \times 2n$ system of Equations (28), followed in each case by the $2 \times 2$ system, gives similar results in the limit. By and large, the algorithms presented in this paper rely on good initial guesses to the desired eigenpair of interest.

Acknowledgements

The first author acknowledges funds provided by the University of Bath, United Kingdom during his PhD as well as an anonymous referee for useful comments.

Cite this paper

Akinola, R.O., Kutchin, S.Y., Ayodele, A.S. and Muka, K.O. (2018) A Block-Preconditioned Inexact Linear Solver for Computing the Complex Eigenpairs of a Large Sparse Matrix. Journal of Applied Mathematics and Physics, 6, 429-445. https://doi.org/10.4236/jamp.2018.62040

References

1. Stewart, G.W. (2001) Matrix Algorithms. Vol. II: Eigensystems, SIAM.

2. Ruhe, A. (1973) Algorithms for the Nonlinear Eigenvalue Problem. SIAM Journal on Numerical Analysis, 10, 674-689. https://doi.org/10.1137/0710059

3. Keller, H.B. (1977) Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems. In: Rabinowitz, P., Ed., Applications of Bifurcation Theory, Academic Press, New York, 359-384.

4. Deuflhard, P. (2004) Newton Methods for Nonlinear Problems. Springer, 174-175.

5. Akinola, R.O. (2014) Theoretical Expression for the Nullvector of the Jacobian: Inverse Iteration with a Complex Shift. International Journal of Innovation in Science and Mathematics, 2, 367-371.

6. Akinola, R.O. and Spence, A. (2014) Two-Norm Normalization for the Matrix Pencil: Inverse Iteration with a Complex Shift. International Journal of Innovation in Science and Mathematics, 2, 435-439.

7. Akinola, R.O. and Spence, A. (2015) Numerical Computation of the Complex Eigenvalues of a Matrix by Solving a Square System of Equations. Journal of Natural Sciences Research, 5, 144-157.

8. Akinola, R.O. (2015) Computing the Complex Eigenpair of a Large Sparse Matrix in Complex Arithmetic. International Journal of Pure & Engineering Mathematics (IJPEM), 3, 137-158.

9. Saad, Y. and Schultz, M.H. (1986) GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 7, 856-869. https://doi.org/10.1137/0907058

10. Boyd, S. (2008/2009) Lecture 8: Least Norm Solutions of Under-Determined Equations. EE263, Autumn.

11. Freitag, M.A. and Spence, A. (2009) The Calculation of the Distance to Instability by the Computation of a Jordan Block. Linear and Nonlinear Eigenproblems for PDEs, 274-275.

12. Boisvert, B., Pozo, R., Remington, K., Miller, B. and Lipman, R. Matrix Market. http://math.nist.gov/MatrixMarket/

13. Parlett, B.N. and Saad, Y. (1987) Complex Shift and Invert Strategies for Real Matrices. Linear Algebra and Its Applications, 88-89, 575-595.