
Computing eigenpairs of a matrix pencil leads to large linear systems of equations. In this work, we show how a block triangular preconditioner in GMRES can be used to solve a large sparse nonsymmetric system of equations inexactly, using either a fixed or a decreasing tolerance. While the fixed-tolerance solver converged superlinearly to the eigenvalue of interest, the decreasing-tolerance solver converged quadratically. This improves on an earlier result, which converged harmonically.

Let A be a large, sparse, real n by n nonsymmetric matrix and B ∈ ℝ n × n a symmetric positive definite matrix. In this paper, we consider the problem of computing an eigenpair ( z , λ ) of the following generalized complex eigenvalue problem

A z = λ B z , z ∈ ℂ n , z ≠ 0 , (1)

where λ ∈ ℂ is the eigenvalue of the pencil ( A , B ) and z its corresponding complex eigenvector. We assume that the eigenpair of interest ( z , λ ) is algebraically simple, with ψ H the corresponding left eigenvector such that [

ψ H B z ≠ 0. (2)

By adding the normalization

z H B z = 1, (3)

to (1), the combined system of equations can be expressed in the form F ( z ) = 0 as

F ( z ) = [ ( A − λ B ) z ; − ( 1 / 2 ) z H B z + 1 / 2 ] = 0 . (4)

Note that z H B z is real since B is symmetric and positive definite. This results in solving a nonlinear system of n complex and one real equations for the n + 1 complex unknowns v = [ z , λ ] T . The reason we cannot apply Newton’s method directly to (4) is that z ¯ in the normalization z H B z = z ¯ T B z is not differentiable with respect to z .

Recall that for a real eigenpair ( z , λ ) , (4) consists of ( n + 1 ) real equations in ( n + 1 ) real unknowns, and Newton’s method for solving (4) involves the solution of the ( n + 1 ) by ( n + 1 ) square linear systems

[ A − λ ( k ) B , − B z ( k ) ; − ( B z ( k ) ) T , 0 ] [ Δ z ( k ) ; Δ λ ( k ) ] = − [ ( A − λ ( k ) B ) z ( k ) ; − ( 1 / 2 ) z ( k ) T B z ( k ) + 1 / 2 ] , (5)

for the ( n + 1 ) real unknowns Δ v ( k ) = [ Δ z ( k ) T , Δ λ ( k ) ] , and updating v ( k + 1 ) = v ( k ) + Δ v ( k ) . Secondly, if instead of the normalization (3), we add c H z = 1 , where c is a fixed complex vector (see, for example, [

[ ( A − λ B ) , − B z ; c H , 0 ] .

The above Jacobian is square and can be easily shown to be nonsingular, using the ABCD Lemma [
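To make the iteration concrete, the following sketch implements the bordered Newton step (5) for a real eigenpair, assuming NumPy and small dense matrices; the function name `newton_real_eigenpair` and the toy pencil in the usage note are our own illustration, not taken from the paper.

```python
import numpy as np

def newton_real_eigenpair(A, B, z0, lam0, tol=1e-12, maxit=50):
    """Newton's method for a real eigenpair of the pencil (A, B) using
    the bordered system (5) with the normalization z^T B z = 1.
    A, B are real n-by-n arrays; (z0, lam0) is an initial guess."""
    n = A.shape[0]
    z, lam = z0.astype(float), float(lam0)
    for _ in range(maxit):
        Bz = B @ z
        # Residual F(v) of (4) for a real eigenpair.
        F = np.concatenate([(A - lam * B) @ z,
                            [-0.5 * (z @ Bz) + 0.5]])
        # Jacobian of (5): [[A - lam*B, -B z], [-(B z)^T, 0]].
        Jac = np.block([[A - lam * B, -Bz[:, None]],
                        [-Bz[None, :], np.zeros((1, 1))]])
        dv = np.linalg.solve(Jac, -F)
        z += dv[:n]
        lam += dv[n]
        if np.linalg.norm(dv) <= tol:
            break
    return z, lam
```

For instance, with A = diag(1, 2, 3), B = I and a starting guess near (e1, 1), the iteration converges in a handful of steps to an eigenpair satisfying A z = λ B z with z T B z = 1.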

Our approach for analyzing the solution of (4) for v begins by splitting the eigenpair ( z , λ ) into real and imaginary parts: z = z 1 + i z 2 , λ = α + i β , where z 1 , z 2 ∈ ℝ n and α , β ∈ ℝ . After expanding (4), we obtain an under-determined system of 2 n + 1 real nonlinear equations in the 2 n + 2 real unknowns v = [ z 1 , z 2 , α , β ] T , and it is natural to use the Gauss-Newton method (see, for example, Deuflhard ( [

Algorithm 1. Computing the complex eigenvalues of the pencil ( A , B ) .

The main mathematical tools used in this paper are the LU factorization, inexact inverse iteration and preconditioned Generalized Minimal Residual (GMRES) [

The following result underpins the analysis in this paper.

Lemma 1.1: [

The minimum norm solution Δ w = − F w ( w ) T [ F w ( w ) F w ( w ) T ] − 1 F ( w ) of the under-determined linear system F w ( w ) Δ w = − F ( w ) is orthogonal to the nullspace of F w ( w ) .

Algorithm 2. Inexact inverse iteration algorithm.

Algorithm 3. Complex eigenvalues of the pencil ( A , B ) using Inexact Inverse Iteration with preconditioning.

In the next section, we will express both z and λ as λ = α + i β and z = z 1 + i z 2 , convert the nonlinear system (4) to a real under-determined system of nonlinear equations and prove some important results.

In this section, we will expand the system of n complex and one real nonlinear equations in n + 1 complex unknowns (4) by writing z and λ as z = z 1 + i z 2 and λ = α + i β , respectively. The reason we obtain an under-determined system of equations rather than a square one is that expanding z H B z = 1 gives only one real equation, since B is symmetric positive definite, while ( A − λ B ) z = 0 results in 2n real equations. This results in an under-determined system of 2 n + 1 real nonlinear equations in 2 n + 2 real unknowns. We then present the under-determined system of nonlinear equations and an explicit expression for its Jacobian. Furthermore, we show in the main result of this section, Theorem 2.1, that if the eigenvalue of interest in ( A , B ) is algebraically simple, then the Jacobian has linearly independent rows. We also find the right nullvector of the Jacobian at the root and prove that it is unique.

If we let z = z 1 + i z 2 and λ = α + i β , then the square nonlinear system of Equations (4) can be written as

( A − λ B ) z = [ A − ( α + i β ) B ] ( z 1 + i z 2 ) = ( A − α B ) z 1 + β B z 2 + i [ ( A − α B ) z 2 − β B z 1 ] , (6)

and

z H B z = z 1 T B z 1 + z 2 T B z 2 . (7)

Hence,

− 1 2 z H B z + 1 2 = − 1 2 ( z 1 T B z 1 + z 2 T B z 2 ) + 1 2 = 0 .

Since ( A − λ B ) z = 0 , we equate the real and imaginary parts of (6) to zero and obtain the 2n real equations ( A − α B ) z 1 + β B z 2 = 0 and ( A − α B ) z 2 − β B z 1 = 0 . This means that F ( v ) consists of the 2n real equations arising from (6) together with the one real equation − ( 1 / 2 ) ( z 1 T B z 1 + z 2 T B z 2 ) + 1 / 2 = 0 ;

F ( v ) = [ ( A − α B ) z 1 + β B z 2 ; − β B z 1 + ( A − α B ) z 2 ; − ( 1 / 2 ) ( z 1 T B z 1 + z 2 T B z 2 ) + 1 / 2 ] = 0 , (8)

where F : ℝ ( 2 n + 2 ) → ℝ ( 2 n + 1 ) . The Jacobian F v ( v ) of F ( v ) which has the following explicit expression

F v ( v ) = [ ( A − α B ) , β B , − B z 1 , B z 2 ; − β B , ( A − α B ) , − B z 2 , − B z 1 ; − ( B z 1 ) T , − ( B z 2 ) T , 0 , 0 ] , (9)

is a 2 n + 1 by 2 n + 2 real matrix. From the Jacobian (9) above, we define the real 2n by 2n matrix M as

M = [ ( A − α B ) , β B ; − β B , ( A − α B ) ] . (10)

Also, we form the 2n by 2 real matrix

N = [ − B z 1 , B z 2 ; − B z 2 , − B z 1 ] = [ − B 1 w , B 1 w 1 ] , (11)

consisting of the product of B 1 = [ B , O ; O , B ] and the matrix of right nullvectors of M at the root, where

w = [ z 1 ; z 2 ] , w 1 = [ z 2 ; − z 1 ] , (12)

and O is the n by n zero matrix. The Jacobian (9) can be rewritten in the following partitioned form

F v ( v ) = [ M , − B 1 w , B 1 w 1 ; − ( B 1 w ) T , 0 , 0 ] = [ M , N ; − ( B 1 w ) T , 0 T ] , (13)

with M and N as defined in (10) and (11), respectively. Note that, at the root,

[ ( A − α B ) , β B ; − β B , ( A − α B ) ] [ z 1 ; z 2 ] = [ ( A − α B ) z 1 + β B z 2 ; ( A − α B ) z 2 − β B z 1 ] = 0 ,

this implies that [ z 1 z 2 ] or its nonzero scalar multiple is a right nullvector of M . In the same vein, we find

[ ( A − α B ) , β B ; − β B , ( A − α B ) ] [ z 2 ; − z 1 ] = [ ( A − α B ) z 2 − β B z 1 ; − { ( A − α B ) z 1 + β B z 2 } ] = 0 ,

and [ z 2 ; − z 1 ] or its nonzero scalar multiple is also a right nullvector of M at the

root. Since the eigenvalue λ of ( A , B ) is algebraically simple by assumption, and in view of (2), we need explicit expressions for the left nullvector of ( A − λ B ) in order to prove that the Jacobian has full row rank at the root. Observe that if we write ψ = ψ 1 + i ψ 2 , where ψ 1 , ψ 2 ∈ ℝ n , for ψ ∈ N ( ( A − λ B ) H ) \ { 0 } , then

ψ H ( A − λ B ) = ( ψ 1 T − i ψ 2 T ) [ ( A − α B ) − i β B ] = ψ 1 T ( A − α B ) − β ψ 2 T B − i [ β ψ 1 T B + ψ 2 T ( A − α B ) ] = 0 T .

Hence, ψ 1 T ( A − α B ) − β ψ 2 T B = 0 T and β ψ 1 T B + ψ 2 T ( A − α B ) = 0 T . The implication of this is that

[ ψ 1 T , ψ 2 T ] M = [ ψ 1 T , ψ 2 T ] [ ( A − α B ) , β B ; − β B , ( A − α B ) ] = [ ψ 1 T ( A − α B ) − β ψ 2 T B , β ψ 1 T B + ψ 2 T ( A − α B ) ] = 0 T ,

which means that [ ψ 1 T , ψ 2 T ] or its nonzero scalar multiple is a left nullvector of M . Similarly,

[ ψ 2 T , − ψ 1 T ] M = [ ψ 2 T , − ψ 1 T ] [ ( A − α B ) , β B ; − β B , ( A − α B ) ] = [ β ψ 1 T B + ψ 2 T ( A − α B ) , − { ψ 1 T ( A − α B ) − β ψ 2 T B } ] = 0 T ,

and it shows that [ ψ 2 T , − ψ 1 T ] is also a left nullvector of M .

So we form the matrix C whose columns are the two left nullvectors of M at the root (in practice C is not computed), as

C = [ ψ 1 , ψ 2 ; ψ 2 , − ψ 1 ] . (14)

Now observe that condition (2) implies

ψ H B z = [ ψ 1 T B z 1 + ψ 2 T B z 2 ] + i [ ψ 1 T B z 2 − ψ 2 T B z 1 ] ≠ 0.

Therefore, at the root, at least one of ψ 1 T B z 1 + ψ 2 T B z 2 and ψ 1 T B z 2 − ψ 2 T B z 1 must be nonzero; the possibility that both are zero is excluded.

Before we continue with the rest of the analysis, we pause to present the main result of this section, which shows that the Jacobian (9) has a one-dimensional nullspace at the root.

Theorem 2.1 Assume that the eigenpair ( z , λ ) of the pencil ( A , B ) is algebraically simple. If z 1 and z 2 are nonzero vectors, then ϕ = τ [ z 2 T , − z 1 T ,0,0 ] , τ ∈ ℝ \ { 0 } , is the unique nullvector of F v ( v ) at the root.

Proof: See [
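Although the proof is given elsewhere, the statement can be checked numerically. The sketch below is our own illustration, assuming NumPy and SciPy and a random pencil: it builds the Jacobian (9) at a computed eigenpair of ( A , B ) and confirms that [ z 2 T , − z 1 T , 0 , 0 ] lies in its nullspace.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))              # nonsymmetric
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)                  # symmetric positive definite

# Pick the eigenpair of (A, B) with the largest imaginary part.
vals, vecs = eig(A, B)
k = int(np.argmax(np.abs(vals.imag)))
lam, z = vals[k], vecs[:, k]
z = z / np.sqrt((z.conj() @ (B @ z)).real)   # enforce z^H B z = 1
alpha, beta = lam.real, lam.imag
z1, z2 = z.real, z.imag

# The Jacobian (9) at the root: a (2n+1)-by-(2n+2) real matrix.
Fv = np.block([
    [A - alpha * B, beta * B, -(B @ z1)[:, None], (B @ z2)[:, None]],
    [-beta * B, A - alpha * B, -(B @ z2)[:, None], -(B @ z1)[:, None]],
    [-(B @ z1)[None, :], -(B @ z2)[None, :], np.zeros((1, 2))],
])
phi = np.concatenate([z2, -z1, np.zeros(2)])
print(np.linalg.norm(Fv @ phi))              # essentially zero: phi is a nullvector
```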

After linearizing F ( v ) = 0 , we have the following under-determined linear system of equations

F v ( v ( k ) ) Δ v ( k ) = − F ( v ( k ) ) . (15)

Let n ( k ) be the exact nullvector of the Jacobian F v ( v ( k ) ) . By adding the extra condition n ( k ) T Δ v ( k ) = 0 , which stems from Lemma 1.1, to the under-determined linear system of Equations (15), we obtain the following square linear system of equations

[ F v ( v ( k ) ) ; n ( k ) T ] Δ v ( k ) = − [ F ( v ( k ) ) ; 0 ] . (16)

In the preceding section, we saw that by adding an extra equation to the under-determined linear system of Equations (15), we obtained the square system of equations (16). However, in practice we would never compute n ( k ) , but Theorem 2.1 guarantees the existence of a unique nullvector ϕ ( k ) at the root. This is the motivation for the discussion in this section. In this section, we use ϕ ( k ) defined by ϕ ( k ) = [ z 2 ( k ) , − z 1 ( k ) ,0,0 ] as an approximation to n ( k ) in (16) and show that the minimum norm solution obtained from (15) is equivalent to the solution obtained by solving

[ F v ( v ( k ) ) ; ϕ ( k ) T ] Δ v ( k ) = − [ F ( v ( k ) ) ; 0 ] , (17)

in the absence of round-off errors. To do this, we will show that ϕ ( k ) T Δ v ( k ) = 0 for each k, and this is presented in the main result of this section: Theorem 3.1. Algorithm 1 is given for computing an algebraically simple eigenpair of the pencil ( A , B ) . Note that since M has been shown to be singular at the root in Section 2, this section is anchored on the assumption that M is nonsingular when v is not at the root.

First, we define the 2n by 2n matrix J as (see, also [

J = [ 0 , I ; − I , 0 ] , (18)

and

J w = [ 0 , I ; − I , 0 ] [ z 1 ; z 2 ] = [ z 2 ; − z 1 ] . (19)

The matrix J satisfies the following properties:

1) J T = − J .

2) J T J = I 2 n , where I 2 n is the 2n by 2n identity matrix.

3) J 2 = − I 2 n .

4) The matrix J commutes with M , i.e., J M = M J .

5) For w ∈ ℝ 2 n , w T B 1 J w = w T J B 1 w = 0 .

6) Let u be an unknown vector that solves M u = B 1 w . By premultiplying both sides by J we obtain J M u = J B 1 w and hence M J u = J B 1 w because of the commutativity of M and J . Therefore, if M u = B 1 w , then J u solves M ( J u ) = J B 1 w .
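These properties are straightforward to verify numerically. The following sketch is our own illustration, assuming NumPy; the pencil is random, and M , B 1 are formed as in (10) and Section 2:

```python
import numpy as np

n = 4
I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, I], [-I, O]])               # the matrix J in (18)

# A random pencil (A, B) with B symmetric positive definite.
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * I
alpha, beta = 0.3, 1.7
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])
B1 = np.block([[B, O], [O, B]])
w = rng.standard_normal(2 * n)

assert np.allclose(J.T, -J)                   # 1) J^T = -J
assert np.allclose(J.T @ J, np.eye(2 * n))    # 2) J^T J = I_2n
assert np.allclose(J @ J, -np.eye(2 * n))     # 3) J^2 = -I_2n
assert np.allclose(J @ M, M @ J)              # 4) J M = M J
assert abs(w @ (B1 @ (J @ w))) < 1e-10        # 5) w^T B1 J w = 0
u = np.linalg.solve(M, B1 @ w)                # 6) if M u = B1 w ...
assert np.allclose(M @ (J @ u), J @ (B1 @ w)) #    ... then M (J u) = J B1 w
```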

We begin by writing the linear system of Equations (15) explicitly. For ease of notation, we shall drop the superscripts and define w + = w + Δ w where w + = w ( k + 1 ) , and replace w ( k ) and [ Δ z 1 ( k ) T , Δ z 2 ( k ) T ] with w and Δ w respectively. This means that (15) can now be rewritten as:

[ M , − B 1 w , B 1 J w ; − ( B 1 w ) T , 0 , 0 ] [ Δ w ; Δ α ; Δ β ] = − [ M w ; − ( 1 / 2 ) w T B 1 w + 1 / 2 ] , (20)

which is equivalent to the following system of equations

M Δ w − Δ α B 1 w + Δ β B 1 J w = − M w

− w T B 1 Δ w = 1 2 w T B 1 w − 1 2 .

After rearrangement, the first 2n equations reduce to

M w + − Δ α B 1 w + Δ β B 1 J w = 0 . (21)

By multiplying both sides of the ( 2 n + 1 ) t h equation by − 2 and rearranging, we obtain:

2 w T B 1 Δ w + w T B 1 w = 1.

This in turn reduces to

w T B 1 ( w + 2 Δ w ) = 1. (22)

Observe that since w + = w + Δ w , 2 Δ w = 2 w + − 2 w and w + 2 Δ w = 2 w + − w . Now, w T B 1 ( w + 2 Δ w ) = w T B 1 ( 2 w + − w ) = 2 w T B 1 w + − w T B 1 w . Consequently,

w T B 1 w + = 1 2 ( w T B 1 w + 1 ) . (23)

The combined set of Equations (21) and (23), which is the simplified form of (20), can be expressed as:

[ M , − B 1 w , B 1 J w ; ( B 1 w ) T , 0 , 0 ] [ w + ; Δ α ; Δ β ] = [ 0 ; ( 1 / 2 ) ( w T B 1 w + 1 ) ] . (24)

We assume that the 2n by 2n matrix M is nonsingular except at the root. This is what forms the basis for the following discussion. That is to say, we want to show that when not at the root, ϕ ( k ) T Δ v ( k ) = 0 .

First of all, let the exact nullvector n of

F v ( v ) = [ M , − B 1 w , B 1 J w ; − ( B 1 w ) T , 0 , 0 ] ,

be defined as n = [ n w T , n α , n β ] , where n w ∈ ℝ 2 n , n α , n β are real scalars, J w and M are defined respectively by (19) and (10). Hence,

[ M , − B 1 w , B 1 J w ; − ( B 1 w ) T , 0 , 0 ] [ n w ; n α ; n β ] = 0 ,

then after expanding the matrix-vector multiplication, we obtain

M n w − n α B 1 w + n β ( B 1 J w ) = 0 (25)

w T B 1 n w = 0. (26)

We emphasize at this juncture that the nullvector n = [ n w T , n α , n β ] is not the same as ϕ = [ ( J w ) T ,0,0 ] : the latter has the form of the exact nullvector at the root but is evaluated at the kth iterate, while the former is the nullvector even when not at the root.

Another way of writing (24) is as follows

M w + = Δ α B 1 w − Δ β B 1 J w . (27)

This means that we could solve (24) by solving

M u = B 1 w , (28)

for u , after which the solution of (27) is given by

w + = Δ α u − Δ β J u . (29)

With this expression for w + , it can be observed that

M w + = Δ α M u − Δ β M J u = Δ α B 1 w − Δ β J M u = Δ α B 1 w − Δ β J B 1 w = Δ α B 1 w − Δ β B 1 J w .

This shows that w + is well defined. Furthermore, from (25),

M n w = n α B 1 w − n β ( B 1 J w ) ,

using the fact that J commutes with B 1 and (28) gives

n w = n α u − n β J u . (30)

Since w is B 1 -orthogonal to n w by virtue of Equation (26), taking the B 1 -inner product of both sides of the above with w yields

w T B 1 n w = n α w T B 1 u − n β w T B 1 J u = 0.

From this, fixing the scaling of the nullvector, we may take

n α = w T B 1 J u and n β = w T B 1 u . (31)
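The construction (28)-(31) of the exact nullvector can be verified directly. In this sketch (our own illustration, assuming NumPy; the pencil and the point v are random, so v is not at the root), the vector n = [ n w T , n α , n β ] built from u is checked to lie in the nullspace of F v ( v ):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
B = Q @ Q.T + n * np.eye(n)
I, O = np.eye(n), np.zeros((n, n))
alpha, beta = 0.2, 1.1                       # a point away from the root
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])
B1 = np.block([[B, O], [O, B]])
J = np.block([[O, I], [-I, O]])
w = rng.standard_normal(2 * n)

u = np.linalg.solve(M, B1 @ w)               # (28)
n_alpha = w @ (B1 @ (J @ u))                 # (31)
n_beta = w @ (B1 @ u)
n_w = n_alpha * u - n_beta * (J @ u)         # (30)

# The Jacobian at v, as in (20), applied to n = [n_w, n_alpha, n_beta].
Fv = np.block([[M, -(B1 @ w)[:, None], (B1 @ (J @ w))[:, None]],
               [-(B1 @ w)[None, :], np.zeros((1, 2))]])
nv = np.concatenate([n_w, [n_alpha, n_beta]])
print(np.linalg.norm(Fv @ nv) / np.linalg.norm(nv))   # essentially zero
```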

Consider the problem of solving the under-determined linear system of Equations (20) for the 2 n + 2 real unknowns Δ v = [ Δ w T , Δ α , Δ β ] . It was stated in Lemma 1.1 that the minimum norm solution to an under-determined linear system of equations is orthogonal to the nullspace. It is an application of this result that yields the following important relationship:

0 = n T Δ v = n w T Δ w + n α Δ α + n β Δ β . (32)

If we add the nullvector n to the last row of (24), then

[ M , − B 1 w , B 1 J w ; ( B 1 w ) T , 0 , 0 ; n w T , n α , n β ] [ w + ; Δ α ; Δ β ] = [ 0 ; ( 1 / 2 ) ( w T B 1 w + 1 ) ; n w T w ] . (33)

By expanding the second to the last row, w T B 1 w + = ( 1 / 2 ) ( w T B 1 w + 1 ) . But from (29), w + = Δ α u − Δ β J u , and multiplying both sides on the left by w T B 1 yields

w T B 1 w + = Δ α ( w T B 1 u ) − Δ β ( w T B 1 J u ) = 1 2 ( w T B 1 w + 1 ) .

Using the definition (31) for n α and n β , we obtain

n β Δ α − n α Δ β = 1 2 ( w T B 1 w + 1 ) , (34)

where the unknown quantities Δ α and Δ β are to be determined, so we need an extra equation to be able to do so. The last row of the matrix-vector multiplication (cf. (33)) above comes from (32) since

n w T w + + n α Δ α + n β Δ β = n w T ( w + Δ w ) + n α Δ α + n β Δ β = n w T w + ( n w T Δ w + n α Δ α + n β Δ β ) = n w T w , where the term in parentheses vanishes by (32).

If we substitute the expression (30) for n w and (29) for w + into the left hand side, then one obtains

[ n α u T − n β ( J u ) T ] [ Δ α u − Δ β J u ] + n α Δ α + n β Δ β = n w T w .

Furthermore, by expanding the first term on the left hand side, using the properties of J , then

[ n α u T − n β ( J u ) T ] ( Δ α u − Δ β J u ) = n α Δ α u T u + n β Δ β u T J T J u = n α Δ α ‖ u ‖ 2 + n β Δ β ‖ u ‖ 2 .

Consequently,

n α Δ α ‖ u ‖ 2 + n β Δ β ‖ u ‖ 2 + n α Δ α + n β Δ β = ( 1 + ‖ u ‖ 2 ) ( n α Δ α + n β Δ β ) = n w T w .

Observe that because u is real, ( 1 + ‖ u ‖ 2 ) is nonzero. Accordingly, after dividing both sides by ( 1 + ‖ u ‖ 2 )

n α Δ α + n β Δ β = n w T w ( 1 + ‖ u ‖ 2 ) . (35)

We combine the two equations (34) and (35) below

[ n β , − n α ; n α , n β ] [ Δ α ; Δ β ] = [ ( 1 / 2 ) ( w T B 1 w + 1 ) ; n w T w / ( 1 + ‖ u ‖ 2 ) ] , (36)

to compute Δ α and Δ β simultaneously. The matrix on the left hand side is nonsingular except at the root (where all its entries are zero), because its determinant is n α 2 + n β 2 . Equation (35) can now be applied to simplify

w T B 1 J w + = w T B 1 J ( Δ α u − Δ β J u ) = w T B 1 ( Δ α J u + Δ β u ) = Δ α ( w T B 1 J u ) + Δ β ( w T B 1 u ) = n α Δ α + n β Δ β = 0. (37)

Notice that we have used the property J 2 = − I 2 n to arrive at the second step above and the definition (29) for w + .
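Putting (28), (29), (30), (31) and (36) together gives the core step of Algorithm 1. The sketch below is our own illustration, assuming NumPy, with dense solves standing in for the sparse solvers discussed later; `eig_step` is an illustrative name, not the paper's.

```python
import numpy as np

def eig_step(A, B, w, alpha, beta):
    """One outer step: solve M u = B1 w (28), form n_alpha, n_beta
    by (31), solve the 2-by-2 system (36) for (d_alpha, d_beta),
    and update w+ = d_alpha*u - d_beta*J*u as in (29)."""
    n = A.shape[0]
    I, O = np.eye(n), np.zeros((n, n))
    M = np.block([[A - alpha * B, beta * B],
                  [-beta * B,     A - alpha * B]])
    B1 = np.block([[B, O], [O, B]])
    J = np.block([[O, I], [-I, O]])
    u = np.linalg.solve(M, B1 @ w)                # (28)
    Ju = J @ u
    n_alpha = w @ (B1 @ Ju)                       # (31)
    n_beta = w @ (B1 @ u)
    n_w = n_alpha * u - n_beta * Ju               # (30)
    K = np.array([[n_beta, -n_alpha],
                  [n_alpha, n_beta]])             # matrix in (36)
    rhs = np.array([0.5 * (w @ (B1 @ w) + 1.0),
                    (n_w @ w) / (1.0 + u @ u)])
    d_alpha, d_beta = np.linalg.solve(K, rhs)     # (36)
    w_plus = d_alpha * u - d_beta * Ju            # (29)
    return w_plus, alpha + d_alpha, beta + d_beta
```

By construction, the returned w + satisfies (23), i.e. w T B 1 w + = ( 1 / 2 ) ( w T B 1 w + 1 ) , and iterating the step from a point near an algebraically simple complex eigenpair reproduces the locally quadratic convergence of the bordered Newton iteration.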

Next, we establish the orthogonality of ϕ and Δ v in the next key result. Before we do that, notice from Theorem 2.1 that ϕ , at the root, is a scalar multiple of [ z 2 T , − z 1 T ,0,0 ] and, by the definition of J in (18), we can also write ϕ = [ ( J w ) T ,0,0 ] , with w = [ z 1 T , z 2 T ] . The result holds whether or not v = v ( k ) is at the root, but the analysis used to establish the orthogonality is based on the assumption that M is nonsingular when not at the root. Consequently, after presenting Algorithm 1 (to follow shortly), we then show that the same result holds when M is singular at the root.

Theorem 3.1 Let ϕ be an approximation to the exact nullvector n of F v ( v ) . Then, ϕ is orthogonal to Δ v .

Proof: Recall that v = [ w T , α , β ] , v + = [ w + T , α + , β + ] and ϕ = [ ( J w ) T ,0,0 ] . This implies

ϕ T Δ v = ϕ T ( v + − v ) = ( J w ) T ( w + − w ) = w T J T w + − w T J T w = w T J w − w T J w + = − w T J w + = 0, (38)

showing that ϕ T Δ v = 0 . In arriving at the last step above, we have used the properties of J and a special case of (37) where B 1 = I 2 n .

We present Algorithm 1, which involves the solution of two linear systems. The first is the 2n by 2n linear system of equations in (28), while the second is the 2 by 2 linear system (36).

Stop Algorithm 1 as soon as

‖ Δ v ( k ) ‖ ≤ t o l ,

where Δ v ( k ) = [ ( w + − w ) T , Δ α , Δ β ] . The above analysis shows that ϕ T Δ v = 0 when v ( k ) is not at the root. Next, we want to show that the same result holds at the root.

In a manner analogous to the proof of Lemma 2.1, we premultiply both sides of the above system of equations by C T , where C is the 2n by 2 real matrix defined by (14), whose columns are the left nullvectors of M . If M and N are as defined respectively in (10) and (11), then this is the same as

C T M w + + C T N [ Δ α ; Δ β ] = 0 .

But by the definition of C , the first term on the left hand side of the equation above is zero, since C T M = 0 T . It can be recalled from the proof of Lemma 2.1 that the 2 by 2 real matrix H = C T N is nonsingular at the root. This implies

C T N [ Δ α ; Δ β ] = H [ Δ α ; Δ β ] = [ − ( ψ 1 T B z 1 + ψ 2 T B z 2 ) , ψ 1 T B z 2 − ψ 2 T B z 1 ; ψ 1 T B z 2 − ψ 2 T B z 1 , ψ 1 T B z 1 + ψ 2 T B z 2 ] [ Δ α ; Δ β ] = 0 .

Accordingly, Δ α = Δ β = 0 because of the nonsingularity of H . Therefore,

M w + = 0 . (39)

At the root, M has two linearly independent nullvectors and is hence singular. Together with (39), this implies that w + is in the nullspace of M . But we have already established that the nullspace of M is spanned by w = [ z 1 T , z 2 T ] T and w 1 = J w = [ z 2 T , − z 1 T ] T . Hence, we can write

w + = μ w + τ w 1 .

Now, from the last equation in (24),

1 2 ( w T B 1 w + 1 ) = w T B 1 w + = μ w T B 1 w + τ w T B 1 w 1 = μ w T B 1 w + τ ( z 1 T B z 2 − z 2 T B z 1 ) = μ w T B 1 w .

Consequently,

μ = ( w T B 1 w + 1 ) / ( 2 w T B 1 w ) .

But at the root, w T B 1 w = z 1 T B z 1 + z 2 T B z 2 = 1 . Therefore, μ = 1 , w + = w , z 1 + = z 1 and z 2 + = z 2 . This will now be used to deduce the following corollary at the root.

Corollary 3.1 Let ϕ = [ ( J w ) T ,0,0 ] . Let v + = [ w + , α + , β + ] T . Then, ϕ is orthogonal to Δ v at the root.

Proof: This follows from the second to the last line of the proof of Theorem 3.1 (cf., Equation (38)) where w + = w . Hence,

ϕ T Δ v = − w T J w + = − w T J w = 0.

In Section 2, we found two nonzero nullvectors of M at the root. Because of this property of M at the root, in this section we describe an inexact inverse iteration technique for solving the large sparse system of Equations (28) in step 2 of Algorithm 1 and present Algorithm 2 and Algorithm 3. Results of a numerical experiment that support the theory are given in Section 5.

We give the following version of inexact inverse iteration in Algorithm 2. We will use a fixed tolerance. Note that because of the special nature of M at the root, the choice of a preconditioner is crucial for convergence to the desired accuracy to be achieved. The inexact linear solver that we use is the preconditioned GMRES [

P = [ A − α B , β B ; O , A − α B ] ≈ M . (40)
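As an illustration of how a block triangular preconditioner of the form (40) can be applied, the sketch below (our own construction, assuming SciPy; the sparse pencil is illustrative and is not the bwm200.mtx example) solves M u = B 1 w with preconditioned GMRES, where applying P − 1 amounts to two block back-substitutions that reuse a single sparse LU factorization of A − α B:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_triangular_preconditioner(A, B, alpha, beta):
    """LinearOperator applying P^{-1} for the block upper triangular
    preconditioner P = [[A - alpha*B, beta*B], [O, A - alpha*B]]:
    one sparse LU of A - alpha*B is reused for both diagonal blocks."""
    n = A.shape[0]
    lu = spla.splu(sp.csc_matrix(A - alpha * B))
    Bc = sp.csr_matrix(B)
    def solve(r):
        y2 = lu.solve(r[n:])                      # second block row
        y1 = lu.solve(r[:n] - beta * (Bc @ y2))   # first block row
        return np.concatenate([y1, y2])
    return spla.LinearOperator((2 * n, 2 * n), matvec=solve)

# Illustrative sparse pencil (not the bwm200.mtx example of Section 5).
n = 50
rng = np.random.default_rng(3)
A = 3.0 * sp.eye(n) + sp.random(n, n, density=0.05, random_state=3)
B = sp.eye(n, format='csr')
alpha, beta = 0.1, 2.0
M = sp.bmat([[A - alpha * B, beta * B],
             [-beta * B,     A - alpha * B]]).tocsr()
w = rng.standard_normal(2 * n)
b = np.concatenate([B @ w[:n], B @ w[n:]])        # B1 w
P = block_triangular_preconditioner(A, B, alpha, beta)
u, info = spla.gmres(M, b, M=P, atol=1e-12)       # info == 0 on success
```

Each application of the preconditioner costs two solves with the factors of A − α B plus one multiplication by β B, so the factorization is computed once and reused across all inner GMRES steps.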

Next, we present Algorithm 3, which is the inexact inverse iteration equivalence of Algorithm 1. The stopping criterion for the outer iteration in Algorithm 3 depends on the norm of the eigenvalue residuals, that is

‖ r 1 ( k ) ‖ = ‖ ( A − α ( k ) B ) z 1 ( k ) + β ( k ) B z 2 ( k ) ‖ ≤ t o l ,

and

‖ r 2 ( k ) ‖ = ‖ ( A − α ( k ) B ) z 2 ( k ) − β ( k ) B z 1 ( k ) ‖ ≤ t o l ,

and

‖ [ Δ α Δ β ] ‖ ≤ t o l and ‖ w + − w ‖ ≤ t o l .

As mentioned earlier, the sparse linear system of equations in step 2 of Algorithm 1 is solved with an LU-type factorization of M , which is expensive; besides, the L and U factors may have more nonzero elements than M . In this section, our main goal is to use preconditioned GMRES with the block triangular preconditioner P in (40) to solve the system M u = B 1 w inexactly. We do this by considering a single numerical experiment with a fixed and a decreasing tolerance. Results are presented by means of tables and figures.

Example 5.1

Consider the 200 by 200 matrix A bwm200.mtx from the matrix market library [

In this example, we take α ( 0 ) = 0.0 , β ( 0 ) = 2.5 in line with [

The following tables report the iteration history with a fixed inner tolerance (column f) and a decreasing inner tolerance (column d), respectively.

k | α | β | ‖ w + − w ‖ | ‖ [ Δ α Δ β ] ‖ | ‖ Δ v ‖ | f |
---|---|---|---|---|---|---|
0 | 0.00000e+00 | 2.50000 | 3.9e+00 | 7.3e−01 | 4.0e+00 | 2 |
1 | 1.81138e−01 | 1.78811 | 1.9e+00 | 2.3e−01 | 1.9e+00 | 2 |
2 | 1.14991e−01 | 2.01153 | 8.4e−01 | 1.5e−01 | 8.6e−01 | 2 |
3 | 9.16745e−02 | 2.15831 | 2.7e−01 | 1.0e−01 | 2.9e−01 | 2 |
4 | 7.62531e−02 | 2.26056 | 4.7e−02 | 1.4e−01 | 1.5e−01 | 3 |
5 | 3.87835e−03 | 2.14905 | 2.7e−03 | 1.4e−02 | 1.4e−02 | 2 |
6 | 4.00647e−03 | 2.13521 | 3.0e−03 | 7.8e−03 | 8.4e−03 | 4 |
7 | 4.36285e−04 | 2.14167 | 2.8e−03 | 2.7e−03 | 3.9e−03 | 2 |
8 | 4.37573e−04 | 2.13896 | 9.5e−04 | 1.2e−03 | 1.5e−03 | 2 |
9 | 4.36516e−04 | 2.14015 | 2.5e−04 | 7.9e−04 | 8.3e−04 | 5 |
10 | 1.83638e−05 | 2.13950 | 1.2e−07 | 3.8e−07 | 4.0e−07 | 2 |
11 | 1.83638e−05 | 2.13950 | 9.3e−08 | 2.9e−07 | 3.0e−07 | 8 |
12 | 1.82110e−05 | 2.13950 | 7.5e−08 | 6.0e−08 | 9.6e−08 | 2 |
13 | 1.82110e−05 | 2.13950 | 1.1e−08 | 1.8e−08 | 2.2e−08 | 11 |
14 | 1.81999e−05 | 2.13950 | 6.0e−11 | 5.3e−12 | 6.0e−11 | 15 |
15 | 1.81999e−05 | 2.13950 | 1.4e−12 | 4.5e−12 | 4.7e−12 | 2 |
16 | 1.81999e−05 | 2.13950 | 1.2e−12 | 3.8e−12 | 4.0e−12 | 14 |
17 | 1.81999e−05 | 2.13950 | 1.1e−12 | 8.3e−13 | 1.3e−12 | 2 |
18 | 1.81999e−05 | 2.13950 | 4.7e−14 | 2.1e−14 | 5.1e−14 | 18 |

k | α | β | ‖ w + − w ‖ | ‖ [ Δ α Δ β ] ‖ | ‖ Δ v ‖ | d |
---|---|---|---|---|---|---|
0 | 0.00000e+00 | 2.50000 | 3.9e+00 | 7.3e−01 | 4.0e+00 | 2 |
1 | 1.81138e−01 | 1.78811 | 1.9e+00 | 2.3e−01 | 1.9e+00 | 2 |
2 | 1.14991e−01 | 2.01153 | 8.4e−01 | 1.5e−01 | 8.6e−01 | 2 |
3 | 9.16745e−02 | 2.15831 | 2.7e−01 | 8.0e−02 | 2.8e−01 | 3 |
4 | 1.19965e−02 | 2.15159 | 3.8e−02 | 1.7e−02 | 4.1e−02 | 5 |
5 | 3.56419e−04 | 2.13989 | 7.2e−04 | 5.4e−04 | 9.0e−04 | 7 |
6 | 1.79381e−05 | 2.13950 | 2.6e−07 | 3.8e−07 | 4.6e−07 | 13 |
7 | 1.81999e−05 | 2.13950 | 4.0e−14 | 4.5e−14 | 6.0e−14 | 22 |
8 | 1.81999e−05 | 2.13950 | 2.1e−14 | 5.2e−14 | 5.6e−14 | 22 |

More inner iterations were needed to satisfy the decreasing inner tolerance than the fixed tolerance. With the fixed tolerance the outer iteration converged superlinearly, while with the decreasing tolerance it converged quadratically.

While Ruhe ( [

Nevertheless, in this work, we obtained Algorithm 1, which consists of a combination of a 2n-by-2n system of equations (which is the same as those in [

The first author acknowledges funds provided by the University of Bath, United Kingdom during his PhD as well as an anonymous referee for useful comments.

Akinola, R.O., Kutchin, S.Y., Ayodele, A.S. and Muka, K.O. (2018) A Block-Preconditioned Inexact Linear Solver for Computing the Complex Eigenpairs of a Large Sparse Matrix. Journal of Applied Mathematics and Physics, 6, 429-445. https://doi.org/10.4236/jamp.2018.62040