A Block-Preconditioned Inexact Linear Solver for Computing the Complex Eigenpairs of a Large Sparse Matrix

In computing eigenpairs of a matrix pencil, one repeatedly solves a linear system of equations. In this work, we show how a block tridiagonal preconditioner within GMRES can be used to solve a large sparse nonsymmetric system of equations inexactly, with either a fixed or a decreasing tolerance. While the fixed-tolerance solver converged superlinearly to the eigenvalue of interest, the decreasing-tolerance solver converged quadratically. This improves on an earlier result which converged harmonically.


Introduction
Let A be a large, sparse, real n by n nonsymmetric matrix and B a symmetric positive definite matrix, where λ ∈ ℂ is an eigenvalue of the pencil (A, B) and z ∈ ℂⁿ its corresponding complex eigenvector, so that

Az = λBz.   (1)

We assume that the eigenpair of interest (λ, z) is algebraically simple, with ψ the corresponding left eigenvector, such that [1]

ψ^H Bz ≠ 0.   (2)
By adding the normalization

z^H Bz = 1,   (3)

to (1), the combined system of equations can be expressed in the form F(v) = 0.   (4)

Note that z^H Bz is real, since B is symmetric and positive definite. This results in a system of n complex and one real nonlinear equation for the n + 1 unknowns (z, λ). The reason we cannot use Newton's method directly to solve (4) is that the normalization (3) involves the complex conjugate of z, so it is not differentiable with respect to z. Recall that for a real eigenpair (λ, z), (4) is n + 1 real equations for n + 1 real unknowns, and Newton's method for solving (4) involves the solution of a linear system with the (n + 1) by (n + 1) Jacobian

[ A − λB   −Bz
  2z^T B     0 ]   (5)

for the Newton correction. Secondly, if instead of the normalization (3) we add c^H z = 1, where c is a fixed complex vector (see, for example, [2]), then (1) and c^H z = 1 provide n + 1 complex equations for the n + 1 complex unknowns, and the Jacobian of this new system is

[ A − λB   −Bz
   c^H       0 ].

The above Jacobian is square and can easily be shown to be nonsingular, using the ABCD Lemma [3], provided the eigenvalue of interest is algebraically simple and c^H z ≠ 0. Thirdly, if (λ, z) is complex, then, as stated earlier, we have n complex and one real equation. Also, if z solves (4), then so does e^{iθ} z for any θ such that 0 ≤ θ < 2π.
Our approach for analyzing the solution of (4) for v begins by splitting the eigenpair (λ, z) into real and imaginary parts:

z = z1 + i z2,  λ = α + iβ,

where z1, z2 ∈ ℝⁿ and α, β ∈ ℝ. After expanding (4), we obtain a real system of 2n + 1 under-determined nonlinear equations in 2n + 2 real unknowns, and it is natural to use the Gauss-Newton method (see, for example, Deuflhard ([4], pp. 222-223)) to obtain a solution (see also [5] [6] [7] [8]). By linearizing the under-determined system of nonlinear equations, we obtain an under-determined system of linear equations involving the Jacobian. This paper is structured as follows: in Section 2, we show that the Jacobian has a unique nullvector at the root. In Section 3, we present two orthogonality results and give Algorithm 1 for computing the complex eigenvalues of the pencil (A, B); in it, the system M u = B1 w is solved for u via an LU factorization of M: factor M = LU, solve the lower triangular system Lc = B1 w for c, then solve the upper triangular system Uu = c for u. In Section 4, we present an inexact inverse iteration algorithm with preconditioning for solving the large system of equations encountered.
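As a cross-check on this real formulation, the following minimal Python sketch (not the paper's code; the small random pencil and all variable names are illustrative assumptions) builds the real residual of (4) for v = (z1, z2, α, β) and confirms that it vanishes at an exact eigenpair normalized by z^H Bz = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))            # large and sparse in the paper; small dense here
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)                # symmetric positive definite

# an eigenpair of the pencil (A, B): A z = lam B z
lam_all, Z = np.linalg.eig(np.linalg.solve(B, A))
k = int(np.argmax(np.abs(lam_all.imag)))   # prefer a genuinely complex eigenvalue
lam, z = lam_all[k], Z[:, k]
z = z / np.sqrt(z.conj() @ B @ z)          # enforce the normalization z^H B z = 1

z1, z2 = z.real, z.imag
alpha, beta = lam.real, lam.imag

def F(z1, z2, alpha, beta):
    """Real under-determined system: 2n + 1 equations in 2n + 2 unknowns."""
    r1 = (A - alpha * B) @ z1 + beta * (B @ z2)   # real part of (A - lam B) z
    r2 = (A - alpha * B) @ z2 - beta * (B @ z1)   # imaginary part of (A - lam B) z
    r3 = z1 @ B @ z1 + z2 @ B @ z2 - 1.0          # z^H B z - 1 (real, since B is SPD)
    return np.concatenate([r1, r2, [r3]])

res = np.linalg.norm(F(z1, z2, alpha, beta))      # ~ machine precision at the root
```

A Gauss-Newton iteration would drive this residual to zero from a nearby starting guess.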
The main mathematical tools used in this paper are the LU factorization, inexact inverse iteration, and the preconditioned Generalized Minimal Residual method (GMRES) [9]. The main reason for using inexact inverse iteration is that, as mentioned in an earlier paper, we do not solve the linear systems exactly in practice; the exact solve is used only to show that a solution is possible. Theorem 2.1 shows that the Jacobian has a single nullvector at the root, while Theorem 3.1 gives an important orthogonality result. Algorithms 1-3 are presented. We remark that, in the limit, the approximate nullvector converges to the exact one. A numerical example is given which supports the validity of the algorithms presented, though, as usual, success relies on good initial guesses to the desired eigenpair.
The classical inverse iteration for the matrix pencil converges slowly for some eigenvalue problems, whereas the algorithms we present converge quadratically. Throughout this paper, unless otherwise stated, all norms are 2-norms.
The following result helps to enforce the validity of the results in this paper.

Lemma 1.1. If the coefficient matrix of an under-determined linear system of equations is of full row rank, then the minimum norm solution of the system is orthogonal to the nullspace of the coefficient matrix.
Algorithm 3. Complex eigenvalues of the pencil (A, B) using inexact inverse iteration with preconditioning.

In the next section, we will express z and λ as λ = α + iβ and z = z1 + i z2, convert the nonlinear system (4) into a real under-determined system of nonlinear equations, and prove some important results.

Computation of Complex Eigenpairs by Solving an Under-Determined System of Nonlinear Equations
In this section, we will expand the system of n complex and one real nonlinear equations in n + 1 complex unknowns (4), by writing z = z1 + i z2 and λ = α + iβ, into a real system of 2n + 1 under-determined nonlinear equations in 2n + 2 real unknowns. This is followed by presenting the under-determined system of nonlinear equations and an explicit expression for its Jacobian. Furthermore, we will show in the main result of this section, Theorem 2.1, that if the eigenvalue of interest in (A, B) is algebraically simple, then the Jacobian has linearly independent rows. We will also find the right nullvectors of the 2n by 2n matrix M defined below.
With this splitting, the normalization in (4) can be written as

z^H Bz = z1^T Bz1 + z2^T Bz2,   (7)

since B is symmetric, so that the imaginary part z1^T Bz2 − z2^T Bz1 vanishes. Hence, expanding (A − (α + iβ)B)(z1 + i z2) = 0, we equate the real and imaginary parts of (6) to zero and obtain the 2n real equations

(A − αB)z1 + βBz2 = 0,
(A − αB)z2 − βBz1 = 0.   (8)

F(v) = 0 consists of the 2n real equations arising from (6) and the one real equation arising from the normalization (7), where v = (z1, z2, α, β); its Jacobian (9) is a (2n + 1) by (2n + 2) real matrix. From the Jacobian (9), we define the real 2n by 2n matrix

M = [ A − αB    βB
      −βB     A − αB ],   (10)

and the 2n by 2 real matrix

N = [ −Bz1    Bz2
      −Bz2   −Bz1 ],   (11)

together with the matrix of right nullvectors of M at the root, where O is the n by n zero matrix. The Jacobian (9) can then be rewritten in the partitioned form

[     M          N
  2z1^T B  2z2^T B   0   0 ],

with M and N as defined in (10) and (11), respectively. Note that, at the root, (8) says precisely that M w = 0 for w = (z1, z2)^T, so w, or any nonzero scalar multiple of it, is a right nullvector of M.
In the same vein, one finds that the vector (z2, −z1)^T, or any nonzero scalar multiple of it, is also a right nullvector of M at the root. Since the eigenvalue λ of (A, B) is algebraically simple by assumption, then, in view of (2), we need explicit expressions for the left nullvectors of A − λB in order to prove that the Jacobian has full row rank at the root.
Observe that if we write the left eigenvector as ψ = ψ1 + iψ2, with ψ1, ψ2 ∈ ℝⁿ, then expanding ψ^H (A − λB) = 0 and equating real and imaginary parts shows that (ψ1, ψ2)^T and (−ψ2, ψ1)^T are left nullvectors of M at the root. So we form the 2n by 2 matrix C, defined in (14), consisting of these two left nullvectors of M at the root (in practice, C is never computed). Now, observe that the condition (2) implies

that, at the root, the quantities ψ1^T Bz1 + ψ2^T Bz2 and ψ1^T Bz2 − ψ2^T Bz1 cannot both vanish: at least one of them is nonzero.
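The two right nullvectors of M can be verified numerically. The sketch below is illustrative (a small random pencil stands in for the large sparse one; the block layout of M is the one displayed in (10)): both w = (z1, z2)^T and the rotated vector (z2, −z1)^T annihilate M at a computed root.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)                # symmetric positive definite

# compute an eigenpair (lam, z) of the pencil (A, B) and split it
lam_all, Z = np.linalg.eig(np.linalg.solve(B, A))
k = int(np.argmax(np.abs(lam_all.imag)))
lam, z = lam_all[k], Z[:, k]
z1, z2 = z.real, z.imag
alpha, beta = lam.real, lam.imag

# the 2n-by-2n matrix M of (10), evaluated at the root
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])

w = np.concatenate([z1, z2])               # first right nullvector at the root
Jw = np.concatenate([z2, -z1])             # second right nullvector at the root

r_w = np.linalg.norm(M @ w)
r_Jw = np.linalg.norm(M @ Jw)
```

Both residual norms are at the level of rounding error, confirming that M is singular at the root with a (at least) two-dimensional nullspace.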
Before continuing with the rest of the analysis, we pause to present the main result of this section, which shows that the Jacobian (9) has a one-dimensional nullspace at the root.

Theorem 2.1. If the eigenpair (λ, z) of the pencil (A, B) is algebraically simple, then the Jacobian (9) has full row rank at the root, and hence a one-dimensional nullspace.

Proof: See [5]. □
After linearizing F(v) = 0, we have the following under-determined linear system of equations for the correction Δv:

J_F(v) Δv = −F(v).   (15)

Let n^(k) be the exact nullvector of the Jacobian at the kth iterate. By adding the extra condition (n^(k))^T Δv = 0, which stems from Lemma 1.1, to the under-determined linear system of Equations (15), we obtain the following square linear system of equations

[ J_F(v)    ]        [ −F(v) ]
[ (n^(k))^T ] Δv  =  [   0   ].   (16)
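The effect of bordering the Jacobian with the nullvector row can be illustrated on a generic full-row-rank matrix standing in for the Jacobian (the sizes and names below are illustrative assumptions): appending the condition that the solution be orthogonal to the nullvector makes the system square, and, by Lemma 1.1, the bordered solve reproduces the minimum norm solution.

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 7, 8                                 # stands in for (2n+1) by (2n+2)
Gmat = rng.standard_normal((m, p))          # full row rank (generically)
b = rng.standard_normal(m)

# one-dimensional nullspace, as Theorem 2.1 guarantees for the Jacobian
_, _, Vt = np.linalg.svd(Gmat)
nullvec = Vt[-1]

# bordered square system: append the extra condition nullvec^T x = 0
Gsq = np.vstack([Gmat, nullvec])
rhs = np.concatenate([b, [0.0]])
x = np.linalg.solve(Gsq, rhs)

# Lemma 1.1: the minimum norm solution is orthogonal to the nullspace,
# so the bordered solve reproduces it
x_min = np.linalg.lstsq(Gmat, b, rcond=None)[0]
err = np.linalg.norm(x - x_min)
```

The agreement between the two solutions is the finite-dimensional content of adding the row n^T to (15) to obtain (16).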

Square System of Equations for the Numerical Computation of the Complex Eigenvalues of a Matrix
In the preceding section, we saw that by adding an extra equation to the under-determined linear system of Equations (15), we obtained the square system of Equations (16). However, in practice we would never compute the exact nullvector n^(k), but Theorem 2.1 guarantees the existence of a unique nullvector at the root, which takes the form (z2, −z1, 0, 0)^T. This is the motivation for the discussion in this section. In this section, we will use φ^(k) = (z2^(k), −z1^(k), 0, 0)^T as an approximation to n^(k) in (16) and show that the solution obtained by solving (16) with this approximation is equivalent to the solution obtained by solving (15), in the absence of round-off errors. To do this, we will show that (φ^(k))^T Δv^(k) = 0 for each k, and this is presented in the main result of this section: Theorem 3.1.
Algorithm 1 is given for computing an algebraically simple eigenpair of the pencil (A, B). Note that, since M has been shown to be singular at the root in Section 2, this section is anchored on the assumption that, when v is not at the root, M is nonsingular.
First, we define the 2n by 2n matrix J as (see also [11])

J = [  O    I_n
     −I_n    O  ],   (18)

where I_n is the n by n identity matrix and O is the n by n zero matrix. The matrix J satisfies the following properties:
1) J^T = −J;
2) J^T J = I_2n, where I_2n is the 2n by 2n identity matrix.

We begin by writing the linear system of Equations (15) explicitly. For ease of notation, we shall drop the superscripts and define w+ = w + Δw, where w = (z1, z2)^T, Δw is the correction, and B1 = diag(B, B). This means that (15) can now be rewritten as:

M Δw − Δα B1 w + Δβ B1 Jw = −M w,
w^T B1 Δw = (1 − w^T B1 w)/2,   (20)

which is equivalent to the following system of equations. After rearrangement, the first 2n equations reduce to

M w+ = Δα B1 w − Δβ B1 Jw.   (21)

By multiplying both sides of the last (normalization) equation by 2 and adding 2 w^T B1 w to both sides, we obtain:

2 w^T B1 Δw + 2 w^T B1 w = 1 + w^T B1 w.

This in turn reduces to

w^T B1 w+ = (1 + w^T B1 w)/2.   (23)

The combined set of Equations (21) and (23), which is the simplified form of (20), can be expressed as:

M w+ = Δα B1 w − Δβ B1 Jw,
2 w^T B1 w+ = 1 + w^T B1 w.   (24)

We assume that the 2n by 2n matrix M is nonsingular except at the root.
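The stated properties of J, together with the fact that J commutes with M (used repeatedly below), are easy to verify numerically. The sketch assumes the block forms of J and M displayed above, evaluated at an arbitrary shift (α, β) that need not be an eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)
alpha, beta = 0.3, 1.7                         # arbitrary shift, away from the root

I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, I], [-I, O]])                # the matrix J of (18)
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])

p1 = np.linalg.norm(J.T + J)                   # property 1: J^T = -J
p2 = np.linalg.norm(J.T @ J - np.eye(2 * n))   # property 2: J^T J = I_2n
p3 = np.linalg.norm(J @ J + np.eye(2 * n))     # consequence: J^2 = -I_2n
p4 = np.linalg.norm(J @ M - M @ J)             # J commutes with M
```

All four discrepancies vanish to rounding error; the commutation p4 holds because the two diagonal blocks of M are equal and its off-diagonal blocks are negatives of each other.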
This is what forms the basis for the following discussion. That is to say, we want to show that, when not at the root, the system (24) has a well-defined solution. First of all, let the exact nullvector n of the coefficient matrix of (24) be defined as

n = (w_n, n_α, n_β)^T,   (25)

where n_α, n_β are real scalars, w_n ∈ ℝ^{2n}, and Jw and M are defined as above and by (10), respectively. After expanding the matrix-vector multiplication, we obtain

M w_n = n_α B1 w − n_β B1 Jw,  and  w^T B1 w_n = 0.   (26)

We make distinctly clear at this juncture that the nullvector (w_n, n_α, n_β)^T is not exactly the same as φ = (Jw, 0, 0)^T, because the latter has the form of the exact nullvector at the root, but is evaluated at the kth iterate, while the former is the nullvector even when not at the root.

Another way of writing (24) is as follows. We could solve (24) by first solving

M u = B1 w   (28)

for u, after which the solution of the first equation (27) of (24) is given by

w+ = Δα u − Δβ Ju.   (29)

In the same way, from (26), w_n = n_α u − n_β Ju,   (30)

and the condition w^T B1 w_n = 0 then fixes, up to scaling,

n_α = w^T B1 Ju,  n_β = w^T B1 u.   (31)

With this expression for w+, it can be observed that

M w+ = Δα M u − Δβ M Ju = Δα B1 w − Δβ J M u = Δα B1 w − Δβ J B1 w = Δα B1 w − Δβ B1 Jw,

since M and J commute and J B1 = B1 J. This means that w+ is well defined.
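The two-solve construction can be checked numerically. The sketch below uses illustrative names and assumes B1 = diag(B, B), consistent with the B1-inner products used above: it solves M u = B1 w once and confirms that w+ = Δα u − Δβ Ju satisfies the first equation of (24) for arbitrary scalars Δα, Δβ.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)
alpha, beta = 0.4, 1.1                     # generic shift, away from the root
I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, I], [-I, O]])
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])
B1 = np.block([[B, O], [O, B]])            # B_1 = diag(B, B)
w = rng.standard_normal(2 * n)

u = np.linalg.solve(M, B1 @ w)             # the single solve M u = B_1 w of (28)
da, db = 0.7, -0.3                         # arbitrary stand-ins for the corrections
w_plus = da * u - db * (J @ u)             # the formula (29)

# first equation of (24): M w+ = da * B1 w - db * B1 J w
gap = np.linalg.norm(M @ w_plus - (da * (B1 @ w) - db * (B1 @ (J @ w))))
```

The gap vanishes because JM = MJ and J B1 = B1 J, exactly as in the chain of equalities above; only one large solve with M is needed per outer iteration.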
Consider the problem of solving the under-determined linear system of Equations (20) for the 2n + 2 real unknowns (w+, Δα, Δβ). It was stated in Lemma 1.1 that the minimum norm solution to an under-determined linear system of equations is orthogonal to the nullspace. It is an application of this result that yields the following important relationship. Taking the B1-inner product of both sides of (29) with w, and using (23), yields

w^T B1 w+ = Δα w^T B1 u − Δβ w^T B1 Ju = (1 + w^T B1 w)/2.

Using the definition (31) for n_α and n_β, we obtain

n_β Δα − n_α Δβ = (1 + w^T B1 w)/2,   (34)

where the unknown quantities Δα and Δβ are still to be determined, so we need an extra equation to be able to do so. The extra equation comes from the orthogonality condition (32), n^T Δv = 0, that is,

w_n^T Δw + n_α Δα + n_β Δβ = 0.

If we substitute the expression (30) for w_n and (29) for w+ into the left hand side, and expand the first term using the properties of J (in particular u^T Ju = 0 and (Ju)^T (Ju) = u^T u), then one obtains

(1 + u^T u)(n_α Δα + n_β Δβ) = n_α u^T w − n_β w^T Ju.

Observe that, because u is real, 1 + u^T u is nonzero. Accordingly, after dividing both sides by 1 + u^T u, we obtain

n_α Δα + n_β Δβ = (n_α u^T w − n_β w^T Ju)/(1 + u^T u).   (35)

We combine the two equations (34) and (35) into the 2 by 2 linear system

[ n_β  −n_α ] [Δα]   [ (1 + w^T B1 w)/2                       ]
[ n_α   n_β ] [Δβ] = [ (n_α u^T w − n_β w^T Ju)/(1 + u^T u) ],   (36)

for Δα and Δβ. Notice that we have used the property J² = −I_2n to arrive at the second step above, together with the definition (29) for w+.
Next, we want to establish the orthogonality of φ and Δv in the next key result. Before we do that, notice from Theorem 2.1 that φ, at the root, is a scalar multiple of (Jw, 0, 0)^T, and by the definition of J in (18), we can also write Jw = (z2, −z1)^T. The orthogonality holds whether or not the iterate is at the root, but the analysis used to establish it is based on the assumption that M is nonsingular when not at the root. As a result, after presenting Algorithm 1 (to follow shortly), we then show that the same result holds when M is singular at the root.

Theorem 3.1. Let φ = (Jw, 0, 0)^T be the approximation to the exact nullvector n of the coefficient matrix of (24). Then φ^T Δv = 0.

Proof: Substituting (29) for w+ and using the properties of J, together with a special case of (37), shows that φ^T Δv = 0. □

We now present Algorithm 1, which involves the solution of two linear systems.
The first is the 2n by 2n linear system of equations in (28), while the second is the 2 by 2 linear system (36).
Algorithm 1 is stopped as soon as the residual norm falls below a prescribed tolerance. The above analysis covers the case in which the iterate is not at the root. Next, we want to show that the same result holds at the root.
In a manner analogous to the proof of Lemma 2.1, we premultiply both sides of the above system of equations by C^T, where C is the 2n by 2 real matrix defined by (14), consisting of the left nullvectors of M. With M and N as defined in (10) and (11), respectively, the first term on the left hand side of the resulting equation is zero, by the definition of C, since C^T M = 0. It can be recalled from the proof of Lemma 2.1 that the 2 by 2 real matrix H = C^T N is nonsingular at the root. This implies that the orthogonality result also holds at the root. In Section 2, we found two nonzero nullvectors of M at the root. Because of this property of M at the root, in this section we describe an inexact inverse iteration technique for solving the large sparse system of Equations (28) in Step 2 of Algorithm 1, and present Algorithm 2 and Algorithm 3. The result of a numerical experiment which supports the theory is given in Section 5.
We give the following version of inexact inverse iteration in Algorithm 2, in which a fixed inner tolerance is used. Note that, because of the special (nearly singular) nature of M near the root, the choice of preconditioner is crucial for convergence to the desired accuracy. The inexact linear solver that we use is preconditioned GMRES [9], for which we define the block tridiagonal preconditioner given in (40).
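To make the inner solver concrete, here is a minimal left-preconditioned GMRES sketch in Python. It is not the paper's implementation: the preconditioner of (40) is not reproduced here, so a hypothetical block upper triangular preconditioner built from the same blocks as M stands in for it, and a dense inverse stands in for an LU solve with the preconditioner.

```python
import numpy as np

def gmres_left(Amat, b, Minv, tol=1e-10, maxiter=200):
    """Minimal left-preconditioned GMRES (no restarts) for Amat @ x = b."""
    r = Minv(b)                                    # preconditioned residual at x0 = 0
    beta0 = np.linalg.norm(r)
    Q = [r / beta0]                                # Arnoldi basis of the Krylov space
    H = np.zeros((maxiter + 1, maxiter))
    for j in range(maxiter):
        v = Minv(Amat @ Q[j])                      # Arnoldi step on Minv * Amat
        for i in range(j + 1):
            H[i, j] = Q[i] @ v
            v = v - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(v)
        e1 = np.zeros(j + 2)
        e1[0] = beta0
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        resid = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if resid < tol * beta0 or H[j + 1, j] < 1e-14:
            return np.column_stack(Q) @ y, j + 1
        Q.append(v / H[j + 1, j])
    return np.column_stack(Q[:maxiter]) @ y, maxiter

rng = np.random.default_rng(4)
n = 20
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)
alpha, beta = 0.3, 1.5
O = np.zeros((n, n))
M = np.block([[A - alpha * B, beta * B],
              [-beta * B,     A - alpha * B]])

# hypothetical block upper triangular preconditioner, a stand-in for (40)
P = np.block([[A - alpha * B, beta * B],
              [O,             A - alpha * B]])
Pinv = np.linalg.inv(P)                            # stand-in for an LU solve with P

b = rng.standard_normal(2 * n)
x, its = gmres_left(M, b, lambda r: Pinv @ r)
relres = np.linalg.norm(M @ x - b) / np.linalg.norm(b)
```

The fixed and decreasing inner tolerances of the experiments correspond to different choices of `tol` here: fixed in Algorithm 2, and tightened at each outer iteration in Algorithm 3.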
Next, we present Algorithm 3, which is the inexact inverse iteration analogue of Algorithm 1. The stopping criterion for the outer iteration in Algorithm 3 depends on the norm of the eigenvalue residuals, that is, on ‖Az − λBz‖.

Numerical Experiments
As mentioned earlier, the sparse linear system of equations in Step 2 of Algorithm 1 is solved with an LU-type factorization of M, which is expensive; besides, the L and U factors may have more nonzero elements than M. In this section, our main goal is to use preconditioned GMRES with the block triangular preconditioner in (40) to solve the system M u = B1 w inexactly. Consider the 200 by 200 matrix A = bwm200.mtx from the Matrix Market library [12]. It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem with B = I was also studied in [13], and we are interested in finding the rightmost eigenvalue of A, which is closest to the imaginary axis, and its corresponding eigenvector.
In Table 1 and Table 2, we present the results: more inner iterations were needed to satisfy the decreasing inner tolerance than the fixed tolerance. From Table 1, we observed superlinear convergence in columns five and six.
Figure 1 and Figure 2 show plots of the norm of the eigenvalue residuals against the outer iterations with fixed and decreasing inner tolerances, respectively. The norm of the eigenvalue residuals decayed in both cases, faster with the decreasing tolerance.

Algorithm 2. Inexact inverse iteration algorithm. Input: A, B, and an initial eigenpair guess.

The reason for having an under-determined system of equations instead of a square system of equations is that expanding (4) results in 2n real equations from (6) plus one from the normalization, that is, 2n + 1 real equations in 2n + 2 real unknowns.

The matrix J commutes with M, i.e., JM = MJ. Let u be the vector that solves M u = B1 w. By premultiplying both sides by J, we obtain J M u = M J u = J B1 w, because of the commutativity of M and J. Therefore, if M u = B1 w, then Ju solves M (Ju) = J B1 w = B1 Jw.

Since w is B1-orthogonal to w_n by virtue of Equation (26), taking the B1-inner product of both sides of the above with w yields n_α = w^T B1 Ju and n_β = w^T B1 u, up to a common scaling.

because of the nonsingularity of H. Therefore, by the property of M at the root (it has two nonzero nullvectors and is hence singular), the implication of this fact and the above is that w+ lies in the nullspace of M, whose form we have already established.

Corollary. Let φ be as in Theorem 3.1. Then φ is orthogonal to Δv at the root.

Proof: This follows from the second-to-last line of the proof of Theorem 3.1 (cf. Equation (38)), where w+ = w. Hence, φ^T Δv = 0 at the root as well. □


Figure 1. Convergence history of the eigenvalue residuals on Example 5.1 with a fixed inner tolerance τ = 0.6.

In Table 1 and Table 2, we present results for the computation of the rightmost eigenpair, while d in Table 2 represents the number of inner iterations used by preconditioned GMRES to satisfy the decreasing inner tolerance.

However, as Table 2 shows, this quadratic convergence is lost in the last iterate. This could be due to the fact that we are solving a nearly singular system as we approach the root. As shown in columns five and six of Table 1, more inner iterations are needed as we approach the root.

Table 1. Convergence to the eigenvalue λ = −1.81999e−05 + 2.13950i with a fixed inner tolerance for Example 5.1. The last column shows the number of inner iterations it took to satisfy the fixed inner tolerance τ = 0.6.

Table 2. Quadratic convergence to the eigenvalue λ = −1.81999e−05 + 2.13950i with a decreasing tolerance for Example 5.1. The last column shows the number of inner iterations it took to satisfy the decreasing inner tolerance.