An Efficient Algorithm for the Numerical Computation of the Complex Eigenpair of a Matrix

In computing the desired complex eigenpair of a matrix, we show that adding Ruhe's normalization to the matrix pencil yields a square nonlinear system of equations. We show that the corresponding Jacobian is non-singular at the root and that, with appropriately chosen initial guesses, Newton's method under Ruhe's normalization with a fixed complex vector not only converges quadratically but also faster than earlier algorithms for the numerical computation of the complex eigenpair of a matrix. The mathematical tools used in this work are Newton's and Gauss-Newton's methods.


Introduction
In [1], Akinola and Spence considered the problem of computing the eigenpair (λ, x) from the generalized complex eigenvalue problem

Dx = λPx,   (1)

where λ ∈ ℂ, x ∈ ℂⁿ, x ≠ 0, D is a large real n × n non-symmetric matrix and P is a real symmetric positive definite matrix. After adding the normalization [2]

x^H P x = 1,   (2)

to (1), they obtained a combined system of equations of the form

F(x, λ) = (Dx − λPx, x^H P x − 1) = 0.   (3)

In trying to solve the nonlinear system (3), two drawbacks were encountered.
The first is that if (λ, x) solves (3), then so does (λ, exp(iθ)x) for any θ ∈ [0, 2π), which means that the solution x is not unique. Secondly, ‖x‖ = (x^H x)^{1/2} is not differentiable, since it does not satisfy the Cauchy-Riemann equations [3], which implies that (3) cannot be differentiated and the standard Newton's method cannot be applied. The authors then proposed that the above drawbacks could be overcome, at least for the case P = I. Before the work of Akinola, Ruhe [4] and Tisseur [5] had added the differentiable normalizations

c^H x = 1,   (4)

τ e_s^T x = τ,   (5)

where c is a fixed complex vector, e_s is the s-th unit vector for some fixed s, and τ = max(‖D‖, ‖P‖).
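The first drawback is easy to verify numerically. The following sketch (ours, not from the paper; a random 5 by 5 matrix stands in for D) confirms that rotating a unit-norm eigenvector by exp(iθ) produces another unit-norm eigenvector, so (3) cannot determine x uniquely:

```python
import numpy as np

# Illustration: the two-norm normalization leaves a free phase. If (lam, x)
# is a unit-norm eigenpair of D, so is (lam, exp(i*theta) * x) for any theta.
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 5))          # arbitrary real non-symmetric matrix
lam_all, X = np.linalg.eig(D)
lam, x = lam_all[0], X[:, 0]
x = x / np.linalg.norm(x)                # enforce x^H x = 1

theta = 1.3                              # arbitrary phase
y = np.exp(1j * theta) * x               # rotated eigenvector
print(np.linalg.norm(D @ y - lam * y))   # still an eigenpair residual, tiny
print(abs(np.vdot(y, y) - 1))            # still unit norm
```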
Adding each of the two normalizations to (1), Ruhe and Tisseur obtained the following combined systems of equations:

F(x, λ) = (Dx − λPx, c^H x − 1) = 0,   (6)

G(x, λ) = (Dx − λPx, τ(e_s^T x − 1)) = 0,   (7)

which have the corresponding Jacobians

J_F = [ D − λP   −Px ; c^H   0 ],   (8)

J_G = [ D − λP   −Px ; τ e_s^T   0 ].   (9)

In this paper, we show that the square Jacobian (8) is nonsingular at the root, using the ABCD lemma, provided the eigenvalue of interest is algebraically simple. The major distinction between the two-norm normalization and Ruhe's normalization is that the two-norm normalization is a natural normalization, whereas Ruhe's normalization leaves the choice of the fixed vector c to the user. The Jacobian (9) was shown in [5] to be singular at the root if and only if λ* is a finite multiple eigenvalue of the pencil (D, P). In this paper, we compare the numerical performance of the algorithm based on Ruhe's normalization (Algorithm 1, an application of Newton's method to (6) using the Jacobian (8)) with previous algorithms developed by Akinola et al. in [1], [6] and [7]. All three of these algorithms (Algorithm 2 as discussed in [1], Algorithm 3 as described in [6], and Algorithm 4 as presented in [7]) were based on the natural two-norm normalization of the eigenvector. We show that, with the same starting guesses and a carefully chosen fixed complex vector c, the algorithm based on Ruhe's normalization converges faster than the other three.
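The bordered Jacobian (8) for P = I can be assembled as follows (an illustrative sketch, ours: the random test matrix and the random fixed vector c are assumptions), and its full rank at the root can be checked directly:

```python
import numpy as np

# Build the bordered Jacobian of (8) for P = I:
#   J = [ D - lam*I   -x ]
#       [   c^H        0 ]
rng = np.random.default_rng(1)
n = 6
D = rng.standard_normal((n, n))
lam_all, X = np.linalg.eig(D)
k = np.argmax(lam_all.real)              # pick one (simple) eigenvalue
lam, x = lam_all[k], X[:, k]

c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = x / (c.conj() @ x)                   # enforce Ruhe's normalization c^H x = 1

J = np.zeros((n + 1, n + 1), dtype=complex)
J[:n, :n] = D - lam * np.eye(n)
J[:n, n] = -x
J[n, :n] = c.conj()
# Nonsingular at the root for a simple eigenvalue with c^H x != 0:
print(np.linalg.cond(J))                 # finite condition number
```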
The plan of this paper is as follows: in Section 2, we use Keller's ABCD Lemma [8] to show in Theorem 2.1 that the Jacobian (8) is nonsingular at the root, and we present the four algorithms. In Section 3, we compare the performance of the four algorithms on three numerical examples. Eigenvalues are used in differential equations to study stability, and in complex biological systems to determine eigenvector centrality (see also [9] [10]).

Methodology
In this section, we prove the main result of this paper, which states the condition under which the Jacobian matrix (8) (for P = I) is non-singular at the root (λ*, x*). This is followed by a presentation of Algorithm 1, which is Newton's method for solving (6). The remaining algorithms have been discussed extensively in [1], [6] and [7]; to avoid self-plagiarism, we refer the interested reader to those articles.
Algorithm 1 involves solving an (n+1) × (n+1) square complex system of equations using LU factorisation and does not involve splitting the eigenvalue and eigenvector into real and imaginary parts.
Algorithm 2 involves splitting the eigenpair into real and imaginary parts to obtain an under-determined nonlinear system of equations. This results in solving a (2n+1) × (2n+2) real under-determined linear system of equations for 2n+2 unknowns with the Gauss-Newton method [11], which is solved using QR factorisation.
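The splitting behind Algorithm 2 can be sketched as follows (our illustration, not the paper's code: writing x = u + iv and λ = α + iβ, a factor of 1/2 on the normalization residual, and a small test matrix are assumptions; np.linalg.lstsq stands in for the QR-based solve of the under-determined step):

```python
import numpy as np

def residual(D, u, v, alpha, beta):
    # Real and imaginary parts of (D - lam*I)x, plus the two-norm constraint
    r1 = D @ u - alpha * u + beta * v
    r2 = D @ v - alpha * v - beta * u
    r3 = 0.5 * (u @ u + v @ v - 1.0)
    return np.concatenate([r1, r2, [r3]])

def jacobian(D, u, v, alpha, beta):
    # (2n+1) x (2n+2) real Jacobian with respect to (u, v, alpha, beta)
    n = len(u)
    I = np.eye(n)
    J = np.zeros((2 * n + 1, 2 * n + 2))
    J[:n, :n] = D - alpha * I;   J[:n, n:2*n] = beta * I
    J[:n, 2*n] = -u;             J[:n, 2*n+1] = v
    J[n:2*n, :n] = -beta * I;    J[n:2*n, n:2*n] = D - alpha * I
    J[n:2*n, 2*n] = -v;          J[n:2*n, 2*n+1] = -u
    J[2*n, :n] = u;              J[2*n, n:2*n] = v
    return J

# Gauss-Newton: each step is the minimum-norm solution of J * delta = -r.
D = np.array([[0.0, 1.0], [-1.0, 0.0]])       # eigenvalues are +/- i
u = np.array([0.7, 0.1]); v = np.array([0.1, 0.7])
alpha, beta = 0.1, 0.9
for _ in range(20):
    r = residual(D, u, v, alpha, beta)
    delta = np.linalg.lstsq(jacobian(D, u, v, alpha, beta), -r, rcond=None)[0]
    n = len(u)
    u = u + delta[:n]; v = v + delta[n:2*n]
    alpha = alpha + delta[2*n]; beta = beta + delta[2*n+1]
print(alpha, beta)   # converges to the eigenvalue 0 + 1i for this start
```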
Algorithm 3 also involves splitting the eigenpair into real and imaginary parts, but with the help of an added equation we obtain a square (2n+2) × (2n+2) system of linear equations, which is solved using LU factorisation.
Algorithm 4 is closely related to Algorithm 1 in the sense that both use complex arithmetic. While Algorithm 1 uses a fixed complex vector c which does not change throughout the computation, Algorithm 4 uses the natural two-norm normalization, which ensures that the normalizing vector is updated at each stage of the computation.

Theorem 2.1. Let λ* be an algebraically simple eigenvalue of D (the case P = I), with right eigenvector x* normalized so that c^H x* = 1 and left eigenvector y*. Then the (n+1) × (n+1) bordered matrix

M = [ D − λ*I   −x* ; c^H   0 ]   (10)

is nonsingular.

Proof. Suppose M is singular, so that M(p, q) = 0 for some nonzero vector (p, q), that is,

(D − λ*I)p − q x* = 0,   c^H p = 0.

Multiplying the first equation on the left by y*^H and using y*^H(D − λ*I) = 0, we obtain q y*^H x* = 0. But by assumption λ* is algebraically simple, hence y*^H x* ≠ 0 and q = 0. With this value of q, we are left with (D − λ*I)p = 0, so that p = αx* for some scalar α. Then 0 = c^H p = α c^H x* = α, hence p = 0. Thus (p, q) = 0, contradicting the assumption that (p, q) is nonzero, and M is nonsingular. □
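The theorem can be checked numerically. In the sketch below (ours; a random 5 by 5 matrix stands in for D), the bordered matrix has a smallest singular value bounded away from zero when c^H x ≠ 0, but becomes numerically singular when c is chosen orthogonal to x, since then (x, 0) is a null vector:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
D = rng.standard_normal((n, n))
lam_all, X = np.linalg.eig(D)
lam, x = lam_all[0], X[:, 0]             # a simple eigenvalue of D

def bordered(c):
    # M = [[D - lam*I, -x], [c^H, 0]], the bordered matrix (10) at the root
    M = np.zeros((n + 1, n + 1), dtype=complex)
    M[:n, :n] = D - lam * np.eye(n)
    M[:n, n] = -x
    M[n, :n] = c.conj()
    return M

c_good = x.copy()                                          # c^H x = ||x||^2 != 0
c_bad = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c_bad = c_bad - x * (np.vdot(x, c_bad) / np.vdot(x, x))    # now c_bad^H x = 0

smin_good = np.linalg.svd(bordered(c_good), compute_uv=False)[-1]
smin_bad = np.linalg.svd(bordered(c_bad), compute_uv=False)[-1]
print(smin_good, smin_bad)   # nonsingular vs (numerically) singular
```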
Next, we present Algorithm 1 for computing the complex eigenpair of D using complex arithmetic. This is the main contribution to knowledge in this paper.
Algorithm 1: Eigenpair computation using Newton's method.
Input: D, c, initial guess (x^(0), λ^(0)) and tol.
1: for k = 0, 1, 2, ... do
2:   Form the residual F^(k) = (Dx^(k) − λ^(k)x^(k), c^H x^(k) − 1).
3:   Factorise the Jacobian (8) evaluated at (x^(k), λ^(k)) as LU.
4:   Solve the lower triangular system Lz = −F^(k).
5:   Solve the upper triangular system U(Δx, Δλ) = z.
6:   Update x^(k+1) = x^(k) + Δx and λ^(k+1) = λ^(k) + Δλ.
7: end for
Output: (x^(k), λ^(k)); stop Algorithm 1 as soon as ‖F^(k)‖ ≤ tol.

Algorithm 2: Eigenpair computation using the Gauss-Newton method [1].
Input: the real and imaginary parts of the starting eigenpair and tol. Each iterate solves the under-determined correction system by QR factorisation; stop as soon as the residual norm is at most tol.

Algorithm 3: Eigenpair computation using Newton's method [6].
Input: x_R, x_I, v, w and tol. Each iterate solves a square real system by LU factorisation (a lower triangular solve followed by an upper triangular solve); stop as soon as the residual norm is at most tol.

Algorithm 4: Eigenpair computation using Newton's method in complex arithmetic [7].
Input: x, λ and tol. Each iterate solves the bordered system obtained from the current two-norm normalization by LU factorisation (a lower triangular solve followed by an upper triangular solve); stop as soon as the residual norm is at most tol.

Full listings of Algorithms 2, 3 and 4 are given in [1], [6] and [7] respectively.
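The steps of Algorithm 1 above can be sketched in NumPy as follows (a minimal illustration assuming P = I; the helper name, the 2 by 2 test matrix and the starting guesses are our own choices, and np.linalg.solve stands in for the explicit LU factorisation and triangular solves of steps 3 to 5):

```python
import numpy as np

def newton_eigenpair(D, c, x0, lam0, tol=1e-12, maxit=25):
    """Sketch of Algorithm 1: Newton's method on
       F(x, lam) = (D x - lam x, c^H x - 1) with a fixed complex c."""
    n = D.shape[0]
    x, lam = x0.astype(complex), complex(lam0)
    for k in range(maxit):
        F = np.concatenate([D @ x - lam * x, [c.conj() @ x - 1.0]])
        if np.linalg.norm(F) < tol:
            return x, lam, k
        J = np.zeros((n + 1, n + 1), dtype=complex)
        J[:n, :n] = D - lam * np.eye(n)
        J[:n, n] = -x
        J[n, :n] = c.conj()
        # np.linalg.solve performs the LU factorisation and the two
        # triangular solves (steps 3-5 of the algorithm) internally.
        delta = np.linalg.solve(J, -F)
        x, lam = x + delta[:n], lam + delta[n]
    return x, lam, maxit

D = np.array([[0.0, 1.0], [-1.0, 0.0]])        # eigenvalues are +/- i
c = np.array([1.0, 0.0], dtype=complex)        # fixed vector: first component 1
x, lam, its = newton_eigenpair(D, c, np.array([1.0, 0.8j]), 0.9j)
print(lam, its)   # converges to the eigenvalue 1j for this starting guess
```

With c = e_1, the normalization c^H x = 1 fixes the first component of the eigenvector to 1, which removes the free phase discussed in the introduction.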

Numerical Experiments
In this section, we compare the performance of the algorithm obtained by adding Ruhe's normalization (Algorithm 1) with the three other algorithms presented in the last section (Algorithm 2, Algorithm 3 and Algorithm 4) on three numerical examples. Throughout this section, λ^(k) = α^(k) + iβ^(k) denotes the k-th eigenvalue iterate.

Example 3.1. We compared the performance of the four algorithms on a two by two test matrix, and the results are presented in Table 1, Table 2, Table 3 and Table 4 respectively. In all four algorithms we used the same initial guesses. Table 1 reports the values of α^(k) and β^(k) for the two by two matrix using Algorithm 1.

Example 3.2. The grcar matrix [12] [13] is a non-symmetric matrix with sensitive eigenvalues, defined by

G(i, j) = −1 if i = j + 1;  G(i, j) = 1 if i ≤ j ≤ i + k;  G(i, j) = 0 otherwise.

Figure 1 shows the distribution of the complex eigenvalues of the twenty by twenty grcar matrix in the complex plane. All four algorithms discussed in the last section converged to the eigenvalue of interest, as shown in Table 5, Table 6, Table 7 and Table 8. However, unlike the first example, in which Algorithm 1 converged faster than the other three, that was not the case here, possibly due to the sensitivity of the grcar eigenvalues.

Example 3.3. Consider the 200 by 200 matrix D = bwm200.mtx from the Matrix Market library [14]. It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem with P = I was also studied in [9], and we are interested in finding the rightmost eigenvalue of D, which is closest to the imaginary axis, together with its corresponding eigenvector. Figure 2 shows the distribution of the complex eigenvalues of the matrix.
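The grcar matrix of Example 3.2 is easy to construct directly; the sketch below (ours) uses the standard definition with k = 3 superdiagonals and computes the spectrum plotted in Figure 1:

```python
import numpy as np

# The grcar matrix: 1 on the diagonal and the first k superdiagonals,
# -1 on the first subdiagonal, 0 elsewhere.
def grcar(n, k=3):
    G = np.zeros((n, n))
    for i in range(n):
        G[i, max(i - 1, 0):min(i + k + 1, n)] = 1.0
        if i > 0:
            G[i, i - 1] = -1.0
    return G

G = grcar(20)
eigs = np.linalg.eigvals(G)
# The spectrum is complex; eigenvalues occur in conjugate pairs since G is real.
print(np.sort_complex(eigs)[:3])
```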
For this example, in all four algorithms we used the same initial guesses as in [9], and chose the fixed vector c in terms of 1, the vector of all ones.

Table 5. Values of α^(k) and β^(k) for the twenty by twenty grcar matrix using Algorithm 1. Quadratic convergence is shown in columns 3 and 5.

Results of the numerical experiments are tabulated in Tables 9-12 respectively. We observed that, while it took Algorithm 1 with a fixed complex vector six iterations to converge to the desired eigenvalue (Table 9), it took eight, ten and eight iterates for Algorithm 2 (Table 10), Algorithm 3 (Table 11) and Algorithm 4 (Table 12) respectively to achieve the same result. This shows that Algorithm 1 converged faster than the others.
As shown in Table 1 and Table 9, we observed that Algorithm 1 converged faster to the eigenvalue of interest, given a close enough initial guess, than the previous algorithms discussed in [1], [6] and [7], namely Algorithm 2, Algorithm 3 and Algorithm 4 respectively.

Conclusion
In this paper, we have shown, using the ABCD Lemma, that the Jacobian obtained by adding Ruhe's normalization to the matrix pencil is non-singular at the root. With a proper choice of the fixed complex vector and an initial guess close to the eigenvalue of interest, we recommend Algorithm 1 for the numerical computation of the desired complex eigenpair of a matrix because of its faster convergence.


Table 3. Values of α^(k) and β^(k) for the two by two matrix using Algorithm 3.

Figure 1. Distribution of the complex eigenvalues of the 20 by 20 grcar matrix. The x-axis is the real axis while the y-axis is the imaginary axis.

Figure 2. Distribution of the complex eigenvalues of the 200 by 200 bwm200.mtx matrix. The x-axis is the real axis while the y-axis is the imaginary axis.

Table 12. Values of α^(k) and β^(k) for the 200 by 200 matrix of Example 3.3 using Algorithm 4. Columns 5 and 6 show that the results converged quadratically.