An Efficient Algorithm for the Numerical Computation of the Complex Eigenpair of a Matrix
1. Introduction
In [1], Akinola and Spence considered the problem of computing the eigenpair $(x, \lambda) \in \mathbb{C}^{n} \times \mathbb{C}$ from the generalized complex eigenvalue problem

$$A x = \lambda M x, \qquad (1)$$

where $A$ is a large real non-symmetric $n$ by $n$ matrix and $M$ is a real symmetric positive definite matrix. After adding the two-norm normalization [2]

$$x^{H} x = 1 \qquad (2)$$

to (1), they obtained a combined system of equations of the form $F(y) = 0$, where $y = (x, \lambda)$ and $F \colon \mathbb{C}^{n+1} \to \mathbb{C}^{n+1}$ is given as

$$F(y) = \begin{bmatrix} A x - \lambda M x \\ x^{H} x - 1 \end{bmatrix} = 0. \qquad (3)$$
In trying to solve the nonlinear system (3), two drawbacks were encountered. The first is that if $y = (x, \lambda)$ solves (3), then so does $(e^{i\theta} x, \lambda)$ for any $\theta \in \mathbb{R}$, which means that (3) has no unique solution. Secondly, the term $x^{H} x$ in $F$ is not differentiable, since the conjugate $\bar{x}$ does not satisfy the Cauchy-Riemann equations [3], which implies that (3) cannot be differentiated and the standard Newton's method cannot be applied. The author then proposed that the above drawbacks could be overcome, at least for the case considered there. Before the work of Akinola, Ruhe [4] and Tisseur [5] had added the differentiable normalizations
$$c^{H} x = 1 \qquad (4)$$

and

$$e_{s}^{T} x = 1, \qquad (5)$$

where $c \in \mathbb{C}^{n}$ is a fixed complex vector and $e_{s}$ is the $s$-th column of the identity matrix for some fixed $s$. Adding each of the two normalizations to (1), Ruhe and Tisseur then obtained the following combined systems of equations:

$$F(y) = \begin{bmatrix} A x - \lambda M x \\ c^{H} x - 1 \end{bmatrix} = 0 \qquad (6)$$

and

$$F(y) = \begin{bmatrix} A x - \lambda M x \\ e_{s}^{T} x - 1 \end{bmatrix} = 0, \qquad (7)$$

which have the corresponding Jacobians

$$J(y) = \begin{bmatrix} A - \lambda M & -M x \\ c^{H} & 0 \end{bmatrix} \qquad (8)$$

and

$$J(y) = \begin{bmatrix} A - \lambda M & -M x \\ e_{s}^{T} & 0 \end{bmatrix}. \qquad (9)$$
In this paper, we use the ABCD lemma to show that the square Jacobian given by (8) is nonsingular at the root provided the eigenvalue of interest is algebraically simple. The major distinction between the two-norm normalization and Ruhe's normalization is that the two-norm normalization is a natural one, which frees the user from having to choose the fixed vector $c$. The Jacobian (9) was shown in [5] to be singular at the root if and only if $\lambda$ is a finite multiple eigenvalue of the pencil $(A, M)$.

In this paper, we compare the numerical performance of the algorithm based on Ruhe's normalization (Algorithm 1, an application of Newton's method to (6) using the Jacobian (8)) with previous algorithms developed by Akinola et al. in [1], [6] and [7]. All three of those algorithms (Algorithm 2 as discussed in [1], Algorithm 3 as described in [6], and Algorithm 4 as presented in [7]) are based on the natural two-norm normalization for the eigenvector. We show that, with the same starting guesses and a carefully chosen fixed complex vector $c$, the algorithm based on Ruhe's normalization converges faster than the other three. The plan of this paper is as follows: in Section 2, we use Keller's ABCD Lemma [8] to show in Theorem 2.1 that the Jacobian (8) is nonsingular at the root, and we present the four algorithms. In Section 3, we compare the performance of the four algorithms on three numerical examples. Eigenvalues arise, for example, in the stability analysis of differential equations and in determining eigenvector centrality in complex biological systems (see also [9] [10]).
2. Methodology
In this section, we prove the main result of this paper, which states the condition under which the Jacobian matrix (8) is non-singular at the root, that is, at the solution of (6). This is then followed by a presentation of Algorithm 1, which is simply Newton's method for solving (6). The remaining algorithms have been discussed extensively in [1], [6] and [7]. To avoid self-plagiarism, we refer the interested reader to those articles.
Algorithm 1 involves solving an $(n+1)$ by $(n+1)$ square system of equations using LU factorisation and does not involve splitting the eigenvalue and eigenvector into real and imaginary parts.
Algorithm 2 involves splitting the eigenpair into real and imaginary parts to obtain an under-determined nonlinear system of equations. This results in solving, at each step, a $(2n+1)$ by $(2n+2)$ real under-determined linear system of equations for $2n+2$ real unknowns using the Gauss-Newton method [11]. This linear system is solved using QR factorisation.
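To make the linear-algebra pattern concrete, the following is a minimal sketch of one such Gauss-Newton step, under the assumption that the unknowns are stacked as $(x_1, x_2, \alpha, \beta)$ with $x = x_1 + i x_2$ and $\lambda = \alpha + i\beta$, and that the normalization contributes the single real equation $x_1^{T} x_1 + x_2^{T} x_2 = 1$. The exact residual, scaling and update used in [1] may differ; the function and variable names below are illustrative only.

```python
import numpy as np

def gauss_newton_step(A, M, x1, x2, alpha, beta):
    """One Gauss-Newton step for a real/imaginary split of (1)-(2).
    Unknowns are stacked as (x1, x2, alpha, beta); this is a sketch of the
    linear-algebra pattern (an under-determined least-squares solve),
    not a verbatim transcription of Algorithm 2 in [1]."""
    n = A.shape[0]
    # Residual: real and imaginary parts of (A - lambda*M)x, plus ||x||^2 - 1.
    F = np.concatenate([
        A @ x1 - alpha * (M @ x1) + beta * (M @ x2),
        A @ x2 - alpha * (M @ x2) - beta * (M @ x1),
        [x1 @ x1 + x2 @ x2 - 1.0],
    ])
    # (2n+1) by (2n+2) Jacobian of F with respect to (x1, x2, alpha, beta).
    J = np.zeros((2 * n + 1, 2 * n + 2))
    J[:n, :n] = A - alpha * M
    J[:n, n:2 * n] = beta * M
    J[:n, 2 * n] = -(M @ x1)
    J[:n, 2 * n + 1] = M @ x2
    J[n:2 * n, :n] = -beta * M
    J[n:2 * n, n:2 * n] = A - alpha * M
    J[n:2 * n, 2 * n] = -(M @ x2)
    J[n:2 * n, 2 * n + 1] = -(M @ x1)
    J[2 * n, :n] = 2.0 * x1
    J[2 * n, n:2 * n] = 2.0 * x2
    # Minimum-norm least-squares solution of the under-determined system.
    d = np.linalg.lstsq(J, -F, rcond=None)[0]
    return x1 + d[:n], x2 + d[n:2 * n], alpha + d[2 * n], beta + d[2 * n + 1]
```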
Algorithm 3 also involves splitting the eigenpair into real and imaginary parts, but with the help of an added equation we obtain a square $(2n+2)$ by $(2n+2)$ system of linear equations at each step, which is solved using LU factorisation.
Algorithm 4 is closely related to Algorithm 1 in the sense that both use complex arithmetic. While Algorithm 1 uses a fixed complex vector which does not change throughout the computation, Algorithm 4 uses the natural two-norm normalization, which ensures that the eigenvector used in the normalization is updated at each stage of the computation.
Theorem 2.1. Let $A$ be an $n$ by $n$ matrix, $b, c \in \mathbb{C}^{n}$ and $d \in \mathbb{C}$. Let

$$\mathcal{M} = \begin{bmatrix} A & b \\ c^{H} & d \end{bmatrix} \qquad (10)$$

be an $(n+1)$ by $(n+1)$ matrix. If $A$ is singular and of rank $n-1$, then $\mathcal{M}$ is nonsingular if and only if $c^{H}\phi \neq 0$ for all $\phi \in \mathcal{N}(A)\setminus\{0\}$ and $\psi^{H} b \neq 0$ for all $\psi \in \mathcal{N}(A^{H})\setminus\{0\}$, where $\mathcal{N}(A)$ denotes the nullspace of $A$.
Proof: Let $\mathcal{M}$ be nonsingular. Assume $A$ is singular, let $\phi \in \mathcal{N}(A)\setminus\{0\}$, and suppose, for contradiction, that $c^{H}\phi = 0$. We multiply $\mathcal{M}$ from the right by the nonzero vector $(\phi^{T}, 0)^{T}$ to yield

$$\mathcal{M}\begin{bmatrix} \phi \\ 0 \end{bmatrix} = \begin{bmatrix} A\phi \\ c^{H}\phi \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \qquad (11)$$

This shows that we have multiplied the nonsingular matrix $\mathcal{M}$ by a nonzero vector and obtained the zero vector, which implies that $\mathcal{M}$ is singular, a contradiction; hence $c^{H}\phi \neq 0$. Similarly, let $\psi \in \mathcal{N}(A^{H})\setminus\{0\}$ and suppose $\psi^{H} b = 0$. Multiply $\mathcal{M}$ from the left by the nonzero row vector $(\psi^{H}, 0)$ to obtain

$$\begin{bmatrix} \psi^{H} & 0 \end{bmatrix}\mathcal{M} = \begin{bmatrix} \psi^{H} A & \psi^{H} b \end{bmatrix} = \begin{bmatrix} 0 & 0 \end{bmatrix}.$$

This shows that $\mathcal{M}$ is singular, contradicting the nonsingularity of $\mathcal{M}$; therefore $\psi^{H} b \neq 0$.

Conversely, let $A$ be singular of rank $n-1$, and assume $c^{H}\phi \neq 0$ and $\psi^{H} b \neq 0$. We want to show that $\mathcal{M}$ is nonsingular. If we can show that the vector $(z^{T}, \mu)^{T}$ is zero whenever

$$\mathcal{M}\begin{bmatrix} z \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

then $\mathcal{M}$ is nonsingular. After expanding the above equation, we obtain

$$A z + \mu b = 0, \qquad (12)$$

$$c^{H} z + \mu d = 0. \qquad (13)$$

Multiplying (12) from the left by $\psi^{H}$ and using the fact that $\psi^{H} A = 0$, we have $\mu\,\psi^{H} b = 0$. But by assumption $\psi^{H} b \neq 0$, hence $\mu = 0$. With this value of $\mu$, we are left with $A z = 0$ in (12), and because $A$ is singular of rank $n-1$, this implies that $z = \alpha\phi$ for some scalar $\alpha$. After substituting this value of $z$ into (13), we have $\alpha\, c^{H}\phi = 0$, from which $\alpha = 0$ is immediate since $c^{H}\phi \neq 0$. Therefore $z = 0$, so $(z^{T}, \mu)^{T} = 0$ and $\mathcal{M}$ is nonsingular. $\square$
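As a quick numerical illustration of Theorem 2.1 (an added check, not part of the original presentation), the short script below borders a randomly generated singular matrix of rank $n-1$ and confirms that the bordered matrix (10) is well conditioned when $c^{H}\phi \neq 0$ and $\psi^{H} b \neq 0$, but becomes singular once $c$ is modified so that $c^{H}\phi = 0$. The test matrix and all names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Build a singular matrix A of rank n - 1 with known null vectors:
# A = U * diag(s) * V^H with exactly one zero singular value.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
s = np.array([3.0, 2.5, 2.0, 1.5, 0.0])
A = U @ np.diag(s) @ V.conj().T
phi = V[:, -1]   # right null vector: A @ phi = 0
psi = U[:, -1]   # left null vector: psi^H A = 0

def bordered(A, b, c, d):
    """Assemble the (n+1) by (n+1) bordered matrix of (10): [[A, b], [c^H, d]]."""
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1), dtype=complex)
    B[:n, :n] = A
    B[:n, n] = b
    B[n, :n] = c.conj()
    B[n, n] = d
    return B

b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Mb_good = bordered(A, b, c, 0.0)            # generically c^H phi != 0 and psi^H b != 0
c_bad = c - phi * (phi.conj() @ c)          # force c^H phi = 0 (phi has unit norm)
Mb_bad = bordered(A, b, c_bad, 0.0)
print("cond, c^H phi != 0:", np.linalg.cond(Mb_good))   # moderate: nonsingular
print("cond, c^H phi  = 0:", np.linalg.cond(Mb_bad))    # huge: singular, as predicted
```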
Next, we present Algorithm 1 for computing the complex eigenpair of the pencil $(A, M)$ using complex arithmetic. This is the main contribution of this paper.

Stop Algorithm 1 as soon as the residual norm falls below a prescribed tolerance.
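The iteration can be summarised by the following minimal sketch: Newton's method applied to (6), with the bordered Jacobian (8) factorised by LU at each step. The stopping test, tolerance, iteration limit and all names below are illustrative assumptions rather than a verbatim transcription of the published algorithm.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def algorithm1(A, M, c, x0, lam0, tol=1e-12, max_iter=50):
    """Newton's method on (6): (A - lam*M)x = 0, c^H x = 1, using the
    bordered Jacobian (8) and a complex LU factorisation at each step.
    A sketch only; names, defaults and the stopping rule are illustrative."""
    n = A.shape[0]
    x, lam = x0.astype(complex), complex(lam0)
    for k in range(max_iter):
        r = (A - lam * M) @ x                    # eigenvalue residual
        if np.linalg.norm(r) <= tol:             # one possible stopping test
            return x, lam, k
        # Bordered Jacobian (8): [[A - lam*M, -M x], [c^H, 0]].
        J = np.zeros((n + 1, n + 1), dtype=complex)
        J[:n, :n] = A - lam * M
        J[:n, n] = -(M @ x)
        J[n, :n] = c.conj()
        # Right-hand side: -F(y) from (6).
        F = np.concatenate([r, [c.conj() @ x - 1.0]])
        delta = lu_solve(lu_factor(J), -F)       # one LU solve per step
        x, lam = x + delta[:n], lam + delta[n]
    return x, lam, max_iter
```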
Next, we present Algorithm 4 for computing the complex eigenpair of the pencil $(A, M)$ using complex arithmetic.
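A plausible sketch of this idea is given below, in which the normalization row of the bordered system is the current two-norm-normalized iterate $x^{(k)}$ rather than a fixed vector $c$. The exact update in [7] may differ, so this should be read as an assumption-laden illustration rather than the published Algorithm 4.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def algorithm4_step(A, M, x, lam):
    """One step of a two-norm-normalized Newton-type iteration (one reading of
    Algorithm 4 in [7]; illustrative, not a verbatim transcription).
    The normalization row is the current iterate x, so it changes every step."""
    n = A.shape[0]
    x = x / np.linalg.norm(x)                    # keep ||x||_2 = 1
    J = np.zeros((n + 1, n + 1), dtype=complex)
    J[:n, :n] = A - lam * M
    J[:n, n] = -(M @ x)
    J[n, :n] = x.conj()                          # current x^H replaces the fixed c^H
    F = np.concatenate([(A - lam * M) @ x, [x.conj() @ x - 1.0]])
    d = lu_solve(lu_factor(J), -F)
    return x + d[:n], lam + d[n]
```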
3. Numerical Experiments
In this section, we compare the performance of the algorithm obtained by adding Ruhe's normalization (Algorithm 1) with the three other algorithms presented in the last section (Algorithm 2, Algorithm 3 and Algorithm 4) on three numerical examples. Throughout this section, the same starting guesses and the same stopping criterion are used for all four algorithms.
Example 3.1.
Consider a real two by two matrix with a complex-conjugate pair of eigenvalues.

We compared the performance of the four algorithms on this two by two matrix, and the results are presented in Table 1, Table 2, Table 3 and Table 4, respectively. In all four algorithms we used the same initial guesses for the eigenvector and the eigenvalue. It was observed that
Table 1. Values of the computed iterates for the two by two matrix using Algorithm 1.
Table 2. Values of the computed iterates for the two by two matrix using Algorithm 2. Quadratic convergence is shown in columns 4, 6 and 7.
Table 3. Values of the computed iterates for the two by two matrix using Algorithm 3. Quadratic convergence is shown in columns 4, 6 and 7.
Table 4. Values of the computed iterates for the two by two matrix using Algorithm 4. Quadratic convergence is shown in columns 3, 5 and 6.
Figure 1. Distribution of the complex eigenvalues of the 20 by 20 Grcar matrix. The horizontal axis is the real axis and the vertical axis is the imaginary axis.
Algorithm 1 converged after only four iterations, while it took seven iterations for the other three algorithms to converge to the same eigenvalue.
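As a worked illustration, the snippet below runs the algorithm1 sketch from Section 2 on a hypothetical two by two matrix with eigenvalues $1 \pm 2i$; this matrix, the fixed vector $c$ and the starting guesses are illustrative and are not the ones behind Table 1.

```python
import numpy as np

# Hypothetical 2 by 2 example (not the test matrix used in the paper):
# the eigenvalues of A are 1 + 2i and 1 - 2i.
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])
M = np.eye(2)                              # standard problem, M = I
c = np.array([1.0 + 0.0j, 1.0 + 0.0j])     # illustrative fixed vector for (6)
x0 = np.array([1.0 + 1.0j, 1.0 - 0.5j])    # illustrative starting eigenvector
lam0 = 0.5 + 1.5j                          # illustrative starting eigenvalue

# Reuses the algorithm1 sketch given after Algorithm 1 in Section 2.
x, lam, iters = algorithm1(A, M, c, x0, lam0)
print(iters, lam)                          # converges to approximately 1 + 2i
```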
Example 3.2.
The Grcar matrix [12] [13] is a non-symmetric Toeplitz matrix with sensitive eigenvalues; it has $-1$ on the first subdiagonal and $+1$ on the diagonal and on the first three superdiagonals.

Figure 1 shows the distribution of the complex eigenvalues of the twenty by twenty Grcar matrix in the complex plane. All four algorithms discussed in the last section converged to the same eigenvalue after 12 iterations with the same starting guesses, as shown in Table 5, Table 6, Table 7 and Table 8. However, unlike the first example, in which Algorithm 1 converged faster than the other three, here it did not, possibly due to the sensitivity of the eigenvalues.
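For completeness, the snippet below generates the Grcar matrix just described and computes its spectrum. The construction follows the standard definition with three superdiagonals of ones; the helper name grcar is ours, not from the paper.

```python
import numpy as np

def grcar(n, k=3):
    """Grcar matrix: -1 on the first subdiagonal, +1 on the main diagonal
    and on the first k superdiagonals (k = 3 gives the matrix of Example 3.2)."""
    G = -np.eye(n, k=-1)
    for j in range(k + 1):
        G += np.eye(n, k=j)
    return G

G = grcar(20)
eigenvalues = np.linalg.eigvals(G)
# Plotting real vs. imaginary parts of these eigenvalues reproduces Figure 1.
print(np.sort_complex(eigenvalues))
```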
Figure 2. Distribution of the complex eigenvalues of the 200 by 200 bwm200.mtx matrix. The horizontal axis is the real axis and the vertical axis is the imaginary axis.
Example 3.3.
Consider the 200 by 200 matrix bwm200.mtx from the Matrix Market library [14]. It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem was also studied in [9], and we are interested in finding the rightmost eigenvalue, the one closest to the imaginary axis, and its corresponding eigenvector. Figure 2 shows the distribution of the complex eigenvalues of the matrix.
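A minimal sketch of how this experiment can be set up is given below. It assumes the file bwm200.mtx has been downloaded from the Matrix Market, and it uses SciPy's sparse eigensolver only to produce a reference value for the rightmost eigenvalue; that solver is not part of Algorithms 1-4.

```python
import numpy as np
from scipy.io import mmread
from scipy.sparse.linalg import eigs

# Assumes bwm200.mtx has been downloaded from the Matrix Market [14].
A = mmread("bwm200.mtx").tocsc()

# Reference value for the rightmost eigenvalue (largest real part), used only
# to check the output of Algorithms 1-4; not part of the algorithms themselves.
vals = eigs(A, k=6, which="LR", return_eigenvectors=False)
print("rightmost eigenvalue (reference):", vals[np.argmax(vals.real)])
```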
For this example, in all four algorithms we chose the initial eigenvalue guess in line with [9] and built the initial eigenvector from $\mathbf{1}$, the vector of all ones.
Table 5. Values of the computed iterates for the twenty by twenty Grcar matrix using Algorithm 1. Quadratic convergence is shown in columns 3 and 5 in the final iterations (up to iteration 11).
Table 6. Values of the computed iterates for the twenty by twenty Grcar matrix using Algorithm 2. Quadratic convergence is shown in columns 4, 5 and 6 in the final iterations (up to iteration 11).
Table 7. Values of the computed iterates for the twenty by twenty Grcar matrix using Algorithm 3. Quadratic convergence is shown in columns 4, 5 and 6 in the final iterations (up to iteration 11).
Table 8. Values of the computed iterates for the twenty by twenty Grcar matrix using Algorithm 4. Quadratic convergence is shown in columns 4, 5 and 6 in the final iterations (up to iteration 11).
Results of the numerical experiments are tabulated in Tables 9-12. We observed that while it took Algorithm 1, with a fixed complex vector, six iterations to converge to the desired eigenvalue, as shown in Table 9, it took eight, ten and eight iterations for Algorithm 2 (Table 10), Algorithm 3 (Table 11) and Algorithm 4 (Table 12), respectively, to achieve the same result. This shows that Algorithm 1 converged faster than the others.
As shown in Table 1 and Table 9, an application of Algorithm 1 with a close enough initial guess converges faster to the eigenvalue of interest than the previous algorithms discussed in [1], [6] and [7], namely Algorithm 2, Algorithm 3 and Algorithm 4, respectively.
Table 9. Values of the computed iterates for the 200 by 200 matrix of Example 3.3 using Algorithm 1. Columns 4 and 5 show that the results converged quadratically in the final iterations (up to iteration 4).
Table 10. Values of the computed iterates for the 200 by 200 matrix of Example 3.3 using Algorithm 2. Columns 6 and 7 show that the results converged quadratically in the final iterations (up to iteration 7).
Table 11. Values of the computed iterates for the 200 by 200 matrix of Example 3.3 using Algorithm 3. Columns 6 and 7 show that the results converged quadratically in the final iterations (up to iteration 7).
Table 12. Values of the computed iterates for the 200 by 200 matrix of Example 3.3 using Algorithm 4. Columns 5 and 6 show that the results converged quadratically in the final iterations (up to iteration 7).
4. Conclusion
In this paper, we have shown using the ABCD Lemma that the Jacobian obtained from adding Ruhe’s normalization to the matrix pencil is non-singular at the root. With a proper choice of the fixed complex vector and an initial guess close to the eigenvalue of interest, we recommend the use of Algorithm 1 for the numerical computation of the desired complex eigenpair of a matrix because of its faster convergence.
Acknowledgements
The authors acknowledge the valuable suggestions of an anonymous referee, which helped in improving the final version of this paper. The main part of this work was done while the first author was a Ph.D. student at the University of Bath, whose financial support in the form of a studentship is duly acknowledged.