Advances in Linear Algebra & Matrix Theory
Vol.05 No.02(2015), Article ID:56994,7 pages
10.4236/alamt.2015.52005

On Exact Determination of Eigen Vectors

Raghuram Prasad Dasaradhi1*, V. V. Haragopal2

1Department of Mathematics, Osmania University, Hyderabad, India

2Department of Statistics, Osmania University, Hyderabad, India

Email: *draghuramp@gmail.com, haragopalvajjha@gmail.com

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 24 March 2015; accepted 6 June 2015; published 9 June 2015

ABSTRACT

In this article, we determine the Eigen values and Eigen vectors of a square matrix by a new approach. The approach assumes that all the roots, with their multiplicities, are known, and uses only simple multiplication of a matrix by a vector. The process does not even require matrix inversion.

Keywords:

Characteristic Equation, Minimal Polynomial, Eigen Values, Eigen Vectors, Vandermonde Matrices, Jordan Reduction

1. Introduction

There are many algorithms to determine the Eigen values and Eigen vectors of a square matrix [1]-[4]. Basically they are iterative: they either determine sequentially the largest Eigen values and the associated Eigen vectors or, as in the case of positive definite matrices, go for simultaneous iterative determination of all Eigen values and Eigen vectors by a succession of orthogonal transformations [1] [3]. Also, when one has multiple roots, the usual iterative approaches generally fail to work unless additional properties of the Eigen vectors of repeated roots are exploited. However, it is theoretically possible (and hence, in practice, achievable with some success) to obtain all the Eigen vectors (including the generalized Eigen vectors connected with Jordan reduction) of a matrix, if all the roots with their multiplicities are known, using only the simple multiplication of a matrix and a vector. This process does not even require matrix inversion.

In what follows, we shall present the procedure through illustrative examples. Since the theory behind it is rather simple and becomes almost obvious once the way is pointed out, we shall not prove any result. Rather, we shall only state the relevant new theorems in matrix theory. Implications of these theorems and their extensions in more general contexts are dealt with in a separate study.

2. Basic Points

To place the results of the present paper in a proper perspective, it is necessary to make the following points explicit before going to the theorem.

1) This procedure does require the knowledge of the minimal polynomial [5] and the numerical value of the Eigen value for which the Eigen vector is to be obtained.

2) The only matrix operation involved in obtaining the Eigen vector is multiplication of a matrix and a vector.

3) One can obtain, with equal ease, the Eigen vectors for each known root, including the generalized vectors for multiple roots when they exist. In other words, the Eigen vectors can be obtained for each root by itself, without needing to determine either the other roots or their associated vectors.

4) When a multiple Eigen root has many Eigen vectors, one can get them all by starting with different initial vectors.

For convenience, we shall employ the following conventions and notation:

a) $A$ is a square matrix of order $n$ with Eigen roots $\lambda_1, \lambda_2, \dots, \lambda_n$; unless stated otherwise, they are all assumed to be distinct.

b) $P = [p_1, p_2, \dots, p_n]$ is the matrix of Eigen columns of $A$.

c) 1) $y_0$ is the initial or starting vector and $y_i = A y_{i-1} = A^i y_0$, $i = 1, 2, \dots$.

2) $X_r$ is the $n$ by $r$ matrix with $y_0, y_1, \dots, y_{r-1}$ as its columns.

d) $V$ is the Vandermonde matrix [6] of the Eigen values, with $(i, j)$-th element $\lambda_i^{j-1}$.

$m(\lambda)$ is the minimal polynomial of $A$ and $a$ is the vector of coefficients of $m(\lambda)$ in reverse order. When the $\lambda_i$ are all distinct, as is being assumed for the present, this is also the characteristic polynomial $f(\lambda)$, with $a$ as the corresponding coefficient vector. Thus, $m(\lambda) = f(\lambda)$ in the present situation.

As is well known, with the condition that the leading coefficient be unity, $a$ is unique if $X_n$ is of full rank. This will be the case when the characteristic and minimal polynomials are identical; otherwise, the class of polynomials defined by this vector will have as their H.C.F. a polynomial which has the minimal polynomial as its (possibly trivial) factor.

e) $Q_i(\lambda) = f(\lambda)/(\lambda - \lambda_i)$ is the quotient polynomial associated with the Eigen root $\lambda_i$, with $q_i$ as the corresponding quotient (coefficient) vector.
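As a concrete rendering of these conventions, the following minimal sketch (in Python with NumPy; the function names are ours, not the paper's) builds the vectors $y_i$ and the matrix $X_r$ using matrix-vector products only:

```python
import numpy as np

def krylov_vectors(A, y0, r):
    """Return y_0, y_1 = A y_0, ..., y_{r-1} = A^{r-1} y_0."""
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(r - 1):
        ys.append(A @ ys[-1])          # matrix-vector product only
    return ys

def X(A, y0, r):
    """The n-by-r matrix X_r with y_0, ..., y_{r-1} as its columns."""
    return np.column_stack(krylov_vectors(A, y0, r))
```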

3. Main Results

We can now state the new Eigen vector theorems and their obvious extensions which are at the heart of the procedures presented in the sequel.

THEOREM 1: If $y_0 = Pc$ with all the components $c_i$ of $c$ non-zero, then $X_n = P\,\mathrm{diag}(c_1, c_2, \dots, c_n)\,V$.

The proof is easy once it is noted that $y_j = A^j y_0 = P \Lambda^j c$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Since the Eigen vectors in general are unique up to scale, and Eigen vectors associated with different Eigen values are linearly independent, the choice of $y_0$ above is not restrictive; unless one is extremely unlucky (or when the Eigen vectors are highly structured, as in a subsequent illustration), any vector, including the arbitrarily chosen one, will be a linear combination of all the $p_i$'s.

THEOREM 2: The vector $z_i = Q_i(A)\, y_0$ is an Eigen vector of $A$ associated with the Eigen value $\lambda_i$. This theorem is illustrated by the following examples.
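A minimal sketch of Theorem 2 in code (our naming, assuming the distinct-root case so that $f(\lambda) = \prod_j (\lambda - \lambda_j)$):

```python
import numpy as np

def eigvec_for_root(A, y0, roots, i):
    """z_i = Q_i(A) y_0, with Q_i(lam) = f(lam) / (lam - roots[i]),
    evaluated as the product of (A - lam_j I) over j != i applied to y_0.
    Only matrix-vector products are used; no inversion."""
    z = np.asarray(y0, dtype=float)
    for j, lam in enumerate(roots):
        if j != i:
            z = A @ z - lam * z        # apply (A - lam I)
    return z
```

If the result is non-zero, then $(A - \lambda_i I) z_i = f(A) y_0 = 0$ by the Cayley-Hamilton theorem, so $z_i$ is indeed an Eigen vector for $\lambda_i$.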

3.1. Illustrations

We shall now illustrate the application of the above theorems in the determination of the coefficient vector $a$ and the Eigen vectors. Different types of situations, including the case of Jordan reducible matrices and generalized Eigen vectors, will be illustrated by numerical examples. These examples will be interspersed with comments as required.

3.2. Illustration 1

Let $A$ be a matrix of order 3 with distinct Eigen values. From a starting vector $y_0$ we compute $y_1 = Ay_0$, $y_2 = Ay_1$ and $y_3 = Ay_2$, and form $X_3$ and $X_4$.

Hence the vector $a$ such that $X_4 a = 0$, with leading coefficient unity, gives the characteristic polynomial $f(\lambda)$ and its three distinct roots $\lambda_1$, $\lambda_2$ and $\lambda_3$.

Since $A$ is of order 3, we have $Q_1(\lambda) = f(\lambda)/(\lambda - \lambda_1)$,

and $z_1 = Q_1(A) y_0$ is an Eigen vector for $\lambda_1$.

Similarly, $z_2 = Q_2(A) y_0$ and $z_3 = Q_3(A) y_0$ are Eigen vectors for $\lambda_2$ and $\lambda_3$.
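Since the numerical data of this illustration are not reproduced here, a stand-in example (the matrix and starting vector below are our assumptions, chosen to have three distinct roots) shows the whole computation end to end:

```python
import numpy as np

# Hypothetical 3x3 matrix with distinct Eigen values 2, 3, 5 (not the paper's).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
y0 = np.array([1.0, 1.0, 1.0])

ys = [y0]
for _ in range(3):
    ys.append(A @ ys[-1])                  # y1, y2, y3

X3 = np.column_stack(ys[:3])
a = np.linalg.solve(X3, -ys[3])            # X3 a = -y3: monic f's coefficients
roots = np.roots([1.0, a[2], a[1], a[0]])  # f(l) = l^3 + a2 l^2 + a1 l + a0
print(np.sort(roots.real))                 # -> [2. 3. 5.]

for lam_i in roots.real:
    z = y0.copy()
    for lam_j in roots.real:
        if lam_j != lam_i:
            z = A @ z - lam_j * z          # z = Q_i(A) y0, mat-vec only
    assert np.allclose(A @ z, lam_i * z)   # z is an Eigen vector for lam_i
```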

3.3. Illustration 2

Let $A$ now be a matrix of order 3 with a repeated Eigen value, and compute $y_1, y_2, y_3$ and $X_3, X_4$ as before.

Here, $X_3$ is of rank 2; hence, though $X_4 a = 0$ has non-trivial solutions, there is no unique solution $a$ even with the requirement that the leading coefficient be unity. Hence one cannot obtain $f(\lambda)$ by this approach. However, it is possible to get the minimal polynomial $m(\lambda)$ by using $X_3$. Thus, solving the equation $X_3 b = 0$ with the leading coefficient unity, we get $b$ and hence $m(\lambda)$, of degree 2.

Hence, $f(\lambda) = m(\lambda) g(\lambda)$, where $g(\lambda)$ is a 1st degree polynomial, say $\lambda - \mu$; since the trace of $A$ equals the sum of the roots, one must have the coefficient of $\lambda^2$ in $f(\lambda)$ equal to $-7$. Hence, $\mu$ is determined uniquely.

And, thus, $f(\lambda)$ has two distinct roots, $\lambda_1$ and $\lambda_2$; the root $\lambda_2$ is a double root and has two independent Eigen vectors.

Taking $Q_1(\lambda) = f(\lambda)/(\lambda - \lambda_1)$ and $z_1 = Q_1(A) y_0$, we get an Eigen vector for $\lambda_1$;

taking $Q_2(\lambda) = f(\lambda)/(\lambda - \lambda_2)$, we get $z_2 = Q_2(A) y_0$, an Eigen vector for $\lambda_2$.

A second Eigen vector associated with $\lambda_2$ is obtainable by taking a new starting vector, say $y_0'$.

We then get the corresponding vectors $y_1', y_2'$, giving

$z_1' = Q_1(A) y_0'$ and $z_2' = Q_2(A) y_0'$.

As is to be expected, the $z_1$'s obtained by using the two different $y_0$'s are multiples of each other.
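A runnable stand-in for this situation (our matrix, not the paper's, chosen so that the minimal polynomial has degree 2 while one root is double with a full Eigen space):

```python
import numpy as np

# Hypothetical matrix (not the paper's): Eigen values 7 and 2, the root 2
# double but with a full Eigen space, so m(l) = (l - 7)(l - 2) has degree 2.
A = np.diag([7.0, 2.0, 2.0])

def vec_for_double_root(y0):
    # Q(l) = m(l)/(l - 2) = l - 7: one matrix-vector product suffices.
    return A @ y0 - 7.0 * y0

z = vec_for_double_root(np.array([1.0, 1.0, 0.0]))       # -> [0, -5,  0]
z_new = vec_for_double_root(np.array([1.0, 0.0, 1.0]))   # -> [0,  0, -5]
# Two different starting vectors yield two independent Eigen vectors for 2.
assert np.allclose(A @ z, 2 * z) and np.allclose(A @ z_new, 2 * z_new)
```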

3.4. Illustration 3

Let $A$ again be of order 3, with the vectors $y_1, y_2, y_3$ computed from a starting vector $y_0$ as before.

Here again the rank of $X_3$ is 2. Hence, solving $X_3 b = 0$ with the leading coefficient unity, we get $m(\lambda)$, of degree 2 with a double root $\lambda_1$.

Since $\mathrm{tr}A = 3\lambda_1$, we have $f(\lambda) = (\lambda - \lambda_1)^3$.

Thus $\lambda_1$ is the triple Eigen root of $A$, but $A$ has two independent Eigen vectors and one generalized Eigen vector.

One Eigen vector is obtained by taking

$W_1 = Q_1(A) y_0 = (A - \lambda_1 I) y_0$, where $Q_1(\lambda) = m(\lambda)/(\lambda - \lambda_1)$;

defining $W_2 = y_0$, we get $(A - \lambda_1 I) W_2 = W_1$ and $A W_1 = \lambda_1 W_1$.

It is easily verified that $W_2$ is a generalized Eigen vector of $A$ with $(A - \lambda_1 I) W_2 = W_1 \neq 0$.

To obtain a second Eigen vector for $\lambda_1$, we start with a different starting vector, say $y_0'$, and compute the corresponding vectors. Here again the same $m(\lambda)$ is obtained, and defining $Z = (A - \lambda_1 I) y_0'$, we have $A Z = \lambda_1 Z$. Thus, $Z$ is a second Eigen vector of $A$ for $\lambda_1$.

Obviously, the two independent Eigen vectors, say $W_1$ and $Z$ above, are not unique; they are a basis for the two-dimensional vector space of Eigen vectors of $A$ for $\lambda_1$, viz., the solution space of the equation $(A - \lambda_1 I)x = 0$.

3.5. Illustration 4

Let $A$ be of order 3, with the vectors $y_1, y_2, y_3$ computed from a starting vector $y_0$ as before.

Since $X_3$ is of full rank, we get the solution vector $a$ of $X_4 a = 0$ with leading coefficient unity.

Hence $f(\lambda) = m(\lambda) = (\lambda - \lambda_1)^3$. Thus, $A$ has a triple Eigen root $\lambda_1$,

but has only one Eigen vector. Defining

$W_1 = (A - \lambda_1 I)^2 y_0$, $W_2 = (A - \lambda_1 I) y_0$ and $W_3 = y_0$,

we get $A W_1 = \lambda_1 W_1$; $(A - \lambda_1 I) W_2 = W_1$ and $(A - \lambda_1 I) W_3 = W_2$.

As is to be expected, we have one Eigen vector, $W_1$; and $W_2$ and $W_3$ are generalized Eigen vectors forming a Jordan chain.
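A stand-in with a single $3 \times 3$ Jordan block (our example, not the paper's) traces the same chain:

```python
import numpy as np

# Hypothetical Jordan block (not the paper's matrix): triple root 3 with a
# single Eigen vector, so f(l) = m(l) = (l - 3)^3.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 3.0]])
y0 = np.array([1.0, 1.0, 1.0])

shift = lambda v: A @ v - 3.0 * v   # apply (A - 3I), mat-vec only

W3 = y0          # generalized vector of grade 3
W2 = shift(W3)   # generalized vector of grade 2: (A - 3I) y0
W1 = shift(W2)   # the (unique up to scale) Eigen vector: (A - 3I)^2 y0

assert np.allclose(A @ W1, 3 * W1)   # A W1 = 3 W1
assert np.allclose(shift(W2), W1)    # (A - 3I) W2 = W1
assert np.allclose(shift(W3), W2)    # (A - 3I) W3 = W2
```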

3.6. Illustration 5

Let $A$ now be a matrix of order 5; a starting vector $y_0$ gives the vectors $y_1, \dots, y_5$ and the matrix $X_5$.

Since $X_5$ is of rank 4, we have the solution of $X_5 b = 0$ with leading coefficient unity,

giving the minimal polynomial $m(\lambda) = (\lambda - \lambda_1)^2 (\lambda - \lambda_2)^2$.

Thus, $\lambda_1$ is a triple root with two Eigen vectors and one generalized Eigen vector, while $\lambda_2$ is a double root with one Eigen vector and one generalized Eigen vector.

Since $m(\lambda) = (\lambda - \lambda_1)^2 (\lambda - \lambda_2)^2$, taking $Q_1(\lambda) = m(\lambda)/(\lambda - \lambda_1)$, we get

$z_1 = Q_1(A) y_0$, with $A z_1 = \lambda_1 z_1$, and the associated generalized vector $(A - \lambda_2 I)^2 y_0$.

Similarly, with $Q_2(\lambda) = m(\lambda)/(\lambda - \lambda_2)$,

we get $z_2 = Q_2(A) y_0$ and

the generalized vector $(A - \lambda_1 I)^2 y_0$, with $A z_2 = \lambda_2 z_2$.

Thus, we have obtained four vectors for $A$; one more Eigen vector is yet to be obtained, corresponding to the triple Eigen value $\lambda_1$.
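A stand-in with the same Jordan structure (our $5 \times 5$ matrix, not the paper's) reproduces these four vectors:

```python
import numpy as np

# Hypothetical 5x5 matrix (not the paper's) with the structure of this
# illustration: -1 is a triple root (two Eigen vectors, one generalized),
# 4 is a double root (one Eigen vector, one generalized).
J = lambda lam: np.array([[lam, 1.0], [0.0, lam]])
A = np.zeros((5, 5))
A[:2, :2] = J(-1.0)
A[2, 2] = -1.0
A[3:, 3:] = J(4.0)
y0 = np.ones(5)

def apply_quotient(factors, v):
    """Apply the product of (A - lam I)^k over (lam, k), mat-vec only."""
    for lam, k in factors:
        for _ in range(k):
            v = A @ v - lam * v
    return v

# m(l) = (l + 1)^2 (l - 4)^2.
z1 = apply_quotient([(-1.0, 1), (4.0, 2)], y0)   # m/(l+1): Eigen vector for -1
assert np.allclose(A @ z1, -z1)
g1 = apply_quotient([(4.0, 2)], y0)              # m/(l+1)^2: generalized, for -1
assert np.allclose(apply_quotient([(-1.0, 2)], g1), 0)
z2 = apply_quotient([(-1.0, 2), (4.0, 1)], y0)   # m/(l-4): Eigen vector for 4
assert np.allclose(A @ z2, 4 * z2)
g2 = apply_quotient([(-1.0, 2)], y0)             # m/(l-4)^2: generalized, for 4
assert np.allclose(apply_quotient([(4.0, 2)], g2), 0)
```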

3.7. Case 1

With the new starting vector, we have the corresponding matrix $X_3$.

This is obviously of rank 2 only.

Solving the homogeneous equations $X_3 y = 0$, we get a polynomial, $(\lambda - \lambda_1)^2$, since its coefficient vector is a solution. Thus we know that $A$ has $\lambda_1$ as a multiple root with a multiplicity of at least 2.

Using $Q(\lambda) = \lambda - \lambda_1$ and this starting vector, we get the Eigen vector $(A - \lambda_1 I) y_0$,

and hence the starting vector itself as the generalized vector.

To illustrate the complexity of the situation one has to be prepared to encounter, we shall in turn present the results with five different starting vectors.

3.8. Case 2

The matrix $X_3$ for the second starting vector has rank 2. Hence, solving $X_3 y = 0$ again gives $(\lambda - \lambda_1)^2$, and the corresponding Eigen vector and generalized vector follow as in Case 1.

3.9. Case 3

Here the matrices $X_3$ and $X_4$ have rank 3.

Hence, solving $X_4 y = 0$ gives a cubic polynomial in which $\lambda_2$ also appears as a root; i.e., $\lambda_2$ is also an Eigen value of $A$.

Taking the corresponding quotient polynomials and multiplying $X$ by their coefficient vectors,

we get the Eigen vector and the generalized vector for $\lambda_2$.

3.10. Case 4

Here $X_4$ is of rank 3.

Hence, solving $X_4 y = 0$ gives a polynomial with $\lambda_2$ as a double root. Thus $A$ has $\lambda_2$ also as a repeated root of multiplicity at least 2. Combining the information from Case 3, we see that the matrix $A$ has $\lambda_1$ and $\lambda_2$ as repeated roots, each of multiplicity at least 2. Since $\mathrm{tr}A = 5$, it follows that $\lambda_1$ is actually a triple root and $\lambda_2$ is a double root of the characteristic equation of $A$. Since in this case also the same polynomial is obtained, we get the vectors the same as in Case 3, with the Eigen vector for $\lambda_2$ obtained by using the quotient containing $(\lambda - \lambda_2)$ and the corresponding generalized vector by taking the quotient without the $(\lambda - \lambda_2)$ factor.

3.11. Case 5

Here $X_3$ is of rank 2,

with $(\lambda - \lambda_1)^2$ as the solution polynomial, confirming that $\lambda_1$ is at least a double root.

Taking $Q(\lambda) = \lambda - \lambda_1$ and multiplying, we get the Eigen vector and, with the starting vector itself, the generalized vector for $\lambda_1$.

It is also instructive to examine the result of multiplying an $X$ matrix by the coefficient vector obtained by dividing the characteristic polynomial by any of its factors. With the $X$ of Case 5, taking one such quotient $Q$, we get $u$, the generalized vector for $\lambda_1$; taking another, we get the Eigen vector for $\lambda_1$. It can also be verified that if $X$ is constructed using a different starting vector and $Q$ is taken suitably, we get the Eigen vector for $\lambda_2$ and the corresponding generalized vector. This behaviour is determined by the composition of $y_0$ in terms of the Eigen and generalized vectors $p_i$.

For the matrix $A$, we have one representation of $y_0$ as a linear combination of the Eigen vectors and generalized Eigen vectors.

In Case 5, the starting vector involves only the vectors associated with $\lambda_1$, and by $X$ from this initial vector we can only get the Eigen vector and the generalized vector for $\lambda_1$. If $Q(\lambda)$ has $(\lambda - \lambda_1)^2$ as a factor, the corresponding vector will be the zero vector. If it has $(\lambda - \lambda_1)$ but not $(\lambda - \lambda_1)^2$ as a factor, then $u$ will be an Eigen vector for $\lambda_1$ while, if $Q(\lambda)$ has only factors of $f(\lambda)$ other than $(\lambda - \lambda_1)$, it will be a generalized Eigen vector associated with $\lambda_1$.

Similarly, in Case 2, the starting vector is confined to the subspace for which the associated Eigen root is $\lambda_1$, with multiplicity 3 but with only one super-diagonal unity in the Jordan form, and hence two of the associated vectors are Eigen vectors for $\lambda_1$ while one is a generalized Eigen vector.

Hence, as is to be expected, the $X$ for this case, viz. $X_3$,

is of rank 2, giving, with $X_3 y = 0$, the polynomial $(\lambda - \lambda_1)^2$. Any factor $Q$ of $f(\lambda)$ which includes $(\lambda - \lambda_1)^2$ gives the zero vector;

any $Q$ which includes only $(\lambda - \lambda_1)$ as a factor gives a $V$ which is an Eigen vector for $\lambda_1$, while any $Q$ which has no $(\lambda - \lambda_1)$ factor at all gives a generalized vector of $A$ for $\lambda_1$.
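A stand-in (our matrix and quotients, not the paper's) makes the factor rule concrete:

```python
import numpy as np

# Hypothetical matrix (not the paper's): a 2x2 Jordan block for the root 2
# plus a simple root 5, so f(l) = m(l) = (l - 2)^2 (l - 5).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
y0 = np.array([1.0, 1.0, 1.0])

factor = lambda lam, v: A @ v - lam * v    # apply (A - lam I)

# Q contains (l - 2)(l - 5): the product is an Eigen vector for 2.
u = factor(2.0, factor(5.0, y0))
assert np.allclose(A @ u, 2 * u)

# Q contains only (l - 2)^2: an Eigen vector for 5.
v = factor(2.0, factor(2.0, y0))
assert np.allclose(A @ v, 5 * v)

# Q contains only (l - 5), i.e. no (l - 2) factor: a generalized vector
# for 2, since (A - 2I) w != 0 but (A - 2I)^2 w = 0.
w = factor(5.0, y0)
assert not np.allclose(factor(2.0, w), 0)
assert np.allclose(factor(2.0, factor(2.0, w)), 0)
```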

These results, of course, are the consequences of the second Eigen vector theorem presented elsewhere.

4. Summary

Some general observations regarding the problem of determining the Eigen values and the corresponding Eigen vectors of a matrix are now in order. Though theoretically obvious, the significance of the procedure presented above for actual computation perhaps needs to be reiterated. However, in the present study we have not gone into the important questions regarding the approximations in practical computations and the effect of the consequent noise on the final results.

1) If one has the ability to solve a set of linear equations, one can obtain the characteristic polynomial of a matrix provided it is also the minimal polynomial. This fact is, of course, well known. However, the same is true regarding the determination of the minimal polynomial in general.

In the above notation, we compute sequentially the ranks of $X_1, X_2, X_3, \dots$. Let $k$ be the smallest integer such that $\mathrm{rank}(X_{k+1}) = \mathrm{rank}(X_k) = k$. Then the minimal polynomial is of degree $k$, and its coefficients are proportional to the solution of the equation $X_{k+1} b = 0$.

This is the case when $y_0$ is a nontrivial linear combination of all the generalized Eigen vectors and one each of the Eigen vectors connected with the Eigen roots of the matrix. Otherwise, the polynomial so obtained will be a proper factor of the minimal polynomial.
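A sketch of this rank computation (our code; the rank tests use a tolerance, since the paper leaves numerical questions aside):

```python
import numpy as np

def minimal_poly(A, y0, tol=1e-9):
    """Monic coefficients (constant term first) of the lowest-degree
    polynomial p with p(A) y0 = 0; for a generic y0 this is the minimal
    polynomial of A, otherwise a proper factor of it."""
    n = A.shape[0]
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(n):
        ys.append(A @ ys[-1])
    for k in range(1, n + 1):
        # smallest k with rank(X_{k+1}) = k, i.e. y_k depends on y_0..y_{k-1}
        if np.linalg.matrix_rank(np.column_stack(ys[:k + 1]), tol=tol) == k:
            b, *_ = np.linalg.lstsq(np.column_stack(ys[:k]), -ys[k], rcond=None)
            return np.append(b, 1.0)       # b_0, ..., b_{k-1}, 1
    return None
```

For instance, for $A = \mathrm{diag}(2, 2, 5)$ and a generic $y_0$, this returns the coefficients of $(\lambda - 2)(\lambda - 5)$, a polynomial of degree 2, rather than those of the degree-3 characteristic polynomial.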

If $m(\lambda)$ is the minimal polynomial, say $m(\lambda) = (\lambda - \lambda_1)^{r_1} (\lambda - \lambda_2)^{r_2} \cdots (\lambda - \lambda_k)^{r_k}$, then $A$ has $k$ distinct Eigen values, viz., $\lambda_1, \lambda_2, \dots, \lambda_k$, with $\lambda_i$ having a minimum multiplicity of $r_i$. However, since the characteristic polynomial will have every root of $m(\lambda)$ as its root as well, it follows that $f(\lambda) = (\lambda - \lambda_1)^{s_1} (\lambda - \lambda_2)^{s_2} \cdots (\lambda - \lambda_k)^{s_k}$, where $s_i \geq r_i$ and $s_1 + s_2 + \cdots + s_k = n$. Also, $A$ will have at least $k$ independent Eigen vectors along with the associated generalized Eigen vectors. Also, by using information on the trace of $A$, one can, by solving appropriate linear Diophantine equations, determine the values of the $s_i$'s.

2) Since the highest common factor of a polynomial and its derivative has each of the roots of the polynomial, the multiplicity of each being reduced by 1, it follows that $m(\lambda)/\mathrm{HCF}\{m(\lambda), m'(\lambda)\}$ will have each of the roots occurring exactly once. Hence, given $m(\lambda)$, one can determine all the roots of the polynomial; their multiplicities, as noted earlier, can be determined by solving appropriate linear Diophantine equations.
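A minimal sketch of this square-free reduction (hypothetical polynomial, using SymPy):

```python
import sympy as sp

l = sp.symbols('l')
m = sp.expand((l - 3)**3 * (l - 1)**2)   # hypothetical minimal polynomial
h = sp.gcd(m, sp.diff(m, l))             # H.C.F. of m and m': (l-3)^2 (l-1)
squarefree = sp.quo(m, h)                # each root now occurs exactly once
print(sp.solve(squarefree, l))           # -> [1, 3]
```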

3) Since real symmetric matrices are fully diagonalizable by an orthogonal matrix, their minimal polynomials will have no repeated root. This fact is of great help especially in situations where a dispersion matrix has signal Eigen values which are relatively large and possibly distinct, and a "noise Eigen value" which is hopefully small and will be of high multiplicity. A good estimation of the minimal polynomial will be possible with relatively less computational effort by the present approach. Using the same computational products, one can get the Eigen vectors for these "signal Eigen values".

4) The present approach also enables one to tackle complex Eigen values and Eigen vectors, especially when $A$ is real and hence the Eigen values and vectors occur in conjugate pairs.

Acknowledgements

We are highly thankful to the late Prof. S. N. Narahari Pandit for suggesting this problem; we are indebted to him. The first author acknowledges the UGC, India, for financial support.

References

  1. Datta, K.B. (1991) Matrix and Linear Algebra. Prentice-Hall of India Private Limited, New Delhi.
  2. Curtis, C.W. (1984) Linear Algebra: An Introductory Approach. Springer-Verlag, New York.
  3. Proskuryakov, I.V. (1978) Problems in Linear Algebra. Mir Publishers, Moscow.
  4. Bellman, R. (1970) Introduction to Matrix Analysis. 2nd Edition, Tata McGraw-Hill Publishing Company Ltd., New Delhi.
  5. Cullen, C.G. (1990) Matrices and Linear Transformations. 2nd Edition, Dover Publications, New York.
  6. Horn, R.A. and Johnson, C.R. (1991) Topics in Matrix Analysis. Cambridge University Press, Cambridge. http://dx.doi.org/10.1017/CBO9780511840371

NOTES

*Corresponding author.