On Exact Determination of Eigen Vectors

In this article, we determine the Eigen vectors of a square matrix by a new approach. It assumes that all the Eigen roots, with their multiplicities, are known, and it uses only simple multiplication of a matrix by a vector. The process does not even require matrix inversion.


Introduction
There are many algorithms to determine the Eigen values and Eigen vectors of a square matrix [1]-[4]. Basically they are iterative: they either determine sequentially the largest Eigen values and the associated Eigen vectors or, as in the case of positive definite matrices, go for simultaneous iterative determination of all Eigen values and Eigen vectors by a succession of orthogonal transformations [1] [3]. Also, when one has multiple roots, the usual iterative approaches generally fail to work unless additional properties of the Eigen vectors of repeated roots are exploited. However, it is theoretically possible (and hence, in practice, achievable with some success) to obtain all the Eigen vectors of a matrix (including the generalized Eigen vectors connected with Jordan reduction) if all the roots with their multiplicities are known, using only simple multiplication of a matrix by a vector. This process does not even require matrix inversion.
In what follows, we shall present the procedure through illustrative examples. Since the theory behind it is rather simple and becomes almost obvious once the way is pointed out, we shall not prove any results. Rather, we shall only state a relevant new theorem in matrix theory. Implications of this theorem and its extensions in more general contexts are dealt with in a separate study.

Basic Points
To place the results of the present paper in a proper perspective, it is necessary to make the following points explicit before going to the theorem.
1) This procedure does require knowledge of the minimal polynomial [5] and of the numerical value of the Eigen value for which the Eigen vector is to be obtained.
2) The only matrix operation involved in obtaining the Eigen vector is multiplication of a matrix and a vector.
3) One can obtain, with equal ease, the Eigen vectors for each known root, including the generalized vectors for multiple roots when they exist. In other words, the Eigen vectors can be obtained for each Eigen value by itself, without needing to determine either the other roots or their associated vectors.
4) When a multiple Eigen root has several independent Eigen vectors, one can get them all by starting with different initial vectors.
For convenience, we shall employ the following conventions and notations: a) A is a square matrix of order n with Eigen roots λ_1, λ_2, …, λ_n; unless stated otherwise, they are all assumed to be distinct. b) x_1 is the initial or starting vector, x_{i+1} = A x_i, and X_r = (x_1, x_2, …, x_r) is the n by r matrix with the x_i as its columns.

d) V(λ) is the Vandermonde matrix [6] of the Eigen values; M(λ) is the minimal polynomial of A, and a is the vector of the coefficients of M(λ) in reverse order. When the λ_i are all distinct, as is being assumed for the present, M(λ) is also the characteristic polynomial P(λ). As is well known, with the condition a_k = 1, the vector a is unique if X_n is of full rank. This will be the case when the characteristic and minimal polynomials are identical; otherwise, the class of polynomials defined by this vector will have as their H.C.F. a polynomial which has the minimal polynomial as its (possibly trivial) factor. e) Q(λ) = M(λ)/(λ − λ*) is the quotient polynomial associated with the Eigen root λ*, with q as the associated quotient (coefficient) vector.
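To make the notation concrete, the following sketch builds X_r by repeated matrix-vector multiplication and recovers the coefficient vector a of M(λ) from the null space of X_{k+1}, normalised so that a_k = 1. The 2 by 2 matrix and the starting vector are illustrative assumptions, not matrices from this paper.

```python
import numpy as np

def krylov_matrix(A, x1, r):
    """X_r = (x_1, x_2, ..., x_r) with x_{i+1} = A x_i."""
    cols = [np.asarray(x1, dtype=float)]
    for _ in range(r - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

def min_poly_coeffs(A, x1, k):
    """Solve X_{k+1} a = 0 and normalise so that a_k = 1.

    The entries of a are the coefficients of M(lambda) in ascending
    powers: a_0 + a_1*lambda + ... + a_k*lambda^k."""
    X = krylov_matrix(A, x1, k + 1)
    _, _, Vt = np.linalg.svd(X)
    a = Vt[-1]               # null vector of X_{k+1}
    return a / a[-1]         # enforce the condition a_k = 1

# Illustrative matrix with Eigen values 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x1 = np.array([0.0, 1.0])
a = min_poly_coeffs(A, x1, 2)   # coefficients of (lambda-2)(lambda-3)
```

Only matrix-vector products and one homogeneous linear solve are involved; no matrix inversion is needed, in line with the remarks above.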

Main Results
We can now state the new Eigen vector theorems and their obvious extensions, which are at the heart of the procedures presented in the sequel. THEOREM 1: x_1 = Σ_i c_i u_i, where the u_i are the Eigen vectors of A. Since Eigen vectors are in general unique up to scale, and Eigen vectors associated with different Eigen values are linearly independent, the choice of x_1 above is not restrictive; unless one is extremely unlucky (or the Eigen vectors are highly structured, as in a subsequent illustration), any vector, even an arbitrarily chosen one, will be a linear combination of all the u_i's.
THEOREM 2: The vector v = Q(A) x_1 is an Eigen vector of A associated with the Eigen value λ*. This theorem is illustrated by the following examples.
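Theorem 2 can be checked numerically: divide M(λ) by (λ − λ*) to get Q(λ), then apply Q(A) to x_1 using matrix-vector products only. The 2 by 2 matrix below is a hypothetical example chosen for this sketch, not a matrix from the paper.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def eigvec_from_quotient(A, x1, m_coeffs, lam):
    """Theorem 2: with Q(lambda) = M(lambda)/(lambda - lam),
    the vector v = Q(A) x1 is an Eigen vector of A for lam.

    m_coeffs holds the coefficients of M in ascending powers."""
    q, _ = P.polydiv(m_coeffs, [-lam, 1.0])   # Q = M / (lambda - lam)
    v = np.zeros(len(x1))
    xi = np.asarray(x1, dtype=float)
    for c in q:               # v = sum_i q_i * A^i x1; no inversion needed
        v = v + c * xi
        xi = A @ xi
    return v

# Hypothetical matrix with Eigen values 2 and 3; M(lambda) = 6 - 5*lambda + lambda^2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x1 = np.array([0.0, 1.0])
v3 = eigvec_from_quotient(A, x1, [6.0, -5.0, 1.0], 3.0)   # A v3 = 3 v3
v2 = eigvec_from_quotient(A, x1, [6.0, -5.0, 1.0], 2.0)   # A v2 = 2 v2
```

Each Eigen value is handled by itself, without touching the other roots or their vectors, as claimed in point 3) above.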

Illustrations
We shall now illustrate the application of the above theorems in the determination of M(λ) and the Eigen vectors. Different types of situations, including the case of Jordan reducible matrices and generalized Eigen vectors, will be illustrated by numerical examples. These examples will be interspersed with comments as required.

Illustration 2
X_4 is of rank 2; hence, though X_4 a = 0 has non-trivial solutions, there is no unique solution a even with the requirement a_3 = 1. Hence one cannot obtain P(λ) by this approach. However, it is possible to get M(λ) by using X_3. Thus, solving the equation X_3 y = 0 with y_3 = 1, we get the coefficient vector and hence M(λ). A has two distinct roots, λ = 2 and λ = 3; the root λ = 2 is a double root and has two independent Eigen vectors. Taking the quotient vector q(2) gives the first of them; a second Eigen vector is obtainable by taking a new starting vector x_0. As is to be expected, the polynomials obtained by using the two different X_3's are multiples of each other. A, however, has two independent Eigen vectors and one generalized Eigen vector.
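The effect of varying the starting vector at a double root can be sketched as follows. The diagonal matrix is a hypothetical stand-in, not the matrix of this illustration: λ = 2 is a double root with two independent Eigen vectors, λ = 3 is simple, and M(λ) = (λ − 2)(λ − 3), so the quotient for λ = 2 is Q(λ) = λ − 3.

```python
import numpy as np

# Hypothetical matrix: lambda = 2 is a double root with two independent
# Eigen vectors; lambda = 3 is simple.
A = np.diag([2.0, 2.0, 3.0])

def eigvec_for_two(x1):
    """Q(A) x1 with Q(lambda) = lambda - 3: an Eigen vector for 2 when nonzero."""
    x1 = np.asarray(x1, dtype=float)
    return A @ x1 - 3.0 * x1

v1 = eigvec_for_two([1.0, 0.0, 0.0])   # first starting vector
v2 = eigvec_for_two([0.0, 1.0, 0.0])   # a different starting vector
rank = np.linalg.matrix_rank(np.column_stack([v1, v2]))   # two independent vectors
```

Two different starting vectors yield two linearly independent Eigen vectors for the double root, as asserted in point 4) of the Basic Points.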

Illustration 3
One Eigen vector is got by taking λ = 4 with q(4) = (1, 0, 0)^T and forming the corresponding vector. To obtain a second Eigen vector for λ = 4, we start with a different x_1. Obviously, the two independent Eigen vectors, say V_1 and Z above, are not unique; they are a basis for the two-dimensional vector space of Eigen vectors of A for λ = 4, viz., the solution space of the equation (A − 4I)v = 0.

Illustration 4

Here

X_4 =
1  8  45  220
2  9  37  140
0  2  19  132

Since X_4 is of full rank, we get the solution vector a of X_4 a = 0. A, however, has only one Eigen vector. Defining Q(λ) as before, we obtain it; as is to be expected, it satisfies the Eigen equation. With X_5 of rank 4, we have the solution y of X_5 y = 0, and similarly a vector V_2 with A V_2 = −2 V_2 + V_1. Thus, we have obtained four vectors for A; one more Eigen vector is yet to be obtained, corresponding to the triple Eigen value λ = 3.

Case 1
With the starting vector x_1 = (1, 0, 0, 0, 0)^T we have X_5, which is obviously of rank 2 only. Solving the homogeneous equations X_3 y = 0, we get the polynomial λ^2 − 6λ + 9 = (λ − 3)^2 as a solution. Thus we know that A has λ = 3 as a multiple root, with a multiplicity of at least 2.
Using the corresponding quotient vector q with x_1 = (1, 0, 0, 0, 0)^T, we obtain the associated vector. To illustrate the complexity of the situation one has to be prepared to encounter, we shall in turn present the results with five different starting vectors.

Case 2
With the starting vector x_1 = (0, 1, 0, 0, 0)^T,

X_3 =
0  2  12
1  5  21
0  2  12
0  2  12
0  2  12

has rank 2. Hence, M(λ) = λ^2 − 6λ + 9.

Case 3

Taking a further starting vector, the computation proceeds in the same way.

It is also instructive to examine the result of multiplying an X matrix by the coefficient vector got by dividing the characteristic polynomial by any of its factors. With the A of Case 5, X is constructed using e_5 as the starting vector, and x_1 is expressed in terms of the u_i's; for this matrix we have the representation x_1 = u_5 − u_4, so by repeated multiplication by A from this initial vector we can only reach u_4 and u_5. If Q(λ) has both (λ − 3) and (λ + 2) as factors, the corresponding vector will be the zero vector. If it has (λ − 3) as a factor, then the resulting vector will be an Eigen vector for λ = −2, while if Q(λ) has no factor (λ − 3), the result is a vector associated with λ = 3. The corresponding X is of rank 2, giving, with λ = 3, the quadratic annihilating polynomial. These results, of course, are consequences of the second Eigen vector theorem presented above.

Summary
Some general observations regarding the problem of determining the Eigen values and the corresponding Eigen vectors of a matrix are now in order. Though theoretically obvious, the significance of the procedure presented above for such computations perhaps needs to be reiterated. However, in the present study we have not gone into the important questions regarding the approximations arising in practical computations and the effect of the consequent noise on the final results. 1) If one has the ability to solve a set of linear equations, one can obtain the characteristic polynomial of a matrix provided it is also the minimal polynomial. This fact is, of course, well known. However, the same is true regarding the determination of the minimal polynomial in general.
In the above notation, we compute sequentially the ranks of X_1, X_2, …, X_n. Let k be the smallest integer such that rank(X_{k+1}) = rank(X_k). Then the minimal polynomial is of degree k and its coefficients are proportional to the solution of the equation X_{k+1} y = 0. This is the case when x_1 is a nontrivial linear combination of all the generalized Eigen vectors and of one Eigen vector each connected with the Eigen roots of the matrix. Otherwise, the polynomial obtained will be a proper factor of the minimal polynomial.

If M(λ) is the minimal polynomial, say M(λ) = (λ − λ_1)^{α_1} (λ − λ_2)^{α_2} … (λ − λ_m)^{α_m}, then each λ_i is a root of the characteristic polynomial P(λ) with a minimum multiplicity of α_i. However, since the minimal polynomial will have every root of P(λ) as its root as well, it follows that P(λ) = (λ − λ_1)^{n_1} (λ − λ_2)^{n_2} … (λ − λ_m)^{n_m} with n_i ≥ α_i. 3) Since real symmetric matrices are fully diagonalizable by an orthogonal matrix, their minimal polynomial will have no repeated root. This fact is of great help, especially in situations where a dispersion matrix has signal Eigen values which are relatively large and possibly distinct, and a "noise Eigen value" which is hopefully small and will be of high multiplicity. A good estimate of the minimal polynomial will then be possible with relatively little computational effort by the present approach. Using the same computational product X_k, one can get the Eigen vectors for these signal Eigen values. 4) The present approach also enables one to tackle complex Eigen values and Eigen vectors, especially when A is real and hence the Eigen values and vectors occur in conjugate pairs.
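The rank computation in point 1) can be sketched numerically; the matrix and starting vectors below are illustrative assumptions. The sketch also exhibits the "otherwise" branch: a starting vector missing some Eigen components yields only a proper factor of the minimal polynomial.

```python
import numpy as np

def annihilating_degree(A, x1, nmax):
    """Smallest k with rank(X_{k+1}) = rank(X_k) = k; this is the degree
    of the monic polynomial of least degree annihilating x1."""
    cols = [np.asarray(x1, dtype=float)]
    for k in range(1, nmax + 1):
        cols.append(A @ cols[-1])
        if np.linalg.matrix_rank(np.column_stack(cols)) == k:
            return k
    return nmax

# Hypothetical matrix with roots 3, 3, -2; M(lambda) = (lambda-3)(lambda+2).
A = np.diag([3.0, 3.0, -2.0])
k_generic = annihilating_degree(A, [1.0, 1.0, 1.0], 3)   # full degree of M: 2
k_poor = annihilating_degree(A, [1.0, 0.0, 0.0], 3)      # proper factor: degree 1
```

The starting vector (1, 0, 0)^T has no component along the Eigen vector for −2, so the rank sequence stalls one step early and only the factor (λ − 3) of M(λ) is recovered.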
Thus, λ = 3 is a triple root with two Eigen vectors and one generalized Eigen vector, while λ = −2 is a double root with one Eigen vector and one generalized Eigen vector.
Thus A has λ = −2 also as a repeated root, of multiplicity at least 2. Combining the information from Case 3, we see that the matrix A has λ = 3 and λ = −2 as repeated roots, each of multiplicity at least 2. Since tr A = 5, it follows that λ = 3 is actually a triple root and λ = −2 a double root of the characteristic equation of A (3 × 3 + (−2) × 2 = 5). This behavior is decided by the composition of x_1: a Q(λ) containing the factor (λ − 3) gives a V which is an Eigen vector for λ = 3, while any Q(λ) which has no factor (λ − 3) at all gives a generalized vector of A for λ = 3.
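The dependence on the factors of Q(λ) described above can be checked on a small hypothetical Jordan block (an assumption for illustration, not one of the matrices of this paper):

```python
import numpy as np

# Hypothetical Jordan block for lambda = 3: M(lambda) = (lambda - 3)^2,
# and x1 = (0, 1)^T is a generalized Eigen vector.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
x1 = np.array([0.0, 1.0])

v = (A - 3.0 * np.eye(2)) @ x1   # Q(lambda) = lambda - 3: one factor of M removed
w = x1                           # Q(lambda) = 1: no factor (lambda - 3)

# v is a genuine Eigen vector: A v = 3 v.
# w is only a generalized vector: (A - 3 I) w = v, which is nonzero.
```

A quotient containing the factor (λ − 3) produces a true Eigen vector; a quotient with no such factor leaves a generalized vector, matching the behavior noted above.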

Also, by using information on the trace of A, one can, by solving appropriate linear Diophantine equations (for instance, Σ_i n_i = n and Σ_i n_i λ_i = tr A), determine the values of the n_i's. 2) Since the highest common factor of a polynomial and its derivative has each repeated root of the polynomial, with the multiplicity of each reduced by 1, it follows that the quotient of the polynomial by this H.C.F. has each distinct root occurring exactly once. Hence, given M(λ), one can determine all the roots of the polynomial; their multiplicities, as noted earlier, can be determined by solving appropriate linear Diophantine equations.
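Point 2) can be sketched as follows: dividing a polynomial by the H.C.F. of the polynomial and its derivative leaves each distinct root exactly once. The cubic below is a made-up example with a double root.

```python
import numpy as np

def poly_gcd(p, q, tol=1e-9):
    """Euclidean algorithm for polynomials, coefficients in descending
    powers (the convention of np.polydiv). Returns a monic H.C.F."""
    p = np.atleast_1d(np.trim_zeros(np.asarray(p, dtype=float), 'f'))
    q = np.atleast_1d(np.trim_zeros(np.asarray(q, dtype=float), 'f'))
    while q.size and np.max(np.abs(q)) > tol:
        _, r = np.polydiv(p, q)
        r = np.where(np.abs(r) > tol, r, 0.0)   # suppress round-off residue
        p, q = q, np.trim_zeros(r, 'f')
    return p / p[0]

# P(lambda) = (lambda - 3)^2 (lambda + 2) = lambda^3 - 4 lambda^2 - 3 lambda + 18
Pc = np.array([1.0, -4.0, -3.0, 18.0])
g = poly_gcd(Pc, np.polyder(Pc))        # H.C.F.(P, P') = (lambda - 3)
squarefree, _ = np.polydiv(Pc, g)       # (lambda - 3)(lambda + 2): each root once
```

The quotient has the two distinct roots 3 and −2, each with multiplicity one; the original multiplicities can then be pinned down from trace information as described above.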