A Note on the Proof of the Perron-Frobenius Theorem

This paper provides a simple proof of the Perron-Frobenius theorem for positive matrices using a homotopy technique. By analyzing the behaviour of the eigenvalues of a family of positive matrices, we observe that the conclusions of the Perron-Frobenius theorem hold provided they hold for the starting matrix of this family. Based on these observations, we develop a simple numerical technique for approximating the Perron eigenpair of a given positive matrix. We then apply the techniques introduced in the paper to approximate the Perron interval eigenvalue of a given positive interval matrix.


Introduction
A simple form of the Perron-Frobenius theorem states (see [1,2]): if A = (a_ij) is an n x n real matrix with strictly positive entries (a_ij > 0), then: 1) A has a positive eigenvalue r which is equal to the spectral radius of A; 2) r is simple; 3) r has a unique positive eigenvector v (up to scaling); 4) an estimate of r is given by the inequalities min_i Σ_j a_ij <= r <= max_i Σ_j a_ij. The general form of the Perron-Frobenius theorem involves non-negative irreducible matrices. For simplicity, we confine ourselves in this paper to the case of positive matrices. The proof for the more general form of the theorem can be obtained by modifying the proof for positive matrices given here.
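As a quick illustrative check of the four conclusions above, the following sketch verifies them numerically for a positive matrix; the matrix here is a random assumption for illustration, not one from the paper.

```python
# Numerical sanity check of the four conclusions of the Perron-Frobenius
# theorem for a randomly chosen positive matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))   # strictly positive entries

eigvals, eigvecs = np.linalg.eig(A)
r = max(eigvals.real)                    # candidate Perron root
idx = np.argmax(eigvals.real)

# 1) r equals the spectral radius of A
assert np.isclose(r, max(abs(eigvals)))
# 2) r is simple: no other eigenvalue is (numerically) equal to r
assert sum(np.isclose(eigvals, r)) == 1
# 3) the eigenvector for r can be scaled to be strictly positive
v = eigvecs[:, idx].real
v = v if v.sum() > 0 else -v
assert np.all(v > 0)
# 4) the row-sum estimate: min_i sum_j a_ij <= r <= max_i sum_j a_ij
row_sums = A.sum(axis=1)
assert row_sums.min() <= r <= row_sums.max()
print("all four conclusions verified, r =", r)
```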
The Perron-Frobenius theorem has many applications in numerous fields, including probability, economics, and demography. Its wide use stems from the fact that eigenvalue problems on these types of matrices frequently arise in many different fields of science and engineering [3]. Reference [3] discusses the applications of the theorem in diverse areas such as the steady-state behaviour of Markov chains, power control in wireless networks, commodity pricing models in economics, population growth models, and Web search engines.
We became interested in the theorem for its important role in interval matrices. The elements of an interval matrix are intervals of real numbers. In [4], the theorem is used to establish conditions for regularity of an interval matrix. (An interval matrix is regular if every point matrix in the interval matrix is invertible.) In Section 4 we develop a method for approximating the Perron interval eigenvalue of a given positive interval matrix. See [5] for a broad exposition of interval matrices.


Since the Perron-Frobenius theorem evolved from the work of Perron [1] and Frobenius [2], different proofs have been developed. A popular line starts with the Brouwer fixed point theorem, which is also how our proof begins. Another popular proof is that of Wielandt, who used the Collatz-Wielandt formula to extend and clarify Frobenius's work. See [6] for an interesting discussion of the different proofs of the theorem.
It is interesting how this theorem can be proved and applied with very different flavours. Most proofs are based on algebraic and analytic techniques; for example, [7] uses Markov chains and probability transition matrices. In addition, some interesting geometric proofs are given by several authors: see [8,9]. Some techniques and results, such as the Perron projection and bounds for the spectral radius, are developed within these proofs. A more detailed history of the geometry-based proofs of the theorem can be found in [8].
In our proof, a homotopy method is used to construct the eigenpairs of the positive matrix A. Starting with some matrix H_0 with known eigenpairs, we find the eigenpairs of the matrix H(t) = (1 - t)H_0 + tA for t starting at 0 and going to 1. If for each t all eigenvalues of H(t) are simple, then the eigencurves do not intersect as t varies from 0 to 1.
Our proof requires that the curve r(t) formed by the greatest eigenvalues and its reflection about the real axis (i.e., -r(t)) not intersect any other eigencurve. Together they form a "restricting area" for all other eigenvalue curves. As a result, the absolute value of any other eigenvalue will be strictly less than r(t) for 0 < t < 1. By choosing an initial matrix H_0 that has the desired properties stated in the Perron-Frobenius theorem, we will show that the "restricting area" preserves these properties along the eigencurves for all H(t), and for H(1) = A in particular.
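The "restricting area" can be observed numerically. The sketch below assumes H(t) = (1 - t)D + tA with D the all-ones matrix (the family used later in the paper) and an arbitrary positive A of our own choosing, and checks that the maximal eigenvalue r(t) stays real and strictly dominates every other eigenvalue in modulus.

```python
# For H(t) = (1-t)*D + t*A with D the all-ones matrix and A positive,
# the largest eigenvalue r(t) should stay real and strictly dominate the
# modulus of every other eigenvalue for 0 < t <= 1.
import numpy as np

n = 4
rng = np.random.default_rng(1)
A = rng.uniform(0.5, 2.0, size=(n, n))   # illustrative positive matrix
D = np.ones((n, n))

for t in np.linspace(0.0, 1.0, 11):
    H = (1 - t) * D + t * A
    eigvals = np.linalg.eigvals(H)
    order = np.argsort(-abs(eigvals))    # sort by modulus, descending
    r_t = eigvals[order[0]]
    second = abs(eigvals[order[1]])
    assert abs(r_t.imag) < 1e-10         # r(t) stays real
    if t > 0:
        assert abs(r_t) > second + 1e-12 # strict domination
```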

 
Our proof is elementary, and therefore easier to understand than other proofs. While most of the other proofs focus on the matrix A itself, we approach the problem by analysing a family of matrices. In our proof we study some intuitive structures of the eigenvalues of positive matrices and show how those structures are preserved for the matrices in a homotopy. Thus, our proof provides an alternative perspective on studying the behaviour of eigenvalues in a homotopy.
Furthermore, our proof is constructive. The idea is to start with the known eigenpair corresponding to the maximal eigenvalue of H_0, then use the homotopy method to follow the eigencurve corresponding to the maximal eigenvalues of the positive matrices H(t), applying techniques such as Newton's method. Recently, many articles have been devoted to using homotopy methods to find eigenvalues; for example, see [10-12] and the references therein. In most cases, the diagonal of A is used as the starting matrix H_0. Still, people are interested in finding a more efficient H_0, one with a smaller difference from A. The H_0 constructed in our proof provides such an alternative. It is promising because, by proper scaling, it can behave as some "average" matrix.

The Proof
In the following sections, A = (a_ij) will denote a real matrix with strictly positive entries, i.e. a_ij > 0 for all i, j. If r is an eigenvalue of A and v is a corresponding eigenvector, then (r, v) forms an eigenpair for A. A vector is positive if all of its components are positive; an eigenpair is positive if both its eigenvalue and its eigenvector are positive.

Lemma 2.1. A has a positive eigenpair (r, v).

Proof. Define V = {v in R^n : ||v|| = 1 and v_i >= 0 for 1 <= i <= n}, where ||v|| denotes the maximum norm of v in R^n, and define f : V -> V by f(v) = Av/||Av||. Then f is continuous (since V does not contain the zero vector and Av is positive for any v in V), and V is convex and compact (since V is closed and bounded, it is compact, while convexity follows trivially). According to the Brouwer fixed point theorem, a continuous function f which maps a convex compact subset K of a Euclidean space into itself must have a fixed point in K. Thus, there exists v in V such that f(v) = v, i.e. Av = ||Av|| v; set r = ||Av||. No component of v can be 0, since a positive matrix applied to a non-negative vector with at least one positive component yields a strictly positive vector. So v is a positive eigenvector of A, and the associated eigenvalue r is also positive.
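The map f(v) = Av/||Av|| from the proof can also be iterated directly; this is just the classical power iteration, and for a positive matrix it converges to the positive eigenpair. A minimal sketch, with an illustrative 2 x 2 matrix of our own choosing:

```python
# Iterating the map f(v) = Av / ||Av||_inf (power iteration) approximates
# the fixed point whose existence the lemma establishes.
import numpy as np

def perron_by_iteration(A, tol=1e-12, max_iter=10_000):
    """Iterate v <- Av/||Av||_inf from a positive starting vector."""
    v = np.ones(A.shape[0])
    for _ in range(max_iter):
        w = A @ v
        r = np.max(np.abs(w))       # ||Av||_inf; positive since A, v > 0
        v_new = w / r
        if np.max(np.abs(v_new - v)) < tol:
            return r, v_new
        v = v_new
    return r, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # illustrative positive matrix
r, v = perron_by_iteration(A)
assert np.allclose(A @ v, r * v)        # (r, v) is an eigenpair
assert np.all(v > 0)                    # and it is positive
```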
Lemma 2.2. If r is the positive eigenvalue associated with the eigenvector v in the previous lemma, then r has no other (independent) eigenvector.
Proof. Suppose, on the contrary, that there is another positive eigenvector x for r, and assume that x and v are independent. Let t = min_i (v_i / x_i) > 0, and let m be an index such that v_m = t x_m. Let y = v - t x; then y is an eigenvector of A associated with the eigenvalue r. It is clear that y_m = 0 and y_i >= 0 for all i. Since x and v are linearly independent, y is not the zero vector. Therefore Ay is strictly positive, while (Ay)_m = r y_m = 0, a contradiction.

Proof (of Lemma 2.5). Inequalities (1) and (2) are equivalent, and the claim follows from Dirichlet's approximation theorem.

Lemma 2.6. There does not exist a complex eigenvalue z of A such that |z| >= r.

Proof. Suppose, on the contrary, that there exists an eigenpair (z, x) with |z| >= r. Taking absolute values in z x = Ax gives |z| |x_i| <= Σ_j a_ij |x_j| for all i; equality for all i would force the components of x to share a common phase, making z real.
Therefore there exists some index m such that x_m has a non-zero imaginary part. According to Lemma 2.5, a suitable approximation then leads to a contradiction, so no such z exists.

Lemma 2.7. Let D be the n x n matrix all of whose entries equal 1. Then D has a simple eigenvalue n and the eigenvalue 0 with algebraic multiplicity n - 1. In addition, the eigenvector associated with n is positive.
Proof. Since D(1, 1, ..., 1)^T = n(1, 1, ..., 1)^T, n is an eigenvalue of D with the positive eigenvector (1, 1, ..., 1)^T. Likewise, the vectors e_1 - e_j for j = 2, ..., n are n - 1 independent eigenvectors of D associated with the eigenvalue 0, so 0 is an eigenvalue of D with multiplicity n - 1. Since an n x n matrix has only n eigenvalues, these are all the eigenvalues of D. Therefore the eigenvalue of greatest absolute value of D is positive and simple, and its corresponding eigenvector has positive entries.
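Lemma 2.7 is easy to confirm numerically; the sketch below checks the spectrum of the 5 x 5 all-ones matrix.

```python
# Lemma 2.7 checked numerically: the n x n all-ones matrix D has the simple
# eigenvalue n (with positive eigenvector (1,...,1)^T) and the eigenvalue 0
# with multiplicity n - 1.
import numpy as np

n = 5
D = np.ones((n, n))
eigvals = np.sort(np.linalg.eigvals(D).real)

assert np.isclose(eigvals[-1], n)                   # largest eigenvalue is n
assert np.allclose(eigvals[:-1], 0.0)               # the other n-1 are 0
assert np.allclose(D @ np.ones(n), n * np.ones(n))  # (1,...,1)^T is its eigenvector
# e_1 - e_j (j >= 2) span the 0-eigenspace:
for j in range(1, n):
    y = np.zeros(n)
    y[0], y[j] = 1.0, -1.0
    assert np.allclose(D @ y, 0.0)
```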
Theorem 2.1. Let A be any positive matrix. Then A has a positive simple maximal eigenvalue r such that any other eigenvalue λ satisfies |λ| < r, and a unique positive eigenvector v corresponding to r. In addition, this unique positive eigenpair (r, v) can be found by following the maximal eigenpair curve of the family of matrices H(t) = (1 - t)D + tA, where D is the n x n matrix with all entries equal to 1 defined in Lemma 2.7.
Proof. The first part of the statement of the theorem follows from the previous lemmas. We will denote the maximal eigenpair of the matrix D = H(0) by r(0) = n and v(0) = (1, 1, ..., 1)^T.

   
For 0 < t <= 1, the matrices H(t) = (1 - t)D + tA are all positive. We now examine the eigencurves; by the previous lemmas, the maximal eigencurve r(t) does not intersect any other eigencurve at any time and remains the largest eigenvalue. Therefore, the unique positive eigenpair (r, v) of the matrix A can be found by following the maximal eigenpair curve (r(t), v(t)) from t = 0 to t = 1.
2. An estimate of r is given by: min_i Σ_j a_ij <= r <= max_i Σ_j a_ij.

 
Remark. This completes the proof of the Perron-Frobenius theorem for positive matrices. The proof can be modified to prove the more general case of irreducible non-negative matrices; for example, this can be done by considering A + εD, where D is the matrix defined in Lemma 2.7, and letting ε tend to 0 from above. As we noted in the introduction, we will next demonstrate how to use the homotopy method to find the largest eigenvalue of a positive matrix A numerically.
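The remark's limit argument can be probed numerically. Under the assumption that the perturbation takes the form A + εD with ε tending to 0 (our reading of the remark), the Perron root of the perturbed positive matrix should approach the spectral radius of the irreducible non-negative matrix A; the cyclic 3 x 3 example below is an illustrative assumption, not from the paper.

```python
# Probe of the remark: for an irreducible non-negative matrix A, the matrix
# A + eps*D (D all ones) is strictly positive, and its Perron root should
# approach the spectral radius of A as eps -> 0.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])   # irreducible, non-negative, rho(A) = 1
D = np.ones((3, 3))
rho_A = max(abs(np.linalg.eigvals(A)))

prev_gap = np.inf
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    r_eps = max(np.linalg.eigvals(A + eps * D).real)  # Perron root
    gap = abs(r_eps - rho_A)
    assert gap < prev_gap          # the Perron roots approach rho(A)
    prev_gap = gap
assert gap < 1e-3
```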

Numerical Example
In this section we use the homotopy method to approximate the positive eigenpair of a given 5 × 5 positive matrix A, starting with the 5 × 5 matrix D with all entries equal to one. In [12] it is shown that the homotopy curves that connect the eigenpairs of the starting matrix D with those of A can be followed using Newton's method. We use these techniques to follow the eigencurve associated with the largest eigenvalue of D. While [12] finds all the eigenvalues of tridiagonal symmetric matrices, the method works well in approximating the largest eigenvalue when applied to any positive matrix, due to the separation of its eigencurves (see [12] for details).
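A sketch of this procedure follows. Since the paper's concrete 5 x 5 matrix did not survive into this text, an arbitrary positive 5 x 5 matrix stands in for it; the corrector step is Newton's method applied to F(v, r) = (H(t)v - rv, v_1 - 1), where the normalization v_1 = 1 is our choice.

```python
# Homotopy continuation sketch: follow the maximal eigenpair of
# H(t) = (1-t)*D + t*A from t = 0 to t = 1, correcting at each step with
# Newton's method on F(v, r) = (H(t)v - r v, v_1 - 1).
import numpy as np

def newton_eigenpair(H, v, r, iters=20):
    """Refine an eigenpair of H, normalizing the first component to 1."""
    n = H.shape[0]
    for _ in range(iters):
        F = np.concatenate([H @ v - r * v, [v[0] - 1.0]])
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = H - r * np.eye(n)   # d(Hv - rv)/dv
        J[:n, n] = -v                   # d(Hv - rv)/dr
        J[n, 0] = 1.0                   # d(v_1 - 1)/dv
        step = np.linalg.solve(J, -F)
        v, r = v + step[:n], r + step[n]
    return v, r

rng = np.random.default_rng(2)
A = rng.uniform(0.5, 1.5, size=(5, 5))  # stand-in positive matrix
D = np.ones((5, 5))

v, r = np.ones(5), 5.0                  # exact maximal eigenpair of H(0) = D
for t in np.linspace(0.0, 1.0, 21)[1:]:
    H = (1 - t) * D + t * A
    v, r = newton_eigenpair(H, v, r)    # predictor: previous eigenpair

assert np.allclose(A @ v, r * v)        # eigenpair of A = H(1)
assert np.isclose(r, max(np.linalg.eigvals(A).real))
```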

An Application to Positive Interval Matrices
To differentiate the ordinary matrices of the previous sections from interval matrices, we will call them point matrices in this section. As stated in Section 1, the elements of an interval matrix are intervals of real numbers.

Remark. Theorem 4.1 shows that in order to find the Perron interval eigenvalue E of an interval matrix A, we only need to find the Perron eigenvalues of the two endpoint matrices of A, which can be approximated using the technique introduced in the previous section.
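This remark translates directly into code. A minimal sketch, with a 2 x 2 interval matrix chosen for illustration (not from the paper):

```python
# The Perron interval eigenvalue of a positive interval matrix
# [A_low, A_high] is [s, t], where s and t are the Perron eigenvalues of
# the two endpoint matrices.
import numpy as np

def perron_root(A):
    """Largest (real) eigenvalue of a positive matrix."""
    return max(np.linalg.eigvals(A).real)

def perron_interval(A_low, A_high):
    """Perron interval eigenvalue E = [s, t] of the interval matrix."""
    return perron_root(A_low), perron_root(A_high)

A_low = np.array([[1.0, 0.5], [0.5, 1.0]])   # illustrative lower endpoint
A_high = np.array([[2.0, 1.0], [1.0, 2.0]])  # illustrative upper endpoint
s, t = perron_interval(A_low, A_high)

# Any point matrix B inside the interval has its Perron root in [s, t]:
rng = np.random.default_rng(3)
for _ in range(100):
    B = A_low + rng.uniform(0.0, 1.0, size=(2, 2)) * (A_high - A_low)
    assert s - 1e-12 <= perron_root(B) <= t + 1e-12
```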

r v Lemma 2 . 1 .
A has a positive eigenpair .

Figure 1. The maximal eigenvalue path for A.
Suppose the statement of the lemma is false. It follows that there exists an eigenpair contradicting the claim; on the other hand, comparing the m-th components yields a contradiction. Therefore v is the only eigenvector for r.

Lemma 2.4. There is no negative eigenvalue λ of A such that |λ| >= r.

Proof.
We will show that if s is the Perron eigenvalue of the lower endpoint matrix of A and t is the Perron eigenvalue of the upper endpoint matrix of A, then E = [s, t]. Indeed, let A be a positive interval matrix with Perron interval eigenvalue E, and let B be any point matrix in A. If λ is the Perron eigenvalue of B, then s <= λ <= t by the previous lemma. Therefore E = [s, t].