Bounds for Polynomial’s Roots from Hessenberg Matrices and Gershgorin’s Disks

The goal of this study is to propose a method for estimating bounds for the roots of polynomials with complex coefficients. A well-known and easy tool for obtaining such information is the standard Gershgorin theorem; however, it does not take the structure of the matrix into account. The modified Gershgorin disks make it possible, through geometrical figures called ovals of Cassini, to exploit the form of the matrix in order to determine appropriate bounds for the roots. Furthermore, we have seen that Hessenberg matrices are well suited to estimating good bounds for the roots of polynomials, insofar as we obtain improved bounds for large values of the polynomial's coefficients; the bounds are even better for small values. The aim of this work was to take advantage of this, after introducing Dehmer's bound, to find an appropriate property of the Hessenberg form. To illustrate our results, examples are given comparing the obtained bounds with those obtained through classical methods such as Cauchy's, Montel's and Carmichael-Mason's bounds.


Introduction
The estimation of the complex roots of a polynomial is a long-standing classical problem. A convenient way to obtain information on the location of the zeros of a polynomial is to locate the eigenvalues of its companion matrix. A well-known and easy tool for obtaining such information is that of the Gershgorin disks, centered at the diagonal elements of the matrix; however, the classical form of Gershgorin's theorem does not take the structure of the matrix into account [1]. This is why some methods, such as the modified Gershgorin disks, estimate bounds through geometrical figures called ovals of Cassini, by taking a closer look at the individual elements of the matrix.
We first present Frobenius companion matrices and some of their important properties, such as the Frobenius decomposition theorem. Secondly, we introduce Fiedler matrices and Gershgorin's theorem in its general form. We recall classical bounds for roots, such as Cauchy's, Montel's, Dehmer's and Carmichael-Mason's bounds, which will be used for comparisons. Some applications to classical companion matrices are presented to estimate bounds for the roots of polynomials. We then introduce bounds for roots obtained from Hessenberg matrices. We have seen that the Hessenberg matrix form is appropriate for estimating good bounds for the roots of polynomials, insofar as it gives improved bounds for large values of the polynomial's coefficients; but it is even better suited to estimating bounds for small values of the coefficients of a given polynomial [2].
Indeed, as the principal contribution of this work, we present a study of polynomials in the case of very small values of the polynomial's coefficients. In the third part, through illustrative examples, we compare the classical bounds with those obtained from the special Hessenberg matrix form, in order to assess the pertinence of the studied property. We can see that the classical methods used always provide bounds greater than 1.

Forms of the Frobenius Companion Matrices
Let $P(X)$ be a polynomial
$$P(X) = X^n + a_{n-1}X^{n-1} + \cdots + a_1 X + a_0 \in \mathbb{C}[X],$$
with $a_0 \neq 0$ (when $a_0 = 0$, the polynomial has the evident root $0$; we exclude this case in order to study the non-obvious cases).
Let $C(P)$ be the $n \times n$ matrix
$$C(P) = \begin{pmatrix}
0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots &  & \ddots &  & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{pmatrix}.$$
The matrix $C(P)$ is called the "Frobenius matrix" or the "standard Frobenius companion matrix" of the polynomial $P(X)$.
Definition 1. [3] We define a "companion matrix" of $P(X)$ to be an $n \times n$ matrix with $n^2 - n$ entries constant in $\mathbb{C}$ and the remaining entries depending on the coefficients $a_0, \ldots, a_{n-1}$, whose characteristic polynomial is $P(X)$. We say that two companion matrices $A$ and $B$ are equivalent if either $A$ or $A^{T}$ can be obtained from $B$ via a permutation similarity.
There are many other Frobenius companion matrices. In this paper we use the following notation: $C(P)$ is the classical Frobenius companion matrix associated with the polynomial $P(X)$; $C_1(P)$ and $C_2(P)$ are the first and second Frobenius companion matrices associated with $P(X)$, obtained from $C(P)$ by transposition and permutation similarity, so that the coefficients appear in a row instead of a column. Companion matrices are of great interest in that they make elegant proofs possible.
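As a numerical illustration (our own sketch, not part of the paper's derivation; the helper name `companion` and the example polynomial are our choices), the standard Frobenius companion matrix can be built and its eigenvalues checked against the known roots:

```python
import numpy as np

def companion(coeffs):
    """Standard Frobenius companion matrix of the monic polynomial
    X^n + a_{n-1} X^{n-1} + ... + a_1 X + a_0, with coeffs = [a_0, ..., a_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)    # last column holds -a_0, ..., -a_{n-1}
    return C

# P(X) = X^3 - 6X^2 + 11X - 6 = (X-1)(X-2)(X-3), so a_0 = -6, a_1 = 11, a_2 = -6
C = companion([-6, 11, -6])
print(sorted(np.linalg.eigvals(C).real.round(6)))  # eigenvalues = roots 1, 2, 3
```

The eigenvalues of $C(P)$ coincide with the zeros of $P(X)$, which is the property all the bounds in this paper rely on.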

Statement of the Gershgorin's Theorem
A common way to obtain inclusion areas for the zeros of a polynomial is to find eigenvalue inclusion areas for a companion matrix of the polynomial, whose eigenvalues are precisely the zeros of the polynomial.
In this section we locate the zeros of polynomials using the Gershgorin disks. These disks are centered at the diagonal elements of the matrix; the radii are computed from the off-diagonal elements, hence, for a companion matrix, from the polynomial's coefficients. Gershgorin's theorem states that the union of these disks contains all the eigenvalues. We formally state Gershgorin's theorem here as a lemma, together with its application to companion matrices.
Lemma 1. [1] For a given $n \times n$ matrix $A$ with complex entries $a_{ij}$, all the eigenvalues are located in the union of the following $n$ disks:
$$\Gamma_i = \Big\{ z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j \ne i} |a_{ij}| \Big\}, \quad i = 1, \ldots, n.$$
Considering the rows, the radii are $R_i = \sum_{j \ne i} |a_{ij}|$; considering the columns (that is, applying the lemma to $A^{T}$), they are $R'_j = \sum_{i \ne j} |a_{ij}|$.
Proof. Let $\lambda$ be an eigenvalue of the complex $n \times n$ matrix $A$, with corresponding eigenvector $x$.
So we have $Ax = \lambda x$.
If $x$ is an eigenvector, at least one of its components is non-zero. Let $k$ be such that $|x_k| = \max_j |x_j| > 0$. The $k$-th row of $Ax = \lambda x$ gives
$$(\lambda - a_{kk})\, x_k = \sum_{j \ne k} a_{kj} x_j.$$
We take the absolute value on either side of this equality and use the triangle inequality while dividing by $|x_k|$. Since $|x_j|/|x_k| \le 1$ for every $j \ne k$, we obtain
$$|\lambda - a_{kk}| \le \sum_{j \ne k} |a_{kj}|.$$
Without knowing the eigenvectors, we do not know to which $k$ each eigenvalue corresponds. We must therefore take the union of all these disks to obtain a region that is guaranteed to contain all the eigenvalues.
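The lemma is easy to verify numerically. The following sketch (our own helper name `gershgorin_disks`; the test matrix is an arbitrary example) computes the row disks and checks that every eigenvalue falls in their union:

```python
import numpy as np

def gershgorin_disks(A):
    """Return the (center, radius) pair of each Gershgorin row disk of A."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)  # off-diagonal row sums
    return list(zip(centers, radii))

A = np.array([[4.0, 1.0, 0.0],
              [0.5, 3.0, 0.5],
              [0.0, 1.0, -2.0]])
disks = gershgorin_disks(A)
# Lemma 1: every eigenvalue lies in the union of the disks
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r + 1e-9 for c, r in disks)
```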

Application to Classical Companion Matrices
The proof of the theorem does not take into account any particular property of the matrix $A$. It can then be interesting to consider specific properties of matrices and to study the advantages related to these particularities.
Since our study concerns Fiedler companion matrices, we will first consider the particular properties of companion matrices by using explicit forms.
We consider the classical companion matrix $C(P)$. It is already known that the eigenvalues of the matrix $C(P)$ are the roots of its characteristic polynomial $P(X)$. Gershgorin's theorem applied to the matrix $C(P)$ guarantees that all the zeros of the polynomial $P(X)$ lie in the union of $n$ disks:
$$\Gamma_1 = \{z : |z| \le |a_0|\}, \quad \Gamma_i = \{z : |z| \le 1 + |a_{i-1}|\}, \ i = 2, \ldots, n-1, \quad \Gamma_n = \{z : |z + a_{n-1}| \le 1\}.$$
These inequalities allow us to say that for any zero $\lambda$ of $P(X)$ we have
$$|\lambda| \le \max\{\,|a_0|,\ 1 + |a_1|,\ \ldots,\ 1 + |a_{n-1}|\,\}.$$
The obtained bound is better than the bound of Cauchy, namely
$$|\lambda| \le 1 + \max_{0 \le i \le n-1} |a_i|.$$
By applying Gershgorin's theorem, all the zeros of the polynomial $P(X)$ are therefore included in the disk of this radius centered at the origin.
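The comparison above can be checked numerically. The sketch below (our own helper names; coefficients are listed as $a_0, \ldots, a_{n-1}$ of a monic polynomial) computes both bounds and shows the companion-matrix bound winning when $|a_0|$ is the largest coefficient:

```python
import numpy as np

def gershgorin_companion_bound(coeffs):
    """Bound max(|a_0|, 1 + |a_1|, ..., 1 + |a_{n-1}|) from the row disks of C(P)."""
    a = np.abs(np.asarray(coeffs, dtype=float))
    return max(a[0], 1 + a[1:].max())

def cauchy_bound(coeffs):
    """Cauchy's bound 1 + max_i |a_i|."""
    return 1 + np.abs(np.asarray(coeffs, dtype=float)).max()

# P(X) = X^3 + 0.5 X^2 + 0.5 X + 10: here |a_0| dominates the other coefficients
coeffs = [10.0, 0.5, 0.5]
print(gershgorin_companion_bound(coeffs))  # 10.0
print(cauchy_bound(coeffs))                # 11.0
```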

Improved Bounds for Roots of Polynomials by Modifying the Disks of Gershgorin for Companion Matrices
The general form of Gershgorin's theorem does not exploit the specific structure of the matrix. By rewriting the equations from which the Gershgorin disks are obtained, and by exploiting them, we can obtain improved forms of those disks. More generally, let $\lambda$ and $x$ be an eigenvalue and a corresponding eigenvector of the companion matrix $C(P)$, written here with the coefficients $-a_{n-1}, \ldots, -a_0$ in its first row. The relation $C(P)x = \lambda x$ yields the equations
$$(\lambda + a_{n-1})\,x_1 = -(a_{n-2}x_2 + a_{n-3}x_3 + \cdots + a_0 x_n), \qquad x_{i-1} = \lambda x_i, \ i = 2, \ldots, n.$$
This set of equations is used to modify the original Gershgorin disks, here
$$\Gamma_1 = \Big\{z : |z + a_{n-1}| \le \sum_{j=0}^{n-2} |a_j|\Big\}, \qquad \Gamma_j = \{z : |z| \le 1\}, \ j = 2, \ldots, n,$$
in order to obtain a modified family which gives an improvement of the bounds on the roots of polynomials. For this, we first change $\Gamma_1$ into $\Omega_1$ and then $\Gamma_2$ into $\Omega_2$. Each step produces a new set of zero inclusions; they are summarized in the following theorem [1].
Theorem 2. Let $P(X) = X^n + a_{n-1}X^{n-1} + \cdots + a_1 X + a_0$. All the zeros $\lambda$ of $P(X)$ can be found in the set $\Omega_1 \cup \Gamma_2 \cup \cdots \cup \Gamma_n$. Likewise, all the zeros $\lambda$ of $P(X)$ can be found in the set $\Gamma_1 \cup \Omega_2 \cup \Gamma_3 \cup \cdots \cup \Gamma_n$, where $\Omega_1 = \Gamma_1 \cap K_1$ and $\Omega_2 = \Gamma_2 \cap K_2$ are intersections of the Gershgorin disks with Cassini-type sets $K_1$ and $K_2$ defined below. We then also have $\Omega_1 \subseteq \Gamma_1$ and $\Omega_2 \subseteq \Gamma_2$.
Proof. In order to change $\Gamma_1$ into $\Omega_1$ and then $\Gamma_2$ into $\Omega_2$, some steps are required. Each step allows us to calculate a bound on the moduli of the zeros for the obtained modified set. First modified disk $\Omega_1$: we consider the eigenvector $x$ normalized so that its largest component in absolute value equals 1.
For this modification, we start from the equation of the first row combined with $x_1 = \lambda x_2$, namely
$$\lambda(\lambda + a_{n-1})\, x_2 = -(a_{n-2}x_2 + a_{n-3}x_3 + \cdots + a_0 x_n).$$
Taking absolute values and dividing by the largest component of $x$, we obtain another inequality that must be satisfied by $\lambda$: precisely, $\lambda \in \Omega_1 = \Gamma_1 \cap K_1$, where
$$K_1 = \Big\{ z \in \mathbb{C} : |z|\,|z + a_{n-1}| \le \sum_{j=0}^{n-2} |a_j| \Big\}.$$
The boundary of $K_1$ is a quartic curve known as an oval of Cassini; it consists of either one or two loops. Ovals of Cassini also appear in a different and slightly more complicated eigenvalue inclusion set. We can therefore choose to replace $\Gamma_1$ by $\Omega_1$ in Gershgorin's theorem.
Second modified disk $\Omega_2$: we proceed in the same way with the second row, using $x_2 = \lambda x_3$ in addition to $x_1 = \lambda x_2$. Since $|x_j| \le |x_k|$ for every $j$, where $x_k$ is the largest component, the remaining terms contribute a disk centered at the origin, and we conclude that in this case $\lambda \in \Gamma_2 \cap K_2$ for a second Cassini-type set $K_2$. We define $\Omega_2 = \Gamma_2 \cap K_2$. The theorem is proved. From this theorem, we use values estimated in the literature for our illustrative examples.
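A concrete radius bound can be extracted from the Cassini oval $|z|\,|z + a_{n-1}| \le |a_0| + \cdots + |a_{n-2}|$ together with the unit Gershgorin disks, as in [1]: using $|z + a_{n-1}| \ge |z| - |a_{n-1}|$, the oval condition implies a quadratic inequality in $|z|$. The sketch below is our own consequence of that inclusion set (function name and example are ours), not the theorem itself:

```python
import numpy as np

def cassini_bound(coeffs):
    """Radius bound from the Cassini oval |z||z + a_{n-1}| <= S,
    S = |a_0| + ... + |a_{n-2}|, combined with the unit disks Gamma_2, ..., Gamma_n.
    The inequality |z + a_{n-1}| >= |z| - |a_{n-1}| turns the oval condition
    into r^2 - |a_{n-1}| r - S <= 0 for r = |z|."""
    a = np.abs(np.asarray(coeffs, dtype=complex))
    S = a[:-1].sum()
    an1 = a[-1]
    r = (an1 + np.sqrt(an1**2 + 4 * S)) / 2   # positive root of r^2 - an1*r - S = 0
    return max(1.0, r)

# P(X) = X^3 - 6X^2 + 11X - 6, roots 1, 2, 3
b = cassini_bound([-6, 11, -6])
print(b)   # about 8.10, sharper than Cauchy's bound 12
assert all(abs(z) <= b + 1e-9 for z in np.roots([1, -6, 11, -6]))
```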

Improved Bound from the Hessenberg Matrices and Classical Usual Bounds for Roots of Polynomials
We announce the following theorem as the principal result of this work. The first part of the theorem was already part of a formerly issued paper [2]. The originality of the theorem comes from the fact that it uses the Hessenberg form in the special case of small coefficients. Proof. The estimation of the general bound for $\lambda$ can be found in [2].
What remains to be proved is this special case. We now introduce the following theorem as a specific case of the modified Gershgorin disks, in which $\gamma$ and $\delta$ are computed. It will be followed by other theorems from the literature that will be used for comparison when estimating bounds for the roots of some given polynomials. Any zero $\lambda$ of $P(X)$ satisfies the corresponding relation. Proof. This bound is the usual bound found in the literature as Dehmer's bound; it can be found in [5]. The remaining bounds are classical bounds that are usually found in the literature; we do not prove them in this paper [6].
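For reference, the classical bounds used below for comparison can be sketched as follows for a monic polynomial with coefficients $a_0, \ldots, a_{n-1}$ (helper names are ours): Cauchy's bound $1 + \max_i |a_i|$, Montel's bound $\max(1, \sum_i |a_i|)$ and the Carmichael-Mason bound $\sqrt{1 + \sum_i |a_i|^2}$:

```python
import numpy as np

def cauchy_bound(a):             # 1 + max_i |a_i|
    return 1 + np.abs(a).max()

def montel_bound(a):             # max(1, sum_i |a_i|)
    return max(1.0, np.abs(a).sum())

def carmichael_mason_bound(a):   # sqrt(1 + sum_i |a_i|^2)
    return np.sqrt(1 + (np.abs(a) ** 2).sum())

a = np.array([-6, 11, -6], dtype=float)   # X^3 - 6X^2 + 11X - 6, roots 1, 2, 3
for name, b in [("Cauchy", cauchy_bound(a)),
                ("Montel", montel_bound(a)),
                ("Carmichael-Mason", carmichael_mason_bound(a))]:
    print(f"{name}: {b:.4f}")
```

On this example the three bounds are 12, 23 and about 13.93, all valid but far from the largest root 3, which motivates the sharper disks studied above.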

Illustrative Examples for Polynomials of Degree n = 5
In this section we take some examples to illustrate the sharpness of the theorems on modified Gershgorin disks for companion matrices. We make a particular study of polynomials of degree $n = 5$ by using the following bounds: ovals of Cassini, Hessenberg, Dehmer, Cauchy, Montel and Carmichael-Mason. All the bounds that are used are listed in Figure 1, Table 1 and Table 2. The roots of $P_2$ and their bounds are illustrated in Table 3 and Table 4 (Table 4 gives the estimation of the bounds of the zeros of $P_2$ via the trace matrices of Graeffe [7]). The roots of $P_3$ and their bounds are illustrated in Table 5 and Table 6. We can see that the sparse companion matrices become interesting to use in the case of small values of the coefficients of the polynomials. This is why we take a closer look at these cases; the corresponding results are given in Table 9 and Table 10.
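To illustrate the phenomenon discussed above, namely that the classical bounds never drop below 1 even when all the roots are small, consider a degree-5 polynomial with small coefficients (a toy example of our own, not one of the paper's $P_i$):

```python
import numpy as np

# P(X) = X^5 + 0.1X^4 + 0.1X^3 + 0.1X^2 + 0.1X + 0.1 (small coefficients)
a = np.array([0.1] * 5)                     # a_0, ..., a_4
roots = np.roots(np.r_[1.0, a[::-1]])       # numpy wants highest degree first
print("max |root| =", np.abs(roots).max())  # strictly less than 1 (Rouche)
print("Cauchy     =", 1 + np.abs(a).max())                  # 1.1
print("Montel     =", max(1.0, np.abs(a).sum()))            # 1.0
print("Carm-Mason =", np.sqrt(1 + (np.abs(a) ** 2).sum()))  # ~1.0247
```

All the roots lie strictly inside the unit circle (on $|z| = 1$ we have $|z^5| = 1 > 0.5 \ge |0.1(z^4 + \cdots + 1)|$), yet each classical bound is at least 1, which is exactly the gap the Hessenberg approach targets.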

Conclusions
We have studied several methods of estimating bounds for zeros of a polynomial.
We first present the different matrices that are used in this paper. We begin by presenting companion matrices and cyclic matrices in order to introduce Frobenius companion matrices and the Frobenius decomposition theorem, which decomposes a matrix into diagonal blocks of companion matrices. We also present Fiedler matrices and then introduce Gershgorin's theorem. As the standard Gershgorin theorem does not exploit the specific structure of the matrix, we apply it to a Frobenius companion matrix, which is also a Fiedler matrix, after modifying the first and the second Gershgorin disks. The new union of disks obtained after the modification, based on ovals of Cassini, allows us to estimate new expressions of bounds for the roots of a given polynomial [1]; these are collected in a table in order to start a comparison through manual computation.
As the experimental part of this paper, we take several illustrative examples of polynomials of degree $n = 5$, in order to compare our obtained bounds with the absolute values of the exact roots. Bounds for the roots of the given polynomials have been computed manually with our improved methods. The exact roots have been computed with the software SCILAB and with the trace matrices of Graeffe [7], and then compared to our computed bounds. From our examples, we confirm that none of the classical methods alone gives the best value for both small and large values of the polynomial's coefficients: some methods are indicated for small values while others are indicated for large values of these coefficients. But the Hessenberg method makes it possible to obtain good estimates of the bounds, especially for small values of the coefficients. Dehmer's bound is also among the best in cases of large values of the polynomial's coefficients. In all cases, we note that no method makes it possible to estimate a bound lower than 1. However, when all the roots have absolute value less than 1, we would need to be able to estimate bounds strictly less than 1. Hence a study of polynomials to find a special formula that gives a bound less than 1 is indicated. As a future research topic, it will be interesting to find forms of matrices and methods that can provide bounds strictly less than 1 for such polynomials, if any exist.