
A method is presented for calculating a matrix spectrum with a given set of eigenvalues. It can be used to build systems with different spectra with the aim of choosing a desired alternative. It enables a practical implementation of control algorithms without resorting to a transformation of variables.

In algebra, problems dealing with eigenvalues belong to the spectral class. A matrix spectrum is changed via its elements, and this procedure can be implemented in various ways. For example, in computational mathematics, a matrix is multiplied by another matrix when solving systems of linear algebraic equations.

The problem of the targeted transformation of a spectrum is a subject of control theory, where it is called the method of characteristic equation assignment, arrangement of eigenvalues, spectrum control, and modal control [

Such a spectrum transformation, which is pertinently called the Frobenius one, has a clear theoretical basis; moreover, it specifies an obvious way for its practical application, namely supplementing the elements of the row of a Frobenius matrix to the values that make the matrix spectrum equal to a given set of numbers.

The reason for searching for a new method of spectrum transformation is the requirement of obtaining a desired spectrum for concrete technical systems using real-time control algorithms. A more detailed explanation of the necessity of another approach to this problem is given in Appendix 1.

The method for calculating a desired spectrum, for which the authors find it possible to use the definition in the title, is not based on a Frobenius matrix.

It can be used to calculate the feedback coefficients of a control system so as to obtain a desired spectrum of the closed-loop system without resorting to a transformation of variables. This allows practical control problems to be solved at the design phase of the system. By simulating the system behavior with different spectra, it is possible to find a suitable alternative, which can then be implemented as a direct digital control algorithm. The paper is an outgrowth of the work [

In the matrix, a Frobenius transformation forms a row of elements reflecting the coefficients of the characteristic polynomial. An additive influence of the feedback on these elements varies the spectrum. The feedback elements are calculated in an obvious way as the differences between the row elements and the coefficients of the polynomial whose roots equal the values of a given spectrum.

The proposed method is based on the relationships between the elements and the spectrum of the original matrix rather than the transformed one. These relations represent another form of Vieta's formulas, in which the sums of the main minors of the matrix appear instead of the characteristic polynomial coefficients. In contrast to the Frobenius form and Vieta's formulas, these minors contain all of the matrix elements.

In order to change a spectrum to a given set of eigenvalues, the matrix elements are replaced by unknowns, and the corresponding combinations of minors and of the matrix eigenvalues are replaced by the same combinations of numbers from the given set. The identities are thereby transformed into a system of equations for the unknowns. After the unknowns are replaced by the solution of the obtained system, the matrix gains the desired spectrum. The feedback elements can then be calculated in an obvious way as the differences between the replaced matrix elements and the elements of the solution.

Let A_{x} denote the matrix A with k elements replaced by unknowns.

The objective is to consider a range of issues related to the evaluation of the unknowns, which are substituted into the matrix A_{x}, such that the condition σ(A_{x}) = Λ is satisfied, where σ denotes the spectrum and Λ is a given set of numbers.

1) Replacement is the substitution of elements of the matrix A (replaced elements) with other elements (replacing elements). The replacing matrix A_{x} (matrix with replacement) is the matrix containing the replacing elements.

2) Spectral equations of the matrix A (the replacement system) are the k equations formed by replacing, in Vieta's formulas, the coefficients with the sums of the main minors of the matrix A_{x} and the roots with the elements of a given set Λ.

3) A replacement of the i-th order is a replacement leading to spectral equations of the i-th order. A linear replacement is a replacement of the first order; a non-linear replacement is a replacement of the second order or higher.

4) A spectral transformation of the matrix A is the replacement of the elements of A_{x} by the solution of the spectral equations.

For a matrix

Vieta's formulas are known:

where σ_{i} is the i-th root of the characteristic polynomial and a_{i} is the result of summation in the i-th row, i.e. a coefficient of the characteristic polynomial taken with its sign.
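These relations can be checked numerically; the following sketch (numpy; the 3×3 matrix is an arbitrary illustrative example, not one from the text) compares the coefficients of the characteristic polynomial with the elementary symmetric sums of the eigenvalues:

```python
import numpy as np

# Arbitrary 3x3 example matrix (illustrative only).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam = np.linalg.eigvals(A)

# Coefficients of det(sI - A) = s^3 + c1*s^2 + c2*s + c3.
c = np.poly(lam)          # array [1, c1, c2, c3]

# Elementary symmetric sums of the roots (right-hand sides of (2)).
e1 = lam.sum()
e2 = lam[0]*lam[1] + lam[0]*lam[2] + lam[1]*lam[2]
e3 = lam.prod()

# Vieta's formulas: c1 = -e1, c2 = +e2, c3 = -e3.
assert np.isclose(c[1], -e1)
assert np.isclose(c[2],  e2)
assert np.isclose(c[3], -e3)
```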

The Frobenius transformation of a spectrum is based on obtaining the elements on the left-hand side (taking into account the sign) by a non-singular transformation of the matrix A and supplementing them to the values that satisfy a given set. This corresponds to the fact that the sums on the right-hand side of (2) are replaced by the same relationships between the numbers λ_{i} of a given set. This reduces the system to the equations

with an obvious solution, where d_{i} is the sum in the i-th row. Substituting the solution into the Frobenius matrix forms its spectrum from the values of the given set Λ.

The possibility of changing a matrix spectrum by supplementing the matrix elements to values that satisfy a given set provides an alternative to the Frobenius transformation of a matrix.

To perform this procedure, we use the system (2) with the sums of main minors on the left-hand side. An example of such a system for a matrix of the third order is given by

In this case, all of the matrix elements enter the system. In particular, the system enables one to evaluate how each element influences the spectrum, which cannot be done with the help of the Frobenius transformation.
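The left-hand sides of such a system, the sums of main (principal) minors, equal the elementary symmetric functions of the eigenvalues. A minimal numerical check (numpy; the matrix is an arbitrary illustrative example):

```python
import numpy as np
from itertools import combinations

def main_minor_sums(A):
    """Sums of the i-th order main (principal) minors of A, i = 1..n."""
    n = A.shape[0]
    out = []
    for i in range(1, n + 1):
        out.append(sum(np.linalg.det(A[np.ix_(idx, idx)])
                       for idx in combinations(range(n), i)))
    return out

# Arbitrary 3x3 example (illustrative only).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
lam = np.linalg.eigvals(A)

# Elementary symmetric functions of the eigenvalues.
e = [sum(np.prod(lam[list(idx)]) for idx in combinations(range(3), i))
     for i in (1, 2, 3)]

for mi, ei in zip(main_minor_sums(A), e):
    assert abs(mi - ei) < 1e-9   # each minor sum matches its symmetric sum
```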

Now, we supplement arbitrary elements of A, for example the main diagonal elements, with the unknowns x_{1}, x_{2}, and x_{3}. As a result, the matrix A takes the form

and we obtain a system of equations for the supplements analogous to (3):

By solving (4), the goal is achieved. Indeed, substituting the solutions into the matrix A_{x} makes its spectrum equal to the given set without resorting to transforming the matrix.

Further efforts are aimed at simplifying the solution method, since solving this particular system directly, after expanding the brackets, is very complicated, and the complexity grows many-fold as the dimension increases. The difficulties are compounded when we need to solve multivariate problems associated with the choice of supplemented elements. The above example supplements the diagonal elements; besides this variant, others can be used, whose number also grows extremely quickly with the size of the matrix. The Frobenius transformation of a spectrum does not have this variety of alternatives, as (3) has a unique solution when the appropriate condition is satisfied.

The above computational difficulties can be significantly reduced by choosing as the unknowns the elements together with their supplements instead of just the supplements. After solving the equations, the supplements can be determined as easily as in the Frobenius transformation.

For this purpose, k arbitrary elements of A are replaced by unknowns, denoted for presentation by the capital letter X with the same indexes. For example, instead of the matrix

with unknown supplements to the elements a_{12}, a_{21}, a_{22}, we assume the replacing matrix

where, instead of the elements a_{12}, a_{21}, a_{22}, called replaced in the definitions above, the replacing elements X_{12}, X_{21}, X_{22}, considered as the unknowns, are located.

The result is the system of equations for X_{12}, X_{21}, X_{22} of the form

In the general case, replacing k elements of A, combining the replacing elements X_{i,j} into the vector X, and building A_{x} gives the system of equations

where F is a non-linear vector function of size k, called the spectral equation.

In a similar way, we can choose

N = C_{k^2}^{k} (8)

different replacing sets of elements and obtain replacing matrices of the form (5) and equations of the form (7). The number N grows very rapidly with the size of A. For small values of k, it is given in the table below.

The type of the system (7) depends on the arrangement of the replacing elements in A_{x}. If the replacing elements are allocated in different rows and columns, as shown for the matrix (5), the system can take a linear form or a non-linear form of degree from 2 to k. However, not all of the systems have a solution. Using a particular matrix, we can at once identify a group of systems that have no solution.

k | N | n | M
---|---|---|---
2 | 6 | 1 | 5
3 | 84 | 20 | 64
4 | 1820 | 495 | 1325
5 | 53130 | 15504 | 37626
6 | 1947792 | 593775 | 1354017
7 | 85900584 | 26978328 | 58922256
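Assuming the counts are binomial coefficients, N = C(k², k) replacing sets in total and n = C(k² − k, k) of them lying entirely off the main diagonal (consistent with the combination count discussed below for (10)), the table can be reproduced with a short sketch:

```python
from math import comb

# N: ways to choose k replacing elements among the k^2 entries;
# n: choices entirely off the main diagonal (inconsistent systems);
# M = N - n: consistent systems.
for k in range(2, 8):
    N = comb(k * k, k)
    n = comb(k * k - k, k)
    print(k, N, n, N - n)

assert comb(9, 3) == 84 and comb(6, 3) == 20   # the k = 3 row
```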

Further, for the sake of simplicity, we denote the replacing and non-replaced elements of matrices by numbers equal to their indexes and by dots, respectively.

The right-hand side of the first equation of the system (6) is a fixed sum, and the left-hand side contains the unknown; therefore, the equation is consistent for arbitrary values of a_{11}, a_{33}, and d_{1}. But, for the other matrix

there are no unknowns on the left-hand side of

so the last expression is inconsistent.

It is straightforward to make the following generalization. The matrix (9) belongs to the family of matrices formed by replacing k elements of A that lie outside of the main diagonal, in the two triangular areas containing k^{2} − k elements. This means that a necessary condition for solving (7) is that at least one replacing element must be located on the main diagonal. It follows that the number of inconsistent Equations (7) is equal to the number of combinations

n = C_{k^2−k}^{k}. (10)
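The necessary condition can be illustrated symbolically (a sketch in sympy; the matrix and the choice of replaced elements are an arbitrary example): with all replacing elements off the main diagonal, the first spectral equation (the trace, i.e. the sum of first-order main minors) contains no unknowns at all, so it cannot be satisfied for an arbitrary given set.

```python
import sympy as sp

# Three off-diagonal elements replaced by unknowns (illustrative).
x12, x21, x13 = sp.symbols('x12 x21 x13')
Ax = sp.Matrix([[1, x12, x13],
                [x21, 2, 0],
                [0, 1, 3]])

# Left-hand side of the first spectral equation: the trace of Ax.
first_equation_lhs = Ax.trace()
assert first_equation_lhs.free_symbols == set()   # no unknowns remain
```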

The dependence (10) is also given in the table (column n).

Subtracting (10) from (8), we obtain the number of consistent systems

M = N − n (11)

(given in the table, column M), which decomposes by equation order as

M = M_{1} + M_{2} + … + M_{k}, (12)

where M_{i} is the number of the i-th order equations.

Determining the terms in (12) in the general case as functions of k is a problem that remains to be solved. Even calculating M_{1}, i.e. determining the number of linear systems (7), is an unobvious procedure that requires an analysis of equations of the form (6). We can say definitely (or, rather, suggest, since there is no rigorous proof) only about the single term M_{i} for i = k: it is equal to 1. In other words, there is only one way to replace k elements of the matrix that yields a spectral equation of order k, namely replacing the elements of the main diagonal.

We next consider the particular case of a matrix of the third order. By analyzing the 64 consistent Equations (7), we establish 18, 45, and 1 variants of replacing 3 elements according to (11). The replacements are described by two types of first order equations, six types of second order equations, and one type of third order equation. Equations of all types are presented below.

At first, we discuss the variants with an evident solution of the problem of choosing elements for linear replacement, associated with the replacement of rows and columns of A.

There is only one element of the replacing rows and columns in each summand of the minors. Each summand contains one unknown, and the multipliers obtained from the remaining elements give the coefficient of the summand. The assembly of these coefficients forms a matrix denoted by R. These equations belong to type 1.1.

Some summands on the left-hand side, as can be seen from the system (6), do not contain replacing elements. We combine these elements in row i into the element b_{i}. Then, after combining the elements b_{i} and d_{i} into the vectors b and d respectively, we can represent Equation (7) in the linear form

RX = d − b (13)

obtained by replacing rows and columns.

The solution to (13) exists under the condition

det R ≠ 0. (14)

The number of Equations (13) is 2k. (15)

They do not describe all possible linear systems, but only the obvious ones.
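A row replacement of this kind can be sketched symbolically without forming R and b explicitly (sympy; the matrix and the target set are illustrative assumptions, not data from the text). Equating the characteristic polynomial of the replacing matrix to the polynomial with the given roots yields equations that are linear in the unknowns of the replaced row:

```python
import sympy as sp

x11, x12, x13, s = sp.symbols('x11 x12 x13 s')

# Replace the whole first row by unknowns (a type 1.1 replacement).
Ax = sp.Matrix([[x11, x12, x13],
                [1,   0,   2 ],
                [0,   1,   1 ]])
target = [-1, -2, -3]                      # given set (illustrative)
p_T = sp.expand((s + 1) * (s + 2) * (s + 3))

# Matching coefficients gives equations linear in x11, x12, x13.
diff = sp.expand(Ax.charpoly(s).as_expr() - p_T)
eqs = sp.Poly(diff, s).all_coeffs()
sol = sp.solve(eqs, [x11, x12, x13])       # unique solution

# The replaced matrix now carries exactly the given spectrum.
assert set(Ax.subs(sol).eigenvals()) == set(target)
```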

If the replacing elements do not form rows or columns, we can also obtain a linear system (7). In this case, the summands of the minors can contain a product of replacing elements. Indeed, for example, for the matrix

the replacement system is

In the third row, we obtain a summand with the product of the unknowns X_{11} and X_{32}. In this case, there is a formal reason to assign (17) to the second order systems. However, X_{11} can be found from the first equation (i.e. X_{11} is known), and the system becomes linear. These equations belong to type 1.2. The given types of equations exhaust the linear replacements.

Replacements with a single diagonal element lead to different types of second order equations. Consider the matrix

which differs from (16) in a single element. The second and third equations for (18) are given by

The last equation of (19) resembles (17) only externally: the product contains no variable expressed from the first equation. This type of replacement is denoted as 2.1.

The matrix

differs from (18) in a single element and contains a single product of elements in two equations:

This type of replacement is denoted as 2.2.

The matrix

differs from (16) in a single element and also contains the product of elements in two equations

but the third equation has the product of three elements. This type is denoted as 2.3.

The matrix

which differs from (16) in a single element, is characterized by the equations

with two products of two unknowns in the third row. This type is denoted as 2.4.

When two diagonal elements are replaced, the type of equations depends on the choice of the third element. Consider two matrices

with identical replaced diagonal elements and a common first equation

The equations for the matrix a) differ from 2.2 in the first equation; they are denoted as 2.5. For the matrix b), the equations

with a single product in the second row and two products in the third row are denoted as 2.6.

The last equation of (7) contains k! summands with products of k elements, while a single summand has all unknown multipliers. Replacement of k elements using the variants of (10) gives spectral equations of order k only in the unique case when the main diagonal of the matrix is replaced. Other variants of replacement lead to equations of lower order. This conclusion is drawn without proof, based on an analysis of all spectral equations of a third order matrix.

For the matrix

we write the system (2) as

Two of the four elements can be replaced in six ways. Only one of them, replacing the elements a_{12}, a_{21}, leads to inconsistent equations of the form (6). Of the five remaining ways, four are replacements of rows and columns and lead to linear spectral equations. Replacing the main diagonal elements a_{11}, a_{22} leads to a second order equation.

Replacement of rows and columns gives the replacing matrices

and linear equations

We present them in the form of (13) as

where

For the matrices (29), the conditions (14) are given by

If these conditions are satisfied, the solutions take the form

Substituting the solutions into (29) gives the matrices

with a spectrum equal to a given set
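As a numeric illustration of such a row replacement for a second order matrix (the element values and the given set are assumed for illustration, since the example's data appears in the omitted displays): the trace equation fixes X_{11}, and the determinant equation then yields X_{12}, provided a_{21} ≠ 0 (the condition (14)).

```python
import numpy as np

a21, a22 = 1.0, 2.0            # non-replaced elements (illustrative)
lam = np.array([-1.0, -3.0])   # given set (illustrative)
s, p = lam.sum(), lam.prod()   # desired trace and determinant

X11 = s - a22                  # from X11 + a22 = lam1 + lam2
X12 = (X11 * a22 - p) / a21    # from X11*a22 - X12*a21 = lam1*lam2

Ax = np.array([[X11, X12],
               [a21, a22]])
assert np.allclose(np.sort(np.linalg.eigvals(Ax)), np.sort(lam))
```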

Replacing the diagonal elements a_{11}, a_{22} gives the matrix

with the equation

Its solution

where

with a given spectrum.
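The diagonal replacement can be sketched numerically as well (illustrative values): the equations X_{11} + X_{22} = λ_{1} + λ_{2} and X_{11}X_{22} − a_{12}a_{21} = λ_{1}λ_{2} mean that X_{11} and X_{22} are the roots of a quadratic built from the desired trace and determinant.

```python
import numpy as np

a12, a21 = 1.0, -2.0           # non-replaced elements (illustrative)
lam = np.array([-1.0, -2.0])   # given set (illustrative)
s, p = lam.sum(), lam.prod()

# X11, X22 are the roots of t^2 - s*t + (p + a12*a21) = 0.
X11, X22 = np.roots([1.0, -s, p + a12 * a21])

Ax = np.array([[X11, a12],
               [a21, X22]])
assert np.allclose(np.sort(np.linalg.eigvals(Ax)), np.sort(lam))
```

Depending on the data, the quadratic may have complex roots; this corresponds to the complex solutions met in Example 1.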

Example 1. Given a set

After calculating the matrices (31), we obtain

Their spectrum contains the elements of the given set.

With these results, the solution (32)

as well as the matrices (33)

are complex.

Each term M_{i} in (12) is found via an analysis of the equations.

Replacing rows and columns gives the matrices

with the appropriate equations. For example, for the matrix 1), we obtain the following equations

They can be presented in the form (13) as

where

As mentioned above, choosing replacing elements in different rows leads to linear equations of type 1.2. The matrices and equations for all remaining variants of this type are presented in Appendix 2. The first equation is not cited because it remains identical for replacements with other elements.

In Appendix 2, we also consider the matrices and equations of types 2.1 - 2.4, when a single diagonal element is replaced, and present the equations of types 2.5 and 2.6, when two diagonal elements are replaced.

For the third order matrix, 63 of the 64 variants of choosing three elements lead to equations of the first and second order. The remaining matrix

is characterized by the third order equation

The result obtained can be generalized for an arbitrary order matrix.

Example 2. With a set

The matrix 1). Let’s calculate

The matrices are non-singular; hence, all the solutions exist:

With these solutions, replacing matrices (34)

take the spectrum

that is equal to the given set to within the calculation accuracy.

The matrix 2). Omitting intermediate calculations here and further, we find the matrices

Among all the matrices, only R_{5} is singular, and hence the solution X_{5} does not exist.

With the remaining matrices, the solutions to the systems (36) are

Substituting them into the matrices (34)

one forms the spectrum

that is equal to a given set.

The matrix 3). Let’s evaluate the matrices

The matrices R_{2} and R_{5} are singular and hence the solutions X_{2} and X_{5} do not exist. With the remaining solutions

the matrices

take the given spectrum

The matrix 4). Among the matrices

R_{1} and R_{6} are non-singular. With them, the solutions are

For replacing matrices

these solutions provide a given spectrum.

Example 3. This example describes a spectrum transformation with a linear replacement of elements located in different rows and columns. Consider the matrix and Equation (12) given in Appendix 2. From the first equation, we determine the unknown

at once. The other two equations reduce to an equation for X_{13} and X_{23} with the matrix

and the vector

With the set and the matrix 1) from Example 2, the solution

for the matrix 15)

forms the given spectrum

All calculations were made in MathCAD.

A method for obtaining a matrix spectrum equal to a given set of numbers, without transformation to a Frobenius form, has been stated. The calculation tool is a system of equations obtained by replacing arbitrary matrix elements with unknowns, whose number is equal to the matrix size, derived from the relationships between the matrix elements in the form of main minors and the elements of a given set.

The method admits many variants of choosing the replacing elements, with equations for calculating them ranging from linear to non-linear of order equal to the matrix size.

Albert Iskhakov, Sergey Skovpen (2015) A Direct Transformation of a Matrix Spectrum. Advances in Linear Algebra & Matrix Theory, 5, 109-128. doi: 10.4236/alamt.2015.53011

In technical systems, variables having a certain physical sense are used. These variables characterize energy stores, such as the speed of a moving mass, a solenoid current, a capacitor voltage, and similar parameters, which are measured by sensors. To obtain a Frobenius matrix, both direct and inverse transformations of variables are required in the feedback loop. The inverse transformation is needed because a combination of physical variables must enter the system input. A firmware implementation of the transformation needs additional hardware expenses, and a software realization needs an expenditure of time. This leads to a delay in the feedback loop and to a deterioration of the dynamical properties of the system. As a result, the advantage of a control method based on varying the system spectrum is not used to the full, owing to the features of the method used to implement the spectrum transformation.

For the type 1.2, replacement of the element a_{11} (number 7) refers to the matrix (16) and Equation (17):

8)

9)

10)

Replacing the element a_{22} gives

11)

12)

13)

14)

Replacing the element a_{33} gives

15)

16)

17)

18)

For the type 2.1, replacement of the element a_{11} (number 1) refers to the matrix (18) and Equation (19):

2)

Replacing the element a_{22} gives

3)

4)

Replacing the element a_{33} gives

5)

6)

For the type 2.2, replacement of the element a_{11} (number 7) refers to the matrix (20) and Equation (21):

8)

Replacing the element a_{22} gives

9)

10)

Replacing the element a_{33} gives

11)

12)

The type 2.3 (number 13) refers to the matrix (22) and Equation (23):

14)

15)

For the type 2.4, replacement of the element a_{11} (number 16) refers to the matrix (24) and Equation (25):

17)

18)

19)

Replacing the element a_{22} gives

20)

21)

22)

23)

Replacing the element a_{33} gives

24)

25)

26)

27)

For the type 2.5, replacement of the elements a_{11}, a_{22} (number 28) refers to the matrix (a) (26) and Equation (27):

29)

Replacing the elements a_{11}, a_{33} gives

30)

31)

Replacing the elements a_{22}, a_{33} gives

32)

33)

For the type 2.6, replacement of the elements a_{11}, a_{22} (number 34) refers to the matrix (b) (26) and Equation (28):

35)

36)

37)

Replacing the elements a_{11}, a_{33} gives

38)

39)

40)

41)

Replacing the elements a_{22}, a_{33} gives

42)

43)

44)

45)