Eigenstructure Assignment Method and Its Applications to the Constrained Problem
1. Introduction
The eigenstructure assignment method plays a central role in the control theory of linear systems. A state feedback control law is used to this end, changing the eigenvalues or eigenvectors of the open loop to desired ones in the closed loop. This method is usually employed to construct optimal or stabilizing control laws ([1]-[9]). These articles and the references therein constitute a comprehensive summary and an important bibliography on the control of linear systems with input saturation. Indeed, authors have used different concepts, leading to many methods of eigenstructure assignment control laws to regulate linear systems with input saturation.
Throughout this paper, we will be interested in continuous-time systems of the form
$$\dot{x}(t)=Ax(t)+Bu(t). \qquad (1)$$
The matrices $A$ and $B$ are real and constant: $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times m}$ with $m\leq n$. The vector $x(t)\in\mathbb{R}^{n}$ represents the state vector of the system and $u(t)\in\mathbb{R}^{m}$ is the control vector. We suppose that the spectrum of the matrix $A$ contains $n-m$ desirable or stable eigenvalues $\lambda_{m+1},\ldots,\lambda_{n}$ and $m$ undesirable or unstable eigenvalues $\lambda_{1},\ldots,\lambda_{m}$. We also suppose that the pair $(A,B)$ is stabilizable.
The presence of undesirable eigenvalues makes System (1) unstable and, in [4] or in [10], methods to overcome the instability of System (1) were given, keeping unchanged the open-loop stable eigenvalues and the corresponding eigenspace and replacing the remaining undesirable eigenvalues by other chosen values. But, in these methods, additional conditions on System (1) should be satisfied.
In this paper, we try to get rid of these additional conditions on System (1). First, we give some outlines of the methods described in [4] and [10]. In [4], the method, called the inverse procedure, consists of giving a matrix $H\in\mathbb{R}^{m\times m}$ with some desirable or stable spectrum and then computing, when possible, a full rank feedback matrix $F\in\mathbb{R}^{m\times n}$ such that
$$F(A+BF)=HF \qquad (2)$$
and the kernel of $F$ is the stable subspace. In this case, the spectrum of the matrix $A+BF$ is stable since it is constituted by the desirable eigenvalues of $A$ and the chosen spectrum of $H$. In other words, the eigenvalues of $H$ will replace, in the closed loop, the undesirable eigenvalues of $A$. With the change of variables $y(t)=Fx(t)$ in the initial system, we get
$$\dot{y}(t)=Hy(t), \qquad (3)$$
which allows one to focus on the unstable part of this system.
The inverse procedure described in [4] ensures the existence of a matrix $F$ that satisfies the conditions mentioned above and gives a way to compute it under the following conditions:
a) The matrix $H$ is diagonalizable; that is, there exist $m$ linearly independent eigenvectors $w_{1},\ldots,w_{m}$ of $H$ associated to some stable eigenvalues $\mu_{1},\ldots,\mu_{m}$.
b) The endomorphism induced by the matrix $A$ on the stable subspace is diagonalizable; that is, there exist $n-m$ linearly independent eigenvectors $v_{m+1},\ldots,v_{n}$ of $A$ in the stable subspace.
c) The spectrum of $A$ and the spectrum of $H$ are disjoint.
d) The matrix
$$\begin{pmatrix}x_{1} & \cdots & x_{m} & v_{m+1} & \cdots & v_{n}\end{pmatrix}, \quad\text{where } x_{i}=(\mu_{i}I_{n}-A)^{-1}Bw_{i},$$
is invertible.
Under these conditions and according to the method described in [4], the matrix $F$ is unique and is given by
$$F=\begin{pmatrix}w_{1} & \cdots & w_{m} & 0 & \cdots & 0\end{pmatrix}\begin{pmatrix}x_{1} & \cdots & x_{m} & v_{m+1} & \cdots & v_{n}\end{pmatrix}^{-1} \qquad (4)$$
where $0$ is the null vector of $\mathbb{R}^{m}$.
In [10], the condition b) is kept and a new assumption should be fulfilled:
d') the matrix $VB$ is invertible.
The method described in [10] consists of computing a matrix $V\in\mathbb{R}^{m\times n}$ such that its rows span the orthogonal of the stable subspace, by taking, for example, $m$ linearly independent rows of the matrix $Q(A)$, and then choosing a stable matrix $H\in\mathbb{R}^{m\times m}$. Then, according to the method described in [10], the feedback matrix $F$ is given by $F=KV$, where $K$ is the inverse matrix of the solution $X$ of the Sylvester equation
$$XH-\Lambda X=VB \qquad (5)$$
and $\Lambda$ is a matrix such that $VA=\Lambda V$.
In this paper, we generalize the method described in [10] to a more general class of systems for which the condition d') is not necessary. In fact, no additional condition is needed to deal with System (1). As in [10], the feedback matrix $F$ is always given by $KV$, where $K$ is an invertible matrix of $\mathbb{R}^{m\times m}$ such that $K^{-1}$ is the solution of the Sylvester Equation (5).
The methods described in [10] and [4] are, in fact, partial pole placement methods in which the desirable eigenvalues of the matrix $A$ are kept in the closed loop. But it may happen that these desirable eigenvalues are close to the imaginary axis, which causes a slow convergence rate to the origin. To overcome this problem, a total pole placement is needed. The technique of augmentation (see [5] and [6]) allows one to perform a total pole placement even when the matrix $B$ is not of full rank. This is possible with the inverse procedure [4], but not with the method described in [10] since, under the condition d'), the matrix $B$ is of full rank. So, on the one hand, the method that we present constitutes a generalization of the two methods described in [10] and [4] without any additional condition on the systems and, on the other hand, it allows one to make, if necessary, a total pole placement by the use of the augmentation technique.
The paper is organized as follows: in Section 2, definitions, notations and some known facts are presented to be used in the sequel. The main results are presented in Section 3 together with an illustrative example. Some particular cases are presented in Section 4. Section 5 is devoted to the total eigenstructure problem, with the illustrative example of the double integrator.
2. Preliminaries
2.1. Notations and Definitions
• $I_{s}$ is the identity matrix of $\mathbb{R}^{s\times s}$ for $s\in\mathbb{N}^{*}$.
• Matrices of the form $hI_{m}$ are called scalar matrices, for $h\in\mathbb{R}$ and $m\in\mathbb{N}^{*}$.
• If $M$ is a $p\times q$ real matrix, for some $p,q\in\mathbb{N}^{*}$, its transpose, the $q\times p$ real matrix, will be denoted by $M^{T}$.
• When we view $\mathbb{R}^{s}$ as a Euclidean space, the usual inner product is considered, that is,
$$\langle X,Y\rangle = X^{T}Y=\sum_{i=1}^{s}x_{i}y_{i},$$
where $X$ and $Y$ are two vectors of $\mathbb{R}^{s}$.
• If $S$ is a nonempty subset of $\mathbb{R}^{s}$, its orthogonal will be denoted by $S^{\perp}$, that is,
$$S^{\perp}=\{Y\in\mathbb{R}^{s} : \langle X,Y\rangle = 0 \ \text{for all}\ X\in S\}.$$
• For a real number $\alpha$, we define
$$\alpha^{+}=\max(\alpha,0) \quad\text{and}\quad \alpha^{-}=\max(-\alpha,0).$$
• If $M$ is a $p\times q$ real matrix, for some $p,q\in\mathbb{N}^{*}$, $m_{ij}$ will denote the coefficient of $M$ corresponding to the $i$-th row and $j$-th column.
• If $H$ is an $m\times m$ real matrix, for some $m\in\mathbb{N}^{*}$, $\tilde{H}$ is the $2m\times 2m$ real matrix defined by
$$\tilde{H}=\begin{pmatrix}H_{1} & H_{2}\\ H_{2} & H_{1}\end{pmatrix},$$
where $(H_{1})_{ij}=h_{ij}$ if $i=j$, $(H_{1})_{ij}=h_{ij}^{+}$ if $i\neq j$, and $(H_{2})_{ij}=0$ if $i=j$, $(H_{2})_{ij}=h_{ij}^{-}$ if $i\neq j$.
• For real matrices (or real vectors) $M$ and $N$ of the same size, we say that $M$ is less than or equal to $N$ if every component of $M$ is less than or equal to the corresponding component of $N$. We then write $M\leq N$.
• If $M$ is a $p\times q$ real matrix, for some $p,q\in\mathbb{N}^{*}$,
(a) $\operatorname{Ker}(M)$ is the kernel of $M$: the subspace of $\mathbb{R}^{q}$ of vectors $X$ such that $MX=0$.
(b) $\operatorname{Im}(M)$ is the image of $M$: the subspace of $\mathbb{R}^{p}$ spanned by the columns of $M$, which is also the set of vectors of the form $MX$ with $X\in\mathbb{R}^{q}$.
• If $M$ is an $s\times s$ real matrix, for some $s\in\mathbb{N}^{*}$, $\sigma(M)$ denotes its spectrum in the field of complex numbers $\mathbb{C}$.
• If $z$ is a complex number, then $\operatorname{Re}(z)$ is the real part of $z$.
2.2. Some Notes on the Stable-Unstable Subspaces
1. To System (1), we associate:
(a) The two polynomials
$$P(\lambda)=\prod_{i=1}^{m}(\lambda-\lambda_{i}) \quad\text{and}\quad Q(\lambda)=\prod_{i=m+1}^{n}(\lambda-\lambda_{i}),$$
that are factors of the characteristic polynomial of $A$. In case of $m=n$, the polynomial $Q$ is just the constant polynomial 1.
(b) The two subspaces of $\mathbb{R}^{n}$: $\operatorname{Ker}P(A)$ and $\operatorname{Ker}Q(A)$, the unstable and the stable subspace, respectively. In case of $m=n$, the subspace $\operatorname{Ker}Q(A)$ is just the trivial subspace $\{0\}$.
Remark 1
1. Since $P$ and $Q$ are coprime, $\operatorname{Ker}P(A)$ and $\operatorname{Ker}Q(A)$ are complementary subspaces of $\mathbb{R}^{n}$. Moreover, we have
$$A\,\operatorname{Ker}P(A)\subseteq \operatorname{Ker}P(A)$$
and also
$$A\,\operatorname{Ker}Q(A)\subseteq \operatorname{Ker}Q(A).$$
2. Since the pair $(A,B)$ is stabilizable, we have
$$\operatorname{rank}\begin{pmatrix}A-\lambda I_{n} & B\end{pmatrix}=n \quad\text{for every eigenvalue}\ \lambda\ \text{of}\ A\ \text{with}\ \operatorname{Re}(\lambda)\geq 0.$$
2.3. Some Notes on Sylvester Equation
Many problems in analysis and control theory can be solved using the well-known Sylvester equation. This equation is widely studied and used in the literature ([10]-[15]). Since the Sylvester equation plays a central role in the development of this work, we shall recall conditions under which it has a unique solution. A Sylvester equation is any equation of the form
$$XM-NX=C \qquad (6)$$
where $M$ is a $q\times q$ real or complex matrix, $N$ is a $p\times p$ real or complex matrix and $C$ is a $p\times q$ real or complex matrix, while the matrix $X$ stands for an unknown $p\times q$ real or complex matrix. The following well-known result gives a sufficient condition for the existence and uniqueness of a solution of Sylvester Equation (6).
Theorem 1
If the spectrums of the matrices $M$ and $N$ are disjoint, $\sigma(M)\cap\sigma(N)=\emptyset$, then Sylvester Equation (6) has a unique solution.
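Theorem 1 can be exercised numerically. A minimal sketch (the matrices below are arbitrary assumptions of ours) that solves the generic form $XM-NX=C$ by vectorization with NumPy:

```python
import numpy as np

def solve_sylvester_generic(M, N, C):
    """Solve X M - N X = C by vectorization.

    With column-major (Fortran) vectorization:
    vec(X M) = (M^T kron I) vec(X) and vec(N X) = (I kron N) vec(X).
    """
    p, q = C.shape
    big = np.kron(M.T, np.eye(p)) - np.kron(np.eye(q), N)
    x = np.linalg.solve(big, C.flatten(order="F"))
    return x.reshape((p, q), order="F")

# Example matrices (assumed): spectra {-1, -2} and {1, 3} are disjoint,
# so Theorem 1 guarantees a unique solution.
M = np.array([[-1.0, 1.0], [0.0, -2.0]])
N = np.array([[1.0, 0.0], [0.0, 3.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

assert min(abs(lm - ln) for lm in np.linalg.eigvals(M)
           for ln in np.linalg.eigvals(N)) > 0
X = solve_sylvester_generic(M, N, C)
print(np.allclose(X @ M - N @ X, C))   # residual check
```
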
2.4. System (1) with Constraints on the Control
Consider System (1) with the assumption that the control $u(t)$ is constrained to be in the region $\Omega$ of $\mathbb{R}^{m}$ defined by
$$\Omega=\{u\in\mathbb{R}^{m} : -\varepsilon_{2}\leq u\leq \varepsilon_{1}\},$$
where $\varepsilon_{1}$ and $\varepsilon_{2}$ are positive vectors in $\mathbb{R}^{m}$. Note that the region $\Omega$ is a non-symmetrical polyhedral set, as is generally the case in practical situations. Let us first consider the unconstrained case, where the regulator problem for System (1) consists in realizing a feedback law
$$u(t)=Fx(t) \qquad (7)$$
where $F$ is chosen in $\mathbb{R}^{m\times n}$ with full rank $m$. In this case, System (1) becomes
$$\dot{x}(t)=(A+BF)x(t). \qquad (8)$$
The stability of the closed-loop System (8) is obtained if, and only if,
$$\operatorname{Re}(\lambda)<0 \qquad (9)$$
for all eigenvalues $\lambda$ of the matrix $A+BF$. In the constrained case, the approach proposed in ([4]-[6]) consists of giving conditions allowing the choice of a stabilizing controller (7) in such a way that the state is constrained to evolve in a specified region of $\mathbb{R}^{n}$ defined by
$$D=D(F,\varepsilon_{1},\varepsilon_{2})=\{x\in\mathbb{R}^{n} : -\varepsilon_{2}\leq Fx\leq \varepsilon_{1}\}. \qquad (10)$$
Note that the domain $D$ is bounded only in case of $m=n$. In fact, when $m<n$, the subspace $\operatorname{Ker}(F)$ has dimension $n-m$ and is a subset of this domain. Suppose now that there is a matrix $H\in\mathbb{R}^{m\times m}$ such that
$$F(A+BF)=HF. \qquad (11)$$
Hence, by letting $u(t)=Fx(t)$, we get
$$\dot{x}(t)=(A+BF)x(t) \qquad (12)$$
and then $\frac{d}{dt}\big(Fx(t)\big)=F(A+BF)x(t)=HFx(t)$. We would get $x(t)\in D$ for all $t\geq 0$ whenever $x(0)\in D$. We say that $D$ is positively invariant with respect to the motion of System (12). More generally, we give the following definition of positive invariance.
Definition 1
A nonempty subset $S$ of $\mathbb{R}^{n}$ is said to be positively invariant with respect to the motion of System (12) if, for every initial state $x(0)$ in $S$, the motion $x(t)$ remains in $S$ for every $t\geq 0$.
The following theorem gives necessary and sufficient conditions for domain D to be positively invariant with respect to the motion of System (12).
Theorem 2 ([6])
The domain $D$ is positively invariant with respect to the motion of System (12) if, and only if,
$$\tilde{H}\varepsilon\leq 0, \qquad (13)$$
where $\varepsilon$ is the real vector $\begin{pmatrix}\varepsilon_{1}\\ \varepsilon_{2}\end{pmatrix}\in\mathbb{R}^{2m}$.
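Condition (13) can be tested mechanically. A minimal Python sketch, assuming the standard construction of $\tilde{H}$ recalled in Section 2.1 (diagonal of $H$ kept in $H_{1}$, positive parts of the off-diagonal entries in $H_{1}$, negative parts in $H_{2}$) and assumed example data, not the paper's:

```python
import numpy as np

def h_tilde(H):
    """Build the 2m x 2m matrix H~ from H: H1 keeps the diagonal of H
    and the positive parts of its off-diagonal entries; H2 holds the
    negative parts of the off-diagonal entries (zero diagonal)."""
    m = H.shape[0]
    diag = np.eye(m, dtype=bool)
    H1 = np.where(diag, H, np.maximum(H, 0.0))
    H2 = np.where(diag, 0.0, np.maximum(-H, 0.0))
    return np.block([[H1, H2], [H2, H1]])

def satisfies_condition_13(H, eps1, eps2):
    """Check H~ (eps1; eps2) <= 0, i.e. inequality (13)."""
    eps = np.concatenate([eps1, eps2])
    return bool(np.all(h_tilde(H) @ eps <= 1e-12))

# Assumed H (Hurwitz, not diagonalizable) and assumed bounds.
H = np.array([[-1.0, 1.0], [0.0, -1.0]])
print(satisfies_condition_13(H, np.array([1.0, 1.0]), np.array([1.0, 1.0])))  # holds
print(satisfies_condition_13(H, np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # fails
```
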
Till now, we have supposed the existence of a matrix H that satisfies Equation (11). The following result, which does not take into account the constrained problem, gives necessary and sufficient conditions for its existence.
Theorem 3 ([4])
The subspace $\operatorname{Ker}(F)$ is positively invariant with respect to the motion of System (12) if, and only if, there is a matrix $H\in\mathbb{R}^{m\times m}$ such that Equation (11) is satisfied.
Note that saying "$\operatorname{Ker}(F)$ is positively invariant with respect to the motion of System (12)" is the same as saying "$\operatorname{Ker}(F)$ is stable by the matrix $A$". If the constrained problem is taken into account, the following theorem gives necessary and sufficient conditions for positive invariance of the domain of states $D(F,\varepsilon_{1},\varepsilon_{2})$.
Theorem 4 ([6])
The domain $D$ is positively invariant with respect to System (8) if, and only if, there is a matrix $H\in\mathbb{R}^{m\times m}$ such that Equation (11) is satisfied and
$$\tilde{H}\varepsilon\leq 0.$$
3. Main Results
At first, we compute, and fix in the sequel, some matrix $V\in\mathbb{R}^{m\times n}$ such that its rows span the subspace $(\operatorname{Ker}Q(A))^{\perp}$, the orthogonal of the stable subspace. Since the dimension of the subspace $\operatorname{Ker}Q(A)$ is $n-m$, the matrix $V$ is of full rank $m$.
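One way to obtain such a $V$ in practice (our choice, not prescribed by the paper) is to stack left eigenvectors of $A$ associated with the undesirable eigenvalues, since these are orthogonal to the stable subspace:

```python
import numpy as np

def unstable_left_rows(A, tol=0.0):
    """Rows spanning the orthogonal of the stable subspace: left
    eigenvectors of A for eigenvalues with Re >= tol.
    (Assumes those eigenvalues are real; a complex pair would need
    its real and imaginary parts stacked instead.)"""
    w, U = np.linalg.eig(A.T)      # columns of U: left eigenvectors of A
    sel = w.real >= tol
    return U[:, sel].T.real

# Assumed example: eigenvalues {1, 2, -3}; the stable subspace is
# spanned by the third coordinate axis.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [4.0, 5.0, -3.0]])
V = unstable_left_rows(A)
print(V.shape)   # (2, 3): V has full rank m = 2
```

The rows of this $V$ annihilate every stable eigenvector of $A$, which is exactly the spanning property required above.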
Proposition 1
There is a unique matrix $\Lambda\in\mathbb{R}^{m\times m}$ such that $VA=\Lambda V$. This matrix is given by
$$\Lambda=VAV^{T}\big(VV^{T}\big)^{-1} \qquad (14)$$
and its spectrum is $\sigma(\Lambda)=\{\lambda_{1},\ldots,\lambda_{m}\}$.
Proof.
Let $f$ be the endomorphism of $\mathbb{R}^{n}$ canonically associated to the matrix $A^{T}$, that is,
$$f(x)=A^{T}x \quad\text{for every}\ x\in\mathbb{R}^{n}.$$
The subspace $(\operatorname{Ker}Q(A))^{\perp}$ is stable under $f$ (that is, $f(x)$ belongs to $(\operatorname{Ker}Q(A))^{\perp}$ whenever $x$ is in $(\operatorname{Ker}Q(A))^{\perp}$). So, one can define the endomorphism $g$ induced by $f$ on $(\operatorname{Ker}Q(A))^{\perp}$. The matrix $V^{T}$, when identified with its column vectors, can be seen as a basis of $(\operatorname{Ker}Q(A))^{\perp}$. In this basis, if we denote by $L$ the matrix of $g$, we will have
$$A^{T}V^{T}=V^{T}L. \qquad (15)$$
Let now $\Lambda=L^{T}$. By transposition of formula (15), we get $VA=\Lambda V$. Since $g$ is induced by $f$ on $(\operatorname{Ker}Q(A))^{\perp}=\operatorname{Ker}P(A^{T})$, its spectrum is $\{\lambda_{1},\ldots,\lambda_{m}\}$, which is also the spectrum of $\Lambda$. Formula (14) derives from the fact that $V$ is of full rank $m$ and shows the uniqueness of the matrix $\Lambda$. □
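Formula (14) is straightforward to evaluate. A small sketch on an assumed $A$ and a $V$ whose rows span the orthogonal of the stable subspace of that $A$:

```python
import numpy as np

# Assumed example: undesirable eigenvalues {1, 2}, desirable {-3};
# the stable subspace is spanned by e3, so the rows of V span its
# orthogonal complement.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, -3.0]])
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Formula (14): Lambda = V A V^T (V V^T)^{-1}.
Lam = V @ A @ V.T @ np.linalg.inv(V @ V.T)

print(np.allclose(V @ A, Lam @ V))           # V A = Lambda V
print(sorted(np.linalg.eigvals(Lam).real))   # the undesirable eigenvalues
```
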
Proposition 2
For any matrix $F\in\mathbb{R}^{m\times n}$ such that $\operatorname{Ker}(F)=\operatorname{Ker}Q(A)$, there is an invertible matrix $K\in\mathbb{R}^{m\times m}$ such that $F=KV$, and $F$ is of full rank $m$.
Proof.
Let $F\in\mathbb{R}^{m\times n}$ be such that $\operatorname{Ker}(F)=\operatorname{Ker}Q(A)$. Since the dimension of $\operatorname{Ker}Q(A)$ is $n-m$, the rank formula shows that $F$ is of full rank $m$. Let now $U\in\mathbb{R}^{m}$ be such that $FV^{T}U=0$. Then $V^{T}U$ belongs to $\operatorname{Ker}(F)=\operatorname{Ker}Q(A)$ and also to $(\operatorname{Ker}Q(A))^{\perp}$, so that $V^{T}U=0$. But the matrix $VV^{T}$ is invertible, so $U=0$ and this shows that the matrix $FV^{T}$ is invertible. Clearly, $K=FV^{T}(VV^{T})^{-1}$ is invertible. For $x\in\mathbb{R}^{n}$, we have
$$x=x_{s}+V^{T}u, \quad\text{with}\ x_{s}\in\operatorname{Ker}Q(A)\ \text{and}\ u\in\mathbb{R}^{m}.$$
Since $Fx_{s}=0$ and $Vx_{s}=0$, we have $Fx=FV^{T}u$ and $Vx=VV^{T}u$, and then $u=(VV^{T})^{-1}Vx$. So
$$Fx=FV^{T}\big(VV^{T}\big)^{-1}Vx=KVx.$$
Since this equality holds for all $x\in\mathbb{R}^{n}$, we have $F=KV$. □
Theorem 5
Let $\{\mu_{1},\ldots,\mu_{m}\}$ be a set of complex numbers, stable under complex conjugation, such that $\{\mu_{1},\ldots,\mu_{m}\}\cap\{\lambda_{1},\ldots,\lambda_{m}\}=\emptyset$, and let $H\in\mathbb{R}^{m\times m}$ be a matrix with spectrum $\{\mu_{1},\ldots,\mu_{m}\}$. Then, there is a matrix $F\in\mathbb{R}^{m\times n}$ of full rank $m$ such that $\operatorname{Ker}(F)=\operatorname{Ker}Q(A)$ and $F(A+BF)=HF$ if, and only if, the Sylvester equation
$$XH-\Lambda X=VB \qquad (16)$$
has a unique invertible solution $X$, in which case $F=X^{-1}V$.
Proof.
The if part: Let $F\in\mathbb{R}^{m\times n}$ be a full rank matrix such that $\operatorname{Ker}(F)=\operatorname{Ker}Q(A)$ and $F(A+BF)=HF$. From Proposition 2, there is an invertible matrix $K$ such that $F=KV$. Since $VA=\Lambda V$, we get
$$HKV=KV(A+BKV)=K\Lambda V+K(VB)KV$$
and then, $V$ being of full rank, $HK=K\Lambda+K(VB)K$. Multiplying this last equality by $K^{-1}$ on both sides gives $K^{-1}H=\Lambda K^{-1}+VB$. Then $X=K^{-1}$ is an invertible solution to Sylvester Equation (16).
The only if part: Suppose now that Sylvester Equation (16) has an invertible solution $X$ and let $K=X^{-1}$. The matrix $F=KV$ satisfies $\operatorname{Ker}(F)=\operatorname{Ker}(V)=\operatorname{Ker}Q(A)$ and, because the matrix $K$ is invertible, $F$ is of full rank $m$. We also have, multiplying $XH=\Lambda X+VB$ by $K$ on both sides, $HK=K\Lambda+K(VB)K$, and then
$$F(A+BF)=K\Lambda V+K(VB)KV=HKV=HF. \ \square$$
Remark 2
Because we are focusing on the partial assignment problem, the eigenvalues of the matrix $H$ should be desirable; that is, the matrix $H$ should be Hurwitz. The undesirable eigenvalues $\lambda_{1},\ldots,\lambda_{m}$ of $A$ must all be different from those of $H$, so the assumption $\{\mu_{1},\ldots,\mu_{m}\}\cap\{\lambda_{1},\ldots,\lambda_{m}\}=\emptyset$ in Theorem 5 above is necessary.
Example 1
Consider the linear time-invariant multivariable system described by

![](https://www.scirp.org/html/htmlimages\12-1560054x\0fb6b565-2204-4212-9456-80991a1b5bd1.png)

with

![](https://www.scirp.org/html/htmlimages\12-1560054x\d00b5f7a-ec84-4cba-aead-996d627039c4.png)

The control vector is submitted to the constraint $u(t)\in\Omega$ such that

![](https://www.scirp.org/html/htmlimages\12-1560054x\d74b07bf-af5c-4ebb-94a8-9c17ec7035ba.png)

The spectrum of $A$ contains one desirable eigenvalue and the pair $(A,B)$ is stabilizable. We first compute the matrix

![](https://www.scirp.org/html/htmlimages\12-1560054x\33d039ac-16e9-4d32-b3e8-c283c0cca839.png)

Note that the rows of $V$ are orthogonal to the eigenvector of $A$ associated to the stable eigenvalue of $A$. We then choose the matrix

![](https://www.scirp.org/html/htmlimages\12-1560054x\2a42e136-55ac-4a42-b26e-e4c45b9d3133.png)

This matrix is not diagonalizable, its eigenvalues are desirable and, besides, inequality (13) is satisfied. We use formula (14) to get

![](https://www.scirp.org/html/htmlimages\12-1560054x\6bbc3cd5-9ccd-441e-ad0b-db2cbdd3b7f6.png)

Then, we solve Sylvester Equation (16) and use the inverse of its solution to get the feedback matrix

![](https://www.scirp.org/html/htmlimages\12-1560054x\d73d8ab4-2ed5-4995-8a94-f55225080340.png)

Finally, the chosen eigenvalue of $H$ is the unique eigenvalue of $A+BF$ that replaces the undesirable eigenvalues of $A$ in the closed loop.
Figure 1 shows that, starting from two different and admissible controls, the corresponding trajectories, in the control space, converge to the origin without saturation.
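To make the whole procedure of Theorem 5 concrete, here is a sketch on assumed data (not the matrices of Example 1, which survive only as images): $A$ has undesirable eigenvalues $\{1,2\}$ and a desirable eigenvalue $-3$, $V$ spans the orthogonal of the stable subspace, $\Lambda$ comes from (14), Equation (16) is solved, and $F=X^{-1}V$:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Assumed data: undesirable eigenvalues {1, 2}, desirable {-3}.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, -3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
V = np.array([[1.0, 0.0, 0.0],    # rows span (Ker Q(A))^perp
              [0.0, 1.0, 0.0]])

Lam = V @ A @ V.T @ np.linalg.inv(V @ V.T)   # formula (14)

# Chosen H: spectrum {-1, -2}, disjoint from {1, 2}.
H = np.array([[-1.0, 1.0],
              [0.0, -2.0]])

# Equation (16): X H - Lam X = V B, i.e. (-Lam) X + X H = V B.
X = solve_sylvester(-Lam, H, V @ B)
F = np.linalg.inv(X) @ V                      # F = X^{-1} V

print(sorted(np.linalg.eigvals(A + B @ F).real))  # {-1,-2} replace {1,2}; -3 kept
```

The closed-loop spectrum confirms the partial assignment: the stable eigenvalue of $A$ is untouched while the unstable ones are replaced by the spectrum of $H$.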
4. Particular Cases
4.1. Case of Single Input Linear Systems
We discuss the particular case of single-input linear systems, that is, when $m=1$. As described in the general case, we start by computing the matrix $V$, which is, in case of $m=1$, a row vector that is orthogonal to the stable subspace $\operatorname{Ker}Q(A)$ and is easy to get. The matrix $\Lambda$ is then just the real number $\lambda_{1}$, the unique undesirable eigenvalue of $A$. If we choose $H$ to be a negative real number $h$, Theorem 5 ensures that all matrices $F$ (row vectors) are of the form $KV$, where $K=X^{-1}$ and $X$ is a nonzero real solution to the simple "Sylvester equation"
$$Xh-\lambda_{1}X=VB.$$
This equation has a nonzero solution if, and only if, the real number $VB$ is nonzero. That is,
$$VB\neq 0. \qquad (17)$$

![](https://www.scirp.org/html/htmlimages\12-1560054x\9616965c-7544-408b-9fc4-56e5b5b3e6f7.png)

Figure 1. Trajectories of the system from two different initial controls in the control space.
The formula $VA=\lambda_{1}V$ shows that $V$ is a left eigenvector of $A$ associated to the undesirable eigenvalue $\lambda_{1}$, and then $V(A-\lambda_{1}I_{n})=0$. But the pair $(A,B)$ is stabilizable and $\operatorname{Re}(\lambda_{1})\geq 0$, so
$$\operatorname{rank}\begin{pmatrix}A-\lambda_{1}I_{n} & B\end{pmatrix}=n.$$
If we suppose that $VB=0$, then we should have
$$V\begin{pmatrix}A-\lambda_{1}I_{n} & B\end{pmatrix}=0$$
and then, the matrix $\begin{pmatrix}A-\lambda_{1}I_{n} & B\end{pmatrix}$ being of full row rank $n$, $V=0$, which is not true.
As a consequence, we have the following result.
Proposition 3
For a given left eigenvector $V$ of $A$ associated to the unique undesirable eigenvalue $\lambda_{1}$ and for a negative real number $h$, we have $VB\neq 0$ and the matrix $F$ is given by
$$F=\frac{h-\lambda_{1}}{VB}\,V. \qquad (18)$$
The spectrum of $A+BF$ is then $\{h,\lambda_{2},\ldots,\lambda_{n}\}$.
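Formula (18) can be checked directly on a small assumed system (ours, not from the paper): $\lambda_{1}=1$ is the unstable eigenvalue, $V=(1,0)$ is a left eigenvector, and $h=-1$:

```python
import numpy as np

# Assumed single-input system: eigenvalues {1, -2}, so lambda_1 = 1.
A = np.array([[1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
V = np.array([[1.0, 0.0]])    # left eigenvector of A for lambda_1 = 1
lam1, h = 1.0, -1.0

VB = (V @ B).item()           # nonzero, by stabilizability
F = ((h - lam1) / VB) * V     # formula (18)

print(F)                                            # the gain (= [[-2, 0]] here)
print(sorted(np.linalg.eigvals(A + B @ F).real))    # h replaces lambda_1; -2 is kept
```
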
4.2. Case Where the Matrix VB Is Nonsingular
We have seen in the last paragraph that, when $m=1$, a necessary and sufficient condition for Sylvester Equation (16) to have a nonsingular solution (a nonzero real number, in fact) is that the real number $VB$ is nonzero. We have also seen that this last condition, $VB\neq 0$, is equivalent to the fact that the pair $(A,B)$ is stabilizable. In case of $m>1$, it may happen that the pair $(A,B)$ is stabilizable but the matrix $VB$ is singular, as the following example shows.
Example 2
Consider System (1) with

![](https://www.scirp.org/html/htmlimages\12-1560054x\ce61165e-248e-4778-96a6-a21e4a41c234.png)

The last desirable eigenvalue of $A$ is associated to an eigenvector of $A$ that spans the subspace $\operatorname{Ker}Q(A)$. The matrix $V$ is then

![](https://www.scirp.org/html/htmlimages\12-1560054x\bf5b8143-0f54-4ad2-ba3f-50b2c7be2f3f.png)

The matrix $VB$ is singular since it is

![](https://www.scirp.org/html/htmlimages\12-1560054x\83c721df-a973-47eb-b9c4-64d9fb1f760b.png)

The controllability matrix is given by

![](https://www.scirp.org/html/htmlimages\12-1560054x\83ee4cd9-b925-4e9a-9019-ec6aca750cb7.png)

and is of full rank 3. This shows that the pair $(A,B)$ is stabilizable, since it is even controllable.
The case where the matrix $VB$ is nonsingular can be seen as a generalization of the case $m=1$. So, it deserves the special study that is the aim of the remainder of this paragraph. In the following theorem, we give a necessary and sufficient condition for the matrix $VB$ to be nonsingular.
Theorem 6
The matrix $VB$ is nonsingular if, and only if, $\operatorname{Im}(B)$ and $\operatorname{Ker}Q(A)$ are complementary subspaces of $\mathbb{R}^{n}$; that is, $\mathbb{R}^{n}=\operatorname{Im}(B)\oplus\operatorname{Ker}Q(A)$.
Proof.
The if part: Since $VB$ is nonsingular, the matrix $B$ is of full rank $m$. To complete the proof of this part, we shall show that $\operatorname{Im}(B)\cap\operatorname{Ker}Q(A)=\{0\}$, since the dimension of the subspace $\operatorname{Ker}Q(A)$ is $n-m$. Recall that $VX=0$ for any vector $X$ in $\operatorname{Ker}Q(A)$. If now $X$ is both in $\operatorname{Ker}Q(A)$ and $\operatorname{Im}(B)$, then there is $U\in\mathbb{R}^{m}$ such that $X=BU$ and then
$$0=VX=VBU.$$
This shows that $U=0$, since $VB$ is nonsingular, and then $X=0$.
The only if part: Recall that $VX=0$ for any vector $X$ in $\operatorname{Ker}Q(A)$ and that $V$ is of full rank $m$. From the fact that $\mathbb{R}^{n}=\operatorname{Im}(B)\oplus\operatorname{Ker}Q(A)$, we get
$$\mathbb{R}^{m}=V\big(\mathbb{R}^{n}\big)=V\big(\operatorname{Im}(B)\big)=\operatorname{Im}(VB),$$
since $V$ is linear. This shows that the rank of $VB$ is $m$ and that $VB$ is nonsingular. □

The following theorem gives another method to get a partial pole assignment under the assumption that $VB$ is nonsingular.
is nonsingular.
Theorem 7
Suppose that the matrix $VB$ is nonsingular and let the matrix $H$ be a scalar matrix $hI_{m}$ such that $h$ is a negative real number and $hI_{m}-\Lambda$ is nonsingular. Then,
$$\sigma(A+BF)=\{\lambda_{m+1},\ldots,\lambda_{n}\}\cup\{h\},$$
where $F=KV$ and $K=(VB)^{-1}(hI_{m}-\Lambda)$.
Proof.
Let $K=(VB)^{-1}(hI_{m}-\Lambda)$ and $X=K^{-1}=(hI_{m}-\Lambda)^{-1}VB$. Then,
$$XH-\Lambda X=h\big(hI_{m}-\Lambda\big)^{-1}VB-\Lambda\big(hI_{m}-\Lambda\big)^{-1}VB=\big(hI_{m}-\Lambda\big)\big(hI_{m}-\Lambda\big)^{-1}VB=VB.$$
This shows that $X$ is a nonsingular solution to Sylvester Equation (16) and, by Theorem 5, that the matrix $F=KV$ satisfies $\sigma(A+BF)=\{\lambda_{m+1},\ldots,\lambda_{n}\}\cup\{h\}$. □
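Under this scalar-matrix reading of Theorem 7, the gain needs no Sylvester solver at all. A sketch on assumed data (ours) in which $VB$ happens to be the identity:

```python
import numpy as np

# Assumed data: undesirable eigenvalues {1, 2}, desirable {-3};
# V B equals the identity, hence it is nonsingular.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, -3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

Lam = V @ A @ V.T @ np.linalg.inv(V @ V.T)   # formula (14)
h = -1.0                                     # H = h * I_2

# K = (VB)^{-1} (h I_m - Lambda); then F = K V.
K = np.linalg.inv(V @ B) @ (h * np.eye(2) - Lam)
F = K @ V

print(sorted(np.linalg.eigvals(A + B @ F).real))   # h (with multiplicity m) and -3
```
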
Example 3
Consider System (1) with

![](https://www.scirp.org/html/htmlimages\12-1560054x\c636764e-e710-47e6-af4d-985e7c6f5611.png)

The last desirable eigenvalue of $A$ is associated to an eigenvector of $A$ that spans the subspace $\operatorname{Ker}Q(A)$. The matrix $V$ is then

![](https://www.scirp.org/html/htmlimages\12-1560054x\1c1b2178-1adb-43e3-ad62-e7e227b74a2d.png)

The matrix $VB$ is nonsingular and the matrix $\Lambda$ is given by

![](https://www.scirp.org/html/htmlimages\12-1560054x\16e4a20c-2b69-40ac-a0c1-84a7495fb872.png)

Now choose desirable eigenvalues for the matrix $H$ as in Theorem 7. Then,

![](https://www.scirp.org/html/htmlimages\12-1560054x\2ef94f6e-0198-4fd0-8b5b-ca060a7572a9.png)
5. A Total Eigenstructure Problem
Note that, when $m$ and $n$ are equal, the matrix $V$ described in the last section is now any invertible matrix of $\mathbb{R}^{n\times n}$. That is why we suppose $V=I_{n}$. Then the matrix $\Lambda$ is simply $A$, and Sylvester Equation (16) becomes
$$XH-AX=B, \qquad (19)$$
where $H$ is a real $n\times n$ matrix with desirable spectrum. So, if the unique solution $X$ of Equation (19) is invertible, then the spectrum of $A+BF$ and the one of $H$ are equal, where the matrix $F$ is $X^{-1}$. Suppose now that $m<n$. The technique of augmentation (see [5] and [6]) consists of augmenting the matrix $B$ by adding $n-m$ zero columns in order to get a new square matrix $B_{1}\in\mathbb{R}^{n\times n}$. One should also complete the control vector $u(t)$ by $n-m$ fictive real numbers to get $u_{1}(t)\in\mathbb{R}^{n}$. We also replace the assumption "the pair $(A,B)$ is stabilizable" by "the pair $(A,B)$ is controllable." System (1), which becomes
$$\dot{x}(t)=Ax(t)+B_{1}u_{1}(t), \qquad (20)$$
does not change. The matrix $H$ is now a real $n\times n$ matrix with some desirable spectrum $\{\mu_{1},\ldots,\mu_{n}\}$ which does not contain any eigenvalue of the matrix $A$. If the unique solution $X$ of the Sylvester equation
$$XH-AX=B_{1} \qquad (21)$$
is invertible, we let $F_{1}=X^{-1}$. The matrices $A+B_{1}F_{1}$ and $H$ have the same spectrum $\{\mu_{1},\ldots,\mu_{n}\}$. If $F$ is the matrix of the first $m$ rows of $F_{1}$, then $B_{1}F_{1}=BF$ and the feedback matrix $F$ assigns the spectrum of $H$ to $A+BF$.
Example 4
Consider the problem of stabilization of the following double integrator system
$$\dot{x}(t)=Ax(t)+Bu(t) \qquad (22)$$
with

![](https://www.scirp.org/html/htmlimages\12-1560054x\544ad44c-94b6-482e-bcb3-acd1ea3b0d8b.png)

subject to a constraint on the control of the form $-\varepsilon_{2}\leq u(t)\leq\varepsilon_{1}$. The matrix $B$ and the vector $u(t)$ are respectively augmented to

![](https://www.scirp.org/html/htmlimages\12-1560054x\f21a9c4c-5ae6-4f91-9ef6-df574ed9f190.png)

such that the augmented control $u_{1}(t)$ is submitted to constraints of the same form, where the bounds on the added component are fictive positive real numbers that we fix, for example, to get the domain $D$ as in Example 1. We choose the matrix

![](https://www.scirp.org/html/htmlimages\12-1560054x\8348a41d-528b-4c93-9d0c-130ebd02bac9.png)

for which the eigenvalues are desirable and the inequality (13) is satisfied. Then, we solve the Sylvester equation (21) and get

![](https://www.scirp.org/html/htmlimages\12-1560054x\c45d389f-b9f4-4834-816f-4898f365249c.png)

The feedback matrix $F$, which represents the first row of $X^{-1}$, is then

![](https://www.scirp.org/html/htmlimages\12-1560054x\13a19e31-d556-4a3b-86f2-2308ed89f3d1.png)

The spectrum of $A+BF$ is then the spectrum of $H$.
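Since the numerical values of Example 4 survive only as images, here is a self-contained sketch of the augmentation technique on the double integrator, with an assumed non-diagonalizable $H$ of spectrum $\{-1,-1\}$ (a diagonal $H$ would make $X$ singular here, because $B_{1}$ has a zero column):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double integrator
B1 = np.array([[0.0, 0.0],
               [1.0, 0.0]])       # B = (0, 1)^T augmented by a zero column

# Assumed H: spectrum {-1, -1}, not diagonalizable.
H = np.array([[-1.0, 1.0],
              [0.0, -1.0]])

# Equation (21): X H - A X = B1, i.e. (-A) X + X H = B1.
X = solve_sylvester(-A, H, B1)
F1 = np.linalg.inv(X)
F = F1[:1, :]                     # first row: the actual feedback gain

print(F)                                            # the gain (= [[-1, -2]] here)
print(sorted(np.linalg.eigvals(A + B1 @ F1).real))  # spectrum of H, i.e. {-1, -1}
```

Note that $A+B_{1}F_{1}=A+BF$, so checking the augmented closed loop is the same as checking the actual one.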
As Figure 1 did in the control space, Figure 2 plots two trajectories in the state space starting from two different and admissible initial states and, thanks to the asymptotic stability of the system and the positive invariance property, shows the convergence of the state to the origin without leaving the domain imposed by the constraints.
6. Conclusion
A method for the partial or total eigenstructure assignment problem was presented, together with examples illustrating it. The method uses a Sylvester equation to find the feedback matrix $F$ when some matrix $H$ is given with a desirable spectrum that replaces, in the closed loop, all the undesirable eigenvalues of the initial matrix $A$ of System (1). This method generalizes the one proposed in [10] without additional conditions on System (1) and allows us to deal with the problem of asymmetrical constraints on the control vector.