Numerical Computation of Structured Singular Values for Companion Matrices

In this article, the computation of µ-values, known as Structured Singular Values (SSV), for companion matrices is presented, and the resulting lower bounds are compared with those of the well-known MATLAB routine mussv. The Structured Singular Value provides an important tool for the stability and instability analysis of closed-loop time-invariant systems in linear control theory, as well as in structured eigenvalue perturbation theory.


Introduction
The µ-value [1] is an important mathematical tool in control theory; it allows one to address problems arising in the stability analysis and synthesis of control systems, and to quantify the stability of a closed-loop linear time-invariant system subject to structured perturbations. The structures addressed by the SSV are very general and cover all types of parametric uncertainties that can be incorporated into a control system by means of real and complex Linear Fractional Transformations (LFTs). For more detail on the applications of the SSV, see [1]-[7] and the references therein.
The versatility of the SSV comes at the expense of being notoriously hard, in fact NP-hard [8], to compute. The numerical algorithms used in practice provide both upper and lower bounds for the SSV. An upper bound of the SSV provides sufficient conditions to guarantee robust stability of a feedback system, while a lower bound provides sufficient conditions for its instability.
The widely used function mussv, available in the MATLAB Control Toolbox, computes an upper bound of the SSV using diagonal balancing and Linear Matrix Inequality techniques [9] [10]. The lower bound is computed by a generalization of the power method developed in [11] [12].
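For readers unfamiliar with the routine, a minimal call looks as follows; the matrix and the block structure here are illustrative choices, not data from this paper.

```matlab
% Sketch: bounds on the SSV via mussv (Robust Control Toolbox).
% The matrix M and the block structure BLK are illustrative choices.
M = [ 2 -1  0;
      1  3  1;
      0  1 -2 ];
% BLK rows: [r 0] -> r-by-r repeated complex scalar block,
%           [r c] -> r-by-c full complex block.
BLK = [1 0;    % one 1x1 repeated complex scalar block
       2 2];   % one 2x2 full complex block
bounds = mussv(M, BLK);   % bounds(1) = upper bound, bounds(2) = lower bound
fprintf('upper = %.4f, lower = %.4f\n', bounds(1), bounds(2));
```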
In this paper, we present a comparison of numerical results approximating lower bounds of the SSV associated with purely complex uncertainties.
Overview of the article. Section 2 provides the basic framework. In particular, it explains how the computation of the SSV can be addressed by an inner-outer algorithm, where the outer algorithm determines the perturbation level ε and the inner algorithm determines a (local) extremizer of the structured spectral value set. In Section 3, we explain how the inner algorithm works for the case of pure complex structured perturbations. An important characterization of extremizers shows that one can restrict attention to a manifold of structured perturbations with normalized and low-rank blocks. A gradient system for finding extremizers on this manifold is established and analyzed. The outer algorithm is addressed in Section 4, where a fast Newton iteration for determining the correct perturbation level ε is developed. Finally, Section 5 presents a range of numerical experiments comparing the quality of the obtained lower bounds with those of mussv.

Framework
Consider a matrix M ∈ ℂ^{n×n} and an underlying perturbation set with prescribed block diagonal structure,

$$\mathbb{B} = \big\{ \mathrm{diag}(\delta_1 I_{r_1}, \ldots, \delta_S I_{r_S};\ \Delta_1, \ldots, \Delta_F) : \delta_i \in \mathbb{C}\ (\delta_i \in \mathbb{R}),\ \Delta_j \in \mathbb{C}^{m_j \times m_j}\ (\Delta_j \in \mathbb{R}^{m_j \times m_j}) \big\}, \qquad (1)$$

where each scalar δ_i or full block Δ_j may be constrained to stay real in the definition of 𝔹. The integer S denotes the number of repeated scalar blocks (that is, scalar multiples of the identity) and F denotes the number of full blocks. This implies r₁ + ⋯ + r_S + m₁ + ⋯ + m_F = n. In order to distinguish complex and real scalar blocks, assume that the first S′ ≤ S scalar blocks are complex while the (possibly) remaining S − S′ blocks are real. Similarly, assume that the first F′ ≤ F full blocks are complex and the (possibly) remaining F − F′ blocks are real. The literature (see, e.g., [1]) usually does not consider real full blocks, that is, F′ = F. In fact, in control theory, full blocks arise from uncertainties associated with the frequency response of a system, which is complex-valued.
For simplicity, assume that all full blocks are square, although this is not necessary and our method extends to the non-square case in a straightforward way.
Similarly, the chosen ordering of blocks should not be viewed as a limiting assumption; it merely simplifies notation.
The following definition is given in [1]; see also [13]. Here and in Definition 2.1, ‖·‖₂ denotes the matrix 2-norm, I the n × n identity matrix, det(·) the determinant of a matrix, and we use the convention that the minimum over an empty set is +∞.

Definition 2.1. [13]. Let M ∈ ℂ^{n×n} and consider a set 𝔹. Then the SSV (or µ-value) of M with respect to 𝔹 is defined as

$$\mu_{\mathbb{B}}(M) := \Big[ \min \big\{ \|\Delta\|_2 : \Delta \in \mathbb{B},\ \det(I - M\Delta) = 0 \big\} \Big]^{-1}. \qquad (2)$$

In particular, µ_𝔹 is a positively homogeneous function, i.e., µ_𝔹(αM) = α µ_𝔹(M) for any α ≥ 0. For 𝔹 = ℂ^{n×n}, one has µ_𝔹(M) = ‖M‖₂; for general 𝔹 the SSV can only become smaller, which gives the upper bound µ_𝔹(M) ≤ ‖M‖₂. This can be refined further by exploiting the properties of µ_𝔹, see [14]. These relations between µ_𝔹 and ‖M‖₂, the largest singular value of M, justify the name structured singular value. Moreover, for 𝔹 = {δI : δ ∈ ℂ} one obtains µ_𝔹(M) = ρ(M), where ρ(·) denotes the spectral radius of a matrix; thus both the spectral radius and the matrix 2-norm are included as special cases of the SSV.

The important special case when 𝔹 only allows for complex perturbations, that is, S′ = S and F′ = F, deserves particular attention. In this case, for any nonzero eigenvalue λ of MΔ, the matrix λ⁻¹Δ belongs to 𝔹 and satisfies det(I − M(λ⁻¹Δ)) = 0. In turn, there is Δ* ∈ 𝔹 attaining the minimum in Equation (2). This gives the following alternative expression:

$$\mu_{\mathbb{B}}(M) = \max_{\Delta \in \mathbb{B},\ \|\Delta\|_2 \le 1} \rho(M\Delta). \qquad (3)$$
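These special cases provide a quick sanity check for any SSV computation: for every admissible block structure, the value must lie between the spectral radius and the 2-norm. A minimal MATLAB sketch, with an arbitrary test matrix and an illustrative block structure:

```matlab
% Sanity check: rho(M) <= mu_B(M) <= norm(M,2) for any block structure B.
rng(1);                          % reproducible test matrix
n = 4;
M = randn(n) + 1i*randn(n);      % arbitrary complex test matrix
blk = [2 2; 2 2];                % two 2x2 full complex blocks (illustrative)
bounds = mussv(M, blk);          % [upper, lower] bounds on mu
fprintf('rho = %.4f, mu in [%.4f, %.4f], norm = %.4f\n', ...
        max(abs(eig(M))), bounds(2), bounds(1), norm(M,2));
```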

A Reformulation Based on Structured Spectral Value Sets [13]
The structured spectral value set of M with respect to 𝔹 at level ε > 0 is defined as

$$\Lambda_\varepsilon(M) = \big\{ \lambda \in \Lambda(\varepsilon M \Delta) : \Delta \in \mathbb{B},\ \|\Delta\|_2 \le 1 \big\}, \qquad (4)$$

where Λ(·) denotes the spectrum of a matrix. Note that for a purely complex perturbation set 𝔹*, the set Λ_ε(M) contains, with every point λ, the whole circle {λe^{iθ} : θ ∈ [0, 2π)}. This allows us to express the SSV defined in Equation (2) as

$$\mu_{\mathbb{B}}(M) = \frac{1}{\min\{\varepsilon : 1 \in \Lambda_\varepsilon(M)\}},$$

that is, as a structured distance to singularity problem. For a purely complex perturbation set 𝔹*, one can use Equation (3) to alternatively express the SSV as

$$\mu_{\mathbb{B}^*}(M) = \frac{1}{\min\{\varepsilon : \rho_\varepsilon(M) = 1\}}, \qquad \rho_\varepsilon(M) := \max\{|\lambda| : \lambda \in \Lambda_\varepsilon(M)\},$$

and one has Λ_ε(M) ⊂ 𝔻, where 𝔻 denotes the open complex unit disk, if and only if ε µ_𝔹*(M) < 1.
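The max-characterization also yields a simple (generally weak) sampling lower bound: any admissible Δ with ‖Δ‖₂ ≤ 1 gives ρ(MΔ) ≤ µ_𝔹(M). A minimal sketch, assuming one 2 × 2 repeated complex scalar block and one 2 × 2 full complex block (sizes chosen only for illustration):

```matlab
% Crude sampling lower bound: mu >= max over sampled Delta of rho(M*Delta).
rng(2);
M  = randn(4) + 1i*randn(4);               % illustrative test matrix
lb = 0;
for trial = 1:1000
    d  = exp(2i*pi*rand);                  % unimodular repeated scalar
    F  = randn(2) + 1i*randn(2);
    F  = F / norm(F,2);                    % normalized 2x2 full block
    Delta = blkdiag(d*eye(2), F);          % ||Delta||_2 = 1
    lb = max(lb, max(abs(eig(M*Delta))));  % spectral radius of M*Delta
end
fprintf('sampled lower bound for mu: %.4f\n', lb);
```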

Problem under Consideration [13]
Let us consider the minimization problem

$$\varepsilon_\star = \min\{\varepsilon > 0 : \rho_\varepsilon(M) = 1\}.$$

By the discussion above, the SSV µ_𝔹(M) is the reciprocal of the smallest value ε⋆ for which ρ_ε(M) = 1. This suggests a two-level algorithm: in the inner algorithm, we attempt to solve Equation (8), that is, to determine a perturbation maximizing |λ| for fixed ε. In the outer algorithm, we vary ε by an iterative procedure which exploits knowledge of the exact derivative of the extremal eigenvalue along the branch of extremizers Δ(ε). Due to the lack of global optimality criteria for Equation (8), the only way to increase the robustness of the method is to compute several local optima.
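In outline, the two-level method has the structure sketched below. The function innerMaximizer is a hypothetical stand-in for the gradient system of Section 3; the loop bounds and tolerances are illustrative, not the paper's.

```matlab
% Structural sketch of the inner-outer method (all names illustrative).
% innerMaximizer(M, epsl, Delta0) is assumed to return a local maximizer
% Delta, the value r = max |lambda(epsl*M*Delta)|, and dr/d(epsl).
function mu_lower = ssvLowerBound(M, eps0, Delta0, innerMaximizer)
    epsl = eps0;  Delta = Delta0;
    for k = 1:50                               % outer iteration
        [Delta, r, drdeps] = innerMaximizer(M, epsl, Delta);
        if abs(r - 1) < 1e-10, break; end      % |lambda(eps)| = 1 reached
        epsl = epsl - (r - 1) / drdeps;        % Newton step on the scalar equation
    end
    mu_lower = 1 / epsl;                       % SSV lower bound = 1/eps
end
```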
The case of a purely complex perturbation set 𝔹* can be addressed analogously by letting the inner algorithm determine local optima of the corresponding maximization problem, stated in Equation (9).

Pure Complex Perturbations [13]
In this section, we consider the solution of the inner problem in Equation (9) arising in the estimation of µ_𝔹*(M), for a purely complex perturbation set

$$\mathbb{B}^* = \big\{ \mathrm{diag}(\delta_1 I_{r_1}, \ldots, \delta_S I_{r_S};\ \Delta_1, \ldots, \Delta_F) : \delta_i \in \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j \times m_j} \big\}.$$

Extremizers
We now make use of the following standard eigenvalue perturbation result, see, e.g., [15]. Here and in the following, a dot denotes differentiation with respect to t.

Lemma 3.1. [13]. Consider a smooth matrix family C(t) with C(0) = C₀, and let λ(t) be a simple eigenvalue of C(t) with λ(0) = λ₀. If x and y are right and left eigenvectors of C₀ associated to λ₀, that is, C₀x = λ₀x and y*C₀ = λ₀y*, with the right and left eigenvectors x and y scaled such that y*x ≠ 0, then

$$\dot{\lambda}(0) = \frac{y^* \dot{C}(0)\, x}{y^* x}.$$

In the following, we call λ a largest eigenvalue if |λ| equals the spectral radius.

Definition 3.2. [13]. A matrix Δ ∈ 𝔹* with ‖Δ‖₂ ≤ 1 such that εMΔ has a largest eigenvalue that locally maximizes the modulus over Λ_ε(M) is called a local extremizer.

Our goal is to solve the maximization problem in Equation (9), which requires finding a perturbation Δ_opt realizing such a local maximum. The following result provides an important characterization of local extremizers.
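Lemma 3.1 is easy to verify numerically; the following sketch compares the formula with a finite difference along the family C(t) = C₀ + tC₁ for random data.

```matlab
% Numerical check of Lemma 3.1: d(lambda)/dt = y'*C1*x / (y'*x) at t = 0.
rng(3);
n  = 5;
C0 = randn(n) + 1i*randn(n);  C1 = randn(n) + 1i*randn(n);
[V,D,W]  = eig(C0);                    % right (V) and left (W) eigenvectors
[~,kmax] = max(abs(diag(D)));          % pick the largest eigenvalue
x = V(:,kmax);  y = W(:,kmax);  lam0 = D(kmax,kmax);
deriv = (y'*C1*x) / (y'*x);            % formula from Lemma 3.1
t = 1e-6;                              % finite-difference step
lam_t = eig(C0 + t*C1);
[~,k] = min(abs(lam_t - lam0));        % follow the same eigenvalue
fd = (lam_t(k) - lam0) / t;
fprintf('formula: %.6g%+.6gi, FD: %.6g%+.6gi\n', ...
        real(deriv), imag(deriv), real(fd), imag(fd));
```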
Theorem 3.3. [13]. Let Δ_opt = diag(δ₁I_{r₁}, …, δ_S I_{r_S}; Δ₁, …, Δ_F) be a local extremizer with ‖Δ_opt‖₂ = 1, let λ be a largest eigenvalue of εMΔ_opt with right and left eigenvectors x and y, and set z = M*y. Partition x and z such that the size of the components x_k, z_k equals the size of the kth block in Δ_opt, and additionally assume the nondegeneracy conditions z_k*x_k ≠ 0 for the scalar blocks (Equation (12)) and ‖z_{S+h}‖·‖x_{S+h}‖ ≠ 0 for the full blocks (Equation (13)). Then all blocks of Δ_opt have unit norm, that is, |δ_k| = 1 and ‖Δ_h‖₂ = 1.

The following theorem allows us to replace full blocks in a local extremizer by rank-1 matrices.
Theorem 3.4. [13]. Let Δ_opt be a local extremizer and let x, z, λ be defined and partitioned as in Theorem 3.3. Assuming that Equation (13) holds, every full block Δ_h has a singular value 1 with associated singular vectors aligned, up to a unimodular factor γ_h with |γ_h| = 1, with the normalized vectors z_{S+h}/‖z_{S+h}‖ and x_{S+h}/‖x_{S+h}‖. Moreover, the matrix obtained from Δ_opt by replacing every full block with the corresponding rank-1 matrix built from these singular vectors is also a local extremizer.

Remark 3.1. [13]. Theorem 3.4 allows us to restrict the perturbations in the structured spectral value set of Equation (4) to those with rank-1 full blocks, which was also shown in [1]. Since the Frobenius norm and the matrix 2-norm of a rank-1 matrix are equal, one can equivalently search for extremizers within the submanifold

$$\mathbb{B}^*_1 = \big\{ \mathrm{diag}(\delta_1 I_{r_1}, \ldots, \delta_S I_{r_S};\ \Delta_1, \ldots, \Delta_F) \in \mathbb{B}^* : |\delta_i| = 1,\ \|\Delta_j\|_F = 1 \big\}.$$

A System of ODEs to Compute Extremal Points of Λ_ε(M)
In order to compute a local maximizer of |λ| for λ ∈ Λ_ε(M), we construct a smooth path of perturbations Δ(t) ∈ 𝔹*_1 along which |λ(t)| increases. In the following, I_𝔹* denotes the pattern matrix diag(I_{r₁}, …, I_{r_S}; 𝟙_{m₁}, …, 𝟙_{m_F}), where 𝟙_d denotes the d × d matrix of all ones.
Lemma 3.5. [13]. For C ∈ ℂ^{n×n}, consider the block diagonal matrix obtained by entrywise multiplication of C with the pattern matrix I_𝔹* defined in Equation (20). The orthogonal projection P_𝔹*(C) of C onto 𝔹* with respect to the Frobenius inner product is obtained from this matrix by additionally replacing each repeated scalar block by its averaged trace times the identity.

The local optimization problem. [13]. Let us recall the setting from Section 2. As a consequence of Lemma 3.1, see also Equation (15), we have

$$\frac{d}{dt}|\lambda(t)| = \frac{\varepsilon}{|\lambda|}\, \mathrm{Re}\!\left( \bar{\lambda}\, \frac{z^* \dot{\Delta}\, x}{y^* x} \right), \qquad z = M^* y,$$

where the dependence on t is intentionally omitted. Letting Δ̇ = Z as in Equation (18), we now aim at determining a direction Z that locally maximizes the increase of the modulus of λ. This amounts to the optimization problem in Equation (25), whose target function follows from Equation (23), while the constraints in Equation (24) and Equation (25) ensure that Z is in the tangent space of 𝔹*_1 at Δ; in particular, Equation (25) implies that the norms of the blocks of Δ are conserved. Note that Equation (25) only becomes well-posed after imposing an additional normalization on the norm of Z. The scaling chosen in the following lemma aims at keeping Δ(t) ∈ 𝔹*_1.

Lemma 3.6. [13]. With the notation introduced above and x, z partitioned as in Equation (11), a solution of the optimization problem in Equation (25) is given blockwise, as in Equation (26), by suitably projected and scaled multiples of z_i*x_i for the scalar blocks and of z_{S+j}x_{S+j}* for the full blocks. Here ν_i > 0 is the reciprocal of the absolute value of the right-hand side in Equation (27), if this is different from zero, and ν_i = 1 otherwise; similarly, ζ_j > 0 is the reciprocal of the Frobenius norm of the matrix on the right-hand side in Equation (27), if this is different from zero, and ζ_j = 1 otherwise. If all right-hand sides are different from zero, then the resulting direction is tangent to 𝔹*_1 at Δ.

Corollary 3.7. [13]. The result of Lemma 3.6 can be expressed compactly in terms of the orthogonal projection of zx* onto the block structure; this statement is an immediate consequence of Lemma 3.5.

The system of ODEs. Lemma 3.6 and Corollary 3.7 suggest to consider the differential equation Δ̇(t) = Z(Δ(t)) on the manifold 𝔹*_1 (Equation (29)), where Z(Δ) denotes the optimal direction of Lemma 3.6 and x(t) is an eigenvector, of unit norm, associated to a simple largest eigenvalue λ(t) of εMΔ(t); blockwise, Equation (29) is equivalent to the system of differential equations in Equation (30). The differential Equation (30) is a gradient system because, by definition, its right-hand side is the projected gradient of |λ(t)|. The following result follows directly from Lemmas 3.1 and 3.6.
Theorem 3.8. [13]. Let Δ(t) be a solution of Equation (30) and let λ(t) be a largest simple eigenvalue of εMΔ(t). Then |λ(t)| is nondecreasing along solutions of Equation (30).

The following lemma establishes a useful property for the analysis of stationary points of Equation (30).

Lemma 3.9. [13].

Remark 3.3. The choice of ν_i, ζ_j originating from Lemma 3.6, made to achieve unit norm of all blocks in Equation (29), is completely arbitrary. Other choices would also be acceptable, and investigating an optimal one in terms of speed of convergence to stationary points would be an interesting issue.
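For intuition, the sketch below performs such a gradient ascent with explicit Euler steps for the simplest structure, a single full complex block (so 𝔹* = ℂ^{n×n} and the block projection is trivial); the data, the step size, and the stopping rule are illustrative and not taken from [13].

```matlab
% Gradient ascent on |lambda(eps*M*Delta)| over ||Delta||_F = 1 for the
% simplest structure: a single full complex block (B* = C^{n x n}).
rng(4);
n = 4;  epsl = 0.5;  h = 0.1;                 % illustrative data and step size
M = randn(n) + 1i*randn(n);
Delta = randn(n) + 1i*randn(n);
Delta = Delta / norm(Delta,'fro');
for k = 1:500
    [V,D,W]  = eig(epsl*M*Delta);
    [~,imax] = max(abs(diag(D)));
    x = V(:,imax);  y = W(:,imax);  lam = D(imax,imax);
    a = conj(lam)/abs(lam) / (y'*x);          % scalar factor from d|lambda|/dt
    G = conj(a) * (M'*y) * x';                % free gradient of |lambda| w.r.t. Delta
    Z = G - real(trace(Delta'*G)) * Delta;    % project onto tangent of unit sphere
    if norm(Z,'fro') < 1e-8, break; end       % (approximate) stationary point
    Delta = Delta + h * Z / norm(Z,'fro');    % Euler step along ascent direction
    Delta = Delta / norm(Delta,'fro');        % retract to unit Frobenius norm
end
fprintf('local max of rho(eps*M*Delta): %.4f\n', max(abs(eig(epsl*M*Delta))));
```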
A characterization of the stationary points of Equation (30) is given in [13].

Projection of Full Blocks on Rank-1 Manifolds [13]
In order to exploit the rank-1 property of extremizers established in Theorem 3.4, one can proceed in complete analogy to [16] and obtain, for each full block, an ODE on the manifold of (complex) rank-1 matrices. Express each full block and its derivative as

$$\Delta_j = \sigma_j p_j q_j^*, \qquad \dot{\Delta}_j = \dot{\sigma}_j p_j q_j^* + \sigma_j \dot{p}_j q_j^* + \sigma_j p_j \dot{q}_j^*,$$

where σ_j ∈ ℂ and p_j, q_j ∈ ℂ^{m_j} have unit norm. The derivatives σ̇_j, ṗ_j, q̇_j are uniquely determined by σ_j, p_j, q_j and Δ̇_j when imposing the orthogonality conditions p_j*ṗ_j = 0 and q_j*q̇_j = 0.
In the differential Equation (26), one replaces the right-hand side by its orthogonal projection onto the tangent space of the rank-1 manifold at Δ_j, which yields a coupled system of ODEs for σ_j, p_j, and q_j. The derivation of this system of ODEs is straightforward; the interested reader is referred to [17] for details.
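The key ingredient is the orthogonal projector onto the tangent space of the rank-1 manifold at Δ_j = σ_j p_j q_j*, which for unit-norm p and q reads P_T(Z) = Zqq* + pp*Z − pp*Zqq*. This projector formula is standard background rather than a quotation from [13]; the sketch below checks numerically that it is idempotent and fixes tangent directions.

```matlab
% Orthogonal projection onto the tangent space of the rank-1 manifold
% at Delta = sigma*p*q', with unit-norm p and q.
rng(5);
m = 5;
p = randn(m,1) + 1i*randn(m,1);  p = p / norm(p);     % unit vector p
q = randn(m,1) + 1i*randn(m,1);  q = q / norm(q);     % unit vector q
PT = @(Z) Z*(q*q') + (p*p')*Z - (p*p')*Z*(q*q');      % tangent projector
Z  = randn(m) + 1i*randn(m);                          % arbitrary direction
fprintf('idempotency error: %.2e\n', norm(PT(PT(Z)) - PT(Z),'fro'));
T  = (randn(m,1)+1i*randn(m,1))*q' ...
   + p*(randn(1,m)+1i*randn(1,m));                    % tangent vector u*q' + p*v'
fprintf('tangent invariance error: %.2e\n', norm(PT(T) - T,'fro'));
```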
The monotonicity and the characterization of stationary points follow analogously to those obtained for Equation (33); we refer to [16] for the proofs.
As a consequence, one can use the ODE in Equation (30) instead of Equation (26) and gain in terms of computational complexity.

Choice of Initial Value Matrix and ε₀ [13]
In our two-level algorithm for determining ε, we use the perturbation Δ obtained for the previous value of ε as the initial value matrix for the system of ODEs. However, it remains to discuss a suitable choice of the initial values Δ(0) = Δ₀ and ε₀ in the very beginning of the algorithm.
For the moment, let us assume that M is invertible, and write the singularity condition in terms of a matrix-valued function G(τ), which we aim to bring as close as possible to singularity. To determine Δ₀, one can perform an asymptotic analysis around ε₀ ≈ 0. For this purpose, let χ(τ) denote an eigenvalue of G(τ) with smallest modulus, and let x and y denote the corresponding right and left eigenvectors, scaled accordingly.


In order to achieve the locally maximal decrease of |χ(τ)|, the initial value matrix Δ₀ is chosen as a scaled orthogonal projection of xy* onto the perturbation set, where the positive diagonal matrix D appearing in the scaling is chosen such that Δ₀ ∈ 𝔹*_1. This is always possible under the genericity assumptions (12) and (13). The orthogonal projector onto 𝔹 can be expressed in analogy to Equation (21) for P_𝔹*, with the notable difference that real parts are taken in the blocks constrained to be real. Note that there is no need to form M⁻¹; x and y can be obtained as the eigenvectors associated to a largest eigenvalue of M. However, attention needs to be paid to the scaling of x and y. A possible choice for ε₀ is obtained by solving a simple linear equation, resulting from the first-order expansion of the eigenvalue χ at τ = 0. This can be improved in a simple way by computing this expression for ε₀ for several eigenvalues of M (say, the m largest ones) and taking the smallest computed ε₀. For a sparse matrix M, the MATLAB function eigs (an interface to ARPACK, which implements the implicitly restarted Arnoldi method [18] [19]) allows one to efficiently compute a predefined number m of Ritz values. Another possible, very natural choice for ε₀ is given by ε₀ = 1/µ_upper(M), where µ_upper(M) is the upper bound for the SSV computed by the MATLAB function mussv.
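Both initializations take only a few lines. In the sketch below the matrix, the block structure, and the use of 1/|λ| as a stand-in for the paper's first-order expression for ε₀ are assumptions made for illustration; µ_upper is read from the first component of the mussv output.

```matlab
% Two illustrative initializations for eps0.
rng(6);
p   = [1, randn(1,20)];             % random monic polynomial (illustrative)
M   = compan(p);                    % 20x20 companion matrix
blk = [ones(20,1), ones(20,1)];     % twenty 1x1 full complex blocks (illustrative)
% (a) from the m largest eigenvalues of M (eigs is suited to large sparse M):
m      = 3;
lams   = eigs(sparse(M), m, 'LM');  % m Ritz values of largest modulus
eps0_a = min(1 ./ abs(lams));       % smallest value over the m eigenvalues
% (b) from the mussv upper bound:
bounds = mussv(M, blk);
eps0_b = 1 / bounds(1);             % eps0 = 1 / mu_upper
fprintf('eps0 (eigenvalues): %.4f, eps0 (mussv): %.4f\n', eps0_a, eps0_b);
```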

Fast Approximation of µ_𝔹(M)
This section discusses the outer algorithm for computing a lower bound of µ_𝔹(M). Since the principles are the same, we treat the case of purely complex perturbations in detail and provide a briefer discussion of the extension to the case of mixed complex/real perturbations.

Purely Complex Perturbations
In the following, let λ(ε) denote a continuous branch of (local) maximizers of |λ|, obtained as stationary points Δ(ε) of the system of ODEs in Equation (30). The computation of the SSV is equivalent to computing the smallest solution ε of the equation |λ(ε)| = 1. In order to approximate this solution, we aim at computing ε̂ such that the ε̂-spectral value set is locally contained in the closed unit disk with its boundary touching the unit circle.

Assumption 4.1. [13]. For the considered branch λ(ε), assume that λ(ε) is simple and that Δ(·) and λ(·) are smooth in a neighborhood of ε.

The following theorem gives an explicit and easily computable expression for the derivative of |λ(ε)|.

Theorem 4.1. [13]. Suppose that Assumption 4.1 holds for λ(ε). Let x(ε) and y(ε) be the corresponding right and left eigenvectors of εMΔ(ε), scaled according to Equation (22). Consider the partitioning of x(ε) and z(ε) = M*y(ε) as in Equation (11), and suppose that Assumptions (12) and (13) hold. Then d|λ(ε)|/dε admits an explicit expression in terms of these quantities, and this derivative is strictly positive.
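With φ(ε) := |λ(ε)| − 1 and the derivative supplied by Theorem 4.1, the outer algorithm is the generic Newton iteration

$$\varepsilon_{k+1} = \varepsilon_k - \frac{|\lambda(\varepsilon_k)| - 1}{\tfrac{d}{d\varepsilon}|\lambda(\varepsilon)|\big|_{\varepsilon = \varepsilon_k}}, \qquad \mu_{\mathbb{B}}(M) \approx \frac{1}{\varepsilon_\star},$$

where ε⋆ denotes the computed root. Since the derivative is strictly positive, the iteration is well defined near the root, and quadratic convergence can be expected for simple roots.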

Numerical Experimentation
This section provides a comparison of the numerical results for lower bounds of the SSV computed by the well-known MATLAB function mussv and by the algorithm [13], for companion matrices of different dimensions.
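The test matrices are companion matrices, which MATLAB constructs directly from the coefficients of a monic polynomial; for instance (coefficients chosen for illustration, not those of the paper's examples):

```matlab
% A companion matrix from polynomial coefficients (illustrative values):
% p(x) = x^4 + 2x^3 + 3x^2 + 4x + 5.
p = [1 2 3 4 5];
M = compan(p)   % 4x4 companion matrix; eig(M) are the roots of p
```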
In each of the following tables, we present the comparison of the bounds of the SSV computed by mussv and by the algorithm [13] for the companion matrix M given below. The first column shows the dimension of the matrix M; the second column shows the set of block diagonal matrices, denoted by BLK; the third, fourth, and fifth columns show the upper bound and the lower bound computed by mussv, and the lower bound µ_New computed by the algorithm [13], respectively.

Example 2. Consider a four-dimensional companion matrix M of the form given below. For this example, one obtains the upper bound µ^PD_upper = 1.9221, while the lower bound is µ^PD_lower = 1.9268. Applying the MATLAB routine mussv, one obtains a perturbation Δ̂ with ‖Δ̂‖₂ = 0.5247; using the algorithm [13], one obtains an admissible perturbation Δ* realizing the lower bound µ_New.

Conclusion
In this article, the problem of approximating structured singular values for companion matrices is considered. The obtained results provide a characterization of extremizers and gradient systems, which can be integrated numerically in order to provide approximations from below to the structured singular value of a matrix subject to general pure complex and mixed real and complex block perturbations. The experimental results compare the lower bounds of structured singular values for companion matrices computed by mussv and by the algorithm [13].