Computing Structured Singular Values for Delay and Polynomial Eigenvalue Problems

In this article, the computation of structured singular values (SSV) for delay eigenvalue problems and polynomial eigenvalue problems is presented and investigated. The bounds of the SSV obtained with this approach are compared with those computed by the well-known MATLAB routine mussv.


Introduction
The µ-value [1] is an important mathematical tool in control theory: it allows one to address problems arising in the stability analysis and synthesis of control systems and to quantify the stability of a closed-loop linear time-invariant system subject to structured perturbations. The structure addressed by the SSV is very general and covers all types of parametric uncertainties that can be incorporated into a control system by means of real and complex linear fractional transformations (LFTs). For more detail on applications of the SSV, see [1]-[7] and the references therein.
The versatility of the SSV comes at the expense of being notoriously hard, in fact NP-hard [8], to compute. The numerical algorithms used in practice therefore provide both upper and lower bounds of the SSV. An upper bound of the SSV provides sufficient conditions to guarantee robust stability of a feedback system, while a lower bound provides sufficient conditions for its instability. The widely used function mussv, available in the MATLAB Robust Control Toolbox, computes an upper bound of the SSV using diagonal balancing and linear matrix inequality techniques [9] [10]; the lower bound is computed by a generalization of the power method developed in [11] [12].

In this paper, a comparison of numerical results for approximating the lower bounds of the SSV associated with mixed real and pure complex uncertainties is presented.
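To illustrate how such bounds are queried in practice, the following minimal MATLAB sketch calls mussv (Robust Control Toolbox) on a small matrix; the matrix entries and the block structure BLK (two repeated complex scalar blocks) are illustrative placeholders, not data from this paper.

```matlab
% Minimal sketch: query mussv for upper/lower SSV bounds.
% A and BLK below are illustrative placeholders.
A   = [0.5 1.2; -0.3 0.8];     % hypothetical 2-by-2 test matrix
BLK = [1 0; 1 0];              % two repeated complex scalar blocks
bounds = mussv(A, BLK);        % bounds(1) = upper, bounds(2) = lower
fprintf('upper = %g, lower = %g\n', bounds(1), bounds(2));
```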
Overview of the article. Section 2 provides the basic framework. In particular, it explains how the computation of the SSV can be addressed by an inner-outer algorithm, where the outer algorithm determines the perturbation level $\varepsilon$ and the inner algorithm determines a (local) extremizer of the structured spectral value set. Section 3 explains how the inner algorithm works in the case of purely complex structured perturbations. An important characterization of extremizers shows that one can restrict attention to a manifold of structured perturbations with normalized and low-rank blocks. A gradient system for finding extremizers on this manifold is established and analyzed. The outer algorithm is addressed in Section 4, where a fast Newton iteration for determining the correct perturbation level $\varepsilon$ is developed. Finally, Section 5 presents a range of numerical experiments to compare the quality of the lower bounds to those obtained with mussv.

Framework
Consider a perturbation set with prescribed block-diagonal structure,

$$\mathbb{B} = \left\{ \mathrm{diag}(s_1 I_{r_1}, \ldots, s_N I_{r_N};\ \Delta_1, \ldots, \Delta_F) :\ s_i \in \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j \times m_j} \right\},$$

where $I_{r_i}$ denotes the $r_i \times r_i$ identity matrix. Each of the scalars $s_i$ and the $m_j \times m_j$ matrices $\Delta_j$ may be constrained to stay real in the definition of $\mathbb{B}$.
The integer $N$ denotes the number of repeated scalar blocks (that is, scalar multiples of the identity) and $F$ denotes the number of full blocks. This implies $r_1 + \cdots + r_N + m_1 + \cdots + m_F = n$. In order to distinguish complex and real scalar blocks, assume that the first $N' \le N$ blocks are complex while the (possibly) remaining $N - N'$ blocks are real. Similarly, assume that the first $F' \le F$ full blocks are complex and the remaining $F - F'$ blocks are real. The literature (see, e.g., [1]) usually does not consider real full blocks, that is, $F' = F$. In fact, in control theory, full blocks arise from uncertainties associated with the frequency response of a system, which is complex-valued.
For simplicity, assume that all full blocks are square, although this is not necessary and our method extends to the non-square case in a straightforward way. Similarly, the chosen ordering of blocks should not be viewed as a limiting assumption; it merely simplifies notation.
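As an illustration of this block structure, the following sketch assembles one admissible perturbation with a repeated scalar block of size $r_1 = 2$ and a full block of size $m_1 = 2$; the numerical values of s1 and D1 are arbitrary.

```matlab
% One admissible perturbation: repeated scalar block (r1 = 2) followed
% by a full complex block (m1 = 2); the values are arbitrary.
s1 = 0.3 + 0.4i;                    % complex scalar, repeated r1 times
D1 = randn(2) + 1i*randn(2);        % full complex 2-by-2 block
D1 = D1 / norm(D1);                 % normalize block to unit 2-norm
Delta = blkdiag(s1*eye(2), D1);     % block-diagonal element of B
```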
The following definition is given in [1]. In Definition 2.1, $\det(\cdot)$ denotes the determinant of a matrix, $\|\cdot\|_2$ denotes the matrix 2-norm, $I$ the $n \times n$ identity matrix, and in the following we make use of the convention that the minimum over an empty set is $+\infty$.

Definition 2.1 [13]. Let $A \in \mathbb{C}^{n \times n}$ and consider a perturbation set $\mathbb{B}$. Then the SSV (or µ-value) of $A$ with respect to $\mathbb{B}$ is defined as

$$\mu_{\mathbb{B}}(A) := \left( \min \left\{ \|\Delta\|_2 :\ \Delta \in \mathbb{B},\ \det(I - A\Delta) = 0 \right\} \right)^{-1}.$$

In particular, $\mu_{\mathbb{B}}$ is a positively homogeneous function, i.e., $\mu_{\mathbb{B}}(\alpha A) = \alpha\, \mu_{\mathbb{B}}(A)$ for any $\alpha > 0$.
Replacing $\mathbb{B}$ by a more general set, the minimum in Definition 2.1 can only become smaller; in particular, $\mathbb{B} \subseteq \mathbb{C}^{n \times n}$ gives the upper bound $\mu_{\mathbb{B}}(A) \le \|A\|_2$. This can be refined further by exploiting the properties of $\mu_{\mathbb{B}}$; see [14]. These relations between $\mu_{\mathbb{B}}$, the spectral radius $\rho(\cdot)$ and the 2-norm take the form $\rho(A) \le \mu_{\mathbb{B}}(A) \le \|A\|_2$ whenever $\mathbb{B}$ contains the repeated scalar perturbations $sI$: for any nonzero eigenvalue $\lambda$ of $A$, the matrix $\Delta = \lambda^{-1} I \in \mathbb{B}$ satisfies $\det(I - A\Delta) = 0$, which yields $\mu_{\mathbb{B}}(A) \ge \rho(A)$. In this sense, both the spectral radius and the matrix 2-norm are included as special cases of the SSV.
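The two special cases just mentioned are easy to check numerically: for a single repeated complex scalar block the SSV reduces to the spectral radius, while for a single full complex block it reduces to the 2-norm. The sketch below compares the corresponding mussv bounds on a random test matrix (Robust Control Toolbox required).

```matlab
% Special cases of the SSV:
% one repeated complex scalar block -> mu equals the spectral radius,
% one full complex block            -> mu equals the matrix 2-norm.
n = 4;  A = randn(n) + 1i*randn(n);       % random test matrix
b_scal = mussv(A, [n 0]);                 % repeated scalar block, size n
b_full = mussv(A, [n n]);                 % one full n-by-n complex block
disp([max(abs(eig(A))), b_scal(1);        % rho(A)  vs. mu bound
      norm(A),          b_full(1)])       % ||A||_2 vs. mu bound
```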

A Reformulation Based on Structured Spectral Value Sets [13]
The structured spectral value set of $A$ with respect to $\mathbb{B}$ and a perturbation level $\varepsilon > 0$ is defined as

$$\Lambda_\varepsilon^{\mathbb{B}}(A) = \left\{ \lambda \in \Lambda(\varepsilon A \Delta) :\ \Delta \in \mathbb{B},\ \|\Delta\|_2 \le 1 \right\},$$

where $\Lambda(\cdot)$ denotes the spectrum of a matrix. Note that for a purely complex set $\mathbb{B}^*$, the set $\Lambda_\varepsilon^{\mathbb{B}}(A)$ allows us to express the SSV defined in Equation (2) as a structured distance to singularity problem: $\mu_{\mathbb{B}}(A)$ is the reciprocal of the smallest $\varepsilon$ for which $1 \in \Lambda_\varepsilon^{\mathbb{B}}(A)$. For a purely complex perturbation set $\mathbb{B}^*$, one can use Equation (3) to alternatively express the SSV in terms of the maximal modulus over $\Lambda_\varepsilon^{\mathbb{B}}(A)$, and one has $\Lambda_\varepsilon^{\mathbb{B}}(A) \subset \mathbb{D}$, where $\mathbb{D}$ denotes the open complex unit disk, if and only if $\varepsilon\, \mu_{\mathbb{B}}(A) < 1$.
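A crude way to visualize a structured spectral value set is to sample admissible perturbations at random. The sketch below does this for a single full complex block (the simplest structure), with an arbitrary level eps = 0.75, and plots the sampled eigenvalues against the unit circle.

```matlab
% Monte-Carlo sketch of a structured spectral value set for a single
% full complex block; ep = 0.75 and the matrix A are arbitrary.
n = 2;  A = randn(n) + 1i*randn(n);
ep = 0.75;  K = 2000;
pts = zeros(n*K, 1);
for k = 1:K
    D = randn(n) + 1i*randn(n);
    D = D / norm(D);                       % enforce ||Delta||_2 <= 1
    pts((k-1)*n+1 : k*n) = eig(ep * A * D);
end
t = linspace(0, 2*pi, 200);
plot(real(pts), imag(pts), '.', cos(t), sin(t), 'k-'), axis equal
```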

Problem under Consideration [13]
Let us consider the minimization problem of finding the smallest $\varepsilon$ for which the structured spectral value set $\Lambda_\varepsilon^{\mathbb{B}}(A)$ reaches the unit circle. By the discussion above, the SSV $\mu_{\mathbb{B}}(A)$ is the reciprocal of this smallest value of $\varepsilon$. This suggests a two-level algorithm: in the inner algorithm, we attempt to solve Equation (8); in the outer algorithm, we vary $\varepsilon$ by an iterative procedure which exploits knowledge of the exact derivative of an extremizer $\Delta(\varepsilon)$ with respect to $\varepsilon$. We address Equation (8) by solving a system of ordinary differential equations (ODEs). In general, this only yields a local optimum of Equation (8), which in turn gives an upper bound on the smallest $\varepsilon$ and hence a lower bound for $\mu_{\mathbb{B}}(A)$. Due to the lack of global optimality criteria for Equation (8), the only way to increase the robustness of the method is to compute several local optima.
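The overall scheme can be summarized as follows. In this sketch, innerMaximizer and newtonStep are hypothetical placeholder functions standing in for the ODE-based inner iteration and the Newton-based outer update described in Sections 3 and 4; the initial guesses are crude and purely illustrative.

```matlab
% Schematic two-level method (a sketch, not the authors' code).
% innerMaximizer and newtonStep are hypothetical helper functions.
A  = randn(4) + 1i*randn(4);
ep = 1/norm(A);  Delta = eye(4)/2;           % crude initial guesses
maxit = 50;  tol = 1e-8;
for k = 1:maxit
    [Delta, lam] = innerMaximizer(A, Delta, ep);  % solve Eq. (8)
    if abs(abs(lam) - 1) < tol, break, end        % boundary reached
    ep = newtonStep(ep, lam);                     % adjust level eps
end
lowerBound = 1/ep;            % reciprocal gives a lower bound on mu
```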

Pure Complex Perturbations [13]
In this section, we consider the solution of the inner problem in Equation (9) for the estimation of $\mu_{\mathbb{B}^*}(A)$ with a purely complex perturbation set

$$\mathbb{B}^* = \left\{ \mathrm{diag}(s_1 I_{r_1}, \ldots, s_N I_{r_N};\ \Delta_1, \ldots, \Delta_F) :\ s_i \in \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j \times m_j} \right\}.$$

Extremizers
We now make use of the following standard eigenvalue perturbation result; see, e.g., [15]. Here and in the following, a dot denotes $\mathrm{d}/\mathrm{d}t$, and $v_0$, $w_0$ are right and left eigenvectors of $C_0$ associated with $\lambda_0$, scaled such that $\|v_0\| = \|w_0\| = 1$. For a smooth matrix family $C(t)$ with $C(0) = C_0$ and a simple eigenvalue $\lambda(t)$ converging to $\lambda_0$,

$$\dot{\lambda}(0) = \frac{w_0^* \dot{C}(0)\, v_0}{w_0^* v_0}.$$

Our goal is to solve the maximization problem in Equation (9), which requires finding a perturbation $\Delta_{\mathrm{local}}$ such that $\|\Delta_{\mathrm{local}}\|_2 \le 1$. In the following, we call $\lambda$ a largest eigenvalue if $|\lambda|$ equals the spectral radius.

Definition 3.2 [13]. A matrix $\Delta_{\mathrm{local}} \in \mathbb{B}^*$ with $\|\Delta_{\mathrm{local}}\|_2 \le 1$ is a local extremizer if $\varepsilon A \Delta_{\mathrm{local}}$ has a largest eigenvalue that locally maximizes the modulus over the structured spectral value set.
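The perturbation formula above is easy to verify numerically: the sketch below compares the predicted derivative $w^* \dot{C} v / (w^* v)$ with a finite-difference quotient for a random matrix family $C(t) = C_0 + t\dot{C}$.

```matlab
% Finite-difference check of lambda_dot = (w'*Cdot*v)/(w'*v).
n = 4;
C0   = randn(n) + 1i*randn(n);
Cdot = randn(n) + 1i*randn(n);
[V, Dg, W] = eig(C0);              % columns of W: left eigenvectors
[~, i] = max(abs(diag(Dg)));       % pick a largest (simple) eigenvalue
v = V(:,i);  w = W(:,i);  lam0 = Dg(i,i);
pred = (w'*Cdot*v) / (w'*v);       % first-order perturbation formula
h = 1e-7;
e = eig(C0 + h*Cdot);
[~, j] = min(abs(e - lam0));       % re-identify the perturbed eigenvalue
fd = (e(j) - lam0) / h;            % finite-difference approximation
disp(abs(pred - fd))               % agreement up to O(h)
```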
The following result provides an important characterization of local extremizers.

Theorem 3.3 [13]. Let $\Delta_{\mathrm{local}}$ be a local extremizer of the maximal eigenvalue modulus over $\mathbb{B}^*$. Then, under generic assumptions on $\lambda$, $v$ and $w$, all blocks of $\Delta_{\mathrm{local}}$ have unit 2-norm.

The following theorem allows us to replace full blocks in a local extremizer by rank-1 matrices.

Theorem 3.4 [13]. Let $\Delta_{\mathrm{local}}$ be a local extremizer and let $\lambda$, $v$, $u$ be defined and partitioned as in Theorem 3.3. Assuming that Equation (13) holds, every full block $\Delta_h$ has a singular value 1 with associated singular vectors such that the perturbation obtained by replacing $\Delta_h$ with the corresponding rank-1 matrix is also a local extremizer.

A System of ODEs to Compute Extremal Points of $\Lambda_\varepsilon^{\mathbb{B}}(A)$
In the following, we make use of the Frobenius inner product $\langle X, Y \rangle = \mathrm{trace}(X^* Y)$. In order to compute a local maximizer for $|\lambda|$, we need the orthogonal projection, with respect to this inner product, of a matrix onto $\mathbb{B}^*$. To derive a compact formula for this projection, we use the pattern matrix

$$I_{\mathbb{B}} = \mathrm{diag}(\mathbf{1}_{r_1}, \ldots, \mathbf{1}_{r_N};\ \mathbf{1}_{m_1}, \ldots, \mathbf{1}_{m_F}),$$

where $\mathbf{1}_d$ denotes the $d \times d$ matrix of all ones. Lemma 3.5 [13] states that if $C \odot I_{\mathbb{B}}$ denotes the block-diagonal matrix obtained by entrywise multiplication of $C$ with the matrix $I_{\mathbb{B}}$ defined in Equation (20), then the orthogonal projection of $C$ onto $\mathbb{B}^*$ is obtained from $C \odot I_{\mathbb{B}}$ by replacing each repeated scalar block by the average of its diagonal entries times the identity.
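To make Lemma 3.5 concrete, the following sketch implements the projection for the simplest structure: one repeated scalar block of size r followed by one full block of size m (so r + m = n). The function name projB and the hard-coded block layout are illustrative assumptions, not the authors' implementation.

```matlab
% Frobenius-orthogonal projection onto B* for one repeated scalar block
% of size r followed by one full block of size m (r + m = n).
% projB is a hypothetical helper, not the authors' code.
function P = projB(C, r, m)
    s = trace(C(1:r, 1:r)) / r;            % best multiple of I_r
    full_blk = C(r+1:r+m, r+1:r+m);        % full blocks are kept as-is
    P = blkdiag(s*eye(r), full_blk);       % off-block entries become 0
end
```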
As in Equation (18), we now aim at determining a direction $U = \dot{\Delta}$ that locally maximizes the increase of the modulus of $\lambda$. The expression in Equation (20) follows from Equation (19), while the constraints in Equation (21) ensure that $U$ lies in the tangent space of $\mathbb{B}^*_1$ at $\Delta$; in particular, Equation (20) implies that the norms of the blocks of $\Delta$ are conserved. Note that Equation (20) only becomes well-posed after imposing an additional normalization on the norm of $U$; the scaling chosen in Lemma 3.6 achieves this. Here, $\nu_i > 0$ is the reciprocal of the absolute value of the right-hand side in Equation (23), if this is different from zero, and $\nu_i = 1$ otherwise; similarly, $\zeta_j > 0$ is the reciprocal of the Frobenius norm of the matrix on the right-hand side in Equation (24), if this is different from zero, and $\zeta_j = 1$ otherwise. If all right-hand sides are different from zero, then the projected direction again belongs to $\mathbb{B}^*$. Proof. The statement is an immediate consequence of Lemma 3.5.
The system of ODEs. Lemma 3.6 and Corollary 3.7 suggest considering the following differential equation on the manifold $\mathbb{B}^*_1$, where $v(t)$ is an eigenvector, of unit norm, associated with a simple eigenvalue $\lambda(t)$ of $\varepsilon A \Delta(t)$ for some fixed $\varepsilon > 0$. The following lemma establishes a useful property for the analysis of stationary points of Equation (26). Lemma 3.9 [13] shows that the choice of the scaling factors $\nu_i$, $\zeta_j$ originating from Lemma 3.6, made in order to achieve unit norm of all blocks in Equation (25), is completely arbitrary: other choices would be equally acceptable, and investigating an optimal one in terms of speed of convergence to stationary points would be an interesting issue.
The following result, Theorem 3.8 [13], characterizes the stationary points of Equation (26). In particular, Equation (26) is a gradient system: by definition, its right-hand side is the projected gradient of the objective, and the monotonicity of $|\lambda(t)|$ along solutions follows directly from Lemmas 3.1 and 3.6.

Projection of Full Blocks on Rank-1 Manifolds [13]
In order to exploit the rank-1 property of extremizers established in Theorem 3.4, one can proceed in complete analogy to [16] to obtain, for each full block, an ODE on the manifold $\mathcal{M}$ of (complex) rank-1 matrices. The derivation of this system of ODEs is straightforward; the interested reader is referred to [17] for full details.
The monotonicity and the characterization of stationary points follow analogously to those obtained for Equation (26); see [16] for the proofs. As a consequence, one can use the ODE in Equation (28) instead of Equation (26) and gain in terms of computational complexity.
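On the rank-1 manifold, restricting a full block amounts to keeping its dominant singular triplet. A minimal sketch, where Dfull is an arbitrary stand-in for one full block:

```matlab
% Projection of one full block onto the rank-1 manifold M: keep the
% dominant singular triplet. Dfull is an arbitrary stand-in block.
Dfull = randn(3) + 1i*randn(3);
[U, S, V] = svd(Dfull);
D1 = S(1,1) * U(:,1) * V(:,1)';     % best rank-1 approximation
D1 = D1 / norm(D1);                 % renormalize to unit 2-norm
```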

Choice of Initial Value Matrix and $\varepsilon_0$ [13]
In our two-level algorithm for determining $\mu_{\mathbb{B}}(A)$, we use the perturbation $\Delta$ obtained for the previous value of $\varepsilon$ as the initial value matrix for the system of ODEs. However, it remains to discuss a suitable choice of the initial values $\Delta(0) = \Delta_0$ and $\varepsilon_0$ in the very beginning of the algorithm.
For the moment, let us assume that $A$ is invertible; the aim is to choose $\Delta_0$ so that $I - \varepsilon A \Delta_0$ is brought as close as possible to singularity. To determine $\Delta_0$, one can perform an asymptotic analysis around $\varepsilon \approx 0$. For this purpose, consider a matrix-valued function $G(\tau)$ and let $\chi(\tau)$ denote an eigenvalue of $G(\tau)$ with smallest modulus, with $v$ and $w$ denoting the corresponding right and left eigenvectors.


In order to achieve the locally maximal decrease of $\chi(\tau)$, the initial perturbation $\Delta_0$ is taken as the structured projection of $w v^*$, scaled by a positive diagonal matrix $D$ chosen such that $\Delta_0 \in \mathbb{B}^*_1$. This is always possible under the genericity assumptions (12) and (13). The orthogonal projector $P_{\mathbb{B}}$ onto $\mathbb{B}$ can be expressed in analogy to Equation (21) for $P_{\mathbb{B}^*}$, with the notable difference that blocks constrained to be real are projected onto their real parts. Note that there is no need to form $A^{-1}$: $v$ and $w$ can be obtained as the eigenvectors associated with a largest eigenvalue of $A$. However, attention needs to be paid to the scaling. A first value of $\varepsilon_0$ is then obtained by solving a simple linear equation resulting from the first-order expansion of the eigenvalue at $0$; this can be improved in a simple way by computing the expression for several eigenvalues of $A$ (say, the $m$ largest ones) and taking the smallest computed $\varepsilon_0$. For a sparse matrix $A$, the MATLAB function eigs (an interface to ARPACK, which implements the implicitly restarted Arnoldi method [18] [19]) allows one to compute a predefined number $m$ of Ritz values efficiently. Another possible, very natural choice for $\varepsilon_0$ is the reciprocal of the upper bound for the SSV computed by the MATLAB function mussv.
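A minimal sketch of this use of eigs; the sparse test matrix and the value m = 5 are arbitrary choices, and the mapping from Ritz values to candidate values of $\varepsilon_0$ is left schematic.

```matlab
% Obtain the m eigenvalues of largest modulus of a sparse matrix A with
% eigs (ARPACK); the test matrix and m = 5 are arbitrary choices.
rng(0);                                   % reproducible example
n = 1000;  A = sprandn(n, n, 0.01);       % hypothetical sparse matrix
m = 5;
lams = eigs(A, m, 'largestabs');          % m Ritz values, largest |.|
% candidate eps_0 values would now be evaluated for each entry of lams
% and the smallest one kept, as described above.
```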

Fast Approximation of $\mu_{\mathbb{B}}(A)$
This section discusses the outer algorithm for computing a lower bound of $\mu_{\mathbb{B}}(A)$. Since the principles are the same, we treat the case of purely complex perturbations in detail and provide only a briefer discussion of the extension to mixed complex/real perturbations.

Purely Complex Perturbations
In the following, let $\lambda(\varepsilon)$ denote a continuous branch of (local) maximizers of the modulus over the stationary points $\Delta(\varepsilon)$ of the system of ODEs. The computation of the SSV is then equivalent to finding the smallest solution $\varepsilon^*$ of the equation $|\lambda(\varepsilon)| = 1$. In order to approximate this solution, we aim at computing $\varepsilon^*$ such that the $\varepsilon^*$-spectral value set is locally contained in the unit disk and its boundary is tangential to the unit circle. This provides the lower bound $1/\varepsilon^* \le \mu_{\mathbb{B}}(A)$. We now make the following generic assumption.
The following theorem gives an explicit and easily computable expression for the derivative of $|\lambda(\varepsilon)|$ with respect to $\varepsilon$.
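Schematically, one outer step then looks as follows; lam_k (the current maximizer $\lambda(\varepsilon_k)$) and dmod_k (the derivative $\mathrm{d}|\lambda(\varepsilon)|/\mathrm{d}\varepsilon$ supplied by the theorem) are assumed to be available from the inner iteration.

```matlab
% One outer Newton update on the scalar equation |lambda(eps)| = 1.
% lam_k and dmod_k are assumed supplied by the inner iteration at eps_k.
phi      = abs(lam_k) - 1;            % residual: distance to unit circle
eps_next = eps_k - phi / dmod_k;      % Newton step for the level eps
```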

Numerical Experimentation
This section provides a comparison of the lower bounds of the SSV computed by the well-known MATLAB function mussv and by the algorithm of [13].
We consider a delay eigenvalue problem; the test matrices $A_i$ used below are taken from this problem.

Example 1. Consider the two-dimensional matrix $A_1$ taken from the above-mentioned delay eigenvalue problem. Using the algorithm of [13], one obtains the perturbation

$$\Delta^{**} = \begin{pmatrix} 1.0000 + 0.0000i & 0 \\ 0 & 1.0000 + 0.0000i \end{pmatrix}.$$

Table 1 presents the comparison of the bounds of the SSV computed by mussv and by the algorithm of [13] for the matrix $A_1$: the first column gives the dimension of $A_1$, the second column the set of block-diagonal matrices (denoted BLK), and the third, fourth and fifth columns the upper and lower bounds computed by mussv together with the lower bound computed by the algorithm of [13].

Using the algorithm of [13] for the matrix $A_2$, one again obtains a perturbation $\Delta^{**}$ of the same diagonal form. Table 2 presents the corresponding comparison of the bounds of the SSV for $A_2$, with the columns organized as in Table 1. Table 3 presents the analogous comparison for the matrix $A_3$. For the matrix $A_5$, the algorithm of [13] yields the perturbation

$$\Delta^{**} = \begin{pmatrix} 1.0000 + 0.0000i & 0 & 0 \\ 0 & 1.0000 + 0.0000i & 0 \\ 0 & 0 & 1.0000 + 0.0000i \end{pmatrix},$$

and Table 5 presents the comparison of the bounds of the SSV for $A_5$, again with the same column layout.

Conclusion
In this article, the problem of approximating structured singular values for delay eigenvalue and polynomial eigenvalue problems is considered. The obtained results provide a characterization of extremizers and gradient systems, which can be integrated numerically in order to provide approximations from below to the structured singular value of a matrix subject to general pure complex and mixed real and complex block perturbations. The experimental results compare the lower bounds of structured singular values computed by mussv with those computed by the algorithm of [13].




Applying the MATLAB routine mussv, one can obtain a perturbation $\Delta$; the algorithm of [13] attains the same lower bound as the one obtained by mussv.

Example 2.
Consider the two-dimensional matrix $A_2$ taken from the above-mentioned delay eigenvalue problem.
Applying the MATLAB routine mussv, one can obtain a perturbation $\Delta$; the algorithm of [13] attains the same lower bound as the one obtained by mussv.

Example 3.
Consider the three-dimensional matrix $A_3$ taken from the above-mentioned delay eigenvalue problem.

Example 4.
Consider the three-dimensional matrix $A_4$ taken from the above-mentioned delay eigenvalue problem, along with the associated perturbation set. Applying the MATLAB routine mussv, one can obtain a perturbation $\Delta$; the algorithm of [13] yields the same lower bound, $\mu^{\mathrm{New}}_{\mathrm{lower}} = 7.9298$, as the one obtained by mussv.

Table 1.
Comparison of lower bounds of the SSV with the MATLAB function mussv.

Table 2.
Comparison of lower bounds of the SSV with the MATLAB function mussv.