
In this article, the computation of μ-values, known as structured singular values (SSV), for companion matrices is presented. The lower bounds obtained are compared with those of the well-known MATLAB routine mussv. The structured singular value provides an important tool for the stability analysis of closed-loop linear time-invariant systems in control theory, as well as in structured eigenvalue perturbation theory.

The μ-values [

The versatility of the SSV comes at the expense of computational complexity: computing the SSV exactly is known to be NP-hard [

The widely used function mussv, available in the MATLAB Robust Control Toolbox, computes an upper bound of the SSV using diagonal balancing and linear matrix inequality techniques [

In this paper, we present a comparison of numerical results for approximating the lower bounds of the SSV associated with purely complex uncertainties.

Overview of the article. Section 2 provides the basic framework. In particular, it explains how the computation of the SSV can be addressed by an inner-outer algorithm, where the outer algorithm determines the perturbation level ϵ and the inner algorithm determines a (local) extremizer of the structured spectral value set. In Section 3, we explain how the inner algorithm works for the case of purely complex structured perturbations. An important characterization of extremizers shows that one can restrict oneself to a manifold of structured perturbations with normalized and low-rank blocks. A gradient system for finding extremizers on this manifold is established and analyzed. The outer algorithm is addressed in Section 4, where a fast Newton iteration for determining the correct perturbation level ϵ is developed. Finally, Section 5 presents a range of numerical experiments to compare the quality of the lower bounds to those obtained with mussv.

Consider a matrix M ∈ ℂ^{n×n} and an underlying perturbation set with prescribed block diagonal structure,

\[
\mathcal{B} = \left\{ \operatorname{diag}(\delta_1 I_{r_1}, \cdots, \delta_S I_{r_S}, \Delta_1, \cdots, \Delta_F) : \delta_i \in \mathbb{C}\,(\mathbb{R}),\ \Delta_j \in \mathbb{C}^{m_j \times m_j}\,(\mathbb{R}^{m_j \times m_j}) \right\}, \tag{1}
\]

where I_{r_i} denotes the r_i × r_i identity matrix. Each of the scalars δ_i and the m_j × m_j matrices Δ_j may be constrained to stay real in the definition of B. The integer S denotes the number of repeated scalar blocks (that is, scalar multiples of the identity) and F denotes the number of full blocks. This implies ∑_{i=1}^{S} r_i + ∑_{j=1}^{F} m_j = n. In order to distinguish complex and real scalar blocks, we assume that the first S′ ≤ S scalar blocks are complex while the (possibly) remaining S − S′ blocks are real. Similarly, we assume that the first F′ ≤ F full blocks are complex and the (possibly) remaining F − F′ blocks are real. The literature (see, e.g., [

For simplicity, assume that all full blocks are square, although this is not necessary and our method extends to the non-square case in a straightforward way. Similarly, the chosen ordering of blocks should not be viewed as a limiting assumption; it merely simplifies notation.

The following definition is given in [

Definition 2.1. [

\[
\mu_{\mathcal{B}}(M) := \frac{1}{\min\{ \|\Delta\|_2 : \Delta \in \mathcal{B},\ \det(I - M\Delta) = 0 \}}. \tag{2}
\]

In Definition 2.1, det(⋅) denotes the determinant of a matrix, and in the following we use the convention that the minimum over an empty set is +∞. In particular, μ_B(M) = 0 if det(I − MΔ) ≠ 0 for all Δ ∈ B.

Note that μ_B(M) is a positively homogeneous function, i.e.,

\[
\mu_{\mathcal{B}}(\alpha M) = \alpha\, \mu_{\mathcal{B}}(M) \quad \text{for any } \alpha \ge 0.
\]

For B = ℂ^{n×n}, it follows directly from Definition 2.1 that μ_B(M) = ‖M‖₂. For general B, the SSV can only become smaller, which gives the upper bound μ_B(M) ≤ ‖M‖₂. This can be refined further by exploiting the properties of μ_B, see [

The important special case when B allows only complex perturbations, that is, S = S′ and F = F′, deserves particular attention. In this case we write B∗ instead of B. Note that Δ ∈ B∗ implies e^{iφ}Δ ∈ B∗ for any φ ∈ ℝ. In turn, there is Δ ∈ B∗ such that ρ(MΔ) = 1 if and only if there is Δ′ ∈ B∗, with the same norm, such that MΔ′ has the eigenvalue 1, which implies det(I − MΔ′) = 0. This gives the following alternative expression:

\[
\mu_{\mathcal{B}^*}(M) = \frac{1}{\min\{ \|\Delta\|_2 : \Delta \in \mathcal{B}^*,\ \rho(M\Delta) = 1 \}}, \tag{3}
\]

where ρ(⋅) denotes the spectral radius of a matrix. For any nonzero eigenvalue λ of M, the matrix Δ = λ^{−1}I satisfies the constraints of the minimization problem in Equation (3). This establishes the lower bound ρ(M) ≤ μ_{B∗}(M) for the case of purely complex perturbations. Note that μ_{B∗}(M) = ρ(M) for B∗ = {δI : δ ∈ ℂ}. Hence, both the spectral radius and the matrix 2-norm are recovered as special cases of the SSV.
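These two bounds are easy to check numerically. A minimal sketch (assuming NumPy; the 3 × 3 matrix is purely illustrative and has eigenvalues 1, 2 and −3):

```python
import numpy as np

# Illustrative matrix (its eigenvalues are 1, 2 and -3).
M = np.array([[0.0, 7.0, -6.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Spectral radius: lower bound on the SSV for purely complex perturbations.
rho = max(abs(np.linalg.eigvals(M)))

# Matrix 2-norm (largest singular value): upper bound on the SSV for any B.
norm2 = np.linalg.norm(M, 2)

print(rho, norm2)   # rho <= mu_{B*}(M) <= norm2
```

For B∗ = {δI} the SSV equals the first number, for B = ℂ^{n×n} the second; any other block structure lands in between.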

The structured spectral value set of M ∈ ℂ n × n with respect to a perturbation level ϵ is defined as

\[
\Lambda_{\epsilon}^{\mathcal{B}}(M) = \{ \lambda \in \Lambda(\epsilon M \Delta) : \Delta \in \mathcal{B},\ \|\Delta\|_2 \le 1 \}, \tag{4}
\]

where Λ ( ⋅ ) denotes the spectrum of a matrix. Note that for purely complex B ∗ , the set Λ ϵ B ( M ) is simply a disk centered at 0. The set

\[
\Sigma_{\epsilon}^{\mathcal{B}}(M) = \{ \xi = 1 - \lambda : \lambda \in \Lambda_{\epsilon}^{\mathcal{B}}(M) \} \tag{5}
\]

allows us to express the SSV defined in Equation (2) as

\[
\mu_{\mathcal{B}}(M) = \frac{1}{\min\{ \epsilon > 0 : 0 \in \Sigma_{\epsilon}^{\mathcal{B}}(M) \}}, \tag{6}
\]

that is, as a structured distance to singularity problem. In particular, 0 ∉ Σ_ϵ^B(M) if and only if μ_B(M) < 1/ϵ.

For a purely complex perturbation set B ∗ , one can use Equation (3) to alternatively express the SSV as

\[
\mu_{\mathcal{B}^*}(M) = \frac{1}{\min\left\{ \epsilon > 0 : \max_{\lambda \in \Lambda_{\epsilon}^{\mathcal{B}^*}(M)} |\lambda| = 1 \right\}}. \tag{7}
\]

Moreover, Λ_ϵ^{B∗}(M) ⊂ D, where D denotes the open complex unit disk, if and only if μ_{B∗}(M) < 1/ϵ.

Let us consider the minimization problem

\[
\xi(\epsilon) = \arg\min |\xi|, \tag{8}
\]

where ξ ∈ Σ_ϵ^B(M) for some fixed ϵ > 0. By the discussion above, the SSV μ_B(M) is the reciprocal of the smallest value of ϵ for which ξ(ϵ) = 0. This suggests a two-level algorithm: in the inner algorithm, we attempt to solve Equation (8); in the outer algorithm, we vary ϵ by an iterative procedure that exploits knowledge of the exact derivative of an extremizer Δ(ϵ) with respect to ϵ. We address Equation (8) by solving a system of ordinary differential equations (ODEs). In general, this only yields a local minimum of Equation (8), which, in turn, gives an upper bound for ϵ and hence a lower bound for μ_B(M). Due to the lack of global optimality criteria for Equation (8), the only way to increase the robustness of the method is to compute several local optima.
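The two-level idea can be illustrated in the special unstructured case B = ℂ^{n×n}, where the inner maximizer is known in closed form: Δ = v₁u₁* built from the dominant singular pair of M gives |λ(ϵ)| = ϵ‖M‖₂, so the outer problem reduces to solving ϵ‖M‖₂ = 1. The sketch below (assuming NumPy) substitutes bisection for the Newton iteration of Section 4; the function names are mine, not from the paper:

```python
import numpy as np

def inner_lambda(M, eps):
    """Inner problem for the unstructured set B = C^{n x n}:
    the maximizing Delta = v1 u1^* (dominant singular pair of M) makes
    the largest eigenvalue of eps*M*Delta have modulus eps*||M||_2."""
    U, s, Vh = np.linalg.svd(M)
    Delta = Vh[0].conj().reshape(-1, 1) @ U[:, 0].conj().reshape(1, -1)
    return np.max(np.abs(np.linalg.eigvals(eps * M @ Delta)))

def outer_bisection(M, lo=0.0, hi=10.0, tol=1e-10):
    """Outer iteration: find the smallest eps with |lambda(eps)| = 1;
    1/eps is then a lower bound for mu_B(M)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if inner_lambda(M, mid) < 1.0:
            lo = mid        # eps too small: spectral value set inside unit disk
        else:
            hi = mid        # eps too large (or exactly critical)
        if hi - lo < tol:
            break
    return hi

M = np.array([[0.0, 7.0, -6.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
eps_star = outer_bisection(M)
mu_lower = 1.0 / eps_star   # for B = C^{n x n} this equals ||M||_2
```

In the structured case the inner problem has no closed form, which is exactly where the ODE-based inner algorithm of Section 3 comes in.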

The case of a purely complex perturbation set B ∗ can be addressed analogously by letting the inner algorithm determine local optima for

\[
\lambda(\epsilon) = \arg\max |\lambda|, \tag{9}
\]

where λ ∈ Λ ϵ B ∗ ( M ) which then yields a lower bound for μ B ∗ ( M ) .

In this section, we consider the solution of the inner problem in Equation (9) for the estimation of μ_{B∗}(M), where M ∈ ℂ^{n×n} and the perturbation set is purely complex,

\[
\mathcal{B}^* = \left\{ \operatorname{diag}(\delta_1 I_{r_1}, \cdots, \delta_S I_{r_S}, \Delta_1, \cdots, \Delta_F) : \delta_i \in \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j \times m_j} \right\}. \tag{10}
\]

We make use of the following standard eigenvalue perturbation result; see, e.g., [

Lemma 3.1. Consider a smooth matrix family C : ℝ → ℂ^{n×n} and let λ(t) be an eigenvalue of C(t) converging to a simple eigenvalue λ₀ of C₀ = C(0) as t → 0. Then λ(t) is analytic near t = 0 with

\[
\dot{\lambda}(0) = \frac{y_0^* C_1 x_0}{y_0^* x_0},
\]

where C 1 = C ˙ ( 0 ) and x 0 , y 0 are right and left eigenvectors of C 0 associated to λ 0 , that is, ( C 0 − λ 0 I ) x 0 = 0 and y 0 * ( C 0 − λ 0 I ) = 0 .
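Lemma 3.1 is easy to verify with a finite-difference check; the matrices below are arbitrary illustrative choices, with C₀ built to have well-separated simple eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# C(t) = C0 + t*C1, with C0 chosen to have simple eigenvalues.
C0 = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal((n, n))
C1 = rng.standard_normal((n, n))

# Pick the eigenvalue of C0 of largest modulus, with right/left eigenvectors.
w, V = np.linalg.eig(C0)
k = np.argmax(np.abs(w))
lam0, x0 = w[k], V[:, k]
wl, W = np.linalg.eig(C0.conj().T)       # eigenvalues of C0^* are conj(w)
y0 = W[:, np.argmin(np.abs(wl.conj() - lam0))]  # left eigenvector of C0

# Derivative predicted by Lemma 3.1.
deriv = (y0.conj() @ C1 @ x0) / (y0.conj() @ x0)

# Finite-difference approximation of d(lambda)/dt at t = 0.
h = 1e-7
wh = np.linalg.eig(C0 + h * C1)[0]
lam_h = wh[np.argmin(np.abs(wh - lam0))]
fd = (lam_h - lam0) / h
```

The two quantities agree up to the finite-difference truncation error of order h.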

Our goal is to solve the maximization problem in Equation (9), which requires finding a perturbation Δ_opt such that ρ(ϵMΔ_opt) is maximal among all Δ ∈ B∗ with ‖Δ‖₂ ≤ 1. In the following, we call λ a largest eigenvalue if |λ| equals the spectral radius.

Definition 3.2. [

The following result provides an important characterization of local extremizers.

Theorem 3.3. [

\[
\Delta_{\mathrm{opt}} = \operatorname{diag}(\delta_1 I_{r_1}, \cdots, \delta_S I_{r_S}, \Delta_1, \cdots, \Delta_F), \qquad \|\Delta_{\mathrm{opt}}\|_2 = 1,
\]

be a local extremizer of Λ_ϵ^{B∗}(M). Assume that ϵMΔ_opt has a simple largest eigenvalue λ = |λ|e^{iθ}, with right and left eigenvectors x and y scaled such that s = e^{iθ} y^* x > 0. Partitioning

\[
x = (x_1^T, \cdots, x_S^T, x_{S+1}^T, \cdots, x_{S+F}^T)^T, \qquad z = M^* y = (z_1^T, \cdots, z_S^T, z_{S+1}^T, \cdots, z_{S+F}^T)^T, \tag{11}
\]

such that the size of the components x_k, z_k equals the size of the kth block in Δ_opt, we additionally assume that

\[
z_k^* x_k \ne 0 \quad \forall k = 1, \cdots, S, \tag{12}
\]

\[
\|z_{S+h}\|_2 \cdot \|x_{S+h}\|_2 \ne 0 \quad \forall h = 1, \cdots, F. \tag{13}
\]

Then

\[
|\delta_k| = 1 \quad \forall k = 1, \cdots, S \qquad \text{and} \qquad \|\Delta_h\|_2 = 1 \quad \forall h = 1, \cdots, F,
\]

that is, all blocks of Δ o p t have unit 2-norm.

The following theorem allows us to replace full blocks in a local extremizer by rank-1 matrices.

Theorem 3.4. [

\[
\Delta_* = \operatorname{diag}(\delta_1 I_{r_1}, \cdots, \delta_S I_{r_S}, u_1 v_1^*, \cdots, u_F v_F^*)
\]

is also a local extremizer, i.e., ρ ( ϵ M Δ o p t ) = ρ ( ϵ M Δ * ) .

Remark 3.1. [

\[
\mathcal{B}_1^* = \left\{ \operatorname{diag}(\delta_1 I_{r_1}, \cdots, \delta_S I_{r_S}, \Delta_1, \cdots, \Delta_F) : \delta_i \in \mathbb{C},\ |\delta_i| = 1,\ \Delta_j \in \mathbb{C}^{m_j \times m_j},\ \|\Delta_j\|_F = 1 \right\}. \tag{14}
\]

In order to compute a local maximizer for |λ|, with λ ∈ Λ_ϵ^{B∗}(M), we first construct a matrix-valued function Δ(t) such that a largest eigenvalue λ(t) of ϵMΔ(t) has maximal local increase. We then derive a system of ODEs satisfied by this choice of Δ(t).

Orthogonal projection onto B∗. In the following, we make use of the Frobenius inner product ⟨A, B⟩ = trace(A*B) for two m × n matrices A, B. Let

\[
C^{\mathcal{B}^*} = P^{\mathcal{B}^*}(C) \tag{15}
\]

denote the orthogonal projection, with respect to the Frobenius inner product, of a matrix C ∈ ℂ^{n×n} onto B∗. To derive a compact formula for this projection, we use the pattern matrix

\[
\mathbb{I}_{\mathcal{B}^*} = \operatorname{diag}(\mathbb{I}_{r_1}, \cdots, \mathbb{I}_{r_S}, \mathbb{I}_{m_1}, \cdots, \mathbb{I}_{m_F}), \tag{16}
\]

where 𝕀_d denotes the d × d matrix of all ones.

Lemma 3.5. [

\[
C \odot \mathbb{I}_{\mathcal{B}^*} = \operatorname{diag}(C_1, \cdots, C_S, C_{S+1}, \cdots, C_{S+F})
\]

denote the block diagonal matrix obtained by entrywise multiplication of C with the matrix 𝕀_{B∗} defined in Equation (16). Then the orthogonal projection of C onto B∗ is given by

\[
C^{\mathcal{B}^*} = P^{\mathcal{B}^*}(C) = \operatorname{diag}(\gamma_1 I_{r_1}, \cdots, \gamma_S I_{r_S}, \Gamma_1, \cdots, \Gamma_F), \tag{17}
\]

where γ_i = tr(C_i)/r_i for i = 1, ⋯, S, and Γ₁ = C_{S+1}, ⋯, Γ_F = C_{S+F}.
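A hedged NumPy sketch of this projection, with the block structure encoded as lists of repeated-scalar sizes r and full-block sizes m (the function name and interface are mine, not from the paper):

```python
import numpy as np

def project_B(C, r, m):
    """Orthogonal projection (w.r.t. the Frobenius inner product) of C onto
    the block-diagonal set diag(gamma_1 I_{r_1}, ..., Gamma_1, ...):
    repeated scalar blocks become (trace/r_i) * I, full blocks are copied,
    and all entries outside the block pattern are discarded."""
    out = np.zeros_like(C)
    k = 0
    for ri in r:                       # repeated scalar blocks
        gamma = np.trace(C[k:k+ri, k:k+ri]) / ri
        out[k:k+ri, k:k+ri] = gamma * np.eye(ri)
        k += ri
    for mj in m:                       # full blocks
        out[k:k+mj, k:k+mj] = C[k:k+mj, k:k+mj]
        k += mj
    return out
```

As expected for an orthogonal projection, the map is idempotent and the residual C − P(C) is Frobenius-orthogonal to every member of B∗.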

The local optimization problem [

\[
\|y\| = \|x\| = 1, \qquad y^* x = |y^* x|\, e^{-i\theta}. \tag{18}
\]

As a consequence of Lemma 3.1, we have

\[
\frac{d}{dt}|\lambda|^2 = 2|\lambda|\,\mathrm{Re}\!\left(\frac{z^* \dot{\Delta} x}{e^{i\theta}\, y^* x}\right) = \frac{2|\lambda|}{|y^* x|}\,\mathrm{Re}\left(z^* \dot{\Delta} x\right), \tag{19}
\]

where z = M * y and the dependence on t is intentionally omitted.

Letting Δ ∈ B₁∗, with B₁∗ as in Equation (14), we now aim at determining a direction Δ̇ = Z that locally maximizes the increase of the modulus of λ. This amounts to determining

\[
Z = \operatorname{diag}(\omega_1 I_{r_1}, \cdots, \omega_S I_{r_S}, \Omega_1, \cdots, \Omega_F) \tag{20}
\]

as a solution of the optimization problem

\[
Z_* = \arg\max\ \mathrm{Re}(z^* Z x)
\]

subject to
\[
\mathrm{Re}(\bar{\delta}_i \omega_i) = 0, \quad i = 1, \cdots, S,
\]

and
\[
\mathrm{Re}\langle \Delta_j, \Omega_j \rangle = 0, \quad j = 1, \cdots, F. \tag{21}
\]

The target function in Equation (21) follows from Equation (19), while the constraints in Equation (21) ensure that Z lies in the tangent space of B₁∗ at Δ; in particular, they imply that the norms of the blocks of Δ are conserved. Note that the optimization problem in Equation (21) only becomes well posed after imposing an additional normalization on the norm of Z. The scaling chosen in the following lemma aims at Z ∈ B₁∗.

Lemma 3.6. [

\[
Z_* = \operatorname{diag}(\omega_1 I_{r_1}, \cdots, \omega_S I_{r_S}, \Omega_1, \cdots, \Omega_F), \tag{22}
\]

with

\[
\omega_i = \nu_i \left( x_i^* z_i - \mathrm{Re}(x_i^* z_i \bar{\delta}_i)\, \delta_i \right), \quad i = 1, \cdots, S, \tag{23}
\]

\[
\Omega_j = \zeta_j \left( z_{S+j} x_{S+j}^* - \mathrm{Re}\langle \Delta_j, z_{S+j} x_{S+j}^* \rangle\, \Delta_j \right), \quad j = 1, \cdots, F. \tag{24}
\]

Here, ν_i > 0 is the reciprocal of the absolute value of the right-hand side in Equation (23), if this is different from zero, and ν_i = 1 otherwise. Similarly, ζ_j > 0 is the reciprocal of the Frobenius norm of the matrix on the right-hand side in Equation (24), if this is different from zero, and ζ_j = 1 otherwise. If all right-hand sides are different from zero, then Z∗ ∈ B₁∗.
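A sketch of Equations (23) and (24) in NumPy, checking the tangency constraints of Equation (21); the random data and block sizes are illustrative assumptions, and the normalization factors ν, ζ are set to 1 (a positive rescaling does not affect the constraints):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative structure: one repeated scalar block of size 2 (S = 1)
# and one full 2x2 block (F = 1).
r1, m1 = 2, 2
delta1 = np.exp(1j * rng.uniform(0, 2 * np.pi))              # |delta_1| = 1
Delta1 = rng.standard_normal((m1, m1)) + 1j * rng.standard_normal((m1, m1))
Delta1 /= np.linalg.norm(Delta1, 'fro')                      # ||Delta_1||_F = 1

x = rng.standard_normal(r1 + m1) + 1j * rng.standard_normal(r1 + m1)
z = rng.standard_normal(r1 + m1) + 1j * rng.standard_normal(r1 + m1)
x1, xF = x[:r1], x[r1:]
z1, zF = z[:r1], z[r1:]

# Equation (23): direction for the repeated scalar block.
g = x1.conj() @ z1
omega1 = g - np.real(g * np.conj(delta1)) * delta1
# Equation (24): direction for the full block.
G = np.outer(zF, xF.conj())
Omega1 = G - np.real(np.trace(Delta1.conj().T @ G)) * Delta1

# Tangency constraints from Equation (21); both should vanish.
c_scalar = np.real(np.conj(delta1) * omega1)
c_full = np.real(np.trace(Delta1.conj().T @ Omega1))
```

Both residuals vanish because |δ₁| = 1 and ‖Δ₁‖_F = 1: subtracting the component along the current block exactly removes the normal direction.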

Corollary 3.7. [

\[
Z_* = D_1 P^{\mathcal{B}^*}(z x^*) - D_2 \Delta, \tag{25}
\]

where P B ∗ ( ⋅ ) is the orthogonal projection and D 1 , D 2 ∈ B ∗ are diagonal matrices with D 1 positive.

Proof. The statement is an immediate consequence of Lemma 3.5.

The system of ODEs. Lemma 3.6 and Corollary 3.7 suggest considering the following differential equation on the manifold B₁∗:

\[
\dot{\Delta} = D_1 P^{\mathcal{B}^*}(z x^*) - D_2 \Delta, \tag{26}
\]

where x(t) is an eigenvector, of unit norm, associated to a simple eigenvalue λ(t) of ϵMΔ(t) for some fixed ϵ > 0. Note that z(t), D₁(t), D₂(t) depend on Δ(t) as well. The differential Equation (26) is a gradient system because, by definition, its right-hand side is the projected gradient of Z ↦ Re(z*Zx).
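The gradient-system character can be observed numerically. The following sketch integrates a block-wise version of Equation (26) with explicit Euler for the simple structure B∗ = {diag(δ₁, δ₂, δ₃)} (three complex scalar blocks); the data are random illustrative choices, the normalization factors are set to 1, and each step retracts back onto |δ_k| = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eps = 0.5
delta = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # random start on the torus

def largest_eig(A):
    """Largest eigenvalue of A with right and left eigenvectors."""
    w, V = np.linalg.eig(A)
    k = np.argmax(np.abs(w))
    wl, W = np.linalg.eig(A.conj().T)
    y = W[:, np.argmin(np.abs(wl.conj() - w[k]))]
    return w[k], V[:, k], y

h = 0.02                     # Euler step size
mods = []
for _ in range(300):
    lam, x, y = largest_eig(eps * M @ np.diag(delta))
    mods.append(abs(lam))
    # scaling (18): make e^{i theta} y^* x real and positive
    s = np.exp(1j * np.angle(lam)) * (y.conj() @ x)
    y = y * (s / abs(s))
    z = M.conj().T @ y
    # block-wise projected gradient, cf. Equation (23)
    g = x.conj() * z
    delta = delta + h * (g - np.real(g * delta.conj()) * delta)
    delta = delta / np.abs(delta)   # retract onto |delta_k| = 1
```

Along the discrete trajectory, |λ(t)| grows toward a (local) maximum over the structured spectral value set, mirroring the monotonicity of the continuous flow.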

The following result follows directly from Lemmas 3.1 and 3.6.

Theorem 3.8. [

The following lemma establishes a useful property for the analysis of stationary points of Equation (26).

Lemma 3.9. [

\[
P^{\mathcal{B}^*}(z(t) x(t)^*) \ne 0, \tag{27}
\]

where z ( t ) = M * y ( t ) .

Remark 3.3. The choice of ν_i, ζ_j originating from Lemma 3.6, which achieves unit norm of all blocks in Equation (22), is somewhat arbitrary. Other choices would also be acceptable, and investigating an optimal one in terms of speed of convergence to stationary points would be an interesting issue.

The following result characterizes the stationary points of Equation (26).

Theorem 3.10. [

\[
\frac{d}{dt}|\lambda(t)|^2 = 0 \iff \dot{\Delta}(t) = 0 \iff \Delta(t) = D\, P^{\mathcal{B}^*}(z(t) x(t)^*), \tag{28}
\]

for a specific real diagonal matrix D ∈ B ∗ . Moreover if λ ( t ) has (locally) maximal modulus over the set Λ ϵ B ∗ ( M ) then D is positive.

In order to exploit the rank-1 property of extremizers established in Theorem 3.4, one can proceed in complete analogy to [

\[
\Delta_j = \sigma_j p_j q_j^*, \qquad \dot{\Delta}_j = \dot{\sigma}_j p_j q_j^* + \sigma_j \dot{p}_j q_j^* + \sigma_j p_j \dot{q}_j^*,
\]

where σ_j ∈ ℂ and p_j, q_j ∈ ℂ^{m_j} have unit norm. The parameters σ̇_j ∈ ℂ and ṗ_j, q̇_j ∈ ℂ^{m_j} are uniquely determined by σ_j, p_j, q_j and Δ̇_j when imposing the orthogonality conditions p_j^* ṗ_j = 0 and q_j^* q̇_j = 0.

In the differential Equation (26), we replace the right-hand side by its orthogonal projection onto the tangent space T_{Δ_j}M (and also remove the normalization constant) to obtain

\[
\dot{\Delta}_j = P_{\Delta_j}\!\left( z_{S+j} x_{S+j}^* - \mathrm{Re}\langle \Delta_j, z_{S+j} x_{S+j}^* \rangle\, \Delta_j \right). \tag{29}
\]

Note that the orthogonal projection of a matrix Z ∈ ℂ^{m_j×m_j} onto T_{Δ_j}M at Δ_j = σ_j p_j q_j^* ∈ M is given by

\[
P_{\Delta_j}(Z) = Z - (I - p_j p_j^*)\, Z\, (I - q_j q_j^*).
\]
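The map P_{Δ_j} is a linear projector onto the tangent space of rank-1 matrices at σ_j p_j q_j*; a quick numerical sanity check of its idempotency (with illustrative random data):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
p = rng.standard_normal(m) + 1j * rng.standard_normal(m)
q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
p /= np.linalg.norm(p)
q /= np.linalg.norm(q)

def P_tangent(Z, p, q):
    """Projection onto the tangent space of rank-1 matrices at sigma*p*q^*."""
    Pp = np.eye(len(p)) - np.outer(p, p.conj())   # projector orthogonal to p
    Pq = np.eye(len(q)) - np.outer(q, q.conj())   # projector orthogonal to q
    return Z - Pp @ Z @ Pq

Z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
PZ = P_tangent(Z, p, q)
PPZ = P_tangent(PZ, p, q)   # projecting twice changes nothing
```

Idempotency follows because (I − pp*) and (I − qq*) are themselves orthogonal projectors, so the discarded component (I − pp*)Z(I − qq*) is annihilated on a second application.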

Following the arguments of [

\[
\dot{\sigma}_j = p_j^* Z q_j, \qquad
\dot{p}_j = (I - p_j p_j^*)\, Z q_j\, \sigma_j^{-1}, \qquad
\dot{q}_j = (I - q_j q_j^*)\, Z^* p_j\, \bar{\sigma}_j^{-1}.
\]

Inserting Z = z_{S+j} x_{S+j}^* − Re⟨Δ_j, z_{S+j} x_{S+j}^*⟩ Δ_j, one obtains that the differential Equation (29) is equivalent to the following system of differential equations for σ_j, p_j and q_j, where we set α_j = p_j^* z_{S+j} ∈ ℂ and β_j = q_j^* x_{S+j} ∈ ℂ:

\[
\begin{aligned}
\dot{\sigma}_j &= \alpha_j \bar{\beta}_j - \mathrm{Re}(\bar{\alpha}_j \beta_j \sigma_j)\,\sigma_j = \mathrm{i}\,\mathrm{Im}(\alpha_j \bar{\beta}_j \bar{\sigma}_j)\,\sigma_j, \\
\dot{p}_j &= (z_{S+j} - \alpha_j p_j)\,\bar{\beta}_j\, \sigma_j^{-1}, \\
\dot{q}_j &= (x_{S+j} - \beta_j q_j)\,\bar{\alpha}_j\, \bar{\sigma}_j^{-1}.
\end{aligned} \tag{30}
\]

The derivation of this system of ODEs is straightforward; we refer the interested reader to [

The monotonicity and the characterization of stationary points follow analogously to those obtained for Equation (26); we also refer to [

In our two-level algorithm for determining ϵ, we use the perturbation Δ obtained for the previous value of ϵ as the initial value for the system of ODEs. However, it remains to discuss a suitable choice of the initial values Δ(0) = Δ₀ and ϵ₀ at the very beginning of the algorithm.

For the moment, let us assume that M is invertible and write

\[
I - \epsilon_0 M \Delta_0 = M (M^{-1} - \epsilon_0 \Delta_0),
\]

which we aim to have as close to singularity as possible. To determine Δ₀, we perform an asymptotic analysis around ϵ₀ ≈ 0. For this purpose, consider the matrix-valued function

\[
G(\tau) = M^{-1} - \tau \Delta_0,
\]

and let χ(τ) denote an eigenvalue of G(τ) with smallest modulus. Letting x and y denote the right and left eigenvectors corresponding to χ(0) = χ₀ = |χ₀|e^{iθ}, scaled such that e^{iθ} y^* x > 0, Lemma 3.1 implies

\[
\frac{d}{d\tau}|\chi(\tau)|^2 \Big|_{\tau=0}
= 2\,\mathrm{Re}(\bar{\chi}_0\, \dot{\chi}(0))
= -2\,\mathrm{Re}\!\left( \frac{\bar{\chi}_0\, y^* \Delta_0 x}{y^* x} \right)
= -2|\chi_0|\,\mathrm{Re}\!\left( \frac{y^* \Delta_0 x}{e^{i\theta}\, y^* x} \right)
= -\frac{2|\chi_0|}{|y^* x|}\, \mathrm{Re}\langle y x^*, \Delta_0 \rangle.
\]

In order to obtain the locally maximal decrease of |χ(τ)|² at τ = 0, we choose

\[
\Delta_0 = D\, P^{\mathcal{B}}(y x^*), \tag{31}
\]

where the positive diagonal matrix D is chosen such that Δ₀ ∈ B₁. This is always possible under the genericity assumptions (12) and (13). The orthogonal projector P^B onto B can be expressed in analogy to Equation (17) for P^{B∗}, with the notable difference that γ_l = Re(tr(C_l)/r_l) for l = S′ + 1, ⋯, S. Note that there is no need to form M^{−1}: x and y can be obtained as the eigenvectors associated to a largest eigenvalue of M. However, attention needs to be paid to the scaling: since the largest eigenvalue of M is |χ₀|^{−1} e^{−iθ}, y and x have to be scaled accordingly.

A possible choice for ϵ₀ is obtained by solving the following simple linear equation, resulting from the first-order expansion of |χ(τ)|² at τ = 0:

\[
|\chi_0|^2 + \frac{d}{d\tau}|\chi(\tau)|^2 \Big|_{\tau=0}\, \epsilon_0 = 0.
\]

This gives

\[
\epsilon_0 = \frac{|\chi_0|\, |y^* x|}{2\, \mathrm{Re}\langle y x^*, \Delta_0 \rangle} = \frac{|\chi_0|\, |y^* x|}{2\, \| P^{\mathcal{B}}(y x^*) \|}. \tag{32}
\]

This can be improved in a simple way by computing the expression for ϵ₀ for several eigenvalues of M (say, the m largest ones) and taking the smallest computed ϵ₀. For a sparse matrix M, the MATLAB function eigs (an interface to ARPACK, which implements the implicitly restarted Arnoldi method [

Another possible, very natural choice for ϵ 0 is given by

\[
\epsilon_0 = \frac{1}{\overline{\mu}_{\mathcal{B}}(M)}, \tag{33}
\]

where μ̄_B(M) is the upper bound for the SSV computed by the MATLAB function mussv.

This section discusses the outer algorithm for computing a lower bound of μ_B(M). Since the principles are the same, we treat the case of purely complex perturbations in detail and only briefly discuss the extension to the case of mixed complex/real perturbations.

Purely complex perturbations. In the following, let λ(ϵ) denote a continuous branch of (local) maximizers for

\[
\max_{\lambda \in \Lambda_{\epsilon}^{\mathcal{B}^*}(M)} |\lambda|,
\]

computed by determining the stationary points Δ(ϵ) of the system of ODEs in Equation (26). The computation of the SSV is equivalent to finding the smallest solution ϵ of the equation |λ(ϵ)| = 1. In order to approximate this solution, we aim at computing ϵ⋆ such that the ϵ⋆-spectral value set is locally contained in the unit disk and its boundary ∂Λ_{ϵ⋆}^{B∗}(M) is tangential to the unit circle. This provides the lower bound 1/ϵ⋆ for μ_B(M).

We now make the following generic assumption.

Assumption 4.1. [

The following theorem gives an explicit and easily computable expression for the derivative of | λ ( ϵ ) | .

Theorem 4.1. [

\[
\frac{d|\lambda(\epsilon)|}{d\epsilon} = \frac{1}{|y(\epsilon)^* x(\epsilon)|} \left( \sum_{i=1}^{S} |z_i(\epsilon)^* x_i(\epsilon)| + \sum_{j=1}^{F} \|z_{S+j}(\epsilon)\|\, \|x_{S+j}(\epsilon)\| \right) > 0. \tag{34}
\]
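Given this derivative, the outer iteration for solving |λ(ϵ)| = 1 can be sketched as a standard Newton step (a sketch only; any safeguarding used in practice is not reproduced here):

\[
\epsilon_{k+1} = \epsilon_k - \frac{|\lambda(\epsilon_k)| - 1}{\,d|\lambda(\epsilon_k)|/d\epsilon\,},
\]

which is well defined since, by Theorem 4.1, the derivative is strictly positive.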

This section compares the numerical results for lower bounds of the SSV computed by the well-known MATLAB function mussv and by the algorithm [

Example 1. Consider the following three-dimensional companion matrix,

\[
M = \begin{bmatrix} 0 & 7 & -6 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},
\]

along with the perturbation set

\[
\mathcal{B} = \{ \operatorname{diag}(\delta_1 I_1, \delta_2 I_1, \delta_3 I_1) : \delta_1, \delta_2, \delta_3 \in \mathbb{C} \}.
\]

Applying the MATLAB routine mussv, we obtain the perturbation Δ̂ with

\[
\hat{\Delta} = \begin{bmatrix} -0.3333 & 0 & 0 \\ 0 & -0.3333 & 0 \\ 0 & 0 & -0.3333 \end{bmatrix},
\]

and ‖Δ̂‖₂ = 0.3333. For this example, mussv returns the upper bound μ^{upper}_{PD} = 3.0103 and the lower bound μ^{lower}_{PD} = 3.0000.

By using the algorithm [

\[
\Delta_* = \begin{bmatrix} -1.0000 + 0.0000i & 0 & 0 \\ 0 & -1.0000 + 0.0000i & 0 \\ 0 & 0 & -1.0000 + 0.0000i \end{bmatrix},
\]

and ϵ∗ = 0.3333 with ‖Δ∗‖₂ = 1. This yields the same lower bound μ^{lower}_{New} = 3.0000 as the one obtained by mussv.
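The numbers in Example 1 can be checked directly: the characteristic polynomial of M factors as (λ − 1)(λ − 2)(λ + 3), so ρ(M) = 3, and Δ̂ = −(1/3)I makes I − MΔ̂ singular. A quick check (assuming NumPy):

```python
import numpy as np

M = np.array([[0.0, 7.0, -6.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Eigenvalues of M: 1, 2 and -3, so the spectral radius is 3.
eigs = np.sort(np.linalg.eigvals(M).real)

# The mussv perturbation: Delta_hat = -(1/3) * I makes I - M*Delta singular.
Delta = -(1.0 / 3.0) * np.eye(3)
residual = np.linalg.det(np.eye(3) - M @ Delta)   # vanishes up to roundoff

# Lower bound 1 / ||Delta||_2 = 3.0000, matching both methods.
mu_lower = 1.0 / np.linalg.norm(Delta, 2)
```

Since all three δ_i come out equal, this lower bound coincides with ρ(M), consistent with the discussion after Equation (3).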

Example 2. Consider the following four-dimensional companion matrix,

\[
M = \begin{bmatrix} 0.7500 & -1.7500 & 0.5000 & -0.7500 \\ 1.0000 & 0 & 0 & 0 \\ 0 & 1.0000 & 0 & 0 \\ 0 & 0 & 1.0000 & 0 \end{bmatrix},
\]

along with the perturbation set

\[
\mathcal{B} = \{ \operatorname{diag}(\delta_1 I_1, \Delta_2, \delta_3 I_1) : \delta_1 \in \mathbb{C},\ \Delta_2 \in \mathbb{C}^{2 \times 2},\ \delta_3 \in \mathbb{C} \}.
\]

Applying the MATLAB routine mussv, we obtain the perturbation Δ̂ with

\[
\hat{\Delta} = \begin{bmatrix} 0.5217 & 0 & 0 & 0 \\ 0 & -0.4267 & 0.2393 & 0 \\ 0 & 0.1582 & -0.0887 & 0 \\ 0 & 0 & 0 & -0.5217 \end{bmatrix},
\]

and ‖Δ̂‖₂ = 0.5247. For this example, mussv returns the upper bound μ^{upper}_{PD} = 1.9221 and the lower bound μ^{lower}_{PD} = 1.9268.

By using the algorithm [

\[
\Delta_* = \begin{bmatrix} 1.0000 + 0.0005i & 0 & 0 & 0 \\ 0 & -0.8180 + 0.0003i & 0.4593 + 0.0000i & 0 \\ 0 & 0.3008 - 0.0254i & -0.1689 + 0.0142i & 0 \\ 0 & 0 & 0 & -0.9829 - 0.1839i \end{bmatrix},
\]

and ϵ∗ = 1.9136 with ‖Δ∗‖₂ = 1, which yields the lower bound μ^{lower}_{New} = 0.5226.

Example 3. Consider the following five-dimensional companion matrix,

\[
M = \begin{bmatrix} -3 & 1 & -2 & 3 & -5 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix},
\]

along with the perturbation set

\[
\mathcal{B} = \{ \operatorname{diag}(\delta_1 I_1, \delta_2 I_1, \delta_3 I_1, \Delta_2) : \delta_1, \delta_2, \delta_3 \in \mathbb{C},\ \Delta_2 \in \mathbb{C}^{2 \times 2} \}.
\]

Applying the MATLAB routine mussv, we obtain the perturbation Δ̂ with

\[
\hat{\Delta} = \begin{bmatrix} -0.2803 & 0 & 0 & 0 & 0 \\ 0 & -0.2803 & 0 & 0 & 0 \\ 0 & 0 & -0.2803 & 0 & 0 \\ 0 & 0 & 0 & -0.1512 & 0.0234 \\ 0 & 0 & 0 & 0.2321 & -0.0359 \end{bmatrix},
\]

and ‖Δ̂‖₂ = 0.2803. For this example, mussv returns the upper bound μ^{upper}_{PD} = 3.5876 and the lower bound μ^{lower}_{PD} = 3.5674.

By using the algorithm [

\[
\Delta_* = \begin{bmatrix} -1.0000 & 0 & 0 & 0 & 0 \\ 0 & -1.0000 & 0 & 0 & 0 \\ 0 & 0 & -1.0000 & 0 & 0 \\ 0 & 0 & 0 & -0.5393 - 0.0000i & 0.0835 - 0.0000i \\ 0 & 0 & 0 & 0.8281 - 0.0000i & -0.1282 + 0.0000i \end{bmatrix},
\]

and ϵ∗ = 0.2803 with ‖Δ∗‖₂ = 1, which yields the lower bound μ^{lower}_{New} = 3.5674.

Example 4. Consider the following nine-dimensional companion matrix,

\[
M = \begin{bmatrix}
-2 & 3 & 1 & -9 & -6 & 2 & 1 & -12 & 14 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{bmatrix},
\]

along with the perturbation set

\[
\mathcal{B} = \{ \operatorname{diag}(\delta_1 I_1, \delta_2 I_1, \delta_3 I_1, \Delta_2, \Delta_3, \delta_4) : \delta_1, \delta_2, \delta_3 \in \mathbb{C},\ \Delta_2 \in \mathbb{C}^{2 \times 2},\ \Delta_3 \in \mathbb{C}^{3 \times 3},\ \delta_4 \in \mathbb{C} \}.
\]

Applying the MATLAB routine mussv, we obtain a perturbation Δ̂ with

‖Δ̂‖₂ = 0.2976. For this example, mussv returns the upper bound μ^{upper}_{PD} = 3.4181 and the lower bound μ^{lower}_{PD} = 3.3603.

By using the algorithm [

and ϵ∗ = 0.2976 with ‖Δ∗‖₂ = 1, which yields the lower bound μ^{lower}_{New} = 3.3603.

In the following table, we present the comparison of the bounds of the SSV computed by mussv and the algorithm [

\[
M = \begin{bmatrix} 0 & 7 & -6 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},
\]

and

In the following table, we present the comparison of the bounds of the SSV computed by mussv and the algorithm [

\[
M = \begin{bmatrix} 0.7500 & -1.7500 & 0.5000 & -0.7500 \\ 1.0000 & 0 & 0 & 0 \\ 0 & 1.0000 & 0 & 0 \\ 0 & 0 & 1.0000 & 0 \end{bmatrix},
\]

and

In the following table, we present the comparison of the bounds of the SSV computed by mussv and the algorithm [

\[
M = \begin{bmatrix} -3 & 1 & -2 & 3 & -5 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix},
\]

and

In the following table, we present the comparison of the bounds of the SSV computed by mussv and the algorithm [

\[
M = \begin{bmatrix}
-2 & 3 & 1 & -9 & -6 & 2 & 1 & -12 & 14 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{bmatrix},
\]

and

In this article, the problem of approximating structured singular values for companion matrices is considered. The obtained results provide a characterization of extremizers and gradient systems, which can be integrated numerically in order to provide approximations from below to the structured singular value of a matrix subject to general pure complex and mixed real/complex block perturbations. The experimental results compare the lower bounds of structured singular values for companion matrices computed by mussv and the algorithm [

Rehman, M.-U. and Tabassum, S. (2017) Numerical Computation of Structured Singular Values for Companion Matrices. Journal of Applied Mathematics and Physics, 5, 1057-1072. https://doi.org/10.4236/jamp.2017.55093