Advances in Linear Algebra & Matrix Theory (ALAMT), Vol. 10, No. 2, 22-33. ISSN 2165-333X. Scientific Research Publishing. DOI: 10.4236/alamt.2020.102003

Constrained Low Rank Approximation of the Hermitian Nonnegative-Definite Matrix

Haixia Chang, School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance, Shanghai, China

Received 7 May 2020; accepted 26 June 2020; published 29 June 2020.

Copyright © 2020 by the author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

In this paper, we consider a constrained low rank approximation problem: m i n r ( X ) = p , X ∈ Ω ‖ X − E ‖ , where E is a given complex matrix, p is a positive integer, and Ω is the set of Hermitian nonnegative-definite least squares solutions to the matrix equation A X A ∗ = B . We discuss the range of p and derive the corresponding explicit expression of the solution to the constrained low rank approximation problem by matrix decompositions. An algorithm for the problem is proposed, and a numerical example is given to show its feasibility.

Keywords: Low Rank Approximation, Hermitian Matrix, Nonnegative-Definite Matrix, Least Squares
1. Introduction

Throughout, let ℂ m × n denote the set of all complex m × n matrices, U ( n ) the set of all n × n unitary matrices, ℍ ( n ) the set of all n × n Hermitian matrices, and ℍ 0 + ( n ) the set of Hermitian nonnegative-definite matrices. The symbols I, A ∗ , A † , r ( A ) and ‖ A ‖ stand for the identity matrix of appropriate size, the conjugate transpose, the Moore-Penrose inverse, the rank, and the Frobenius norm of A ∈ ℂ m × n , respectively. If a square matrix A is invertible, its inverse is denoted by A − 1 .
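As a quick illustration of this notation, the following is a minimal sketch assuming NumPy; the matrix A below is an arbitrary example, not taken from the paper.

```python
import numpy as np

A = np.array([[1 + 2j, 0], [3 - 1j, 0], [0, 1j]])

A_star = A.conj().T                    # conjugate transpose A*
A_dag = np.linalg.pinv(A)              # Moore-Penrose inverse A†
r_A = np.linalg.matrix_rank(A)         # rank r(A)
fro = np.linalg.norm(A, 'fro')         # Frobenius norm ||A||

# The Penrose conditions hold for A†, e.g. A A† A = A:
assert np.allclose(A @ A_dag @ A, A)
```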

In the last few years, structured low rank matrix approximation has been a topic of very active research in matrix theory and its applications. Empirical data collected in a matrix are expected to satisfy the special structural properties or the desirable rank of the original system, but in practice they often fail to do so. Solving a low rank approximation of a general data matrix is therefore an important task in many disciplines.

The structured low rank approximation problem can be written as follows: given a matrix E, a positive integer p, and a matrix class Ω , find a matrix X satisfying

m i n X ∈ Ω , r ( X ) ≤ p ‖ X − E ‖ ,

This formulation was given by M.T. Chu et al. in 2003. The structured low rank approximation problem and the applications associated with different constraint sets Ω have been extensively studied. Generally speaking, Ω carries a linear structure, e.g., symmetric Toeplitz, Hankel, upper Hessenberg, Sylvester, correlation, CP, or banded matrices with fixed bandwidth. For example, in noise removal in signal processing or in image enhancement, the underlying covariance matrix has Toeplitz or block Toeplitz structure. In the model reduction problem in encoding and filter design, the underlying structured matrix is a Hankel matrix. In computer algebra, approximating the greatest common divisor of polynomials can be formulated as a low rank approximation problem with Sylvester structure. In computing the nearest Euclidean distance matrix, a symmetric nonnegative matrix of rank 5 is a necessary condition for the approximation. In the asset portfolio problem, the structured low rank approximation concerns the correlation matrix.

On the other hand, the problems for a (skew) Hermitian solution, Hermitian nonnegative-definite solution, and Hermitian nonnegative-definite least squares solution to the linear matrix equation

A X A ∗ = B (1.1)

in the literature have been widely studied, where A ∈ ℂ m × n and B ∈ ℍ 0 + ( m ) are given matrices. Baksalary, Groß, and Khatri and Mitra each derived a general Hermitian nonnegative-definite solution to (1.1). Groß also obtained a representation of the general Hermitian nonnegative-definite solution to Equation (1.1), which admits an easy way to obtain solutions of minimal and maximal rank, respectively. Dai and Lancaster studied a similar problem for Equation (1.1) in the real setting. Zhang and Cheng, and Wei and Wang studied the fixed rank Hermitian nonnegative-definite solution to the matrix equation A X A ∗ = B and the corresponding least squares problem in the Frobenius norm; they discussed the range of the rank k and derived expressions of the solutions by applying the SVD of the matrix A. Liu et al. studied the rank constrained matrix best approximation problem with respect to (skew) Hermitian matrices by the singular value decomposition of B. For the rank constrained (skew) Hermitian or Hermitian nonnegative-definite least squares solutions of (1.1) in the spectral norm, the authors applied the norm-preserving dilation theorem and matrix decompositions to obtain the solution.

Motivated by the above work, in this paper we study the constrained low rank approximation of a Hermitian nonnegative-definite matrix. The problem can be stated as follows.

Problem 1. Given A ∈ ℂ m × n , B ∈ ℂ m × m , E ∈ ℂ n × n , and a positive integer p. Find X ^ such that

X ^ : = a r g m i n X ∈ Ω , r ( X ) = p ‖ X − E ‖ ,

where Ω = { X | X ∈ ℍ 0 + ( n ) , ‖ A X A ∗ − B ‖ = m i n } .

This paper is organized as follows. We give some preliminary results in Section 2. In Section 3, we first characterize the matrix set Ω of Hermitian nonnegative-definite least squares solutions to the matrix equation A X A ∗ = B by matrix decompositions. Then we use partitioned matrix techniques to discuss the range of p and establish the corresponding explicit solution to Problem 1. In Section 4, an algorithm is designed to determine the solution to Problem 1, and an example is presented to illustrate the results obtained in this paper.

2. Preliminaries

In this section, we give some preliminary results.

Lemma 2.1. For given matrices M ∈ ℂ n 1 × n 1 , N ∈ ℂ n 2 × n 2 , and Y ∈ ℂ n 1 × n 2 , the partitioned matrix

X = [ M Y Y ∗ N ] ∈ H 0 + ( n )

if and only if

M ∈ H 0 + ( n 1 ) , N − Y ∗ M † Y ∈ H 0 + ( n 2 ) , R ( Y ) ⊆ R ( M ) ;

or

M ∈ H 0 + ( n 1 ) , N = N 0 + Y ∗ M † Y , Y = M M † Y ,

where N 0 ∈ H 0 + ( n 2 ) . Furthermore, the equality

r ( X ) = r ( M ) + r ( N − Y ∗ M † Y ) ,

holds.
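The characterization and the rank identity of Lemma 2.1 can be checked numerically. The following sketch (NumPy assumed; the blocks M, Y, N0 are hypothetical values chosen for illustration) builds X from its blocks and verifies both conclusions:

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])            # M in H0+(2), here nonsingular
Y = np.array([[1.0 + 1j], [0.5 - 0.5j]])          # R(Y) ⊆ R(M) automatically
Mdag = np.linalg.pinv(M)
N0 = np.array([[1.0]])                            # N0 in H0+(1)
N = N0 + Y.conj().T @ Mdag @ Y                    # so the Schur complement is N0

X = np.block([[M, Y], [Y.conj().T, N]])

eigs = np.linalg.eigvalsh(X)
assert eigs.min() > -1e-12                        # X is nonnegative definite
r = np.linalg.matrix_rank
assert r(X) == r(M) + r(N - Y.conj().T @ Mdag @ Y)
```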

The following lemma is cited from Lemma 2.2 in the literature.

Lemma 2.2. Let G be an n × n Hermitian matrix of the form

G = Q d i a g ( Λ + , Λ − ) Q ∗ ,

where Q is an n × n unitary matrix,

Λ + = d i a g ( λ 1 , λ 2 , ⋯ , λ n 1 ) , Λ − = d i a g ( λ n 1 + 1 , λ n 1 + 2 , ⋯ , λ n ) ,

and λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n 1 > 0 , λ n 1 + 1 ≤ λ n 1 + 2 ≤ ⋯ ≤ λ n ≤ 0 . The parameter p is a given nonnegative integer. Then there exists Y ˜ ∈ H 0 + ( n ) with r ( Y ˜ ) = p such that

Y ˜ : = arg m i n r ( Y ) = p , Y ∈ H 0 + ( n ) ‖ Y − G ‖

if and only if 0 ≤ p ≤ n 1 . If 0 ≤ p ≤ n 1 , then

min r ( Y ) = p , Y ∈ H 0 + ( n ) ‖ Y − G ‖ = ( ∑ i = p + 1 n λ i 2 ) 1 / 2 .

Furthermore, if λ p > λ p + 1 , then

Y ˜ = Q d i a g ( λ 1 , λ 2 , ⋯ , λ p , 0 , ⋯ , 0 ) Q ∗ .

If k < p < l ≤ n 1 and λ k > λ k + 1 = ⋯ = λ l > λ l + 1 , then

Y ˜ = Q d i a g ( λ 1 , λ 2 , ⋯ , λ k , λ p P P ∗ , 0 , ⋯ , 0 ) Q ∗ ,

where P is an arbitrary matrix satisfying P ∈ ℂ ( l − k ) × ( p − k ) and P ∗ P = I p − k .
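A sketch of the construction in Lemma 2.2 (NumPy assumed; the helper name `nearest_psd_rank_p` is ours) for the generic case λ p > λ p + 1 : keep the p largest eigenvalues of G and zero out the rest.

```python
import numpy as np

def nearest_psd_rank_p(G, p):
    """Nearest Y >= 0 with r(Y) = p to Hermitian G in Frobenius norm.

    Exists only when p does not exceed the number of positive
    eigenvalues of G (the n1 of Lemma 2.2)."""
    lam, Q = np.linalg.eigh(G)            # ascending eigenvalues
    lam, Q = lam[::-1], Q[:, ::-1]        # reorder to descending
    n1 = np.sum(lam > 0)
    if p > n1:
        raise ValueError("no rank-p PSD minimizer exists")
    kept = np.concatenate([lam[:p], np.zeros(len(lam) - p)])
    return Q @ np.diag(kept) @ Q.conj().T

G = np.diag([3.0, 1.0, -2.0])
Y = nearest_psd_rank_p(G, 1)
# residual ||Y - G|| = sqrt(1^2 + (-2)^2) = sqrt(5), matching the lemma
assert np.isclose(np.linalg.norm(Y - G), np.sqrt(5.0))
```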

3. The General Solution to Problem 1

In this section, we consider the general solution of Problem 1 proposed in Section 1.

Suppose that the matrices A ∈ ℂ m × n and B ∈ ℂ m × m are given with r a n k ( A ) = r A . Let B = B H + B S , where

B H = B + B ∗ 2 , B S = B − B ∗ 2 , (3.1)

B H is an m × m Hermitian matrix, and B S is an m × m skew-Hermitian matrix. Let the singular value decomposition (SVD) of A be as follows

A = U ( Σ r A 0 0 0 ) V ∗ , (3.2)

where the diagonal elements of the diagonal matrix Σ r A are the positive singular values of A, and U ∈ U ( m ) , V ∈ U ( n ) . Let

U ∗ B H U = ( B 0 B 1 B 1 ∗ B 2 ) , (3.3)

where B 0 ∈ ℍ ( r A ) , B 2 ∈ ℍ ( m − r A ) . By (3.1)-(3.3), the unitary invariance of the Frobenius norm, and assuming

V ∗ X V = ( Y 0 Y 1 Y 1 ∗ Y 2 ) , (3.4)

Y 0 ∈ ℍ 0 + ( r A ) , Y 2 ∈ ℍ 0 + ( n − r A ) , we get

‖ A X A ∗ − B ‖ 2 = ‖ A X A ∗ − B H ‖ 2 + ‖ B S ‖ 2
= ‖ U [ Σ r A 0 0 0 ] V ∗ X V [ Σ r A 0 0 0 ] U ∗ − B H ‖ 2 + ‖ B S ‖ 2
= ‖ [ Σ r A 0 0 0 ] V ∗ X V [ Σ r A 0 0 0 ] − U ∗ B H U ‖ 2 + ‖ B S ‖ 2
= ‖ [ Σ r A 0 0 0 ] [ Y 0 Y 1 Y 1 ∗ Y 2 ] [ Σ r A 0 0 0 ] − [ B 0 B 1 B 1 ∗ B 2 ] ‖ 2 + ‖ B S ‖ 2
= ‖ [ Σ r A Y 0 Σ r A − B 0 − B 1 − B 1 ∗ − B 2 ] ‖ 2 + ‖ B S ‖ 2
= ‖ Σ r A Y 0 Σ r A − B 0 ‖ 2 + 2 ‖ B 1 ‖ 2 + ‖ B 2 ‖ 2 + ‖ B S ‖ 2 .

Hence minimizing ‖ A X A ∗ − B ‖ over X ∈ ℍ 0 + ( n ) reduces to the problem

min Y 0 ∈ ℍ 0 + ( r A ) ‖ Σ r A Y 0 Σ r A − B 0 ‖ . (3.5)

We obtain the following result.

Theorem 3.1. Given A ∈ ℂ m × n and B ∈ ℂ m × m with r a n k ( A ) = r A . The notation B H is defined in (3.1). Let the SVD of A be as in (3.2), and partition U ∗ B H U as in (3.3) with B 0 ∈ ℍ ( r A ) , B 2 ∈ ℍ ( m − r A ) . Suppose the matrix B 0 has s positive eigenvalues; then B 0 has the following decomposition

B 0 = W d i a g ( Λ + , Λ − ) W ∗ , (3.6)

where W ∈ U ( r A ) , Λ + = d i a g ( λ 1 , ⋯ , λ s ) , λ 1 ≥ ⋯ ≥ λ s > 0 , Λ − = d i a g ( λ s + 1 , ⋯ , λ r A ) , λ s + 1 ≤ ⋯ ≤ λ r A ≤ 0 . And the eigendecomposition of Σ r A − 1 W d i a g ( Λ + , 0 ) W ∗ Σ r A − 1 is

Σ r A − 1 W d i a g ( Λ + , 0 ) W ∗ Σ r A − 1 = L [ D 1 0 0 0 ] L ∗ , (3.7)

in which L ∈ U ( r A ) and D 1 ∈ ℍ + ( s ) is a positive diagonal matrix. Assume

Q = V [ L 0 0 I n − r A ] . (3.8)

Then the Hermitian nonnegative-definite least squares solution of A X A ∗ = B can be expressed as

X = Q [ D 1 0 Y 11 0 0 0 Y 11 ∗ 0 Y 2 ] Q ∗ (3.9)

where Y 11 ∈ ℂ s × ( n − r A ) and Y 2 − Y 11 ∗ D 1 − 1 Y 11 ∈ ℍ 0 + ( n − r A ) .

Proof. According to the above analysis, the matrix equation A X A ∗ = B has a Hermitian nonnegative-definite least squares solution if and only if (3.5) is solvable. By Lemma 2.2 and (3.6), the minimum in (3.5) is attained if and only if

Y 0 = Σ r A − 1 W d i a g ( Λ + , 0 ) W ∗ Σ r A − 1 . (3.10)

It follows from r ( Y 0 ) = s and Y 0 ∈ ℍ 0 + ( r A ) that the eigendecomposition of Y 0 in (3.10) has the form of (3.7). Substituting (3.10) into (3.4), we get

X = V [ L [ D 1 0 0 0 ] L ∗ Y 1 Y 1 ∗ Y 2 ] V ∗ . (3.11)

Assume L ∗ Y 1 = [ Y 11 Y 12 ] , Y 11 ∈ ℂ s × ( n − r A ) , Y 12 ∈ ℂ ( r A − s ) × ( n − r A ) , then by (3.8), Q ∈ U ( n ) and (3.11) becomes

X = V [ L ( D 1 0 0 0 ) L ∗ L ( Y 11 Y 12 ) ( Y 11 ∗ Y 12 ∗ ) L ∗ Y 2 ] V ∗ = V [ L 0 0 I n − r A ] [ D 1 0 Y 11 0 0 Y 12 Y 11 ∗ Y 12 ∗ Y 2 ] [ L ∗ 0 0 I n − r A ] V ∗ = Q [ D 1 0 Y 11 0 0 Y 12 Y 11 ∗ Y 12 ∗ Y 2 ] Q ∗ (3.12)

It implies that X ∈ ℍ 0 + ( n ) in (3.12) if and only if [ D 1 0 Y 11 0 0 Y 12 Y 11 ∗ Y 12 ∗ Y 2 ] ∈ ℍ 0 + ( n ) . By Lemma 2.1, this is equivalent to Y 12 = 0 and Y 2 − Y 11 ∗ D 1 − 1 Y 11 ∈ ℍ 0 + ( n − r A ) , so that X has the form (3.9).
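The construction in Theorem 3.1 can be sketched numerically as follows (NumPy assumed; the helper `omega_parameters` and its tolerance handling are ours, not from the paper). It returns the pieces Q , D 1 , s , r A that parameterize the solution set (3.9):

```python
import numpy as np

def omega_parameters(A, B, tol=1e-12):
    """Compute Q, D1, s, rA of Theorem 3.1 from A and B."""
    m, n = A.shape
    BH = (B + B.conj().T) / 2                       # Hermitian part (3.1)
    U, sv, Vh = np.linalg.svd(A)                    # SVD (3.2)
    rA = int(np.sum(sv > tol * sv[0]))
    Sig_inv = np.diag(1.0 / sv[:rA])
    V = Vh.conj().T
    B0 = (U.conj().T @ BH @ U)[:rA, :rA]            # block (3.3)
    lam, W = np.linalg.eigh(B0)
    lam, W = lam[::-1], W[:, ::-1]                  # descending, as in (3.6)
    s = int(np.sum(lam > tol))
    Lam_plus = np.diag(np.concatenate([lam[:s], np.zeros(rA - s)]))
    Y0 = Sig_inv @ W @ Lam_plus @ W.conj().T @ Sig_inv   # (3.10)
    d, L = np.linalg.eigh(Y0)                       # eigendecomposition (3.7)
    d, L = d[::-1], L[:, ::-1]
    Q = V @ np.block([[L, np.zeros((rA, n - rA))],  # (3.8)
                      [np.zeros((n - rA, rA)), np.eye(n - rA)]])
    return Q, np.diag(d[:s]), s, rA
```

Taking Y 11 = 0 and Y 2 = 0 in (3.9) then yields the minimal-rank element of Ω.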

By Theorem 3.1, any X ∈ Ω defined in Problem 1 has the form (3.9) with Y 11 ∈ ℂ s × ( n − r A ) and Y 2 − Y 11 ∗ D 1 − 1 Y 11 ∈ ℍ 0 + ( n − r A ) . Now we consider Problem 1 and give the explicit expression of its solution.

Theorem 3.2. Given E ∈ ℂ n × n , and suppose the set Ω in Problem 1 is nonempty. The notations are the same as in Theorem 3.1, and the elements of Ω are given by (3.9). Let

E 1 = E + E * 2 , E 2 = E − E * 2 . (3.13)

The eigenvalue decomposition of Q ∗ E 1 Q has the form of

Q ∗ E 1 Q = T [ Ψ + 0 0 Ψ − ] T * , (3.14)

where T ∈ U ( n ) , Ψ + = d i a g ( ψ 1 , ⋯ , ψ d ) , ψ 1 ≥ ⋯ ≥ ψ d > 0 , Ψ − = d i a g ( ψ d + 1 , ⋯ , ψ n ) , ψ d + 1 ≤ ⋯ ≤ ψ n ≤ 0 . Suppose

T [ Ψ + 0 0 0 ] T * = [ E 11 E 12 E 13 E 12 ∗ E 22 E 23 E 13 ∗ E 23 ∗ E 33 ] , (3.15)

where E 11 ∈ ℍ ( s ) , E 13 ∈ ℂ s × ( n − r A ) , and E 33 ∈ ℍ ( n − r A ) . The eigenvalue decomposition of E 33 − E 13 ∗ D 1 − 1 E 13 is as follows

E 33 − E 13 ∗ D 1 − 1 E 13 = H [ Δ + 0 0 Δ − ] H ∗ , (3.16)

where H ∈ U ( n − r A ) , Δ + = d i a g ( u 1 , ⋯ , u t ) , u 1 ≥ ⋯ ≥ u t > 0 , Δ − = d i a g ( u t + 1 , ⋯ , u n − r A ) , u t + 1 ≤ ⋯ ≤ u n − r A ≤ 0 . For a positive integer p, m i n r ( X ) = p , X ∈ Ω ‖ X − E ‖ is solvable if and only if

s ≤ p ≤ s + n − r A . (3.17)

Then the solution X ^ of Problem 1 is given in the following three cases:

1) If p = s , then

X ^ = Q [ D 1 0 0 0 ] Q ∗ . (3.18)

2) If s < p ≤ s + t , then

X ^ = Q [ D 1 0 E 13 0 0 0 E 13 ∗ 0 Y 2 ] Q ∗ , (3.19)

where

Y 2 = H d i a g ( u 1 , ⋯ , u p − s , 0 , ⋯ , 0 ) H ∗ + E 13 * D 1 − 1 E 13 ; (3.20)

3) If p > s + t , then X ^ has the form (3.19), where

Y 2 = H d i a g ( u 1 , ⋯ , u t P P ∗ , 0 , ⋯ , 0 ) H ∗ + E 13 * D 1 − 1 E 13 , (3.21)

and P is an arbitrary matrix satisfying P ∗ P = I p − s − t .

Proof. According to Theorem 3.1, if X ∈ Ω , then X has the form (3.9), where Q ∈ U ( n ) , D 1 ∈ ℍ + ( s ) is a diagonal matrix, Y 11 ∈ ℂ s × ( n − r A ) and Y 2 − Y 11 ∗ D 1 − 1 Y 11 ∈ ℍ 0 + ( n − r A ) . By Lemma 2.1, r ( X ) = r ( D 1 ) + r ( Y 2 − Y 11 ∗ D 1 − 1 Y 11 ) . Therefore s ≤ r ( X ) = p ≤ s + n − r A .

1) If p = s , then Y 2 = 0 and Y 11 = 0 in (3.9). We get (3.18).

2) If s < p ≤ s + n − r A , then E = E 1 + E 2 , where E 1 , E 2 are defined in (3.13). By the unitary invariance of the Frobenius norm, we have

‖ X − E ‖ 2 = ‖ X − E 1 ‖ 2 + ‖ E 2 ‖ 2
= ‖ Q [ D 1 0 Y 11 0 0 0 Y 11 ∗ 0 Y 2 ] Q ∗ − E 1 ‖ 2 + ‖ E 2 ‖ 2
= ‖ [ D 1 0 Y 11 0 0 0 Y 11 ∗ 0 Y 2 ] − Q ∗ E 1 Q ‖ 2 + ‖ E 2 ‖ 2
= ‖ [ D 1 0 Y 11 0 0 0 Y 11 ∗ 0 Y 2 ] − T [ Ψ + 0 0 0 ] T ∗ ‖ 2 + ‖ T [ 0 0 0 Ψ − ] T ∗ ‖ 2 + ‖ E 2 ‖ 2
= ‖ [ D 1 − E 11 − E 12 Y 11 − E 13 − E 12 ∗ − E 22 − E 23 Y 11 ∗ − E 13 ∗ − E 23 ∗ Y 2 − E 33 ] ‖ 2 + ‖ Ψ − ‖ 2 + ‖ E 2 ‖ 2
= ‖ D 1 − E 11 ‖ 2 + 2 ‖ E 12 ‖ 2 + 2 ‖ Y 11 − E 13 ‖ 2 + ‖ E 22 ‖ 2 + 2 ‖ E 23 ‖ 2 + ‖ Y 2 − E 33 ‖ 2 + ‖ Ψ − ‖ 2 + ‖ E 2 ‖ 2 .

Hence, m i n r ( X ) = p , X ∈ Ω ‖ X − E ‖ is solvable if and only if

m i n Y 11 ∈ ℂ s × ( n − r A ) ‖ Y 11 − E 13 ‖ (3.22)

and

m i n Y 2 − Y 11 ∗ D 1 − 1 Y 11 ∈ ℍ 0 + ( n − r A ) , r ( Y 2 − Y 11 ∗ D 1 − 1 Y 11 ) = p − s ‖ Y 2 − E 33 ‖ (3.23)

are solvable. The minimum in (3.22) is attained at Y 11 = E 13 . And (3.23) is solvable if and only if there exists Z ∈ ℍ 0 + ( n − r A ) attaining min r ( Z ) = p − s ‖ Z − ( E 33 − E 13 ∗ D 1 − 1 E 13 ) ‖ .

holds. Make the eigendecomposition of ( E 33 − E 13 ∗ D 1 − 1 E 13 ) as (3.16). By Lemma 2.2, we discuss the problem from two cases:

1) When p − s ≤ t , we take Y 2 as in (3.20). The explicit expression of the solution to Problem 1 is (3.19) with (3.20).

2) When p − s > t , we take Y 2 as in (3.21). In this case, the solution to Problem 1 has the form (3.19) with (3.21).

The proof is completed.

4. Numerical Examples

In this section, we propose an algorithm for finding the solution of Problem 1 and give an illustrative numerical example.

Algorithm 4.1. 1) Input A , B , E ;

2) Calculate r A , B H by (3.1), E 1 by (3.13);

3) Make the SVD of A with the form of (3.2);

4) Calculate B 0 by (3.3), make the eigendecomposition of B 0 with the form of (3.6), and compute s ;

5) If p does not satisfy (3.17), then the solution set of Problem 1 is empty and terminate the algorithm. Otherwise, continue with the following step;

6) Make the eigendecomposition of Σ r A − 1 W d i a g ( Λ + ,0 ) W ∗ Σ r A − 1 with the form of (3.7);

7) Calculate Q by (3.8);

8) Make the eigenvalue decomposition of Q ∗ E 1 Q as in (3.14), and compute E 13 , E 33 by (3.15);

9) If p = s , calculate X by (3.18) and stop; otherwise, go to step 10;

10) If p > s , make the eigendecomposition of E 33 − E 13 ∗ D 1 − 1 E 13 with the form of (3.16) and calculate t;

11) If 0 < p − s ≤ t , calculate X by (3.19) with (3.20); otherwise, go to step 12;

12) If p − s > t , calculate X by (3.19) with (3.21).
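The steps above can be sketched as follows (NumPy assumed; the function `solve_problem1` and its tolerance handling are ours, and the case p − s > t of (3.21), which involves an arbitrary P with P ∗ P = I , is omitted for brevity):

```python
import numpy as np

def eigh_desc(M):
    """Eigendecomposition with eigenvalues in descending order."""
    lam, Q = np.linalg.eigh(M)
    return lam[::-1], Q[:, ::-1]

def solve_problem1(A, B, E, p, tol=1e-12):
    """Return a solution X of Problem 1, or None if (3.17) fails."""
    m, n = A.shape
    BH = (B + B.conj().T) / 2                    # step 2: (3.1)
    E1 = (E + E.conj().T) / 2                    # step 2: (3.13)
    U, sv, Vh = np.linalg.svd(A)                 # step 3: SVD (3.2)
    rA = int(np.sum(sv > tol * sv[0]))
    Sig_inv = np.diag(1.0 / sv[:rA])
    V = Vh.conj().T
    B0 = (U.conj().T @ BH @ U)[:rA, :rA]         # step 4: (3.3)
    lam, W = eigh_desc(B0)                       # (3.6)
    s = int(np.sum(lam > tol))
    if not (s <= p <= s + n - rA):               # step 5: solvability (3.17)
        return None
    Lam_plus = np.diag(np.concatenate([lam[:s], np.zeros(rA - s)]))
    Y0 = Sig_inv @ W @ Lam_plus @ W.conj().T @ Sig_inv   # step 6: (3.10)
    d, L = eigh_desc(Y0)                         # (3.7)
    D1 = np.diag(d[:s])
    Q = V @ np.block([[L, np.zeros((rA, n - rA))],       # step 7: (3.8)
                      [np.zeros((n - rA, rA)), np.eye(n - rA)]])
    psi, T = eigh_desc(Q.conj().T @ E1 @ Q)      # step 8: (3.14)
    Epos = T @ np.diag(np.maximum(psi, 0)) @ T.conj().T  # T diag(Psi+, 0) T*
    E13 = Epos[:s, rA:]                          # blocks of (3.15)
    E33 = Epos[rA:, rA:]
    X = np.zeros((n, n), dtype=complex)
    if p == s:                                   # step 9: (3.18)
        X[:s, :s] = D1
        return Q @ X @ Q.conj().T
    S = E33 - E13.conj().T @ np.linalg.inv(D1) @ E13     # step 10: (3.16)
    u, H = eigh_desc(S)
    t = int(np.sum(u > tol))
    if p - s > t:                                # case (3.21) not sketched
        raise NotImplementedError("needs an arbitrary P with P*P = I")
    kept = np.concatenate([u[:p - s], np.zeros(len(u) - (p - s))])
    Y2 = (H @ np.diag(kept) @ H.conj().T
          + E13.conj().T @ np.linalg.inv(D1) @ E13)      # step 11: (3.20)
    X[:s, :s] = D1                               # assemble (3.19)
    X[:s, rA:] = E13
    X[rA:, :s] = E13.conj().T
    X[rA:, rA:] = Y2
    return Q @ X @ Q.conj().T
```

For instance, with the hypothetical inputs A = [ I 2 0 ] ∈ ℂ 2 × 3 , B = d i a g ( 2 , 3 ) and E = d i a g ( 0 , 0 , 5 ) , the call solve_problem1(A, B, E, 3) returns a rank-3 Hermitian nonnegative-definite X with A X A ∗ = B .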

Example 4.2. Suppose n = 6 , A ∈ ℂ 5 × 6 , B ∈ ℂ 6 × 6 , E ∈ ℂ 6 × 6 and

A = [ − 2.2100 + 1.2000 i 2.0000 − 1.0000 i 0.0000 + 0.0000 i − 0.0900 + 1.3500 i 4.3200 − 2.3100 i 1.2120 + 1.9804 i 0.0000 + 3.2510 i 0.0000 + 0.0000 i 2.1800 − 0.3200 i − 0.2300 + 1.5000 i − 0.1724 − 0.0995 i 1.0500 − 2.0000 i 2.4000 + 1.2500 i 0.0000 + 0.0000 i 0.0000 + 3.2106 i − 1.6080 + 0.0000 i 2.2560 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 3.2000 + 0.0000 i − 4.2000 + 1.2000 i 0.0000 + 0.0000 i 1.1025 − 0.2400 i − 0.8600 − 0.2408 i 1.2010 + 0.7000 i ] ,

B = [ 4.2000 + 2.6500 i 1.5800 − 0.2300 i 2.2020 + 1.5680 i 0.0000 + 2.4200 i 2.5160 − 0.7600 i 1.0120 − 0.2820 i − 3.5100 + 1.5600 i 1.3800 − 0.8000 i 1.3050 + 2.8000 i 1.8800 − 0.7890 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 2.9000 − 1.0680 i 0.0000 − 1.5300 i 5.6800 + 1.6030 i − 4.0150 + 2.1200 i 3.4200 − 2.6500 i 2.9500 + 0.0000 i 2.4400 − 0.6405 i 1.4800 − 1.0503 i 3.2750 − 1.4980 i 0.0000 + 1.3450 i ] ,

E = [ 0.1020 + 0.0000 i − 1.2400 + 0.0056 i 0.1460 − 0.2320 i 0.5000 + 0.0000 i − 0.1520 + 0.1270 i 1.2000 + 0.0045 i − 1.2400 − 0.0056 i − 0.0040 + 0.0000 i − 0.1630 − 0.0550 i − 0.2390 − 0.0920 i 0.1400 − 0.1800 i 0.0000 + 0.0000 i 0.1460 + 0.2320 i − 0.1630 + 0.0550 i 1.0000 + 0.0000 i 0.2310 + 0.0000 i − 0.1650 − 0.0550 i 0.0000 + 0.2040 i 0.5000 + 0.0000 i − 0.2390 + 0.0920 i 0.2310 + 0.0000 i 0.0800 + 0.0000 i − 0.2400 − 0.0920 i 0.0000 + 0.0000 i − 0.1520 − 0.1270 i 0.1400 + 0.1800 i − 0.1650 + 0.0550 i − 0.2400 + 0.0920 i − 0.1024 + 0.0000 i − 0.0002 + 0.0250 i 1.2000 − 0.0045 i 0.0000 + 0.0000 i 0.0000 − 0.2040 i 0.0000 + 0.0000 i − 0.0002 − 0.0250 i 0.0256 + 0.0000 i ] .

We get r a n k ( A ) = 4 and s = 2 . According to Theorem 3.2, Problem 1 is solvable if and only if 2 ≤ p ≤ 4 . By Algorithm 4.1, we show the corresponding explicit expression of the solution to Problem 1 for p = 2 , 3 , 4 , respectively. When p = 2 , the solution expression in Problem 1 is as follows

X ^ = [ 1.2974 − 0.0000 i − 0.2777 − 0.1086 i 0.0000 + 0.0000 i 0.3705 + 1.1896 i 2.0296 + 0.4850 i 0.5477 + 0.1255 i − 0.2777 + 0.1086 i 0.1536 − 0.0000 i − 0.0000 − 0.0000 i − 0.2185 − 0.1705 i − 0.6026 − 0.1101 i − 0.0764 + 0.0194 i 0.0000 − 0.0000 i − 0.0000 + 0.0000 i 0.0000 − 0.0000 i 0.0000 + 0.0000 i 0.0000 − 0.0000 i 0.0000 − 0.0000 i 0.3705 − 1.1896 i − 0.2185 + 0.1705 i 0.0000 − 0.0000 i 1.2482 − 0.0000 i 0.9739 − 1.5609 i 0.2479 − 0.4986 i 2.0296 − 0.4850 i − 0.6026 + 0.1101 i 0.0000 + 0.0000 i 0.9739 + 1.5609 i 3.9124 + 0.0000 i 0.8259 + 0.0973 i 0.5477 − 0.1255 i − 0.0764 − 0.0194 i 0.0000 + 0.0000 i 0.2479 + 0.4986 i 0.8259 − 0.0973 i 0.2744 − 0.0000 i ] ,

and min X ∈ Ω , r ( X ) = 2 ‖ X − E ‖ = 6.8009 . When p = 3 , the solution of Problem 1 is

X ^ = [ 1.3943 − 0.0000 i − 0.2179 − 0.1781 i 0.1028 − 0.0987 i 0.3658 + 1.1650 i 2.0332 + 0.5482 i 0.5969 + 0.1872 i − 0.2179 + 0.1781 i 0.2064 − 0.0000 i 0.1206 − 0.0503 i − 0.1991 − 0.1300 i − 0.6215 − 0.0512 i − 0.0834 + 0.0625 i 0.1028 + 0.0987 i 0.1206 + 0.0503 i 0.9996 − 0.0000 i 0.1914 + 0.0274 i − 0.0726 + 0.0312 i − 0.0765 + 0.0560 i 0.3658 − 1.1650 i − 0.1991 + 0.1300 i 0.1914 − 0.0274 i 1.1509 − 0.0000 i 0.9242 − 1.5232 i 0.2816 − 0.4730 i 2.0332 − 0.5482 i − 0.6215 + 0.0512 i − 0.0726 − 0.0312 i 0.9242 + 1.5232 i 3.9276 + 0.0000 i 0.8785 + 0.0925 i 0.5969 − 0.1872 i − 0.0834 − 0.0625 i − 0.0765 − 0.0560 i 0.2816 + 0.4730 i 0.8785 − 0.0925 i 0.3103 − 0.0000 i ] .

and min X ∈ Ω , r ( X ) = 3 ‖ X − E ‖ = 6.7283 . When p = 4 , the solution can be expressed as X ^ = X 1 + X 2 , where

X 1 = [ 1.3774 + 0.0000 i − 0.2359 − 0.1724 i 0.0365 + 0.0116 i 0.3563 + 1.1835 i 2.0347 + 0.5457 i 0.5957 + 0.1710 i − 0.2359 + 0.1724 i 0.1853 − 0.0000 i 0.0132 + 0.0454 i − 0.2154 − 0.1134 i − 0.6191 − 0.0534 i − 0.0793 + 0.0448 i 0.0365 − 0.0116 i 0.0132 − 0.0454 i 0.0195 + 0.0000 i 0.0339 + 0.0381 i − 0.0503 + 0.0311 i 0.0245 − 0.0159 i 0.3563 − 1.1835 i − 0.2154 + 0.1134 i 0.0339 − 0.0381 i 1.1254 − 0.0000 i 0.9277 − 1.5230 i 0.2986 − 0.4835 i 2.0347 − 0.5457 i − 0.6191 + 0.0534 i − 0.0503 − 0.0311 i 0.9277 + 1.5230 i 3.9271 + 0.0000 i 0.8762 + 0.0942 i 0.5957 − 0.1710 i − 0.0793 − 0.0448 i 0.0245 + 0.0159 i 0.2986 + 0.4835 i 0.8762 − 0.0942 i 0.2946 − 0.0000 i ]

and X 2 = G [ 0 0 0 1.0597 P P * ] G * with

G = [ 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.1015 + 0.0752 i − 0.1452 − 0.4186 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0833 + 0.1140 i − 0.0157 − 0.4949 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i − 0.0930 + 0.9572 i − 0.1758 + 0.2104 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i − 0.0253 + 0.1528 i 0.3757 − 0.3926 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0022 − 0.0217 i − 0.0494 + 0.0586 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0798 − 0.0919 i − 0.4129 + 0.1083 i ]

and P ∈ ℂ 2 × 2 satisfying P * P = I 2 .

Remark. Algorithm 4.1 can be applied to matrices of small size in Problem 1. The computation involves only one singular value decomposition and four eigendecompositions, and hence it has good numerical stability.

5. Conclusion

In this paper, we have studied the constrained low rank approximation problem m i n r ( X ) = p , X ∈ Ω ‖ X − E ‖ , where Ω is the set of Hermitian nonnegative-definite least squares solutions to the matrix equation A X A ∗ = B , by matrix decompositions and partitioned matrix techniques. We have established the necessary and sufficient condition for solvability of the problem, and derived the corresponding explicit solution expressions for the different ranges of the rank constraint. An algorithm has been presented, and a numerical example shows its feasibility.

Funding

This research was supported by the Natural Science Foundation of China (11601328), and the research fund of Shanghai Lixin University of Accounting and Finance (AW-22-2201-00118).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Chang, H.X. (2020) Constrained Low Rank Approximation of the Hermitian Nonnegative-Definite Matrix. Advances in Linear Algebra & Matrix Theory, 10, 22-33. https://doi.org/10.4236/alamt.2020.102003

References

1. Chu, M.T., Funderlic, R.E. and Plemmons, R.J. (2003) Structured Low Rank Approximation. Linear Algebra and Its Applications, 366, 157-172. https://doi.org/10.1016/S0024-3795(02)00505-0
2. Duan, X.F., Bai, J.C., Zhang, M.J. and Zhang, X.J. (2014) On the Generalized Low Rank Approximation of the Correlation Matrices Arising in the Asset Portfolio. Linear Algebra and Its Applications, 461, 1-17. https://doi.org/10.1016/j.laa.2014.07.026
3. Markovsky, I. (2008) Structured Low-Rank Approximation and Its Applications. Automatica, 44, 891-909. https://doi.org/10.1016/j.automatica.2007.09.011
4. Zhang, Z.Y. and Wu, L.X. (2003) Optimal Low-Rank Approximation to a Correlation Matrix. Linear Algebra and Its Applications, 364, 161-187. https://doi.org/10.1016/S0024-3795(02)00551-7
5. Li, H.B., Stoica, P. and Li, J. (1999) Computationally Efficient Maximum Likelihood Estimation of Structured Covariance Matrices. IEEE Transactions on Signal Processing, 47, 1314-1323. https://doi.org/10.1109/78.757219
6. Williams, D.E. and Johnson, D.H. (1993) Robust Estimation of Structured Covariance Matrices. IEEE Transactions on Signal Processing, 41, 2891-2906. https://doi.org/10.1109/78.236511
7. De Moor, B. (1994) Total Least Squares for Affinely Structured Matrices and the Noisy Realization Problem. IEEE Transactions on Signal Processing, 42, 3104-3113. https://doi.org/10.1109/78.330370
8. Park, H., Zhang, L. and Rosen, J.B. (1999) Low Rank Approximation of a Hankel Matrix by Structured Total Least Norm. BIT Numerical Mathematics, 39, 757-779. https://doi.org/10.1023/A:1022347425533
9. Karmarkar, N.K. and Lakshman, Y.N. (1998) On Approximate GCDs of Univariate Polynomials. Journal of Symbolic Computation, 26, 653-666. https://doi.org/10.1006/jsco.1998.0232
10. Glunt, W., Hayden, T.L., Hong, S. and Wells, J. (1990) An Alternating Projection Algorithm for Computing the Nearest Euclidean Distance Matrix. SIAM Journal on Matrix Analysis and Applications, 11, 589-600. https://doi.org/10.1137/0611042
11. Fan, J.Y. and Zhou, A.W. (2016) The CP-Matrix Approximation Problem. SIAM Journal on Matrix Analysis and Applications, 37, 171-194. https://doi.org/10.1137/15M1012086
12. Huckle, T., Serra-Capizzano, S. and Tablino-Possio, C. (2004) Preconditioning Strategies for Hermitian Indefinite Toeplitz Linear Systems. SIAM Journal on Scientific Computing, 25, 1633-1654. https://doi.org/10.1137/S1064827502416332
13. Baksalary, J.K. (1984) Nonnegative Definite and Positive Definite Solutions to the Matrix Equation AXA^H = B. Linear and Multilinear Algebra, 16, 133-139. https://doi.org/10.1080/03081088408817616
14. Groß, J. (2000) Nonnegative-Definite and Positive Solutions to the Matrix Equation AXA^H = B - Revisited. Linear Algebra and Its Applications, 321, 123-129. https://doi.org/10.1016/S0024-3795(00)00033-1
15. Khatri, C.G. and Mitra, S.K. (1976) Hermitian and Nonnegative Definite Solutions of Linear Matrix Equations. SIAM Journal on Applied Mathematics, 31, 579-585. https://doi.org/10.1137/0131050
16. Zhang, X. and Cheng, M. (2003) The Rank-Constrained Hermitian Nonnegative-Definite and Positive-Definite Solutions to the Matrix Equation AXA^H = B. Linear Algebra and Its Applications, 370, 163-174. https://doi.org/10.1016/S0024-3795(03)00385-9
17. Wei, M.S. and Wang, Q. (2007) On Rank-Constrained Hermitian Nonnegative-Definite Least Squares Solutions to the Matrix Equation AXA^H = B. International Journal of Computer Mathematics, 84, 945-952. https://doi.org/10.1080/00207160701458344
18. Liu, X.F., Li, W. and Wang, H.X. (2017) Rank Constrained Matrix Best Approximation Problem with Respect to (Skew) Hermitian Matrices. Journal of Computational and Applied Mathematics, 319, 77-86. https://doi.org/10.1016/j.cam.2016.12.029
19. Wei, M. and Shen, D. (2012) Minimum Rank Solutions to the Matrix Approximation Problems in the Spectral Norm. SIAM Journal on Matrix Analysis and Applications, 33, 940-957. https://doi.org/10.1137/110851134
20. Shen, D., Wei, M. and Liu, Y. (2015) Minimum Rank (Skew) Hermitian Solutions to the Matrix Approximation Problem in the Spectral Norm. Journal of Computational and Applied Mathematics, 288, 351-365. https://doi.org/10.1016/j.cam.2015.04.033
21. Zhang, F.Z. (1999) Matrix Theory: Basic Results and Techniques. Springer, New York.
22. Albert, A. (1969) Conditions for Positive and Nonnegative Definiteness in Terms of Pseudoinverses. SIAM Journal on Applied Mathematics, 17, 434-440. https://doi.org/10.1137/0117041