In work on Kronecker products, matrices are sometimes regarded as vectors, and vectors are sometimes made into matrices. To be precise about these reshapings, we use the vector and diagonal extraction operators. The present paper is organized as follows. First, we formulate the coupled matrix linear least-squares problem and present efficient solutions of this problem, which arises in multistatic antenna array processing. Second, we extend the use of the connection between the Hadamard (Kronecker) product and the diagonal extraction (vector) operator in order to construct a computationally efficient solution of non-homogeneous coupled matrix differential equations that is useful in various applications. Finally, the analysis indicates that the Kronecker (Khatri-Rao) structure method achieves good efficiency, while the Hadamard structure method is even more efficient when the unknown matrices are diagonal.

Linear matrix and matrix differential equations show up in various fields including engineering, mathematics, physics, statistics, control, optimization, economics, linear systems and linear differential systems. For instance, the Lyapunov equations $A^{*}X + XA = -Q$ and $A^{*}XA - X = -Q$ (where $A^{*}$ is the conjugate transpose of $A$) are used to analyze the stability of continuous-time and discrete-time systems, respectively [

(where $B^{T}$ denotes the transpose of $B$) has been used to characterize structured covariance matrices [

Coupled matrix and matrix differential equations have also been widely used in the stability theory of differential equations, control theory, communication systems, perturbation analysis of linear and non-linear matrix equations, and other fields of pure and applied mathematics, and recently in the context of the analysis and numerical simulation of descriptor systems. For instance, the canonical system

with the boundary conditions has been used in the solution of the optimal control problem with the performance index [

and the general class of non-homogeneous coupled matrix differential equations:

where the coefficient matrices are given scalar matrices, $F(t)$ is a given matrix function, the unknown diagonal matrix functions are to be solved for, and $\dot{X}(t)$ denotes the derivative of the matrix function $X(t)$. (Here $\mathbb{C}^{m \times n}$ denotes the set of all $m \times n$ matrices over the complex number field; when $n = 1$, we write $\mathbb{C}^{m}$ instead of $\mathbb{C}^{m \times 1}$.)

Examples of such situations are singular [

Let us recall some concepts that will be used below.

Given two matrices $A = [a_{ij}] \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{p \times q}$, the Kronecker product of $A$ and $B$ is defined by (e.g. [8-12])

$A \otimes B = [a_{ij}B] \in \mathbb{C}^{mp \times nq}.$
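As a quick numerical illustration of this block structure (a NumPy sketch; `np.kron` implements the definition above):

```python
import numpy as np

# Kronecker product of two 2x2 matrices: each entry a_ij of A
# is replaced by the block a_ij * B, giving a 4x4 result.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)

# The (i,j) block of K equals a_ij * B.
assert np.array_equal(K[0:2, 0:2], 1 * B)
assert np.array_equal(K[0:2, 2:4], 2 * B)
assert np.array_equal(K[2:4, 0:2], 3 * B)
assert np.array_equal(K[2:4, 2:4], 4 * B)
print(K.shape)  # (4, 4)
```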

If instead $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{p \times n}$ have the same number of columns, let $a_{j}$ and $b_{j}$ be the columns of $A$ and $B$, respectively, namely

$A = [a_{1}, a_{2}, \ldots, a_{n}]$, $B = [b_{1}, b_{2}, \ldots, b_{n}]$.

The columns of the Kronecker product are $a_{i} \otimes b_{j}$ for all $i, j$ combinations in lexicographic order. Thus, the Khatri-Rao product of $A$ and $B$ is defined by [13,14]:

$A \odot B = [a_{1} \otimes b_{1}, a_{2} \otimes b_{2}, \ldots, a_{n} \otimes b_{n}]$

which consists of a subset of the columns of $A \otimes B$. Notice that $A \otimes B$ is of order $mp \times n^{2}$ and $A \odot B$ is of order $mp \times n$. This observation can be expressed in the following form [

$A \odot B = (A \otimes B)Z$, where the selection matrix $Z$ is of order $n^{2} \times n$ and

$Z = [e_{1} \otimes e_{1}, e_{2} \otimes e_{2}, \ldots, e_{n} \otimes e_{n}]$,

and $e_{k}$ is an $n \times 1$ column vector with a unity element in the $k$-th position and zeros elsewhere.
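The column-selection view above can be checked numerically. In this NumPy sketch, the helpers `khatri_rao` and `selection_matrix` are our own illustrative implementations of the definitions just given:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column j of A ⊙ B is a_j ⊗ b_j."""
    return np.stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])], axis=1)

def selection_matrix(n):
    """Z = [e_1 ⊗ e_1, ..., e_n ⊗ e_n], of order n^2 x n."""
    I = np.eye(n)
    return np.stack([np.kron(I[:, k], I[:, k]) for k in range(n)], axis=1)

rng = np.random.default_rng(0)
m, p, n = 2, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

KR = khatri_rao(A, B)            # mp x n
K = np.kron(A, B)                # mp x n^2
Z = selection_matrix(n)          # n^2 x n

# A ⊙ B picks out the "diagonal" columns of A ⊗ B: (A ⊗ B) Z = A ⊙ B.
assert np.allclose(K @ Z, KR)

# Column j of A ⊙ B is column j*n + j of A ⊗ B (lexicographic order).
for j in range(n):
    assert np.allclose(KR[:, j], K[:, j * n + j])
```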

Additionally, if both matrices $A = [a_{ij}]$ and $B = [b_{ij}]$ have the same size $m \times n$, then the Hadamard product of $A$ and $B$ is defined by [8-11,16]:

$A \circ B = [a_{ij}b_{ij}] \in \mathbb{C}^{m \times n}.$

This product is much simpler than the Kronecker and Khatri-Rao products, and it can be connected with isomorphic diagonal matrix representations that are of interest in many fields of pure and applied mathematics; for example, Tauber [

$A \circ B = Z_{m}^{T}(A \otimes B)Z_{n}$; (11)
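The compression of the Kronecker product down to the Hadamard product by selection matrices can be verified directly. A NumPy sketch (the helper `selection_matrix` is our own illustrative implementation):

```python
import numpy as np

def selection_matrix(n):
    """Z = [e_1 ⊗ e_1, ..., e_n ⊗ e_n], of order n^2 x n."""
    I = np.eye(n)
    return np.stack([np.kron(I[:, k], I[:, k]) for k in range(n)], axis=1)

rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))

Zm, Zn = selection_matrix(m), selection_matrix(n)

# The Hadamard product is a two-sided compression of the Kronecker
# product by selection matrices: A ∘ B = Zm^T (A ⊗ B) Zn.
assert np.allclose(A * B, Zm.T @ np.kron(A, B) @ Zn)

# The columns of Z are mutually orthonormal: Z^T Z = I.
assert np.allclose(Zn.T @ Zn, np.eye(n))
```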

The Kronecker product and the vector operator have proved their capability for solving some matrix and matrix differential equations. Such equations can be readily converted into the standard linear equation form by using the well-known identity (e.g. [17,18]):

$\mathrm{Vec}(AXB) = (B^{T} \otimes A)\,\mathrm{Vec}\,X$, (13)

where $\mathrm{Vec}\,X$ denotes the vectorization by columns of the matrix $X$. The need to compute these Kronecker expressions and the associated matrix exponentials is due to their appearance in the solutions of coupled matrix differential equations. Here

For any matrix $A$, the spectral representations of $A$ and $A \otimes A$ assure that [9,18]:

the eigenvalues of $A \otimes A$ are the products $\lambda_{i}\lambda_{j}$, where $\lambda_{i}$ and $x_{i}$ are the eigenvalues and the corresponding eigenvectors of $A$, and $x_{i} \otimes x_{j}$ are the eigenvectors of the matrix $A \otimes A$.
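A numerical spot-check of this spectral property (a NumPy sketch; the two spectra are compared as multisets via sorted magnitudes and the trace sum, to avoid ordering issues with complex eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

lam = np.linalg.eigvals(A)
lam_kron = np.linalg.eigvals(np.kron(A, A))

# The eigenvalues of A ⊗ A are all pairwise products λ_i λ_j of the
# eigenvalues of A (with eigenvectors x_i ⊗ x_j).
expected = np.array([li * lj for li in lam for lj in lam])

# Compare the two spectra as multisets.
assert np.allclose(np.sort(np.abs(lam_kron)), np.sort(np.abs(expected)))
assert np.isclose(lam_kron.sum(), expected.sum())
```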

Finally, for any matrices $A$, $B$, $C$ and $D$, we shall make frequent use of the following properties of the Kronecker product (e.g. [9,18-20]), which are used to establish our results.

1) $(A \otimes B)(C \otimes D) = AC \otimes BD$;

2) $(A \otimes B)^{T} = A^{T} \otimes B^{T}$; $(A \otimes B)^{*} = A^{*} \otimes B^{*}$;

3) $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$, provided $A$ and $B$ are invertible;

4) $e^{(A \oplus B)t} = e^{At} \otimes e^{Bt}$, where $A \oplus B = A \otimes I + I \otimes B$;
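The algebraic properties in this list are easy to verify numerically. A NumPy sketch checking the mixed-product, transpose and inverse properties on random matrices:

```python
import numpy as np

rng = np.random.default_rng(7)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, D = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Transpose distributes over the Kronecker product: (A ⊗ B)^T = A^T ⊗ B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))

# Inverse distributes as well: (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```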

In this paper, we present efficient solutions of the coupled matrix linear least-squares problem and extend the use of the diagonal extraction (vector) operator in order to construct a computationally efficient solution of non-homogeneous coupled matrix linear differential equations.

The multistatic antenna array processing problem can be written in matrix notation as [

where $A$, $B$ and $C$ are given (complex-valued) matrices, and the unknown matrix $X$ is diagonal. We also assume that $n < mp$, so that the problem is over-determined, which suggests using a least-squares approach, viz.,

where $\|A\|_{F}$ is called the Frobenius norm of $A$. Using the identity in Equation (13), we can transform (21) into the vector LSP form:

which has the well-known solution:

$\mathrm{Vec}\,X = [(B^{T} \otimes A)^{*}(B^{T} \otimes A)]^{-1}(B^{T} \otimes A)^{*}\,\mathrm{Vec}\,C$,

provided $(B^{T} \otimes A)^{*}(B^{T} \otimes A)$ is invertible.
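The vectorized normal-equations route can be sketched as follows (NumPy; `Vec` corresponds to flattening in Fortran, i.e. column-major, order; here we cross-check against a generic least-squares solver):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 5, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, p))

# Vec identity (13): Vec(AXB) = (B^T ⊗ A) Vec(X).
M = np.kron(B.T, A)
Xtest = rng.standard_normal((n, n))
assert np.allclose(M @ Xtest.flatten(order="F"),
                   (A @ Xtest @ B).flatten(order="F"))

# Normal-equations solution of min ||(B^T ⊗ A) Vec X - Vec C||.
vec_c = C.flatten(order="F")
vec_x = np.linalg.solve(M.T @ M, M.T @ vec_c)

# Cross-check against a generic least-squares solver.
vec_x_ref, *_ = np.linalg.lstsq(M, vec_c, rcond=None)
assert np.allclose(vec_x, vec_x_ref)
```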

Applying the direct vector transformation in Equation (13) results in a highly inefficient least-squares problem, because $\mathrm{Vec}\,X$ is very sparse. Lev-Ari [

$\mathrm{Vec}(AXB) = (B^{T} \odot A)\,\mathrm{vecd}\,X$, $X$ is diagonal, (24)

which involves the so-called Khatri-Rao product $B^{T} \odot A$, as well as the diagonal extraction operator

$\mathrm{vecd}\,X = [x_{11}, x_{22}, \ldots, x_{nn}]^{T}$,

which forms a column vector consisting of the diagonal elements of the square matrix $X$, instead of the much longer column vector $\mathrm{Vec}\,X$. In addition, if $Y$ is any matrix of order $m \times p$, then

$(B^{T} \odot A)^{*}\,\mathrm{Vec}\,Y = \mathrm{vecd}(A^{*}YB^{*}).$ (26)

As we have observed earlier, when the unknown matrix $X$ is diagonal, solving for $\mathrm{Vec}\,X$ is highly inefficient, since most of the elements of $X$ vanish. Instead, Lev-Ari [

Notice that $\mathrm{vecd}\,X$ consists of only the nontrivial (i.e., diagonal) elements of the matrix $X$. The explicit solution of (27) is

provided $(B^{T} \odot A)^{*}(B^{T} \odot A)$ is invertible.

It turns out that this expression can also be implemented using the Hadamard product, resulting in a significant reduction in computational cost, as implied by the following result [

where $A, C \in \mathbb{C}^{m \times n}$ and $B, D \in \mathbb{C}^{p \times n}$.

We observe that forming the left-hand-side expression in Equation (29) requires $mpn^{2}$ multiplications, while forming the equivalent right-hand-side expression requires only $(m + p + 1)n^{2}$ multiplications. Thus the latter offers significant computational savings, especially when $mp \gg m + p$.

Now, using (26), we can rewrite (28) in the more compact form:

$\mathrm{vecd}\,X = [(A^{*}A) \circ (BB^{*})^{T}]^{-1}\,\mathrm{vecd}(A^{*}CB^{*}).$ (30)

This expression, which requires $O((m + p)n^{2})$ (multiply and add) operations, is much more efficient than (28), which requires $O(mpn^{2})$ operations. This means that the computational advantage of using the Hadamard product expression is particularly evident when $mp \gg m + p$. In order to be able to use (30) we must ascertain that the matrix $(A^{*}A) \circ (BB^{*})^{T}$ is invertible. This will hold, for instance, when $A$ has full column rank and $B$ has full row rank.
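The Khatri-Rao identity (29) and the equivalence of the reduced-order and Hadamard-form solutions can be checked numerically. A NumPy sketch (real case, so conjugate transposes reduce to transposes; `khatri_rao` is our own illustrative helper):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column j of A ⊙ B is a_j ⊗ b_j."""
    return np.stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])], axis=1)

rng = np.random.default_rng(4)
m, n, p = 4, 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, p))

# Identity (29): (A1 ⊙ B1)^*(C1 ⊙ D1) = (A1^* C1) ∘ (B1^* D1)
A1, C1 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
B1, D1 = rng.standard_normal((p, n)), rng.standard_normal((p, n))
assert np.allclose(khatri_rao(A1, B1).T @ khatri_rao(C1, D1),
                   (A1.T @ C1) * (B1.T @ D1))

# Diagonal least squares min ||A diag(x) B - C||_F.
# Khatri-Rao form: Vec(A diag(x) B) = (B^T ⊙ A) x.
KR = khatri_rao(B.T, A)
x_kr, *_ = np.linalg.lstsq(KR, C.flatten(order="F"), rcond=None)

# Hadamard form of the normal equations:
# [(A^T A) ∘ (B B^T)] x = vecd(A^T C B^T).
x_h = np.linalg.solve((A.T @ A) * (B @ B.T), np.diag(A.T @ C @ B.T))
assert np.allclose(x_kr, x_h)
```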

As for the diagonal extraction operator, we observe that for any square matrix $Y$,

$\mathrm{vecd}\,Y = Z^{T}\,\mathrm{Vec}\,Y.$ (31)

If $Y$ is diagonal, then we also have

$\mathrm{Vec}\,Y = Z\,\mathrm{vecd}\,Y$, $Y$ is diagonal. (32)

Moreover, the columns of the selection matrix $Z$ are mutually orthonormal, viz.,

$Z^{T}Z = I_{n}.$ (33)

Using (32) and (11), we get the fundamental relation between the Hadamard product and the diagonal extraction operator, which is given by

$\mathrm{vecd}(AXB) = (A \circ B^{T})\,\mathrm{vecd}\,X$, $X$ is diagonal, (34)

where $A$ and $B$ are square matrices and $X$ is a diagonal matrix.
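The fundamental relation (34) is easy to check numerically. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)
X = np.diag(x)

# Fundamental relation (34): for diagonal X,
# vecd(A X B) = (A ∘ B^T) vecd(X)
lhs = np.diag(A @ X @ B)     # vecd of the product
rhs = (A * B.T) @ x          # Hadamard product acting on vecd(X)
assert np.allclose(lhs, rhs)
```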

Now we will discuss the efficient and more efficient least-squares solutions of coupled matrix linear equations:

, (35)

where $A$, $E$ and the other coefficient matrices are given scalar matrices and $X$, $Y$ are the unknown matrices to be solved for. We also assume that the coupled matrix linear Equations (35) are over-determined, which suggests using a least-squares approach. We consider the coupled matrix linear least-squares problem (CLSP):

The solution procedure presented here may be considered as a continuation of the method proposed to solve the least-squares problem in (21).

Using the identity (13) we can transform (36) into the vector CLSP form [

which has the following solution

One can easily show that

where is a unitary matrix. So

Suppose that and, we then have

Now the least-squares solution (38) can be rewritten in the form:

This gives

where and.

In order to be able to use (38) and (43) we must ascertain that the matrix:

is invertible if and only if

and

are invertible matrices.

As we observed, when the unknown matrices $X$ and $Y$ are diagonal, solving for $\mathrm{Vec}\,X$ and $\mathrm{Vec}\,Y$ is highly inefficient, since most of the elements of $X$ and $Y$ vanish. Instead, we can use the more compact vectorization identity (24) to rewrite the coupled matrix linear least-squares problem (37) in the reduced-order vector form:

Notice that $\mathrm{vecd}\,X$ and $\mathrm{vecd}\,Y$ consist of only the nontrivial (i.e., diagonal) elements of the matrices $X$ and $Y$. The explicit efficient solution of (44) is

where and.

In order to be able to use (45), we must ascertain that the matrices

and

are invertible matrices.

It turns out that the expression (45) can also be implemented using the Hadamard product, by the same technique as in the expression (30). Note that the least-squares solution in terms of the Hadamard product is more efficient than (45) and (43).

The solution procedure presented here may be considered as a continuation of the method proposed to solve the homogeneous coupled matrix differential equations in [

$\dot{X}(t) = AX(t) + X(t)B$, (46)

where $A$ and $B$ are given scalar matrices and $X(t)$ is the unknown matrix function to be solved for. In fact, the unique solution of (46) is given by:

$X(t) = e^{At}X(0)e^{Bt}.$ (47)

Theorem 3.1 Let $A$ and $B$ be given scalar matrices, $F(t)$ a given matrix function and $X(t)$ the unknown matrix function. Then the general solution of the non-homogeneous matrix differential equation:

$\dot{X}(t) = AX(t) + X(t)B + F(t)$, (48)

is given by

$X(t) = e^{At}X(0)e^{Bt} + \int_{0}^{t} e^{A(t-s)}F(s)e^{B(t-s)}\,ds$, (49)

where the integral term is well-defined and involves the convolution product of the two matrix functions $e^{At}(\cdot)e^{Bt}$ and $F(t)$.

Proof: Suppose that $X_{p}(t) = e^{At}V(t)e^{Bt}$ is the particular solution of (48), for some matrix function $V(t)$ with $V(0) = 0$. The product rule of differentiation gives

$\dot{X}_{p}(t) = Ae^{At}V(t)e^{Bt} + e^{At}\dot{V}(t)e^{Bt} + e^{At}V(t)e^{Bt}B = AX_{p}(t) + X_{p}(t)B + e^{At}\dot{V}(t)e^{Bt}.$

Substituting these in (48), we obtain

$AX_{p}(t) + X_{p}(t)B + e^{At}\dot{V}(t)e^{Bt} = AX_{p}(t) + X_{p}(t)B + F(t).$

Thus

$e^{At}\dot{V}(t)e^{Bt} = F(t).$ (50)

Multiplying both sides of (50) by $e^{-At}$ from the left and $e^{-Bt}$ from the right gives

$\dot{V}(t) = e^{-At}F(t)e^{-Bt}.$ (51)

Integrating both sides of (51) between 0 and $t$ gives

$V(t) = \int_{0}^{t} e^{-As}F(s)e^{-Bs}\,ds.$ (52)

Hence, by assumption, we conclude that the particular solution of equation (48) is

$X_{p}(t) = e^{At}V(t)e^{Bt} = \int_{0}^{t} e^{A(t-s)}F(s)e^{B(t-s)}\,ds.$ (53)

Now from (47) and (53) we get (49).
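The closed-form solution (49) can be verified numerically: evaluate the formula by quadrature and check that it satisfies the differential equation (48). A NumPy sketch (the small `expm` helper via eigendecomposition and the trapezoidal quadrature are our own illustrative choices):

```python
import numpy as np

def expm(M, t):
    """e^{Mt} via eigendecomposition (adequate for generic diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(6)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X0 = rng.standard_normal((n, n))
F = lambda t: np.sin(t) * np.ones((n, n))   # smooth forcing term

def X(t, steps=800):
    """Formula (49): e^{At} X(0) e^{Bt} + ∫_0^t e^{A(t-s)} F(s) e^{B(t-s)} ds."""
    s = np.linspace(0.0, t, steps)
    vals = np.array([expm(A, t - si) @ F(si) @ expm(B, t - si) for si in s])
    ds = s[1] - s[0]
    integral = (vals[0] + vals[-1]) / 2.0 * ds + vals[1:-1].sum(axis=0) * ds
    return expm(A, t) @ X0 @ expm(B, t) + integral

# Check that X(t) satisfies X'(t) = A X(t) + X(t) B + F(t) at t = 0.5,
# using a central finite difference for X'(t).
t, h = 0.5, 1e-5
lhs = (X(t + h) - X(t - h)) / (2.0 * h)
rhs = A @ X(t) + X(t) @ B + F(t)
assert np.allclose(lhs, rhs, atol=1e-3)
```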

Theorem 3.2 Let $A$ and $B$ be given scalar matrices, $F(t)$ a given matrix function and $X(t)$ the unknown diagonal matrix function. Then the general solution of the non-homogeneous matrix differential equation

$\dot{X}(t) = AX(t)B + F(t)$, $X(t)$ diagonal, (54)

is given by

$\mathrm{vecd}\,X(t) = e^{(A \circ B^{T})t}\,\mathrm{vecd}\,X(0) + \int_{0}^{t} e^{(A \circ B^{T})(t-s)}\,\mathrm{vecd}\,F(s)\,ds.$ (55)

Proof: Using the identity (34), we can transform (54) into the vector form:

$\frac{d}{dt}\,\mathrm{vecd}\,X(t) = (A \circ B^{T})\,\mathrm{vecd}\,X(t) + \mathrm{vecd}\,F(t).$ (56)

Now, applying (49) (with the second coefficient matrix equal to zero), the unique solution of (56) is (55).

If we put $F(t) = 0$ in Theorem 3.2 we obtain the following result.

Corollary 3.3 Let $A$ and $B$ be given scalar matrices. Then the general solution of the homogeneous matrix differential equation:

is given by

Now we will discuss the general class of non-homogeneous coupled matrix differential equations defined in (4). By using the vecd-notation on (4), we have

Let

Now (59) can be written as

and the general solution is given by:

Note that there are many special cases that can be considered from the above general class of coupled matrix differential equations; we will now discuss some important special cases in the next results.

Theorem 3.4 Let $A$, $B$, $C$, $D$, $E$ be given scalar matrices such that

;

are given matrix functions and the unknown matrix functions are diagonal. Then the general solution of the non-homogeneous coupled matrix differential equations:

is given by

Proof: Using the identity (34) we can transform (62) into the vector form:

From (61), this system has the following solution:

Now we will deal with

Since, then we have

Then

But

;

.

So

Due to (67) we have

; (68)

Now, substituting (68) and (69) in (65), we get (63).

If we set the given matrix functions equal to zero in Theorem 3.4, we obtain the following result.

Corollary 3.5 Let $A$, $B$, $C$, $D$, $E$ be given scalar matrices such that

and the unknown matrix functions are diagonal. Then the general solution of the homogeneous coupled matrix differential equations:

is given by

Corollary 3.6 Let $E$ and the remaining coefficient matrices be given scalar matrices, and let the unknown matrix functions be diagonal. Then the general solution of the homogeneous coupled matrix differential equations:

is given by

Proof: For any matrix, it is easy to show that

; (74)

Now, substituting in Corollary 3.5, we have

Similarly,

.

If instead we apply the fundamental relation between the vector operator and the Kronecker product defined in (13), and use the same technique as in the proof of Theorem 3.4, we obtain (for any matrix, not necessarily diagonal) the following result.

Theorem 3.7 Let $A$, $B$, $C$, $D$, $E$ be given scalar matrices, with given matrix functions as before and with the unknown matrix functions not necessarily diagonal. Then the general solution of the non-homogeneous coupled matrix differential equations:

is given by

If we make the corresponding substitutions in Theorem 3.7 and use properties (16)-(19), we obtain the following results.

Corollary 3.8 Let $B$, $D$, $E$ be given scalar matrices and let the unknown matrix functions be as above. Then the general solution of the homogeneous coupled matrix differential equations:

is given by

Corollary 3.9 Let $A$, $C$, $E$ be given scalar matrices and let the unknown matrix functions be as above. Then the general solution of the homogeneous coupled matrix differential equations:

is given by

We have studied an explicit characterization of the mappings

in terms of the selection matrix as in (11) and (12). We have also observed that the same matrix relates the two operators $\mathrm{Vec}$ and $\mathrm{vecd}$ as in (31) and (32). We used the fundamental relations between the Hadamard (Kronecker) product and the diagonal extraction (vector) operator in (34) and (13) to derive our main results in Sections 2 and 3 and, subsequently, to construct a computationally efficient solution of the coupled matrix least-squares problem and of non-homogeneous coupled matrix differential equations. In fact, the Kronecker (Hadamard) product and the $\mathrm{Vec}$ ($\mathrm{vecd}$) operator have proved capable of solving matrix and matrix differential equations fast (faster still when the unknown matrices are diagonal). To demonstrate the usefulness of applying some properties of the Kronecker products, suppose we have to solve, for example, the following system:

where $A$ and $B$ are given scalar matrices and $X$ is the unknown matrix to be solved for. Then it is not hard, by using the Vec-notation, to establish the following equivalence:

$(A \otimes B)\,\mathrm{Vec}\,X = \mathrm{Vec}\,C \iff BXA^{T} = C$, (83)

and thus also, by using the vecd-notation, to establish the following equivalence:

$BXA^{T} = C \iff (B \circ A)\,\mathrm{vecd}\,X = \mathrm{vecd}\,C$, $X$ is diagonal. (84)

If we use the equivalence (83) while ignoring the Kronecker (Hadamard) product structure, then we need to solve both of the following matrix equations:

• $BY = C$ (85)

Here, $Y$ can be obtained in $O(n^{3})$ arithmetic operations (flops) by using the LU factorization of the matrix $B$ (forward substitution).

• $XA^{T} = Y$ (86)

Here, $X$ can also be obtained in $O(n^{3})$ operations (flops) by using the LU factorization of the matrix $A$ (back substitution).

Now, without exploiting the Kronecker product structure, the $n^{2} \times n^{2}$ system defined in (82) would normally (by Gaussian elimination) require $O(n^{6})$ operations to solve. But when we use the Kronecker product structure, the calculation shows that the solution can be obtained in only $O(n^{3})$ operations by using the LU factorizations of the matrices $A$ and $B$ [20, pp. 87]. We can say that a system of this form can be solved fast, and the Kronecker structure also avoids the formation of large matrices; only the smaller lower and upper triangular factors $L_{A}$, $L_{B}$, $U_{A}$, $U_{B}$ are needed. If instead $X$ is a diagonal matrix and we use the Hadamard product structure, the calculation shows that the solution can be obtained by using the LU factorization of a single $n \times n$ matrix.
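The structured solve can be sketched as follows (NumPy; `np.linalg.solve` is used in place of explicit LU forward/back substitution, which it performs internally, and the equivalence $(A \otimes B)\,\mathrm{Vec}\,X = \mathrm{Vec}\,C \iff BXA^{T} = C$ is assumed as in (83)):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
c = rng.standard_normal(n * n)

# Direct dense solve of the n^2 x n^2 system: O(n^6) work in general.
x_direct = np.linalg.solve(np.kron(A, B), c)

# Structured solve: (A ⊗ B) Vec(X) = Vec(C)  <=>  B X A^T = C,
# so solve B Y = C first, then X A^T = Y (i.e. A X^T = Y^T),
# using only the small n x n factorizations -- O(n^3) work.
C = c.reshape((n, n), order="F")     # Vec is column-major stacking
Y = np.linalg.solve(B, C)            # forward step with B
X = np.linalg.solve(A, Y.T).T        # back step: solves X A^T = Y

assert np.allclose(X.flatten(order="F"), x_direct)
```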

We can say that such a system can be solved even faster than with the Kronecker structure; only much smaller lower and upper triangular factors are needed. For example, consider $A$ and $B$ to be $3 \times 3$ matrices and $C$ a $9 \times 1$ vector. To demonstrate the usefulness of applying the Kronecker product and Vec-notation, we return to the system problem. If the coefficient matrix is non-singular, then, working with the LU factorizations of $A$ and $B$, a solution of the system exists and can be written as:

First, the lower triangular system can be solved by forward substitution as follows:

i.e.,

which can be solved in $O(n^{3})$ operations. The first three equations are:

•. (88)

•. (89)

•

(90)

Now the next three equations are:

•. (91)

•. (92)

•

. (93)

The first boldface expression in (91) can be computed from the values already obtained. The second boldface expression in (92) can also be computed in this way, as can the third boldface expression in (93).

We use the previous expressions for $z_{1}$, $z_{2}$ and $z_{3}$, obtained from the first set of equations, to simplify the second set of three equations. The simplified second set of equations becomes

Solving the second set of equations and the accompanying forward-solve step are both cheap, so obtaining $z_{4}$, $z_{5}$ and $z_{6}$ is fast. This simplification, together with reuse of the work from the previous solution step, continues, so that solving each of the $n$ sets of $n$ equations takes $O(n^{2})$ time, resulting in an overall solution time of $O(n^{3})$. Exploiting the Kronecker structure thus reduces the usual expected $O(n^{6})$ time to solve (82) to $O(n^{3})$.

One final note regarding the exploitation of the Kronecker structure of the system remains. Suppose the matrices $A$ and $B$ are of different sizes. Then the time required to solve the system depends on the sizes of both $A$ and $B$. In our work, the modeler has some choice of the sizes of the $A$ and $B$ matrices; thus, a wise choice would make the larger of the two small, reducing the effect of the dominant term in the computation time.

When $X$ is a diagonal matrix, applying the vecd-notation, we return to the system problem. If the coefficient matrix is non-singular, then, working with its LU factorization, a solution of the system exists and can be written as:

First, the lower triangular system can be solved by forward substitution as follows:

which can be solved in $O(n^{2})$ operations as follows:

•. (98)

•. (99)

•. (100)

The solution of coupled matrix linear least-squares problems and coupled matrix differential equations has been studied and some important special cases have been discussed. The analysis indicates that solving for $\mathrm{Vec}\,X$ is efficient, and that solving for $\mathrm{vecd}\,X$ is more efficient when the unknown matrices are diagonal. Although the algorithms are presented for non-homogeneous coupled matrix and matrix linear differential equations, the idea adopted can easily be extended to study coupled matrix nonlinear differential equations, e.g., the coupled matrix Riccati differential equations.

The author expresses his sincere thanks to the referee(s) for a careful reading of the manuscript and several helpful suggestions. The author also gratefully acknowledges that this research was partially supported by the Deanship of Scientific Research, University of Dammam, Kingdom of Saudi Arabia, under Grant No. 2012152.