1. Introduction
We first fix the notation used throughout this paper. The set of all real column vectors with $n$ coordinates is denoted by $\mathbb{R}^{n}$, and the set of all $m\times n$ real matrices by $\mathbb{R}^{m\times n}$. Let $SR^{n\times n}$ and $ASR^{n\times n}$ stand for the set of all $n\times n$ real symmetric matrices and the set of all $n\times n$ real anti-symmetric matrices, respectively. The set of all $m\times n$ complex matrices is denoted by $\mathbb{C}^{m\times n}$, and $HC^{n\times n}$ stands for the set of all $n\times n$ Hermitian matrices. For $A\in\mathbb{R}^{n\times n}$, if the sum of the elements in each row and the sum of the elements in each column are both equal to 0, then $A$ is called an indeterminate admittance matrix; the set of all indeterminate admittance matrices is denoted by $IAR^{n\times n}$. For $A\in\mathbb{R}^{n\times n}$, if $A$ is not only an indeterminate admittance matrix but also a symmetric matrix, then $A$ is called a symmetric indeterminate admittance matrix. Similarly, $A$ is called an anti-symmetric indeterminate admittance matrix if $A$ is both an indeterminate admittance matrix and an anti-symmetric matrix. The set of all symmetric indeterminate admittance matrices and the set of all anti-symmetric indeterminate admittance matrices are denoted by $SIAR^{n\times n}$ and $ASIAR^{n\times n}$, respectively. For a complex indeterminate admittance matrix $A\in\mathbb{C}^{n\times n}$, if $A$ is also a Hermitian matrix, then $A$ is called a Hermitian indeterminate admittance matrix; the set of all Hermitian indeterminate admittance matrices is denoted by $HIAC^{n\times n}$. The transpose, the conjugate transpose and the Moore-Penrose generalized inverse of a matrix $A$ are denoted by $A^{T}$, $A^{H}$ and $A^{+}$, respectively. The identity matrix of order $n$ is denoted by $I_{n}$. The trace of a matrix $A\in\mathbb{C}^{n\times n}$ is
$$\operatorname{tr}(A)=\sum_{j=1}^{n}e_{j}^{T}Ae_{j},$$
where $e_{j}$ is the $j$th column of the identity matrix $I_{n}$. The 2-norm of a vector $x$ is denoted by $\lVert x\rVert$. For $A,B\in\mathbb{C}^{m\times n}$, we define the inner product $\langle A,B\rangle=\operatorname{Re}\big(\operatorname{tr}(B^{H}A)\big)$; then $\mathbb{C}^{m\times n}$ is a Hilbert inner product space, and the norm of a matrix generated by this inner product is the matrix Frobenius norm $\lVert A\rVert$.
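As a concrete illustration of these definitions, the following Python sketch checks the indeterminate admittance and Hermitian properties numerically. The data and helper names are illustrative choices of ours, not objects from the paper; the bordering construction simply manufactures an example with vanishing row and column sums.

```python
import numpy as np

def is_indeterminate_admittance(A, tol=1e-12):
    """True if every row sum and every column sum of A vanishes."""
    return (np.all(np.abs(A.sum(axis=0)) < tol)
            and np.all(np.abs(A.sum(axis=1)) < tol))

def is_hermitian_indeterminate_admittance(A, tol=1e-12):
    """True if A is Hermitian and an indeterminate admittance matrix."""
    return np.allclose(A, A.conj().T, atol=tol) and is_indeterminate_admittance(A)

# Border an (n-1) x (n-1) Hermitian block so that all row/column sums vanish.
M = np.array([[2.0, 1 + 1j], [1 - 1j, 3.0]])  # Hermitian 2 x 2 block
n = M.shape[0] + 1
A = np.zeros((n, n), dtype=complex)
A[:n-1, :n-1] = M
A[:n-1, -1] = -M.sum(axis=1)   # last column makes each row sum to zero
A[-1, :n-1] = -M.sum(axis=0)   # last row makes each column sum to zero
A[-1, -1] = M.sum()            # corner entry fixes the last row and column

print(is_hermitian_indeterminate_admittance(A))  # True
```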
Definition 1 ( [1] ). For a matrix $X=(x_{ij})\in\mathbb{R}^{m\times n}$, let $x_{j}=(x_{1j},x_{2j},\cdots,x_{mj})^{T}$ denote the $j$th column of $X$, and denote the following vector by $\operatorname{vec}(X)$:
$$\operatorname{vec}(X)=\big(x_{1}^{T},x_{2}^{T},\cdots,x_{n}^{T}\big)^{T}. \qquad (1)$$
Definition 2 ( [1] ). For a matrix $X=(x_{ij})\in SR^{n\times n}$, let $a_{1}=\big(x_{11},\sqrt{2}x_{21},\cdots,\sqrt{2}x_{n1}\big)^{T}$, $a_{2}=\big(x_{22},\sqrt{2}x_{32},\cdots,\sqrt{2}x_{n2}\big)^{T}$, $\cdots$, $a_{n-1}=\big(x_{n-1,n-1},\sqrt{2}x_{n,n-1}\big)^{T}$, $a_{n}=(x_{nn})$, and denote the following vector by $\operatorname{vec}_{S}(X)$:
$$\operatorname{vec}_{S}(X)=\big(a_{1}^{T},a_{2}^{T},\cdots,a_{n}^{T}\big)^{T}\in\mathbb{R}^{n(n+1)/2}. \qquad (2)$$
Definition 3 ( [1] ). For a matrix $X=(x_{ij})\in ASR^{n\times n}$, let $b_{1}=\sqrt{2}\big(x_{21},x_{31},\cdots,x_{n1}\big)^{T}$, $b_{2}=\sqrt{2}\big(x_{32},\cdots,x_{n2}\big)^{T}$, $\cdots$, $b_{n-1}=\sqrt{2}\,(x_{n,n-1})$, and denote the following vector by $\operatorname{vec}_{A}(X)$:
$$\operatorname{vec}_{A}(X)=\big(b_{1}^{T},b_{2}^{T},\cdots,b_{n-1}^{T}\big)^{T}\in\mathbb{R}^{n(n-1)/2}. \qquad (3)$$
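These vectorizations can be sketched in code. The $\sqrt{2}$ weights below are the ones used in this literature so that $\operatorname{vec}_{S}$ and $\operatorname{vec}_{A}$ preserve the Frobenius norm; treating (2) and (3) as these weighted half-vectorizations is our assumption about the precise formulas in [1].

```python
import numpy as np

def vec(X):
    """Definition 1: stack the columns of X top to bottom."""
    return X.flatten(order='F')

def vec_S(X):
    """Definition 2 (assumed form): for symmetric X, keep the diagonal entry of
    each column and weight the strictly lower entries by sqrt(2)."""
    n = X.shape[0]
    parts = []
    for j in range(n):
        col = X[j:, j].astype(float)
        col[1:] *= np.sqrt(2)   # off-diagonal entries get weight sqrt(2)
        parts.append(col)
    return np.concatenate(parts)

def vec_A(X):
    """Definition 3 (assumed form): for anti-symmetric X, collect the strictly
    lower entries column by column, weighted by sqrt(2)."""
    n = X.shape[0]
    return np.concatenate([np.sqrt(2) * X[j+1:, j] for j in range(n - 1)])

# The weights make the maps isometries: ||vec_S(X)|| = ||X||_F on symmetric X,
# and ||vec_A(X)|| = ||X||_F on anti-symmetric X.
S = np.array([[1.0, 2.0], [2.0, 5.0]])     # symmetric
K = np.array([[0.0, 3.0], [-3.0, 0.0]])    # anti-symmetric
print(np.isclose(np.linalg.norm(vec_S(S)), np.linalg.norm(S, 'fro')))  # True
print(np.isclose(np.linalg.norm(vec_A(K)), np.linalg.norm(K, 'fro')))  # True
```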
It is well known that indeterminate admittance matrices play important roles in circuit modeling, lattice networks and related areas [2] [3] . In this paper, we mainly discuss the least squares problem associated with indeterminate admittance matrices, which is stated as follows.
Problem I. Given $A,C\in\mathbb{C}^{m\times n}$, $B,D\in\mathbb{C}^{n\times s}$ and $E\in\mathbb{C}^{m\times s}$, let
$$H_{L}=\big\{X \mid X\in HIAC^{n\times n},\ \lVert AXB+CXD-E\rVert=\min\big\}.$$
Find $X_{H}\in H_{L}$ such that
$$\lVert X_{H}\rVert=\min_{X\in H_{L}}\lVert X\rVert. \qquad (4)$$
The solution $X_{H}$ is also called the least squares Hermitian indeterminate admittance solution of the complex matrix equation
$$AXB+CXD=E \qquad (5)$$
with the least norm.
Before studying Problem I, we first state some lemmas.
Lemma 1. ( [4] ) The matrix equation $Ax=b$, with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$, has a solution $x\in\mathbb{R}^{n}$ if and only if
$$AA^{+}b=b; \qquad (6)$$
in this case it has the general solution
$$x=A^{+}b+\big(I_{n}-A^{+}A\big)y, \qquad (7)$$
where $y\in\mathbb{R}^{n}$ is an arbitrary vector.
Lemma 2. ( [4] ) The least squares solutions of the matrix equation $Ax=b$, with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$, can be represented as
$$x=A^{+}b+\big(I_{n}-A^{+}A\big)y, \qquad (8)$$
where $y\in\mathbb{R}^{n}$ is an arbitrary vector, and the least squares solution with the least norm is $x=A^{+}b$.
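Lemmas 1 and 2 are easy to verify numerically with the Moore-Penrose inverse. The sketch below (hypothetical random data) checks the solvability condition (6), the general solution (7)-(8), and the least-norm property.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))    # wide matrix: many solutions when consistent
b = A @ rng.standard_normal(6)     # b lies in the range of A, so (6) holds
Ap = np.linalg.pinv(A)             # Moore-Penrose generalized inverse A^+

assert np.allclose(A @ (Ap @ b), b)          # solvability condition (6)

y = rng.standard_normal(6)
x_general = Ap @ b + (np.eye(6) - Ap @ A) @ y   # general solution (7)/(8)
assert np.allclose(A @ x_general, b)

x_min = Ap @ b                     # least squares solution with the least norm
# A^+ b is orthogonal to the null-space term, so no solution is shorter:
assert np.linalg.norm(x_min) <= np.linalg.norm(x_general) + 1e-12
```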
Direct and iterative methods for solving matrix equations over constrained matrix sets (such as Hermitian, anti-Hermitian, bisymmetric and reflexive matrices) have been widely investigated; see [5] - [25] and the references cited therein. Yuan, Liao and Lei [1] derived the least squares symmetric solution with the least norm of the real matrix equation $AXB=C$ by using the vec-operator, the Kronecker product and the Moore-Penrose generalized inverse. In order to avoid the difficulty of the large coefficient matrices produced by the Kronecker product, Yuan and Liao [26] recently improved this method: they defined a matrix-vector product and carried out a special vectorization of the matrix equation $AXB+CXD=E$ to derive the least squares Hermitian solution with the least norm. Based on these methods, we continue to study Problem I in this paper.
We now briefly outline the contents of the paper. In Section 2, by using the Moore-Penrose generalized inverse and the Kronecker product, we derive the least squares Hermitian indeterminate admittance solution with the least norm of the complex matrix Equation (5). In Section 3, we first discuss a class of linear least squares problems in the Hilbert inner product space $\mathbb{C}^{m\times n}$, and analyze a matrix-vector product of matrices. We then present an explicit expression of the solution of the complex matrix Equation (5) by this method.
2. Method I for the Solution of Problem I
In this section, we present the expression of the least squares Hermitian indeterminate admittance solution, with the least norm, of the complex matrix Equation (5), by using the Moore-Penrose generalized inverse and the Kronecker product of matrices.
Definition 4. For $A=(a_{ij})\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{p\times q}$, the symbol $A\otimes B=(a_{ij}B)\in\mathbb{R}^{mp\times nq}$ stands for the Kronecker product of $A$ and $B$.
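The Kronecker product is what lets a matrix equation be rewritten as an ordinary linear system, via the standard identity $\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)$. A quick numerical check with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order='F')   # column-stacking vectorization
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)         # vec(AXB) = (B^T kron A) vec(X)
print(np.allclose(lhs, rhs))           # True
```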
Theorem 3. Suppose $X\in SIAR^{n\times n}$ and $Y\in ASIAR^{n\times n}$. Then
1)
$$\operatorname{vec}(X)=K_{S}\operatorname{vec}_{S}(X), \qquad (9)$$
where $\operatorname{vec}_{S}(X)$ is represented as in (2), and the matrix $K_{S}$ is of the following form:
(10)
2)
$$\operatorname{vec}(Y)=K_{A}\operatorname{vec}_{A}(Y), \qquad (11)$$
where $\operatorname{vec}_{A}(Y)$ is represented as in (3), and the matrix $K_{A}$ is of the following form:
(12)
Proof. 1) For $X\in SIAR^{n\times n}$, X can be expressed as
![]()
![]()
It then follows that
![]()
![]()
![]()
![]()
Thus we have
![]()
Conversely, if the matrix $X$ satisfies (9), then it is easy to see that $X\in SIAR^{n\times n}$.
2) For $Y\in ASIAR^{n\times n}$, Y can be expressed as
![]()
It then follows that
![]()
![]()
Thus we have
![]()
Conversely, if the matrix $Y$ satisfies (11), then it is easy to see that $Y\in ASIAR^{n\times n}$. The proof is completed.
Theorem 4. Suppose $X\in HIAC^{n\times n}$ and write $X=X_{1}+iX_{2}$ with real matrices $X_{1}$ and $X_{2}$. Then
$$\operatorname{vec}(X)=K_{S}\operatorname{vec}_{S}(X_{1})+iK_{A}\operatorname{vec}_{A}(X_{2}), \qquad (13)$$
where $\operatorname{vec}_{S}(X_{1})$ and $\operatorname{vec}_{A}(X_{2})$ are represented as in (2) and (3), and the matrices $K_{S}$ and $K_{A}$ are of the forms (10) and (12).
Proof. For $X\in HIAC^{n\times n}$, write $X=X_{1}+iX_{2}$ with real matrices $X_{1}$ and $X_{2}$. From $X^{H}=X$ we get $X_{1}^{T}=X_{1}$ and $X_{2}^{T}=-X_{2}$, and since the row and column sums of $X$ vanish, so do those of $X_{1}$ and $X_{2}$. Thus $X_{1}\in SIAR^{n\times n}$ and $X_{2}\in ASIAR^{n\times n}$. By (9) and (11),
![]()
Conversely, if the matrix $X=X_{1}+iX_{2}$ satisfies (13), then it is easy to see that $X\in HIAC^{n\times n}$. The proof is completed.
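The decomposition at the heart of Theorem 4 can be checked numerically. The sketch below builds a Hermitian indeterminate admittance matrix by a bordering construction (our illustrative device, not the paper's) and verifies that its real and imaginary parts are symmetric and anti-symmetric indeterminate admittance matrices, respectively.

```python
import numpy as np

# Build a Hermitian indeterminate admittance matrix X by bordering a Hermitian
# block so that all row and column sums vanish.
M = np.array([[1.0, 2 - 1j], [2 + 1j, 4.0]])   # Hermitian 2 x 2 block
n = M.shape[0] + 1
X = np.zeros((n, n), dtype=complex)
X[:n-1, :n-1] = M
X[:n-1, -1] = -M.sum(axis=1)
X[-1, :n-1] = -M.sum(axis=0)
X[-1, -1] = M.sum()

# Split X = X1 + i*X2: X1 symmetric, X2 anti-symmetric, both with zero sums.
X1, X2 = X.real, X.imag
assert np.allclose(X1, X1.T)                   # X1 is symmetric
assert np.allclose(X2, -X2.T)                  # X2 is anti-symmetric
for Y in (X1, X2):                             # both inherit zero row/col sums
    assert np.allclose(Y.sum(axis=0), 0) and np.allclose(Y.sum(axis=1), 0)
```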
We now consider Problem I by using the Moore-Penrose generalized inverse and Kronecker product of matrices.
Theorem 5. Given $A,C\in\mathbb{C}^{m\times n}$, $B,D\in\mathbb{C}^{n\times s}$ and $E\in\mathbb{C}^{m\times s}$, let the matrices $K_{S}$ and $K_{A}$ be defined as in (10) and (12), and let $\operatorname{vec}$, $\operatorname{vec}_{S}$ and $\operatorname{vec}_{A}$ be defined as in (1), (2) and (3). Then the solution set $H_{L}$ of Problem I can be expressed as
(14)
where
![]()
![]()
where y is an arbitrary vector.
Furthermore, the unique least squares Hermitian indeterminate admittance solution with the least norm
can be expressed as
(15)
Proof. By Theorem 4, we can get
![]()
![]()
![]()
Thus, by Lemma 2,
![]()
By Theorem 4, it follows that
![]()
Thus we have
![]()
The proof is completed.
We now discuss the consistency of the complex matrix Equation (5). By Lemma 1 and Theorem 3, we can get the following conclusions.
Corollary 6. The matrix Equation (5) has a solution $X\in HIAC^{n\times n}$ if and only if
(16)
In this case, denote by
the solution set of (5). Then
(17)
Furthermore, if (16) holds, then the matrix Equation (5) has a unique solution
if and only if
(18)
In this case,
(19)
The least norm problem
![]()
has a unique solution
and
can be expressed as (15).
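To complement the method of this section, the following sketch computes a least squares Hermitian indeterminate admittance solution numerically, assuming (as in [26]) that Equation (5) reads $AXB+CXD=E$. The basis-by-bordering construction and all data are our illustrative assumptions, not the paper's matrices from (10), (12) and (14).

```python
import numpy as np

def hia_basis(n):
    """Real basis of the n x n Hermitian indeterminate admittance matrices,
    obtained by bordering a basis of (n-1) x (n-1) Hermitian matrices so that
    all row and column sums vanish."""
    def border(M):
        X = np.zeros((n, n), dtype=complex)
        X[:n-1, :n-1] = M
        X[:n-1, -1] = -M.sum(axis=1)
        X[-1, :n-1] = -M.sum(axis=0)
        X[-1, -1] = M.sum()
        return X
    basis, m = [], n - 1
    for i in range(m):
        for j in range(i, m):
            H = np.zeros((m, m), dtype=complex)
            if i == j:
                H[i, i] = 1
                basis.append(border(H))
            else:
                H[i, j] = H[j, i] = 1            # real symmetric direction
                basis.append(border(H))
                H = np.zeros((m, m), dtype=complex)
                H[i, j], H[j, i] = 1j, -1j       # imaginary Hermitian direction
                basis.append(border(H))
    return basis

rng = np.random.default_rng(2)
n, m, s = 4, 3, 5
A, C = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)) for _ in range(2))
B, D = (rng.standard_normal((n, s)) + 1j * rng.standard_normal((n, s)) for _ in range(2))
E = rng.standard_normal((m, s)) + 1j * rng.standard_normal((m, s))

basis = hia_basis(n)
cols = [(A @ Bk @ B + C @ Bk @ D).flatten(order='F') for Bk in basis]
F = np.array(cols).T                      # complex (m*s) x (n-1)^2 system
Fr = np.vstack([F.real, F.imag])          # realification of the system
er = np.concatenate([E.flatten(order='F').real, E.flatten(order='F').imag])
c = np.linalg.pinv(Fr) @ er               # real least squares coefficients
X = sum(ck * Bk for ck, Bk in zip(c, basis))

assert np.allclose(X, X.conj().T)                               # Hermitian
assert np.allclose(X.sum(axis=0), 0) and np.allclose(X.sum(axis=1), 0)
assert np.allclose(Fr.T @ (Fr @ c - er), 0, atol=1e-8)          # normal equations
```

Because this basis is not orthonormal with respect to the Frobenius inner product, the pseudoinverse yields a least squares solution but not automatically the least-norm one; recovering the least-norm solution of Problem I requires an isometric parametrization, which is exactly what the weighted vectorizations of Definitions 2 and 3 provide.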
3. Method II for the Solution of Problem I
The method used in this section for solving Problem I is from [26] ; we recall it concisely as follows.
Definition 5. Let
,
and
,
. Define
1)
;
2)
.
Let
,
, and
. By Definition 5, we have the following facts which are useful in this paper.
1)
;
2)
;
3)
;
4)
;
5)
;
6)
is not defined.
Suppose
,
,
,
,
,
,
. Then
7)
;
8)
;
9)
;
10)
.
Suppose
,
,
,
,
. Then
11)
.
Suppose
,
,
,
.
Then
12)
;
13)
.
Lemma 7. ( [26] ) Given matrices
and
, let
(20)
Let $X$ be a matrix that satisfies
(21)
If the matrix Equation (21) is consistent, then the solution set of the matrix Equation (21) is exactly the solution set of the following consistent system
(22)
Lemma 8. ( [26] ) Given
and the matrices
, let
![]()
such that
(23)
Then the solution set of (23) is the solution set of the system (22).
We now analyze the structure of the complex matrix equation $AXB+CXD=E$ over $HIAC^{n\times n}$ by means of the matrix-vector product introduced above.
Let
(24)
where
![]()
Let
(25)
Note that
.
Let
(26)
Note that
. We can get the following lemmas.
Lemma 9. Suppose $X\in SIAR^{n\times n}$. Then
(27)
where
is represented as (2), and the matrix
is in the form (25).
Lemma 10. Suppose $X\in ASIAR^{n\times n}$. Then
(28)
where
is represented as (3), and the matrix
is in the form (26).
Lemma 11. Suppose $X\in HIAC^{n\times n}$. Then
(29)
where
and
are represented as (2) and (3). The matrix
are in the form (25) and (26).
Theorem 12. Suppose
,
,
,
and
. Let
and
, where
is the ith column vector of matrix A, and
is the ith column vector of matrix C,
(30)
where
is the jth row vector of matrix B, and
is the jth row vector of matrix D. Then
1)
(31)
2) Let
, where
![]()
![]()
Thus
(32)
(33)
3) Let
, where
![]()
![]()
Thus
(34)
(35)
Proof. 1)
![]()
2) By (1), Definition 5 and Lemma 7, we can get
![]()
![]()
![]()
3) The proof is similar to that of 2), so we omit it.
The proof is completed.
We now use Lemmas 7 - 11, and Theorem 12 to consider the least squares Hermitian indeterminate admittance solution for the matrix Equation (5). The following notations and lemmas are necessary for deriving the solutions.
For
,
,
,
,
, and
, let
![]()
![]()
(36)
where
,
![]()
![]()
![]()
![]()
Theorem 13. Let
,
,
,
,
, and
,
are defined as (25) and (26),
are defined as (2) and (3). Let
be as in (36). Then
can be expressed as
(37)
where y is an arbitrary vector.
Furthermore, the unique least squares Hermitian indeterminate admittance solution with the least norm
can be expressed as
(38)
Proof. By Theorem 4, we can get
![]()
Then by Lemma 11, the least squares problem
![]()
with respect to the Hermitian indeterminate admittance matrix X is equivalent to the following consistent matrix equation
![]()
Thus, by Lemma 2,
if and only if
![]()
From Lemma 11, it follows that
![]()
where y is an arbitrary vector. It follows that
![]()
The proof is completed.
We now discuss the consistency of the complex matrix Equation (5). By Lemma 1 and Theorem 13, we can get the following conclusions.
Corollary 14. The matrix Equation (5) has a solution $X\in HIAC^{n\times n}$ if and only if
(39)
In this case, denote by
the solution set of (5). Then
(40)
where y is an arbitrary vector.
Furthermore, if (39) holds, then the matrix Equation (5) has a unique solution
if and only if
(41)
In this case,
(42)
The least norm problem
![]()
has a unique solution
and
can be expressed as (38).
4. Conclusion
In this paper, we mainly consider the least squares Hermitian indeterminate admittance problem of the complex matrix equation $AXB+CXD=E$. We derive an explicit expression for the solution of this equation over the set of Hermitian indeterminate admittance matrices. The paper provides a direct method for solving the least squares Hermitian indeterminate admittance problem of this complex matrix equation. Further work, such as iterative methods, error analysis and numerical stability, remains to be investigated in the future.
Funding
The research is supported by the Natural Science Foundation of China (No. 11571220), the Guangdong Natural Science Fund of China (No. 2015A030313646), and the Characteristic Innovation Project (Natural Science) of the Education Department of Guangdong Province (No. 2015KTSCX148).