1. Introduction
The second semi-tensor product of matrices is a new matrix multiplication constructed by Professor Cheng by replacing
in the matrix semi-tensor product with
in 2019. [1] In the same year, Professor Cheng systematically introduced the second semi-tensor product of matrices. [2] It provides a new way to deal with problems in control systems. For example, in control theory, the cross-dimensional system is a very important type of dimension-free system. [3] Many practical systems, such as generators, spacecraft, and biological systems, can be described by such cross-dimensional mathematical models. Switching is the classical way of dealing with dimension-varying systems, but its disadvantage is that it neglects the dynamic characteristics of the system during the change of dimension. The second matrix semi-tensor product, however, provides a new way to establish a unified model for such switching systems, so that cross-dimensional systems can be discussed more effectively.
Research on the matrix equation
not only has important theoretical significance, but also has wide applications in parameter identification, dynamic science, biology, dynamic analysis, nonlinear programming, and so on. Mitra studied the solutions of a pair of linear matrix equations
and
. [4] Zhang et al. [5] proposed an iterative method to solve the equation
when X is a symmetric matrix. Ji et al. [6] investigated the solution of the matrix equation
under the semi-tensor product. Liu et al. [7] studied the cyclic solutions and optimal approximation of the quaternion Stein equation. L. Chen [8] constructed a networked evolutionary model of the snow-drift game based on the semi-tensor product.
The second semi-tensor product of matrices is a new kind of multiplication introduced in recent years, so few researchers have studied the solvability of the matrix equation
under the second semi-tensor product. In 2021, Wang Jin [9] studied the solution of the matrix equation
under the second semi-tensor product, and this paper studies the solution of the matrix equation
by combining it with the conclusions obtained for the matrix equation
.
In this paper,
represents the n-dimensional vector space over the real number field, and
represents the linear space of all
matrices over the real number field.
represents the linear space of all
matrices over the complex number field.
represents the
dimensional matrix with elements
.
is the least common multiple of the integers
,
is the greatest common divisor of the integers
.
represents the column expansion of matrix A.
represents the
dimensional matrix whose elements are all 1.
is the round-down (floor) function.
represents the identity matrix of dimension
.
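To make the dimensional bookkeeping in later sections easy to reproduce, the notation above can be mirrored in a few lines of Python/NumPy. This is only an illustrative sketch: the helper names are ours, and the column expansion is implemented under the usual column-stacking convention, which may differ in detail from the convention intended above.

```python
import math
import numpy as np

def column_expansion(A):
    # Column expansion of a matrix: stack its columns into one long vector
    # (assuming the usual column-stacking convention).
    return A.reshape(-1, order="F")

t = math.lcm(4, 6)            # least common multiple of 4 and 6 -> 12
d = math.gcd(4, 6)            # greatest common divisor of 4 and 6 -> 2
ones_2x3 = np.ones((2, 3))    # 2 x 3 matrix whose elements are all 1
I3 = np.eye(3)                # 3-dimensional identity matrix
k = math.floor(7 / 2)         # round-down (floor) function -> 3
```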
2. Preliminaries
Definition 2.1. [6] Let
,
. The Kronecker product of A and B is defined as:
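As a quick numerical check of Definition 2.1, NumPy's `kron` builds the block matrix whose (i, j) block is a_ij B; the matrices below are arbitrary sample data.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])       # A is 2 x 2
B = np.array([[0, 1, 0],
              [1, 0, 1]])    # B is 2 x 3

K = np.kron(A, B)            # each entry a_ij of A is replaced by the block a_ij * B
print(K.shape)               # (4, 6), i.e. (2*2) x (2*3)
```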
Definition 2.2. [6] Let
, then
Lemma 2.1. [10] Let
,
,
, we have
In particular, when A is taken as
, we have
and when C is taken as
, we have
Definition 2.3. [6] The left (or right) second semi-tensor product of matrices, denoted as
,
,
, is defined as:
where
,
is a
-dimensional matrix.
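Since the displayed formula of Definition 2.3 is not reproduced in this text, the sketch below assumes the form commonly given in the literature, in which the identity factors of the ordinary (first) semi-tensor product are replaced by the normalized all-ones matrices J_k = (1/k) 1_{k×k}; the exact normalization used in the paper may differ, so treat this only as an illustration of the construction.

```python
import math
import numpy as np

def stp(A, B):
    # Ordinary (first) left semi-tensor product: (A kron I_{t/n}) @ (B kron I_{t/p}),
    # where n = cols(A), p = rows(B) and t = lcm(n, p).
    n, p = A.shape[1], B.shape[0]
    t = math.lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def second_stp(A, B):
    # ASSUMED form of the left second semi-tensor product:
    # (A kron J_{t/n}) @ (B kron J_{t/p}) with J_k = (1/k) * ones((k, k)).
    # The displayed formula of Definition 2.3 is not preserved here, so this
    # normalization is an assumption, not the paper's verbatim definition.
    n, p = A.shape[1], B.shape[0]
    t = math.lcm(n, p)
    J = lambda k: np.ones((k, k)) / k
    return np.kron(A, J(t // n)) @ np.kron(B, J(t // p))

# When cols(A) == rows(B), both products reduce to the ordinary product A @ B.
```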
3. The Solution of Matrix-Vector Second Semi-Tensor Product Equation
In this section, we explore the problem of solving the matrix-vector second semi-tensor product equation of the following form:
(1)
where
is given, and
is an unknown vector. Here we first study the case of
, and then consider the general case.
3.1. The Case
In this part, the solvability of Equation (1) under
is studied. Similar to the proof of the necessary dimension conditions for matrix semi-tensor product equations, we can obtain the following necessary condition on the dimension of a solution of the matrix-vector second semi-tensor product Equation (1):
Lemma 3.1. [6] If
, and the matrix-vector second semi-tensor product Equation (1) has a p-dimensional solution vector, then
must be positive integers and Equation (1) must satisfy
Remark 3.1. The condition in the lemma is necessary for the matrix-vector second semi-tensor product Equation (1) to have a solution, which is called the compatibility condition for the matrix-vector second semi-tensor product Equation (1). Matrices A, B and C are said to be compatible if they satisfy the conditions.
According to Lemma 3.1, if the matrix-vector second semi-tensor product Equation (1) with
has a solution, then
,
. Let
and by definition we can get:
where
is the partition of the matrix A into p equal blocks.
Let
, we can get:
Lemma 3.2. When
, the matrix-vector second semi-tensor product Equation (1) can be written as:
Theorem 3.1. The matrix-vector second semi-tensor product Equation (1) with
has a solution, if and only if
and C are linearly dependent in the matrix space
, and when
is linearly independent, the solution is unique.
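Theorem 3.1 turns existence into a linear-dependence test. The sketch below illustrates one natural reading of that test, namely checking whether C lies in the span of the equal blocks of A introduced above (written here as A_1, ..., A_p); this interpretation and the helper names are ours rather than the paper's.

```python
import numpy as np

def in_span(blocks, C, tol=1e-10):
    # Check whether C is a linear combination of the matrices A_1, ..., A_p
    # (all of the same shape as C) via a least-squares fit on their column
    # expansions.  Returns (solvable, coefficients).
    M = np.column_stack([Ai.reshape(-1, order="F") for Ai in blocks])
    c = C.reshape(-1, order="F")
    x, *_ = np.linalg.lstsq(M, c, rcond=None)
    return np.linalg.norm(M @ x - c) <= tol, x
```

When the blocks are linearly independent, the fitted coefficients are unique, which is consistent with the uniqueness statement of the theorem.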
Similarly, the following corollary can be drawn.
Corollary 3.1. If the matrix-vector second semi-tensor product Equation (1) has a solution when
, it must satisfy:
Take
, and let
be the sum of all the elements of row i of
, then
Let
be the sum of all the elements in column i of B and
we have
Combining this with Lemma 3.2, it can be concluded that
where, for
,
, we have
Further
Let
we have
, where
.
Theorem 3.2. If the matrix-vector second semi-tensor product Equation (1) has a solution when
, then the matrix
must have the following form:
where
and
,
.
In addition, we can also get the following theorem and corollary:
Theorem 3.3. The matrix-vector second semi-tensor product Equation (1) for
is equivalent to the following matrix-vector equation under ordinary multiplication:
Corollary 3.2. If the matrix-vector second semi-tensor product Equation (1) has a solution when
, the rank should satisfy the following conditions:
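The displayed rank condition of Corollary 3.2 is not reproduced above. As a generic illustration of this kind of test, once the equivalent ordinary equation of Theorem 3.3 is written abstractly as M x = c (M and c are placeholders here, not the paper's symbols), solvability can be checked by comparing ranks.

```python
import numpy as np

def rank_test(M, c):
    # Solvability test for an ordinary linear system M x = c:
    # a solution exists iff rank(M) == rank([M | c]).
    return np.linalg.matrix_rank(M) == np.linalg.matrix_rank(np.column_stack([M, c]))
```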
3.2. The General Case
Lemma 3.3. [6] If
and the matrix-vector second semi-tensor product Equation (1) has a p-dimensional solution vector, then
must be positive integers and the matrix-vector second semi-tensor product Equation (1) must satisfy
Next, we assume that the matrix-vector second semi-tensor product Equation (1) always satisfies the compatibility condition. We will find the solution of the matrix-vector second semi-tensor product Equation (1) on
. By the definition of the second semi-tensor product of matrices,
If
, then the matrix-vector second semi-tensor product Equation (1) reduces to the case
, and the solution of the matrix-vector second semi-tensor product Equation (1) is easily obtained from the previous conclusions.
Next, we study the conditions for the existence of matrix solutions for
.
Theorem 3.4. If the matrix-vector second semi-tensor product Equation (1) has a solution when
, then the matrix
must have the following form:
where,
and
,
,
is the sum of all the elements in column i of B.
Proof. From Lemma 3.3 we know that the matrix-vector second semi-tensor product Equation (1) has a solution
, where
. Let
,
be integers.
For row i of A, we have:
Let
and take
to be the sum of the elements in the i-th column of B; then we have
where
The theorem is proved.
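The quantities entering Theorem 3.4 and its proof are column sums of B; in NumPy they can be read off directly. The matrix below is arbitrary sample data.

```python
import numpy as np

B = np.array([[1, 2, 0],
              [3, 1, 1]])    # arbitrary 2 x 3 example

col_sums = B.sum(axis=0)     # sum of all the elements in each column of B
print(col_sums)              # [4 3 1]
```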
Several examples are given to illustrate the effectiveness of this method.
Example 3.1. Consider the matrix-vector second semi-tensor product equation
, where
is as follows: (For convenience, we write
,
,
,
.)
(1)
Note that
, and
, so the given matrices are incompatible, and according to Lemma 3.1, this equation has no solution.
(2)
It is calculated that
, so the given matrices are compatible, but C does not have a suitable form, and according to Lemma 3.1, there is no solution to this equation.
(3)
It is calculated that
, and the matrix C has a suitable form, so according to Lemma 3.1, this equation may have a solution
. Let
; then we have
Method 1: Know by definition
From
we have
solving the equation yields
Method 2: From
we have
solving the equation yields
(4)
It is calculated that
, and the matrix C has a suitable form, so according to Lemma 3.1, this equation may have a solution
. Let
; then we have
From
we have
solving the equation yields
(5)
where
. According to Lemma 3.3,
, and C has a suitable form, so this equation may have a solution
. First, by the definition of the second semi-tensor product of matrices, we know that
, so take
and let
; then we have
From
we have
further we have
4. The Solution of Matrix Equation
Now, we will study the solvability of the matrix second semi-tensor product equation
(2)
where
is known, and
is an unknown matrix.
4.1. The Case
Lemma 4.1. [6] If
, and the matrix second semi-tensor product Equation (2) has a solution of
dimension, then
must be a positive integer and satisfy
where
is any common divisor of n and
.
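Lemma 4.1 constrains the admissible dimensions through a common divisor of n and another dimension whose symbol is not preserved above. Enumerating the candidates is elementary; the helper below is illustrative, with the second argument standing in for that other dimension.

```python
import math

def common_divisors(n, m):
    # All positive common divisors of n and m, i.e. the divisors of gcd(n, m).
    g = math.gcd(n, m)
    return [d for d in range(1, g + 1) if g % d == 0]

print(common_divisors(8, 12))   # [1, 2, 4]
```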
Theorem 4.1. When
, the matrix second semi-tensor product Equation (2) is equivalent to
(3)
where
Let
be the sum of all the elements of row i of
, then
is the partition of the matrix A into p equal column blocks.
Proof. From Lemma 4.1, we know that
,
,
is a common divisor of n and
, and from the definition of the second semi-tensor product, we have
(4)
where
is the partition of the matrix A into p equal column blocks, taking
. Using the operator
we have
is the partition of the matrix D into 4 equal column blocks. Further,
can be obtained. Therefore, applying the operator
to the last expression in Equation (4), we get
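The proof above works with partitions of A and D into equal column blocks before applying the column expansion operator. The following sketch shows only that partitioning step; the block count p is a placeholder.

```python
import numpy as np

def column_blocks(A, p):
    # Split A into p column blocks of equal width; the number of columns
    # of A must be divisible by p.
    m, n = A.shape
    assert n % p == 0, "number of columns must be divisible by p"
    w = n // p
    return [A[:, i * w:(i + 1) * w] for i in range(p)]

A = np.arange(12).reshape(2, 6)    # arbitrary 2 x 6 example
A1, A2, A3 = column_blocks(A, 3)   # three 2 x 2 blocks
```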
4.2. The General Case
Lemma 4.2. [6] If
, and the matrix second semi-tensor product Equation (2) has a solution of
dimension, then
must be a positive integer and satisfy
where
is any common divisor of
and
,
.
Similarly, if the compatibility condition is satisfied, just let
, then the matrix second semi-tensor product Equation (2) takes the form
.
Theorem 4.2. If the matrix second semi-tensor product Equation (2) has a solution when
, then the matrix
must have the following form:
where,
and
.
Proof. By an argument similar to that of Theorem 3.4, we can obtain
where
The theorem is proved.
Now we give the concrete steps to solve the matrix second semi-tensor product Equation (2):
Step 1. First, we examine whether the matrix second semi-tensor product Equation (2) satisfies the compatibility condition, that is,
and C satisfy
where,
and
.
Step 2. Find all allowable dimensions
that satisfy Lemma 4.2.
Step 3. Let
.
Step 4. Find
, and then we can get:
.
Step 5. Solving
equations yields X.
Since the solution obtained by this method is exact and rather restricted, we can try to find an approximate solution of the equation when it does not meet the above conditions.
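Putting Steps 1 to 5 together, a schematic driver could look as follows. Because the displayed formulas of Steps 1, 3 and 4 are not reproduced above, the compatibility test and the construction of the linear system are left as user-supplied callbacks (`is_compatible`, `build_system`); only the control flow mirrors the listed steps, and the least-squares call also yields an approximate solution in the situation mentioned in the previous sentence.

```python
import numpy as np

def solve_equation_2(A, B, C, candidate_dims, is_compatible, build_system, tol=1e-10):
    # Schematic driver for Steps 1-5; the two callbacks stand in for formulas
    # that are not reproduced in this text.
    if not is_compatible(A, B, C):                 # Step 1: compatibility check
        return []
    solutions = []
    for rows, cols in candidate_dims:              # Step 2: dimensions from Lemma 4.2
        M, c = build_system(A, B, C, rows, cols)   # Steps 3-4: assemble M x = c
        x, *_ = np.linalg.lstsq(M, c, rcond=None)  # Step 5: solve (least squares)
        if np.linalg.norm(M @ x - c) <= tol:       # keep only exact solutions
            solutions.append(x.reshape(rows, cols, order="F"))
    return solutions
```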
Example 4.1. Consider the matrix second semi-tensor product equation
, where
are as follows: (For convenience, we write
,
,
,
.)
(1)
There are
, and note that the possible values of
are 1, 2 and 4, so the matrix equation may have solutions of dimensions 1 × 2, 2 × 4 and 4 × 8. It is easy to find that the 1 × 2 solution of the equation is
. In addition,
are also solutions of the matrix equation.
(2) Let
The matrices A and B are the same as in (1), and it is not difficult to verify that the matrices
are compatible, but the matrix second semi-tensor product Equation (2) has no solution. It can be seen that the condition of Lemma 4.1 is only necessary.
(3)
Through calculation, we find that the allowable dimensions of the solution are 2 × 3, 6 × 9, and
(4)
Through calculation, we find that the allowable dimensions of the solution are 2 × 3, 4 × 6, 8 × 12, and
(5)
Through calculation, we find that the allowable dimensions of the solution are 2 × 3, 4 × 6, 8 × 12, and
5. Conclusions and Suggestions
In this paper, we have investigated the solution of the matrix second semi-tensor product equation
. The second semi-tensor product of matrices is a new matrix multiplication constructed by Professor Cheng in 2019. Firstly, the solvability of the matrix-vector second semi-tensor product equation has been considered, and the compatibility conditions, the necessary and sufficient conditions, and the specific solution methods have been given. Then, the solvability of the matrix second semi-tensor product equation has been studied. For each part, several examples have been given to illustrate the validity of the results.
We expect the conclusions in this article to be useful and believe that they have broader application prospects in control systems, parameter identification, dynamic science, biology, dynamic analysis, nonlinear programming, and other fields.
Acknowledgements
Sincere thanks to the members of JAMP for their professional performance, and special thanks to the managing editor, Nancy, for her rare, high-quality dedication.
Funding
This research work was supported by the undergraduate education reform project of Shandong Normal University (Grant No. 2021BJ054) and the Excellent Youth Innovation Team Plan of Shandong Higher Education Institutions (Grant No. 2023KJ197).