Solution of the Matrix Second Semi-Tensor Product Equation $A\circ_l X\circ_l B=C$

Abstract

In this paper, the solution of the matrix second semi-tensor product equation $A\circ_l X\circ_l B=C$ is studied. Firstly, the solvability of the matrix-vector second semi-tensor product equation is investigated, and the compatibility conditions, the necessary and sufficient conditions, and concrete solution methods are given. Secondly, the solvability of the matrix second semi-tensor product equation is considered. For each part, several examples are given to illustrate the validity of the results.

Share and Cite:

Zhang, H. (2024) Solution of the Matrix Second Semi-Tensor Product Equation $A\circ_l X\circ_l B=C$. Journal of Applied Mathematics and Physics, 12, 3261-3280. doi: 10.4236/jamp.2024.1210194.

1. Introduction

The second semi-tensor product of matrices is a new matrix multiplication constructed by Professor Cheng in 2019 by replacing $I_k$ in the matrix semi-tensor product with $J_k$ [1]. In the same year, Professor Cheng systematically introduced the second semi-tensor product of matrices [2]. It provides a new way to treat problems in control systems. For example, in control theory the cross-dimensional system is a very important dimension-free system [3]. Many mathematical models describe such cross-dimensional systems, for instance generators, spacecraft, and biological systems. Switching is the classic way to handle variable-dimension systems, but its disadvantage is that it neglects the dynamic characteristics of the system during the change of dimension. The second semi-tensor product of matrices provides a new way to establish a unified model for such switching systems, so that cross-dimensional systems can be discussed more effectively.

The study of the matrix equation $AXB=C$ not only has important theoretical significance but also has wide applications in parameter identification, dynamics, biology, dynamic analysis, nonlinear programming, and so on. Mitra studied the solutions of the pair of linear matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$ [4]. Peng et al. [5] proposed an iterative method to solve the equation $AXB=C$ when X is a symmetric matrix. Ji et al. [6] investigated the solution of the matrix equation $AXB=C$ under the semi-tensor product. Liu et al. [7] studied the cyclic solution and optimal approximation of the quaternion Stein equation. Chen [8] built a networked evolutionary model of the snow-drift game based on the semi-tensor product.

The second semi-tensor product of matrices is a new kind of multiplication introduced in recent years, so few people have studied the solvability of the matrix equation $AXB=C$ under it. In 2021, Wang [9] studied the solution of the matrix equation $AX=B$ under the second semi-tensor product; this paper studies the solution of the matrix equation $AXB=C$ by combining those conclusions on $AX=B$.

In this paper, $\mathbb{R}^n$ denotes the n-dimensional vector space over the real number field, and $\mathbb{R}^{m\times n}$ denotes the linear space of all $m\times n$ matrices over the real number field; $\mathbb{C}^{m\times n}$ denotes the linear space of all $m\times n$ matrices over the complex number field. $A=(a_{ij})\in\mathbb{R}^{m\times n}$ denotes the $m\times n$ matrix with entries $a_{ij}$. $\mathrm{lcm}(m,n)$ is the least common multiple of the integers $m,n$, and $\gcd(m,n)$ is their greatest common divisor. $\mathrm{vec}(A)$ denotes the column expansion of the matrix A. $1_{k\times k}$ denotes the $k\times k$ matrix whose entries are all 1. $\lfloor x\rfloor$ is the rounding-down (floor) function. $I_n$ denotes the identity matrix of dimension $n\times n$.

2. Preliminaries

Definition 2.1. [6] Let $A=(a_{ij})\in\mathbb{R}^{m\times n}$, $B=(b_{ij})\in\mathbb{R}^{n\times r}$. The Kronecker product of A and B is defined as

$$A\otimes B=\begin{bmatrix}a_{11}B & a_{12}B & \cdots & a_{1n}B\\ a_{21}B & a_{22}B & \cdots & a_{2n}B\\ \vdots & \vdots & & \vdots\\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B\end{bmatrix}.$$

Definition 2.2. [6] Let $A=(a_{ij})\in\mathbb{R}^{m\times n}$. Then

$$\mathrm{vec}(A)=[a_{11}\ \cdots\ a_{m1}\ a_{12}\ \cdots\ a_{m2}\ \cdots\ a_{1n}\ \cdots\ a_{mn}]^T.$$

Lemma 2.1. [10] Let $A=(a_{ij})\in\mathbb{R}^{m\times n}$, $B=(b_{ij})\in\mathbb{R}^{n\times r}$, $C=(c_{ij})\in\mathbb{R}^{r\times l}$. Then

$$\mathrm{vec}(ABC)=(C^T\otimes A)\,\mathrm{vec}(B).$$

In particular, when $A=I_n$,

$$\mathrm{vec}(BC)=(C^T\otimes I_n)\,\mathrm{vec}(B),$$

and when $C=I_r$,

$$\mathrm{vec}(AB)=(I_r\otimes A)\,\mathrm{vec}(B).$$
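As a quick numerical sanity check, Lemma 2.1 can be verified in a few lines of plain Python (a sketch; the helper names `kron`, `vec`, `matmul`, `transpose` are ours, not from the paper):

```python
def kron(A, B):
    """Kronecker product A (x) B (Definition 2.1); matrices are lists of rows."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def vec(A):
    """Column-stacking operator vec(A) (Definition 2.2)."""
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Lemma 2.1: vec(ABC) = (C^T (x) A) vec(B)
A = [[1, 2], [3, 4]]          # 2x2
B = [[0, 1, 2], [1, 0, 1]]    # 2x3
C = [[1, 0], [2, 1], [0, 3]]  # 3x2
lhs = vec(matmul(matmul(A, B), C))
rhs = [sum(r * x for r, x in zip(row, vec(B)))
       for row in kron(transpose(C), A)]
assert lhs == rhs
```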

Definition 2.3. [6] Let $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$. The left (respectively right) second semi-tensor product of matrices, denoted $A\circ_l B$ (respectively $A\circ_r B$), is defined as

$$A\circ_l B=(A\otimes J_{t/n})(B\otimes J_{t/r})\in\mathbb{R}^{(mt/n)\times(lt/r)}\quad \left(A\circ_r B=(J_{t/n}\otimes A)(J_{t/r}\otimes B)\in\mathbb{R}^{(mt/n)\times(lt/r)}\right),$$

where $t=\mathrm{lcm}(n,r)$ and $J_k=\frac{1}{k}1_{k\times k}$ is a $k\times k$ matrix.
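Definition 2.3 translates directly into code. Below is a small plain-Python sketch of the left product (the helper names `kron`, `matmul`, `J`, `slp` are ours; `Fraction` keeps the $\frac{1}{k}$ factors in $J_k$ exact):

```python
from fractions import Fraction
from math import lcm

def kron(A, B):
    """Kronecker product A (x) B; matrices are lists of rows."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def J(k):
    """J_k = (1/k) * 1_{k x k}."""
    return [[Fraction(1, k)] * k for _ in range(k)]

def slp(A, B):
    """Left second semi-tensor product A o_l B of Definition 2.3."""
    n, r = len(A[0]), len(B)
    t = lcm(n, r)
    return matmul(kron(A, J(t // n)), kron(B, J(t // r)))

# when n = r the product reduces to the ordinary matrix product
assert slp([[1, 2], [3, 4]], [[1, 0], [0, 1]]) == [[1, 2], [3, 4]]
```

For non-conformable dimensions the result lives in $\mathbb{R}^{(mt/n)\times(lt/r)}$; e.g. a $1\times 2$ times a $3\times 1$ argument gives a $3\times 2$ result.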

3. The Solution of the Matrix-Vector Second Semi-Tensor Product Equation

In this section, we explore the problem of solving the matrix-vector second semi-tensor product equation of the following form:

$$A\circ_l X\circ_l B=C, \qquad (1)$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are given and $X\in\mathbb{R}^p$ is an unknown vector. We first study the case $m=h$ and then consider the general case.

3.1. The Case m=h

In this part, the solvability of Equation (1) under $m=h$ is studied. Similar to the proof of the necessary dimension conditions for matrix semi-tensor product equations, we obtain the following necessary dimension condition for the solution of the matrix-vector second semi-tensor product Equation (1):

Lemma 3.1. [6] If $m=h$ and the matrix-vector second semi-tensor product Equation (1) has a p-dimensional solution vector, then $\frac{k}{l}$ and $\frac{n}{r}$ must be positive integers satisfying $\frac{k}{l}\mid\frac{n}{r}$, and $p=\frac{ln}{rk}$.

Remark 3.1. The condition in the lemma is necessary for the matrix-vector second semi-tensor product Equation (1) to have a solution; it is called the compatibility condition for Equation (1). The matrices A, B and C are said to be compatible if they satisfy this condition.

According to Lemma 3.1, if the matrix-vector second semi-tensor product Equation (1) with $m=h$ has a solution, then $X\in\mathbb{R}^p$ with $p=\frac{ln}{rk}$. Let $t_1=\mathrm{lcm}(n,p)$; by the definition we get

$$A\circ_l X\circ_l B=\frac{n}{t_1}\left(A\otimes 1_{\frac{t_1}{n}\times\frac{t_1}{n}}\right)\cdot\frac{p}{t_1}\left(X\otimes 1_{\frac{t_1}{p}\times\frac{t_1}{p}}\right)\circ_l B=\frac{p}{n}A\left(X\otimes 1_{\frac{n}{p}\times\frac{n}{p}}\right)\circ_l B=\frac{p}{n}A\begin{bmatrix}x_1 1_{\frac{n}{p}\times\frac{n}{p}}\\ x_2 1_{\frac{n}{p}\times\frac{n}{p}}\\ \vdots\\ x_p 1_{\frac{n}{p}\times\frac{n}{p}}\end{bmatrix}\circ_l B$$

$$=\frac{p}{n}\left(x_1\dot A_1 1_{\frac{n}{p}\times\frac{n}{p}}+x_2\dot A_2 1_{\frac{n}{p}\times\frac{n}{p}}+\cdots+x_p\dot A_p 1_{\frac{n}{p}\times\frac{n}{p}}\right)\circ_l B=\frac{pl}{nk}\left(x_1\dot A_1 1_{\frac{n}{p}\times\frac{n}{p}}\left(B\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)+\cdots+x_p\dot A_p 1_{\frac{n}{p}\times\frac{n}{p}}\left(B\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\right),$$

where $\dot A_j\in\mathbb{R}^{m\times\frac{n}{p}}$, $j=1,2,\dots,p$, are the p equal column blocks of the matrix A, i.e. $A=[\dot A_1\ \dot A_2\ \cdots\ \dot A_p]$.

Let $\dot B_j=\dot A_j 1_{\frac{n}{p}\times\frac{n}{p}}\left(B\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)$; then we get:

Lemma 3.2. When $m=h$, the matrix-vector second semi-tensor product Equation (1) can be written as

$$\frac{pl}{nk}\left(x_1\dot B_1+x_2\dot B_2+\cdots+x_p\dot B_p\right)=C.$$

Theorem 3.1. The matrix-vector second semi-tensor product Equation (1) with $m=h$ has a solution if and only if C is a linear combination of $\dot B_1,\dot B_2,\dots,\dot B_p$ in the matrix space $\mathbb{R}^{m\times k}$; when $\dot B_1,\dot B_2,\dots,\dot B_p$ are linearly independent, the solution is unique.

Similarly, the following corollary can be drawn.

Corollary 3.1. If the matrix-vector second semi-tensor product Equation (1) with $m=h$ has a solution, then

$$\mathrm{rank}[\dot B_1\ \dot B_2\ \cdots\ \dot B_p]=\mathrm{rank}[\dot B_1\ \dot B_2\ \cdots\ \dot B_p\ \ C].$$
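The reduction of Lemma 3.2 can be checked numerically. The following plain-Python sketch (helper names `kron`, `matmul`, `ones` are ours) forms the matrices $\dot B_j$ for the data of Example 3.1(3) below and confirms that $\frac{pl}{nk}(x_1\dot B_1+x_2\dot B_2)$ reproduces C for $x=(2,3)$:

```python
from fractions import Fraction

def kron(A, B):
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ones(r, c):
    return [[1] * c for _ in range(r)]

# data of Example 3.1(3): A in R^{2x4}, B in R^{2x2}, C in R^{2x2}
A = [[2, 0, 0, 2], [0, 2, 2, 2]]
B = [[1, 3], [2, 4]]
m, n, r, l, k = 2, 4, 2, 2, 2
p = l * n // (r * k)                 # p = ln/(rk) = 2
x = [2, 3]

w = n // p                           # block width n/p
A_dots = [[row[j * w:(j + 1) * w] for row in A] for j in range(p)]
B_dots = [matmul(matmul(Aj, ones(w, w)), kron(B, ones(k // l, k // l)))
          for Aj in A_dots]
lhs = [[Fraction(p * l, n * k) * sum(x[j] * B_dots[j][i][s] for j in range(p))
        for s in range(k)] for i in range(m)]
assert lhs == [[15, 35], [24, 56]]   # equals C
```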

Take $\ddot A_j=\dot A_j 1_{\frac{n}{p}\times\frac{n}{p}}$, $j=1,2,\dots,p$, and let $w_{ij}$ be the sum of all the elements of row i of $\dot A_j$; then

$$\ddot A_j=\begin{bmatrix}w_{1j} & w_{1j} & \cdots & w_{1j}\\ w_{2j} & w_{2j} & \cdots & w_{2j}\\ \vdots & \vdots & & \vdots\\ w_{mj} & w_{mj} & \cdots & w_{mj}\end{bmatrix},\quad j=1,2,\dots,p.$$

Let $v_i$ be the sum of all the elements in column i of B and

$$B_{ij}=\begin{bmatrix}b_{ij} & \cdots & b_{ij}\\ \vdots & & \vdots\\ b_{ij} & \cdots & b_{ij}\end{bmatrix}=b_{ij}1_{\frac{k}{l}\times\frac{k}{l}};$$

then we have

$$\dot B_j=\ddot A_j\begin{bmatrix}B_{11} & \cdots & B_{1l}\\ \vdots & & \vdots\\ B_{r1} & \cdots & B_{rl}\end{bmatrix}=\frac{k}{l}\begin{bmatrix}w_{1j}\\ w_{2j}\\ \vdots\\ w_{mj}\end{bmatrix}\begin{bmatrix}v_1 & \cdots & v_1 & v_2 & \cdots & v_2 & \cdots & v_l & \cdots & v_l\end{bmatrix}\in\mathbb{R}^{m\times k},$$

where each $v_s$ is repeated $\frac{k}{l}$ times.

Combining Lemma 3.2, it can be concluded that

$$C=[\mathrm{Block}_1(C)\ \mathrm{Block}_2(C)\ \cdots\ \mathrm{Block}_l(C)],$$

where, for $i=1,2,\dots,m$, $s=1,2,\dots,l$,

$$\mathrm{Block}_s(C)=\begin{bmatrix}c_{1s} & \cdots & c_{1s}\\ \vdots & & \vdots\\ c_{ms} & \cdots & c_{ms}\end{bmatrix},\quad c_{is}=c_{i,(s-1)\frac{k}{l}+1}.$$

Further,

$$\frac{p}{n}\sum_{j=1}^{p}x_j\begin{bmatrix}w_{1j}v_i\\ w_{2j}v_i\\ \vdots\\ w_{mj}v_i\end{bmatrix}=\begin{bmatrix}c_{1i}\\ c_{2i}\\ \vdots\\ c_{mi}\end{bmatrix}.$$

Let

$$W=\begin{bmatrix}w_{11} & \cdots & w_{1p}\\ \vdots & & \vdots\\ w_{m1} & \cdots & w_{mp}\end{bmatrix};$$

then we have $v_i\frac{p}{n}WX=C_i(C)$, where $C_i(C)=[c_{1i}\ c_{2i}\ \cdots\ c_{mi}]^T$.

Theorem 3.2. If the matrix-vector second semi-tensor product Equation (1) has a solution when $m=h$, then the matrix $C=(c_{ij})$, $i=1,2,\dots,m$; $j=1,2,\dots,k$, must have the following form:

$$C=[\mathrm{Block}_1(C)\ \mathrm{Block}_2(C)\ \cdots\ \mathrm{Block}_l(C)],$$

where

$$\mathrm{Block}_s(C)=\begin{bmatrix}c_{1s} & \cdots & c_{1s}\\ \vdots & & \vdots\\ c_{ms} & \cdots & c_{ms}\end{bmatrix}\in\mathbb{R}^{m\times\frac{k}{l}},$$

with $c_{is}=c_{i,(s-1)\frac{k}{l}+1}$, $i=1,2,\dots,m$; $s=1,2,\dots,l$, and $c_{i,s+1}/c_{is}=v_{s+1}/v_s$.

In addition, we can also get the following theorem and corollary:

Theorem 3.3. The matrix-vector second semi-tensor product Equation (1) with $m=h$ is equivalent to the following matrix-vector equations under ordinary multiplication:

$$v_i\frac{p}{n}WX=C_i(C),\quad i=1,2,\dots,l.$$

Corollary 3.2. If the matrix-vector second semi-tensor product Equation (1) with $m=h$ has a solution, then the ranks satisfy

$$\mathrm{rank}\,W=\mathrm{rank}[W\ \ C_i(C)],\quad i=1,2,\dots,l.$$

3.2. The General Case

Lemma 3.3. [6] If $m\neq h$ and the matrix-vector second semi-tensor product Equation (1) has a p-dimensional solution vector, then $\frac{h}{m}$, $\frac{k}{l}$ and $\frac{rk}{nl}$ must be positive integers, and Equation (1) must satisfy

$$\beta=\gcd\left(\frac{h}{m},r\right),\quad \gcd\left(\frac{k}{l},\beta\right)=1,\quad \gcd\left(\frac{h}{m},\frac{k}{l}\right)=1,\quad p=\frac{nhl}{mrk}.$$

Next, we assume that the matrix-vector second semi-tensor product Equation (1) always satisfies the compatibility condition, and we look for its solution in $\mathbb{R}^{\frac{nhl}{mrk}}$. By the definition of the second semi-tensor product of matrices,

$$A\circ_l X\circ_l B=\left(\frac{m}{h}\left(A\otimes 1_{\frac{h}{m}\times\frac{h}{m}}\right)\right)\circ_l X\circ_l B.$$

If we set $A'=\frac{m}{h}\left(A\otimes 1_{\frac{h}{m}\times\frac{h}{m}}\right)$, then the matrix-vector second semi-tensor product Equation (1) reduces to the case $m=h$, and its solution is easily obtained from the previous conclusions.

Next, we study the conditions for the existence of solutions when $m\neq h$.

Theorem 3.4. If the matrix-vector second semi-tensor product Equation (1) has a solution when $m\neq h$, then the matrix $C=(c_{ij})$, $i=1,2,\dots,m$; $j=1,2,\dots,k$, must have the following form:

$$C=\begin{bmatrix}\mathrm{Block}_{11}(C) & \cdots & \mathrm{Block}_{1l}(C)\\ \vdots & & \vdots\\ \mathrm{Block}_{m1}(C) & \cdots & \mathrm{Block}_{ml}(C)\end{bmatrix},$$

where

$$\mathrm{Block}_{ij}(C)=\begin{bmatrix}c_{ij} & \cdots & c_{ij}\\ \vdots & & \vdots\\ c_{ij} & \cdots & c_{ij}\end{bmatrix}\in\mathbb{R}^{\frac{h}{m}\times\frac{k}{l}},$$

with $c_{ij}=c_{(i-1)\frac{h}{m}+1,(j-1)\frac{k}{l}+1}$, $i=1,2,\dots,m$; $j=1,2,\dots,l$, and $c_{i,j+1}/c_{ij}=v_{j+1}/v_j$, where $v_j$ is the sum of all the elements in column j of B.

Proof. From Lemma 3.3 we know that the matrix-vector second semi-tensor product Equation (1) has a solution $X\in\mathbb{R}^p$, where $p=\frac{nhl}{mrk}$. Write $n=l_1p+l_2$ with integers $l_1,l_2$.

For row i of A, we have:

$$\mathrm{Row}_i(A)\circ_l X\circ_l B=\frac{m}{h}\left(\mathrm{Row}_i(A)\otimes 1_{\frac{h}{m}\times\frac{h}{m}}\right)\cdot\frac{mp}{nh}\left(X\otimes 1_{\frac{nh}{mp}\times\frac{nh}{mp}}\right)\circ_l B=w\,1_{\frac{h}{m}\times\frac{nh}{mp}}\circ_l B,$$

where the scalar w collects the entries of row i of A, weighted by the overlaps between the blocks of the two factors:

$$w=\frac{m^2p}{nh^2}\Big[x_1\Big(\tfrac{h}{m}a_{i,1}+\tfrac{h}{m}a_{i,2}+\cdots+\tfrac{h}{m}a_{i,l_1}+l_2a_{i,l_1+1}\Big)+x_2\Big(\big(\tfrac{h}{m}-l_2\big)a_{i,l_1+1}+\tfrac{h}{m}a_{i,l_1+2}+\cdots+\mathrm{mod}\big(\tfrac{2n}{p}\big)a_{i,\lfloor 2n/p\rfloor+1}\Big)+\cdots+x_{\frac{h}{m}}\Big(l_2a_{i,\frac{nh}{mp}-l_1}+\tfrac{h}{m}a_{i,\frac{nh}{mp}-l_1+1}+\cdots+\tfrac{h}{m}a_{i,\frac{nh}{mp}}\Big)+x_{\frac{h}{m}+1}\Big(\tfrac{h}{m}a_{i,\frac{nh}{mp}+1}+\cdots+\tfrac{h}{m}a_{i,\frac{nh}{mp}+l_1}+l_2a_{i,\frac{nh}{mp}+l_1+1}\Big)+\cdots+x_p\Big(l_2a_{i,n-l_1}+\tfrac{h}{m}a_{i,n-l_1+1}+\cdots+\tfrac{h}{m}a_{i,n}\Big)\Big].$$

Taking $v_j$ to be the sum of the elements of column j of B, we have

$$\mathrm{Row}_i(A)\circ_l X\circ_l B=w\,1_{\frac{h}{m}\times\frac{nh}{mp}}\circ_l B=w\,1_{\frac{h}{m}\times\frac{nh}{mp}}\left(B\otimes J_{\frac{k}{l}}\right)=\begin{bmatrix}wv_1 & \cdots & wv_1 & \cdots & wv_l & \cdots & wv_l\\ \vdots & & & & & \vdots\\ wv_1 & \cdots & wv_1 & \cdots & wv_l & \cdots & wv_l\end{bmatrix}=[\mathrm{Block}_{i1}(C)\ \mathrm{Block}_{i2}(C)\ \cdots\ \mathrm{Block}_{il}(C)],$$

where

$$\mathrm{Block}_{ij}(C)=\begin{bmatrix}c_{ij} & \cdots & c_{ij}\\ \vdots & & \vdots\\ c_{ij} & \cdots & c_{ij}\end{bmatrix}\in\mathbb{R}^{\frac{h}{m}\times\frac{k}{l}}.$$

The theorem is proved.

Several examples are given to illustrate the effectiveness of this method.

Example 3.1. Consider the matrix-vector second semi-tensor product equation $A\circ_l X\circ_l B=C$, where A, B and C are as follows. (For convenience, write $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$, $X\in\mathbb{R}^p$.)

(1)

$$A=\begin{bmatrix}8&3&7&8\\ 5&8&8&2\end{bmatrix},\quad B=\begin{bmatrix}6&3\\ 4&2\end{bmatrix},\quad C=\begin{bmatrix}8&2&3\\ 6&2&3\end{bmatrix}.$$

Note that $m=h$ but $\frac{k}{l}=\frac{3}{2}$ is not a positive integer, so the given matrices are incompatible, and according to Lemma 3.1 this equation has no solution.

(2)

$$A=\begin{bmatrix}1&0&0&1&1&0&0&1\\ 0&1&1&1&1&1&1&1\\ 1&1&1&1&0&1&1&1\end{bmatrix},\quad B=\begin{bmatrix}1&3\\ 1&2\end{bmatrix},\quad C=\begin{bmatrix}1&2&3&2\\ 1&2&3&1\\ 1&2&3&3\end{bmatrix}.$$

It is calculated that $m=h$, $\frac{k}{l}=2$, $\frac{n}{r}=4$, $\frac{k}{l}\mid\frac{n}{r}$, $p=2$, so the given matrices are compatible; but C does not have the form required by Theorem 3.2, so this equation has no solution.

(3)

$$A=\begin{bmatrix}2&0&0&2\\ 0&2&2&2\end{bmatrix},\quad B=\begin{bmatrix}1&3\\ 2&4\end{bmatrix},\quad C=\begin{bmatrix}15&35\\ 24&56\end{bmatrix}.$$

It is calculated that $m=h$, $\frac{k}{l}=1$, $\frac{n}{r}=2$, $\frac{k}{l}\mid\frac{n}{r}$, $p=2$, and the matrix C has a suitable form, so according to Lemma 3.1 this equation may have a solution $X\in\mathbb{R}^2$. Take

$$\dot A_1=\begin{bmatrix}2&0\\ 0&2\end{bmatrix},\quad \dot A_2=\begin{bmatrix}0&2\\ 2&2\end{bmatrix};$$

then

$$\ddot A_1=\begin{bmatrix}2&2\\ 2&2\end{bmatrix},\quad \ddot A_2=\begin{bmatrix}2&2\\ 4&4\end{bmatrix},\quad W=\begin{bmatrix}2&2\\ 2&4\end{bmatrix},\quad v_1=3,\ v_2=7.$$

Method 1: By the definition,

$$A\circ_l X\circ_l B=\frac{1}{2}A\left(X\otimes 1_{2\times 2}\right)\circ_l B.$$

From

$$\frac{1}{2}\begin{bmatrix}2&0&0&2\\ 0&2&2&2\end{bmatrix}\begin{bmatrix}x_1&x_1\\ x_1&x_1\\ x_2&x_2\\ x_2&x_2\end{bmatrix}=\begin{bmatrix}1&0&0&1\\ 0&1&1&1\end{bmatrix}\begin{bmatrix}x_1&x_1\\ x_1&x_1\\ x_2&x_2\\ x_2&x_2\end{bmatrix}=\begin{bmatrix}x_1+x_2&x_1+x_2\\ x_1+2x_2&x_1+2x_2\end{bmatrix}$$

we have

$$A\circ_l X\circ_l B=\begin{bmatrix}x_1+x_2&x_1+x_2\\ x_1+2x_2&x_1+2x_2\end{bmatrix}B=C,$$

and solving this yields

$$X=\begin{bmatrix}2\\ 3\end{bmatrix}.$$

Method 2: From $v_1\frac{p}{n}WX=C_1(C)$ we have

$$\frac{3}{2}\begin{bmatrix}2&2\\ 2&4\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=\begin{bmatrix}15\\ 24\end{bmatrix},$$

and solving this yields

$$X=\begin{bmatrix}2\\ 3\end{bmatrix}.$$
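Both methods can be checked numerically with a few lines of plain Python (a sketch; the helper names `kron`, `matmul`, `J`, `slp` are ours, and `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction
from math import lcm

def kron(A, B):
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def J(k):
    return [[Fraction(1, k)] * k for _ in range(k)]

def slp(A, B):
    """Left second semi-tensor product A o_l B (Definition 2.3)."""
    n, r = len(A[0]), len(B)
    t = lcm(n, r)
    return matmul(kron(A, J(t // n)), kron(B, J(t // r)))

A = [[2, 0, 0, 2], [0, 2, 2, 2]]
B = [[1, 3], [2, 4]]
C = [[15, 35], [24, 56]]
X = [[2], [3]]

# Method 1: evaluate A o_l X o_l B directly from the definition
assert slp(slp(A, X), B) == C

# Method 2: the reduced ordinary system v1 * (p/n) * W * X = C_1(C)
W = [[2, 2], [2, 4]]
v1, p, n = 3, 2, 4
lhs = [[v1 * Fraction(p, n) * sum(W[i][j] * X[j][0] for j in range(2))]
       for i in range(2)]
assert lhs == [[15], [24]]   # the first column of C
```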

(4)

$$A=\begin{bmatrix}4&0&0&4&4&0&0&4\\ 0&4&4&4&4&4&4&4\\ 4&4&4&4&0&4&4&4\end{bmatrix},\quad B=\begin{bmatrix}1&3\\ 1&2\end{bmatrix},\quad C=\begin{bmatrix}12&12&30&30\\ 22&22&55&55\\ 20&20&50&50\end{bmatrix}.$$

It is calculated that $m=h$, $\frac{k}{l}=2$, $\frac{n}{r}=4$, $\frac{k}{l}\mid\frac{n}{r}$, $p=2$, and the matrix C has a suitable form, so according to Lemma 3.1 this equation may have a solution $X\in\mathbb{R}^2$. Let

$$\dot A_1=\begin{bmatrix}4&0&0&4\\ 0&4&4&4\\ 4&4&4&4\end{bmatrix},\quad \dot A_2=\begin{bmatrix}4&0&0&4\\ 4&4&4&4\\ 0&4&4&4\end{bmatrix};$$

then

$$\ddot A_1=\begin{bmatrix}8&8&8&8\\ 12&12&12&12\\ 16&16&16&16\end{bmatrix},\quad \ddot A_2=\begin{bmatrix}8&8&8&8\\ 16&16&16&16\\ 12&12&12&12\end{bmatrix},\quad W=\begin{bmatrix}8&8\\ 12&16\\ 16&12\end{bmatrix},$$

$$v_1=2,\quad v_2=5.$$

From $v_1\frac{p}{n}WX=C_1(C)$ we have

$$\frac{1}{2}\begin{bmatrix}8&8\\ 12&16\\ 16&12\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=\begin{bmatrix}12\\ 22\\ 20\end{bmatrix},$$

and solving this yields

$$X=\begin{bmatrix}1\\ 2\end{bmatrix}.$$

(5)

$$A=\begin{bmatrix}1&2&1&1\\ 2&2&3&1\end{bmatrix},\quad B=\begin{bmatrix}2&2&1&1\\ 2&2&1&1\\ 2&2&1&1\\ 2&2&1&1\end{bmatrix},\quad C=\begin{bmatrix}38&38&19&19\\ 38&38&19&19\\ 64&64&32&32\\ 64&64&32&32\end{bmatrix}.$$

Here $m\neq h$. According to Lemma 3.3, $\frac{h}{m}=2$, $\frac{k}{l}=1$, $\frac{rk}{nl}=1$, $\beta=2$, and C has a suitable form, so this equation may have a solution $X\in\mathbb{R}^2$. First, by the definition of the second semi-tensor product of matrices, $A\circ_l X\circ_l B=\frac{1}{2}\left(A\otimes 1_{2\times 2}\right)\circ_l X\circ_l B$, so take

$$A'=A\otimes 1_{2\times 2}=\begin{bmatrix}1&1&2&2&1&1&1&1\\ 1&1&2&2&1&1&1&1\\ 2&2&2&2&3&3&1&1\\ 2&2&2&2&3&3&1&1\end{bmatrix},$$

and let

$$\dot A_1=\begin{bmatrix}1&1&2&2\\ 1&1&2&2\\ 2&2&2&2\\ 2&2&2&2\end{bmatrix},\quad \dot A_2=\begin{bmatrix}1&1&1&1\\ 1&1&1&1\\ 3&3&1&1\\ 3&3&1&1\end{bmatrix};$$

then

$$\ddot A_1=\begin{bmatrix}6&6&6&6\\ 6&6&6&6\\ 8&8&8&8\\ 8&8&8&8\end{bmatrix},\quad \ddot A_2=\begin{bmatrix}4&4&4&4\\ 4&4&4&4\\ 8&8&8&8\\ 8&8&8&8\end{bmatrix},\quad W=\begin{bmatrix}6&4\\ 6&4\\ 8&8\\ 8&8\end{bmatrix},$$

$$v_1=8,\quad v_2=8,\quad v_3=4,\quad v_4=4.$$

From $v_1\frac{p}{2n}WX=C_1(C)$ we have

$$\begin{bmatrix}6&4\\ 6&4\\ 8&8\\ 8&8\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=\begin{bmatrix}38\\ 38\\ 64\\ 64\end{bmatrix},$$

and hence

$$X=\begin{bmatrix}3\\ 5\end{bmatrix}.$$
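This $m\neq h$ example can also be verified numerically, both directly from Definition 2.3 and after the lifting $A'=\frac{1}{2}(A\otimes 1_{2\times 2})$ (a plain-Python sketch; the helper names `kron`, `matmul`, `J`, `slp` are ours):

```python
from fractions import Fraction
from math import lcm

def kron(A, B):
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def J(k):
    return [[Fraction(1, k)] * k for _ in range(k)]

def slp(A, B):
    """Left second semi-tensor product A o_l B (Definition 2.3)."""
    n, r = len(A[0]), len(B)
    t = lcm(n, r)
    return matmul(kron(A, J(t // n)), kron(B, J(t // r)))

A = [[1, 2, 1, 1], [2, 2, 3, 1]]
B = [[2, 2, 1, 1] for _ in range(4)]
C = [[38, 38, 19, 19], [38, 38, 19, 19],
     [64, 64, 32, 32], [64, 64, 32, 32]]
X = [[3], [5]]

assert slp(slp(A, X), B) == C        # direct check of A o_l X o_l B = C

# lifting to the m = h case: A' = (1/2)(A (x) 1_{2x2}) has h = 4 rows
Ap = [[Fraction(1, 2) * v for v in row]
      for row in kron(A, [[1, 1], [1, 1]])]
assert slp(slp(Ap, X), B) == C
```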

4. The Solution of the Matrix Equation

Now we study the solvability of the matrix second semi-tensor product equation

$$A\circ_l X\circ_l B=C, \qquad (2)$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are known and $X\in\mathbb{R}^{p\times q}$ is an unknown matrix.

4.1. The Case m=h

Lemma 4.1. [6] If $m=h$ and the matrix second semi-tensor product Equation (2) has a solution of dimension $p\times q$, then $\frac{k}{l}$ must be a positive integer, and

$$p=\frac{n}{\alpha},\quad q=\frac{rk}{l\alpha},$$

where $\alpha$ is any common divisor of $n$ and $\frac{rk}{l}$.

Theorem 4.1. When $m=h$, the matrix second semi-tensor product Equation (2) is equivalent to

$$\frac{l}{\alpha k}\left(\left(B^T\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\otimes I_m\right)\left(I_q\otimes A'\right)\mathrm{vec}(X)=\mathrm{vec}(C), \qquad (3)$$

where

$$A'=[\mathrm{vec}(\ddot A_1)\ \mathrm{vec}(\ddot A_2)\ \cdots\ \mathrm{vec}(\ddot A_p)]=\begin{bmatrix}W_1 & W_2 & \cdots & W_p\\ W_1 & W_2 & \cdots & W_p\\ \vdots & \vdots & & \vdots\\ W_1 & W_2 & \cdots & W_p\end{bmatrix}.$$

Here, letting $w_{ij}$ be the sum of all the elements of row i of $\dot A_j$ and $W_j=[w_{1j}\ w_{2j}\ \cdots\ w_{mj}]^T$,

$$\ddot A_j=\dot A_j 1_{\alpha\times\alpha}=\begin{bmatrix}w_{1j} & w_{1j} & \cdots & w_{1j}\\ w_{2j} & w_{2j} & \cdots & w_{2j}\\ \vdots & \vdots & & \vdots\\ w_{mj} & w_{mj} & \cdots & w_{mj}\end{bmatrix},\quad \dot A_j\in\mathbb{R}^{m\times\alpha},\ j=1,2,\dots,p,$$

where $\dot A_1,\dots,\dot A_p$ are the p equal column blocks of A.

Proof. From Lemma 4.1 we know that $p=\frac{n}{\alpha}$ and $q=\frac{rk}{l\alpha}$, where $\alpha$ is a common divisor of $n$ and $\frac{rk}{l}$. From the definition of the second semi-tensor product we have

$$A\circ_l X\circ_l B=\frac{1}{\alpha}[\dot A_1\ \dot A_2\ \cdots\ \dot A_p]\begin{bmatrix}x_{11}1_{\alpha\times\alpha} & \cdots & x_{1q}1_{\alpha\times\alpha}\\ \vdots & & \vdots\\ x_{p1}1_{\alpha\times\alpha} & \cdots & x_{pq}1_{\alpha\times\alpha}\end{bmatrix}\circ_l B=\frac{1}{\alpha}A\left(X\otimes 1_{\alpha\times\alpha}\right)\circ_l B=\frac{l}{\alpha k}A\left(X\otimes 1_{\alpha\times\alpha}\right)\left(B\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right), \qquad (4)$$

where $\dot A_i\in\mathbb{R}^{m\times\alpha}$, $i=1,2,\dots,p$, are the p equal column blocks of the matrix A. Take $D=A\left(X\otimes 1_{\alpha\times\alpha}\right)$, write $X_1,X_2,\dots,X_q$ for the columns of X, and let $\dot D_i\in\mathbb{R}^{m\times\alpha}$, $i=1,2,\dots,q$, be the q equal column blocks of D. Since the i-th column block of D is $\sum_{j=1}^{p}x_{ji}\ddot A_j$, applying the operator $\mathrm{vec}(\cdot)$ gives

$$[\mathrm{vec}(\ddot A_1)\ \mathrm{vec}(\ddot A_2)\ \cdots\ \mathrm{vec}(\ddot A_p)][X_1\ X_2\ \cdots\ X_q]=[\mathrm{vec}(\dot D_1)\ \mathrm{vec}(\dot D_2)\ \cdots\ \mathrm{vec}(\dot D_q)]\in\mathbb{R}^{m\alpha\times q},$$

that is,

$$(I_q\otimes A')\mathrm{vec}(X)=\mathrm{vec}(D).$$

Therefore, applying the operator $\mathrm{vec}(\cdot)$ to the last expression of Equation (4) and using Lemma 2.1, we get

$$\frac{l}{\alpha k}\mathrm{vec}\left(A\left(X\otimes 1_{\alpha\times\alpha}\right)\left(B\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\right)=\frac{l}{\alpha k}\left(\left(B^T\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\otimes I_m\right)\mathrm{vec}(D)=\frac{l}{\alpha k}\left(\left(B^T\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\otimes I_m\right)\left(I_q\otimes A'\right)\mathrm{vec}(X)=\mathrm{vec}(C).$$

4.2. The General Case

Lemma 4.2. [6] If $m\neq h$ and the matrix second semi-tensor product Equation (2) has a solution of dimension $p\times q$, then $\frac{h}{m}$ and $\frac{k}{l}$ must be positive integers satisfying

$$\gcd\left(\frac{h}{m\beta},\frac{\alpha}{\beta}\right)=1,\quad \gcd\left(\frac{h}{m},\frac{k}{l}\right)=1,\quad \beta\mid r,\quad p=\frac{nh}{m\alpha},\quad q=\frac{rk}{l\alpha},$$

where $\alpha$ is any common divisor of $\frac{nh}{m}$ and $\frac{rk}{l}$, and $\beta=\gcd\left(\frac{h}{m},\alpha\right)$.

Similarly, if the compatibility condition is satisfied, let $A'=\frac{m}{h}\left(A\otimes 1_{\frac{h}{m}\times\frac{h}{m}}\right)$; then the matrix second semi-tensor product Equation (2) takes the form of the case $m=h$.

Theorem 4.2. If the matrix second semi-tensor product Equation (2) has a solution when $m\neq h$, then the matrix $C=(c_{ij})$, $i=1,2,\dots,m$; $j=1,2,\dots,k$, must have the following form:

$$C=\begin{bmatrix}\mathrm{Block}_{11}(C) & \cdots & \mathrm{Block}_{1l}(C)\\ \vdots & & \vdots\\ \mathrm{Block}_{m1}(C) & \cdots & \mathrm{Block}_{ml}(C)\end{bmatrix},$$

where

$$\mathrm{Block}_{ij}(C)=\begin{bmatrix}c_{ij} & \cdots & c_{ij}\\ \vdots & & \vdots\\ c_{ij} & \cdots & c_{ij}\end{bmatrix}\in\mathbb{R}^{\frac{h}{m}\times\frac{k}{l}},$$

with $c_{ij}=c_{(i-1)\frac{h}{m}+1,(j-1)\frac{k}{l}+1}$, $i=1,2,\dots,m$; $j=1,2,\dots,l$.

Proof. By a proof similar to that of Theorem 3.4 we obtain

$$\mathrm{Row}_i(A)\circ_l X\circ_l B=\left([w_1\ w_2\ \cdots\ w_q]\otimes 1_{\frac{h}{m}\times\frac{nh}{mp}}\right)\circ_l B=\left([w_1\ w_2\ \cdots\ w_q]\otimes 1_{\frac{h}{m}\times\frac{nh}{mp}}\right)\left(B\otimes J_{\frac{k}{l}}\right)=[\mathrm{Block}_{i1}(C)\ \mathrm{Block}_{i2}(C)\ \cdots\ \mathrm{Block}_{il}(C)],$$

where

$$\mathrm{Block}_{ij}(C)=\begin{bmatrix}c_{ij} & \cdots & c_{ij}\\ \vdots & & \vdots\\ c_{ij} & \cdots & c_{ij}\end{bmatrix}\in\mathbb{R}^{\frac{h}{m}\times\frac{k}{l}},$$

and, for $j=1,2,\dots,q$, the scalar $w_j$ is built from row i of A and column j of X exactly as the scalar w in the proof of Theorem 3.4, with each $x_t$ replaced by $x_{t,j}$:

$$w_j=\frac{m^2p}{nh^2}\Big[x_{1,j}\Big(\tfrac{h}{m}a_{i,1}+\cdots+\tfrac{h}{m}a_{i,l_1}+l_2a_{i,l_1+1}\Big)+\cdots+x_{p,j}\Big(l_2a_{i,n-l_1}+\tfrac{h}{m}a_{i,n-l_1+1}+\cdots+\tfrac{h}{m}a_{i,n}\Big)\Big].$$

The theorem is proved.

Now we give the concrete steps for solving the matrix second semi-tensor product Equation (2):

Step 1. Check whether the matrix second semi-tensor product Equation (2) satisfies the compatibility condition, that is,

$$\gcd\left(\frac{h}{m\beta},\frac{\alpha}{\beta}\right)=1,\quad \gcd\left(\frac{h}{m},\frac{k}{l}\right)=1,\quad \beta\mid r,\quad p=\frac{nh}{m\alpha},\quad q=\frac{rk}{l\alpha},$$

and whether C satisfies

$$C=\begin{bmatrix}\mathrm{Block}_{11}(C) & \cdots & \mathrm{Block}_{1l}(C)\\ \vdots & & \vdots\\ \mathrm{Block}_{m1}(C) & \cdots & \mathrm{Block}_{ml}(C)\end{bmatrix},$$

where

$$\mathrm{Block}_{ij}(C)=\begin{bmatrix}c_{ij} & \cdots & c_{ij}\\ \vdots & & \vdots\\ c_{ij} & \cdots & c_{ij}\end{bmatrix}\in\mathbb{R}^{\frac{h}{m}\times\frac{k}{l}},$$

with $c_{ij}=c_{(i-1)\frac{h}{m}+1,(j-1)\frac{k}{l}+1}$, $i=1,2,\dots,m$; $j=1,2,\dots,l$.

Step 2. Find all admissible dimensions $p\times q$ that satisfy Lemma 4.2.

Step 3. Let $A'=\frac{m}{h}\left(A\otimes 1_{\frac{h}{m}\times\frac{h}{m}}\right)$.

Step 4. Form $B'=\frac{l}{\alpha k}\left(\left(B^T\otimes 1_{\frac{k}{l}\times\frac{k}{l}}\right)\otimes I_m\right)\left(I_q\otimes A'\right)$ and set $\dot X=\mathrm{vec}(X)$, $\dot C=\mathrm{vec}(C)$, so that Equation (2) becomes the linear system $B'\dot X=\dot C$.

Step 5. Solve this linear system in the $pq$ unknowns to obtain X.

Since the solution obtained by this method is determined and rather restricted, one can try to find an approximate solution of the equation when these conditions are not met.

Example 4.1. Consider the matrix second semi-tensor product equation $A\circ_l X\circ_l B=C$, where A, B and C are as follows. (For convenience, write $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$, $X\in\mathbb{R}^{p\times q}$.)

(1)

$$A=\begin{bmatrix}4&0&0&4\\ 0&4&4&0\\ 0&0&0&4\end{bmatrix},\quad B=\begin{bmatrix}1&3&1&0\\ 2&1&1&0\\ 1&1&1&1\\ 1&0&0&1\end{bmatrix},\quad C=\begin{bmatrix}10&10&10&10&6&6&4&4\\ 10&10&10&10&6&6&4&4\\ 5&5&5&5&3&3&2&2\end{bmatrix}.$$

Here $m=h$, $\frac{k}{l}=2$, $n=4$, $\frac{rk}{l}=8$, and the possible values of $\alpha$ are 1, 2 and 4, so the matrix equation may have solutions of sizes $1\times 2$, $2\times 4$ and $4\times 8$. It is easy to find the $1\times 2$ solution $X=[1\ \ 1]$. In addition, $\frac{1}{2}\left(X\otimes 1_{2\times 2}\right)$ and $\frac{1}{4}\left(X\otimes 1_{4\times 4}\right)$ are also solutions of the matrix equation.
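These solutions can be checked with the same kind of plain-Python sketch used above (the helper names `kron`, `matmul`, `J`, `slp` are ours, not from the paper):

```python
from fractions import Fraction
from math import lcm

def kron(A, B):
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(n) for s in range(q)]
            for i in range(m) for r in range(p)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def J(k):
    return [[Fraction(1, k)] * k for _ in range(k)]

def slp(A, B):
    """Left second semi-tensor product A o_l B (Definition 2.3)."""
    n, r = len(A[0]), len(B)
    t = lcm(n, r)
    return matmul(kron(A, J(t // n)), kron(B, J(t // r)))

A = [[4, 0, 0, 4], [0, 4, 4, 0], [0, 0, 0, 4]]
B = [[1, 3, 1, 0], [2, 1, 1, 0], [1, 1, 1, 1], [1, 0, 0, 1]]
C = [[10, 10, 10, 10, 6, 6, 4, 4],
     [10, 10, 10, 10, 6, 6, 4, 4],
     [5, 5, 5, 5, 3, 3, 2, 2]]

X1 = [[1, 1]]                          # the 1x2 solution
assert slp(slp(A, X1), B) == C

# the 2x4 solution (1/2)(X1 (x) 1_{2x2}) solves the same equation
X2 = [[Fraction(1, 2) * v for v in row]
      for row in kron(X1, [[1, 1], [1, 1]])]
assert slp(slp(A, X2), B) == C
```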

(2) Let

$$C=\begin{bmatrix}1&0&9&7&3&6&2&4\\ 9&3&1&0&1&8&5&4\\ 4&8&2&5&9&3&4&3\end{bmatrix},$$

with A and B as in (1). It is not difficult to verify that the matrices A, B and C are compatible, but the matrix second semi-tensor product Equation (2) has no solution. It can be seen that the condition of Lemma 4.1 is only necessary.

(3)

$$A=\begin{bmatrix}3&1&2&2&3&7\\ 9&7&4&3&6&8\\ 3&1&2&2&3&7\\ 9&7&4&3&6&8\\ 1&9&7&3&2&8\\ 7&7&6&8&9&9\end{bmatrix},\quad B=\begin{bmatrix}1&0&1&2\\ 2&1&1&2\\ 1&1&1&1\end{bmatrix},$$

$$C=\begin{bmatrix}66&66&66&30&30&30&48&48&48&84&84&84\\ 214&214&214&97&97&97&154&154&154&271&271&271\\ 66&66&66&30&30&30&48&48&48&84&84&84\\ 214&214&214&97&97&97&154&154&154&271&271&271\\ 179&179&179&81&81&81&128&128&128&226&226&226\\ 232&232&232&106&106&106&172&172&172&298&298&298\end{bmatrix}.$$

Through calculation, the allowable dimensions of the solution are $2\times 3$ and $6\times 9$, with

$$X_1=\begin{bmatrix}6&9&3\\ 3&0&3\end{bmatrix},\quad X_2=\frac{1}{3}\left(X_1\otimes 1_{3\times 3}\right).$$

(4)

$$A=\begin{bmatrix}4&4\\ 8&4\end{bmatrix},\quad B=\begin{bmatrix}1&0&1&2\\ 2&1&1&2\\ 1&1&1&1\\ 2&3&2&1\end{bmatrix},$$

$$C=\begin{bmatrix}42&42&42&34&34&34&34&34&34&42&42&42\\ 42&42&42&34&34&34&34&34&34&42&42&42\\ 42&42&42&34&34&34&34&34&34&42&42&42\\ 42&42&42&34&34&34&34&34&34&42&42&42\\ 66&66&66&53&53&53&53&53&53&66&66&66\\ 66&66&66&53&53&53&53&53&53&66&66&66\\ 66&66&66&53&53&53&53&53&53&66&66&66\\ 66&66&66&53&53&53&53&53&53&66&66&66\end{bmatrix}.$$

Through calculation, the allowable dimensions of the solution are $2\times 3$, $4\times 6$ and $8\times 12$, with

$$X_1=\begin{bmatrix}3&6&3\\ 3&3&3\end{bmatrix},\quad X_2=\frac{1}{2}\left(X_1\otimes 1_{2\times 2}\right),\quad X_3=\frac{1}{4}\left(X_1\otimes 1_{4\times 4}\right).$$

(5)

$$A=\begin{bmatrix}4&0\\ 8&0\end{bmatrix},\quad B=\begin{bmatrix}1&0&1&2\\ 2&1&1&2\\ 1&1&1&1\\ 2&3&2&1\end{bmatrix},$$

$$C=\begin{bmatrix}6&6&6&5&5&5&5&5&5&6&6&6\\ 6&6&6&5&5&5&5&5&5&6&6&6\\ 6&6&6&5&5&5&5&5&5&6&6&6\\ 6&6&6&5&5&5&5&5&5&6&6&6\\ 12&12&12&10&10&10&10&10&10&12&12&12\\ 12&12&12&10&10&10&10&10&10&12&12&12\\ 12&12&12&10&10&10&10&10&10&12&12&12\\ 12&12&12&10&10&10&10&10&10&12&12&12\end{bmatrix}.$$

Through calculation, the allowable dimensions of the solution are $2\times 3$, $4\times 6$ and $8\times 12$, with

$$X_1=\begin{bmatrix}3&6&3\\ k_1&k_2&k_3\end{bmatrix},\quad k_1,k_2,k_3\in\mathbb{R},\quad X_2=\frac{1}{2}\left(X_1\otimes 1_{2\times 2}\right),\quad X_3=\frac{1}{4}\left(X_1\otimes 1_{4\times 4}\right).$$

5. Conclusions and Suggestions

In this paper, we have investigated the solution of the matrix second semi-tensor product equation $A\circ_l X\circ_l B=C$. The second semi-tensor product of matrices is a new matrix multiplication constructed by Professor Cheng in 2019. Firstly, the solvability of the matrix-vector second semi-tensor product equation has been considered, and the compatibility conditions, the necessary and sufficient conditions, and concrete solution methods have been given. Then, the solvability of the matrix second semi-tensor product equation has been studied. For each part, several examples have been given to illustrate the validity of the results.

We expect the conclusions in this article to be useful and believe that they have broader application prospects in control systems, parameter identification, dynamic science, biology, dynamic analysis, nonlinear programming, and other fields.

Acknowledgements

Sincere thanks to the members of JAMP for their professional work, and special thanks to the managing editor, Nancy, for her consistently high-quality support.

Funding

This research work was supported by the undergraduate education reform project of Shandong Normal University (Grant No. 2021BJ054) and the Excellent Youth Innovation Team Plan of Shandong Higher Education Institutions (Grant No. 2023KJ197).

Conflicts of Interest

The author declares no conflicts of interest in this paper.

References

[1] Cheng, D. (2019) From Dimension-Free Matrix Theory to Cross-Dimensional Dynamic Systems. Elsevier.
[2] Cheng, D. and Liu, Z. (2019) A New Semi-Tensor Product of Matrices. Control Theory and Technology, 17, 4-12.
https://doi.org/10.1007/s11768-019-8161-2
[3] Cheng, D. (2019) On Equivalence of Matrices. Asian Journal of Mathematics, 23, 257-348.
https://doi.org/10.4310/ajm.2019.v23.n2.a3
[4] Mitra, S.K. (1973) Common Solutions to a Pair of Linear Matrix Equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$. Mathematical Proceedings of the Cambridge Philosophical Society, 74, 213-216.
https://doi.org/10.1017/s030500410004799x
[5] Peng, Y., Hu, X. and Zhang, L. (2005) An Iteration Method for the Symmetric Solutions and the Optimal Approximation Solution of the Matrix Equation $AXB=C$. Applied Mathematics and Computation, 160, 763-777.
https://doi.org/10.1016/j.amc.2003.11.030
[6] Ji, Z., Li, J., Zhou, X., et al. (2019) On Solutions of the Matrix Equation $AXB=C$ under Semi-Tensor Product. Linear and Multilinear Algebra, 69, 1935-1963.
https://doi.org/10.1080/03081087.2019.1650881
[7] Liu, G., Zhang, Y., Yao, Y. and Huang, J. (2023) Cyclic Solution and Optimal Approximation of the Quaternion Stein Equation. Journal of Applied Mathematics and Physics, 11, 3735-3746.
https://doi.org/10.4236/jamp.2023.1111235
[8] Chen, L. (2019) Networked Evolutionary Model of Snow-Drift Game Based on Semi-Tensor Product. Journal of Applied Mathematics and Physics, 7, 726-737.
https://doi.org/10.4236/jamp.2019.73050
[9] Wang, J. (2021) Application of Matrix Semi-Tensor Product to Algebraic Structures. Ph.D. Thesis, Shandong University.
https://doi.org/10.27272/d.cnki.gshdu.2021.000553
[10] Chen, X.Y., Cheng, Z.H., et al. (2022) From Linear Algebra to Tensor Computation.
https://xinychen.github.io/books/tensor_book.pdf

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.