1. Introduction
Throughout this paper, $\det(A)$ and $A^T$ denote the determinant and the transpose of the matrix $A$, respectively. Also, $\lfloor x\rfloor$ denotes the greatest integer less than or equal to $x$. Centrosymmetric matrices have practical applications in numerical analysis, information theory, statistics, physics, harmonic differential quadrature, differential equations, engineering, sinc methods, magic squares, linear system theory and pattern recognition. The interested reader may refer to [1-12].
Solving and analyzing linear systems of equations is a fundamental problem in science and engineering applications. The cost of solving a general linear system of $n$ equations in $n$ unknowns by the Gauss or Gauss-Jordan elimination algorithms is $O(n^3)$. The motivation of the current paper is to develop efficient algorithms for solving any centrosymmetric linear system having $n$ equations and $n$ unknowns, provided that the coefficient matrix of the system is nonsingular. The cost of each algorithm depends on the solvers of two associated linear systems having smaller sizes than $n$. More precisely, if $n = 2m$, then each of the two associated linear systems consists of $m$ equations. If $n = 2m + 1$, then one system has $m$ equations and the other has $m + 1$ equations. Consequently, if the two associated linear systems have special structures, then the cost of the centrosymmetric algorithm can be reduced considerably, in particular for large values of $n$.
The paper is organized as follows. In Section 2, some properties of the exchange and the rotate matrices are presented. Formulae for centrosymmetric tridiagonal determinants are obtained in Section 3. In Section 4, two computational algorithms for solving centrosymmetric linear systems are given. A MAPLE procedure is given in Section 5. Some illustrative examples are presented in Section 6.
Definition 1.1. The $n \times n$ matrix with 1's on the antidiagonal (the diagonal running from the northeast corner to the southwest corner) and 0's elsewhere,
$$J_n = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & \cdots & 1 & 0\\ \vdots & \vdots & & \vdots & \vdots\\ 0 & 1 & \cdots & 0 & 0\\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}, \qquad (1)$$
is called the exchange matrix of order $n$. The subscript on $J_n$ is omitted whenever the size is obvious from the context.
Definition 1.2. Let $A = [a_{ij}]$ be an $n \times n$ matrix. The rotate of $A$, denoted $A^R$, is defined by
$$A^R = JAJ, \qquad \text{i.e.,} \qquad \left(A^R\right)_{ij} = a_{n+1-i,\,n+1-j}, \quad 1 \le i, j \le n. \qquad (2)$$
Definition 1.3 [9]. Let $A$ be an $n \times n$ matrix. Then
• $A$ is said to be centrosymmetric if $A^R = A$, i.e., $JAJ = A$;
• $A$ is said to be persymmetric if $JAJ = A^T$;
• $A$ is said to be centrogonal if $A^R = A^{-1}$, provided $A^{-1}$ exists;
• $A$ is said to be skew-centrosymmetric if $A^R = -A$;
• $A$ is said to be bisymmetric if $JAJ = A^T = A$.
Note that a bisymmetric matrix is both symmetric and centrosymmetric. It is also both symmetric and persymmetric.
2. Some Properties of The Exchange and The Rotate Matrices
Let us begin this section by giving some helpful results concerning the exchange and rotate matrices. For more details see [1,3,8,9,12-21].
The exchange matrix $J_n$ enjoys the following properties:
• $J_n^T = J_n$ and $J_n^2 = I_n$, where $I_n$ is the identity matrix of order $n$; consequently $J_n^{-1} = J_n$.
• $\det(J_n) = (-1)^{\lfloor n/2 \rfloor}$.
• The matrix product $JA$ is a version of the matrix $A$ that has been reflected in a line at 0 degrees to the horizontal measured counter-clockwise; that is, the rows of $A$ appear in reverse order.
• The matrix product $AJ$ is a version of the matrix $A$ that has been reflected in a line at 90 degrees to the horizontal measured counter-clockwise; that is, the columns of $A$ appear in reverse order.
• The matrix product $JAJ$ is a version of the matrix $A$ that has been rotated counter-clockwise (or, equivalently, clockwise) by 180 degrees.
Note that the exchange matrix $J$ is equal to the identity matrix $I$ but with the columns in reverse order. More precisely, $(J)_{ij} = \delta_{i,\,n+1-j}$, where $\delta_{ij}$ is the Kronecker symbol, which is equal to 1 if $i = j$ and zero if $i \ne j$.
It is also worth mentioning that the matrix $J$ is sometimes called the counter-identity matrix, the contra-identity matrix, the per-identity matrix, the reflection matrix or the reversal matrix.
Exchange matrices are simple to construct in software platforms. For example, to construct $J_5$ in MAPLE, a single line of code can be used as follows:
n := 5: J := array(1..n,1..n,sparse): for i to n do J[i, n + 1 - i] := 1 od: Jn := op(J);
or n := 5: J := matrix(n, n, 0): for i to n do J[i, n + 1 - i] := 1 od: Jn := op(J);
The rotate $A^R = JAJ$ of an $n \times n$ matrix $A$ satisfies:
• $\left(A^R\right)^R = A$.
• $(A + B)^R = A^R + B^R$ and $(\lambda A)^R = \lambda A^R$ for any $n \times n$ matrix $B$ and scalar $\lambda$.
• $\left(A^{-1}\right)^R = \left(A^R\right)^{-1}$, provided $A^{-1}$ exists.
• $(AB)^R = A^R B^R$.
• $\left(A^T\right)^R = \left(A^R\right)^T$.
• $\det\!\left(A^R\right) = \det(A)$.
• If the $(i, j)$ entry of $A$ is $a_{ij}$, then the $(i, j)$ entry of $A^R$ is $a_{n+1-i,\,n+1-j}$. In particular, a centrosymmetric matrix, for which $A^R = A$, is the same when read backwards as when read forwards.
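As a quick illustration, the following MAPLE sketch (the 3 × 3 matrix is chosen here purely for illustration) forms the rotate $A^R = JAJ$ and tests a matrix for centrosymmetry:
with(linalg):
n := 3:
J := matrix(n, n, (i, j) -> `if`(j = n + 1 - i, 1, 0)):    # the exchange matrix J_n
A := matrix(n, n, [1, 2, 3, 4, 5, 4, 3, 2, 1]):            # a sample matrix
AR := evalm(J &* A &* J):                                  # the rotate A^R = J A J
equal(AR, A);                                              # returns true here, so A is centrosymmetric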
3. Centrosymmetric Determinants
Centrosymmetric determinants take the form:
$$D_n = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}, \qquad (3)$$
in which
$$a_{ij} = a_{n+1-i,\,n+1-j}$$
for each $1 \le i, j \le n$.
In particular, centrosymmetric tridiagonal determinants are of special importance. For convenience of the reader, we present some definitions, notations and properties associated with tridiagonal matrices.
A tridiagonal matrix takes the form:
$$T = \begin{pmatrix} d_1 & a_1 & & & \\ b_1 & d_2 & a_2 & & \\ & b_2 & \ddots & \ddots & \\ & & \ddots & \ddots & a_{n-1}\\ & & & b_{n-1} & d_n \end{pmatrix}. \qquad (4)$$
These types of matrices frequently appear in many areas of science and engineering, for example in parallel computing, telecommunication system analysis and the solution of differential equations using finite differences; see [22,23]. A general tridiagonal matrix of the form (4) can be stored in $3n - 2$ memory locations, rather than the $n^2$ memory locations required for a full matrix, by using three vectors $d$, $a$ and $b$ with $d = (d_1, \ldots, d_n)$, $a = (a_1, \ldots, a_{n-1})$ and $b = (b_1, \ldots, b_{n-1})$. This is always a good habit in computation in order to save memory space. To study tridiagonal matrices it is very convenient to introduce a vector
$$c = (c_1, c_2, \ldots, c_n) \qquad (5)$$
in the following way [24]:
$$c_i = \begin{cases} d_1, & i = 1,\\[4pt] d_i - \dfrac{a_{i-1} b_{i-1}}{c_{i-1}}, & i = 2, 3, \ldots, n. \end{cases} \qquad (6)$$
It is helpful to restate some important results concerning tridiagonal matrices of the form (4). For more details the interested reader may refer to [24-28].
Let $i$ and $j$ be positive integers such that $1 \le i \le j$ and $j \le n$.
Define the submatrix of $T$, denoted $T(i, j)$, of order $(j - i + 1)$ by
$$T(i, j) = \begin{pmatrix} d_i & a_i & & \\ b_i & d_{i+1} & \ddots & \\ & \ddots & \ddots & a_{j-1}\\ & & b_{j-1} & d_j \end{pmatrix}. \qquad (7)$$
In particular, let $T_k = T(1, k)$ and let $D_k = \det(T_k)$ denote the $k$th leading principal minor of $T$.
Theorem 3.1 [24]. Consider the sequence of leading principal minors
$$D_0, D_1, D_2, \ldots, D_n. \qquad (8)$$
Then the determinants in (8) satisfy a three-term recurrence
$$D_k = d_k D_{k-1} - a_{k-1} b_{k-1} D_{k-2}, \qquad k = 2, 3, \ldots, n, \qquad (9)$$
where the initial values for the recurrence are $D_0 = 1$ and $D_1 = d_1$.
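As a small illustration of (9) (a MAPLE sketch with numeric data chosen only for illustration), the recurrence generates all the leading principal minors in $O(n)$ operations:
d := [2, 3, 3, 2]: a := [1, 1, 1]: b := [1, 1, 1]:           # an illustrative tridiagonal matrix of order 4
n := nops(d):
Dmin := array(0..n): Dmin[0] := 1: Dmin[1] := d[1]:
for i from 2 to n do
  Dmin[i] := d[i]*Dmin[i-1] - a[i-1]*b[i-1]*Dmin[i-2];       # the three-term recurrence (9)
od:
seq(Dmin[j], j = 0..n);                                      # 1, 2, 5, 13, 21; Dmin[n] equals det(T)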
Lemma 3.2 [29]. If the $LU$ factorization of the matrix $T$ in (4) is possible, then we have
$$\det(T) = \prod_{i=1}^{n} c_i, \qquad (10)$$
where the $c_i$ are given by (6). Meanwhile, the Doolittle factorization [30] of $T$ is given by:
$$T = \begin{pmatrix} 1 & & & \\ \dfrac{b_1}{c_1} & 1 & & \\ & \ddots & \ddots & \\ & & \dfrac{b_{n-1}}{c_{n-1}} & 1 \end{pmatrix} \begin{pmatrix} c_1 & a_1 & & \\ & c_2 & \ddots & \\ & & \ddots & a_{n-1}\\ & & & c_n \end{pmatrix}. \qquad (11)$$
Lemma 3.3 [26]. If $D_k \ne 0$ for $k = 1, 2, \ldots, n - 1$, then
$$c_k = \frac{D_k}{D_{k-1}}, \qquad k = 1, 2, \ldots, n. \qquad (12)$$
Lemma 3.4 [26]. If $D_k = 0$ for some $k$, $1 \le k \le n - 1$, and either $a_k = 0$ or $b_k = 0$, then $T$ is a singular matrix.
Lemma 3.5 [24]. If $c_k \ne 0$ for each $k = 1, 2, \ldots, n$, then the three-term recurrence (9) reduces to the two-term recurrence
$$D_k = c_k D_{k-1}, \qquad k = 1, 2, \ldots, n, \qquad D_0 = 1. \qquad (13)$$
Algorithm 3.1 (DETGTRI [31]).
The determinant of the matrix $T$ in (4) can be computed using the following symbolic algorithm.
INPUT: The order $n$ of the matrix and the components $d_i$, $a_i$ and $b_i$.
OUTPUT: The determinant of the matrix $T$ in (4).
Step 1: Use (6) to compute the simplest forms of the components $c_i$ of the vector $c$.
If $c_i = 0$ for any $i$, set $c_i = x$ ($x$ is just a symbolic name) and continue to compute the remaining components in terms of $x$ by using (6).
Step 2: The simplest rational form of the product $c_1 c_2 \cdots c_n$
(this product is a polynomial in $x$) evaluated at $x = 0$ is equal to the determinant of the matrix $T$ in (4), i.e., $\det(T) = \left(c_1 c_2 \cdots c_n\right)\big|_{x=0}$.
The cost of the DETGTRI algorithm is $O(n)$. The algorithm is easy to implement in all Computer Algebra Systems (CAS) such as MACSYMA, MATHEMATICA and MAPLE.
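A compact MAPLE realization of the DETGTRI algorithm may be sketched as follows (the procedure name and the storage of the matrix as the three lists d, a, b are our own conventions for this sketch; the local symbol x plays the role of the symbolic name used in Step 1):
DETGTRI := proc(d::list, a::list, b::list)
  local n, i, j, c, x, prd;
  n := nops(d);
  c := array(1..n);
  c[1] := d[1];
  if c[1] = 0 then c[1] := x fi;                     # Step 1: a zero component is replaced by the symbol x
  for i from 2 to n do
    c[i] := normal(d[i] - a[i-1]*b[i-1]/c[i-1]);     # Equation (6), kept in simplest rational form
    if c[i] = 0 then c[i] := x fi;
  od;
  prd := normal(mul(c[j], j = 1..n));                # Step 2: simplest rational form of c[1]*c[2]*...*c[n]
  return eval(prd, x = 0);                           # evaluate at x = 0 to obtain det(T)
end proc:
DETGTRI([2, 3, 3, 2], [1, 1, 1], [1, 1, 1]);         # returns 21 for the illustrative matrix used above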
Lemma 3.6. Consider the tridiagonal matrix $T_m$ of order $m$ given by:
$$T_m = \begin{pmatrix} d_1 & a_1 & & \\ b_1 & d_2 & \ddots & \\ & \ddots & \ddots & a_{m-1}\\ & & b_{m-1} & d_m \end{pmatrix}. \qquad (14)$$
Let $P$ and $Q$ be matrices defined respectively as follows:
$$P = \begin{pmatrix} d_1 & a_1 & & \\ b_1 & d_2 & \ddots & \\ & \ddots & \ddots & a_{m-1}\\ & & b_{m-1} & d_m + k \end{pmatrix}, \qquad Q = \begin{pmatrix} d_1 & a_1 & & \\ b_1 & d_2 & \ddots & \\ & \ddots & \ddots & a_{m-1}\\ & & b_{m-1} & d_m - k \end{pmatrix}, \qquad (15)$$
where $k$ is a scalar quantity. Then, by applying the DETGTRI algorithm, we see that:
$$\det(P) = D_m + k D_{m-1}, \qquad (16)$$
$$\det(Q) = D_m - k D_{m-1}, \qquad (17)$$
having used (12) and (13).
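The identities (16) and (17) are easy to confirm; a quick symbolic MAPLE check for the case m = 3 (with generic entries, written only as a sanity check of (16)) reads:
with(linalg):
Tm := matrix(3, 3, [d1, a1, 0, b1, d2, a2, 0, b2, d3]):          # T_m for m = 3, with symbolic entries
P3 := matrix(3, 3, [d1, a1, 0, b1, d2, a2, 0, b2, d3 + k]):      # last diagonal entry perturbed by +k
Dm := det(Tm): Dm1 := det(submatrix(Tm, 1..2, 1..2)):            # D_m and D_{m-1}
expand(det(P3) - (Dm + k*Dm1));                                  # returns 0, confirming (16); (17) is analogous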
Let $R$ be an $n \times n$ centrosymmetric matrix. The three following facts are useful when we deal with such matrices:
Fact (1): If $R = [r_{ij}]$, then $r_{ij} = r_{n+1-i,\,n+1-j}$ for all $1 \le i, j \le n$.
In other words, the centrosymmetric matrix is the same when read backwards as when read forwards.
Fact (2): If $n = 2m$ is a positive even number, then the matrix
$$K = \frac{1}{\sqrt{2}} \begin{pmatrix} I_m & I_m\\ J_m & -J_m \end{pmatrix}$$
is orthogonal.
Fact (3): If $n = 2m + 1$ is a positive odd number, then the matrix
$$K = \frac{1}{\sqrt{2}} \begin{pmatrix} I_m & 0 & I_m\\ 0 & \sqrt{2} & 0\\ J_m & 0 & -J_m \end{pmatrix}$$
is orthogonal.
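For even order $n = 2m$, a routine block computation with the matrix $K$ of Fact (2) (recorded here because it underlies the results below) gives
$$K^T K = I_n \qquad\text{and}\qquad K^T R K = \begin{pmatrix} A + JB & 0\\ 0 & A - JB \end{pmatrix},$$
where $A$ and $B$ denote the leading and the lower-left $m \times m$ blocks of $R$, in the notation of Theorem 3.7 below. This block-diagonalization is what lies behind Theorem 3.7 and the algorithms of Section 4.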
Armed with the above facts, we may formulate the following result whose proof will be omitted.
Theorem 3.7. Let $R = [r_{ij}]$ be an $n \times n$ centrosymmetric matrix of even order, say $n = 2m$. Then $R$ can be written in the form:
$$R = \begin{pmatrix} A & JBJ\\ B & JAJ \end{pmatrix}, \qquad (18)$$
where $A = [r_{ij}]_{1 \le i, j \le m}$ and $B = [r_{ij}]_{m+1 \le i \le n,\; 1 \le j \le m}$ are the leading and the lower-left $m \times m$ blocks of $R$, respectively.
The determinant of the matrix $R$ in (18) is given by:
$$\det(R) = \det(A + JB)\,\det(A - JB). \qquad (19)$$
If $n$ is odd, i.e., $n = 2m + 1$, then we have:
$$R = \begin{pmatrix} A & y & JBJ\\ w & \alpha & wJ\\ B & Jy & JAJ \end{pmatrix}, \qquad (20)$$
where $A = [r_{ij}]_{1 \le i, j \le m}$, $B = [r_{ij}]_{m+2 \le i \le n,\; 1 \le j \le m}$, $y = (r_{1,m+1}, \ldots, r_{m,m+1})^T$, $w = (r_{m+1,1}, \ldots, r_{m+1,m})$ and $\alpha = r_{m+1,m+1}$.
The determinant of the matrix $R$ in (20) is given by
$$\det(R) = \det\!\begin{pmatrix} A + JB & 2y\\ w & \alpha \end{pmatrix}\,\det(A - JB). \qquad (21)$$
Concerning the inverse of centrosymmetric matrices the reader may refer to [8].
As an interesting special case of Theorem 3.7, we give the following result.
Corollary 3.8. In Theorem 3.7, if $R = T_n$ is a centrosymmetric tridiagonal matrix, then we have
$$\det(T_n) = \begin{cases} D_m^2 - k^2 D_{m-1}^2, & n = 2m,\\[4pt] D_m\left(d_{m+1} D_m - 2\,a_m b_m\, D_{m-1}\right), & n = 2m + 1, \end{cases}$$
where, for $n = 2m$, $k$ is the common value of the elements in positions $(m, m+1)$ and $(m+1, m)$ of the matrix $R = T_n$, and $D_{m-1}$, $D_m$ are the leading principal minors of its leading block.
Proof. To compute the centrosymmetric tridiagonal determinant of order $n$, two cases will be considered:
Case (1): $n = 2m$. In this case, the centrosymmetric tridiagonal matrix takes the form:
$$T_n = \begin{pmatrix} T_m & JBJ\\ B & JT_mJ \end{pmatrix}, \qquad (22)$$
where $T_m$ is the leading $m \times m$ tridiagonal block and the only nonzero entry of the lower-left block $B$ is its $(1, m)$ entry, namely the common value $k = r_{m+1,m} = r_{m,m+1}$.
Applying Theorem 3.7, we have $\det(T_n) = \det(P)\det(Q)$, where
$$P = T_m + JB \qquad\text{and}\qquad Q = T_m - JB$$
are exactly the matrices of Lemma 3.6, since $JB$ has $k$ in position $(m, m)$ as its only nonzero entry. By using Lemma 3.6, we obtain
$$\det(P) = D_m + kD_{m-1} \qquad\text{and}\qquad \det(Q) = D_m - kD_{m-1}.$$
Therefore we get
$$\det(T_n) = \left(D_m + kD_{m-1}\right)\left(D_m - kD_{m-1}\right) = D_m^2 - k^2 D_{m-1}^2. \qquad (23)$$
Case (2): $n = 2m + 1$. In this case, the centrosymmetric tridiagonal matrix takes the form:
$$T_n = \begin{pmatrix} T_m & y & JBJ\\ w & d_{m+1} & wJ\\ B & Jy & JT_mJ \end{pmatrix}, \qquad (24)$$
where now $B = 0$ (a tridiagonal matrix of odd order has no nonzero entries in this block), the only nonzero entry of the column $y$ is its last entry $a_m = r_{m,m+1}$, and the only nonzero entry of the row $w$ is its last entry $b_m = r_{m+1,m}$.
Applying Theorem 3.7, we obtain here $\det(T_n) = \det(P)\det(Q)$, where
$$P = \begin{pmatrix} T_m & 2y\\ w & d_{m+1} \end{pmatrix} \qquad\text{and}\qquad Q = T_m.$$
By using Lemma 3.6 with $k = 0$, we obtain $\det(Q) = D_m$.
By using the DETGTRI algorithm [31] together with (12) and (13), we have
$$\det(P) = d_{m+1} D_m - 2\,a_m b_m\, D_{m-1}.$$
Therefore
$$\det(T_n) = D_m\left(d_{m+1} D_m - 2\,a_m b_m\, D_{m-1}\right). \qquad (25)$$
From (23) and (25), we see that in order to compute the determinant of a centrosymmetric tridiagonal matrix of order $n$, all we need is to compute the two leading principal minors $D_{m-1}$ and $D_m$ and then use (23) if $n = 2m$ or (25) if $n = 2m + 1$.
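As a quick numerical illustration of (23) (the 4 × 4 matrix below is constructed for this purpose only), the following MAPLE check compares the two sides:
with(linalg):
T4 := matrix(4, 4, [2, 1, 0, 0, 1, 3, 1, 0, 0, 1, 3, 1, 0, 0, 1, 2]):   # centrosymmetric tridiagonal with m = 2, k = 1
Dm1 := 2: Dm := 2*3 - 1*1:                                              # D_1 = d_1 and D_2 = d_1*d_2 - a_1*b_1
det(T4), Dm^2 - 1^2*Dm1^2;                                              # both evaluate to 21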
4. Algorithms for Solving Centrosymmetric Linear Systems
Solving linear systems practically dominates scientific computing. In the present section, we focus on solving linear systems of centrosymmetric type. Two cases will be considered:
Case (i): $n = 2m$. For this case we are going to construct an algorithm for solving centrosymmetric linear systems of the form:
$$\sum_{j=1}^{n} r_{ij}\,x_j = f_i, \qquad i = 1, 2, \ldots, n, \qquad \text{with } r_{ij} = r_{n+1-i,\,n+1-j}. \qquad (26)$$
Block multiplication is particularly useful when there are patterns in the matrices to be multiplied. Therefore it is convenient to rewrite (26) in the partitioned form
$$\begin{pmatrix} A & JBJ\\ B & JAJ \end{pmatrix} \begin{pmatrix} u\\ v \end{pmatrix} = \begin{pmatrix} g\\ h \end{pmatrix}, \qquad (27)$$
where
$$A = [r_{ij}]_{1 \le i, j \le m}, \qquad B = [r_{ij}]_{m+1 \le i \le n,\; 1 \le j \le m},$$
and
$$u = (x_1, \ldots, x_m)^T, \quad v = (x_{m+1}, \ldots, x_n)^T, \quad g = (f_1, \ldots, f_m)^T, \quad h = (f_{m+1}, \ldots, f_n)^T.$$
The system in (26) can also be written in matrix form as follows:
$$R\,x = f, \qquad (28)$$
where $R = [r_{ij}]$ is the coefficient matrix of the system (26), $x = (x_1, x_2, \ldots, x_n)^T$ is the vector of unknowns and
$$f = (f_1, f_2, \ldots, f_n)^T$$
is the constant vector.
Algorithm 4.1. An algorithm for solving a centrosymmetric linear system of even order $n = 2m$.
To solve the linear system of the form (26), we may proceed as follows:
INPUT: The entries of the coefficient matrix $R$ and the constant vector $f$ in (28).
OUTPUT: The solution vector $x = (x_1, x_2, \ldots, x_n)^T$.
Step 1: Construct the $m \times m$ matrices $P$ and $Q$ and the $m$-vectors $f^{(1)}$ and $f^{(2)}$ as follows:
$$P = A + JB, \qquad Q = A - JB, \qquad f^{(1)} = g + Jh \qquad\text{and}\qquad f^{(2)} = g - Jh,$$
that is, $f^{(1)}_i = f_i + f_{n+1-i}$ and $f^{(2)}_i = f_i - f_{n+1-i}$ for $i = 1, 2, \ldots, m$.
Step 2: Compute $\det(P)$, $\det(Q)$ and $\det(R) = \det(P)\det(Q)$. If $\det(R) = 0$ then Exiterror(‘No solutions’) end if.
Step 3: Solve the two linear systems:
$$P\,y = f^{(1)} \qquad\text{and}\qquad Q\,z = f^{(2)}$$
for $y = (y_1, \ldots, y_m)^T$ and $z = (z_1, \ldots, z_m)^T$,
respectively.
Step 4: The solution vector $x = (x_1, x_2, \ldots, x_n)^T$ is given by
$$x_i = \begin{cases} \dfrac{1}{2}\left(y_i + z_i\right), & i = 1, 2, \ldots, m,\\[4pt] \dfrac{1}{2}\left(y_{n+1-i} - z_{n+1-i}\right), & i = m + 1, \ldots, n. \end{cases}$$
Algorithm 4.1 will be referred to as the CENTROSYMM-I algorithm.
Concerning the computational cost of the CENTROSYMM-I algorithm:
• The time complexity of Step 1 is $O(n^2)$
(only additions/subtractions; no multiplications/divisions are required).
• The time complexity of Step 3 depends on the solvers used for the two linear systems. For example, tridiagonal linear systems can be solved in linear time (see [25]).
• The time complexity of Step 4 is $O(n)$ ($n$ additions/subtractions and $n$ multiplications).
Step 3 is the step that leads to the reduction in time complexity, because instead of solving one linear system of $n$ equations, we end up with two linear systems of half the size of the original one. If the original system is solved with the Gaussian elimination (GE) method, then the time complexity is $O(n^3)$. But if the GE method is used to solve the two systems in the third step, then the cost of our algorithm is roughly one quarter of that,
which is a significant reduction. If a method more efficient than the GE method is used, then the time complexity of our algorithm will be reduced even further (see also [16]). A small worked instance of the CENTROSYMM-I algorithm follows.
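To fix ideas, here is a small worked instance of the CENTROSYMM-I algorithm (the data are constructed for illustration only and are not taken from Section 6). Take $n = 4$ ($m = 2$),
$$R = \begin{pmatrix} 5 & 2 & 1 & 1\\ 2 & 6 & 3 & 1\\ 1 & 3 & 6 & 2\\ 1 & 1 & 2 & 5 \end{pmatrix}, \qquad f = (9, 12, 12, 9)^T.$$
Then
$$A = \begin{pmatrix} 5 & 2\\ 2 & 6 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 3\\ 1 & 1 \end{pmatrix}, \quad P = A + JB = \begin{pmatrix} 6 & 3\\ 3 & 9 \end{pmatrix}, \quad Q = A - JB = \begin{pmatrix} 4 & 1\\ 1 & 3 \end{pmatrix},$$
and $f^{(1)} = (18, 24)^T$, $f^{(2)} = (0, 0)^T$. Solving $Py = f^{(1)}$ and $Qz = f^{(2)}$ gives $y = (2, 2)^T$ and $z = (0, 0)^T$, so Step 4 yields $x = (1, 1, 1, 1)^T$, which indeed satisfies $Rx = f$.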
Case (ii): $n = 2m + 1$. In this case the linear system to be considered has the form:
$$\sum_{j=1}^{n} r_{ij}\,x_j = f_i, \qquad i = 1, 2, \ldots, n, \qquad \text{with } r_{ij} = r_{n+1-i,\,n+1-j}, \qquad (29)$$
or equivalently,
$$\begin{pmatrix} A & y & JBJ\\ w & \alpha & wJ\\ B & Jy & JAJ \end{pmatrix} \begin{pmatrix} u\\ \mu\\ v \end{pmatrix} = \begin{pmatrix} g\\ f_{m+1}\\ h \end{pmatrix}, \qquad (30)$$
where
$$A = [r_{ij}]_{1 \le i, j \le m}, \quad B = [r_{ij}]_{m+2 \le i \le n,\; 1 \le j \le m}, \quad y = (r_{1,m+1}, \ldots, r_{m,m+1})^T, \quad w = (r_{m+1,1}, \ldots, r_{m+1,m}), \quad \alpha = r_{m+1,m+1},$$
and
$$u = (x_1, \ldots, x_m)^T, \quad \mu = x_{m+1}, \quad v = (x_{m+2}, \ldots, x_n)^T, \quad g = (f_1, \ldots, f_m)^T, \quad h = (f_{m+2}, \ldots, f_n)^T.$$
The linear system (30) can also be written as:
$$R\,x = f, \qquad (31)$$
where $R = [r_{ij}]$ is the coefficient matrix of the system (29), $x = (x_1, x_2, \ldots, x_n)^T$ is the vector of unknowns and
$$f = (f_1, f_2, \ldots, f_n)^T$$
is the constant vector.
Algorithm 4.2. An algorithm for solving a centrosymmetric linear system of odd order $n = 2m + 1$. To solve the linear system of the form (29), we may proceed as follows:
INPUT: The entries of the coefficient matrix $R$ and the constant vector $f$ in (31).
OUTPUT: The solution vector $x = (x_1, x_2, \ldots, x_n)^T$.
Step 1: Construct the matrices $P$ and $Q$ of orders $(m + 1)$ and $m$, respectively, and the vectors $f^{(1)}$ and $f^{(2)}$ of dimensions $(m + 1)$ and $m$, respectively, as follows:
$$P = \begin{pmatrix} A + JB & 2y\\ w & \alpha \end{pmatrix}, \qquad Q = A - JB,$$
and
$$f^{(1)} = \left(f_1 + f_n,\; f_2 + f_{n-1},\; \ldots,\; f_m + f_{m+2},\; f_{m+1}\right)^T, \qquad f^{(2)} = \left(f_1 - f_n,\; \ldots,\; f_m - f_{m+2}\right)^T.$$
Step 2: Compute $\det(P)$, $\det(Q)$ and $\det(R) = \det(P)\det(Q)$. If $\det(R) = 0$ then Exiterror(“No solutions”) end if.
Step 3: Solve the two linear systems $P\,Y = f^{(1)}$ and $Q\,Z = f^{(2)}$ for
$$Y = (Y_1, \ldots, Y_{m+1})^T \qquad\text{and}\qquad Z = (Z_1, \ldots, Z_m)^T,$$
respectively.
Step 4: The solution vector $x = (x_1, x_2, \ldots, x_n)^T$ is given by
$$x_i = \begin{cases} \dfrac{1}{2}\left(Y_i + Z_i\right), & i = 1, 2, \ldots, m,\\[2pt] Y_{m+1}, & i = m + 1,\\[2pt] \dfrac{1}{2}\left(Y_{n+1-i} - Z_{n+1-i}\right), & i = m + 2, \ldots, n. \end{cases}$$
Algorithm 4.2 will be referred to as the CENTROSYMM-II algorithm.
Concerning the computational cost of the CENTROSYMM-II algorithm:
• The time complexity of Step 1 is $O(n^2)$.
• The time complexity of Step 3 depends on the solvers used for the two linear systems.
• The time complexity of Step 4 is $O(n)$ (see also [16]).
A small worked instance of the CENTROSYMM-II algorithm follows.
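Similarly, here is a small worked instance of the CENTROSYMM-II algorithm (again with data constructed purely for illustration). Take $n = 3$ ($m = 1$),
$$R = \begin{pmatrix} 4 & 1 & 1\\ 2 & 3 & 2\\ 1 & 1 & 4 \end{pmatrix}, \qquad f = (6, 7, 6)^T.$$
Then $A = (4)$, $B = (1)$, $y = (1)$, $w = (2)$ and $\alpha = 3$, so that
$$P = \begin{pmatrix} 5 & 2\\ 2 & 3 \end{pmatrix}, \qquad Q = (3), \qquad f^{(1)} = (12, 7)^T, \qquad f^{(2)} = (0).$$
Solving $PY = f^{(1)}$ and $QZ = f^{(2)}$ gives $Y = (2, 1)^T$ and $Z = (0)$, so Step 4 yields $x = (1, 1, 1)^T$, which satisfies $Rx = f$.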
It may be convenient to finish this section by giving the following result, whose proof will be omitted.
Theorem 4.1. Let $R$ be a non-singular centrosymmetric square matrix of order $n$ and let $f$ be a given vector of dimension $n$. Consider the four linear systems of centrosymmetric type:
$$R\,x = f, \qquad (32)$$
$$R\,x = Jf, \qquad (33)$$
$$JR\,x = f, \qquad (34)$$
and
$$JR\,x = Jf. \qquad (35)$$
Then the two linear systems (32) and (35) are equivalent. The same is true for the linear systems (33) and (34). Moreover, if the common solution of the systems (32) and (35) is $\bar{x}$, then the common solution of the systems (33) and (34) is $J\bar{x}$.
5. Computer Program
In this section, we are going to introduce a MAPLE procedure for solving centrosymmetric linear systems (26) and (29). This procedure is based on the CENTROSYMM-I and CENTROSYMM-II Algorithms.
restart:
centrosymm:=proc(R::array,f::vector,n::posint)
local i, r,m,f1,f2,A,Jm,J,H,y,x,B,Y,Z,X:
global xsoln,detR,detP,detQ,P,Q;
X:= vector(n): m:=floor(n/2): J:=array(1..m, 1..m, sparse):
for i to m do J[m+1-i,i]:=1 od:
A:= linalg[submatrix](R,1..m,1..m):
if n=2*m then
# Case (1): n is even
# Step 1 in CENTROSYMM-I Algorithm.
B:= linalg[submatrix](R,m+1..n,1..m):
P:=evalm( A + evalm(J&*B)):
Q:=evalm( A - evalm(J&*B)):
# Step 2 in CENTROSYMM-I Algorithm.
detP:=linalg[det](P): detQ:=linalg[det](Q): detR:=detP *detQ:
if detR = 0 then ERROR("Singular Matrix !!!! ") fi;
# Step 3 in CENTROSYMM-I Algorithm.
f1:=array(1..m): f2:=array(1..m):
for i to m do f1[i]:=f[i]+f[2*m+1-i];
f2[i]:=f[i]-f[2*m+1-i];
od;
Y:=array(1..m): Z:=array(1..m):
Y:= linalg[linsolve](P,f1): Z:=linalg[linsolve] (Q,f2):
# Step 4 in CENTROSYMM-I Algorithm.
for i to m do X[i]:=1/2*(Y[i]+Z[i]);
od;
for i from m+1 to n do X[i]:=1/2*(Y[n+1-i]-Z[n+1-i]);
od;
xsoln:=simplify([seq(X[r],r=1..n)]):
else
# Case (2): n is odd
# Step 1 in CENTROSYMM-II Algorithm.
B:= linalg[submatrix](R,m+2..n,1..m):
H := evalm( A + evalm(J&*B)):
y:=linalg[submatrix](R,1..m,m+1..m+1):
x:=linalg[submatrix](R,m+1..m+1,1..m):
P:=linalg[blockmatrix](2,2,[H, evalm(2*y), x, linalg[submatrix](R,m+1..m+1,m+1..m+1)]):   # (2,2) block is the middle element R[m+1,m+1]
Q:=evalm( A - evalm(J&*B)):
# Step 2 in CENTROSYMM-II Algorithm.
detP:=linalg[det](P): detQ:=linalg[det](Q): detR:= detP *detQ:
if detR = 0 then ERROR("Singular Matrix !!!!") fi;
# Step 3 in CENTROSYMM-II Algorithm.
f1:=array(1..m+1): f2:=array(1..m):
for i to m do f1[i]:=f[i]+f[2*m+2-i];
f2[i]:=f[i]-f[2*m+2-i];
od;
f1[m+1]:=f[m+1];
Y:=array(1..m+1):
Y:= linalg[linsolve](P,f1): Z:=linalg[linsolve] (Q,f2):
# Step 4 in CENTROSYMM-II Algorithm.
for i to m do X[i]:=1/2*(Y[i]+Z[i]);
od;
X[m+1]:=Y[m+1];
for i from m+2 to n do X[i]:=1/2*(Y[n+1-i]-Z[n+1-i]);
od;
xsoln:=simplify([seq(X[r],r=1..n)]);
fi:
end proc:
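As an illustration of how the procedure may be called (the 4 × 4 system below is the illustrative one used after Algorithm 4.1, not one of the examples of Section 6):
R := matrix(4, 4, [5, 2, 1, 1, 2, 6, 3, 1, 1, 3, 6, 2, 1, 1, 2, 5]):
f := vector([9, 12, 12, 9]):
centrosymm(R, f, 4):
xsoln, detP, detQ, detR;     # returns [1, 1, 1, 1], 45, 11, 495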
6. Illustrative Examples
All results in this section are obtained with the help of the MAPLE procedure centrosymm.
Example 6.1. Solve the centrosymmetric linear system
Solution:
Here and Using the procedure centrosymm, we get the following results:
and
The solutions of the systems
and
are and
Therefore, we have
Hence the solution vector is
Example 6.2. Solve the centrosymmetric linear system
Solution: Here and Using the procedure centrosymm, we obtain the following results:
and
The solutions of the systems
and
are and
Therefore, we get
Hence the solution vector is
Example 6.3. Solve the centrosymmetric linear system
Solution:
Here and Using the procedure centrosymm, we obtain the following results:
The procedure returns the error “Singular Matrix !!!!”. This means that the coefficient matrix of the given system is singular, so the system does not have a unique solution.