1. Introduction
In 1858, Cayley published his “A Memoir on the Theory of Matrices,” in which he proposed and demonstrated the Cayley-Hamilton theorem. The English mathematician Cullis was the first to use modern bracket notation for matrices, in 1913, and he simultaneously demonstrated the first significant use of the notation A = [aij] to represent a matrix, where aij denotes the element in the ith row and jth column. We know that matrix multiplication is defined by multiplying a row of the multiplicand matrix with a column of the multiplier matrix. But why should we not multiply a column of the multiplicand matrix with a row of the multiplier matrix? We also know the transpose of a matrix, in which rows are transferred to columns and vice versa. What happens if a matrix is tilted 90˚, 180˚, 270˚ or 360˚? What would the images (original image, mirror image, water image and water image of the mirror image) of a given matrix be? We also know the identity matrix and its characteristics. Is it possible to define any other identity matrix? Is it possible for a given matrix to have two or more characteristic equations, or two or more minimal polynomials?
Seymour Lipschutz and Marc Lars Lipson [1] explained matrices and their algebraic operations. Kenneth Kuttler [2] analyzed matrices and row operations. Eric Jarman [3] described Jordan canonical matrices, and Tom Denton and Andrew Waldron [4] clearly explained eigenvectors. Peeyush Chandra, A.K. Lal, V. Raghavendra and G. Santhanam [5] explained eigenvalues and eigenvectors.
In this paper, taking a given matrix as an I-matrix (the usual matrix), we examine the mirror image of the given matrix (the J-matrix), the water image of the given matrix (the transprocal of the I-matrix) and the mirror image of the water image of the given matrix (the transprocal of the J-matrix), along with other types of matrix multiplication, identity matrices, characteristic equations, minimal polynomials and diagonalization.
We shall call a given matrix an I-matrix. The J-matrix is the mirror image of the I-matrix, the water image of the J-matrix is called the transprocal of the I-matrix, and the water image of the I-matrix is called the transprocal of the J-matrix. The transpose of the transprocal matrix, equivalently the transprocal of the transpose matrix, is the transprocose matrix.
Let A be a given matrix, that is, an I-matrix. We denote it by AI. Now,
Let
then we can say
so
,
we can categorize matrices into two groups: I-group matrices and J-group matrices.
Transprocal of matrix A is
and
Transprocose (Transpose + Transprocal = Transprocose) of matrix A is
.
We know that
then Transprocose of A matrix is
.
We can categorize a given matrix into one of two groups: I-group matrices and J-group matrices.
Let
be a certain matrix.
We define I-group matrices as:
Main diagonal.
Trace and determinant of these matrices are the same.
We define J-group matrices as:
.
Main diagonal.
Trace and determinant of these matrices are the same.
2. Transprocal (Transpose + Reciprocal)
Definition 1: Let A be an m × n matrix,
then,
the transprocal of matrix A is
where,
is called the transprocal of A. Transprocal means that the elements of a given matrix, read from first to last, are placed from last to first. That is, if a11 is the first element and amn the last element of a given matrix, then the transprocal transfers each element to the mirrored position: a11 takes the amn place, a12 takes the am(n−1) place, …, a1n takes the am1 place, …, am1 takes the a1n place, …, amn takes the a11 place.
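The mapping just described sends entry (i, j) to position (m + 1 − i, n + 1 − j), which is a 180˚ rotation of the array. A minimal sketch in Python (the function name `transprocal` is chosen here for illustration):

```python
def transprocal(A):
    """Transprocal of a matrix given as a list of rows: entry (i, j)
    moves to (m+1-i, n+1-j), i.e. the array is reversed row-wise and
    column-wise (a 180-degree turn)."""
    return [list(reversed(row)) for row in reversed(A)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a11 = 1, a23 = 6

At = transprocal(A)       # a11 ends up in the a23 place and vice versa
print(At)                 # [[6, 5, 4], [3, 2, 1]]
```

Applying the map twice returns the original matrix, matching the mirrored-position description above.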
Properties:
1)
.
2)
.
3)
.
4)
.
5)
.
6)
.
7)
.
8)
; more generally
.
9)
.
10)
.
11)
.
12)
.
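The printed formulas for properties 1)–12) did not survive reproduction, but several identities the transprocal satisfies (it equals R A R for the reversal matrix R, so it is involutive, additive, multiplicative, and preserves the trace) can be checked numerically. A sketch under those assumptions, with illustrative matrices:

```python
def transprocal(A):
    # 180-degree rotation: reverse the rows, then reverse each row
    return [list(reversed(r)) for r in reversed(A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

# Involutive: applying the transprocal twice returns the matrix.
assert transprocal(transprocal(A)) == A
# Additive: transprocal(A + B) = transprocal(A) + transprocal(B).
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
SP = [[transprocal(A)[i][j] + transprocal(B)[i][j] for j in range(2)]
      for i in range(2)]
assert transprocal(S) == SP
# Multiplicative: transprocal(AB) = transprocal(A) * transprocal(B).
assert transprocal(matmul(A, B)) == matmul(transprocal(A), transprocal(B))
# The trace is preserved (square case).
assert A[0][0] + A[1][1] == transprocal(A)[0][0] + transprocal(A)[1][1]
```

These identities are consequences of transprocal(A) = R A R together with R² = I; they are offered as plausible instances of the listed properties, not as a reconstruction of the lost formulas.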
Proof:
1) Let A be an m × n matrix,
then
.
So, the transprocal of matrix
is
, thus
.
2) Let
then
.
3) Let
then
and let
then
then
.
4) Let
then
Now
.
5) Let
then
.
6) Let
then
Now
then
.
7) Now
then
.
8) Let
then
So
Now
then
Let
then
Now
or
(1)
Eigen value of
is
,
,
Also,
or
(2)
Eigen value of
is
,
,
Both cases we get,
9)
10)
11)
12) Let A be an m × n matrix,
.
Definition 2: The transpose of the transprocal matrix is called the TRANSPROCOSE matrix. We assign
. so now onwards we call
is Transprocose of matrix A.
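The transprocose can be sketched directly from Definitions 1 and 2: rotate the matrix 180˚, then transpose (the two operations commute, so the order does not matter). Function names are illustrative:

```python
def transprocal(A):
    # 180-degree rotation of the matrix
    return [list(reversed(r)) for r in reversed(A)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def transprocose(A):
    # Transpose of the transprocal: a reflection of the matrix
    # across its anti-diagonal.
    return transpose(transprocal(A))

A = [[1, 2],
     [3, 4]]
print(transprocose(A))    # [[4, 2], [3, 1]]
# The two orders of operations agree:
assert transprocose(A) == transprocal(transpose(A))
```

Note that the result reflects A across its anti-diagonal, which is consistent with the J-matrix viewpoint introduced later.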
Properties:
1)
.
2)
.
3)
.
4)
.
5)
.
6) We know that
and
,
. Thus
.
7)
.
3. Algebraic Properties of Certain Matrix, Transpose, Transprocal and Transprocose Matrices
Let A be a matrix then
is a transpose of matrix A,
is a transprocal of matrix A and
is a transprocose of matrix A.
Now we find some matrices by combining a given matrix with its transpose, transprocal and transprocose matrices.
1)
2)
3)
4)
5)
6)
.
We now apply the transpose, transprocal and transprocose to the above matrices.
1) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
2) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
3) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
4) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
5) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
6) Let
.
Taking the transpose of both sides, we get
Taking the transprocal of both sides, we get
Taking the transprocose of both sides, we get
From the above matrices, we conclude that they are related to one another.
4. Dominance Property
1) If
then
Let
then
Now
From the above matrices, we conclude that the multiplicand matrix dominates in the product matrix.
Cayley-Hamilton theorem: Every square matrix satisfies its characteristic equation.
Note: It is always true for matrix A and its transprocal matrix
.
13) Let
then
Now the characteristic equation of A is
.
We can write this equation as
;
Also, we can write this equation as
.
,
and
,
.
Using the above matrices, one can check these characteristic equations.
Thus
.
5. J-Matrix
Definition 3:
I matrix and J matrix: An m x n matrix is usually written as
we shall call this matrix an I-matrix (the first element starts from the north-west corner).
An m × n matrix is said to be a J-matrix (the first element starts from the north-east corner) if it is written as
.
We define some types of J-matrices.
1) Diagonal matrix: A square matrix is called a diagonal matrix if all its non-diagonal elements are zero.
2) Scalar matrix: A diagonal matrix is called a scalar matrix if all the non-diagonal elements are equal to zero and all the diagonal elements are equal to a scalar, say k.
3) Unit or identity matrix: A square matrix is said to be a unit or identity matrix if all the diagonal elements are equal to unity and the non-diagonal elements are zero.
etc. are unit matrices.
Upper triangular matrix: A square matrix is said to be an upper triangular matrix if all elements below the leading diagonal are zero.
is an upper triangular matrix.
Leading diagonal.
4) Lower triangular matrix: A square matrix is said to be a lower triangular matrix if all elements above the leading diagonal are zero.
is a lower triangular matrix.
Leading diagonal.
5) Transpose matrix: If we interchange the rows and corresponding columns of a given matrix A, the matrix obtained is called the transpose of the matrix A and is denoted by AT or A'.
Let
then
.
6) Symmetric matrix: A square matrix A is said to be symmetric matrix, if
. That is if for all the values of i and j,
.
Let
is a symmetric matrix.
7) Anti-symmetric matrix: A square matrix A is said to be an anti-symmetric matrix, if
. That is if for all the values of i and j,
.
Let
is an anti-symmetric matrix
8) Orthogonal matrix: A square matrix A is said to be orthogonal matrix, if
where J is a unit matrix.
9) Hermitian matrix: A square matrix A is called Hermitian matrix, if
. That is, if every i-jth element of A is equal to the complex conjugate of the j-ith element of A, i.e.
.
Note: Every diagonal element of Hermitian matrix is real.
Ex: Let
are Hermitian matrices.
10) Skew-Hermitian matrix: A square matrix A is called a skew-Hermitian matrix, if
That is, if every i-jth element of A is equal to the negative of the complex conjugate of the j-ith element of A, i.e.
.
Note: Diagonal element of a Skew-Hermitian matrix is either purely imaginary or zero.
Ex: Let
are skew-Hermitian matrices.
11) Unitary matrix: A square matrix A is said to be unitary matrix,
If
. Where
.
12) Involutory matrix: A square matrix A is said to be an involutory matrix, if
. A unit matrix is always an involutory matrix, since
Note: Other definitions, such as nilpotent, idempotent, conjugate, etc., of the J matrix are the same as for the I matrix.
6. Algebra of J Matrix
6.1. Addition and Subtraction of J-Matrices
If two matrices A and B are of the same order, then the addition and subtraction of the matrices, A ± B, is defined as the matrix obtained by the addition or subtraction of the corresponding elements of A and B.
More clearly, we can say that
Let
then
.
6.2. Scalar Multiplication of J Matrix
Let A be any matrix and k any scalar; then the matrix obtained by multiplying every element of the matrix A by k is called the scalar multiplication of A by k and is denoted by kA.
Ex: Let k = 3 and
then
.
6.3. Multiplication of J Matrices
Definition 4: Suppose
are two matrices such that the number of rows of A is equal to the number of columns of B; say, A is a p × m matrix and B is an n × p matrix. Then the product AB is an n × m matrix whose ij-th entry is obtained by multiplying the i-th row of B by the j-th column of A. That is,
where
.
The product AB is not defined if A is a p × m matrix and B is an n × q matrix, where
.
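In ordinary (I-matrix) notation, the entry rule above — i-th row of B times j-th column of A — makes this J-product of A and B equal to the usual product BA. A sketch, with `j_multiply` a name chosen here for illustration:

```python
def j_multiply(A, B):
    """J-matrix product of A and B per the definition: the ij-th entry
    is the i-th row of B times the j-th column of A (so in ordinary
    notation it equals the usual product B*A)."""
    n, p, m = len(B), len(B[0]), len(A[0])
    assert len(A) == p, "rows of A must equal columns of B"
    return [[sum(B[i][k] * A[k][j] for k in range(p))
             for j in range(m)] for i in range(n)]

B = [[1, 2],
     [3, 4]]              # 2 x 2
A = [[1, 0, 1],
     [2, 1, 0]]           # 2 x 3
print(j_multiply(A, B))   # [[5, 2, 1], [11, 4, 3]]
```

The result is a 2 × 3 matrix, as the definition requires for a 2 × 3 multiplicand and a 2 × 2 multiplier.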
Ex:
1) Find AB where
and
Because A is a 2 × 3 matrix and B is 2 × 2, the product AB is defined and AB is a 2 × 3 matrix. To obtain the first row of the matrix AB, multiply the first row (1, 2) of B by each column of A,
respectively. That is,
To obtain the second row of AB, multiply the second row (3, 4) of B by each column of A. Thus,
.
2) Suppose
and
then
and
.
The above examples show that matrix multiplication is not commutative. That is, in general,
. However, matrix multiplication does satisfy the following properties.
Proposition 1: If all the multiplications and additions make sense, the following hold for matrices A, B, C and scalars x, y.
1)
2)
3)
Proof: Using the definition of multiplication,
1)
2)
is easily verified.
3)
.
6.4. Determinant of J Matrix
Every square matrix can be associated with an expression or a number which is known as its determinant. The determinant of a square matrix
of order n is denoted by
and is given by
Note: A matrix which is not a square matrix does not possess a determinant.
Determinants of orders 1 and 2.
Determinants of orders 1 and 2 are defined as follows:
and
Thus, the determinant of a 1 × 1 matrix
is the scalar a11; that is,
. The determinant of order two may easily be remembered by using the following diagram:
.
That is, the determinant is equal to the product of the elements along the plus-labeled arrow minus the product of the elements along the minus-labeled arrow.
The determinant of a 3 × 3 matrix:
Let
be a square matrix of order 3, then
given by
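The order-3 expansion can be sketched as a cofactor expansion along the first row (an illustrative implementation, with an illustrative matrix):

```python
def det2(a, b, c, d):
    # | a b ; c d | = ad - bc, the order-2 determinant above
    return a * d - b * c

def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * det2(e, f, h, i) - b * det2(d, f, g, i) + c * det2(d, e, g, h)

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3(M))   # -3
```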
Diagonal and Trace.
Let
be an n-square matrix. The diagonal or main diagonal of A consists of the elements with the same subscripts that is
.
The trace of A, written tr(A) is the sum of the diagonal elements. Namely,
.
6.5. Some Working Models of J Matrices
1) Orthogonal matrix: A square matrix A is said to be orthogonal matrix, if
.
where J is a unit matrix.
Proof:
Let
be a 3 × 3 matrix, then
2) Unitary matrix: a square matrix A is said to be unitary matrix, if
.
Proof:
Let
be a 2 × 2 matrix then
Moreover,
Now
3) Involutory matrix: A square matrix A is said to be an involutory matrix, if
. A unit matrix is always an involutory matrix, since
.
Proof:
Let
be a 3 × 3 matrix, then
6.6. Solving Systems of Equations
Inverse method:
Solution: The system of given equations can be written as
, where
,
,
Therefore
. Now
Thus
.
Cramer’s rule:
Ex: Solve the following system of equations:
here determinant ∆ of the coefficient matrix is given by
Hence the system has a unique solution. By Cramer’s rule, we thus have
where
Hence the solution is
.
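Cramer's rule as used above can be sketched for the 2 × 2 case; the system below is an illustrative choice, not the system of the example:

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    # Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by Cramer's rule.
    delta = a11 * a22 - a12 * a21
    assert delta != 0, "a unique solution requires a nonzero determinant"
    dx = b1 * a22 - a12 * b2     # first column replaced by the constants
    dy = a11 * b2 - b1 * a21     # second column replaced by the constants
    return dx / delta, dy / delta

# x + y = 3, x - y = 1  ->  x = 2, y = 1
print(cramer_2x2(1, 1, 1, -1, 3, 1))   # (2.0, 1.0)
```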
Normal form
A matrix of order m × n is said to be in (fully reduced) normal form if it is of the form
Characteristic equation, Eigen values and Eigen vectors
The characteristic equation of a square matrix A is defined as the equation
. The expression
is often referred to as the characteristic polynomial. This will be usually denoted by
.
If
then
(sum of principal minors of order 2)
(sum of principal minors of order 3)
.
Cayley-Hamilton theorem
Every square matrix satisfies its characteristic equation.
Proof: Let A be a square matrix and let
be its characteristic equation. Then it is known that
Since the matrix
is a factor (left) of the right hand side, by the remainder theorem for matrix polynomials, if A is substituted for xJ then the right hand side must be satisfied.
Therefore,
.
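The theorem can be checked directly in the 2 × 2 case, where the characteristic equation is t² − tr(A)·t + det(A) = 0; the matrix below is an illustrative choice:

```python
# Cayley-Hamilton in the 2x2 case: A^2 - tr(A)*A + det(A)*I
# must equal the zero matrix.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                          # 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # -2
A2 = matmul(A, A)
residual = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]
print(residual)    # [[0, 0], [0, 0]]
```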
Ex:
Let
be a 3 × 3 matrix then
.
The characteristic equation of A is
.
By Cayley-Hamilton theorem, we get
.
,
Remark 1: The characteristic equation is not the only equation that a matrix satisfies. There may be other equations that are satisfied.
Remark 2: The above theorem can be used successfully in computing the inverse of a non-singular matrix.
To find A−1:
Let
be a 3 × 3 matrix then
and
The characteristic equation of A is
.
By Cayley-Hamilton theorem, we get
.
Or
, multiplying A−1 both side we get,
Hence
. Thus
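The inverse computation above can be sketched in the 2 × 2 case: multiplying the characteristic equation A² − tr(A)·A + det(A)·I = 0 by A⁻¹ gives A⁻¹ = (tr(A)·I − A)/det(A). The matrix is an illustrative choice:

```python
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# A^{-1} = (tr(A)*I - A) / det(A), from the characteristic equation.
A_inv = [[(tr * (1 if i == j else 0) - A[i][j]) / det for j in range(2)]
         for i in range(2)]
print(A_inv)    # [[-2.0, 1.0], [1.5, -0.5]]
# Check: A * A_inv = I.
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1.0, 0.0], [0.0, 1.0]]
```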
Eigen space
The vector space generated by the eigenvectors corresponding to an eigenvalue of a matrix is called the eigenspace of the eigenvalue. This will sometimes be denoted by L(λ), where λ is an eigenvalue of A. If x and y are two eigenvectors of an eigenvalue of A and c is a scalar, then it is easy to verify that x + y and cx are also eigenvectors of the same eigenvalue.
The dimension of the eigenspace of an eigenvalue is called the geometric multiplicity of that eigenvalue. If an eigenvalue has the same geometric multiplicity as its algebraic multiplicity, then it is called regular.
Minimal polynomial
The minimal polynomial of a matrix is defined as the monic polynomial of the lowest degree satisfied by the matrix. The minimal polynomial of a matrix A will be denoted by
.
Three results follow readily from the definition:
1) The minimal polynomial of a matrix divides every polynomial satisfied by the matrix.
2) The characteristic polynomial and minimal polynomial have the same irreducible factors.
3) A scalar is an Eigen value of a matrix A if and only if it is a root of the minimal polynomial of the matrix.
Ex: Find the characteristic polynomial and the minimal polynomial of the matrix.
Solution: Let the given matrix be denoted by A. Then the characteristic equation is
.
Therefore the characteristic equation of the matrix is,
or
The possible minimal polynomials are
and
We now observe
So, A does not satisfy the polynomial
. But by the Cayley-Hamilton theorem, A satisfies the polynomial
. Hence the characteristic polynomial is also the minimal polynomial of A.
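The elimination procedure in this example — test the lower-degree candidate, fall back to the characteristic polynomial — can be sketched for matrices with a repeated eigenvalue; the matrices here are illustrative choices, not the (unreproduced) matrix of the example:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shift(A, c):
    # A - c*I
    return [[A[i][j] - c * (1 if i == j else 0) for j in range(len(A))]
            for i in range(len(A))]

Z = [[0, 0], [0, 0]]

# A has characteristic polynomial (t - 2)^2; the candidates for the
# minimal polynomial are (t - 2) and (t - 2)^2.
A = [[2, 1],
     [0, 2]]
assert shift(A, 2) != Z                        # (t - 2) fails
assert matmul(shift(A, 2), shift(A, 2)) == Z   # (t - 2)^2 succeeds

# For B = 2I the lower-degree candidate already works.
B = [[2, 0],
     [0, 2]]
assert shift(B, 2) == Z
```

This also illustrates result 2) above: both candidates share the irreducible factor (t − 2), but only one has the right degree.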
Diagonalization
From the point of view of application, it is often important to see whether any square matrix is similar to a diagonal matrix, i.e. whether for a matrix A there exists a non-singular matrix P such that
, where D is a diagonal matrix.
Ex: Diagonalize the following matrix
Solution:
Let
then
or
.
The eigenvalues of A are −2, −2 and 4. The eigenvectors corresponding to −2 are (0, 1, 1) and (1, 1, 0), and the eigenvector corresponding to 4 is (2, 1, 1). Now we construct a matrix taking the three eigenvectors as its three columns and obtain
then
so
Now
which is the required diagonal matrix.
Also,
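A diagonalization of this kind can be verified numerically through the equivalent identity AP = PD, which avoids computing P⁻¹ explicitly; the 2 × 2 matrix below is an illustrative choice, not the 3 × 3 matrix of the example:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1],
     [2, 3]]          # eigenvalues 5 and 2
P = [[1, 1],
     [1, -2]]         # eigenvectors (1, 1) and (1, -2) as columns
D = [[5, 0],
     [0, 2]]
# P^{-1} A P = D is equivalent to A P = P D.
assert matmul(A, P) == matmul(P, D)
```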
7. Jordan Canonical Form
We have seen that if the Eigen values of a matrix are all different, then the matrix can be diagonalized by similarity transformations. If the Eigenvalues are not different, the matrix can be diagonalized if the Eigenvectors whose number equals the order of the matrix are linearly independent. If the Eigenvectors are not linearly independent or if their number is less than the order of the matrix, then A cannot be diagonalized by similarity transformations.
However, by similarity transformation, any square matrix can always be converted into a matrix having Jordan blocks along the diagonal, called the Jordan canonical form.
A matrix is called a Jordan block if all the diagonal elements of the matrix are the same scalar and 1s always occur just above the diagonal elements. The following are examples of Jordan blocks:
Theorem: Every square matrix A whose characteristic polynomial
and the minimal polynomial
can be reduced to the block diagonal matrix
which is called a Jordan canonical form, where the Bij are Jordan blocks of the form
and for each
the corresponding Bij have the following properties:
1) There is at least one Bij of order m, all other Bij are of order less than m.
2) The sum of orders of Bij is ni.
3) The number of Bij equals the geometric multiplicity of
.
4) The number of Bij of each possible order is uniquely determined by A.
Ex: Reduce the matrix A to the Jordan canonical forms whose characteristic polynomial
and the minimal polynomial
are respectively
and
Then the Jordan canonical form is one of the following matrices.
or
Note that the first one is the desired canonical form if there are two independent eigenvectors corresponding to the eigenvalue 5, but the second occurs if there are three eigenvectors corresponding to the eigenvalue 5.
Remark: Observe that
1) The order of the matrix is 4 + 3, i.e. 7
2) There is at least one block of order 2 [the index of the factor t − 5 in
] for the Eigen value 5 and at least one block of order 2 [the index of the factor t − 7 in
].
3) The sum of the orders of the blocks corresponding to 5 is 2 + 2, i.e. 4, in the first form; the sum of the orders of the blocks corresponding to 7 is 2 + 1, i.e. 3, in both forms.
Real quadratic form
An expression of the form
, where
and
, is called a real quadratic form in the variables
. A real quadratic form can be written as
where
and A is a symmetric matrix, (called the matrix of the form) as
with
Determine the matrix of the following quadratic form:
Here the associated matrix is
.
I and J operations on a given matrix A
1) Super orthogonal matrix: A square matrix A is said to be a super orthogonal matrix if, for both operations I and J, the product of the matrix and its transpose gives an identity matrix.
i.e. for I-operation
where I is an I-unit matrix, and for the J-operation
where J is a J-unit matrix.
Let
be a 3 × 3 matrix, then
For I-operation:
Thus
.
For J-operation:
Thus
.
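The two checks above can be sketched numerically. In ordinary notation the I-operation product of A and Aᵀ is A·Aᵀ, while the J-operation product (row of the second factor times column of the first, per the multiplication definition) is Aᵀ·A; a super orthogonal matrix yields an identity under both. The permutation matrix below is an illustrative choice, not the 3 × 3 matrix of the example:

```python
def transpose(A):
    return [list(c) for c in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1],
     [1, 0]]          # a permutation matrix
At = transpose(A)
I = [[1, 0], [0, 1]]
assert matmul(A, At) == I     # I-operation product
assert matmul(At, A) == I     # J-operation product
```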
2) Super unitary matrix: A square matrix A is said to be a super unitary matrix if, for both operations I and J, the product of the matrix and the transpose of its conjugate gives an identity matrix.
i.e. for I-operation
where I is an I-unit matrix, and for the J-operation
where J is a J-unit matrix.
Proof:
be a 2 × 2 matrix. Then
For I-operation
Now
Thus
, where I is an I-unit matrix.
For J-operation
Now
Thus
, where J is a J-unit matrix.
Theorem 1: A unit matrix satisfies the involutory condition in one manner only. It is either an I-matrix or a J-matrix.
Proof:
Let
be a 3 × 3 matrix, then
For I-operation:
Theorem 2: Inverse of I-matrix and inverse of J-matrix are TRANSPROCAL to each other.
Proof:
Let
be an n × n matrix, then
Let
be an n × n matrix, then
From the above, we conclude that the inverses of the I-matrix and the J-matrix are TRANSPROCAL to each other.
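A consequence consistent with this conclusion can be checked directly: writing the transprocal of A as R A R for the reversal matrix R (with R² = I), inverting the transprocal gives the transprocal of the inverse. A sketch with an illustrative 2 × 2 matrix:

```python
def transprocal(A):
    # 180-degree rotation of the matrix
    return [list(reversed(r)) for r in reversed(A)]

def inv2(A):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2],
     [3, 4]]
# Inverting the transprocal gives the transprocal of the inverse.
assert inv2(transprocal(A)) == transprocal(inv2(A))
```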
8. Solving Linear Equations Using I-Matrix and J-Matrix
Inverse method:
Solve the following system of equations:
(1)
(2)
(3)
Solution: The given system of equations can be written as
, where
i.e.
Case 1:
Let
then
.
Case 2:
Let
then
.
Both ways, we obtained the same solution for the given equations.
Theorem 3: Every non-singular square matrix has two characteristic equations: one from the main diagonal (I-matrix) and the other from the anti-diagonal (J-matrix). Also, every non-singular square matrix satisfies both of its characteristic equations.
Proof:
Let
be a 3 × 3 matrix and
, then
For I-matrix:
. The characteristic equation of AI is
.
Eigen values:
.
By Cayley-Hamilton theorem, we get
,
.
For J-matrix:
. The characteristic equation of AJ is
.
Eigen values:
.
By Cayley-Hamilton theorem, we get
,
Hence proved.
For both forms of the matrix, I and J, we obtained different characteristic equations, and the given matrix satisfied both equations.
Orthoprocal matrix
A real matrix A is orthoprocal if
, that is, if
. Thus A must necessarily be square and invertible.
Let
then
Thus
.
Trans Orthoprocal matrices
A real matrix A is Trans Orthoprocal if
, that is, if
. Thus A must necessarily be square and invertible.
Let
then
and
Thus
.
Theorem 4:
A matrix A is said to be a trans-orthoprocal matrix if and only if A is an orthogonal matrix.
Proof:
Necessity:
Let
be an orthogonal matrix. Now, taking the transprocal of both sides, we get
Again, taking the transpose of both sides, we get
. Since
.
Sufficiency:
Let
be a trans-orthoprocal matrix. Now, taking the transprocal of both sides, we get
. Since
.
Or
Let
be a trans-orthoprocal matrix. Now, taking the transpose of both sides, we get
. Since
.
Product between two transprocose matrices
Let
,
be two matrices then
,
are the transprocose matrices of A and B.
Now
.
Now
Hence proved.
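The product rule behind this section — the transprocose, like the transpose, reverses the order of a product, since transprocose(X) = R Xᵀ R with R² = I — can be checked numerically with illustrative matrices:

```python
def transprocose(A):
    # transpose of the 180-degree rotation of A
    return [list(c) for c in zip(*[list(reversed(r)) for r in reversed(A)])]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
# The transprocose of a product reverses the order of the factors.
assert transprocose(matmul(A, B)) == matmul(transprocose(B), transprocose(A))
```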