Matrices—One Review

Abstract

To explore the various kinds of matrices, matrix multiplication, identity matrices, characteristic equations, minimal polynomials and diagonalization, my paper investigates matrices and the algebraic operations defined on them. These matrices may be viewed as rectangular arrays of elements where each entry depends on two subscripts. Systems of linear equations and their solutions may be efficiently investigated using the language of matrices. Furthermore, certain abstract objects introduced at the end of my paper, such as the I-matrix, the J-matrix, the transprocal of a given matrix, the transpose of the transprocal matrix (i.e. the transprocose matrix), super orthogonality, super unitary, trans orthogonality, and trans orthoprocal, can be represented by such matrices. On the other hand, the abstract treatment of linear algebra presented later will give us new insight into the structure of these matrices. The entries in our matrices will come from some arbitrary, but fixed, field K.

Share and Cite:

Rangasamy, B. (2019) Matrices—One Review. Advances in Linear Algebra & Matrix Theory, 9, 43-72. doi: 10.4236/alamt.2019.93004.

1. Introduction

In 1858, Cayley published his "A Memoir on the Theory of Matrices", in which he proposed and demonstrated the Cayley-Hamilton theorem. The English mathematician Cullis was the first to use the modern bracket notation for matrices, in 1913, and simultaneously demonstrated the first significant use of the notation A = [aij] to represent a matrix, where aij denotes the entry in the ith row and jth column. We know matrix multiplication is defined by multiplying the rows of the multiplicand matrix with the columns of the multiplier matrix. But why not multiply the columns of the multiplicand matrix with the rows of the multiplier matrix? Also, we know the transpose of a matrix, in which rows become columns and vice versa. What happens if a matrix is tilted 90˚, 180˚, 270˚ or 360˚? Or what would be the images (original image, mirror image, water image and water image of the mirror image) of a given matrix? We also know the identity matrix and its characteristics. Can we define any other identity matrix? Could a given matrix have two or more characteristic equations, or two or more minimal polynomials?

Seymour Lipschutz and Marc Lars Lipson explained matrices and their algebraic operations. Kenneth Kuttler analyzed matrices and row operations. Eric Jarman described Jordan canonical matrices, and Tom Denton and Andrew Waldron clearly explained eigenvectors. Peeyush Chandra, A. K. Lal, V. Raghavendra and G. Santhanam explained eigenvalues and eigenvectors.

In this paper, if a given matrix is the I-matrix (the usual matrix), we can see the mirror image of the given matrix (the J-matrix), the water image of the given matrix (the transprocal of the J-matrix) and the water image of the mirror image of the given matrix (the transprocal of the I-matrix), as well as other types of matrix multiplication, identity matrices, characteristic equations, minimal polynomials and diagonalization.

We shall call a given matrix an I-matrix. The J-matrix is the mirror image of the I-matrix, the water image of the J-matrix is called the transprocal of the I-matrix, and the water image of the I-matrix is called the transprocal of the J-matrix. The transpose of the transprocal matrix (equivalently, the transprocal of the transpose matrix) is the transprocose matrix.

Let A be a given matrix, that is, the I-matrix. We notate it as AI. Now,

Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then we can say ${A}_{I}=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ so ${A}_{J}=\left[\begin{array}{ccc}c& b& a\\ f& e& d\\ i& h& g\end{array}\right]$ ,

the transpose is ${A}^{\text{T}}=\left[\begin{array}{ccc}a& d& g\\ b& e& h\\ c& f& i\end{array}\right]$ , the transprocal of matrix A is ${A}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]$ and

Transprocose (Transpose + Transprocal = Transprocose) of matrix A is ${A}^{⊳}=\left[\begin{array}{ccc}i& f& c\\ h& e& b\\ g& d& a\end{array}\right]$ .


We can categorize a given matrix into two groups: I-group matrices and J-group matrices.

Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ be a given matrix.

We define the I-group matrices as:

${A}_{I}=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right],{A}_{I}^{T}=\left[\begin{array}{ccc}a& d& g\\ b& e& h\\ c& f& i\end{array}\right],{A}_{I}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right],{A}_{I}^{⊳}=\left[\begin{array}{ccc}i& f& c\\ h& e& b\\ g& d& a\end{array}\right]$ . These four matrices share the same main diagonal.

Trace and determinant of these matrices are the same.

We define the J-group matrices as:

${A}_{J}=\left[\begin{array}{ccc}c& b& a\\ f& e& d\\ i& h& g\end{array}\right],{A}_{J}^{\text{T}}=\left[\begin{array}{ccc}g& d& a\\ h& e& b\\ i& f& c\end{array}\right],{A}_{J}^{¬}=\left[\begin{array}{ccc}g& h& i\\ d& e& f\\ a& b& c\end{array}\right],{A}_{J}^{⊳}=\left[\begin{array}{ccc}c& f& i\\ b& e& h\\ a& d& g\end{array}\right]$ .

These matrices also share a common main diagonal, and their trace and determinant are the same.

2. Transprocal Matrix (Transpose + Reciprocal)

Definition 1: Let A be an m × n matrix,

$A=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& {a}_{13}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& \cdots & {a}_{2n}\\ {a}_{31}& {a}_{32}& {a}_{33}& \cdots & {a}_{3n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& {a}_{m3}& \cdots & {a}_{mn}\end{array}\right]$ then,

the transprocal of matrix A is

${A}^{¬}=\left[\begin{array}{ccccc}{a}_{mn}& \cdots & {a}_{m3}& {a}_{m2}& {a}_{m1}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{3n}& \cdots & {a}_{33}& {a}_{32}& {a}_{31}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\end{array}\right]$

where ${A}^{¬}$ is called the transprocal of A. Transprocal means that the elements of the given matrix, from first to last, are taken as the elements from last to first: if a11 is the first element and amn is the last element, the transprocal moves each element to the mirrored position. That is, a11 takes the amn position, a12 takes the am(n−1) position, …, a1n takes the am1 position, …, am1 takes the a1n position, …, and amn takes the a11 position. Equivalently, the entry in position (i, j) moves to position (m − i + 1, n − j + 1).
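As an illustration (not from the paper), the transprocal is simply a 180˚ rotation of the array, which in NumPy amounts to reversing both axes:

```python
import numpy as np

def transprocal(A):
    """Transprocal of A: entry (i, j) moves to (m - i + 1, n - j + 1),
    i.e. the whole array is rotated by 180 degrees."""
    return A[::-1, ::-1]

A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(transprocal(A))
# [[6 5 4]
#  [3 2 1]]
```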

Properties:

1) ${\left({A}^{¬}\right)}^{¬}=A$ .

2) ${I}^{¬}=I$ .

3) ${\left(A+B\right)}^{¬}={A}^{¬}+{B}^{¬}$ .

4) ${\left(kA\right)}^{¬}=k{A}^{¬}$ .

5) ${\left(AB\right)}^{¬}={A}^{¬}{B}^{¬}$ .

6) $tr\left(A\right)=tr\left({A}^{¬}\right)$ .

7) $\mathrm{det}\left(A\right)=\mathrm{det}\left({A}^{¬}\right)$ .

8) ${\left({A}^{-1}\right)}^{¬}={\left({A}^{¬}\right)}^{-1}$ ; more generally ${\left({A}^{n}\right)}^{¬}={\left({A}^{¬}\right)}^{n},n\in Z$ .

9) $eig\left(A\right)=eig\left({A}^{¬}\right)$ .

10) ${X}_{A}\left(t\right)={X}_{{A}^{¬}}\left(t\right)$ .

11) ${m}_{A}\left(t\right)={m}_{{A}^{¬}}\left(t\right)$ .

12) ${\left({A}^{\text{T}}\right)}^{¬}={\left({A}^{¬}\right)}^{\text{T}}$ .

Proof:

1) Let A be an m × n matrix,

$A=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& {a}_{13}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& \cdots & {a}_{2n}\\ {a}_{31}& {a}_{32}& {a}_{33}& \cdots & {a}_{3n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& {a}_{m3}& \cdots & {a}_{mn}\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccccc}{a}_{mn}& \cdots & {a}_{m3}& {a}_{m2}& {a}_{m1}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{3n}& \cdots & {a}_{33}& {a}_{32}& {a}_{31}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\end{array}\right]$ .

So, the transprocal of matrix ${A}^{¬}$ is

${\left({A}^{¬}\right)}^{¬}=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& {a}_{13}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& \cdots & {a}_{2n}\\ {a}_{31}& {a}_{32}& {a}_{33}& \cdots & {a}_{3n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& {a}_{m3}& \cdots & {a}_{mn}\end{array}\right]=A$ , thus ${\left({A}^{¬}\right)}^{¬}=A$ .

2) Let $I=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]$ then ${I}^{¬}=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]$ .

3) Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]$ and let $B=\left[\begin{array}{ccc}r& s& t\\ u& v& w\\ x& y& z\end{array}\right]$ then ${B}^{¬}=\left[\begin{array}{ccc}z& y& x\\ w& v& u\\ t& s& r\end{array}\right]$

$A+B=\left[\begin{array}{ccc}a+r& b+s& c+t\\ d+u& e+v& f+w\\ g+x& h+y& i+z\end{array}\right]$ then

${\left(A+B\right)}^{¬}=\left[\begin{array}{ccc}i+z& h+y& g+x\\ f+w& e+v& d+u\\ c+t& b+s& a+r\end{array}\right]=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]+\left[\begin{array}{ccc}z& y& x\\ w& v& u\\ t& s& r\end{array}\right]={A}^{¬}+{B}^{¬}$ .

4) Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then $kA=\left[\begin{array}{ccc}ka& kb& kc\\ kd& ke& kf\\ kg& kh& ki\end{array}\right]$

Now ${\left(kA\right)}^{¬}=\left[\begin{array}{ccc}ki& kh& kg\\ kf& ke& kd\\ kc& kb& ka\end{array}\right]=k\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]=k{A}^{¬}$ .

5) Let $AB=\left[\begin{array}{ccc}ar+bu+cx& as+bv+cy& at+bw+cz\\ dr+eu+fx& ds+ev+fy& dt+ew+fz\\ gr+hu+ix& gs+hv+iy& gt+hw+iz\end{array}\right]$ then

${\left(AB\right)}^{¬}=\left[\begin{array}{ccc}gt+hw+iz& gs+hv+iy& gr+hu+ix\\ dt+ew+fz& ds+ev+fy& dr+eu+fx\\ at+bw+cz& as+bv+cy& ar+bu+cx\end{array}\right]$

$\begin{array}{c}{\left(AB\right)}^{¬}=\left[\begin{array}{ccc}iz+hw+gt& iy+hv+gs& ix+hu+gr\\ fz+ew+dt& fy+ev+ds& fx+eu+dr\\ cz+bw+at& cy+bv+as& cx+bu+ar\end{array}\right]\\ =\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]\left[\begin{array}{ccc}z& y& x\\ w& v& u\\ t& s& r\end{array}\right]={A}^{¬}{B}^{¬}\end{array}$ .

6) Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]$

Now $tr\left(A\right)=a+e+i$ then $tr\left({A}^{¬}\right)=i+e+a=a+e+i=tr\left(A\right)$ .

7) Now $\mathrm{det}\left(A\right)=aei+bfg+cdh-afh-bdi-ceg$ then

$\begin{array}{c}\mathrm{det}\left({A}^{¬}\right)=iea+hdc+gfb-hfa-idb-gec\\ =aei+bfg+cdh-afh-bdi-ceg=\mathrm{det}\left(A\right)\end{array}$ .

8) Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then ${A}^{-1}=\frac{1}{|A|}\left[\begin{array}{ccc}ei-fh& ch-bi& bf-ce\\ fg-di& ai-cg& cd-af\\ dh-eg& bg-ah& ae-bd\end{array}\right]$

So ${\left({A}^{-1}\right)}^{¬}=\frac{1}{|A|}\left[\begin{array}{ccc}ae-bd& bg-ah& dh-eg\\ cd-af& ai-cg& fg-di\\ bf-ce& ch-bi& ei-fh\end{array}\right]$

Now ${A}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]$ then $\begin{array}{c}{\left({A}^{¬}\right)}^{-1}=\frac{1}{|{A}^{¬}|}\left[\begin{array}{ccc}ae-bd& bg-ah& dh-eg\\ cd-af& ai-cg& fg-di\\ bf-ce& ch-bi& ei-fh\end{array}\right]\\ =\frac{1}{|A|}\left[\begin{array}{ccc}ae-bd& bg-ah& dh-eg\\ cd-af& ai-cg& fg-di\\ bf-ce& ch-bi& ei-fh\end{array}\right]={\left({A}^{-1}\right)}^{¬}\end{array}$
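Properties 1)–8) can be spot-checked numerically. The sketch below is an illustration (not part of the paper); the matrix A is the one used in the example that follows, and B is an arbitrary invertible companion:

```python
import numpy as np

def transprocal(M):
    return M[::-1, ::-1]  # 180-degree rotation

A = np.array([[-3., -5., -9.],
              [-6.,  3.,  9.],
              [-9., -3.,  5.]])
B = np.array([[1., 2., 0.],
              [3., 4., 5.],
              [0., 1., 2.]])
k = 2.5

assert np.allclose(transprocal(transprocal(A)), A)                       # 1)
assert np.allclose(transprocal(np.eye(3)), np.eye(3))                    # 2)
assert np.allclose(transprocal(A + B), transprocal(A) + transprocal(B))  # 3)
assert np.allclose(transprocal(k * A), k * transprocal(A))               # 4)
assert np.allclose(transprocal(A @ B), transprocal(A) @ transprocal(B))  # 5)
assert np.isclose(np.trace(A), np.trace(transprocal(A)))                 # 6)
assert np.isclose(np.linalg.det(A), np.linalg.det(transprocal(A)))       # 7)
assert np.allclose(np.linalg.inv(transprocal(A)),
                   transprocal(np.linalg.inv(A)))                        # 8)
```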

For properties 9)–11), consider an example. Let $A=\left[\begin{array}{ccc}-3& -5& -9\\ -6& 3& 9\\ -9& -3& 5\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}5& -3& -9\\ 9& 3& -6\\ -9& -5& -3\end{array}\right]$

Now

$|A-xI|=|\begin{array}{ccc}-3-x& -5& -9\\ -6& 3-x& 9\\ -9& -3& 5-x\end{array}|$

$\begin{array}{c}=\left(-3-x\right)\left({x}^{2}-8x+15+27\right)+5\left(-30+6x+81\right)-9\left(18+27-9x\right)\\ =-3{x}^{2}+24x-126-{x}^{3}+8{x}^{2}-42x+255+30x-405+81x\\ =-{x}^{3}+5{x}^{2}+93x-276\end{array}$

or ${x}^{3}-5{x}^{2}-93x+276=0$ (1)

The eigenvalues of A, i.e. the roots of ${x}^{3}-5{x}^{2}-93x+276=0$ , are ${x}_{1}\approx 11.13$ , ${x}_{2}\approx 2.78$ , ${x}_{3}\approx -8.91$

Also,

$\begin{array}{c}|{A}^{¬}-xI|=|\begin{array}{ccc}5-x& -3& -9\\ 9& 3-x& -6\\ -9& -5& -3-x\end{array}|\\ =\left(5-x\right)\left({x}^{2}-9-30\right)+3\left(-27-9x-54\right)-9\left(-45+27-9x\right)\\ =5{x}^{2}-195-{x}^{3}+39x-243-27x+162+81x\\ =-{x}^{3}+5{x}^{2}+93x-276\end{array}$

or ${x}^{3}-5{x}^{2}-93x+276=0$ (2)

The eigenvalues of ${A}^{¬}$ , i.e. the roots of ${x}^{3}-5{x}^{2}-93x+276=0$ , are ${x}_{1}\approx 11.13$ , ${x}_{2}\approx 2.78$ , ${x}_{3}\approx -8.91$

In both cases we get:

9) $eig\left(A\right)=eig\left({A}^{¬}\right)$

10) ${X}_{A}\left(t\right)={X}_{{A}^{¬}}\left(t\right)$

11) ${m}_{A}\left(t\right)={m}_{{A}^{¬}}\left(t\right)$
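These three facts can be confirmed numerically for the example above (a NumPy sketch, not part of the paper; `np.poly` returns the coefficients of the characteristic polynomial det(xI − A)):

```python
import numpy as np

A = np.array([[-3., -5., -9.],
              [-6.,  3.,  9.],
              [-9., -3.,  5.]])
An = A[::-1, ::-1]          # the transprocal A^¬

# Both matrices have characteristic polynomial x^3 - 5x^2 - 93x + 276:
assert np.allclose(np.poly(A),  [1., -5., -93., 276.])
assert np.allclose(np.poly(An), [1., -5., -93., 276.])

# Hence the eigenvalues coincide (approximately 11.13, 2.78 and -8.91):
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.linalg.eigvals(An)))
```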

12) Let A be an m × n matrix,

$A=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& {a}_{13}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& \cdots & {a}_{2n}\\ {a}_{31}& {a}_{32}& {a}_{33}& \cdots & {a}_{3n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& {a}_{m3}& \cdots & {a}_{mn}\end{array}\right]$ $⇒{A}^{¬}=\left[\begin{array}{ccccc}{a}_{mn}& \cdots & {a}_{m3}& {a}_{m2}& {a}_{m1}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{3n}& \cdots & {a}_{33}& {a}_{32}& {a}_{31}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\end{array}\right]$ ${\left({A}^{¬}\right)}^{\text{T}}={\left[\begin{array}{ccccc}{a}_{mn}& \cdots & {a}_{m3}& {a}_{m2}& {a}_{m1}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{3n}& \cdots & {a}_{33}& {a}_{32}& {a}_{31}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\end{array}\right]}^{\text{T}}=\left[\begin{array}{ccccc}{a}_{mn}& \cdots & {a}_{3n}& {a}_{2n}& {a}_{1n}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{m3}& \cdots & {a}_{33}& {a}_{23}& {a}_{13}\\ {a}_{m2}& \cdots & {a}_{32}& {a}_{22}& {a}_{12}\\ {a}_{m1}& \cdots & {a}_{31}& {a}_{21}& {a}_{11}\end{array}\right]={\left({A}^{\text{T}}\right)}^{¬}$ .

Definition 2: The transpose of the transprocal matrix is called the TRANSPROCOSE matrix. We write ${\left({A}^{¬}\right)}^{\text{T}}={A}^{⊳}$ ; from now on we call ${A}^{⊳}$ the transprocose of matrix A.

Properties:

1) ${A}^{⊳}={\left({A}^{¬}\right)}^{\text{T}}={\left({A}^{\text{T}}\right)}^{¬}$ .

2) ${\left({A}^{⊳}\right)}^{\text{T}}={\left({\left({A}^{¬}\right)}^{\text{T}}\right)}^{\text{T}}={A}^{¬}$ .

3) ${\left({A}^{⊳}\right)}^{¬}={\left({\left({A}^{\text{T}}\right)}^{¬}\right)}^{¬}={A}^{\text{T}}$ .

4) ${\left({A}^{\text{T}}\right)}^{⊳}={\left[{\left({A}^{\text{T}}\right)}^{\text{T}}\right]}^{¬}={A}^{¬}$ .

5) ${\left({A}^{¬}\right)}^{⊳}={\left[{\left({A}^{¬}\right)}^{¬}\right]}^{\text{T}}={A}^{\text{T}}$ .

6) We know that ${\left({A}^{\text{T}}\right)}^{\text{T}}=A$ and ${\left({A}^{¬}\right)}^{¬}=A$ , so ${\left({A}^{⊳}\right)}^{⊳}={\left[{\left[{\left[{A}^{\text{T}}\right]}^{¬}\right]}^{¬}\right]}^{\text{T}}={\left[{A}^{\text{T}}\right]}^{\text{T}}=A$ . Thus ${\left({A}^{⊳}\right)}^{⊳}=A$ .

7) ${\left(AB\right)}^{⊳}={\left[{\left(AB\right)}^{¬}\right]}^{\text{T}}={\left[{A}^{¬}{B}^{¬}\right]}^{\text{T}}={\left({B}^{¬}\right)}^{\text{T}}{\left({A}^{¬}\right)}^{\text{T}}={B}^{⊳}{A}^{⊳}$ .
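A quick numerical check of these identities (an illustration, not from the paper; the helper names are my own):

```python
import numpy as np

def transprocal(M):
    return M[::-1, ::-1]

def transprocose(M):
    # A^⊳ = (A^¬)^T, a reflection across the anti-diagonal
    return transprocal(M).T

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
B = np.array([[0., 1., 2.],
              [3., 1., 0.],
              [2., 2., 1.]])

assert np.allclose(transprocose(A), transprocal(A.T))                       # 1)
assert np.allclose(transprocose(A).T, transprocal(A))                       # 2)
assert np.allclose(transprocal(transprocose(A)), A.T)                       # 3)
assert np.allclose(transprocose(A.T), transprocal(A))                       # 4)
assert np.allclose(transprocose(transprocal(A)), A.T)                       # 5)
assert np.allclose(transprocose(transprocose(A)), A)                        # 6)
assert np.allclose(transprocose(A @ B), transprocose(B) @ transprocose(A))  # 7)
```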

3. Algebraic Properties of Certain Matrix, Transpose, Transprocal and Transprocose Matrices

Let A be a matrix then ${A}^{\text{T}}$ is a transpose of matrix A, ${A}^{¬}$ is a transprocal of matrix A and ${A}^{⊳}$ is a transprocose of matrix A.

Now we find some matrices by the combination of certain matrix, Transpose, Transprocal and Transprocose matrices.

1) $A{A}^{\text{T}}=B,{A}^{\text{T}}A=C$

2) $A{A}^{¬}=D,{A}^{¬}A=E$

3) $A{A}^{⊳}=F,{A}^{⊳}A=G$

4) ${A}^{\text{T}}{A}^{¬}=R,{A}^{¬}{A}^{\text{T}}=S$

5) ${A}^{\text{T}}{A}^{⊳}=T,{A}^{⊳}{A}^{\text{T}}=U$

6) ${A}^{⊳}{A}^{¬}=V,{A}^{¬}{A}^{⊳}=W$ .

Now we apply the transpose, transprocal and transprocose operations to the above matrices.

1) Let $B=A{A}^{\text{T}}$ .

Taking the transpose on both sides, we get ${B}^{\text{T}}={\left[A{A}^{\text{T}}\right]}^{\text{T}}={\left({A}^{\text{T}}\right)}^{\text{T}}{A}^{\text{T}}=A{A}^{\text{T}}=B$

Taking the transprocal on both sides, we get ${B}^{¬}={\left[A{A}^{\text{T}}\right]}^{¬}={A}^{¬}{\left({A}^{\text{T}}\right)}^{¬}={A}^{¬}{A}^{⊳}=W$

Taking the transprocose on both sides, we get ${B}^{⊳}={\left[A{A}^{\text{T}}\right]}^{⊳}={\left({A}^{\text{T}}\right)}^{⊳}{A}^{⊳}={A}^{¬}{A}^{⊳}=W$

2) Let $D=A{A}^{¬}$ .

Taking the transpose on both sides, we get ${D}^{\text{T}}={\left[A{A}^{¬}\right]}^{\text{T}}={\left({A}^{¬}\right)}^{\text{T}}{A}^{\text{T}}={A}^{⊳}{A}^{\text{T}}=U$

Taking the transprocal on both sides, we get ${D}^{¬}={\left[A{A}^{¬}\right]}^{¬}={A}^{¬}{\left({A}^{¬}\right)}^{¬}={A}^{¬}A=E$

Taking the transprocose on both sides, we get ${D}^{⊳}={\left[A{A}^{¬}\right]}^{⊳}={\left({A}^{¬}\right)}^{⊳}{A}^{⊳}={A}^{\text{T}}{A}^{⊳}=T$

3) Let $F=A{A}^{⊳}$ .

Taking the transpose on both sides, we get ${F}^{\text{T}}={\left[A{A}^{⊳}\right]}^{\text{T}}={\left({A}^{⊳}\right)}^{\text{T}}{A}^{\text{T}}={A}^{¬}{A}^{\text{T}}=S$

Taking the transprocal on both sides, we get ${F}^{¬}={\left[A{A}^{⊳}\right]}^{¬}={A}^{¬}{\left({A}^{⊳}\right)}^{¬}={A}^{¬}{A}^{\text{T}}=S$

Taking the transprocose on both sides, we get ${F}^{⊳}={\left[A{A}^{⊳}\right]}^{⊳}={\left({A}^{⊳}\right)}^{⊳}{A}^{⊳}=A{A}^{⊳}=F$

4) Let $T={A}^{\text{T}}{A}^{⊳}$ .

Taking the transpose on both sides, we get ${T}^{\text{T}}={\left[{A}^{\text{T}}{A}^{⊳}\right]}^{\text{T}}={\left({A}^{⊳}\right)}^{\text{T}}{\left({A}^{\text{T}}\right)}^{\text{T}}={A}^{¬}A=E$

Taking the transprocal on both sides, we get ${T}^{¬}={\left[{A}^{\text{T}}{A}^{⊳}\right]}^{¬}={\left({A}^{\text{T}}\right)}^{¬}{\left({A}^{⊳}\right)}^{¬}={A}^{⊳}{A}^{\text{T}}=U$

Taking the transprocose on both sides, we get ${T}^{⊳}={\left[{A}^{\text{T}}{A}^{⊳}\right]}^{⊳}={\left({A}^{⊳}\right)}^{⊳}{\left({A}^{\text{T}}\right)}^{⊳}=A{A}^{¬}=D$

5) Let $R={A}^{\text{T}}{A}^{¬}$ .

Taking the transpose on both sides, we get ${R}^{\text{T}}={\left[{A}^{\text{T}}{A}^{¬}\right]}^{\text{T}}={\left({A}^{¬}\right)}^{\text{T}}{\left({A}^{\text{T}}\right)}^{\text{T}}={A}^{⊳}A=G$

Taking the transprocal on both sides, we get ${R}^{¬}={\left[{A}^{\text{T}}{A}^{¬}\right]}^{¬}={\left({A}^{\text{T}}\right)}^{¬}{\left({A}^{¬}\right)}^{¬}={A}^{⊳}A=G$

Taking the transprocose on both sides, we get ${R}^{⊳}={\left[{A}^{\text{T}}{A}^{¬}\right]}^{⊳}={\left({A}^{¬}\right)}^{⊳}{\left({A}^{\text{T}}\right)}^{⊳}={A}^{\text{T}}{A}^{¬}=R$

6) Let $V={A}^{⊳}{A}^{¬}$ .

Taking the transpose on both sides, we get ${V}^{\text{T}}={\left[{A}^{⊳}{A}^{¬}\right]}^{\text{T}}={\left({A}^{¬}\right)}^{\text{T}}{\left({A}^{⊳}\right)}^{\text{T}}={A}^{⊳}{A}^{¬}=V$

Taking the transprocal on both sides, we get ${V}^{¬}={\left[{A}^{⊳}{A}^{¬}\right]}^{¬}={\left({A}^{⊳}\right)}^{¬}{\left({A}^{¬}\right)}^{¬}={A}^{\text{T}}A=C$

Taking the transprocose on both sides, we get ${V}^{⊳}={\left[{A}^{⊳}{A}^{¬}\right]}^{⊳}={\left({A}^{¬}\right)}^{⊳}{\left({A}^{⊳}\right)}^{⊳}={A}^{\text{T}}A=C$

From the above, we conclude that these product matrices are closed under the transpose, transprocal and transprocose operations: each operation carries one product in the family to another.
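The closure claims above can be verified numerically in one pass (a sketch, not from the paper; `tpl` and `tpc` are my abbreviations for transprocal and transprocose):

```python
import numpy as np

def tpl(M):          # transprocal: 180-degree rotation
    return M[::-1, ::-1]

def tpc(M):          # transprocose: (M^¬)^T
    return tpl(M).T

A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 4.]])

B, C = A @ A.T,      A.T @ A
D, E = A @ tpl(A),   tpl(A) @ A
F, G = A @ tpc(A),   tpc(A) @ A
R, S = A.T @ tpl(A), tpl(A) @ A.T
T, U = A.T @ tpc(A), tpc(A) @ A.T
V, W = tpc(A) @ tpl(A), tpl(A) @ tpc(A)

assert np.allclose(B.T, B) and np.allclose(tpl(B), W) and np.allclose(tpc(B), W)  # 1)
assert np.allclose(D.T, U) and np.allclose(tpl(D), E) and np.allclose(tpc(D), T)  # 2)
assert np.allclose(F.T, S) and np.allclose(tpl(F), S) and np.allclose(tpc(F), F)  # 3)
assert np.allclose(T.T, E) and np.allclose(tpl(T), U) and np.allclose(tpc(T), D)  # 4)
assert np.allclose(R.T, G) and np.allclose(tpl(R), G) and np.allclose(tpc(R), R)  # 5)
assert np.allclose(V.T, V) and np.allclose(tpl(V), C) and np.allclose(tpc(V), C)  # 6)
```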

4. Dominance Property

1) If $A{A}^{¬}=C$ then ${A}^{¬}A={C}^{¬}$

Let $A=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]$

Now

$A{A}^{¬}=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]=\left[\begin{array}{ccc}ai+bf+cc& ah+be+bc& ag+bd+ac\\ di+ef+cf& dh+ee+bf& dg+de+af\\ gi+hf+ic& gh+eh+bi& gg+dh+ai\end{array}\right]=C$

${A}^{¬}A=\left[\begin{array}{ccc}i& h& g\\ f& e& d\\ c& b& a\end{array}\right]\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& i\end{array}\right]=\left[\begin{array}{ccc}gg+dh+ai& gh+eh+bi& gi+hf+ic\\ dg+de+af& dh+ee+bf& di+ef+cf\\ ag+bd+ac& ah+be+bc& ai+bf+cc\end{array}\right]={C}^{¬}$

From the above matrices, we conclude that the multiplicand matrix dominates the product matrix.

Cayley—Hamilton theorem: Every square matrix satisfies its characteristic equation.

Note: This always holds for a matrix A together with its transprocal matrix ${A}^{¬}$ .

Example: Let $A=\left[\begin{array}{ccc}-3& -5& -9\\ -6& 3& 9\\ -9& -3& 5\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}5& -3& -9\\ 9& 3& -6\\ -9& -5& -3\end{array}\right]$

Now the characteristic equation of A is ${x}^{3}-5{x}^{2}-93x+276=0$ .

We can write this equation as ${A}^{3}-5{A}^{2}-93A+276I=0$ ;

Also, we can write this equation as ${A}^{¬3}-5{A}^{¬2}-93{A}^{¬}+276{I}^{¬}=0$ .

${A}^{2}=\left[\begin{array}{ccc}120& 27& -63\\ -81& 12& 126\\ 0& 21& 79\end{array}\right]$ , ${A}^{3}=\left[\begin{array}{ccc}45& -330& -1152\\ -963& 63& 1467\\ -837& -174& 584\end{array}\right]$ and

${\left({A}^{¬}\right)}^{2}=\left[\begin{array}{ccc}79& 21& 0\\ 126& 12& -81\\ -63& 27& 120\end{array}\right]$ , ${\left({A}^{¬}\right)}^{3}=\left[\begin{array}{ccc}584& -174& -837\\ 1467& 63& -963\\ -1152& -330& 45\end{array}\right]$ .

Use the above matrices to check these characteristic equations:

$\begin{array}{l}{A}^{3}-5{A}^{2}-93A+276I\\ =\left[\begin{array}{ccc}45-600+279+276& -330-135+465& -1152+315+837\\ -963+405+558& 63-60-279+276& 1467-630-837\\ -837+0+837& -174-105+279& 584-395-465+276\end{array}\right]\\ =\left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\end{array}$

$\begin{array}{l}{A}^{¬3}-5{A}^{¬2}-93{A}^{¬}+276{I}^{¬}\\ =\left[\begin{array}{ccc}584-395-465+276& -174-105+279& -837+0+837\\ 1467-630-837& 63-60-279+276& -963+405+558\\ -1152+315+837& -330-135+465& 45-600+279+276\end{array}\right]\\ =\left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\end{array}$

Thus ${A}^{3}-5{A}^{2}-93A+276I={A}^{¬3}-5{A}^{¬2}-93{A}^{¬}+276{I}^{¬}$ .
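A numerical confirmation of this check (an illustration, not part of the paper):

```python
import numpy as np

A = np.array([[-3., -5., -9.],
              [-6.,  3.,  9.],
              [-9., -3.,  5.]])
An = A[::-1, ::-1]   # the transprocal A^¬

def p(M):
    """Characteristic polynomial found above: p(x) = x^3 - 5x^2 - 93x + 276."""
    return M @ M @ M - 5. * (M @ M) - 93. * M + 276. * np.eye(3)

# Both A and A^¬ satisfy the same characteristic equation:
assert np.allclose(p(A),  np.zeros((3, 3)))
assert np.allclose(p(An), np.zeros((3, 3)))
```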

5. J-Matrix

Definition 3:

I-matrix and J-matrix: An m × n matrix is usually written as

$A=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& {a}_{13}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& \cdots & {a}_{2n}\\ {a}_{31}& {a}_{32}& {a}_{33}& \cdots & {a}_{3n}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& {a}_{m3}& \cdots & {a}_{mn}\end{array}\right]$

we shall call this matrix an I-matrix (its first element starts from the north-west corner).

If an m × n matrix is a J-matrix (its first element starts from the north-east corner), it is written as

$A=\left[\begin{array}{ccccc}{a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{3n}& \cdots & {a}_{33}& {a}_{32}& {a}_{31}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{mn}& \cdots & {a}_{m3}& {a}_{m2}& {a}_{m1}\end{array}\right]$ .
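In array terms (an illustration, not from the paper), a J-matrix is the I-layout with its columns reversed, so `np.fliplr` converts between the two conventions, and the transprocal relations stated in the introduction can be read off directly:

```python
import numpy as np

A_I = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])     # I-matrix: a11 in the north-west corner

A_J = np.fliplr(A_I)            # J-matrix: a11 in the north-east corner
print(A_J)

# Water image of the J-matrix = transprocal of the I-matrix:
assert (np.flipud(A_J) == A_I[::-1, ::-1]).all()
# Water image of the I-matrix = transprocal of the J-matrix:
assert (np.flipud(A_I) == A_J[::-1, ::-1]).all()
```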

We define some types of J-matrices.

1) Diagonal matrix: A square matrix is called a diagonal matrix if all its non-diagonal elements are zero.

$A=\left[\begin{array}{ccc}0& 0& {a}_{11}\\ 0& {a}_{22}& 0\\ {a}_{33}& 0& 0\end{array}\right],{a}_{ii}\in N$

2) Scalar matrix: A diagonal matrix is called a scalar matrix if all the non-diagonal elements are zero and all the diagonal elements are equal to a scalar, say k.

$A=\left[\begin{array}{ccc}0& 0& k\\ 0& k& 0\\ k& 0& 0\end{array}\right]$

3) Unit or identity matrix: A square matrix is said to be a unit or identity matrix if all the diagonal elements are equal to unity and the non-diagonal elements are zero.

$J=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right],\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]$ etc. are unit matrices.

Upper triangular matrix: A square matrix is said to be an upper triangular matrix if all elements below the leading diagonal are zero.

$A=\left[\begin{array}{ccccc}e& d& c& b& a\\ i& h& g& f& 0\\ l& k& j& 0& 0\\ n& m& 0& 0& 0\\ o& 0& 0& 0& 0\end{array}\right]$ is an upper triangular matrix.

4) Lower triangular matrix: A square matrix is said to be a lower triangular matrix if all elements above the leading diagonal are zero.

$A=\left[\begin{array}{ccccc}0& 0& 0& 0& a\\ 0& 0& 0& c& b\\ 0& 0& f& e& d\\ 0& j& i& h& g\\ o& n& m& l& k\end{array}\right]$ is a lower triangular matrix.

5) Transpose matrix: If we interchange the rows and corresponding columns of a given matrix A, the matrix obtained is called the transpose of the matrix A and is denoted by AT or A'.

Let ${A}_{J}=\left[\begin{array}{ccc}c& b& a\\ f& e& d\\ i& h& g\end{array}\right]$ then ${A}^{\text{T}}=\left[\begin{array}{ccc}g& d& a\\ h& e& b\\ i& f& c\end{array}\right]$ .

6) Symmetric matrix: A square matrix A is said to be a symmetric matrix if ${A}^{\text{T}}=A$ , that is, if ${a}_{ij}={a}_{ji}$ for all values of i and j.

Ex: ${A}_{J}=\left[\begin{array}{ccc}c& b& a\\ d& e& b\\ i& d& c\end{array}\right]$ is a symmetric matrix.

7) Anti-symmetric matrix: A square matrix A is said to be an anti-symmetric matrix if ${A}^{\text{T}}=-A$ , that is, if ${a}_{ij}=-{a}_{ji}$ for all values of i and j.

Ex: ${A}_{J}=\left[\begin{array}{ccc}-c& -b& 0\\ -d& 0& b\\ 0& d& c\end{array}\right]$ is an anti-symmetric matrix.

8) Orthogonal matrix: A square matrix A is said to be an orthogonal matrix if $A{A}^{\text{T}}=J={A}^{\text{T}}A$ , where J is a unit matrix.

9) Hermitian matrix: A square matrix A is called a Hermitian matrix if ${\left(\stackrel{¯}{A}\right)}^{\text{T}}=A$ , that is, if every ij-th element of A is equal to the complex conjugate of the ji-th element of A, i.e. ${a}_{ij}=\stackrel{¯}{{a}_{ji}}$ .

Note: Every diagonal element of Hermitian matrix is real.

Ex: ${A}_{J}=\left[\begin{array}{ccc}1-2i& 2-3i& 5\\ -4-i& 5& 2+3i\\ 8& -4+i& 1+2i\end{array}\right],\left[\begin{array}{cc}1+i& 3\\ 5& 1-i\end{array}\right]$ are Hermitian matrices.

10) Skew-Hermitian matrix: A square matrix A is called a skew-Hermitian matrix if ${\left(\stackrel{¯}{A}\right)}^{\text{T}}=-A$ , that is, if every ij-th element of A is equal to the negative of the complex conjugate of the ji-th element of A, i.e. ${a}_{ij}=-\stackrel{¯}{{a}_{ji}}$ .

Note: Diagonal element of a Skew-Hermitian matrix is either purely imaginary or zero.

Ex: ${A}_{J}=\left[\begin{array}{ccc}1-2i& 2-3i& 5i\\ -4-i& 0& -2-3i\\ i& 4-i& -1-2i\end{array}\right],\left[\begin{array}{cc}1+i& 3i\\ 2i& -1+i\end{array}\right]$ are skew-Hermitian matrices.

11) Unitary matrix: A square matrix A is said to be a unitary matrix if $A{A}^{*}=J={A}^{*}A$ , where ${A}^{*}={\left(\stackrel{¯}{A}\right)}^{\text{T}}$ .

12) Involutary matrix: A square matrix A is said to be an involutary matrix if ${A}^{2}=J$ . The unit matrix is always an involutary matrix, since

${J}^{2}=J$

Note: Other definitions, such as nilpotent, idempotent, conjugate, etc., are the same for J-matrices as for I-matrices.

6. Algebra of J Matrix

6.1. Addition and Subtraction of J-Matrices

If two matrices A and B are of the same order, then the addition and subtraction A ± B is defined as the matrix obtained by the addition or subtraction of the corresponding elements of A and B.

More clearly, we can say that

Let $A=\left[\begin{array}{ccc}{a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{33}& {a}_{32}& {a}_{31}\end{array}\right]$ then $B=\left[\begin{array}{ccc}{b}_{13}& {b}_{12}& {b}_{11}\\ {b}_{23}& {b}_{22}& {b}_{21}\\ {b}_{33}& {b}_{32}& {b}_{31}\end{array}\right]$

$A±B=\left[\begin{array}{ccc}{a}_{13}±{b}_{13}& {a}_{12}±{b}_{12}& {a}_{11}±{b}_{11}\\ {a}_{23}±{b}_{23}& {a}_{22}±{b}_{22}& {a}_{21}±{b}_{21}\\ {a}_{33}±{b}_{33}& {a}_{32}±{b}_{32}& {a}_{31}±{b}_{31}\end{array}\right]$ .

6.2. Scalar Multiplication of J Matrix

Let A be any matrix and k any scalar; then the matrix obtained by multiplying every element of A by k is called the scalar multiple of A by k and is denoted by kA.

Ex: Let k = 3 and $A=\left[\begin{array}{ccc}{a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{33}& {a}_{32}& {a}_{31}\end{array}\right]$ then $3A=\left[\begin{array}{ccc}3{a}_{13}& 3{a}_{12}& 3{a}_{11}\\ 3{a}_{23}& 3{a}_{22}& 3{a}_{21}\\ 3{a}_{33}& 3{a}_{32}& 3{a}_{31}\end{array}\right]$ .

6.3. Multiplication of J Matrices

Definition 4: Suppose $A=\left[{a}_{ij}\right],B=\left[{b}_{ij}\right]$ are two matrices such that the number of rows of A is equal to the number of columns of B; say, A is a p × m matrix and B is an n × p matrix. Then the product AB is an n × m matrix whose ij-th entry is obtained by multiplying the ith row of B by the jth column of A. That is,

$\begin{array}{c}AB=\left[\begin{array}{ccccc}{a}_{1m}& \cdots & {a}_{1j}& \cdots & {a}_{11}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {a}_{im}& \cdots & {a}_{ij}& \cdots & {a}_{i1}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {a}_{pm}& \cdots & {a}_{pj}& \cdots & {a}_{p1}\end{array}\right]\left[\begin{array}{ccccc}{b}_{1p}& \cdots & {b}_{1j}& \cdots & {b}_{11}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {b}_{ip}& \cdots & {b}_{ij}& \cdots & {b}_{i1}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {b}_{np}& \cdots & {b}_{nj}& \cdots & {b}_{n1}\end{array}\right]\\ =\left[\begin{array}{ccccc}{c}_{1m}& \cdots & {c}_{1j}& \cdots & {c}_{11}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {c}_{im}& \cdots & {c}_{ij}& \cdots & {c}_{i1}\\ ⋮& \ddots & ⋮& \ddots & ⋮\\ {c}_{nm}& \cdots & {c}_{nj}& \cdots & {c}_{n1}\end{array}\right]\end{array}$

where ${c}_{ij}={b}_{i1}{a}_{1j}+{b}_{i2}{a}_{2j}+\cdots +{b}_{ip}{a}_{pj}=\underset{k=1}{\overset{p}{\sum }}{b}_{ik}{a}_{kj}$ .

The product AB is not defined if A is a p × m matrix and B is an n × q matrix, where $p\ne q$ .

Ex:

1) Find AB where $A=\left(\begin{array}{ccc}2& 0& -4\\ 3& 4& 2\end{array}\right)$ and $B=\left(\begin{array}{cc}1& 2\\ 3& 4\end{array}\right)$

Because A is a 2 × 3 matrix and B is 2 × 2, the product AB is defined, and AB is a 2 × 3 matrix. To obtain the first row of the matrix AB, multiply the first row (1, 2) of B by each column of A,

$\left(\begin{array}{c}2\\ 3\end{array}\right),\left(\begin{array}{c}0\\ 4\end{array}\right),\left(\begin{array}{c}-4\\ 2\end{array}\right)$ respectively. That is,

$AB=\left(\begin{array}{ccc}4+3& 0+4& -8+2\end{array}\right)=\left(\begin{array}{ccc}7& 4& -6\end{array}\right)$

To obtain the second row of AB, multiply the second row (3, 4) of B by each column of A. Thus,

$AB=\left(\begin{array}{ccc}4+3& 0+4& -8+2\\ 8+9& 0+12& -16+6\end{array}\right)=\left(\begin{array}{ccc}7& 4& -6\\ 17& 12& -10\end{array}\right)$ .

2) Suppose $A=\left(\begin{array}{cc}2& 1\\ 3& 4\end{array}\right)$ and $B=\left(\begin{array}{cc}1& 2\\ 3& 4\end{array}\right)$ then

$AB=\left(\begin{array}{cc}4+3& 2+4\\ 8+9& 4+12\end{array}\right)=\left(\begin{array}{cc}7& 6\\ 17& 16\end{array}\right)$ and $BA=\left(\begin{array}{cc}1+6& 2+8\\ 4+9& 8+12\end{array}\right)=\left(\begin{array}{cc}7& 10\\ 13& 20\end{array}\right)$ .

The above examples show that matrix multiplication is not commutative. That is, in general, $AB\ne BA$ . However, matrix multiplication does satisfy the following properties.
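The worked examples can be reproduced with ordinary NumPy arithmetic under one reading of the definition: since rows of the displayed arrays are read from right to left in the J convention, the J-product of displayed arrays A and B works out to `np.fliplr(B) @ A`. The helper `j_mult` below is my own device, not notation from the paper:

```python
import numpy as np

def j_mult(A, B):
    """J-product of displayed arrays: each row of B, read right to left
    (the J convention), multiplies the columns of A."""
    return np.fliplr(B) @ A

A = np.array([[2, 0, -4],
              [3, 4,  2]])      # 2 x 3, as displayed
B = np.array([[1, 2],
              [3, 4]])          # 2 x 2, as displayed

print(j_mult(A, B))             # rows (7, 4, -6) and (17, 12, -10), as in example 1)

A2 = np.array([[2, 1],
               [3, 4]])
print(j_mult(A2, B))            # rows (7, 6) and (17, 16): AB of example 2)
print(j_mult(B, A2))            # rows (7, 10) and (13, 20): BA of example 2)
```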

Proposition 1: If all the products and sums make sense, the following hold for matrices A, B, C and scalars x, y.

1) $A\left(xB+yC\right)=x\left(AB\right)+y\left(AC\right)$

2) $\left(B+C\right)A=BA+CA$

3) $A\left(BC\right)=\left(AB\right)C$

Proof: Using the definition of multiplication,

1) $\begin{array}{c}{\left(A\left(xB+yC\right)\right)}_{ij}=\underset{k}{\sum }{\left(xB+yC\right)}_{ik}{A}_{kj}\\ =\underset{k}{\sum }\left(x{B}_{ik}+y{C}_{ik}\right){A}_{kj}\\ =x\underset{k}{\sum }{B}_{ik}{A}_{kj}+y\underset{k}{\sum }{C}_{ik}{A}_{kj}\\ =x{\left(AB\right)}_{ij}+y{\left(AC\right)}_{ij}\\ ={\left(x\left(AB\right)+y\left(AC\right)\right)}_{ij}\end{array}$

2) $\left(B+C\right)A=BA+CA$ is proved similarly.

3) $\begin{array}{c}{\left(A\left(BC\right)\right)}_{ij}=\underset{k}{\sum }{\left(BC\right)}_{ik}{A}_{kj}\\ =\underset{k}{\sum }\underset{l}{\sum }{C}_{il}{B}_{lk}{A}_{kj}\\ =\underset{l}{\sum }{C}_{il}\underset{k}{\sum }{B}_{lk}{A}_{kj}\\ =\underset{l}{\sum }{C}_{il}{\left(AB\right)}_{lj}\\ ={\left(\left(AB\right)C\right)}_{ij}\end{array}$ , hence $A\left(BC\right)=\left(AB\right)C$ .

6.4. Determinant of J Matrix

Every square matrix can be associated with an expression or a number which is known as determinant. The determinant of a square matrix $A=\left({a}_{ij}\right)$ of order n is denoted by $|A|$ and is given by

$|\begin{array}{cccc}{a}_{1n}& \cdots & {a}_{12}& {a}_{11}\\ {a}_{2n}& \cdots & {a}_{22}& {a}_{21}\\ ⋮& \ddots & ⋮& ⋮\\ {a}_{nn}& \cdots & {a}_{n2}& {a}_{n1}\end{array}|=|A|$

Note: A matrix which is not a square matrix does not possess a determinant.

Determinants of orders 1 and 2 are defined as follows:

$|{a}_{11}|$ and $|\begin{array}{cc}{a}_{12}& {a}_{11}\\ {a}_{22}& {a}_{21}\end{array}|={a}_{11}{a}_{22}-{a}_{12}{a}_{21}$

Thus, the determinant of a 1 × 1 matrix $A=\left[{a}_{11}\right]$ is the scalar a11; that is, $\mathrm{det}A=|{a}_{11}|={a}_{11}$ . The determinant of order two may easily be remembered by using the following diagram:

$|\begin{array}{cc}{a}_{12}& {a}_{11}\\ {a}_{22}& {a}_{21}\end{array}|$ $={a}_{11}{a}_{22}-{a}_{12}{a}_{21}$ .

That is, the determinant is equal to the product ${a}_{11}{a}_{22}$ of the elements along one diagonal minus the product ${a}_{12}{a}_{21}$ of the elements along the other diagonal.

The determinant of a 3 × 3 matrix:

Let $A=\left[\begin{array}{ccc}{a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{33}& {a}_{32}& {a}_{31}\end{array}\right]$ be a square matrix of order 3; then $|A|$ is given by

$\begin{array}{c}|A|=|\begin{array}{ccc}{a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{23}& {a}_{22}& {a}_{21}\\ {a}_{33}& {a}_{32}& {a}_{31}\end{array}|\\ ={a}_{11}{a}_{22}{a}_{33}+{a}_{12}{a}_{23}{a}_{31}+{a}_{13}{a}_{21}{a}_{32}-{a}_{11}{a}_{23}{a}_{32}-{a}_{12}{a}_{21}{a}_{33}-{a}_{13}{a}_{22}{a}_{31}\end{array}$
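Reading the mirrored array in ordinary left-to-right column order, the six-term expansion agrees with a library determinant; a small sketch (the function name `det3` is my own):

```python
import numpy as np

def det3(a):
    """Six-term expansion of a 3x3 determinant."""
    return (a[0, 0] * a[1, 1] * a[2, 2] + a[0, 1] * a[1, 2] * a[2, 0]
            + a[0, 2] * a[1, 0] * a[2, 1] - a[0, 0] * a[1, 2] * a[2, 1]
            - a[0, 1] * a[1, 0] * a[2, 2] - a[0, 2] * a[1, 1] * a[2, 0])

A = np.array([[1.0, 0, 1], [0, 1, 1], [2, 0, 1]])
value = det3(A)
```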

Diagonal and Trace.

Let $A=\left({a}_{ij}\right)$ be an n-square matrix. The diagonal or main diagonal of A consists of the elements with the same subscripts, that is, ${a}_{11},{a}_{22},{a}_{33},\cdots ,{a}_{nn}$ .

The trace of A, written tr(A) is the sum of the diagonal elements. Namely,

$tr\left(A\right)={a}_{11}+{a}_{22}+{a}_{33}+\cdots +{a}_{nn}=\underset{k=1}{\overset{n}{\sum }}{a}_{kk}$ .

6.5. Some Working Models of J Matrices

1) Orthogonal matrix: A square matrix A is said to be an orthogonal matrix if

$A{A}^{\text{T}}=J={A}^{\text{T}}A$ .

where J is a unit matrix.

Example:

Let $A=\frac{1}{3}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]$ be a 3 × 3 matrix, then ${A}^{\text{T}}=\frac{1}{3}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]$

$A{A}^{\text{T}}=\frac{1}{9}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]$

$\begin{array}{c}=\frac{1}{9}\left[\begin{array}{ccc}-4+2+2& -2-2+4& 1+4+4\\ 2-4+2& 1+4+4& -2-2+4\\ 4+4+1& 2-4+2& -4+2+2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}0& 0& 9\\ 0& 9& 0\\ 9& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J\end{array}$

$\begin{array}{c}{A}^{\text{T}}A=\frac{1}{9}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}-4+2+2& 2+2-4& 1+4+4\\ -2+4-2& 1+4+4& 2+2-4\\ 4+4+1& -2+4-2& 4-2-2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}0& 0& 9\\ 0& 9& 0\\ 9& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J\end{array}$

Thus $A{A}^{\text{T}}=J={A}^{\text{T}}A$ .
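Tracing the arithmetic above, the J-product appears to multiply reversed rows of the first factor against columns of the second, which in ordinary notation is $A{\cdot}_{J}B=APB$ with $P$ the exchange (anti-diagonal unit) matrix, and the transpose printed in the example is the ordinary transpose rotated by 180˚. A sketch of the verification under that reading (the helper names and the interpretation itself are my assumptions):

```python
import numpy as np

P = np.fliplr(np.eye(3))  # exchange matrix: the J-unit

def jmul(X, Y):
    """Assumed J-product: reversed rows of X against columns of Y, i.e. X P Y."""
    return X @ P @ Y

def jtranspose(X):
    """Transpose as printed in the example: ordinary transpose rotated 180 degrees."""
    return P @ X.T @ P

A = np.array([[2.0, 1, -2], [-2, 2, -1], [1, 2, 2]]) / 3
AT = jtranspose(A)
left = jmul(A, AT)
right = jmul(AT, A)
```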

2) Unitary matrix: A square matrix A is said to be a unitary matrix if $A{A}^{*}=J={A}^{*}A$ .

Example:

Let $A=\frac{1}{2}\left[\begin{array}{cc}1-j& 1+j\\ 1+j& 1-j\end{array}\right]$ be a 2 × 2 matrix then $\stackrel{¯}{A}=\frac{1}{2}\left[\begin{array}{cc}1+j& 1-j\\ 1-j& 1+j\end{array}\right]$

Moreover, ${A}^{*}={\left(\stackrel{¯}{A}\right)}^{\text{T}}=\frac{1}{2}\left[\begin{array}{cc}1+j& 1-j\\ 1-j& 1+j\end{array}\right]$

Now

${A}^{*}A=\frac{1}{2}\left[\begin{array}{cc}1+j& 1-j\\ 1-j& 1+j\end{array}\right]\frac{1}{2}\left[\begin{array}{cc}1-j& 1+j\\ 1+j& 1-j\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}-2j+2j& 4\\ 4& 2j-2j\end{array}\right]=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]=J$

$A{A}^{*}=\frac{1}{2}\left[\begin{array}{cc}1-j& 1+j\\ 1+j& 1-j\end{array}\right]\frac{1}{2}\left[\begin{array}{cc}1+j& 1-j\\ 1-j& 1+j\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}2j-2j& 4\\ 4& -2j+2j\end{array}\right]=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]=J$

3) Involutory matrix: A square matrix A is said to be an involutory matrix if ${A}^{2}=J$ . The unit matrix is always involutory, since ${J}^{2}=J$ .

Example:

Let $A=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]$ be a 3 × 3 matrix, then

$\begin{array}{c}{A}^{2}=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\\ =\left[\begin{array}{ccc}12-3-9& 12+0-12& 16-3-12\\ -3+0+3& -3+0+4& -4+0+4\\ -12+4+9& -12+0+12& -16+4+12\end{array}\right]\end{array}$

$=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J$
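Under the same reading of the J-product ($A{\cdot}_{J}A=APA$ with $P$ the exchange matrix), the involutory example can be checked directly; a sketch assuming that interpretation:

```python
import numpy as np

P = np.fliplr(np.eye(3))  # the J-unit (exchange) matrix

A = np.array([[3.0, 3, 4], [-1, 0, -1], [-3, -4, -4]])
j_square = A @ P @ A      # A^2 under the assumed J-product
```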

6.6. Solving a System of Equations

Inverse method:

$w+3x+2y+4z=21$

$w-3x-2y+4z=-5$

$2w-3x+2y-4z=-1$

$-2w+3x+2y-4z=1$

Solution: The given system of equations can be written as $XA=B$ , where

$\left[\begin{array}{cccc}4& 2& 3& 1\\ 4& -2& -3& 1\\ -4& 2& -3& 2\\ -4& 2& 3& -2\end{array}\right]=A$ , $\left[\begin{array}{c}w\\ x\\ y\\ z\end{array}\right]=X$ , $\left[\begin{array}{c}21\\ -5\\ -1\\ 1\end{array}\right]=B$

$XA=\left[\begin{array}{c}w\\ x\\ y\\ z\end{array}\right]\left[\begin{array}{cccc}4& 2& 3& 1\\ 4& -2& -3& 1\\ -4& 2& -3& 2\\ -4& 2& 3& -2\end{array}\right]=\left[\begin{array}{c}21\\ -5\\ -1\\ 1\end{array}\right]=B$

Therefore $B{A}^{-1}=X$ . Now

${A}^{-1}=\frac{-1}{96}\left[\begin{array}{cccc}96& 0& 96& 0\\ 48& 16& 64& 0\\ -72& -24& -72& -24\\ -24& 0& -36& -12\end{array}\right]$

$B{A}^{-1}=\left[\begin{array}{c}21\\ -5\\ -1\\ 1\end{array}\right]\frac{-1}{96}\left[\begin{array}{cccc}96& 0& 96& 0\\ 48& 16& 64& 0\\ -72& -24& -72& -24\\ -24& 0& -36& -12\end{array}\right]=\left[\begin{array}{c}4\\ 3\\ 2\\ 1\end{array}\right]$

Thus $w=4,x=3,y=2,z=1$ .
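With the unknowns and coefficients kept in ordinary left-to-right order, the same system can be solved with a standard linear solver; a quick cross-check:

```python
import numpy as np

# Coefficients of w, x, y, z, one equation per row.
A = np.array([[ 1,  3,  2,  4],
              [ 1, -3, -2,  4],
              [ 2, -3,  2, -4],
              [-2,  3,  2, -4]], dtype=float)
b = np.array([21.0, -5, -1, 1])
w, x, y, z = np.linalg.solve(A, b)
```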

Cramer’s rule:

Ex: Solve the following system of equations:

$2x+y+3z=7$

$2y-z=3$

$x+2z=3$

Here the determinant ∆ of the coefficient matrix is given by

$\Delta =|\begin{array}{ccc}3& 1& 2\\ -1& 2& 0\\ 2& 0& 1\end{array}|=1\ne 0$

Hence the system has a unique solution. By Cramer’s rule, we thus have

$x=\frac{{\Delta }_{x}}{\Delta },\text{\hspace{0.17em}}\text{\hspace{0.17em}}y=\frac{{\Delta }_{y}}{\Delta },\text{\hspace{0.17em}}\text{\hspace{0.17em}}z=\frac{{\Delta }_{z}}{\Delta }$

where

${\Delta }_{x}=|\begin{array}{ccc}3& 1& 7\\ -1& 2& 3\\ 2& 0& 3\end{array}|=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Delta }_{y}=|\begin{array}{ccc}3& 7& 2\\ -1& 3& 0\\ 2& 3& 1\end{array}|=2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Delta }_{z}=|\begin{array}{ccc}7& 1& 2\\ 3& 2& 0\\ 3& 0& 1\end{array}|=1$

Hence the solution is $x=1,y=2,z=1$ .
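Cramer's rule is mechanical enough to implement in a few lines: replace column i of the coefficient matrix by the right-hand side and divide the two determinants. A sketch in standard column order (the function name `cramer` is my own):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1, 3], [0, 2, -1], [1, 0, 2]])
b = np.array([7.0, 3, 3])
sol = cramer(A, b)
```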

Normal form

A matrix of order m × n is said to be in (fully reduced) normal form if it is of the form

$\left[\begin{array}{cc}0& {I}_{r}\\ 0& 0\end{array}\right]$

Characteristic equation, Eigen values and Eigen vectors

The characteristic equation of a square matrix A is defined as the equation $|A-xJ|=0$ . The expression $|A-xJ|$ is often referred to as the characteristic polynomial. This will be usually denoted by ${\chi }_{A}\left(x\right)$ .

If $A=\left[\begin{array}{cccc}{a}_{1n}& \cdots & {a}_{12}& {a}_{11}\\ {a}_{2n}& \cdots & {a}_{22}& {a}_{21}\\ ⋮& \ddots & ⋮& ⋮\\ {a}_{nn}& \cdots & {a}_{n2}& {a}_{n1}\end{array}\right]$ then

${\chi }_{A}\left(x\right)={\left(-1\right)}^{n}{x}^{n}+{\left(-1\right)}^{n-1}tr\left(A\right){x}^{n-1}+{\left(-1\right)}^{n-2}$ (sum of principal minors of order 2) ${x}^{n-2}+{\left(-1\right)}^{n-3}$ (sum of principal minors of order 3) ${x}^{n-3}+\cdots +\mathrm{det}A$ .
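In the conventional monic normalization, the coefficients of $\det\left(xI-A\right)$ are ${\left(-1\right)}^{n}$ times those above; in particular the coefficient of ${x}^{n-1}$ is $-tr\left(A\right)$ and the constant term is ${\left(-1\right)}^{n}\mathrm{det}A$. A quick numerical check of those two coefficients:

```python
import numpy as np

A = np.array([[1.0, 0, 1], [0, 1, 1], [2, 0, 1]])
coeffs = np.poly(A)  # monic characteristic polynomial det(xI - A)
```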

Cayley-Hamilton theorem

Every square matrix satisfies its characteristic equation.

Proof: Let A be a square matrix and let ${a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+\cdots +{a}_{n}{x}^{n}=0$ be its characteristic equation. Then it is known that

$\left(A-xJ\right)adj\left(A-xJ\right)=|A-xJ|J={a}_{0}J+{a}_{1}xJ+{a}_{2}{x}^{2}J+\cdots +{a}_{n}{x}^{n}J$

Since the matrix $A-xJ$ is a left factor of the left-hand side, by the remainder theorem for matrix polynomials the right-hand side must vanish when A is substituted for xJ.

Therefore, ${a}_{0}J+{a}_{1}A+{a}_{2}{A}^{2}+\cdots +{a}_{n}{A}^{n}=0$ .

Ex:

Let $A=\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}\right]$ be a 3 × 3 matrix then $|A|\ne 0$ .

The characteristic equation of A is ${x}^{3}-4{x}^{2}+4x-1=0$ .

By Cayley-Hamilton theorem, we get ${A}^{3}-4{A}^{2}+4A-J=0$ .

${A}^{2}=\left[\begin{array}{ccc}3& 0& 2\\ 1& 1& 2\\ 5& 0& 3\end{array}\right]$ , ${A}^{3}=\left[\begin{array}{ccc}8& 0& 5\\ 4& 1& 4\\ 13& 0& 8\end{array}\right]$

$\begin{array}{l}{A}^{3}-4{A}^{2}+4A-J\\ =\left[\begin{array}{ccc}8& 0& 5\\ 4& 1& 4\\ 13& 0& 8\end{array}\right]-4\left[\begin{array}{ccc}3& 0& 2\\ 1& 1& 2\\ 5& 0& 3\end{array}\right]+4\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}\right]-\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\end{array}$

Remark 1: The characteristic equation is not the only equation that a matrix satisfies; other equations may be satisfied as well.

Remark 2: The above theorem can be used to compute the inverse of a non-singular matrix.

To find A−1:

Let $A=\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}\right]$ be a 3 × 3 matrix then $|A|\ne 0$ and ${A}^{2}=\left[\begin{array}{ccc}3& 0& 2\\ 1& 1& 2\\ 5& 0& 3\end{array}\right]$

The characteristic equation of A is ${x}^{3}-4{x}^{2}+4x-1=0$ .

By Cayley-Hamilton theorem, we get ${A}^{3}-4{A}^{2}+4A-J=0$ .

Or $\left({A}^{2}-4A+4J\right)A=J$ . Multiplying both sides by ${A}^{-1}$ on the right, we get

${A}^{2}-4A+4J={A}^{-1}$ . Thus

$\begin{array}{c}{A}^{-1}={A}^{2}-4A+4J\\ =\left[\begin{array}{ccc}3& 0& 2\\ 1& 1& 2\\ 5& 0& 3\end{array}\right]-4\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}\right]+4\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=\left[\begin{array}{ccc}-1& 0& 2\\ 1& 1& -2\\ 1& 0& -1\end{array}\right]\end{array}$
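In the conventional (I-matrix) setting the same trick reads: if ${\chi }_{A}\left(x\right)={x}^{n}+{c}_{1}{x}^{n-1}+\cdots +{c}_{n}$ with ${c}_{n}={\left(-1\right)}^{n}\mathrm{det}A\ne 0$, then ${A}^{-1}=-\left({A}^{n-1}+{c}_{1}{A}^{n-2}+\cdots +{c}_{n-1}I\right)/{c}_{n}$. A sketch of that standard version (not the paper's J-matrix computation):

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """A^{-1} = -(A^{n-1} + c1 A^{n-2} + ... + c_{n-1} I) / c_n,
    with [1, c1, ..., cn] the monic characteristic coefficients."""
    n = A.shape[0]
    c = np.poly(A)  # monic characteristic polynomial coefficients
    acc = sum(c[n - 1 - k] * np.linalg.matrix_power(A, k) for k in range(n))
    return -acc / c[n]

A = np.array([[1.0, 0, 1], [0, 1, 1], [2, 0, 1]])
Ainv = inverse_via_cayley_hamilton(A)
```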

Eigen space

The vector space generated by the eigenvectors corresponding to an eigenvalue of a matrix is called the eigenspace of that eigenvalue. It is sometimes denoted by L(λ), where λ is an eigenvalue of A. If x and y are two eigenvectors for the same eigenvalue of A and c is a scalar, then it is easy to verify that x + y and cx are also eigenvectors for that eigenvalue.

The dimension of the eigenspace of an eigenvalue is called the geometric multiplicity of that eigenvalue. If an eigenvalue has the same geometric multiplicity as its algebraic multiplicity, then it is called regular.

Minimal polynomial

The minimal polynomial of a matrix is defined as the monic polynomial of the lowest degree satisfied by the matrix. The minimal polynomial of a matrix A will be denoted by ${m}_{A}\left(x\right)$ .

1) The minimal polynomial of a matrix divides every polynomial satisfied by the matrix.

2) The characteristic polynomial and minimal polynomial have the same irreducible factors.

3) A scalar is an Eigen value of a matrix A if and only if it is a root of the minimal polynomial of the matrix.

Ex: Find the characteristic polynomial and the minimal polynomial of the matrix

$\left[\begin{array}{ccc}0& 1& 2\\ -1& 1& 0\\ 4& 2& 0\end{array}\right]$

Solution: Let the given matrix be denoted by A. Then the characteristic equation is $|A-tJ|=0$ .

Therefore the characteristic equation of the matrix is,

$|\begin{array}{ccc}0& 1& 2-t\\ -1& 1-t& 0\\ 4-t& 2& 0\end{array}|=0$ or ${\left(t-2\right)}^{2}\left(t-3\right)=0$

The possible minimal polynomials are

$\left(t-2\right)\left(t-3\right)=0$ and ${\left(t-2\right)}^{2}\left(t-3\right)=0$

We now observe

$\left(A-2J\right)\left(A-3J\right)=\left[\begin{array}{ccc}0& 1& 0\\ -1& -1& 0\\ 2& 2& 0\end{array}\right]\left[\begin{array}{ccc}0& 1& -1\\ -1& -2& 0\\ 1& 2& 0\end{array}\right]=\left[\begin{array}{ccc}0& -2& -1\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\ne 0$

So, A does not satisfy the polynomial $\left(t-2\right)\left(t-3\right)$ . But by the Cayley-Hamilton theorem, A satisfies the polynomial ${\left(t-2\right)}^{2}\left(t-3\right)$ . Hence the characteristic polynomial is also the minimal polynomial of A.
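Reading the printed array in ordinary left-to-right column order gives the matrix B below, which has the same characteristic polynomial ${\left(t-2\right)}^{2}\left(t-3\right)$; the two candidate minimal polynomials can then be tested directly. A sketch under that reading:

```python
import numpy as np

B = np.array([[2.0, 1, 0], [0, 1, -1], [0, 2, 4]])
I = np.eye(3)

candidate = (B - 2 * I) @ (B - 3 * I)            # (t-2)(t-3): does not vanish
full = (B - 2 * I) @ (B - 2 * I) @ (B - 3 * I)   # (t-2)^2(t-3): vanishes
```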

Diagonalization

From the point of view of application, it is often important to see whether any square matrix is similar to a diagonal matrix, i.e. whether for a matrix A there exists a non-singular matrix P such that $D={P}^{-1}AP$ , where D is a diagonal matrix.

Ex: Diagonalize the following matrix $\left[\begin{array}{ccc}3& -3& 1\\ 3& -5& 3\\ 4& -6& 6\end{array}\right]$

Solution:

Let $A=\left[\begin{array}{ccc}3& -3& 1\\ 3& -5& 3\\ 4& -6& 6\end{array}\right]$ then $|A-xJ|=\left[\begin{array}{ccc}3& -3& 1-x\\ 3& -5-x& 3\\ 4-x& -6& 6\end{array}\right]=0$ or ${\left(x+2\right)}^{2}\left(x-4\right)=0$ .

Eigenvalues of A are −2, −2, 4. The eigenvectors corresponding to −2 are (0, 1, 1) and (1, 1, 0), and the eigenvector corresponding to 4 is (1, 1, 2). Now we construct a matrix taking the three eigenvectors as its columns and obtain

$P=\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}\right]$ then $|P|=|\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}|\ne 0$ so ${P}^{-1}=\left[\begin{array}{ccc}\frac{-1}{2}& \frac{1}{2}& \frac{1}{2}\\ 0& 1& -1\\ \frac{1}{2}& \frac{-1}{2}& \frac{1}{2}\end{array}\right]$

Now

$PA{P}^{-1}=\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}\right]\left[\begin{array}{ccc}3& -3& 1\\ 3& -5& 3\\ 4& -6& 6\end{array}\right]\left[\begin{array}{ccc}\frac{-1}{2}& \frac{1}{2}& \frac{1}{2}\\ 0& 1& -1\\ \frac{1}{2}& \frac{-1}{2}& \frac{1}{2}\end{array}\right]$

$\begin{array}{c}=\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}\right]\left[\begin{array}{ccc}\frac{3}{2}+\frac{3}{2}-2& \frac{-3}{2}-\frac{5}{2}+3& \frac{1}{2}+\frac{3}{2}-3\\ -3+3+0& 3-5+0& -1+3+0\\ \frac{3}{2}-\frac{3}{2}+2& -\frac{3}{2}+\frac{5}{2}-3& \frac{1}{2}-\frac{3}{2}+3\end{array}\right]\\ =\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}\right]\left[\begin{array}{ccc}1& -1& -1\\ 0& -2& 2\\ 2& -2& 2\end{array}\right]\\ =\left[\begin{array}{ccc}-1-1+2& 0-1+1& -1-1+0\\ 2-2+0& 0-2+0& 2-2+0\\ 2-2+4& 0-2+2& 2-2+0\end{array}\right]\\ =\left[\begin{array}{ccc}0& 0& -2\\ 0& -2& 0\\ 4& 0& 0\end{array}\right]=D\end{array}$

This is the required diagonal matrix, with the eigenvalues −2, −2, 4 appearing along the anti-diagonal (the J-form).

Also,

$\begin{array}{c}{P}^{-1}DP=\left[\begin{array}{ccc}\frac{-1}{2}& \frac{1}{2}& \frac{1}{2}\\ 0& 1& -1\\ \frac{1}{2}& \frac{-1}{2}& \frac{1}{2}\end{array}\right]\left[\begin{array}{ccc}0& 0& -2\\ 0& -2& 0\\ 4& 0& 0\end{array}\right]\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 1\\ 2& 1& 0\end{array}\right]\\ =\left[\begin{array}{ccc}\frac{-1}{2}& \frac{1}{2}& \frac{1}{2}\\ 0& 1& -1\\ \frac{1}{2}& \frac{-1}{2}& \frac{1}{2}\end{array}\right]\left[\begin{array}{ccc}0+0+4& 0+0+0& -2+0+0\\ 0+0+0& 0-2+0& -2+0+0\\ 0+0+8& 0-2+0& 0+0+0\end{array}\right]\\ =\left[\begin{array}{ccc}\frac{-1}{2}& \frac{1}{2}& \frac{1}{2}\\ 0& 1& -1\\ \frac{1}{2}& \frac{-1}{2}& \frac{1}{2}\end{array}\right]\left[\begin{array}{ccc}4& 0& -2\\ 4& -2& -2\\ 8& -2& 0\end{array}\right]\\ =\left[\begin{array}{ccc}1+0+2& -1+0-2& -1+0+2\\ 1+0+2& -1-2-2& -1+2+2\\ 0+0+4& 0-2-4& 0+2+4\end{array}\right]=\left[\begin{array}{ccc}3& -3& 1\\ 3& -5& 3\\ 4& -6& 6\end{array}\right]=A\end{array}$
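In conventional orientation the printed array reads as the matrix M below (columns taken left to right); a standard eigendecomposition then reproduces the eigenvalues −2, −2, 4 and the diagonal form ${P}^{-1}MP$. A sketch:

```python
import numpy as np

M = np.array([[1.0, -3, 3], [3, -5, 3], [6, -6, 4]])
P = np.array([[1.0, 0, 1], [1, 1, 1], [2, 1, 0]])  # eigenvector columns for 4, -2, -2
D = np.linalg.inv(P) @ M @ P
```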

7. Jordan Canonical Form

We have seen that if the eigenvalues of a matrix are all distinct, the matrix can be diagonalized by a similarity transformation. If the eigenvalues are not all distinct, the matrix can still be diagonalized provided it has as many linearly independent eigenvectors as its order. If the number of linearly independent eigenvectors is less than the order of the matrix, then A cannot be diagonalized by similarity transformations.

However, by similarity transformation, any square matrix can always be converted into a matrix having Jordan blocks along the diagonal, called the Jordan canonical form.

A matrix is called a Jordan block if all the diagonal elements of the matrix are the same scalar and 1's always occur immediately above the diagonal elements. The following are examples of Jordan blocks (written in the mirrored J-form, with the repeated scalar along the anti-diagonal):

$\left[\text{5}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left[\begin{array}{cc}1& 2\\ 2& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left[\begin{array}{ccc}0& 1& -2\\ 1& -2& 0\\ -2& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left[\begin{array}{cccc}0& 0& 1& 3\\ 0& 1& 3& 0\\ 1& 3& 0& 0\\ 3& 0& 0& 0\end{array}\right]$

Theorem: Every square matrix A whose characteristic polynomial

${\chi }_{A}\left(t\right)={\left(t-{\lambda }_{1}\right)}^{{n}_{1}}{\left(t-{\lambda }_{2}\right)}^{{n}_{2}}\cdots {\left(t-{\lambda }_{r}\right)}^{{n}_{r}}$

and minimal polynomial

${m}_{A}\left(t\right)={\left(t-{\lambda }_{1}\right)}^{{m}_{1}}{\left(t-{\lambda }_{2}\right)}^{{m}_{2}}\cdots {\left(t-{\lambda }_{r}\right)}^{{m}_{r}}$

can be reduced to the block diagonal matrix

$\left[\begin{array}{cccc}0& & & {B}_{1j}\\ & & {B}_{2j}& \\ & ⋰& & \\ {B}_{rj}& & & 0\end{array}\right]$

which is called a Jordan canonical form, where the ${B}_{ij}$ are Jordan blocks of the form

$\left[\begin{array}{cccccc}0& 0& \cdots & 0& 1& {\lambda }_{i}\\ 0& 0& \cdots & 1& {\lambda }_{i}& 0\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮\\ {\lambda }_{i}& 0& \cdots & 0& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots ,r$

and for each ${\lambda }_{i}$ the corresponding ${B}_{ij}$ have the following properties:

1) There is at least one ${B}_{ij}$ of order ${m}_{i}$ ; all other ${B}_{ij}$ are of order at most ${m}_{i}$ .

2) The sum of orders of Bij is ni.

3) The number of Bij equals the geometric multiplicity of ${\lambda }_{i}$ .

4) The number of Bij of each possible order is uniquely determined by A.

Ex: Determine the possible Jordan canonical forms of a matrix A whose characteristic polynomial ${\chi }_{A}\left(t\right)$ and minimal polynomial ${m}_{A}\left(t\right)$ are, respectively,

${\chi }_{A}\left(t\right)={\left(t-5\right)}^{4}{\left(t-7\right)}^{3}$ and ${m}_{A}\left(t\right)={\left(t-5\right)}^{2}{\left(t-7\right)}^{2}$

Then the Jordan canonical form is one of the following matrices.

$\left[\begin{array}{ccccccc}0& & & & & 1& 5\\ & & & & & 5& 0\\ & & & 1& 5& & \\ & & & 5& 0& & \\ & 1& 7& & & & \\ & 7& 0& & & & \\ 7& & & & & & 0\end{array}\right]$ or $\left[\begin{array}{ccccccc}0& & & & & 1& 5\\ & & & & & 5& 0\\ & & & & 5& & \\ & & & 5& & & \\ & 1& 7& & & & \\ & 7& 0& & & & \\ 7& & & & & & 0\end{array}\right]$

Note that the first is the desired canonical form if there are two independent eigenvectors corresponding to the eigenvalue 5, while the second occurs if there are three independent eigenvectors corresponding to the eigenvalue 5.

Remark: Observe that

1) The order of the matrix is 4 + 3, i.e. 7

2) There is at least one block of order 2 [the index of the factor t − 5 in ${m}_{A}\left(t\right)$ ] for the eigenvalue 5 and at least one block of order 2 [the index of the factor t − 7 in ${m}_{A}\left(t\right)$ ] for the eigenvalue 7.

3) The sum of the orders of the blocks corresponding to 5 is 2 + 2, i.e. 4, in the first form (and 2 + 1 + 1 in the second); the sum of the orders of the blocks corresponding to 7 is 2 + 1, i.e. 3, in both forms.

An expression of the form $\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{n}{\sum }}{a}_{ij}{x}_{i}{x}_{j}$ , where ${a}_{ij}\in \mathbb{R}$ and ${a}_{ij}={a}_{ji}$ , is called a real quadratic form in the variables ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ . A real quadratic form can be written as ${X}^{\text{T}}AX$ where $X={\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)}^{\text{T}}$ and A is a symmetric matrix (called the matrix of the form)

$\left[\begin{array}{ccccc}{a}_{1n}& \cdots & {a}_{13}& {a}_{12}& {a}_{11}\\ {a}_{2n}& \cdots & {a}_{23}& {a}_{22}& {a}_{21}\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ {a}_{nn}& \cdots & {a}_{n3}& {a}_{n2}& {a}_{n1}\end{array}\right]$ with

${a}_{ij}={a}_{ji}$

Determine the matrix of the following quadratic form:

${x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{3}^{2}+2{x}_{1}{x}_{2}-4{x}_{2}{x}_{3}-2{x}_{3}{x}_{1}$

Here the associated matrix is $\left[\begin{array}{ccc}-1& 1& 1\\ -2& 2& 1\\ 4& -2& -1\end{array}\right]$ .
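In the conventional orientation, the matrix of a quadratic form puts the squared-term coefficients on the diagonal and splits each cross coefficient in half between the two symmetric positions. A sketch for the example above:

```python
import numpy as np

# q(x) = x1^2 + 2 x2^2 + 4 x3^2 + 2 x1 x2 - 4 x2 x3 - 2 x3 x1
A = np.array([[ 1.0,  1, -1],
              [ 1.0,  2, -2],
              [-1.0, -2,  4]])

def q(x):
    """Evaluate the quadratic form x^T A x."""
    return float(x @ A @ x)
```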

I and J operation on a given matrix A

1) Super orthogonal matrix: A square matrix A is said to be a super orthogonal matrix if, for both the I and J operations, the product of the matrix with its transpose gives the corresponding unit matrix;

i.e. for the I-operation ${A}_{I}{A}_{I}^{\text{T}}=I={A}_{I}^{\text{T}}{A}_{I}$ , where I is an I-unit matrix, and for the J-operation ${A}_{J}{A}_{J}^{\text{T}}=J={A}_{J}^{\text{T}}{A}_{J}$ , where J is a J-unit matrix.

Let $A=\frac{1}{3}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]$ be a 3 × 3 matrix, then

For I-operation:

${A}_{I}^{\text{T}}=\frac{1}{3}\left[\begin{array}{ccc}2& -2& 1\\ 1& 2& 2\\ -2& -1& 2\end{array}\right]$

$\begin{array}{c}{A}_{I}{A}_{I}^{\text{T}}=\frac{1}{9}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\left[\begin{array}{ccc}2& -2& 1\\ 1& 2& 2\\ -2& -1& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}4+1+4& -4+2+2& 2+2-4\\ -4+2+2& 4+4+1& -2+4-2\\ 2+2-4& -2+4-2& 1+4+4\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}9& 0& 0\\ 0& 9& 0\\ 0& 0& 9\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I\end{array}$

$\begin{array}{c}{A}_{I}^{\text{T}}{A}_{I}=\frac{1}{9}\left[\begin{array}{ccc}2& -2& 1\\ 1& 2& 2\\ -2& -1& 2\end{array}\right]\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}4+4+1& 2-4+2& -4+2+2\\ 2-4+2& 1+4+4& -2-2+4\\ -4+2+2& -2-2+4& 4+1+4\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}9& 0& 0\\ 0& 9& 0\\ 0& 0& 9\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I\end{array}$

Thus ${A}_{I}{A}_{I}^{\text{T}}=I={A}_{I}^{\text{T}}{A}_{I}$ .

For J-operation:

${A}_{J}^{\text{T}}=\frac{1}{3}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]$

$\begin{array}{c}{A}_{J}{A}_{J}^{\text{T}}=\frac{1}{9}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}-4+2+2& -2-2+4& 1+4+4\\ 2-4+2& 1+4+4& -2-2+4\\ 4+4+1& 2-4+2& -4+2+2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}0& 0& 9\\ 0& 9& 0\\ 9& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J\end{array}$

$\begin{array}{c}{A}_{J}^{\text{T}}{A}_{J}=\frac{1}{9}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}-4+2+2& 2+2-4& 4+1+4\\ -2+4-2& 1+4+4& 2+2-4\\ 4+4+1& -2+4-2& -4+2+2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}0& 0& 9\\ 0& 9& 0\\ 9& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J\end{array}$

Thus ${A}_{J}{A}_{J}^{\text{T}}=J={A}_{J}^{\text{T}}{A}_{J}$ .

2) Super unitary matrix: A square matrix A is said to be a super unitary matrix if, for both the I and J operations, the product of the matrix with its conjugate transpose gives the corresponding unit matrix;

i.e. for the I-operation ${A}_{I}{A}_{I}^{*}=I={A}_{I}^{*}{A}_{I}$ , where I is an I-unit matrix, and for the J-operation ${A}_{J}{A}_{J}^{*}=J={A}_{J}^{*}{A}_{J}$ , where J is a J-unit matrix.

Example:

Let $A=\frac{1}{2}\left[\begin{array}{cc}1-k& 1+k\\ 1+k& 1-k\end{array}\right]$ be a 2 × 2 matrix. Then $\stackrel{¯}{A}=\frac{1}{2}\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]$

For I-operation

${A}_{I}^{*}={\left({\stackrel{¯}{A}}_{I}\right)}^{\text{T}}=\frac{1}{2}\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]$

Now

$\begin{array}{c}{A}_{I}{A}_{I}^{*}=\frac{1}{4}\left[\begin{array}{cc}1-k& 1+k\\ 1+k& 1-k\end{array}\right]\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]\\ =\frac{1}{4}\left[\begin{array}{cc}4& -2k+2k\\ 2k-2k& 4\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}4& 0\\ 0& 4\end{array}\right]=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]=I\end{array}$

$\begin{array}{c}{A}_{I}^{*}{A}_{I}=\frac{1}{4}\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]\left[\begin{array}{cc}1-k& 1+k\\ 1+k& 1-k\end{array}\right]\\ =\frac{1}{4}\left[\begin{array}{cc}4& 2k-2k\\ -2k+2k& 4\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}4& 0\\ 0& 4\end{array}\right]=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]=I\end{array}$

Thus ${A}_{I}{A}_{I}^{*}=I={A}_{I}^{*}{A}_{I}$ , where I is an I-unit matrix.

For J-operation

${A}_{J}^{*}={\left({\stackrel{¯}{A}}_{J}\right)}^{\text{T}}=\frac{1}{2}\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]$

Now

$\begin{array}{c}{A}_{J}{A}_{J}^{*}=\frac{1}{4}\left[\begin{array}{cc}1-k& 1+k\\ 1+k& 1-k\end{array}\right]\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]\\ =\frac{1}{4}\left[\begin{array}{cc}-2k+2k& 4\\ 4& 2k-2k\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}0& 4\\ 4& 0\end{array}\right]=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]=J\end{array}$

$\begin{array}{c}{A}_{j}^{*}{A}_{J}=\frac{1}{4}\left[\begin{array}{cc}1+k& 1-k\\ 1-k& 1+k\end{array}\right]\left[\begin{array}{cc}1-k& 1+k\\ 1+k& 1-k\end{array}\right]\\ =\frac{1}{4}\left[\begin{array}{cc}2k-2k& 4\\ 4& -2k+2k\end{array}\right]=\frac{1}{4}\left[\begin{array}{cc}0& 4\\ 4& 0\end{array}\right]=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]=J\end{array}$

Thus ${A}_{J}{A}_{J}^{*}=J={A}_{J}^{*}{A}_{J}$ , where J is a J-unit matrix.

Theorem 1: A matrix satisfies the involutory condition in one manner only: either as an I-matrix or as a J-matrix.

Proof:

Let $A=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]$ be a 3 × 3 matrix, then

For I-operation:

$\begin{array}{c}{A}_{I}^{2}=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\\ =\left[\begin{array}{ccc}9-3-12& 9+0-16& 12-3-16\\ -3+0+3& -3+0+4& -4+0+4\\ -9+4+12& -9+0+16& -12+4+16\end{array}\right]=\left[\begin{array}{ccc}-6& -7& -7\\ 0& 1& 0\\ 7& 7& 8\end{array}\right]\ne I\end{array}$

For J-operation:

$\begin{array}{c}{A}_{J}^{2}=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\\ =\left[\begin{array}{ccc}12-3-9& 12+0-12& 16-3-12\\ -3+0+3& -3+0+4& -4+0+4\\ -12+4+9& -12+0+12& -16+4+12\end{array}\right]=\left[\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right]=J\end{array}$

Theorem 2: The inverse of an I-matrix and the inverse of the corresponding J-matrix are TRANSPROCAL to each other.

Proof:

Let ${A}_{I}=\left[\begin{array}{ccc}{a}_{11}& {a}_{12}& {a}_{13}\\ {a}_{21}& {a}_{22}& {a}_{23}\\ {a}_{31}& {a}_{32}& {a}_{33}\end{array}\right]$ be a 3 × 3 matrix, then

$\begin{array}{c}{A}_{I}^{-1}=\frac{1}{|{A}_{I}|}\left[\begin{array}{ccc}{a}_{22}{a}_{33}-{a}_{23}{a}_{32}& {a}_{13}{a}_{32}-{a}_{12}{a}_{33}& {a}_{12}{a}_{23}-{a}_{13}{a}_{22}\\ {a}_{23}{a}_{31}-{a}_{21}{a}_{33}& {a}_{11}{a}_{33}-{a}_{13}{a}_{31}& {a}_{13}{a}_{21}-{a}_{11}{a}_{23}\\ {a}_{21}{a}_{32}-{a}_{22}{a}_{31}& {a}_{12}{a}_{31}-{a}_{11}{a}_{32}& {a}_{11}{a}_{22}-{a}_{12}{a}_{21}\end{array}\right]\\ =\frac{1}{|{A}_{I}|}\left[\begin{array}{ccc}{A}_{11}& {A}_{21}& {A}_{31}\\ {A}_{12}& {A}_{22}& {A}_{32}\\ {A}_{13}& {A}_{23}& {A}_{33}\end{array}\right]\end{array}$

Let ${A}_{J}=\left[\begin{array}{ccc}{a}_{11}& {a}_{12}& {a}_{13}\\ {a}_{21}& {a}_{22}& {a}_{23}\\ {a}_{31}& {a}_{32}& {a}_{33}\end{array}\right]$ be a 3 × 3 matrix, then

$\begin{array}{c}{A}_{J}^{-1}=\frac{1}{|{A}_{J}|}\left[\begin{array}{ccc}{a}_{12}{a}_{21}-{a}_{11}{a}_{22}& {a}_{12}{a}_{31}-{a}_{11}{a}_{32}& {a}_{22}{a}_{31}-{a}_{21}{a}_{32}\\ {a}_{11}{a}_{23}-{a}_{13}{a}_{21}& {a}_{11}{a}_{33}-{a}_{13}{a}_{31}& {a}_{21}{a}_{33}-{a}_{23}{a}_{31}\\ {a}_{13}{a}_{22}-{a}_{12}{a}_{23}& {a}_{12}{a}_{33}-{a}_{13}{a}_{32}& {a}_{23}{a}_{32}-{a}_{22}{a}_{33}\end{array}\right]\\ =\frac{1}{|{A}_{J}|}\left[\begin{array}{ccc}{A}_{33}& {A}_{23}& {A}_{13}\\ {A}_{32}& {A}_{22}& {A}_{12}\\ {A}_{31}& {A}_{21}& {A}_{11}\end{array}\right]\end{array}$

From the above, we conclude that the inverse of the I-matrix and the inverse of the J-matrix are TRANSPROCAL to each other.

8. Solving Linear Equations Using I-Matrix and J-Matrix

Inverse method:

Solve the following system of equations:

$2x+y+3z=7$ (1)

$2y-z=3$ (2)

$x+2z=3$ (3)

Solution: The given system of equations can be written as $AX=B$ , where

i.e. $AX=\left[\begin{array}{ccc}2& 1& 3\\ 0& 2& -1\\ 1& 0& 2\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=\left[\begin{array}{c}7\\ 3\\ 3\end{array}\right]=B$

Case 1:

Let ${A}_{I}=\left[\begin{array}{ccc}2& 1& 3\\ 0& 2& -1\\ 1& 0& 2\end{array}\right]$ then ${A}_{I}^{-1}=\left[\begin{array}{ccc}4& -2& -7\\ -1& 1& 2\\ -2& 1& 4\end{array}\right]$

$\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc}4& -2& -7\\ -1& 1& 2\\ -2& 1& 4\end{array}\right]\left[\begin{array}{c}7\\ 3\\ 3\end{array}\right]=\left[\begin{array}{c}28-6-21\\ -7+3+6\\ -14+3+12\end{array}\right]=\left[\begin{array}{c}1\\ 2\\ 1\end{array}\right]$ .

Case 2:

Let ${A}_{J}=\left[\begin{array}{ccc}2& 1& 3\\ 0& 2& -1\\ 1& 0& 2\end{array}\right]$ then ${A}_{J}^{-1}=\left[\begin{array}{ccc}4& 1& -2\\ 2& 1& -1\\ -7& -2& 4\end{array}\right]$

$B=\left[\begin{array}{c}7\\ 3\\ 3\end{array}\right]=\left[\begin{array}{c}z\\ y\\ x\end{array}\right]\left[\begin{array}{ccc}2& 1& 3\\ 0& 2& -1\\ 1& 0& 2\end{array}\right]={X}^{¬}A$ .

$\left[\begin{array}{c}z\\ y\\ x\end{array}\right]=\left[\begin{array}{c}7\\ 3\\ 3\end{array}\right]\left[\begin{array}{ccc}4& 1& -2\\ 2& 1& -1\\ -7& -2& 4\end{array}\right]=\left[\begin{array}{c}-14+3+12\\ -7+3+6\\ 28-6-21\end{array}\right]=\left[\begin{array}{c}1\\ 2\\ 1\end{array}\right]$

Both ways, we obtain the same solution of the given system.

Theorem 3: Every non-singular square matrix has two characteristic equations: one from the main diagonal (I-matrix) and one from the anti-diagonal (J-matrix). Moreover, every non-singular square matrix satisfies both of its characteristic equations.

Proof:

Let $A=\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}\right]$ be a 3 × 3 matrix and $|A|\ne 0$ , then

For I-matrix:

$|{A}_{I}|=|\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}|=-1$

$|{A}_{I}-xI|=|\begin{array}{ccc}1-x& 0& 1\\ 0& 1-x& 1\\ 2& 0& 1-x\end{array}|$ . The characteristic equation of AI is ${x}^{3}-3{x}^{2}+x+1=0$ .

Eigen values: ${x}_{1}=2.41,{x}_{2}=1,{x}_{3}=-0.41$ .

By Cayley-Hamilton theorem, we get ${A}^{3}-3{A}^{2}+A+I=0$

${A}^{2}=\left[\begin{array}{ccc}3& 0& 2\\ 2& 1& 2\\ 4& 0& 3\end{array}\right]$ , ${A}^{3}=\left[\begin{array}{ccc}7& 0& 5\\ 6& 1& 5\\ 10& 0& 7\end{array}\right]$

${A}^{3}-3{A}^{2}+A+I=\left[\begin{array}{ccc}7-9+1+1& 0-0+0+0& 5-6+1+0\\ 6-6+0+0& 1-3+1+1& 5-6+1+0\\ 10-12+2+0& 0-0+0+0& 7-9+1+1\end{array}\right]=\left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]$ .

For J-matrix:

$|{A}_{J}|=|\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 2& 0& 1\end{array}|=1$

$|{A}_{J}-xJ|=|\begin{array}{ccc}1& 0& 1-x\\ 0& 1-x& 1\\ 2-x& 0& 1\end{array}|$ . The characteristic equation of AJ is ${x}^{3}-4{x}^{2}+4x-1=0$ .

Eigen values: ${x}_{1}=2.6,{x}_{2}=1,{x}_{3}=0.38$ .

By Cayley-Hamilton theorem, we get ${A}^{3}-4{A}^{2}+4A-J=0$

${A}^{2}=\left[\begin{array}{ccc}3& 0& 2\\ 1& 1& 2\\ 5& 0& 3\end{array}\right]$ , ${A}^{3}=\left[\begin{array}{ccc}8& 0& 5\\ 4& 1& 4\\ 13& 0& 8\end{array}\right]$

$\begin{array}{l}{A}^{3}-4{A}^{2}+4A-J\\ =\left[\begin{array}{ccc}8-12+4-0& 0-0+0-0& 5-8+4-1\\ 4-4+0-0& 1-4+4-1& 4-8+4-0\\ 13-20+8-1& 0-0+0-0& 8-12+4-0\end{array}\right]=\left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\end{array}$

Hence proved.

For both forms of the matrix, I and J, we obtained different characteristic equations, and the given matrix satisfies both equations.
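Since $A-xJ=J\left(JA-xI\right)$ when J is the exchange matrix (using ${J}^{2}=I$ for ordinary multiplication), the J-characteristic roots of A are exactly the ordinary eigenvalues of JA, i.e. of A with its rows reversed. Both sets of eigenvalues quoted above can be recovered this way; a sketch under that reading:

```python
import numpy as np

A = np.array([[1.0, 0, 1], [0, 1, 1], [2, 0, 1]])

eigs_I = np.sort(np.linalg.eigvals(A).real)             # roots of |A - xI|
eigs_J = np.sort(np.linalg.eigvals(np.flipud(A)).real)  # roots of |A - xJ|
```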

Orthoprocal matrix

A real matrix A is orthoprocal if ${A}^{¬}={A}^{-1}$ , that is, if $A{A}^{¬}={A}^{¬}A=I$ . Thus A must necessarily be square and invertible.

Let $A=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]$ then ${A}^{¬}=\left[\begin{array}{ccc}-4& -4& -3\\ -1& 0& -1\\ 4& 3& 3\end{array}\right]$

$\begin{array}{c}A{A}^{¬}=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\left[\begin{array}{ccc}-4& -4& -3\\ -1& 0& -1\\ 4& 3& 3\end{array}\right]\\ =\left[\begin{array}{ccc}-12-3+16& -12+0+12& -9-3+12\\ 4+0-4& 4+0-3& 3+0-3\\ 12+4-16& 12+0-12& 9+4-12\end{array}\right]\end{array}$

$=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I$

$\begin{array}{c}{A}^{¬}A=\left[\begin{array}{ccc}-4& -4& -3\\ -1& 0& -1\\ 4& 3& 3\end{array}\right]\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\\ =\left[\begin{array}{ccc}-12+4+9& -12+0+12& -16+4+12\\ -3+0+3& -3+0+4& -4+0+4\\ 12-3-9& 12+0-12& 16-3-12\end{array}\right]\\ =\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I\end{array}$

Thus $A{A}^{¬}={A}^{¬}A=I$ .
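A minimal plain-Python sketch of this check, reading the transprocal ${A}^{¬}$ as the 180˚ rotation of A (entry $\left(i,j\right)\to \left(n+1-i,n+1-j\right)$ ), as the worked example suggests:

```python
# Check A * A^¬ = A^¬ * A = I for the orthoprocal example, where A^¬ is
# the 180-degree rotation of A.

def transprocal(X):
    n = len(X)
    return [[X[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 3, 4], [-1, 0, -1], [-3, -4, -4]]
An = transprocal(A)     # expected [[-4,-4,-3],[-1,0,-1],[4,3,3]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A, An) == I and matmul(An, A) == I)   # True
```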

Trans Orthoprocal matrices

A real matrix A is Trans Orthoprocal if ${\left({A}^{¬}\right)}^{-1}={A}^{⊳}$ , that is, if ${A}^{¬}{A}^{⊳}={A}^{⊳}{A}^{¬}=I$ . Thus A must necessarily be square and invertible.

Let $A=\frac{1}{3}\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]$ then ${A}^{¬}=\frac{1}{3}\left[\begin{array}{ccc}2& 2& 1\\ -1& 2& -2\\ -2& 1& 2\end{array}\right]$ and ${A}^{⊳}=\frac{1}{3}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]$

$\begin{array}{c}{A}^{¬}{A}^{⊳}=\frac{1}{3}\left[\begin{array}{ccc}2& 2& 1\\ -1& 2& -2\\ -2& 1& 2\end{array}\right]\frac{1}{3}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}4+4+1& -2+4-2& -4+2+2\\ -2+4-2& 1+4+4& 2+2-4\\ -4+2+2& 2+2-4& 4+1+4\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}9& 0& 0\\ 0& 9& 0\\ 0& 0& 9\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I\end{array}$

$\begin{array}{c}{A}^{⊳}{A}^{¬}=\frac{1}{3}\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\frac{1}{3}\left[\begin{array}{ccc}2& 2& 1\\ -1& 2& -2\\ -2& 1& 2\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}4+1+4& 4-2-2& 2+2-4\\ 4-2-2& 4+4+1& 2-4+2\\ 2+2-4& 2-4+2& 1+4+4\end{array}\right]\\ =\frac{1}{9}\left[\begin{array}{ccc}9& 0& 0\\ 0& 9& 0\\ 0& 0& 9\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]=I\end{array}$

Thus ${A}^{¬}{A}^{⊳}={A}^{⊳}{A}^{¬}=I$ .
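The same verification in plain Python, with ${A}^{⊳}$ taken as the transpose of the transprocal and exact rational arithmetic for the 1/3 factor:

```python
# Check A^¬ * A^⊳ = A^⊳ * A^¬ = I for the trans-orthoprocal example.
# Fractions keep the 1/3-scaled arithmetic exact.
from fractions import Fraction

def transprocal(X):
    n = len(X)
    return [[X[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

third = Fraction(1, 3)
A = [[third * v for v in row] for row in [[2, 1, -2], [-2, 2, -1], [1, 2, 2]]]
An = transprocal(A)          # A^¬ (180-degree rotation)
At = transpose(An)           # A^⊳ = (A^¬)^T, the transprocose

I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
print(matmul(An, At) == I and matmul(At, An) == I)   # True
```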

Theorem 4:

A matrix A is trans-orthoprocal if and only if A is an orthogonal matrix.

Proof:

Necessity:

Let A be orthogonal, so that $A{A}^{\text{T}}=I$ . Taking the transprocal of both sides, we get

${I}^{¬}={\left[A{A}^{\text{T}}\right]}^{¬}={A}^{¬}{\left({A}^{\text{T}}\right)}^{¬}={A}^{¬}{A}^{⊳}$ . Again, taking the transpose of both sides, we get

${\left({I}^{¬}\right)}^{\text{T}}={\left[{A}^{¬}{A}^{⊳}\right]}^{\text{T}}={\left({A}^{⊳}\right)}^{\text{T}}{\left({A}^{¬}\right)}^{\text{T}}={A}^{¬}{A}^{⊳}=I$ , since ${\left({I}^{¬}\right)}^{\text{T}}={I}^{⊳}=I$ . Hence A is trans-orthoprocal.

Sufficiency:

Let A be trans-orthoprocal, so that ${A}^{¬}{A}^{⊳}=I$ . Taking the transprocal of both sides, we get

${I}^{¬}={\left[{A}^{¬}{A}^{⊳}\right]}^{¬}={\left({A}^{¬}\right)}^{¬}{\left({A}^{⊳}\right)}^{¬}=A{A}^{\text{T}}=I$ , since ${I}^{¬}=I$ . Hence A is orthogonal.

Or

Let A be trans-orthoprocal, so that ${A}^{¬}{A}^{⊳}=I$ . Taking the transprocose of both sides and using ${\left(AB\right)}^{⊳}={B}^{⊳}{A}^{⊳}$ , we get

${I}^{⊳}={\left[{A}^{¬}{A}^{⊳}\right]}^{⊳}={\left({A}^{⊳}\right)}^{⊳}{\left({A}^{¬}\right)}^{⊳}=A{A}^{\text{T}}=I$ , since ${I}^{⊳}=I$ . Hence A is orthogonal.
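The identities the proof relies on, ${\left({A}^{\text{T}}\right)}^{¬}={\left({A}^{¬}\right)}^{\text{T}}={A}^{⊳}$ and ${\left({A}^{⊳}\right)}^{¬}={A}^{\text{T}}$ , can be spot-checked numerically; a plain-Python sketch on a sample matrix:

```python
# Spot-check that the transprocal (180-degree rotation) commutes with the
# transpose, and that the transprocal of the transprocose is the transpose.

def transprocal(X):
    n = len(X)
    return [[X[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

A = [[3, 3, 4], [-1, 0, -1], [-3, -4, -4]]
Asub = transpose(transprocal(A))               # A^⊳

print(transprocal(transpose(A)) == Asub)       # ¬ and T commute: True
print(transprocal(Asub) == transpose(A))       # (A^⊳)^¬ = A^T: True
```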

Product of two transprocose matrices: ${\left(AB\right)}^{⊳}={B}^{⊳}{A}^{⊳}$

Let $A=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]$ , $B=\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]$ be two matrices then

${A}^{⊳}=\left[\begin{array}{ccc}-4& -1& 4\\ -4& 0& 3\\ -3& -1& 3\end{array}\right]$ , ${B}^{⊳}=\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]$ are transprocose matrices of A and B matrices.

Now $AB=\left[\begin{array}{ccc}3& 3& 4\\ -1& 0& -1\\ -3& -4& -4\end{array}\right]\left[\begin{array}{ccc}2& 1& -2\\ -2& 2& -1\\ 1& 2& 2\end{array}\right]$

$AB=\left[\begin{array}{ccc}6-6+4& 3+6+8& -6-3+8\\ -2+0-1& -1+0-2& 2+0-2\\ -6+8-4& -3-8-8& 6+4-8\end{array}\right]=\left[\begin{array}{ccc}4& 17& -1\\ -3& -3& 0\\ -2& -19& 2\end{array}\right]$

${\left(AB\right)}^{⊳}=\left[\begin{array}{ccc}2& 0& -1\\ -19& -3& 17\\ -2& -3& 4\end{array}\right]$ .

Now ${B}^{⊳}{A}^{⊳}=\left[\begin{array}{ccc}2& -1& -2\\ 2& 2& 1\\ 1& -2& 2\end{array}\right]\left[\begin{array}{ccc}-4& -1& 4\\ -4& 0& 3\\ -3& -1& 3\end{array}\right]$

${B}^{⊳}{A}^{⊳}=\left[\begin{array}{ccc}-8+4+6& -2+0+2& 8-3-6\\ -8-8-3& -2+0-1& 8+6+3\\ -4+8-6& -1+0-2& 4-6+6\end{array}\right]=\left[\begin{array}{ccc}2& 0& -1\\ -19& -3& 17\\ -2& -3& 4\end{array}\right]={\left(AB\right)}^{⊳}$

Hence proved.
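A plain-Python spot-check of the reversal rule ${\left(AB\right)}^{⊳}={B}^{⊳}{A}^{⊳}$ on the two matrices above:

```python
# Verify (AB)^⊳ = B^⊳ A^⊳ for the example matrices, with the transprocose
# built as transpose of the 180-degree rotation.

def transprocal(X):
    n = len(X)
    return [[X[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transprocose(X):
    return transpose(transprocal(X))

A = [[3, 3, 4], [-1, 0, -1], [-3, -4, -4]]
B = [[2, 1, -2], [-2, 2, -1], [1, 2, 2]]

lhs = transprocose(matmul(A, B))
rhs = matmul(transprocose(B), transprocose(A))
print(lhs == rhs)   # True
print(lhs)          # [[2, 0, -1], [-19, -3, 17], [-2, -3, 4]]
```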

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.
