A Follow-Up on Projection Theory: Theorems and Group Action

Abstract

In this article, we wish to expand on some of the results obtained from the first article, entitled Projection Theory. We have already established that one-parameter projection operators can be constructed from the unit circle $S^1$. As discussed in the previous article, these operators form a Lie group known as the Projection Group. In the first section, we show that the concepts from the first article are consistent with existing theory [1] [2]. In the second section, it is demonstrated not only that such operators are mutually congruent but also that a group action on $G_P([\theta])$ can be defined using the rotation group $SO(2,\mathbb{R})$ [3] [4]. It is proved that the group acts on elements of $G_P([\theta])$ in a non-faithful but ∞-transitive way, consistent with both group operations. Finally, in the last section we define the group operation $\varphi$ in terms of matrix operations using the $\mathrm{Vec}$ operator and the Hadamard product; this construction is consistent with the group operation defined in the first article.


1. Notation System

In this article the notation has been kept the same as in the previous article.

$\|\cdot\|_F$ is the Frobenius norm.

$P[\theta]$ is a projection operator in $G_P([\theta])$.

$I_2$ is the $2 \times 2$ identity matrix.

$\mathrm{Rnk}(P[\theta])$ is the rank of a projection operator.

$\sigma(P[\theta])$ is the spectrum of a projection operator.

$\mathrm{Stab}_{SO(2)}(P[\theta])$ is the stabilizer subgroup for some fixed element of $G_P([\theta])$.

$R(\alpha)$ is the element of $SO(2)$ for angle $\alpha$.

$\mathrm{Vec}$ is the vectorisation operator.

$\circ$ is the Hadamard product.

$\otimes$ is the Kronecker product.

2. Some Important Theorems and Lemmas

From [1] [2] we obtain the following results.

Lemma 1. The Frobenius norm $\|\cdot\|_F$ of a projection matrix is its trace. That is,

$$\|P[\theta]\|_F = \mathrm{Tr}(P[\theta]) = 1$$

Proof. It is well known that

$$\|P[\theta]\|_F^2 := \sum_{i,j}|p_{ij}(\theta)|^2 = \mathrm{Tr}(P[\theta]^TP[\theta])$$

Given that $P[\theta]$ is both symmetric and idempotent, it follows that

$$\mathrm{Tr}(P[\theta]^TP[\theta]) = \mathrm{Tr}(P[\theta]P[\theta]) = \mathrm{Tr}(P[\theta]^2) = \mathrm{Tr}(P[\theta])$$

Hence,

$$\|P[\theta]\|_F = \sqrt{\mathrm{Tr}(P[\theta])}$$

and since

$$\mathrm{Tr}\!\left(\begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}\right) = \cos^2[\theta] + \sin^2[\theta] = 1, \quad \forall[\theta]\in\mathbb{T}$$

the claim follows.
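Both quantities are easy to check numerically. Below is a minimal sketch in Python with numpy; the helper `P(theta)` is an assumption of this sketch rather than anything from the articles, implementing the construction $\Phi(e^{i[\theta]})$, and it is reused in the later sketches.

```python
import numpy as np

def P(theta):
    # Rank-1 projector onto the line spanned by (cos t, sin t)^T,
    # i.e. Phi(e^{i theta}) = e^{i theta} (e^{i theta})^T.
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

M = P(0.7)  # arbitrary angle
print(np.linalg.norm(M, 'fro'))  # 1.0: Frobenius norm
print(np.trace(M))               # 1.0: trace
```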

Theorem 1. The necessary and sufficient condition for $P[\theta] \in G_P([\theta])$ is given by

$$P[\theta]^2 = P[\theta], \quad \forall[\theta]\in\mathbb{T}$$

Proof. We know, from the previous article, that $P[\theta]$ is constructed as follows:

$$\Phi(e^{i[\theta]}) := e^{i[\theta]}(e^{i[\theta]})^T = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} = P[\theta]$$

Squaring gives

$$P[\theta]^2 = \begin{bmatrix} \cos^4[\theta]+\cos^2[\theta]\sin^2[\theta] & \cos^3[\theta]\sin[\theta]+\cos[\theta]\sin^3[\theta] \\ \cos^3[\theta]\sin[\theta]+\cos[\theta]\sin^3[\theta] & \sin^4[\theta]+\cos^2[\theta]\sin^2[\theta] \end{bmatrix} = \begin{bmatrix} \cos^2[\theta](\cos^2[\theta]+\sin^2[\theta]) & \cos[\theta]\sin[\theta](\cos^2[\theta]+\sin^2[\theta]) \\ \cos[\theta]\sin[\theta](\cos^2[\theta]+\sin^2[\theta]) & \sin^2[\theta](\sin^2[\theta]+\cos^2[\theta]) \end{bmatrix} = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} = P[\theta]$$
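Idempotency (and symmetry) can be verified the same way, reusing the helper `P` and the numpy import from the sketch above:

```python
M = P(0.7)
print(np.allclose(M @ M, M))  # True: P[theta]^2 = P[theta]
print(np.allclose(M, M.T))    # True: P[theta] is symmetric
```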

Theorem 2. The orthogonal complement of $P[\theta]$ is given by

$$P[\theta\pm\pi/2] = I_2 - P[\theta]$$

where $I_2$ is the $2\times 2$ identity matrix. This is known as a vector rejection.

Proof. We first show that

$$P[\theta\pm\pi/2] = I_2 - P[\theta]$$

The RHS of the equation gives us

$$I_2 - P[\theta] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} = \begin{bmatrix} 1-\cos^2[\theta] & -\cos[\theta]\sin[\theta] \\ -\sin[\theta]\cos[\theta] & 1-\sin^2[\theta] \end{bmatrix} = \begin{bmatrix} \sin^2[\theta] & -\cos[\theta]\sin[\theta] \\ -\sin[\theta]\cos[\theta] & \cos^2[\theta] \end{bmatrix}$$

Now, the LHS of the equation gives us the following result:

$$P[\theta\pm\pi/2] = \begin{bmatrix} \cos^2([\theta]\pm\pi/2) & \cos([\theta]\pm\pi/2)\sin([\theta]\pm\pi/2) \\ \sin([\theta]\pm\pi/2)\cos([\theta]\pm\pi/2) & \sin^2([\theta]\pm\pi/2) \end{bmatrix} = \begin{bmatrix} \sin^2[\theta] & -\cos[\theta]\sin[\theta] \\ -\sin[\theta]\cos[\theta] & \cos^2[\theta] \end{bmatrix}$$

since

$$\cos^2([\theta]\pm\pi/2) = (\cos[\theta]\cos(\pi/2) \mp \sin[\theta]\sin(\pi/2))^2 = (\mp\sin[\theta])^2 = \sin^2[\theta]$$

$$\sin^2([\theta]\pm\pi/2) = (\sin[\theta]\cos(\pi/2) \pm \cos[\theta]\sin(\pi/2))^2 = (\pm\cos[\theta])^2 = \cos^2[\theta]$$
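A quick numerical confirmation of the identity (same helper, both sign choices):

```python
theta = 0.7
print(np.allclose(P(theta + np.pi / 2), np.eye(2) - P(theta)))  # True
print(np.allclose(P(theta - np.pi / 2), np.eye(2) - P(theta)))  # True
```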

Secondly, we show the effect these operators have on some vector $x \in \mathbb{R}^2$; namely, that

$$\langle P[\theta]x, P[\theta\pm\pi/2]x\rangle = 0$$

We proceed as follows:

$$\langle P[\theta]x, P[\theta\pm\pi/2]x\rangle = \langle P[\theta]x, (I_2-P[\theta])x\rangle = \langle P[\theta]x, x\rangle - \langle P[\theta]x, P[\theta]x\rangle = \|P[\theta]x\|\,\|x\|\cos\vartheta - \|P[\theta]x\|^2 = \|P[\theta]x\|\,\|P[\theta]x\| - \|P[\theta]x\|^2 = 0$$

where $\vartheta$ is the angle between $x$ and $P[\theta]x$, so that $\|x\|\cos\vartheta = \|P[\theta]x\|$.

Lemma 2. The product of the operators $P[\theta]$ and $P[\theta\pm\pi/2]$ is zero. We shall call $P[\theta\pm\pi/2]$ the Orthogonal Complement Operator of $P[\theta]$, which I shall denote as $P^{\perp}[\theta]$:

$$P[\theta]P^{\perp}[\theta] = 0, \quad \forall[\theta]\in\mathbb{T}$$

We say that $P[\theta] \perp P^{\perp}[\theta]$.

Proof.

$$P[\theta]P^{\perp}[\theta] = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}\begin{bmatrix} \sin^2[\theta] & -\cos[\theta]\sin[\theta] \\ -\sin[\theta]\cos[\theta] & \cos^2[\theta] \end{bmatrix} = \begin{bmatrix} \cos^2[\theta]\sin^2[\theta]-\cos^2[\theta]\sin^2[\theta] & -\cos^3[\theta]\sin[\theta]+\cos^3[\theta]\sin[\theta] \\ \sin^3[\theta]\cos[\theta]-\sin^3[\theta]\cos[\theta] & -\cos^2[\theta]\sin^2[\theta]+\sin^2[\theta]\cos^2[\theta] \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Lemma 3. $P[\theta] + P^{\perp}[\theta] = I_2$

Proof.

$$P[\theta] + P^{\perp}[\theta] = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} + \begin{bmatrix} \sin^2[\theta] & -\cos[\theta]\sin[\theta] \\ -\sin[\theta]\cos[\theta] & \cos^2[\theta] \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2$$
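Lemmas 2 and 3 say that $P[\theta]$ and $P^{\perp}[\theta]$ annihilate each other and resolve the identity; both claims can be checked in two lines with the same helper:

```python
theta = 0.7
Pperp = np.eye(2) - P(theta)                            # orthogonal complement operator
print(np.allclose(P(theta) @ Pperp, np.zeros((2, 2))))  # Lemma 2: product is zero
print(np.allclose(P(theta) + Pperp, np.eye(2)))         # Lemma 3: sum is the identity
```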

Theorem 3. A projection operator onto $V \subseteq \mathbb{R}^2$, where $\dim(V) = r$, satisfies

$$P[\theta] = T\Delta_rT^{-1} \quad (1)$$

where

$$\Delta_r := \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad (2)$$

The matrix $T$ is formed from the columns of the basis vectors which span the subspaces $V$ and $W = V^{\perp}$ respectively.

Proof. We know that $V, W \subseteq \mathbb{R}^2$; furthermore, we also know that $\dim(V) = \dim(W) = 1$, so $r = 1$.

Suppose the vector $a = (\cos\theta, \sin\theta)^T$ is a basis for the subspace $V$, i.e. $\mathrm{Spn}\{a\} = V$; then the matrix $T$ can be computed in the following way.

Taking into account the fact that $W = V^{\perp}$, the basis vector $b$ of $W$ is given by

$$b = (\cos(\theta\pm\pi/2), \sin(\theta\pm\pi/2)) = (\mp\sin\theta, \pm\cos\theta)$$

Hence, $T$ is computed in the following way:

$$T = \begin{bmatrix} \cos[\theta] & \mp\sin[\theta] \\ \sin[\theta] & \pm\cos[\theta] \end{bmatrix}, \qquad \det(T) = \pm\cos^2[\theta] \pm \sin^2[\theta] = \pm 1$$

Hence, $T^{-1}$ is given by

$$T^{-1} = (\pm 1)^{-1}\begin{bmatrix} \pm\cos[\theta] & \pm\sin[\theta] \\ -\sin[\theta] & \cos[\theta] \end{bmatrix} = \begin{bmatrix} \cos[\theta] & \sin[\theta] \\ \mp\sin[\theta] & \pm\cos[\theta] \end{bmatrix} = T^T, \qquad T, T^T \in O(2,\mathbb{R})$$

Hence, $P[\theta]$ is

$$T\Delta_rT^{-1} = \begin{bmatrix} \cos[\theta] & \mp\sin[\theta] \\ \sin[\theta] & \pm\cos[\theta] \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \cos[\theta] & \sin[\theta] \\ \mp\sin[\theta] & \pm\cos[\theta] \end{bmatrix} = \begin{bmatrix} \cos[\theta] & \mp\sin[\theta] \\ \sin[\theta] & \pm\cos[\theta] \end{bmatrix}\begin{bmatrix} \cos[\theta] & \sin[\theta] \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} = P[\theta]$$

We can see that $T, T^{-1} \in O(2,\mathbb{R})$: a rotation when $\det(T) = +1$ and an orthogonal reflection when $\det(T) = -1$.
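A sketch of the $T\Delta_rT^{-1}$ factorisation for the upper-sign (rotation) branch, again reusing `P`:

```python
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # columns are the basis vectors a and b
Delta = np.diag([1.0, 0.0])                      # Delta_r for r = 1
print(np.allclose(T @ Delta @ np.linalg.inv(T), P(theta)))  # True
```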

Lemma 4. Let $P[\theta]$ be some projection matrix for some $[\theta]\in\mathbb{T}$. Then we have

$$\mathrm{Rnk}(P[\theta]) = \mathrm{Tr}(P[\theta]) \quad (3)$$

Proof. We know that

$$P[\theta] = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}, \qquad \mathrm{Tr}(P[\theta]) = \cos^2[\theta]+\sin^2[\theta] = 1$$

We can now calculate its rank. The columns of $P[\theta]$ are

$$\mathrm{Cln}(P[\theta]) = \left\{\begin{bmatrix} \cos^2[\theta] \\ \sin[\theta]\cos[\theta] \end{bmatrix}, \begin{bmatrix} \cos[\theta]\sin[\theta] \\ \sin^2[\theta] \end{bmatrix}\right\}$$

where $\mathrm{Cln}(P[\theta])$ denotes the column space of $P[\theta]$. We have $\mathrm{Rnk}(P[\theta]) = 1$ iff there exist $\alpha_1, \alpha_2 \neq 0$ such that

$$\alpha_1\begin{bmatrix} \cos^2[\theta] \\ \sin[\theta]\cos[\theta] \end{bmatrix} + \alpha_2\begin{bmatrix} \cos[\theta]\sin[\theta] \\ \sin^2[\theta] \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This gives us the following linear system:

$$\begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Given that $\det(P[\theta]) = 0$, an infinite set of solutions exists.

Writing out the homogeneous linear system, we find the general set of vector solutions as follows:

$$\alpha_1\cos^2[\theta] + \alpha_2\cos[\theta]\sin[\theta] = 0$$

$$\alpha_1\sin[\theta]\cos[\theta] + \alpha_2\sin^2[\theta] = 0$$

The two equations are proportional; dividing the second by $\sin[\theta]$ gives

$$\alpha_1\cos[\theta] + \alpha_2\sin[\theta] = 0$$

$$\alpha_1 = -\alpha_2\frac{\sin[\theta]}{\cos[\theta]} = -\alpha_2\tan[\theta]$$

Hence, the solution set is

$$U = \{\alpha_2 \in \mathbb{R} \mid \alpha = \alpha_2(-\tan[\theta], 1)\}$$

This implies that $\mathrm{Rnk}(P[\theta]) = \mathrm{Tr}(P[\theta]) = 1$.
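Numerically, rank and trace agree as expected (same helper):

```python
M = P(0.7)
print(np.linalg.matrix_rank(M))  # 1
print(round(np.trace(M)))        # 1
```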

Theorem 4. Let $A = [a_1]$ be such that $V = \mathrm{Spn}(A)$. Then we have

$$P[\theta] = A(A^TA)^{-1}A^T \quad (4)$$

Proof. Let $a_1 = (\cos[\theta], \sin[\theta])^T = A$; then, since $a_1^Ta_1 = \|a_1\|^2 = 1$, we have

$$A(A^TA)^{-1}A^T = A(a_1^Ta_1)^{-1}A^T = AA^T = a_1a_1^T = P[\theta]$$
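Theorem 4 is the familiar least-squares projector formula specialised to a single basis column; a quick check with the same helper:

```python
theta = 0.7
A = np.array([[np.cos(theta)], [np.sin(theta)]])  # A = [a1], one unit column
print(np.allclose(A @ np.linalg.inv(A.T @ A) @ A.T, P(theta)))  # True
```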

Lemma 5. The spectrum $\sigma(P[\theta])$ of $P[\theta]$ for some $[\theta]\in\mathbb{T}$ is given by

$$\sigma(P[\theta]) = \{\lambda_1 = 0, \lambda_2 = 1\}, \quad \forall P[\theta] \in G_P([\theta])$$

Proof. We simply perform the following calculation:

$$\det(P[\theta]-\lambda I_2) = \begin{vmatrix} \cos^2\theta-\lambda & \cos\theta\sin\theta \\ \sin\theta\cos\theta & \sin^2\theta-\lambda \end{vmatrix} = (\cos^2\theta-\lambda)(\sin^2\theta-\lambda) - \cos^2\theta\sin^2\theta = \cos^2\theta\sin^2\theta - \lambda\cos^2\theta - \lambda\sin^2\theta + \lambda^2 - \cos^2\theta\sin^2\theta = \lambda^2 - \lambda$$

Therefore we see that

$$\det(P[\theta]-\lambda I_2) = 0 \;\Leftrightarrow\; \lambda^2-\lambda = 0 \;\Leftrightarrow\; \lambda(\lambda-1) = 0 \;\Leftrightarrow\; \lambda = 0 \;\vee\; \lambda = 1$$

Hence, $\sigma(P[\theta]) = \{0, 1\}$ as claimed.
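The spectrum can be confirmed with an eigensolver (`eigvalsh`, since $P[\theta]$ is symmetric; eigenvalues are returned in ascending order):

```python
print(np.round(np.linalg.eigvalsh(P(0.7)), 12))  # [0. 1.]
```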

We can calculate the associated eigenspaces. For $\lambda = 0$ we get

$$P[\theta]E_0 = \lambda E_0 = 0$$

Hence, it is clear that given some $P[\theta]$, its eigenspace for $\lambda = 0$ is $\mathrm{Ker}(P[\theta])$; in this case, the eigenspace is spanned by the following vector:

$$E_0([\theta]) := (\cos([\theta]+\pi/2), \sin([\theta]+\pi/2))^T = \begin{bmatrix} -\sin([\theta]) \\ \cos([\theta]) \end{bmatrix}$$

Therefore, we can say that if $\mathrm{Im}(P[\theta])$ is $V \subseteq \mathbb{R}^2$ then $\mathrm{Spn}\{E_0([\theta])\} = V^{\perp}$.

For $\lambda = 1$ we get the following eigenvectors:

$$P[\theta]E_1 = \lambda E_1 = E_1$$

$$P[\theta]E_1 - E_1 = (P[\theta]-I_2)E_1 = 0$$

In matrix form, this gives us

$$\begin{bmatrix} \cos^2\theta-1 & \cos\theta\sin\theta \\ \sin\theta\cos\theta & \sin^2\theta-1 \end{bmatrix}\begin{bmatrix} E_1 \\ E_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

$$\begin{bmatrix} -\sin^2\theta & \cos\theta\sin\theta \\ \sin\theta\cos\theta & -\cos^2\theta \end{bmatrix}\begin{bmatrix} E_1 \\ E_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

where $(E_1, E_2)^T$ here denotes the components of the eigenvector.

This gives us the following system:

$$-\sin^2\theta\,E_1 + \cos\theta\sin\theta\,E_2 = 0$$

$$\sin\theta\cos\theta\,E_1 - \cos^2\theta\,E_2 = 0$$

Therefore, from the second equation we get

$$\sin\theta\cos\theta\,E_1 = \cos^2\theta\,E_2 \;\Rightarrow\; \tan\theta\,E_1 = E_2$$

Therefore,

$$\frac{E_2}{E_1} = \tan\theta \;\Rightarrow\; E_1 = E_2\cot\theta$$

Therefore, the eigenvector is given by

$$E_1 = E_2\begin{bmatrix} \cot\theta \\ 1 \end{bmatrix}$$

Lemma 6. The eigenvectors $E_0$ and $E_1$ are linearly independent. Moreover, $\mathrm{Spn}\{E_0, E_1\} = \mathbb{R}^2$.

Proof. For $\gamma_0, \gamma_1 \in \mathbb{R}$ we have

$$\gamma_0E_0 + \gamma_1E_1 = 0$$

Clearly, $E_0, E_1$ are linearly independent iff $\gamma_0 = \gamma_1 = 0$ is the only solution of

$$-\gamma_0\sin\theta + \gamma_1\cot\theta = 0$$

$$\gamma_0\cos\theta + \gamma_1 = 0$$

This can be written in matrix form in the following way:

$$\begin{bmatrix} -\sin\theta & \cot\theta \\ \cos\theta & 1 \end{bmatrix}\begin{bmatrix} \gamma_0 \\ \gamma_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Now,

$$\begin{vmatrix} -\sin\theta & \cot\theta \\ \cos\theta & 1 \end{vmatrix} = -\sin\theta - \cot\theta\cos\theta = -\frac{1}{\sin\theta} \neq 0, \quad \forall\theta$$

Since the determinant is non-zero, $E_0$ and $E_1$ are linearly independent.

We can now discuss the diagonalizability of the projection matrices $P[\theta]$. Given that $P[\theta]$ has distinct eigenvalues, it is diagonalizable, i.e. $P[\theta] = PDP^{-1}$ where $D$ is a diagonal matrix.

Lemma 7.

$$P[\theta] = P\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}P^{-1}$$

where

$$P = [E_0, E_1] = \begin{bmatrix} -\sin\theta & \cot\theta \\ \cos\theta & 1 \end{bmatrix}, \qquad P^{-1} = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} -1 & \cot\theta \\ \cos\theta & \sin\theta \end{bmatrix}$$

Proof.

$$P[\theta] = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} -\sin\theta & \cot\theta \\ \cos\theta & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} -1 & \cot\theta \\ \cos\theta & \sin\theta \end{bmatrix} = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} -\sin\theta & \cot\theta \\ \cos\theta & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ \cos\theta & \sin\theta \end{bmatrix} = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} \cot\theta\cos\theta & \cot\theta\sin\theta \\ \cos\theta & \sin\theta \end{bmatrix} = \begin{bmatrix} \cos^2\theta & \cos\theta\sin\theta \\ \sin\theta\cos\theta & \sin^2\theta \end{bmatrix} = P[\theta]$$
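The diagonalization can also be checked numerically; note that numpy's `eigh` returns an orthonormal eigenbasis, so $V^{-1} = V^T$ — a different scaling of the eigenvectors than the $\cot\theta$ form above, but the same factorisation:

```python
M = P(0.7)
w, V = np.linalg.eigh(M)                     # eigenvalues [0, 1] with eigenvector matrix V
print(np.allclose(V @ np.diag(w) @ V.T, M))  # True: P = V D V^{-1}
```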

Theorem 5. The projection matrix $P[\theta]$ with spectrum $\sigma = \{\lambda_1, \lambda_2\}$ is diagonalizable; hence there exist matrices $\{G_1, G_2\}$ such that

$$P[\theta] = \lambda_1G_1 + \lambda_2G_2 \quad (5)$$

where $G_i$ is the projection matrix onto $N(P[\theta]-\lambda_iI_2)$ along $R(P[\theta]-\lambda_iI_2)$. Furthermore,

1) $G_iG_j = 0$ whenever $i \neq j$;

2) $\sum_iG_i = I_2$.

Proof. We know that the eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 1$, which obviously implies that

$$P[\theta] = \lambda_2G_2 = G_2$$

Indeed,

$$P[\theta] = PDP^{-1} = (X_1 | X_2)\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} Y_1^T \\ Y_2^T \end{pmatrix} = \lambda_1X_1Y_1^T + \lambda_2X_2Y_2^T = X_2Y_2^T = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} \cot\theta \\ 1 \end{bmatrix}\begin{bmatrix} \cos\theta & \sin\theta \end{bmatrix} = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} \cot\theta\cos\theta & \cot\theta\sin\theta \\ \cos\theta & \sin\theta \end{bmatrix} = P[\theta]$$

This implies that $G_2 = X_2Y_2^T$.

Lemma 8. Let $G_0$ and $G_1$ be the spectral projectors for the eigenvalues $\lambda_1$ and $\lambda_2$ respectively. Then:

1)

$$G_0G_1 = 0$$

2)

$$G_0 + G_1 = I_2$$

Proof. Writing $G_0 = X_1Y_1^T$ and $G_1 = X_2Y_2^T$ (and suppressing the common prefactor $\frac{1}{\sin\theta+\cot\theta\cos\theta}$ in each factor pair):

$$G_0G_1 = X_1Y_1^TX_2Y_2^T = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}\begin{bmatrix} -1 & \cot\theta \end{bmatrix}\begin{bmatrix} \cot\theta \\ 1 \end{bmatrix}\begin{bmatrix} \cos\theta & \sin\theta \end{bmatrix} = \begin{bmatrix} \sin\theta & -\sin\theta\cot\theta \\ -\cos\theta & \cos\theta\cot\theta \end{bmatrix}\begin{bmatrix} \cot\theta\cos\theta & \cot\theta\sin\theta \\ \cos\theta & \sin\theta \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

as claimed.

For the second part of the proof, we simply add the matrices:

$$G_0 + G_1 = \frac{1}{\sin\theta+\cot\theta\cos\theta}\left\{\begin{bmatrix} \sin\theta & -\sin\theta\cot\theta \\ -\cos\theta & \cos\theta\cot\theta \end{bmatrix} + \begin{bmatrix} \cot\theta\cos\theta & \cot\theta\sin\theta \\ \cos\theta & \sin\theta \end{bmatrix}\right\} = \frac{1}{\sin\theta+\cot\theta\cos\theta}\begin{bmatrix} \sin\theta+\cot\theta\cos\theta & 0 \\ 0 & \cos\theta\cot\theta+\sin\theta \end{bmatrix} = I_2$$
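For this spectrum the spectral projectors are simply $G_1 = P[\theta]$ and $G_0 = I_2 - P[\theta]$, so both claims of Lemma 8 reduce to earlier identities and are easy to verify:

```python
M = P(0.7)
G1, G0 = M, np.eye(2) - M                      # projectors for lambda = 1 and lambda = 0
print(np.allclose(G0 @ G1, np.zeros((2, 2))))  # 1) G0 G1 = 0
print(np.allclose(G0 + G1, np.eye(2)))         # 2) G0 + G1 = I2
```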

3. Lie Group Action of $SO(2,\mathbb{R})$ on $G_P([\theta])$

From [3] [4] we have

Definition 1. Two square matrices $A$ and $B$ are said to be congruent if there exists an invertible matrix $R$ such that

$$B = R^TAR \quad (6)$$

The form of the equation would tend to suggest that $R^T, R \in SO(2,\mathbb{R})$, so that $R^TR = RR^T = I_2$.

Let $\alpha = \theta' - \theta$ be the rotation by some angle $\alpha$, with $\alpha \in (-\pi, \pi)$: $\alpha > 0$ implies the rotation is counter-clockwise, $\alpha < 0$ implies the rotation is clockwise.

Theorem 6. The action of $SO(2)$ on $G_P([\theta])$ defines a congruence between two elements $P[\theta]$ and $P[\theta']$ in $G_P([\theta])$, i.e.

$$\xi: SO(2) \times G_P([\theta]) \to G_P([\theta])$$

such that

$$\xi(R(\alpha), P[\theta]) = R^T(\alpha)P[\theta]R(\alpha) = P[\theta'] \quad (7)$$

Proof. A point on $S^1$ can be represented as $e^{i\theta} = (\cos\theta, \sin\theta)^T$.

Hence, we can choose two points on $S^1$: $e^{i[\theta]} = (\cos[\theta], \sin[\theta])^T$ and $e^{i[\theta']} = (\cos[\theta'], \sin[\theta'])^T$. Then

$$\xi(R(\alpha), P[\theta]) = R^T(\alpha)P[\theta]R(\alpha) = (R^T(\alpha)e^{i[\theta]})((e^{i[\theta]})^TR(\alpha)) = e^{i[\theta']}(e^{i[\theta']})^T = P[\theta']$$

It should be clear that this action corresponds to the projector in the direction of $[\theta]+\alpha$, $|\alpha| < \pi$, which, as described in the first article, is consistent with the topological structure. This is a clockwise transformation of the projection operator.
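The congruence is straightforward to verify numerically. One caveat in the sketch below (an assumption of this sketch, not a claim from the articles): with the standard counter-clockwise matrix for $R(\alpha)$, the congruence $R^T(\alpha)P[\theta]R(\alpha)$ shifts the angle clockwise to $\theta-\alpha$, while $R(\alpha)P[\theta]R^T(\alpha)$ shifts it counter-clockwise to $\theta+\alpha$.

```python
def R(alpha):
    # Counter-clockwise rotation matrix in SO(2); reused in later sketches.
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

theta, alpha = 0.7, 0.3
print(np.allclose(R(alpha).T @ P(theta) @ R(alpha), P(theta - alpha)))  # True
print(np.allclose(R(alpha) @ P(theta) @ R(alpha).T, P(theta + alpha)))  # True
```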

This, of course, implies that all projection operators are congruent matrices, since it is always possible to find some $R(\alpha) \in SO(2)$. Moreover, we know that $I$ is the identity of $SO(2)$; hence we get the following result:

$$\xi(I, P[\theta]) = I^TP[\theta]I = IP[\theta]I = P[\theta]I = P[\theta]$$

Finally, given some $P[\theta] \in G_P([\theta])$, the mapping $\xi$ is bijective on $|[\theta]+\alpha| < \pi$, which implies that the mapping $\xi$ is invertible; the inverse can be defined as

$$\xi^{-1}(R(\alpha), P[\theta]) := \xi(R(-\alpha), P[\theta]) \quad (8)$$

We can see that this is equal to

$$\xi^{-1}(R(\alpha), P[\theta]) = R^T(-\alpha)P[\theta]R(-\alpha) = R(\alpha)P[\theta]R^T(\alpha)$$

such that

$$(\xi^{-1}\circ\xi)(R(\alpha), P[\theta]) = R(\alpha)(R^T(\alpha)P[\theta]R(\alpha))R^T(\alpha) = IP[\theta]I = P[\theta]$$

Note that $\xi^{-1}$ is a counter-clockwise transformation of the projection operator.

Lemma 9. Let $R(\alpha), R(\alpha') \in SO(2)$ and let $P[\theta] \in G_P([\theta])$. Then

$$\xi(R(\alpha'), \xi(R(\alpha), P[\theta])) = R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha') = \xi(R(\alpha)R(\alpha'), P[\theta]) \quad (9)$$

Proof.

$$\xi(R(\alpha'), \xi(R(\alpha), P[\theta])) = \xi(R(\alpha'), R^T(\alpha)P[\theta]R(\alpha)) = R^T(\alpha')(R^T(\alpha)P[\theta]R(\alpha))R(\alpha') = R^T(\alpha')R^T(\alpha)P[\theta]R(\alpha)R(\alpha') = R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha') = \xi(R(\alpha)R(\alpha'), P[\theta])$$

Lemma 10. Let $R(\alpha)$ and $P[\theta]$ be elements of $SO(2)$ and $G_P([\theta])$ respectively. Then we have

$$\xi(R(0), P[\theta]) = \xi(R(k\pi), P[\theta]) = P[\theta], \quad \forall k \in \mathbb{Z} \quad (10)$$

Proof. Beginning with the case where $\alpha = 0$ we get the following result:

$$\xi(R(0), P[\theta]) = R^T(0)P[\theta]R(0) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = P[\theta]$$

Now choosing $\alpha = k\pi$ for some $k \in \mathbb{Z}$, and noting that $R(k\pi) = (-1)^kI_2$, leads to

$$\xi(R(k\pi), P[\theta]) = R^T(k\pi)P[\theta]R(k\pi) = \begin{bmatrix} (-1)^k & 0 \\ 0 & (-1)^k \end{bmatrix}\begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix}\begin{bmatrix} (-1)^k & 0 \\ 0 & (-1)^k \end{bmatrix} = \begin{bmatrix} (-1)^{2k}\cos^2[\theta] & (-1)^{2k}\cos[\theta]\sin[\theta] \\ (-1)^{2k}\sin[\theta]\cos[\theta] & (-1)^{2k}\sin^2[\theta] \end{bmatrix} = P[\theta]$$

This shows us that projection matrices have a period of π .
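The $\pi$-periodicity, and hence the stabilizer of Definition 2 below, can be confirmed with the helpers above:

```python
theta = 0.7
for k in range(-2, 3):
    assert np.allclose(R(k * np.pi).T @ P(theta) @ R(k * np.pi), P(theta))
print("R(k*pi) fixes P[theta] for k = -2, ..., 2")
```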

We can now calculate the stabilizer of an element of $G_P([\theta])$ as follows.

Definition 2. When the rotation group $SO(2)$ acts on $G_P([\theta])$, the stabilizer of some fixed element $P[\theta] \in G_P([\theta])$, denoted $\mathrm{Stab}_{SO(2)}(P[\theta])$, is defined as

$$\mathrm{Stab}_{SO(2)}(P[\theta]) := \{R(\alpha) \in SO(2) : \xi(R(\alpha), P[\theta]) = P[\theta]\}$$

From the above definition it is clear that

$$\mathrm{Stab}_{SO(2)}(P[\theta]) = \{R(k\pi) : \alpha = k\pi, k \in \mathbb{Z}\}$$

Lemma 11.

$$\mathrm{Stab}_{SO(2)}(P[\theta]) \leq SO(2)$$

that is, the stabilizer is a subgroup of $SO(2)$.

Proof. Choosing $k = 0$ gives $R(0) = I_2$ and $\xi(R(0), P[\theta]) = I_2P[\theta]I_2 = P[\theta]$, so the identity lies in the stabilizer. We know that for every $R(k\pi)$ there exists $R^{-1}(k\pi) = R^T(k\pi)$ such that $R(k\pi)R^T(k\pi) = R^T(k\pi)R(k\pi) = I_2$.

Hence, for some $k, k' \in \mathbb{Z}$ we have

$$R(k'\pi)(R^T(k\pi)P[\theta]R(k\pi))R^T(k'\pi) = R(k'\pi)P[\theta]R^T(k'\pi) = P[\theta]$$

This implies that

$$R(k'\pi)(R^T(k\pi)R(k\pi))R^T(k'\pi) = R(k'\pi)I_2R^T(k'\pi) = I_2 \in \mathrm{Stab}_{SO(2)}(P[\theta])$$

Theorem 7. Let $P[\theta], P[\theta'] \in G_P([\theta])$ and let $\theta' = \alpha + \alpha'$. Then

$$P[\theta+\theta'] = R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha')$$

Proof.

$$R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha') = R^T(\alpha')R^T(\alpha)P[\theta]R(\alpha)R(\alpha') = R^T(\alpha')P[\theta+\alpha]R(\alpha') = P[\theta+(\alpha+\alpha')] = P[\theta+\theta']$$

Hence, we can conclude that

$$R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha') = P[\theta+\theta']$$

Lemma 12. The group action of $SO(2,\mathbb{R})$ on $G_P([\theta])$ is not faithful, but it is ∞-transitive.

Proof. For the group action to be faithful, for every pair of distinct elements $R(\alpha) \neq R(\alpha')$ of $SO(2,\mathbb{R})$ there must be some $P[\theta] \in G_P([\theta])$ such that $R(\alpha)P[\theta]R^T(\alpha) \neq R(\alpha')P[\theta]R^T(\alpha')$.

Consider the set $SO(2,\mathbb{R}) \times SO(2,\mathbb{R})$ and let us choose some arbitrary projector in $G_P([\theta])$. Specifically, let us consider a pair $(R(\alpha), R(\alpha')) \in SO(2,\mathbb{R}) \times SO(2,\mathbb{R})$ such that $\alpha' = \alpha \pm \pi$. Then we demonstrate that it is impossible to find an element of $G_P([\theta])$ for which the above condition is satisfied: since $R(\alpha\pm\pi) = -R(\alpha)$,

$$R^T(\alpha')P[\theta]R(\alpha') = R^T(\alpha\pm\pi)P[\theta]R(\alpha\pm\pi) = (-R^T(\alpha))P[\theta](-R(\alpha)) = R^T(\alpha)P[\theta]R(\alpha) \quad (11)$$

Since $\alpha \neq \alpha'$ and $P[\theta]$ was arbitrary, we conclude that this action is not faithful.

However, we are going to demonstrate that it is $n$-transitive for every $n$. To show this, we consider two pairwise distinct projector sequences of the form

$$(\{P[\theta_i]\}, \{P[\theta'_r]\}), \quad i, r = 1, \ldots, n$$

Each sequence is pairwise distinct, that is, $P[\theta_i] \neq P[\theta_j]$ for all $i \neq j$, and $P[\theta'_r] \neq P[\theta'_s]$ for all $r \neq s$. Suppose that each sequence is chosen so that the classes $[\theta_i], [\theta'_r] \in \mathbb{T}$ form two arithmetic sequences such that the quotient metric satisfies $d_{\mathbb{T}}([\theta_i], [\theta'_r]) = \inf_{k\in\mathbb{Z}}\{|\theta_i - \theta'_r + 2k\pi|\} = \beta < \pi$ for each matched pair $i = r$, $i, r = 1, \ldots, n$. Then we have the following result:

$$R^T(\beta)P[\theta_i]R(\beta) = P[\theta'_r], \quad i, r = 1, \ldots, n$$

We can define a refinement of the sequence in the following way:

$$\sigma(\beta) = \beta', \quad \beta' < \beta \quad\Longrightarrow\quad \sigma(\{P[\theta_i]\}_{i=1}^{n}) = \{P[\theta_i]\}_{i=1}^{n'}, \quad n < n'$$

As $\beta \to 0$ we have $n \to \infty$, and

$$\lim_{n\to\infty}\left(\{P[\theta_i]\}_{i=1}^{n} \cup \{P[\theta'_r]\}_{r=1}^{n}\right) = P_\alpha \in G_P([\theta]) \in \tau_{G_P([\theta])}$$

where, as was shown, $\tau_{G_P([\theta])}$ is the topology of $G_P[\theta]$. Hence, this group action is ∞-transitive.

Lemma 13. The kernel of $\xi$ is given by

$$\mathrm{Ker}\,\xi := \{R(\alpha) \in SO(2,\mathbb{R}) : R^T(\alpha)P[\theta]R(\alpha) = P[0]\} = \{R(\alpha) \in SO(2,\mathbb{R}) : \alpha := \theta\}$$

for some $P[\theta] \in G_P([\theta])$.

Proof. Let $P[\theta]$ be some element of $G_P([\theta])$ and choose $\alpha = \theta$; then we have

$$\xi(R(\theta), P[\theta]) = R^T(\theta)P[\theta]R(\theta) = R(-\theta)P[\theta]R^T(-\theta) = \xi^{-1}(R(-\theta), P[\theta]) = P[0] = \mathrm{Id}_{G_P([\theta])}$$

Lemma 14. The Lie group $G_P([\theta])$ has exactly one orbit under this action.

Proof. We know that the action is transitive, since for every pair $(P[\theta], P[\theta'])$ in $G_P([\theta]) \times G_P([\theta])$ there exists $R(\alpha+k\pi)$, $k \in \mathbb{Z}$, such that $P[\theta'] = R^T(\alpha+k\pi)P[\theta]R(\alpha+k\pi)$.

We just need to show that for some fixed $P[\theta] \in G_P([\theta])$ the orbit $\{R^T(\alpha)P[\theta]R(\alpha)\}$ is all of $G_P([\theta])$. It is clear that choosing $-\frac{\pi}{2} < \alpha \leq \frac{\pi}{2}$ we get

$$\begin{cases} R^T\left(\frac{\pi}{2}\right)P[\theta]R\left(\frac{\pi}{2}\right) = P\left[\theta+\frac{\pi}{2}\right] = P^{\perp}[\theta] \\ R^T\left(-\frac{\pi}{2}\right)P[\theta]R\left(-\frac{\pi}{2}\right) = P\left[\theta-\frac{\pi}{2}\right] = R\left(\frac{\pi}{2}\right)P[\theta]R^T\left(\frac{\pi}{2}\right) = P^{\perp}[\theta] \end{cases}$$

Due to the fact that the projectors repeat every $\pi$, we see that $\{R^T(\alpha)P[\theta]R(\alpha) : \alpha\} = G_P([\theta])$.

Definition 3. The $\mathrm{Vec}$ operator is a linear transformation which converts a matrix into a column vector. That is,

$$\mathrm{Vec}: \mathbb{R}^{m\times n} \to \mathbb{R}^{mn}$$

defined as

$$\mathrm{Vec}(A) = [a_{1,1}, \ldots, a_{m,1}, a_{1,2}, \ldots, a_{m,2}, \ldots, a_{m,n}]^T, \quad A \in \mathbb{R}^{m\times n}$$

i.e. the columns of $A$ are stacked on top of one another.

Theorem 8. The group $SO(2,\mathbb{R})$ defines a homeomorphism group on $G_P([\theta])$; that is, each element defines a map

$$G_P([\theta]) \to G_P([\theta])$$

For some $P[\theta] \in G_P([\theta])$ and for some translation angle $\alpha \in [0,\pi)$, this mapping can be defined as follows:

$$P[\theta'] = \mathrm{Vec}^{-1}\{(R(\alpha) \otimes R(\alpha))\mathrm{Vec}(P[\theta])\}$$

Proof.

$$(R(\alpha)\otimes R(\alpha))\mathrm{Vec}(P[\theta]) = \begin{bmatrix} \cos^2\alpha & -\cos\alpha\sin\alpha & -\sin\alpha\cos\alpha & \sin^2\alpha \\ \cos\alpha\sin\alpha & \cos^2\alpha & -\sin^2\alpha & -\sin\alpha\cos\alpha \\ \sin\alpha\cos\alpha & -\sin^2\alpha & \cos^2\alpha & -\cos\alpha\sin\alpha \\ \sin^2\alpha & \sin\alpha\cos\alpha & \cos\alpha\sin\alpha & \cos^2\alpha \end{bmatrix}\begin{bmatrix} \cos^2[\theta] \\ \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] \\ \sin^2[\theta] \end{bmatrix}$$

$$= \begin{bmatrix} \cos^2\alpha\cos^2[\theta] - 2\cos\alpha\sin\alpha\cos[\theta]\sin[\theta] + \sin^2\alpha\sin^2[\theta] \\ \cos\alpha\sin\alpha\cos^2[\theta] + \cos^2\alpha\cos[\theta]\sin[\theta] - \sin^2\alpha\sin[\theta]\cos[\theta] - \sin\alpha\cos\alpha\sin^2[\theta] \\ \sin\alpha\cos\alpha\cos^2[\theta] - \sin^2\alpha\cos[\theta]\sin[\theta] + \cos^2\alpha\sin[\theta]\cos[\theta] - \cos\alpha\sin\alpha\sin^2[\theta] \\ \sin^2\alpha\cos^2[\theta] + 2\sin\alpha\cos\alpha\cos[\theta]\sin[\theta] + \cos^2\alpha\sin^2[\theta] \end{bmatrix} = \begin{bmatrix} (\cos\alpha\cos[\theta]-\sin\alpha\sin[\theta])^2 \\ (\cos\alpha\cos[\theta]-\sin\alpha\sin[\theta])(\sin\alpha\cos[\theta]+\cos\alpha\sin[\theta]) \\ (\sin\alpha\cos[\theta]+\cos\alpha\sin[\theta])(\cos\alpha\cos[\theta]-\sin\alpha\sin[\theta]) \\ (\sin\alpha\cos[\theta]+\cos\alpha\sin[\theta])^2 \end{bmatrix} = \begin{bmatrix} \cos^2(\alpha+[\theta]) \\ \cos(\alpha+[\theta])\sin(\alpha+[\theta]) \\ \sin(\alpha+[\theta])\cos(\alpha+[\theta]) \\ \sin^2(\alpha+[\theta]) \end{bmatrix}$$

Let $\theta' = \alpha + [\theta]$; then we have

$$(R(\alpha)\otimes R(\alpha))\mathrm{Vec}(P[\theta]) = \begin{bmatrix} \cos^2[\theta'] \\ \cos[\theta']\sin[\theta'] \\ \sin[\theta']\cos[\theta'] \\ \sin^2[\theta'] \end{bmatrix}$$

Knowing the size of the original matrix (here $2\times 2$) and using the properties of the $\mathrm{Vec}$ operator, i.e. the isomorphism $\mathbb{R}^{2\times 2} \cong \mathbb{R}^4$, we find that

$$\mathrm{Vec}^{-1}\{(R(\alpha)\otimes R(\alpha))\mathrm{Vec}(P[\theta])\} = \begin{bmatrix} \cos^2[\theta'] & \cos[\theta']\sin[\theta'] \\ \sin[\theta']\cos[\theta'] & \sin^2[\theta'] \end{bmatrix} = P[\theta']$$
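A check of the $\mathrm{Vec}$/Kronecker form, using column-major stacking as in Definition 3 (with the counter-clockwise `R` helper from earlier, the angle is shifted to $\theta+\alpha$):

```python
theta, alpha = 0.7, 0.3
v = P(theta).flatten(order='F')          # Vec(P[theta]), columns stacked
w = np.kron(R(alpha), R(alpha)) @ v      # (R(alpha) x R(alpha)) Vec(P[theta])
M = w.reshape((2, 2), order='F')         # Vec^{-1}
print(np.allclose(M, P(theta + alpha)))  # True
```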

4. The Group Operation of $G_P([\theta])$ as Matrix Products

We have already seen that the group operation on $G_P([\theta])$ is as follows:

$$\varphi(P[\theta], P[\theta']) = P[\theta+\theta'], \quad \forall P[\theta], P[\theta'] \in G_P([\theta]) \quad (12)$$

where

$$P[\theta] = e^{i[\theta]}(e^{i[\theta]})^T, \qquad P[\theta'] = e^{i[\theta']}(e^{i[\theta']})^T$$

Hence, $P[\theta+\theta'] = e^{i[\theta+\theta']}(e^{i[\theta+\theta']})^T$.

Definition 4. $\circ$ is the Hadamard product, defined as follows.

Let $A$ and $B$ be two matrices of the same dimensions; then the Hadamard product is

$$(A \circ B)_{i,j} := (A)_{i,j}(B)_{i,j}$$

Theorem 9.

$$P[\theta+\theta'] = \varphi(P[\theta], P[\theta']) = \begin{bmatrix} \varphi_{11}(P[\theta],P[\theta']) & \varphi_{12}(P[\theta],P[\theta']) \\ \varphi_{21}(P[\theta],P[\theta']) & \varphi_{22}(P[\theta],P[\theta']) \end{bmatrix} \quad (13)$$

where

$$\begin{cases} \varphi_{11}(P[\theta],P[\theta']) := \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P[\theta']) \\ \varphi_{12}(P[\theta],P[\theta']) = \varphi_{21}(P[\theta],P[\theta']) := \mathbf{1}_+^T\,\mathrm{Vec}(P[\theta]\circ QP[\theta']Q^*) \\ \varphi_{22}(P[\theta],P[\theta']) := \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P^{\perp}[\theta']) \end{cases}$$

where $\mathbf{1}_- = [1,-1,-1,1]^T$ and $\mathbf{1}_+ = [1,1,1,1]^T$. The matrices $Q$ and $Q^*$ are defined as

$$Q := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad Q^* := \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

Proof. For the element $\varphi_{11}$:

$$\mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P[\theta']) = [1,-1,-1,1]\,\mathrm{Vec}\begin{bmatrix} \cos^2[\theta]\cos^2[\theta'] & \cos[\theta]\sin[\theta]\cos[\theta']\sin[\theta'] \\ \sin[\theta]\cos[\theta]\sin[\theta']\cos[\theta'] & \sin^2[\theta]\sin^2[\theta'] \end{bmatrix} = \cos^2[\theta]\cos^2[\theta'] - 2\sin[\theta]\cos[\theta]\sin[\theta']\cos[\theta'] + \sin^2[\theta]\sin^2[\theta'] = (\cos[\theta]\cos[\theta'] - \sin[\theta]\sin[\theta'])^2 = \cos^2([\theta+\theta'])$$

For the anti-diagonal elements, it is clear that they are equal due to symmetry. Here

$$QP[\theta']Q^* = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \cos^2[\theta'] & \cos[\theta']\sin[\theta'] \\ \sin[\theta']\cos[\theta'] & \sin^2[\theta'] \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} \sin[\theta']\cos[\theta'] & -\sin^2[\theta'] \\ \cos^2[\theta'] & -\cos[\theta']\sin[\theta'] \end{bmatrix}$$

so that

$$\varphi_{12} = \varphi_{21} = \mathbf{1}_+^T\,\mathrm{Vec}(P[\theta]\circ QP[\theta']Q^*) = \cos^2[\theta]\sin[\theta']\cos[\theta'] - \cos[\theta]\sin[\theta]\sin^2[\theta'] + \sin[\theta]\cos[\theta]\cos^2[\theta'] - \sin^2[\theta]\cos[\theta']\sin[\theta'] = \sin[\theta']\cos[\theta'](\cos^2[\theta]-\sin^2[\theta]) + \sin[\theta]\cos[\theta](\cos^2[\theta']-\sin^2[\theta']) = \cos([\theta+\theta'])\sin([\theta+\theta'])$$

For the element $p_{22}$ we get the following result:

$$\mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P^{\perp}[\theta']) = [1,-1,-1,1]\,\mathrm{Vec}\begin{bmatrix} \cos^2[\theta]\sin^2[\theta'] & -\cos[\theta]\sin[\theta]\cos[\theta']\sin[\theta'] \\ -\sin[\theta]\cos[\theta]\cos[\theta']\sin[\theta'] & \sin^2[\theta]\cos^2[\theta'] \end{bmatrix} = \cos^2[\theta]\sin^2[\theta'] + 2\sin[\theta]\cos[\theta]\cos[\theta']\sin[\theta'] + \sin^2[\theta]\cos^2[\theta'] = (\cos[\theta]\sin[\theta'] + \sin[\theta]\cos[\theta'])^2 = \sin^2([\theta+\theta'])$$
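All three component formulas of Theorem 9 can be verified at once; the sketch below assumes the sign conventions for $\mathbf{1}_-$, $Q$ and $Q^*$ exactly as stated in the theorem, and reuses the helper `P`:

```python
theta, thetap = 0.7, 0.4
one_m = np.array([1.0, -1.0, -1.0, 1.0])       # 1_-
one_p = np.ones(4)                             # 1_+
Q  = np.array([[0.0, 1.0], [1.0, 0.0]])
Qs = np.array([[1.0, 0.0], [0.0, -1.0]])       # Q*
vec = lambda A: A.flatten(order='F')           # column-major Vec

phi11 = one_m @ vec(P(theta) * P(thetap))                # '*' is the Hadamard product
phi12 = one_p @ vec(P(theta) * (Q @ P(thetap) @ Qs))
phi22 = one_m @ vec(P(theta) * (np.eye(2) - P(thetap)))  # P-perp[theta']

print(np.allclose(np.array([[phi11, phi12], [phi12, phi22]]),
                  P(theta + thetap)))                    # True
```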

Now choosing $\theta' = 0$ implies that $\varphi(P[\theta], P[0]) = P[\theta+0] = P[\theta]$; that is,

$$\varphi(P[\theta], P[0]) = \begin{bmatrix} \mathbf{1}_-^T\mathrm{Vec}(P[\theta]\circ P[0]) & \mathbf{1}_+^T\mathrm{Vec}(P[\theta]\circ QP[0]Q^*) \\ \mathbf{1}_+^T\mathrm{Vec}(P[\theta]\circ QP[0]Q^*) & \mathbf{1}_-^T\mathrm{Vec}(P[\theta]\circ P^{\perp}[0]) \end{bmatrix} = \begin{bmatrix} \cos^2[\theta] & \cos[\theta]\sin[\theta] \\ \sin[\theta]\cos[\theta] & \sin^2[\theta] \end{bmatrix} = P[\theta]$$

To be thorough, let us choose $\theta' = -\theta$; we should expect $\varphi(P[\theta], P[-\theta]) = P[\theta-\theta] = P[0]$, that is,

$$\varphi(P[\theta], P[-\theta]) = \begin{bmatrix} \varphi_{11}(P[\theta],P[-\theta]) & \varphi_{12}(P[\theta],P[-\theta]) \\ \varphi_{21}(P[\theta],P[-\theta]) & \varphi_{22}(P[\theta],P[-\theta]) \end{bmatrix} = \begin{bmatrix} \cos^2(\theta-\theta) & \cos(\theta-\theta)\sin(\theta-\theta) \\ \sin(\theta-\theta)\cos(\theta-\theta) & \sin^2(\theta-\theta) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = P[0] = P_e$$

Last but not least, we check that the operation is associative; that is, we want to make sure that

$$\varphi(P[\theta], \varphi(P[\theta'], P[\theta''])) = \varphi(\varphi(P[\theta], P[\theta']), P[\theta''])$$

Proof. Clearly the operation is associative if all component functions are associative. Hence we have to show that

$$\varphi_{11}(P[\theta], \varphi_{11}(P[\theta'], P[\theta''])) = \varphi_{11}(\varphi_{11}(P[\theta], P[\theta']), P[\theta''])$$

$$\varphi_{11}(P[\theta], \varphi_{11}(P[\theta'], P[\theta''])) = \varphi_{11}(P[\theta], P[\theta'+\theta'']) = \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P[\theta'+\theta'']) = [1,-1,-1,1]\begin{bmatrix} \cos^2\theta\cos^2(\theta'+\theta'') \\ \cos\theta\sin\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'') \\ \sin\theta\cos\theta\sin(\theta'+\theta'')\cos(\theta'+\theta'') \\ \sin^2\theta\sin^2(\theta'+\theta'') \end{bmatrix} = \cos^2\theta\cos^2(\theta'+\theta'') - 2\cos\theta\sin\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'') + \sin^2\theta\sin^2(\theta'+\theta'')$$

$$= (\cos\theta\cos(\theta'+\theta'') - \sin\theta\sin(\theta'+\theta''))^2 = \cos^2(\theta+\theta'+\theta'')$$

Since the Hadamard product is associative, we can conclude that

$$\varphi_{11}(P[\theta], \varphi_{11}(P[\theta'], P[\theta''])) = \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ(P[\theta']\circ P[\theta''])) = \mathbf{1}_-^T\,\mathrm{Vec}((P[\theta]\circ P[\theta'])\circ P[\theta'']) = \varphi_{11}(\varphi_{11}(P[\theta], P[\theta']), P[\theta''])$$

Next, we deal with the anti-diagonal elements. We need to show that

$$\varphi_{12}(P[\theta], \varphi_{12}(P[\theta'], P[\theta''])) = \varphi_{12}(\varphi_{12}(P[\theta], P[\theta']), P[\theta''])$$

$$\varphi_{12}(P[\theta], \varphi_{12}(P[\theta'], P[\theta''])) = \mathbf{1}_+^T\,\mathrm{Vec}(P[\theta]\circ QP[\theta'+\theta'']Q^*) = \mathbf{1}_+^T\,\mathrm{Vec}\!\left(\begin{bmatrix} \cos^2\theta & \cos\theta\sin\theta \\ \sin\theta\cos\theta & \sin^2\theta \end{bmatrix}\circ\begin{bmatrix} \sin(\theta'+\theta'')\cos(\theta'+\theta'') & -\sin^2(\theta'+\theta'') \\ \cos^2(\theta'+\theta'') & -\cos(\theta'+\theta'')\sin(\theta'+\theta'') \end{bmatrix}\right)$$

$$= \cos^2\theta\sin(\theta'+\theta'')\cos(\theta'+\theta'') - \cos\theta\sin\theta\sin^2(\theta'+\theta'') + \sin\theta\cos\theta\cos^2(\theta'+\theta'') - \sin^2\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'')$$

We now use the following results (without proof):

$$\cos(\theta+\theta'+\theta'') = \cos\theta\cos\theta'\cos\theta'' - \cos\theta\sin\theta'\sin\theta'' - \sin\theta\cos\theta'\sin\theta'' - \sin\theta\sin\theta'\cos\theta''$$

$$\sin(\theta+\theta'+\theta'') = \sin\theta\cos\theta'\cos\theta'' + \cos\theta\sin\theta'\cos\theta'' + \cos\theta\cos\theta'\sin\theta'' - \sin\theta\sin\theta'\sin\theta''$$

With some algebra, we can show that

$$\cos(\theta+\theta'+\theta'')\sin(\theta+\theta'+\theta'') = \mathbf{1}_+^T\,\mathrm{Vec}(P[\theta]\circ QP[\theta'+\theta'']Q^*)$$

and since the underlying multiplication is commutative, associativity follows for the anti-diagonal elements.

Finally, for the element $\varphi_{22}$ we have to show that

$$\varphi_{22}(P[\theta], \varphi_{22}(P[\theta'], P[\theta''])) = \varphi_{22}(\varphi_{22}(P[\theta], P[\theta']), P[\theta''])$$

$$\varphi_{22}(P[\theta], \varphi_{22}(P[\theta'], P[\theta''])) = \varphi_{22}(P[\theta], P[\theta'+\theta'']) = \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta]\circ P^{\perp}[\theta'+\theta'']) = [1,-1,-1,1]\begin{bmatrix} \cos^2\theta\sin^2(\theta'+\theta'') \\ -\cos\theta\sin\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'') \\ -\sin\theta\cos\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'') \\ \sin^2\theta\cos^2(\theta'+\theta'') \end{bmatrix} = \cos^2\theta\sin^2(\theta'+\theta'') + 2\cos\theta\sin\theta\cos(\theta'+\theta'')\sin(\theta'+\theta'') + \sin^2\theta\cos^2(\theta'+\theta'')$$

$$= (\cos\theta\sin(\theta'+\theta'') + \sin\theta\cos(\theta'+\theta''))^2 = \sin^2(\theta+\theta'+\theta'') = \mathbf{1}_-^T\,\mathrm{Vec}(P[\theta+\theta']\circ P^{\perp}[\theta'']) = \varphi_{22}(\varphi_{22}(P[\theta], P[\theta']), P[\theta''])$$

We can see, from the previous section, that

$$R^T(\alpha+\alpha')P[\theta]R(\alpha+\alpha') = P[\theta+\theta'] = \varphi(P[\theta], P[\theta']) = \begin{bmatrix} \varphi_{11}(P[\theta],P[\theta']) & \varphi_{12}(P[\theta],P[\theta']) \\ \varphi_{21}(P[\theta],P[\theta']) & \varphi_{22}(P[\theta],P[\theta']) \end{bmatrix}$$

It is also worth noting that, since the Hadamard product is commutative, $\varphi(P[\theta], P[\theta'])$ is a commutative group operation, which implies that the centre of $G_P([\theta])$ is the whole group.

5. Conclusion

We have shown that the Lie group $SO(2,\mathbb{R})$ acts on the manifold $G_P([\theta])$ to generate new elements of $G_P([\theta])$, and many interesting properties of this group action have been demonstrated. Expressing the group operation $\varphi(P[\theta], P[\theta'])$ in terms of matrix operations requires the vectorisation operator and the Hadamard product, because it is not a traditional vector sum when the angles are added together. Adding the vectors in the traditional way would require the tensor product of the sum and a normalisation constant. I believe that projection matrices have more interesting structures that can be further studied, and I hope that this article will raise some interest in what, I think, deserves more investigation.

Acknowledgements

I wish to personally thank the Editor(s) and Mrs Eunice Du for all her help. I also wish to extend my gratitude to the referee for their comments.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Tapp, K. Matrix Groups for Undergraduates. American Mathematical Society. ISBN 0-8218-3750-0.
http://www.ams.org/publications/authors/books/postpub/stml-29
[2] Yanai, H., Takeuchi, K. and Takane, Y. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition. Springer. ISBN 1441998861.
[3] Rudolph, G. and Schmidt, M. (2013) Differential Geometry and Mathematical Physics: Part I. Manifolds, Lie Groups and Hamiltonian Systems. Springer, Berlin. ISBN 978-94-007-5344-0.
[4] Levine, M. GLn(R) as a Lie Group. University of Chicago.
http://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/REUPapers/Levine.pdf
