Definition and Properties of a Vector-Matrix Reversal Operator
1. Introduction
The author is aware of a large body of literature, in the form of books, devoted to the structure and manipulation of matrices and vectors (considered here as row or column matrices) and to solving equations involving them. Thus, the aim of the present study cannot be an exhaustive bibliographic description of matrix algebra. However, a brief literature résumé will be given anyway, providing a point of view that is perhaps biased toward the present author's perspective.
Perhaps the readers have in mind the basic definitions of Linear Algebra, both in teaching volumes [1] [2] and in specialized treatises like the old book of Wilkinson [3], a compendium of Linear Algebra techniques for computational purposes, along with the two-volume practical treatise [4], mainly based on Wilkinson's previous book. The exhaustive study of Durand [5] is of similar interest, where the most relevant Linear Algebra problems are studied and solved. More recent information can be found in [6].
Readers will surely know the words reversal, reverse, and reversing; among other meanings and uses, they are often encountered in certain aspects of time description [7]-[9]. However, as far as the present author knows, such wording is scarcely used in mathematics [10], if at all, as the name of an operation to manipulate vectors and matrices in a Linear Algebra context.
This paper describes how a reversal operator acting on matrices and vectors might be defined. That means one can set out how reversal transforms the implied vector or matrix elements, the structure of reversed vectors and matrices, and the relations of the new operator with other well-known operators and operations acting over, and belonging to, matrix algebra.
The present paper is structured so that, first, one can set out the vector reversal definition, its properties, and its purpose. After this initial setup, one can discuss the structure of square matrix elements as a step toward defining the reversal operator's action on matrices. At this stage, one will describe the concept of the anti-diagonal and the new matrix regions that one can add to the already known ones. After this, several additional aspects of matrix reversal are discussed. The study continues with the action of the reversal operator upon the matrix product, the determinant of a matrix, and the matrix inverse.
2. Vector Reversal
2.1. Definition and Symbols of Row and Column Vectors
Suppose an N-dimensional vector space $V_N(\mathbb{Q})$, where one has chosen the rational field $\mathbb{Q}$ instead of the real field $\mathbb{R}$, stressing the computational background of the vector-matrix operations developed in this study.
Choosing to represent a vector, noted by $\langle v|$, in the form of a row vector, and writing:

$$\langle v| = (v_1, v_2, \ldots, v_N),$$

one selects vectors in this bra way instead of using the equivalent column ket symbols $|v\rangle$, because the row form is easier to write explicitly than its column dual counterpart. Such a choice becomes easy to accept, taking into account that there is a straightforward relation between both representations, involving the dual vector space $V_N^{*}(\mathbb{Q})$ and the transposition of row into column vectors:

$$|v\rangle = \langle v|^{T}.$$
2.2. Reversal Operator Definition
The reverse of any vector belonging to the space $V_N(\mathbb{Q})$ is defined and noted as:

$$\langle v| = (v_1, v_2, \ldots, v_N) \Rightarrow \langle v|^{R} = (v_N, v_{N-1}, \ldots, v_2, v_1).$$

Therefore, a superscript R on the right side of the vector symbol represents the reversal operator. Also, if necessary, one can use the following equivalent notation to denote the reversed vectors under the reversal operator acting on a bra:

$$\langle v|^{R} \equiv \langle v^{R}|.$$

The same notation can be supposed to hold for ket vectors, that is:

$$|v\rangle^{R} \equiv |v^{R}\rangle.$$
2.3. Properties of the Reversal Operator
With the above definition, the reversal of a vector is a linear operation similar to the transposition or the conjugation, as one can easily find:

1) $\left(\langle v|^{R}\right)^{R} = \langle v|$

2) $\left(\alpha \langle v|\right)^{R} = \alpha \langle v|^{R}, \quad \forall \alpha \in \mathbb{Q}$

3) $\left(\langle u| + \langle v|\right)^{R} = \langle u|^{R} + \langle v|^{R}$

4) $\left(\langle v|^{T}\right)^{R} = \left(\langle v|^{R}\right)^{T}$
Any vector $\langle v|$ for which the equality $\langle v|^{R} = \langle v|$ holds can be called reversal invariant. For instance, the unity vector $\langle 1| = (1, 1, \ldots, 1)$ and all its homothecies are reversal invariant in any vector space dimension.
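As an informal illustration of these properties, one can sketch the reversal operation in a few lines of Python; the helper name reverse and the use of rational arithmetic via fractions.Fraction are choices of the present illustration, not part of the formalism above:

```python
# A minimal sketch of vector reversal over the rational field, using
# fractions.Fraction; the helper name reverse() is illustrative only.
from fractions import Fraction

def reverse(v):
    """Reversal of a vector: (v1, ..., vN) -> (vN, ..., v1)."""
    return v[::-1]

u = [Fraction(1), Fraction(2), Fraction(3)]
v = [Fraction(4), Fraction(5), Fraction(6)]
a, b = Fraction(2), Fraction(-1, 3)

# Involution: reversing twice recovers the original vector.
assert reverse(reverse(u)) == u
# Linearity: the reversal of a linear combination equals the same
# combination of the reversals.
lin = [a * x + b * y for x, y in zip(u, v)]
assert reverse(lin) == [a * x + b * y for x, y in zip(reverse(u), reverse(v))]
# The unity vector (and any homothecy of it) is reversal invariant.
ones = [Fraction(1)] * 5
assert reverse(ones) == ones
```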
2.4. Reversal Operator and Inward Product of Vectors
Concerning the inward1 product [11] of two vectors, when defined as:

$$\langle u| * \langle v| = (u_1 v_1, u_2 v_2, \ldots, u_N v_N),$$

then the reversal operation acts like an operation distributed within the inward product:

$$\left(\langle u| * \langle v|\right)^{R} = \langle u|^{R} * \langle v|^{R}.$$
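A short numerical check of this distribution, in the same illustrative Python style (the helper name inward is an assumption of the sketch, not a library function):

```python
# Sketch: reversal distributes within the inward (Hadamard) product.
def inward(u, v):
    """Elementwise (inward) product of two equal-length vectors."""
    return [x * y for x, y in zip(u, v)]

u, v = [1, 2, 3, 4], [5, 6, 7, 8]
# (u * v)^R == u^R * v^R, with ^R rendered as list slicing.
assert inward(u, v)[::-1] == inward(u[::-1], v[::-1])
```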
2.5. Reversal Operator and Scalar Product
The scalar product of two vectors is, in turn, invariant under reversal. To demonstrate such invariance, one can first use the complete sum of the elements of a given vector, defined as:

$$\left\langle \langle v| \right\rangle = \sum_{I=1}^{N} v_I,$$

which is a linear operator:

$$\left\langle \alpha \langle u| + \beta \langle v| \right\rangle = \alpha \left\langle \langle u| \right\rangle + \beta \left\langle \langle v| \right\rangle,$$

and which is obviously invariant under any reordering of the vector elements, in particular under reversal. Then the scalar product of two vectors can be written as the complete sum of their inward product, so that:

$$\langle u|v \rangle = \left\langle \langle u| * \langle v| \right\rangle \Rightarrow \langle u^{R}|v^{R} \rangle = \left\langle \left(\langle u| * \langle v|\right)^{R} \right\rangle = \left\langle \langle u| * \langle v| \right\rangle = \langle u|v \rangle.$$

Therefore, the scalar product is invariant upon the reversal operator application.
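The same chain of reasoning can be mirrored in a small Python sketch, with complete_sum and scalar as illustrative helper names:

```python
# Sketch of the invariance argument: the complete sum ignores element
# order, and the scalar product is the complete sum of the inward product.
def complete_sum(v):
    """Complete sum of the elements of a vector."""
    return sum(v)

def scalar(u, v):
    """Scalar product written as the complete sum of the inward product."""
    return complete_sum([x * y for x, y in zip(u, v)])

u, v = [1, 2, 3], [4, 5, 6]
assert scalar(u[::-1], v[::-1]) == scalar(u, v)  # invariance under reversal
```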
2.6. Reversal Operator and Euclidean Norms
Therefore, the Euclidean norm of a vector is invariant upon reversal because:

$$\left\| \langle v|^{R} \right\|^{2} = \langle v^{R}|v^{R} \rangle = \langle v|v \rangle = \left\| \langle v| \right\|^{2}.$$
2.7. Half-Reversal Euclidean Norms
However, there is the possibility to define the half-reversal Euclidean norm of a vector, a scalar attached to any vector, which can be defined as:

$$\langle v|v^{R} \rangle = \left\langle \langle v| * \langle v|^{R} \right\rangle.$$

An example illustrates this interesting property in a 3-dimensional space:

$$\langle v| = (a, b, c) \Rightarrow \langle v|v^{R} \rangle = ac + b^{2} + ca = 2ac + b^{2}.$$

If $a = c$, note that in this simple case, one can write:

$$\langle v|v^{R} \rangle = 2a^{2} + b^{2} = \langle v|v \rangle.$$
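One can verify this 3-dimensional example symbolically; the following sketch assumes the sympy package, which is not required by the formalism itself:

```python
# Symbolic check of the half-reversal Euclidean norm in 3 dimensions.
import sympy as sp

a, b, c = sp.symbols('a b c')
v = sp.Matrix([[a, b, c]])       # row vector <v|
v_rev = sp.Matrix([[c, b, a]])   # its reversal <v|^R

half_norm = sp.expand((v * v_rev.T)[0, 0])
print(half_norm)                              # -> 2*a*c + b**2
# When a == c, the half-reversal norm equals the Euclidean norm:
print(sp.expand(half_norm.subs(c, a)))        # -> 2*a**2 + b**2
print(sp.expand((v * v.T)[0, 0].subs(c, a)))  # -> 2*a**2 + b**2
```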
2.8. Invariance of Higher Order Norms
One might write higher-order vector norms as complete sums of inward power vectors.
Using the following definition of the inward power of any vector:

$$\langle v|^{[P]} = \left(v_1^{P}, v_2^{P}, \ldots, v_N^{P}\right),$$

when the power is attached to a natural number $P \in \mathbb{N}$, then the P-th order norm of the vector can be defined as:

$$\left\| \langle v| \right\|_{P} = \left\langle \langle v|^{[P]} \right\rangle = \sum_{I=1}^{N} v_I^{P}.$$

Such a definition is invariant under vector reversal, as one can write:

$$\left\langle \left(\langle v|^{R}\right)^{[P]} \right\rangle = \left\langle \left(\langle v|^{[P]}\right)^{R} \right\rangle = \left\langle \langle v|^{[P]} \right\rangle = \left\| \langle v| \right\|_{P}.$$
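A quick sketch of this invariance for several powers, with inward_power and p_norm as illustrative names:

```python
# Sketch: the P-th order norm, as the complete sum of the inward power,
# does not change under reversal.
def inward_power(v, p):
    """Inward power: every element raised to the natural power p."""
    return [x ** p for x in v]

def p_norm(v, p):
    """P-th order norm: complete sum of the inward P-th power."""
    return sum(inward_power(v, p))

v = [1, -2, 3, 5]
for p in range(1, 6):
    assert p_norm(v[::-1], p) == p_norm(v, p)
```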
3. Restructuration of Square Matrices: Definition of the A-Diagonal Elements
One of the most used structures of square $(N \times N)$ matrices is the definition of the diagonal and of the sub-diagonal elements parallel to it. This possibility gives rise to particular matrix types, such as triangular, tridiagonal, and band matrices.

As is well known, the diagonal of a matrix $\mathbf{A} = \{a_{IJ}\}$ corresponds to the elements starting at the position $(1, 1)$ and ending at the position $(N, N)$; that is, the index set, which one can describe as:

$$D = \{(I, I) \mid I = 1, 2, \ldots, N\};$$

or it is also of interest to define the diagonal element set as:

$$\mathrm{Diag}(\mathbf{A}) = \{a_{II} \mid I = 1, 2, \ldots, N\}.$$
Among other possibilities not often employed, alternative structures exist within the elements of square $(N \times N)$ matrices. One can define the anti-diagonal (or, shortly, the a-diagonal), corresponding to the elements starting at the position $(1, N)$ and ending at the position $(N, 1)$; that is, in this case, the set of elements with indices:

$$AD = \{(I, N - I + 1) \mid I = 1, 2, \ldots, N\},$$

or, using a similar definition to that of the diagonal:

$$\mathrm{ADiag}(\mathbf{A}) = \{a_{I, N-I+1} \mid I = 1, 2, \ldots, N\}.$$
Following the programming rules of some high-level languages like Python, one can construct a better description of a matrix's a-diagonal within the index range $[0, N-1]$, while keeping the matrix dimension equal to $N$. Using this indexing possibility, one can write the a-diagonal set of indices as:

$$AD = \{(I, N - 1 - I) \mid I = 0, 1, \ldots, N-1\}.$$
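Since the text already invokes Python indexing, a literal sketch may help; numpy is assumed only for the fancy-indexing extraction at the end:

```python
# Sketch of the 0-based a-diagonal index set: the column indices are the
# reversal of the row indices (see also Section 4 below).
import numpy as np

N = 5
rows = list(range(N))         # (0, 1, ..., N-1)
cols = rows[::-1]             # (N-1, ..., 1, 0)
print(list(zip(rows, cols)))  # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]

# Extracting the a-diagonal of a concrete matrix with these index vectors:
A = np.arange(N * N).reshape(N, N)
print(A[rows, cols])          # elements a_{I, N-1-I}
```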
Such procedures permit the identification of new matrix regions, such as the anti-diagonal and the sub-anti-diagonals, and also of matrix anti-triangles, that is, the upper and lower regions seen from the anti-diagonal point of view. Because of the presence of these matrix regions, one can also augment the matrix classification. Therefore, one can talk about anti-diagonal matrices, anti-triangular matrices, etc.
4. The Vector Indices of an A-Diagonal of a Square Matrix
To characterize the role of the a-diagonal of a matrix even better, one can transform the a-diagonal row indices into a vector. The a-diagonal column indices are then contained in the reversal of the row indices vector $\langle r| = (0, 1, 2, \ldots, N-1)$. That is:

$$\langle c| = \langle r|^{R} = (N-1, N-2, \ldots, 1, 0).$$
5. Tensor Sum of Two Vector Indices
An interesting numerical subproduct of this set of vector indices $\langle r|$, which one can use to discuss the Goldbach conjecture [12] [13] and the Fermat theorem [14], is the structure of what can be called the tensor sum of them. One can easily define the tensor sum of two index vectors as:

$$\mathbf{Z} = \langle r| \oplus \langle r| = \{z_{IJ} = I + J\},$$

whenever the range of both subindices starts at 0. As a result of this construct, the a-diagonal and the sub-a-diagonals of an index tensor sum each bear a unique repeated element.
Therefore, the sub-a-diagonals of the tensor sum matrices $\mathbf{Z}_N$ of arbitrary dimension contain the natural number of their starting column. The a-diagonal of $\mathbf{Z}_N$ contains the number $N - 1$, associated with the dimension.
For example, choosing $N = 4$, then the matrix $\mathbf{Z}_4$ will have the form:

$$\mathbf{Z}_4 = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \end{pmatrix}.$$
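One can reproduce this structure for any dimension with a short numpy sketch:

```python
# Sketch: tensor sum Z with z_IJ = I + J; every a-diagonal and
# sub-a-diagonal holds a single repeated value, N - 1 on the principal one.
import numpy as np

N = 4
r = np.arange(N)                 # index vector (0, 1, ..., N-1)
Z = r[:, None] + r[None, :]      # tensor sum: z_IJ = I + J
print(Z)

# np.fliplr turns anti-diagonals into ordinary diagonals.
for k in range(1 - N, N):
    d = np.diag(np.fliplr(Z), k)
    assert np.all(d == d[0])     # each a-diagonal is constant
assert np.all(np.diag(np.fliplr(Z)) == N - 1)
```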
6. Reversal Matrix
One can define the reversal matrix $\mathbf{R}$ as a matrix similar to a diagonal matrix, but an a-diagonal one. It has been previously defined as the exchange matrix [15]. However, the nomenclature using the reversal adjective fits better within the present paper's ideas.

In this case, the matrix $\mathbf{R}$ is a null matrix except for a unit principal a-diagonal: the only non-null elements, all equal to 1, lie perpendicular to the diagonal. One can define such a structure, with 1-based indices and the Kronecker delta, as:

$$\mathbf{R} = \{r_{IJ} = \delta_{J, N - I + 1}\}.$$
The reversal matrix can be used to reverse arbitrary vectors of the adequate dimension. Concerning row vectors, the reversal transformation acts on the right side of the vector. Then, one can write:

$$\langle v|^{R} = \langle v| \mathbf{R}.$$

Also, the matrix $\mathbf{R}$ is involutory, or self-inverse; thus:

$$\mathbf{R}^{2} = \mathbf{I}_N \Rightarrow \mathbf{R}^{-1} = \mathbf{R}.$$
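A numpy sketch of the reversal matrix and the two properties above (the construction via np.fliplr of the identity is this illustration's shortcut):

```python
# Sketch: the reversal (exchange) matrix R, its action on a row vector,
# and its involutory character.
import numpy as np

N = 4
R = np.fliplr(np.eye(N, dtype=int))   # zeros with a unit a-diagonal
print(R)

v = np.array([1, 2, 3, 4])
assert np.array_equal(v @ R, v[::-1])               # <v| R = <v|^R
assert np.array_equal(R @ R, np.eye(N, dtype=int))  # R^2 = I
```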
7. Reversal of a Matrix
Once one knows that matrix elements can be reordered into an isomorphic row (or column) vector, one can consider any matrix reversal procedure a trivial algorithm. Then, one can reduce matrix reversal to the algorithm of a vector reversal. However, it is interesting to discuss matrix reversal as an internal matrix operation on the same footing as conjugation, transposition, and inversion.
The action of the reversal operator on an arbitrary matrix can be defined through an algorithm as follows: first, the reversal operator acts on the set of matrix rows or columns, reversing it; second, it reverses every resultant row or column.
For $(N \times N)$ matrices, one can write, using a column decomposition:

$$\mathbf{A} = \left( |a_1\rangle, |a_2\rangle, \ldots, |a_N\rangle \right) \Rightarrow \mathbf{A}^{R} = \left( |a_N\rangle^{R}, |a_{N-1}\rangle^{R}, \ldots, |a_1\rangle^{R} \right).$$

One can define the same algorithm for a matrix representation in the row decomposition form.
For higher dimensionalities, like hypermatrices or higher-order tensors, whose elements can be supposed to contain matrices in turn, the reversal operator first reverses the order of the submatrices, then the order within the lower-level representation submatrices, and so on, until the above action on matrices and vectors is reached.
A simple example can be easily given:

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \Rightarrow \mathbf{A}^{R} = \begin{pmatrix} 4 & 3 \\ 2 & 1 \end{pmatrix}.$$

If a row decomposition is chosen for the matrix above, then one can write:

$$\mathbf{A} = \begin{pmatrix} \langle a_1| \\ \langle a_2| \end{pmatrix} \Rightarrow \mathbf{A}^{R} = \begin{pmatrix} \langle a_2|^{R} \\ \langle a_1|^{R} \end{pmatrix}.$$

Therefore, the reversal of a matrix leaves the matrix dimension invariant. This can be seen using a rectangular $(2 \times 3)$ matrix as an example:

$$\mathbf{B} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \Rightarrow \mathbf{B}^{R} = \begin{pmatrix} 6 & 5 & 4 \\ 3 & 2 & 1 \end{pmatrix}.$$
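In code, the whole algorithm collapses into a double flip; the following numpy sketch also checks the equivalent form $\mathbf{R}\mathbf{A}\mathbf{R}$ for square matrices, an observation of this illustration rather than a definition from the text:

```python
# Sketch: matrix reversal as a double flip (reverse the row order, then
# reverse every row); for square matrices this equals R @ A @ R.
import numpy as np

def reversal(A):
    """Reverse a matrix: flip the order of the rows and of the columns."""
    return A[::-1, ::-1]          # same result as np.flip(A)

A = np.array([[1, 2], [3, 4]])
print(reversal(A))                # [[4, 3], [2, 1]]

R = np.fliplr(np.eye(2, dtype=int))
assert np.array_equal(reversal(A), R @ A @ R)

B = np.array([[1, 2, 3], [4, 5, 6]])
assert reversal(B).shape == B.shape   # reversal keeps the dimension
```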
8. Reversal of a Matrix Product
Reversal in the sum of matrices, in the product by a scalar, and in the inward product of matrices behaves as in the already discussed vector formalism, because of the isomorphic relation between matrices and vectors mentioned earlier. However, reversal over matrix multiplication shall be studied in detail.
One can say that:

$$\left(\mathbf{A}\mathbf{B}\right)^{R} = \mathbf{A}^{R}\mathbf{B}^{R}.$$

A simple example provides initial information:

$$\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \Rightarrow \mathbf{A}\mathbf{B} = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix} \Rightarrow \left(\mathbf{A}\mathbf{B}\right)^{R} = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix},$$

while:

$$\mathbf{A}^{R}\mathbf{B}^{R} = \begin{pmatrix} 4 & 3 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}.$$

Therefore, one can write:

$$\left(\mathbf{A}\mathbf{B}\right)^{R} = \mathbf{A}^{R}\mathbf{B}^{R}.$$

Thus, the matrix reversal appears distributive regarding the matrix product, leaving the ordering of the matrices in the product invariant.
However, such a characteristic shall be generally demonstrated. To do this, one can write, element by element:

$$\left(\left(\mathbf{A}\mathbf{B}\right)^{R}\right)_{IJ} = \left(\mathbf{A}\mathbf{B}\right)_{N-I+1, N-J+1} = \sum_{K=1}^{N} a_{N-I+1, K}\, b_{K, N-J+1} = \sum_{K=1}^{N} a_{N-I+1, N-K+1}\, b_{N-K+1, N-J+1} = \sum_{K=1}^{N} \left(\mathbf{A}^{R}\right)_{IK} \left(\mathbf{B}^{R}\right)_{KJ} = \left(\mathbf{A}^{R}\mathbf{B}^{R}\right)_{IJ},$$

where the sum over $K$ has just been rewritten in reverse order.
A less entangled demonstration can be obtained by realizing that in a matrix product between two compatible matrices, each product element is just the scalar product of a row of the left-side matrix by the right-side matrix column.
That is, one can write:

$$\left(\mathbf{A}\mathbf{B}\right)_{IJ} = \langle a_I|b_J \rangle \Rightarrow \left(\left(\mathbf{A}\mathbf{B}\right)^{R}\right)_{IJ} = \langle a_{N-I+1}|b_{N-J+1} \rangle = \langle a_{N-I+1}^{R}|b_{N-J+1}^{R} \rangle = \left(\mathbf{A}^{R}\mathbf{B}^{R}\right)_{IJ},$$

using the invariance of the scalar product under reversal. Thus, a result similar to the previous one is obtained. One can deduce that the reversal of a matrix product corresponds to the product of the reversed matrices present in the product.
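A random numerical check of the product rule, in the same numpy style as the earlier sketches:

```python
# Sketch: the reversal of a product equals the product of the reversals,
# in the same order, checked on random integer matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4))
B = rng.integers(-5, 5, size=(4, 4))

rev = lambda M: M[::-1, ::-1]
assert np.array_equal(rev(A @ B), rev(A) @ rev(B))
```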
9. Determinant of a Reversed Matrix
It is easy to deduce that the determinant of a square matrix of any dimension is invariant with respect to the reversal operation as defined before. That is:

$$\mathrm{Det}\left(\mathbf{A}^{R}\right) = \mathrm{Det}\left(\mathbf{A}\right).$$

The reason for this invariance is easy to understand, as an $(N \times N)$ square matrix reversal corresponds to an even number of row-column interchanges: precisely $2\lfloor N/2 \rfloor$ of them, since reversing the row order takes $\lfloor N/2 \rfloor$ interchanges and reversing the column order takes the same number. Determinants change sign for every interchange of columns or rows; thus, an even number of interchanges leaves the determinant invariant.
As an example, one can write:

$$\mathrm{Det}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = -2 = \mathrm{Det}\begin{pmatrix} 4 & 3 \\ 2 & 1 \end{pmatrix}.$$
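The same invariance can be checked numerically on any convenient matrix:

```python
# Sketch: determinant invariance under reversal.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
assert np.isclose(np.linalg.det(A[::-1, ::-1]), np.linalg.det(A))
```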
10. Reversal of the Inverse of a Matrix
Non-singular $(N \times N)$ matrices $\mathbf{A}$ possess a non-null determinant, that is: $\mathrm{Det}\left(\mathbf{A}\right) \neq 0$, and in this case, an inverse matrix exists: $\mathbf{A}^{-1}$. Moreover, concerning the matrix product of the inverse by the original matrix, one can write:

$$\mathbf{A}^{-1}\mathbf{A} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}_N,$$

being $\mathbf{I}_N$ the unit matrix of dimension $N$.
One can describe the reversal of the inverse of a matrix using the reversal behavior on matrix multiplication. Taking into account the invariance of the unit matrix upon reversal, $\mathbf{I}_N^{R} = \mathbf{I}_N$, one can write:

$$\left(\mathbf{A}^{-1}\mathbf{A}\right)^{R} = \left(\mathbf{A}^{-1}\right)^{R}\mathbf{A}^{R} = \mathbf{I}_N \Rightarrow \left(\mathbf{A}^{R}\right)^{-1} = \left(\mathbf{A}^{-1}\right)^{R}.$$
This result implies that the inverse of the reverse of a matrix is the reverse of the inverse.
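A final numpy sketch confirms this commutation of reversal and inversion on a small non-singular matrix:

```python
# Sketch: the inverse of the reverse equals the reverse of the inverse.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det(A) = 1, non-singular
rev = lambda M: M[::-1, ::-1]
assert np.allclose(np.linalg.inv(rev(A)), rev(np.linalg.inv(A)))
```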
11. Discussion
The reversal of vectors and matrices of arbitrary dimension has been studied. As a result, a new operator can be adopted, acting similarly to the conjugation operator, but reordering the elements of the involved matrix. Thanks to the definition of such a new internal operator, matrix elements that are usually not mentioned have become relevant: the anti-diagonal and the sub-anti-diagonals, adding more information to the study of matrix structure, on the same footing as the role played by the diagonal and the sub-diagonals. Some aspects of this new perspective are currently being applied to number theory; see, for example, Reference [16].
Acknowledgements
The author wants to dedicate this paper to his students of the 1978 Linear Algebra lectures at Barcelona's Institut Químic de Sarrià: P. Bartolí, R. Bobet, E. Colomer, J. Nebot, J. J. Palma, J. R. Quintana, and C. Verdaguer. They still remember how the interaction between pupils and teachers can be evoked, and how this subtle connection, once born, never dies out. A special acknowledgment is due to Blanca Cercas MP; this paper might never have been written without her dedication. Prof. D. Nath, Vivekananda College (Kolkata), is also deeply acknowledged.
NOTES
1Also known as the diagonal, Hadamard, … product.