Explicit Inversion for Two Brownian-Type Matrices

We present explicit inverses of two Brownian-type matrices, defined as Hadamard products of certain known matrices. The matrices under consideration depend on $3n-1$ parameters, and their lower Hessenberg inverses are expressed analytically in terms of these parameters. Such matrices are useful in the theory of digital signal processing and in testing matrix inversion algorithms.


Introduction
Brownian matrices frequently arise in problems of "digital signal processing". In particular, Brownian motion is one of the most common linear models used for representing nonstationary signals. The covariance matrix of a discrete-time Brownian motion has, in turn, a very characteristic structure, the so-called "Brownian matrix".
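For instance, a discrete-time Brownian motion is the cumulative sum of independent increments, $W_i = \varepsilon_1 + \cdots + \varepsilon_i$; with unit-variance increments its covariance matrix is exactly $[\min(i,j)]$. A minimal numerical illustration (assuming NumPy):

```python
import numpy as np

n = 5
# Cumulative summation: W = L @ eps, with L the lower-triangular matrix
# of ones and eps a vector of independent unit-variance increments.
L = np.tril(np.ones((n, n)))

# Cov(W) = L Cov(eps) L^T = L L^T, computed exactly (no sampling needed).
cov = L @ L.T

# Entry (i, j) equals min(i, j) in 1-based indexing: the Brownian matrix.
expected = np.minimum.outer(np.arange(1, n + 1), np.arange(1, n + 1))
assert np.array_equal(cov, expected)
```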
In [1] (Eq. (2)) the explicit inverse of a class of matrices $G_n = [\beta_{ij}]$ is given. On the other hand, the analytic expressions of the inverses of two symmetric matrices $K = [\kappa_{ij}]$ and $N = [\nu_{ij}]$, where $\kappa_{ij} = k_i$ and $\nu_{ij} = k_j$, $i \le j$, respectively, are presented in [2] (first equation on p. 113, and Eq. (1), respectively). The matrix $K$ is a special case of a Brownian matrix and $G_n$ is a lower Brownian matrix, as these have been defined in [3] (Eq. (2.1)). Earlier, in [4] (paragraph following Eq. (3.3)), the term "pure Brownian matrix" was introduced for matrices of the type of $K$. Furthermore, in [5] (discussion concerning Eqs. (28)-(30)) the so-called "diagonal innovation matrices" (DIM) have been treated, special cases of which are the matrices $K$ and $N$.
In the present paper, we consider two matrices $A_1$ and $A_2$ defined by
$$A_1 = K \bullet G_n, \qquad A_2 = N \bullet G_n,$$
where the symbol $\bullet$ denotes the Hadamard product. Hence, the matrices have the forms
$$A_1 = \begin{pmatrix}
k_1 b_1 & k_1 b_2 & k_1 b_3 & \cdots & k_1 b_{n-1} & k_1 b_n \\
k_1 a_1 & k_2 b_2 & k_2 b_3 & \cdots & k_2 b_{n-1} & k_2 b_n \\
k_1 a_1 & k_2 a_2 & k_3 b_3 & \cdots & k_3 b_{n-1} & k_3 b_n \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
k_1 a_1 & k_2 a_2 & k_3 a_3 & \cdots & k_{n-1} b_{n-1} & k_{n-1} b_n \\
k_1 a_1 & k_2 a_2 & k_3 a_3 & \cdots & k_{n-1} a_{n-1} & k_n b_n
\end{pmatrix}$$
and
$$A_2 = \begin{pmatrix}
k_1 b_1 & k_2 b_2 & k_3 b_3 & \cdots & k_{n-1} b_{n-1} & k_n b_n \\
k_2 a_1 & k_2 b_2 & k_3 b_3 & \cdots & k_{n-1} b_{n-1} & k_n b_n \\
k_3 a_1 & k_3 a_2 & k_3 b_3 & \cdots & k_{n-1} b_{n-1} & k_n b_n \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
k_{n-1} a_1 & k_{n-1} a_2 & k_{n-1} a_3 & \cdots & k_{n-1} b_{n-1} & k_n b_n \\
k_n a_1 & k_n a_2 & k_n a_3 & \cdots & k_n a_{n-1} & k_n b_n
\end{pmatrix}.$$
Let us now define, for a matrix $B = [b_{ij}]$, the terms "pure upper Brownian matrix" and "pure lower Brownian matrix", for the elements of which the following relations are respectively valid:
$$b_{ij} = b_{ii}, \quad j \ge i, \qquad \text{and} \qquad b_{ij} = b_{jj}, \quad i \ge j.$$
The matrix $A_1$ (Eq. (4)) is a lower Brownian matrix. Furthermore, the matrix $PNP$, where $P = [p_{ij}]$ is the permutation matrix with elements
$$p_{ij} = \begin{cases} 1, & i + j = n + 1, \\ 0, & \text{otherwise}, \end{cases}$$
is a pure Brownian matrix and $PG_nP$ a pure lower Brownian matrix. Hence, their Hadamard product $(PNP) \bullet (PG_nP)$ gives a pure lower Brownian matrix, that is, the matrix $PA_2P$.
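The two Hadamard products can be formed numerically. The sketch below builds $K$, $N$, and $G_n$ from our reading of the displayed forms (the element-wise rule for $G_n$ is an assumption inferred from those forms, not stated explicitly here) and checks that $PA_2P$ is pure lower Brownian:

```python
import numpy as np

# Sketch: K[i,j] = k_min(i,j), N[i,j] = k_max(i,j), and (our reading of
# the displayed forms) Gn[i,j] = a_j below the diagonal, b_j on and
# above it; indices are 1-based in the text, 0-based here.
rng = np.random.default_rng(0)
n = 6
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))

i, j = np.indices((n, n))
K = k[np.minimum(i, j)]
N = k[np.maximum(i, j)]
Gn = np.where(i > j, a[j], b[j])

A1 = K * Gn  # Hadamard (element-wise) products
A2 = N * Gn

# P is the exchange permutation (ones on the anti-diagonal); P A2 P
# should be pure lower Brownian: each subdiagonal entry equals the
# diagonal entry of its column.
P = np.fliplr(np.eye(n))
B = P @ A2 @ P
assert all(np.isclose(B[r, c], B[c, c]) for r in range(n) for c in range(r))
```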
In the following sections, we deduce in analytic form the inverses and determinants of the matrices $A_1$ and $A_2$, and we study the numerical complexity of evaluating $A_1^{-1}$ and $A_2^{-1}$.

The Inverse and Determinant of $A_1$
The inverse of $A_1$ is a lower Hessenberg matrix expressed analytically in terms of the $3n-1$ parameters defining $A_1$. In particular, the inverse $A_1^{-1} = [\alpha_{ij}]$ has elements given by the relations (8)-(10), with the quantities $c_i$ defined in (9), under the obvious assumptions $k_1 \neq 0$ and $c_i \neq 0$, $i = 1, 2, \ldots, n$.
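The lower Hessenberg structure of $A_1^{-1}$ is easy to check numerically. A hedged sketch, assuming the element-wise construction of $A_1$ read off from Eq. (4), with random parameters:

```python
import numpy as np

# Build A1 element-wise (this rule is our reading of Eq. (4)).
rng = np.random.default_rng(1)
n = 8
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))
i, j = np.indices((n, n))
A1 = k[np.minimum(i, j)] * np.where(i > j, a[j], b[j])

inv = np.linalg.inv(A1)

# Lower Hessenberg: alpha_ij = 0 whenever j > i + 1.
assert np.allclose(inv[j > i + 1], 0.0)
# And it is indeed the inverse.
assert np.allclose(A1 @ inv, np.eye(n))
```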
To prove that the relations (8)-(10) give the inverse matrix $A_1^{-1}$, we reduce $A_1$ to the identity matrix $I$ by applying a number of elementary row transformations. The product of the corresponding elementary matrices then gives the inverse of $A_1$. These transformations are defined by the following sequence of row operations.
Operation 1 (applied on $A_1$ and on the identity matrix $I$): it transforms $A_1$ into the lower triangular matrix $C_1$, and the identity matrix $I$ into the upper bidiagonal matrix $F_1$ with main diagonal $(1, 1, \ldots, 1)$.

Operation 2 (applied on $C_1$ and $F_1$): it derives a lower bidiagonal matrix $C_2$ with main diagonal $(\ldots,\, k_{n-1} c_{n-1} k_n,\, k_n c_n)$ and lower first diagonal $(\ldots,\, k_{n-2} k_{n-1} g_{n-1} f_{n-2} k_n g_{n-2},\, k_{n-1} k_n f_{n-1} g_{n-1})$; while the matrix $F_1$ is transformed into the tridiagonal matrix $F_2$.

Operation 3 (applied on $C_2$ and $F_2$): it derives the diagonal matrix $C_3$ and, respectively, the lower Hessenberg matrix $F_3$, with the symbol $s$ standing for the quantity $(-1)^{i+j}$.
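The first step can be made concrete. Assuming Operation 1 is the row operation row$_i \leftarrow$ row$_i - (k_i/k_{i+1})\,$row$_{i+1}$, $i = 1, \ldots, n-1$ (our reconstruction: it annihilates the common pattern $k_i b_j$ above the diagonal), $A_1$ indeed becomes lower triangular:

```python
import numpy as np

# A1 as in the earlier sketch (the element-wise rule is our reading).
rng = np.random.default_rng(2)
n = 6
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))
i, j = np.indices((n, n))
A1 = k[np.minimum(i, j)] * np.where(i > j, a[j], b[j])

# Reconstructed Operation 1: row_i <- row_i - (k_i / k_{i+1}) * row_{i+1},
# processed top to bottom so each step uses the still-unmodified row below.
C1 = A1.copy()
for r in range(n - 1):
    C1[r] -= (k[r] / k[r + 1]) * C1[r + 1]

# C1 is the lower triangular matrix of the text.
assert np.allclose(np.triu(C1, 1), 0.0)
```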
Operation 4 (applied on $C_3$ and $F_3$): it transforms $C_3$ into the identity matrix $I$ and the matrix $F_3$ into the inverse $A_1^{-1}$. The determinant of $A_1$ takes the form
$$\det(A_1) = k_1 b_n (k_2 b_1 - k_1 a_1)(k_3 b_2 - k_2 a_2) \cdots (k_n b_{n-1} - k_{n-1} a_{n-1}).$$
Evidently, $A_1$ is singular if $k_1 = 0$ or, considering the relations (9), if $c_i = 0$ for some $i \in \{1, 2, \ldots, n\}$.
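The factorized determinant of $A_1$ can be checked against a direct numerical evaluation. The sketch below assumes the construction of $A_1$ used earlier and the factorization $\det(A_1) = k_1 b_n \prod_{i=1}^{n-1} (k_{i+1} b_i - k_i a_i)$, our reconstruction mirroring the surviving form of $\det(A_2)$ in Eq. (17):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))
i, j = np.indices((n, n))
A1 = k[np.minimum(i, j)] * np.where(i > j, a[j], b[j])

# k_1 b_n (k_2 b_1 - k_1 a_1) ... (k_n b_{n-1} - k_{n-1} a_{n-1})
closed_form = k[0] * b[-1] * np.prod(k[1:] * b[:-1] - k[:-1] * a[:-1])
assert np.isclose(np.linalg.det(A1), closed_form)
```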

The Inverse and Determinant of $A_2$
In the case of $A_2$, its inverse $A_2^{-1} = [\alpha_{ij}]$ is a lower Hessenberg matrix with elements given by the relations (13)-(15), with the quantities $c_i$ defined in (14), under the obvious assumptions $k_n \neq 0$ and $c_i \neq 0$, $i = 1, 2, \ldots, n$.
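As with $A_1$, the lower Hessenberg claim for $A_2^{-1}$ can be verified numerically. A sketch, where the element-wise construction of $A_2$ is our reading of its displayed form:

```python
import numpy as np

# A2[i,j] = k_i a_j below the diagonal and k_j b_j on and above it
# (1-based in the text, 0-based here) -- our reading of the displayed form.
rng = np.random.default_rng(5)
n = 8
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))
i, j = np.indices((n, n))
A2 = np.where(i > j, k[i] * a[j], k[j] * b[j])

inv = np.linalg.inv(A2)

# Lower Hessenberg: zero above the first superdiagonal.
assert np.allclose(inv[j > i + 1], 0.0)
assert np.allclose(A2 @ inv, np.eye(n))
```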
In order to prove that the relations (13)-(15) give the inverse matrix $A_2^{-1}$, we proceed in a manner similar to that of Sec. 2.
Operation 4 (applied on $D_3$ and $L_3$): scaling row $i$, $i = 1, \ldots, n-1$, by the reciprocal of the corresponding diagonal element of $D_3$, and row $n$ by $\frac{1}{k_n c_n}$, transforms $D_3$ into the identity matrix $I$ and $L_3$ into the inverse $A_2^{-1}$. The determinant of $A_2$ has the form
$$\det(A_2) = k_n b_n (k_1 b_1 - k_2 a_1)(k_2 b_2 - k_3 a_2) \cdots (k_{n-1} b_{n-1} - k_n a_{n-1}), \qquad (17)$$
which shows in turn that the matrix $A_2$ is singular if $k_n = 0$ or, adopting the conventions (14), if $c_i = 0$ for some $i \in \{1, 2, \ldots, n\}$.
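Equation (17) can be checked directly; the sketch below again assumes our reading of the displayed element-wise form of $A_2$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 7
k, a, b = (rng.uniform(1, 2, n) for _ in range(3))
i, j = np.indices((n, n))
A2 = np.where(i > j, k[i] * a[j], k[j] * b[j])

# Eq. (17): k_n b_n (k_1 b_1 - k_2 a_1) ... (k_{n-1} b_{n-1} - k_n a_{n-1})
closed_form = k[-1] * b[-1] * np.prod(k[:-1] * b[:-1] - k[1:] * a[:-1])
assert np.isclose(np.linalg.det(A2), closed_form)
```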

Concluding Remarks
The matrices $A_1$ and $A_2$ represent generalizations of known classes of test matrices. For instance, the test matrices given in [6] (Eqs. (2.1), (2.2)) and in [1] (Eq. (2)) belong to the categories presented here. Furthermore, by restricting the $a$'s and $b$'s to unity, $A_1$ and $A_2$ reduce to the matrices given in [2]. Also, the matrices in [7] (pp. 41, 42, 49) are special cases of $A_1$ and $A_2$. On the other hand, concerning the recursive algorithms given in Sec. 4, we have performed numerical experiments assigning random values to the parameters of $A_1$, for orders $n$ ranging from 256 to 1024. We have found that computing $A_1^{-1}$ by the recursive algorithm (18)-(21) is $\sim 100$ times faster than using the LU decomposition for $n = 256$, the speedup growing gradually to $\sim 1000$ for $n = 1024$.