Generalized Inversions of Hadamard and Tensor Products for Matrices

We shall give natural generalized solutions of Hadamard and tensor product equations for matrices by the concept of Tikhonov regularization combined with the theory of reproducing kernels.


Introduction
In this paper we shall consider the matrix equation A B = C; however, here we shall consider the product AB in two senses, that is, the Hadamard product A * B and the tensor (Kronecker) product A ⊗ B. For the Hadamard product, the two matrices must be of the same type, and the product may be considered as the matrix C made by the element-wise products of the matrix entries; meanwhile, for the tensor product A ⊗ B, there is no restriction on the matrix types. In this paper, we shall consider natural inversions A for given C and B by the idea of Tikhonov regularization and the Moore-Penrose generalized inverse, from the viewpoint of the theory of reproducing kernels.
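The two products may be illustrated concretely; the following short numpy sketch (an illustration we add here, with hypothetical data) shows the Hadamard product of two matrices of the same type and the Kronecker product of matrices of different types:

```python
import numpy as np

# Hadamard product: entry-wise, requires matrices of the same type (shape).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
hadamard = A * B          # (A * B)_{ij} = a_{ij} b_{ij}

# Tensor (Kronecker) product: no restriction on the shapes of the factors.
C = np.array([[1.0, 2.0, 3.0]])   # 1 x 3
kron = np.kron(A, C)              # 2 x 6 block matrix with blocks a_{ij} C

print(hadamard)
print(kron.shape)
```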

Needed Backgrounds
We shall need the basic theory of reproducing kernels in connection with matrices. We shall fix the minimum requirements for our purpose.

Reproducing Kernels and Positive Definite Hermitian Matrices
Let E be an arbitrary abstract (non-void) set. Let F(E) denote the set of all complex-valued functions on E. A reproducing kernel Hilbert space (RKHS for short) on the set E is a Hilbert space H ⊂ F(E) endowed with a function K : E × E → C, which is called the reproducing kernel and which satisfies the reproducing property; namely, K(·, p) ∈ H for all p ∈ E, and

f(p) = ⟨f, K(·, p)⟩_H for all p ∈ E and for all f ∈ H.

We denote by H_K(E) the reproducing kernel Hilbert space H whose corresponding reproducing kernel is K.
A complex-valued function K : E × E → C is called a positive definite quadratic form function on the set E, or shortly, a positive definite function, if, for an arbitrary function X : E → C and for any finite subset F of E, one has

Σ_{p,q ∈ F} X(p) \overline{X(q)} K(p, q) ≥ 0.

By a fundamental theorem, we know that, for any positive definite quadratic form function K on E × E, there exists a unique reproducing kernel Hilbert space on E with reproducing kernel K. So, in a sense, the correspondence between the reproducing kernel K and the reproducing kernel Hilbert space H_K(E) is one to one.
A simple example of a positive definite quadratic form function is a positive definite Hermitian matrix A = (a_{ij}) on E = {1, 2, ..., n}, and we obtain the fundamental relation: the space C^n of the complex-valued functions on E, endowed with the inner product

⟨x, y⟩_{H_A} = y* A^{-1} x,

is a reproducing kernel Hilbert (complex Euclidean) space with reproducing kernel K defined by K(i, j) = a_{ij} for all i, j = 1, ..., n. Indeed, since K(·, j) is the j-th column a_j = A e_j of A, we observe that

⟨x, K(·, j)⟩_{H_A} = a_j* A^{-1} x = e_j* A A^{-1} x = x_j,

which is the reproducing property. We can thus combine the two theories of positive definite Hermitian matrices and of reproducing kernels. See [1]-[9] for various applications and numerical problems.
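The stated relation can be checked numerically; a small sketch we add here, assuming a real symmetric positive definite A (so that the inner product reads ⟨x, y⟩ = yᵀ A⁻¹ x):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)      # a real symmetric positive definite matrix
A_inv = np.linalg.inv(A)

def inner(x, y):
    # inner product of the space H_A induced by the kernel K(i, j) = A[i, j]
    return y @ A_inv @ x

x = rng.standard_normal(4)
for j in range(4):
    k_j = A[:, j]                # K(., j) is the j-th column of A
    # reproducing property: x_j = <x, K(., j)>_{H_A}
    assert np.isclose(inner(x, k_j), x[j])
print("reproducing property verified")
```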

Tensor Products of Reproducing Kernel Hilbert Spaces and Restrictions
For any two positive definite quadratic form functions K_1(p_1, q_1) on E_1 × E_1 and K_2(p_2, q_2) on E_2 × E_2, respectively, the usual product

K(p, q) = K_1(p_1, q_1) K_2(p_2, q_2), p = (p_1, p_2), q = (q_1, q_2),

is again a positive definite quadratic form function on (E_1 × E_2) × (E_1 × E_2). Then, it is the reproducing kernel for the tensor product H_{K_1}(E_1) ⊗ H_{K_2}(E_2) of their reproducing kernel Hilbert spaces.
We would like to recall that for any two positive definite quadratic form functions K_1(p, q) and K_2(p, q) on E, the product K(p, q) = K_1(p, q) K_2(p, q) is again a positive definite quadratic form function on E. Consequently, the reproducing kernel Hilbert space H_K which admits the kernel K(p, q) can be seen as the restriction of the tensor product H_{K_1}(E) ⊗ H_{K_2}(E) to the diagonal of E × E.

Proposition 2.1 (see [10] [11]). Let f_j ∈ H_{K_1}(E) and g_j ∈ H_{K_2}(E), respectively. Then, the reproducing kernel Hilbert space H_K is comprised of all functions on E which are represented by

f(p) = Σ_j f_j(p) g_j(p) on E, with Σ_j ‖f_j‖²_{H_{K_1}} ‖g_j‖²_{H_{K_2}} < ∞,

in the sense of absolute convergence on E, and its norm in H_K is given by

‖f‖²_{H_K} = min Σ_j ‖f_j‖²_{H_{K_1}} ‖g_j‖²_{H_{K_2}},

where the minimum is taken over all the representations of f in the above form. Moreover, the norm inequality ‖f_1 f_2‖_{H_K} ≤ ‖f_1‖_{H_{K_1}} ‖f_2‖_{H_{K_2}} holds for f_1 ∈ H_{K_1}(E) and f_2 ∈ H_{K_2}(E).
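On a finite set E, this closure property is the Schur product theorem for positive semi-definite matrices; a quick numerical check we add here, with hypothetical random Gram matrices as the two kernels:

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(d):
    # a random positive semi-definite (Gram) matrix, i.e. a kernel on {0,...,4}
    M = rng.standard_normal((5, d))
    return M @ M.T

K1, K2 = gram(3), gram(4)
K = K1 * K2                     # entry-wise (Schur/Hadamard) product of the kernels
eigvals = np.linalg.eigvalsh(K)
# the product kernel is again positive semi-definite (up to round-off)
assert eigvals.min() > -1e-10
print(eigvals.min())
```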

Generalized Inverses and the Tikhonov Regularization
Let L be any bounded linear operator from a reproducing kernel Hilbert space H_K into a Hilbert space H. Then, the following problem is a classical and fundamental problem (known as the best approximate mean square norm problem): for any member d of H, we would like to find

inf_{f ∈ H_K} ‖Lf − d‖_H.
It is clear that we are considering operator equations, generalized solutions and corresponding generalized inverses within this framework. However, this problem has a complicated structure, especially in the infinite dimensional Hilbert space case, leading in fact to the consideration of generalized inverses (in the Moore-Penrose sense). Anyway, the problem turns out to be well-posed within the reproducing kernel theory framework, as in the following proposition:

Proposition 2.2. For any member d of H, there exists a function f̃ ∈ H_K satisfying

L f̃ = d (10)

if and only if

d ∈ H_k, (11)

for the reproducing kernel Hilbert space H_k admitting the kernel k obtained by applying L to K(p, q) in both variables; that is, H_k is the range L(H_K) with its natural norm. Furthermore, when there exists a function f̃ satisfying (10), there exists a uniquely determined function f_d that minimizes the norm in H_K among the functions satisfying the equality, and the function f_d is represented by

f_d(p) = ⟨d, L K(·, p)⟩_{H_k} on E.

From this proposition, we see that the problem is treated in a very good way by the theory of reproducing kernels; namely, the existence, uniqueness and representation of the solutions for this problem are well formulated. In particular, note that the adjoint operator is represented in a useful way (which will be very important in our framework later on). The extremal function f_d is the Moore-Penrose generalized inverse L† d of the equation Lf = d. The criterion (11) is involved, and so the Moore-Penrose generalized inverse f_d is, in general, not good, especially when the data contain errors or noise in some practical cases. To overcome this difficulty, we will need the idea of Tikhonov regularization.
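In finite dimensions the minimal-norm least-squares solution is computed by the Moore-Penrose pseudoinverse; a small numpy sketch we add here, with a hypothetical rank-deficient L, illustrating both the solvability and the minimality:

```python
import numpy as np

L = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])          # rank-one operator from R^3 into R^2
d = np.array([1.0, 2.0])                 # d lies in the range of L

f_dag = np.linalg.pinv(L) @ d            # Moore-Penrose solution L† d
assert np.allclose(L @ f_dag, d)         # it solves Lf = d exactly here

# any other solution differs by a null-space vector and has a larger norm
other = f_dag + np.array([0.0, 0.0, 1.0])   # (0, 0, 1) is in the null space of L
assert np.allclose(L @ other, d)
assert np.linalg.norm(f_dag) < np.linalg.norm(other)
print(f_dag)
```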
Proposition 2.3. Let L : H_K → H be a bounded linear operator defined from any reproducing kernel Hilbert space H_K into any Hilbert space H, and let λ > 0. Then the space H_K, endowed with the inner product

⟨f, g⟩_{H_{K_λ}} = λ ⟨f, g⟩_{H_K} + ⟨Lf, Lg⟩_H,

is a reproducing kernel Hilbert space whose reproducing kernel K_λ(p, q) is given as the unique solution of the functional equation

K_λ(p, q) + (1/λ) (L* L K_λ(·, q))(p) = (1/λ) K(p, q)

(that is corresponding to the Fredholm integral equation of the second kind for many concrete cases).

Proposition 2.4. In the situation of Proposition 2.3, for any member d of H, the Tikhonov functional

f ∈ H_K → λ ‖f‖²_{H_K} + ‖Lf − d‖²_H

attains the minimum, and the minimum is attained only by the function

f_{λ,d}(p) = ⟨d, L K_λ(·, p)⟩_H.

Furthermore, if the Moore-Penrose generalized inverse f_d exists, then f_{λ,d} converges to f_d in H_K as λ → 0.

For up-to-date versions of the Tikhonov regularization by using the theory of reproducing kernels, see [12] [13]. Furthermore, for various applications and numerical problems, see, for example, [14] [15].
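In finite dimensions, the minimizer of the Tikhonov functional is obtained from the normal equation (λI + LᵀL) f = Lᵀ d; a short sketch we add here, with hypothetical data, showing the minimizer and its limit toward the Moore-Penrose inverse as λ → 0:

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((6, 4))
d = rng.standard_normal(6)

def tikhonov(L, d, lam):
    # unique minimizer of  lam * ||f||^2 + ||L f - d||^2
    n = L.shape[1]
    return np.linalg.solve(lam * np.eye(n) + L.T @ L, L.T @ d)

f_001 = tikhonov(L, d, 1e-2)
f_tiny = tikhonov(L, d, 1e-10)
f_dag = np.linalg.pinv(L) @ d            # Moore-Penrose least-squares solution

# as lam -> 0 the Tikhonov solution tends to the Moore-Penrose one
assert np.linalg.norm(f_tiny - f_dag) < np.linalg.norm(f_001 - f_dag)
assert np.allclose(f_tiny, f_dag, atol=1e-6)
print(np.linalg.norm(f_tiny - f_dag))
```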

The Solution of the Tikhonov Functional Equation
For our purpose, we use a natural representation of the Tikhonov extremal function considered in Proposition 2.4, by using complete orthonormal systems in the Hilbert spaces H_K and H. The original result is given in [16].
Proposition 2.5. Let {v_μ} and {e_ν} be any fixed complete orthonormal systems of the Hilbert spaces H_K and H, respectively, so that {v_μ ⊗ e_ν} is a complete orthonormal system in the sense of the tensor product H_K ⊗ H. In the expansion of the reproducing kernel K_λ(p, q), since L K_λ(·, q) belongs to H, we can set

L K_λ(·, q) = Σ_{ν,μ} d_{ν,μ} \overline{v_μ(q)} e_ν

with uniquely determined constants {d_{ν,μ}}, and the fundamental estimates of [16] hold. Here, note that the constants d_{ν,μ} are depending on λ.
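In finite dimensions, a convenient choice of the complete orthonormal systems is the singular system of L, for which the Tikhonov extremal function has the explicit expansion f_{λ,d} = Σ_ν σ_ν (λ + σ_ν²)⁻¹ ⟨d, u_ν⟩ v_ν. The following sketch (our illustration, not the computation of [16]) verifies this expansion against the direct minimizer:

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((5, 3))
d = rng.standard_normal(5)
lam = 0.1

# singular system of L: orthonormal systems adapted to the operator
U, s, Vt = np.linalg.svd(L, full_matrices=False)

# expansion of the Tikhonov extremal function in the singular basis
f_svd = Vt.T @ (s / (lam + s**2) * (U.T @ d))

# direct minimization of lam*||f||^2 + ||L f - d||^2 for comparison
f_dir = np.linalg.solve(lam * np.eye(3) + L.T @ L, L.T @ d)
assert np.allclose(f_svd, f_dir)
print(f_svd)
```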

Matrices and Reproducing Kernels
We shall combine matrices and reproducing kernels. In order to simplify the notation we shall consider numbers over the real field R. First note that R^m is the reproducing kernel Hilbert space admitting the reproducing kernel represented by the unit matrix U_m of type m × m; the inner product is then

⟨x, y⟩ = Σ_{j=1}^m x_j y_j,

and, when we take the simplest standard orthonormal system {e_j}_{j=1}^m, the reproducing property is given by

x_j = ⟨x, e_j⟩, j = 1, ..., m.
We shall consider another space R^n (n ≥ 1) similarly, and we shall consider the tensor product R^m ⊗ R^n. Then, the metric in the tensor product is given by

⟨x ⊗ y, x′ ⊗ y′⟩ = ⟨x, x′⟩ ⟨y, y′⟩

in the mn-dimensional space R^{mn}; that is, R^{mn} with the ℓ² metric forms the tensor product of the two reproducing kernel spaces R^m and R^n, and the reproducing kernel may be considered as in the case of R^m. By this idea, any matrix space of any type may be considered as a reproducing kernel space, and when we consider the metric, we shall consider it in the above sense. As we stated in the introduction, since we are interested in the Hadamard product and the tensor product of matrices, we will see that it is enough to consider the case of vector spaces, by transforming the matrices to vectors.
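A short check we add here, showing that vectorization identifies the Frobenius metric on matrices with the ℓ² metric on R^{mn}, and rank-one matrices with Kronecker products of vectors:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((2, 3))
Y = rng.standard_normal((2, 3))

# Frobenius inner product of the matrices ...
frob = np.sum(X * Y)
# ... equals the l^2 inner product of the vectorized matrices in R^{mn}
vec = np.dot(X.ravel(), Y.ravel())
assert np.isclose(frob, vec)

# rank-one matrices x y^T correspond to Kronecker products of the vectors
x = rng.standard_normal(2)
y = rng.standard_normal(3)
assert np.allclose(np.outer(x, y).ravel(), np.kron(x, y))
print(frob)
```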

Hadamard Product Inversions
We shall consider, on R^n, the Hadamard product equation x * b = c for any members b, c ∈ R^n, and its Tikhonov inverse for any positive λ. Of course, there exists a uniquely determined Moore-Penrose generalized inverse, in all the cases, by taking the limit λ → 0. In particular, in the sense of the Moore-Penrose generalized inverse, of course, we have, on R,

100/0 = 0, 0/0 = 0. (27)
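Component-wise, the Tikhonov functional λ x_j² + (b_j x_j − c_j)² is minimized by x_j = b_j c_j / (λ + b_j²); letting λ → 0 gives c_j / b_j where b_j ≠ 0 and 0 where b_j = 0, which realizes (27). A minimal sketch we add here, with hypothetical data:

```python
import numpy as np

def hadamard_tikhonov(b, c, lam):
    # minimizer of lam*||x||^2 + ||x * b - c||^2, computed component-wise
    return b * c / (lam + b**2)

b = np.array([2.0, 0.0, 5.0])
c = np.array([6.0, 100.0, 0.0])

x = hadamard_tikhonov(b, c, 1e-12)
# limit lam -> 0: c_j / b_j where b_j != 0, and 0 where b_j = 0
assert np.allclose(x, [3.0, 0.0, 0.0])
print(x)   # the second entry realizes "100/0 = 0" in the Moore-Penrose sense
```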
Mathematicians will expect that, for our mathematics, there exists some realization in some real world. So, we wonder: do there exist real examples supporting the above results?
In particular, Sin-Ei Takahashi ([17]) established a simple and natural interpretation of (27) by analyzing arbitrary extensions of fractions and by showing the complete characterization of such a property (27). His result shows that, in our mathematics, the results (27) should be accepted as natural ones. However, the results remain curious, and so the above question is still vital as a very important problem.
For simple direct proofs and several physical meanings, see [18].

Inversions of Tensor Products of Matrices
Before the general situation, we shall show the simplest case as a prototype example, in order to see the basic concept and problem for the inversion. We shall consider a ∈ R² and b ∈ R³ as follows: a = (a_1, a_2)^T and b = (b_1, b_2, b_3)^T. We can write the tensor product as the 2 × 3 matrix (a_i b_j), or its transpose. However, as we stated, we shall consider it as the vector

a ⊗ b = (a_1 b_1, a_1 b_2, a_1 b_3, a_2 b_1, a_2 b_2, a_2 b_3)^T ∈ R⁶,

which will mean the equations

a_i b_j = c_{ij}, i = 1, 2; j = 1, 2, 3.
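The prototype equations can be written as a linear system for a; the following sketch we add here, with hypothetical data, recovers a from c = a ⊗ b by least squares:

```python
import numpy as np

a = np.array([1.5, -2.0])                 # a in R^2
b = np.array([3.0, 0.5, -1.0])            # b in R^3

c = np.kron(a, b)                         # the six equations a_i b_j = c_{ij}
assert c.shape == (6,)

# the map a -> a (x) b is linear; its 6 x 2 matrix is kron(I_2, b as a column)
L = np.kron(np.eye(2), b.reshape(-1, 1))
a_rec, *_ = np.linalg.lstsq(L, c, rcond=None)
assert np.allclose(a_rec, a)
print(a_rec)
```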
We are considering arbitrary positive integers m, n, and so we see that these equations raise important problems: the existence and representation problems. By the concept and theory in Proposition 2.5, we will be able to obtain the reasonable solutions.
With the form (13) and the representation (14) in Proposition 2.4 of the Tikhonov extremal function f_{λ,d}, we shall assume the representation (15). This assumption is equivalent to the circumstance that the operator L is a Hilbert-Schmidt operator from H_K into H. Assume that the operator L is a Hilbert-Schmidt operator and that the Equation (20) has the solutions; then the extremal function of Proposition 2.4 is given by them, and this will mean that the corresponding m × n matrices are formed by these solutions.

Theorem 4.1. For any fixed b ∈ R^n, we have the Tikhonov inverse in the above sense, for any positive λ and for any c ∈ R^n; in the sense of (25), the general inversion is given with the corresponding results of Proposition 2.5.

Note that, from the general theory of reproducing kernels, we obtain the inequality ‖x * b‖ ≤ ‖x‖ ‖b‖; this is just the Cauchy-Schwarz inequality, and the theory gives the background of the inequality with the precise structure of the Hadamard product. So, the multiplication operator x → x * b, for any fixed b, is a bounded linear operator, and by Proposition 2.5 we directly obtain the result.
a ⊗ b = (a_1 b_1, a_1 b_2, a_1 b_3, a_2 b_1, a_2 b_2, a_2 b_3)^T ∈ R⁶. (31)

Now we are interested in the matrix equation a ⊗ b = c, for given b and c, and we shall consider the Tikhonov inverse in the above sense, for any positive λ. From the general theory of reproducing kernels, we obtain the equality ‖a ⊗ b‖ = ‖a‖ ‖b‖; hence, for any fixed b, the map a → a ⊗ b is a bounded linear operator into R^{mn}. So, we can apply Proposition 2.5. The Equation (16) corresponds to this situation, and, in the sense of (34), the general inversion a(λ; c) is given with the uniquely determined solutions, and the corresponding results in Proposition 2.5 are valid.
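Since the map L : a → a ⊗ b satisfies LᵀL = ‖b‖² I_m, the normal equation yields the closed form a(λ; c) = C b / (λ + ‖b‖²), where C denotes c reshaped as an m × n matrix. The following sketch is our finite-dimensional illustration of this reading, with hypothetical data:

```python
import numpy as np

def tensor_tikhonov(b, c, lam, m):
    # minimizer of lam*||a||^2 + ||kron(a, b) - c||^2; since the map a -> kron(a, b)
    # satisfies (L^T L) = ||b||^2 I_m, the minimizer is C b / (lam + ||b||^2),
    # where C is c reshaped into an m x n matrix.
    C = c.reshape(m, -1)
    return C @ b / (lam + b @ b)

a = np.array([2.0, -1.0])
b = np.array([1.0, 0.0, 2.0])
c = np.kron(a, b)

a_rec = tensor_tikhonov(b, c, 1e-12, m=2)
assert np.allclose(a_rec, a, atol=1e-6)   # lam -> 0 recovers a when b != 0

# for b = 0 the Tikhonov (and hence Moore-Penrose) inverse is the zero vector
assert np.allclose(tensor_tikhonov(np.zeros(3), c, 1e-12, m=2), 0.0)
print(a_rec)
```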
Meanwhile, when we apply the above complete orthonormal systems to the Moore-Penrose generalized inverse problem, we see that we cannot, in general, obtain any valuable information, because we cannot analyze the structure of the important reproducing kernel Hilbert space H_k in Proposition 2.2.
In particular, if H_K and H are finite dimensional, then we obtain the complete representation for the Tikhonov extremal function f_{λ,d}(p).