Journal of Applied Mathematics and Physics
Vol.07 No.05(2019), Article ID:92713,12 pages
10.4236/jamp.2019.75079

Towards an Algebraic Theory of Orthogonal Polynomials in Several Variables

Habib Rebei

Department of Mathematics, College of Science, Qassim University, Al-Mulida, KSA

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 12, 2019; Accepted: May 4, 2019; Published: May 29, 2019

ABSTRACT

In this paper, we review a general theory of orthogonal polynomials in several variables (O.P.S.V), presenting two different approaches to the three-term recurrence relation. We draw attention to the fact that the orthogonal projection approach to the three-term recurrence relation can be exploited in developing the algebraic theory of O.P.S.V.

Keywords:

Three-Term Recurrence Relation, Orthogonal Polynomials in Several Variables, Quantum Decomposition

1. Introduction

Let $\mathbb{C}[X]$ be the vector space of polynomials in one variable with complex coefficients. For $P \in \mathbb{C}[X]$, we denote by $\langle u, P \rangle$ the action of a linear functional $u$ in the algebraic dual of $\mathbb{C}[X]$. In particular, we denote by $u_n = \langle u, X^n \rangle$, $n \geq 0$, the moments of $u$. A sequence of polynomials $\{P_n\}_{n \geq 0}$ is said to be a monic orthogonal polynomial sequence (MOPS) w.r.t. a linear functional $u \in \mathbb{C}[X]'$ if (see Ref. [1]):

1) $\deg P_n = n$,

2) the leading coefficient of $P_n$ is equal to 1,

3) $\langle u, P_n P_m \rangle = r_n \delta_{n,m}$, $n, m \geq 0$, with $r_n \neq 0$ for all $n \geq 0$,

where, for any polynomial $P$, $\deg P$ denotes its degree.

Under these conditions, we say that $u$ is regular. It is said to be normalized if $u_0 = 1$.

A sequence of monic orthogonal polynomials satisfies a three-term recurrence relation (see [1] ):

$$P_0 = 1; \quad P_1 = X - \alpha_1; \quad P_{n+1} = (X - \alpha_{n+1}) P_n - \omega_n P_{n-1}, \quad n \geq 1, \qquad (1.1)$$

with

$$(\alpha_n, \omega_n) \in \mathbb{C} \times \mathbb{C} \setminus \{0\}, \quad n \geq 1.$$
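For concreteness, here is a minimal computational sketch of the recurrence (1.1). It assumes the Hermite data $\alpha_n = 0$, $\omega_n = n$ (the Jacobi coefficients of the standard Gaussian measure), which are not taken from the paper and serve only as an illustration; the helper name `monic_ops` is hypothetical.

```python
# Sketch: build monic orthogonal polynomials from the recurrence (1.1).
# Assumed example data: alpha_n = 0, omega_n = n (probabilists' Hermite case).
import sympy as sp

x = sp.Symbol('x')

def monic_ops(alpha, omega, n_max):
    """Return P_0, ..., P_{n_max} from P_{n+1} = (x - alpha(n+1)) P_n - omega(n) P_{n-1}."""
    P = [sp.Integer(1), x - alpha(1)]
    for n in range(1, n_max):
        P.append(sp.expand((x - alpha(n + 1)) * P[n] - omega(n) * P[n - 1]))
    return P

hermite = monic_ops(alpha=lambda n: 0, omega=lambda n: n, n_max=4)
print(hermite)   # [1, x, x**2 - 1, x**3 - 3*x, x**4 - 6*x**2 + 3]
```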

As for the quantum approach to this relation, quantum probability theory has produced a new point of view: the three-term recurrence relation can be interpreted in terms of fundamental operators on an appropriate Fock space. In the case when the linear functional is positive, it has an integral representation

$$\langle u, f \rangle = \int_{\mathbb{R}} f(x)\, \mu(dx) \qquad (1.2)$$

where $\mu$ is a probability measure having finite moments of all orders. In this case, the relation (1.1) can be written

$$q P_n = P_{n+1} + \alpha_{n+1} P_n + \omega_n P_{n-1} \qquad (1.3)$$

where $q$ denotes multiplication by the variable $x$ (the position operator introduced below).

To explain the quantum interpretation, we briefly recall some notions.

Let $\{\Phi_n, n \geq 0\}$ be an orthonormal system in a Hilbert space $H_0$. Define the operators:

$$B^+ \Phi_n = \sqrt{\omega_{n+1}}\, \Phi_{n+1} \qquad (1.4)$$

$$B^- \Phi_0 = 0, \quad B^- \Phi_n = \sqrt{\omega_n}\, \Phi_{n-1}, \quad n \geq 1. \qquad (1.5)$$

It is known that $B^\pm$ are mutually adjoint and that the linear subspace $\Gamma \subseteq H_0$ spanned by the set $\{(B^+)^n \Phi_0,\; n = 0, 1, 2, \ldots\}$ is invariant under the action of $B^\pm$.

The quadruple $\{\Gamma, (\Phi_n)_n, B^+, B^-\}$ is called the interacting Fock probability space associated with $\mu$. The operators $B^+$ and $B^-$ are called the creation and annihilation operators, respectively. The linear operator given by

$$N \Phi_n = n \Phi_n, \quad n = 0, 1, 2, \ldots \qquad (1.6)$$

is called the number operator. More generally, with the sequence $\{\alpha_n;\; n = 1, 2, \ldots\}$, we associate the preservation operator $\alpha_N \in \mathcal{L}(\Gamma)$ by the prescription

$$\alpha_N \Phi_n = \alpha_{n+1} \Phi_n, \quad n = 0, 1, 2, \ldots \qquad (1.7)$$

Let $L^2(\mu)$ be the space of (classes of) complex-valued, square-integrable functions w.r.t. $\mu$. We assume that the subspace of $L^2(\mu)$ spanned by the polynomial functions is dense in $L^2(\mu)$, so that $(P_n)_n$ is a complete orthogonal basis of $L^2(\mu)$. In this case, we consider the isomorphism $U$ from $\Gamma$ to $L^2(\mu)$ given on the basis $(\Phi_n)_n$ by:

$$U : \Phi_0 \mapsto P_0, \quad \sqrt{\omega_n!}\,\Phi_n \mapsto P_n, \quad n \in \mathbb{N}^* = \{1, 2, \ldots\}, \qquad (1.8)$$

where $\omega_n! := \omega_1 \omega_2 \cdots \omega_n$. Then $U$ is unitary and we have

$$q P_n = P_{n+1} + \alpha_{n+1} P_n + \omega_n P_{n-1} \;\Longleftrightarrow\; U^{-1} q U = B^+ + \alpha_N + B^- \qquad (1.9)$$

This means that the field operator $T := B^+ + \alpha_N + B^-$ is the $U^{-1}$-image of the position operator $q := M_x$ on $L^2(\mu)$, providing, in this way, a new interpretation of the recursion relation satisfied by the orthogonal polynomials in terms of the CAP (creation, annihilation, preservation) operators. Since a random variable with distribution $\mu$ can be identified, up to stochastic equivalence, with the position operator $q$ on $L^2(\mu)$, this new formulation of the tri-diagonal Jacobi relation in terms of the CAP operators is called the quantum decomposition of the classical random variable. In fact we have seen that

$$q \cong T = B^+ + \alpha_N + B^- \qquad (1.10)$$
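The following small numerical sketch illustrates (1.9)-(1.10) on truncated matrices. The Hermite data $\omega_n = n$, $\alpha_n = 0$ are again an assumed example, and the function name `cap_matrices` is hypothetical; it simply realises $B^+$, $B^-$ and $\alpha_N$ in the basis $(\Phi_0, \ldots, \Phi_{N-1})$ and checks that their sum is the symmetric Jacobi matrix of the measure.

```python
# Sketch: truncated matrix realisation of the CAP operators and of the
# quantum decomposition q = B+ + alpha_N + B- (assumed Hermite data).
import numpy as np

def cap_matrices(omega, alpha, N):
    Bp = np.zeros((N, N)); Bm = np.zeros((N, N)); aN = np.zeros((N, N))
    for n in range(N):
        aN[n, n] = alpha(n + 1)                    # alpha_N Phi_n = alpha_{n+1} Phi_n
        if n + 1 < N:
            Bp[n + 1, n] = np.sqrt(omega(n + 1))   # B+ Phi_n = sqrt(omega_{n+1}) Phi_{n+1}
            Bm[n, n + 1] = np.sqrt(omega(n + 1))   # B- Phi_{n+1} = sqrt(omega_{n+1}) Phi_n
    return Bp, Bm, aN

Bp, Bm, aN = cap_matrices(omega=lambda n: n, alpha=lambda n: 0.0, N=5)
T = Bp + aN + Bm              # field operator: the quantum decomposition of q
print(np.allclose(T, T.T))    # True: T is the symmetric (tridiagonal) Jacobi matrix
```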

This shows that any classical random variable has a built-in non-commutative structure which is intrinsic and canonical, not artificially put in by hand: it is a sum of three non-commuting random variables. This result motivated the appearance of a series of papers [2] [3] [4] [5] [6] [7] working in the same context and provided many applications in the theory of quantum probability.

Compared to the one-dimensional case, the literature available in the multidimensional case is decidedly scarce, even if several publications (see e.g. Refs. [8] [9]) show an increasing interest in the problem in recent years, where it emerges in connection with different kinds of approximation problems. The need for an insightful theory was soon perceived by the mathematical community.

Several advances followed, both on the analytical front, concerning multidimensional extensions of Carleman's criteria, and on the algebraic front, with the introduction of the matrix approach [10] and the early formulations of the multidimensional Favard lemma [11]. However, even with these advances in view, one cannot yet speak of a general theory of orthogonal polynomials in several variables and of a multi-dimensional Favard lemma. In fact, the importance of the Favard lemma lies in the fact that the pair $(\alpha_n, \omega_n)$ condenses the minimal information gained from the knowledge of the $n$th moment with respect to the knowledge of all the $k$th moments with $k \leq n-1$. The more recent multi-dimensional formulations of the Favard lemma are based on two sequences of matrices, one of which is rectangular, with quadratic constraints among the elements of these sequences (see [11]). Since the multi-dimensional analogues of positive (resp. real) numbers are the positive definite (resp. Hermitian) matrices, one would intuitively expect that a multi-dimensional extension of the Favard lemma would replace the sequence $(\omega_n)$ by a sequence of positive definite matrices and the sequence $(\alpha_n)$ by a sequence of Hermitian matrices for each coordinate function $X_j$. The precise formulation of this naive conjecture is what we call the multi-dimensional Favard problem.

The paper is organized as follows. Section 2 recalls some necessary notation from the theory of orthogonal polynomials in several variables. In Section 3, we present the matrix technique used by Yuan Xu to derive the three-term recurrence relation for O.P.S.V. Section 4 is devoted to the main result, namely the connection between the projection approach and the matrix approach to the three-term recurrence relation; this connection is made via the quantum decomposition process.

2. Preliminaries

For a multi-index $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}^d$ and the indeterminate $X = (X_1, \ldots, X_d)$, we write $X^\alpha = X_1^{\alpha_1} \cdots X_d^{\alpha_d}$. The total degree of $X^\alpha$ is given by $|\alpha| = \alpha_1 + \cdots + \alpha_d$.

The space of all polynomials in the $d$ variables $X_1, \ldots, X_d$ with real coefficients will be denoted by $\mathbb{R}[X_1, \ldots, X_d]$, i.e.:

$$\mathbb{R}[X_1, \ldots, X_d] = \Big\{ P(X) = \sum_{\alpha \in \mathbb{N}^d} a_\alpha X^\alpha,\; a_\alpha \in \mathbb{R} \Big\} \qquad (2.1)$$

Its subspace of all polynomials of degree at most $n$ is denoted by

$$\mathbb{R}_n[X_1, \ldots, X_d] = \Big\{ \sum_{|\alpha| \leq n} a_\alpha X^\alpha,\; a_\alpha \in \mathbb{R} \Big\} \qquad (2.2)$$

We denote by $d_n$ the number of monomials $X_1^{\alpha_1} \cdots X_d^{\alpha_d}$ of degree exactly $n$, so that

$$d_n = \dim \mathbb{R}_n[X_1, \ldots, X_d] - \dim \mathbb{R}_{n-1}[X_1, \ldots, X_d] = C_{n+d}^{d} - C_{n-1+d}^{d} = C_{n-1+d}^{n} \qquad (2.3)$$

where

$$C_n^k := \frac{n!}{k!\,(n-k)!}, \quad k = 0, 1, \ldots, n. \qquad (2.4)$$
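The dimension count (2.3) can be checked directly by enumeration; the sketch below, with assumed illustrative values $d = 3$, $n = 4$, compares the binomial formula with an explicit listing of the monomials of degree exactly $n$.

```python
# Sketch: verify d_n = C(n+d-1, n), the number of monomials of degree n in d variables.
from math import comb
from itertools import combinations_with_replacement

def d_n(d, n):
    return comb(n + d - 1, n)

d, n = 3, 4
# a monomial of degree n corresponds to a multiset of n variable indices
monomials = list(combinations_with_replacement(range(d), n))
print(len(monomials), d_n(d, n))   # both equal 15
```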

For $n \in \mathbb{N}$ and $X = (X_1, \ldots, X_d) \in \mathbb{R}^d$, the column vector of size $d_n$ collecting the monomials of degree $n$ will be denoted $X_n = (X^\alpha)_{|\alpha| = n}$, where the monomials are arranged according to the lexicographical order of $\{\alpha \in \mathbb{N}^d : |\alpha| = n\}$.

Remark 1. The lexicographical arrangement of the monomials in the vectors $X_n$ is given as follows:

$$X_0 = 1$$

$$X_1 = [X_1, \ldots, X_d]^T$$

$$X_2 = [X_1^2, X_1 X_2, \ldots, X_1 X_d, X_2^2, X_2 X_3, \ldots, X_2 X_d, X_3^2, \ldots, X_{d-1}^2, X_{d-1} X_d, X_d^2]^T$$

$$X_3 = [X_1^3, X_1^2 X_2, \ldots, X_1^2 X_d, X_1 X_2^2, X_1 X_2 X_3, \ldots, X_1 X_2 X_d, X_1 X_3^2, \ldots]^T$$
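A short sketch of the arrangement in Remark 1: for fixed total degree $n$, the exponent multi-indices are listed in decreasing lexicographic order, so that $X_1^n$ comes first and $X_d^n$ last. The values $d = 3$, $n = 2$ are an assumed illustration.

```python
# Sketch: graded lexicographic arrangement of the monomials of degree n.
from itertools import product

def lex_monomials(d, n):
    alphas = [a for a in product(range(n + 1), repeat=d) if sum(a) == n]
    return sorted(alphas, reverse=True)    # (n,0,...,0) first, (0,...,0,n) last

print(lex_monomials(d=3, n=2))
# [(2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1), (0,0,2)]
# i.e. X1^2, X1*X2, X1*X3, X2^2, X2*X3, X3^2, matching the vector X_2
```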

A multi-sequence $m = (m_\alpha)_{\alpha \in \mathbb{N}^d}$ is called positive definite if, for every tuple $(\beta^{(1)}, \ldots, \beta^{(r)})$ of distinct multi-indices $\beta^{(j)} \in \mathbb{N}^d$, $1 \leq j \leq r$, the determinant of the matrix

$$\begin{pmatrix}
m_{\beta^{(1)}+\beta^{(1)}} & m_{\beta^{(1)}+\beta^{(2)}} & \cdots & m_{\beta^{(1)}+\beta^{(r)}} \\
m_{\beta^{(2)}+\beta^{(1)}} & m_{\beta^{(2)}+\beta^{(2)}} & \cdots & m_{\beta^{(2)}+\beta^{(r)}} \\
\vdots & \vdots & \ddots & \vdots \\
m_{\beta^{(r)}+\beta^{(1)}} & m_{\beta^{(r)}+\beta^{(2)}} & \cdots & m_{\beta^{(r)}+\beta^{(r)}}
\end{pmatrix}$$

is positive.

With each multi-sequence $(m_\alpha)_{\alpha \in \mathbb{N}^d}$, one can associate a linear functional $u$ on $\mathbb{R}[X_1, \ldots, X_d]$ given by:

$$u(X^\alpha) = m_\alpha. \qquad (2.5)$$

Note that if $(m_\alpha)_{\alpha \in \mathbb{N}^d}$ is positive definite, then the associated linear functional $u$ is square positive, that is, for every not identically zero polynomial

$$P(X) = \sum_\alpha a_\alpha X^\alpha, \qquad (2.6)$$

one has

$$u(P^2) = \sum_{\alpha, \beta} m_{\alpha+\beta}\, a_\alpha a_\beta > 0. \qquad (2.7)$$

The results obtained in this paper concern only square positive functionals. The case of semi-square-positive functionals, which requires only $u(P^2) \geq 0$ for all $P$, will be discussed later.

Let $\mu$ be a nonnegative Borel measure on $\mathbb{R}^d$ with infinite support and finite absolute mixed moments, i.e.:

$$\int_{\mathbb{R}^d} |X^\alpha|\, d\mu(X) < +\infty, \quad \forall \alpha \in \mathbb{N}^d. \qquad (2.8)$$

The mixed moments of this measure are given by

$$m_\alpha = m_\alpha(\mu) = \int_{\mathbb{R}^d} X^\alpha\, d\mu(X). \qquad (2.9)$$

A $d$-sequence $(m_\alpha)$ is called a moment sequence if it coincides with the sequence of mixed moments of such a measure $\mu$. In this case, the associated linear functional $u$ is called a moment functional, and it has the integral representation

$$u(f) = \int_{\mathbb{R}^d} f(x)\, d\mu(x). \qquad (2.10)$$

Two Borel measures are called equivalent if they have the same sequence of mixed moments. If the equivalence class of measures having the same moments as $\mu$ consists of $\mu$ only, the measure $\mu$ is called determinate. If $u$ is the moment functional of a determinate measure, then the integral representation is unique. It is known in the literature that $u$ is a moment functional if it is positive, which means that $u(P) \geq 0$ whenever $P \geq 0$.

In the one-dimensional case ($d = 1$), positivity is equivalent to $u(P^2) \geq 0$ for every polynomial $P$. However, in the multi-dimensional case the two notions are no longer equivalent, which is, in fact, the source of many difficulties in the multidimensional moment problem.

A square positive linear functional $u$ induces an inner product $\langle \cdot, \cdot \rangle$ on $\mathbb{R}[X_1, \ldots, X_d]$ given by

$$\langle P, Q \rangle = u(PQ), \quad P, Q \in \mathbb{R}[X_1, \ldots, X_d]. \qquad (2.11)$$

In the remainder of this paper we always assume that $u(1) = 1$. Two polynomials $P$ and $Q$ are said to be orthogonal with respect to $u$ if $\langle P, Q \rangle = 0$. With respect to such a $u$, we can apply the Gram-Schmidt orthogonalization process to the monomials $\{X^\alpha\}$ arranged in lexicographical order and derive a family of orthonormal polynomials, denoted by $\{P_k^n\}_{k=1,\ldots,d_n}^{\,n=0,1,\ldots}$, where the superscript $n$ means that $P_k^n \in \mathbb{R}_n[X_1, \ldots, X_d]$.
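As an illustration of this Gram-Schmidt step, the following sketch computes the first orthonormal polynomials for $d = 2$ and an assumed moment functional, namely the standard Gaussian product measure with moments $m_{(a,b)} = \mathbb{E}[x^a]\,\mathbb{E}[y^b]$; none of these choices come from the paper, and the helper names are hypothetical.

```python
# Sketch: Gram-Schmidt on the monomials in lexicographic order, w.r.t. an
# assumed moment functional (standard Gaussian product measure on R^2).
import sympy as sp

x, y = sp.symbols('x y')

def gauss_moment(k):                    # E[z^k] for z ~ N(0,1)
    return 0 if k % 2 else sp.factorial2(k - 1)

def u(poly):                            # moment functional applied to a polynomial
    poly = sp.Poly(sp.expand(poly), x, y)
    return sum(c * gauss_moment(a) * gauss_moment(b) for (a, b), c in poly.terms())

def gram_schmidt(monomials):
    ortho = []
    for m in monomials:
        p = m - sum(u(m * q) * q for q in ortho)          # subtract projections
        ortho.append(sp.expand(p / sp.sqrt(u(p * p))))    # normalise
    return ortho

basis = [sp.Integer(1), x, y, x**2, x*y, y**2]            # degree <= 2, lex order
for q in gram_schmidt(basis):
    print(q)
# essentially 1, x, y, (x^2-1)/sqrt(2), x*y, (y^2-1)/sqrt(2): tensor Hermite polynomials
```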

Let us introduce the vector notation that is essential in the development below:

$$P_n(X) = \begin{bmatrix} P_1^n(X) \\ P_2^n(X) \\ \vdots \\ P_{d_n}^n(X) \end{bmatrix} \qquad (2.12)$$

$$P_n^T(X) = \big[ P_1^n(X), P_2^n(X), \ldots, P_{d_n}^n(X) \big] \qquad (2.13)$$

Clearly, the sequence of polynomial vectors $(P_n)_n$ is orthonormal with respect to $u$. In fact one has

$$u(P_n P_m^T) = \delta_{m,n} I_{d_n} = \delta_{m,n} \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \qquad (2.14)$$

and $P_n$ can be expressed in terms of the monomial vectors $X_k$, $k \leq n$, as follows:

$$P_n = G_n X_n + G_{n,n-1} X_{n-1} + G_{n,n-2} X_{n-2} + \cdots + G_{n,1} X_1 + G_{n,0}, \qquad (2.15)$$

where $G_{n,i} \in M(d_n, d_i; \mathbb{R})$ for $0 \leq i \leq n$.

Notice that we use the notation $M(p, q; \mathbb{R})$ to designate the space of $p \times q$ matrices with real entries; when $p = q$ we simply write $M(p; \mathbb{R})$.

The leading coefficient of $P_n$ is the matrix $G_n = G_{n,n}$, which is invertible since $u$ is square positive.

For each $k \geq 0$, let $E_k \subseteq \mathbb{R}_k[X_1, \ldots, X_d]$ be the space of polynomials spanned by the components of $P_k$. Then $E_k$ is a vector space of dimension $d_k$ which is orthogonal to all polynomials in $\mathbb{R}_{k-1}[X_1, \ldots, X_d]$. Moreover, one has

$$\mathbb{R}_n[X_1, \ldots, X_d] = \bigoplus_{0 \leq k \leq n} E_k \qquad (2.16)$$

and

$$\mathbb{R}[X_1, \ldots, X_d] = \bigoplus_{k \geq 0} E_k, \qquad (2.17)$$

where the symbol $\bigoplus$ denotes the orthogonal direct sum of vector spaces.

Remark 2. As is well known, the sequence of orthonormal polynomials is not unique. Actually, it is easy to see that each orthogonal matrix $O$ of order $d_k$ gives rise to an orthonormal basis $O P_k$ of $E_k$, and every orthonormal basis of $E_k$ is of this form. One can also work with other bases of $E_k$ that are not necessarily orthonormal. In particular, one basis consists of the polynomials $\tilde{P}_\alpha^k$ of the form

$$\tilde{P}_\alpha^k = X^\alpha + R_\alpha^{k-1}, \quad |\alpha| = k, \quad R_\alpha^{k-1} \in \mathbb{R}_{k-1}[X_1, \ldots, X_d]. \qquad (2.18)$$

This basis is sometimes called the monomial basis; in general $u(\tilde{P}_\alpha^k \tilde{P}_\beta^k) \neq 0$, although the $\tilde{P}_\alpha^k$ are orthogonal to all polynomials of lower degree. It is easy to see that the matrix $H_n = u(\tilde{P}_n \tilde{P}_n^T)$ is positive definite and that $\tilde{P}_n = H_n^{1/2} P_n$. Because of this relation, most of the results below can also be stated in terms of the monomial basis.

3. Matrix Technique for the Three-Term Recurrence Relation

The development of a general theory of orthogonal polynomials in several variables starts from a three-term relation in vector-matrix notation, very much like in the one-variable theory.

Theorem 3.1. For $n \geq 0$, there exist matrices $A_{n,i} \in M(d_n, d_{n+1}; \mathbb{R})$ and $B_{n,i} \in M(d_n; \mathbb{R})$ such that

$$X_i P_n = A_{n,i} P_{n+1} + B_{n,i} P_n + A_{n-1,i}^T P_{n-1}, \quad 1 \leq i \leq d, \qquad (3.1)$$

where we define $P_{-1} = 0$ and $A_{-1,i} = 0$.

Proof. Since the components of $X_i P_n$ are polynomials of degree at most $n+1$,

$$X_i P_n \in \operatorname{span}\{P_0, \ldots, P_{n+1}\}. \qquad (3.2)$$

It follows that $X_i P_n$ is of the form

$$X_i P_n = \sum_{k=0}^{n+1} a_{n,k} P_k \qquad (3.3)$$

with matrix coefficients $a_{n,k}$. The orthonormality of the $P_k$ implies that only the coefficients of $P_{n+1}$, $P_n$ and $P_{n-1}$ are nonzero, and we obtain the relation (3.1). □

Remark 3. The matrices in the three-term relation are expressible as

$$A_{n,i} = u(X_i P_n P_{n+1}^T); \quad B_{n,i} = u(X_i P_n P_n^T) \qquad (3.4)$$

As a consequence, the matrices $B_{n,i}$ are symmetric. If we are dealing with orthogonal polynomials $\tilde{P}_n$ which are not necessarily orthonormal, then the three-term relation takes the form:

$$X_i \tilde{P}_n = A_{n,i} \tilde{P}_{n+1} + B_{n,i} \tilde{P}_n + C_{n,i} \tilde{P}_{n-1}, \quad 1 \leq i \leq d, \qquad (3.5)$$

where $C_{n,i} \in M(d_n, d_{n-1}; \mathbb{R})$ is related to $A_{n,i}$ by

$$A_{n,i} H_{n+1} = H_n C_{n+1,i}^T, \quad \text{where } H_n = u(\tilde{P}_n \tilde{P}_n^T).$$
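To make the formulas (3.4) concrete, the following hedged check computes $A_{1,1} = u(X_1 P_1 P_2^T)$ and $B_{1,1} = u(X_1 P_1 P_1^T)$ for $d = 2$ and the same assumed Gaussian product measure used in the sketch of Section 2, with $P_1 = (x, y)^T$ and $P_2 = ((x^2-1)/\sqrt{2},\, xy,\, (y^2-1)/\sqrt{2})^T$.

```python
# Sketch: the coefficient matrices (3.4) for an assumed Gaussian example (d = 2).
import sympy as sp

x, y = sp.symbols('x y')

def gauss_moment(k):
    return 0 if k % 2 else sp.factorial2(k - 1)

def u(poly):
    poly = sp.Poly(sp.expand(poly), x, y)
    return sum(c * gauss_moment(a) * gauss_moment(b) for (a, b), c in poly.terms())

P1 = sp.Matrix([x, y])
P2 = sp.Matrix([(x**2 - 1) / sp.sqrt(2), x * y, (y**2 - 1) / sp.sqrt(2)])

A11 = (x * P1 * P2.T).applyfunc(u)   # 2 x 3 matrix, rank d_1 = 2
B11 = (x * P1 * P1.T).applyfunc(u)   # 2 x 2 matrix, symmetric (here zero)
print(A11)   # Matrix([[sqrt(2), 0, 0], [0, 1, 0]])
print(B11)   # Matrix([[0, 0], [0, 0]])
```

In this example the rank condition $\operatorname{rank}(A_{1,1}) = d_1 = 2$ is visible directly.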

Moreover, comparing the leading coefficient matrices on both sides of (3.1), it follows that

$$A_{n,i} G_{n+1} = G_n U_{n,i}, \quad 1 \leq i \leq d, \qquad (3.6)$$

where the matrices $U_{n,i} \in M(d_n, d_{n+1}; \mathbb{R})$ are defined by

$$U_{n,i} X_{n+1} = X_i X_n, \quad 1 \leq i \leq d. \qquad (3.7)$$

Clearly, $\operatorname{rank}(U_{n,i}) = d_n$ and $\operatorname{rank}(U_n) = d_{n+1}$, where $U_n = (U_{n,1}^T | \cdots | U_{n,d}^T)^T$. For example, for $d = 2$ we have

$$U_{n,1} = \big(\, I_{d_n} \mid 0 \,\big) = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$$

and

$$U_{n,2} = \big(\, 0 \mid I_{d_n} \,\big) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix}$$

From the relation (3.6) and the fact that $G_n$ is invertible, it follows that the matrices $A_{n,i}$ satisfy the rank conditions: for $n \geq 0$, $\operatorname{rank}(A_{n,i}) = d_n$ for $1 \leq i \leq d$, and

$$\operatorname{rank}(A_n) = d_{n+1}, \quad \text{where } A_n = (A_{n,1}^T, \ldots, A_{n,d}^T)^T. \qquad (3.8)$$

The importance of the three-term relation lies in the following analogue of Favard's theorem in one variable. We extend the notation (2.12) to an arbitrary sequence of polynomials $\{P_k^n\}_{k=1}^{d_n}$. The following result is a second version of the recursion formula in [11].

Theorem 3.2 ([11]). Let $\{P_n\}_{n=0}^{\infty}$, $P_0 = 1$, be a sequence of polynomial vectors in $\mathbb{R}[X_1, \ldots, X_d]$. Then the following statements are equivalent:

1) there exists a square positive linear functional which makes $\{P_n\}_{n=0}^{\infty}$ an orthonormal basis of $\mathbb{R}[X_1, \ldots, X_d]$;

2) there exist matrices $A_{n,i} \in M(d_n, d_{n+1}; \mathbb{R})$ and $B_{n,i} \in M(d_n; \mathbb{R})$ such that

a) the polynomial vectors $P_n$ satisfy the three-term relation (3.1),

b) the matrices in the relation satisfy the rank conditions (3.8).

Theorem 3.2 is an analogue of Favard's theorem in one variable, but it is not as strong as the classical Favard theorem. It does not state, for example, when the linear functional $u$ in the theorem has an integral representation. For now, we concentrate on the three-term relation (3.1). It is an analogue of the three-term relation in one variable; the fact that its coefficients are matrices reflects the complexity of the structure for $d \geq 2$.

4. Orthogonal Projection Approach to the Three-Term Recurrence Relation and Quantum Decomposition

In this section we present the orthogonal projection approach to the three-term recurrence relation. This approach gives a new point of view on the tri-diagonal recurrence relation. In fact, we will see that, in analogy with the one-variable case, there exists a quantum decomposition of the operator $M_{X_j}$ of multiplication by the coordinate $X_j$. The orthogonal polynomials in several variables associated with $\mu$ can then be replaced by a sequence of orthogonal projections.

To this end, let us consider a square positive linear functional $u$ on $\mathbb{R}[X_1, \ldots, X_d]$. For $n \in \mathbb{N}$, we denote by

$$P_{n]} : \mathbb{R}[X_1, \ldots, X_d] \to \mathbb{R}_n[X_1, \ldots, X_d] \qquad (4.1)$$

the $u$-orthogonal projection onto the subspace $\mathbb{R}_n[X_1, \ldots, X_d]$, where we refer to the inner product induced by $u$ as in (2.11). We define the linear map $P_n$ as follows:

$$P_n = P_{n]} - P_{n-1]}, \quad n \in \mathbb{N}, \qquad (4.2)$$

where we adopt the convention $P_{-1]} = 0$.

Lemma 4.1. For all $n \in \mathbb{N}$, the map $P_n$ is the orthogonal projection onto the subspace $E_n \subseteq \mathbb{R}[X_1, \ldots, X_d]$ introduced in Section 2, i.e. the orthogonal complement of $\mathbb{R}_{n-1}[X_1, \ldots, X_d]$ in $\mathbb{R}_n[X_1, \ldots, X_d]$. Moreover, for any $m, n \in \mathbb{N}$, one has

$$P_n P_m = \delta_{m,n} P_n \qquad (4.3)$$

and

$$\sum_{n=0}^{\infty} P_n = \mathbf{1}, \qquad (4.4)$$

where $\mathbf{1}$ denotes the identity operator on $\mathbb{R}[X_1, \ldots, X_d]$.

Proof. Let $Q \in \mathbb{R}[X_1, \ldots, X_d]$ and write $Q = \sum_k Q_k$ with $Q_k \in E_k$, according to the orthogonal decomposition (2.17). Then

$$P_n Q = P_{n]} Q - P_{n-1]} Q = \sum_{k \leq n} Q_k - \sum_{k \leq n-1} Q_k = Q_n \in E_n,$$

which proves that $\operatorname{ran}(P_n) \subseteq E_n$. Conversely, if $Q \in E_n$, then clearly $Q = P_n(Q) \in \operatorname{ran}(P_n)$. This implies that $\operatorname{ran}(P_n) = E_n$.

Furthermore we have

$$P_n^2 = (P_{n]} - P_{n-1]})^2 = P_{n]}^2 - P_{n-1]} P_{n]} - P_{n]} P_{n-1]} + P_{n-1]}^2 = P_{n]} - P_{n-1]} - P_{n-1]} + P_{n-1]} = P_{n]} - P_{n-1]} = P_n$$

and

$$P_n^* = (P_{n]} - P_{n-1]})^* = P_{n]}^* - P_{n-1]}^* = P_{n]} - P_{n-1]} = P_n, \qquad (4.5)$$

which proves that $P_n$ is an orthogonal projection.

Now let $m, n \in \mathbb{N}$. In the case $n < m$, we get

$$P_m P_n = (P_{m]} - P_{m-1]})(P_{n]} - P_{n-1]}) = P_{m]} P_{n]} - P_{m]} P_{n-1]} - P_{m-1]} P_{n]} + P_{m-1]} P_{n-1]} = P_{n]} - P_{n-1]} - P_{n]} + P_{n-1]} = 0.$$

In the case $n > m$, we obtain

$$P_m P_n = (P_{m]} - P_{m-1]})(P_{n]} - P_{n-1]}) = P_{m]} P_{n]} - P_{m]} P_{n-1]} - P_{m-1]} P_{n]} + P_{m-1]} P_{n-1]} = P_{m]} - P_{m]} - P_{m-1]} + P_{m-1]} = 0.$$

When $m = n$, we have seen that $P_n^2 = P_n$, which proves the identity (4.3). It remains to prove the identity (4.4).

We have

$$\sum_{n=0}^{N} P_n = P_{0]} + \sum_{n=1}^{N} \big( P_{n]} - P_{n-1]} \big) = P_{N]}. \qquad (4.6)$$

Taking the limit as $N \to +\infty$, we get (4.4). □

Theorem 4.1. For all $n \in \mathbb{N}$, we have

$$X_j P_n = P_{n+1} X_j P_n + P_n X_j P_n + P_{n-1} X_j P_n. \qquad (4.7)$$

Proof. Since

$$X_j = \mathbf{1}\, X_j\, \mathbf{1} = \sum_{m,k} P_m X_j P_k.$$

Then

$$X_j P_n = \sum_{m=0}^{\infty} P_m X_j P_n.$$

Since $X_j E_n \subseteq \mathbb{R}_{n+1}[X_1, \ldots, X_d]$, it follows that

$$X_j P_n = P_{n+1]} X_j P_n.$$

Since the family $(P_{m]})_m$ is increasing, if $m > n+1$ we get

$$P_{m]} P_{n+1]} = P_{m-1]} P_{n+1]} = P_{n+1]},$$

hence

$$P_m X_j P_n = P_m P_{n+1]} X_j P_n = (P_{m]} - P_{m-1]}) P_{n+1]} X_j P_n = 0.$$

If $m < n-1$, then the first part of the proof implies

$$P_m X_j P_n = (P_n X_j P_m)^* = 0.$$

Summing up, $P_m X_j P_n$ can be nonzero only if $m \in \{n-1, n, n+1\}$, and this proves (4.7). □

Now we consider the operators

$$A_{j,n}^+ = P_{n+1} X_j P_n : E_n \to E_{n+1}$$

$$A_{j,n}^- = P_{n-1} X_j P_n : E_n \to E_{n-1}$$

$$N_{j,n} = P_n X_j P_n : E_n \to E_n$$

Then it is not difficult to show that

$$(A_{j,n}^+)^* = A_{j,n+1}^-; \quad N_{j,n}^* = N_{j,n} \qquad (4.8)$$

Theorem 4.2 ([12]). Define the C-A-P (creation, annihilation and preservation) operators

$$A_j^+ = \sum_{n=0}^{\infty} A_{j,n}^+; \quad A_j^- = \sum_{n=0}^{\infty} A_{j,n}^-; \quad N_j = \sum_{n=0}^{\infty} N_{j,n}. \qquad (4.9)$$

Then the following quantum decomposition holds:

$$X_j = A_j^+ + N_j + A_j^- \qquad (4.10)$$
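A hedged matrix illustration of (4.7) and (4.10), again for $d = 2$ and the assumed Gaussian product measure used earlier: in the orthonormal basis $(1, x, y, (x^2-1)/\sqrt{2}, xy, (y^2-1)/\sqrt{2})$ of the polynomials of degree at most 2, the compression of multiplication by $x$ is block tridiagonal with respect to the decomposition $E_0 \oplus E_1 \oplus E_2$; its lower, diagonal and upper blocks are truncations of $A_1^+$, $N_1$ and $A_1^-$.

```python
# Sketch: block-tridiagonal structure of the multiplication operator M_x
# compressed to degree <= 2, for an assumed Gaussian example (d = 2).
import sympy as sp

x, y = sp.symbols('x y')

def gauss_moment(k):
    return 0 if k % 2 else sp.factorial2(k - 1)

def u(poly):
    poly = sp.Poly(sp.expand(poly), x, y)
    return sum(c * gauss_moment(a) * gauss_moment(b) for (a, b), c in poly.terms())

basis = [sp.Integer(1), x, y,
         (x**2 - 1) / sp.sqrt(2), x * y, (y**2 - 1) / sp.sqrt(2)]
# index blocks: E_0 = {0}, E_1 = {1, 2}, E_2 = {3, 4, 5}

Mx = sp.Matrix(6, 6, lambda i, j: u(x * basis[j] * basis[i]))
print(Mx)
# Only the blocks between consecutive degrees are nonzero, as predicted by (4.7);
# the diagonal (preservation) blocks vanish because this measure is symmetric.
```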

5. Conclusion

The connection between the matrix approach of Section 3 and the projection approach given here is not yet sufficiently clear. In the remainder of this section, we discuss some possible bridges connecting them. First, one of the most fundamental questions is the existence of such a quantum decomposition of $X_j$ when the linear functional is not positive. The second task is, even for a square positive functional, to determine the explicit form of the quantum decomposition of $X_j$. Another question, which can be addressed to algebraists, is whether such a form of the recurrence relation can be obtained when the linear functional is not positive. The answers to these questions open the gate to many future investigations towards the development of the algebraic theory of orthogonal polynomials in several variables.

Acknowledgements

The author gratefully acknowledges Qassim University, represented by the Deanship of Scientific Research, for the material support for this research under the number (1994-cos-2016-1-12) during the academic year 1437 AH/2016 AD.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Rebei, H. (2019) Towards an Algebraic Theory of Orthogonal Polynomials in Several Variables. Journal of Applied Mathematics and Physics, 7, 1185-1196. https://doi.org/10.4236/jamp.2019.75079

References

1. Chihara, T.S. (1978) An Introduction to Orthogonal Polynomials. Gordon and Breach, New York.

2. Accardi, L., Rebei, H. and Riahi, A. (2013) The Quantum Decomposition of Infinitely Divisible Random Variables. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 16, 28. https://doi.org/10.1142/S0219025713500124

3. Accardi, L., Rebei, H. and Riahi, A. (2014) The Quantum Decomposition Associated with the Lévy White Noise Processes without Moments. Probability and Mathematical Statistics, 34, 337-362.

4. Accardi, L., Ouerdiane, H. and Rebei, H. (2010) On the Quadratic Heisenberg Group. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 13, 551-587. https://doi.org/10.1142/S0219025710004231

5. Rebei, H. (2015) The Generalized Heisenberg Group Arising from Weyl Relations. Quantum Studies: Mathematics and Foundations, 2, 323-350. https://doi.org/10.1007/s40509-015-0054-6

6. Rebei, H. (2016) On the One Mode Quadratic Weyl Operators. Journal of Mathematical Analysis and Applications, 439, 135-153. https://doi.org/10.1016/j.jmaa.2016.02.040

7. Rebei, H., Al-Mohaimeed, B. and Riahi, A. (2015) Classical Versions of Quantum Stochastic Processes Associated with the Adapted Oscillator Algebra. International Journal of Innovation in Science and Mathematics, 3, 245-253.

8. Koornwinder, T.H. (1990) Orthogonal Polynomials in Connection with Quantum Groups. In: Nevai, P., Ed., Orthogonal Polynomials, Springer, Dordrecht, 257-292. https://doi.org/10.1007/978-94-009-0501-6_12

9. Marcellán, F. and Van Assche, W. (2006) Orthogonal Polynomials and Special Functions. Springer, Berlin. https://doi.org/10.1007/b128597

10. Kowalski, M.A. (1982) Orthogonality and Recursion Formulas for Polynomials in n Variables. SIAM Journal on Mathematical Analysis, 13, 316-323. https://doi.org/10.1137/0513023

11. Xu, Y. (1997) On Orthogonal Polynomials in Several Variables. Fields Institute Communications, 14, 247-270. https://doi.org/10.1090/fic/014/13

12. Accardi, L., Barhoumi, A. and Dhahri, A. (2017) Identification of the Theory of Orthogonal Polynomials in d-Indeterminates with the Theory of 3-Diagonal Symmetric Interacting Fock Spaces. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 20, 55. https://doi.org/10.1142/S0219025717500047