Carleman Linearization and Systems of Arbitrary Depth Polynomial Recursions

Abstract

A new approach to systems of polynomial recursions is developed, based on the Carleman linearization procedure. The article is divided into two main sections: first, we focus on the case of uni-variable depth-one polynomial recurrences. Subsequently, systems of depth-one polynomial recurrence relations are discussed. The corresponding transition matrix is constructed and upper triangularized. Furthermore, the powers of the transition matrix are calculated using the back substitution procedure. An explicit expression for the solution to a broad family of recurrence relations is obtained. We investigate to which recurrences the framework can be applied and construct sufficient conditions for the method to work. It is shown how the introduction of auxiliary variables reduces arbitrary-depth systems to the depth-one systems of recurrences dealt with earlier. Finally, the limitations of the method are discussed, outlining possible directions for future research.

Citation:

Myszkowski, M. (2022) Carleman Linearization and Systems of Arbitrary Depth Polynomial Recursions. Advances in Linear Algebra & Matrix Theory, 12, 1-23. doi: 10.4236/alamt.2022.121001.

1. Introduction

Recurrence relations arise in various fields of mathematics and prove to be a powerful tool for studying probability [1] and combinatorics [2]. It is often the case that the construction of a solution to a mathematical problem simplifies to solving a recurrence relation. The theory of linear recurrences is well studied, leading to various results such as the celebrated Binet's formula or the characteristic polynomial method [3] [4]. The dynamics of non-linear recurrence sequences are of special interest, as there are various examples of non-linear (in particular, polynomial) maps exhibiting chaotic behaviour. The examples include the logistic, quadratic and exponential maps, as well as the Riccati recurrence [5] - [10].

Definition 1.1. [11] Let $u_i^k$, $i \ge 0$ be $k$ sequences of complex numbers such that for every $i \ge n$ the following relation holds:

$$\begin{cases} u_i^1 = F_1(u_{i-1}^1, \dots, u_{i-1}^k, u_{i-2}^1, \dots, u_{i-2}^k, \dots, u_{i-n}^1, \dots, u_{i-n}^k) \\ u_i^2 = F_2(u_{i-1}^1, \dots, u_{i-1}^k, u_{i-2}^1, \dots, u_{i-2}^k, \dots, u_{i-n}^1, \dots, u_{i-n}^k) \\ \;\;\vdots \\ u_i^k = F_k(u_{i-1}^1, \dots, u_{i-1}^k, u_{i-2}^1, \dots, u_{i-2}^k, \dots, u_{i-n}^1, \dots, u_{i-n}^k) \end{cases} \quad (1.1)$$

Then, we say that the sequences $u_i^k$ are a solution to the depth-$n$ system of recurrences defined by the functions $F_i$.

In the case when the functions $F_i$ are linear, the system of Equations (1.1) can be rewritten in matrix form [12]. If the auxiliary vectors $y_i$, $i \ge n$ and $e_j$ are defined as:

$$y_i = [1, u_i^1, \dots, u_i^k, u_{i-1}^1, \dots, u_{i-1}^k, u_{i-2}^1, \dots, u_{i-2}^k, \dots, u_{i-n+1}^1, \dots, u_{i-n+1}^k]^T \quad (1.2)$$

$$e_j = [\delta_{0,j}, \delta_{1,j}, \dots, \delta_{nk,j}] \quad (1.3)$$

then the system of recurrences (1.1) can be rewritten as a recurrence relation for the sequence of vectors $y_i$:

$$y_i = M y_{i-1} \quad (1.4)$$

where the $(1+nk) \times (1+nk)$ matrix $M$ is called the transition matrix associated with (1.1). By induction, the solution to (1.4) is given by the powers of the matrix $M$:

$$y_i = M^{i+1-n} y_{n-1} \quad (1.5)$$

where $y_{n-1}$ is defined by the initial conditions. Equation (1.5) can then be multiplied by the vectors $e_j$ from the left, hence extracting the original sequences from $y_i$:

$$u_i^k = e_k M^{i+1-n} y_{n-1} \quad (1.6)$$

The above method of solving linear recurrences is fundamentally equivalent to the method of characteristic polynomials [13].
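To make the mechanism concrete, here is a minimal sketch of Equations (1.4)-(1.6) for $k = 1$, $n = 2$. It is not taken from the article: the Fibonacci recurrence $u_i = u_{i-1} + u_{i-2}$ and all identifiers are our choices, and standard numpy is assumed.

```python
# Auxiliary vector y_i = [1, u_i, u_{i-1}]^T, as in (1.2) with k = 1, n = 2.
import numpy as np

M = np.array([[1, 0, 0],   # constant component 1 maps to itself
              [0, 1, 1],   # u_i = u_{i-1} + u_{i-2}
              [0, 1, 0]])  # the previous value is copied down

y1 = np.array([1, 1, 1])   # initial conditions u_0 = u_1 = 1
e1 = np.array([0, 1, 0])   # extracts u_i from y_i

def u(i):
    # Equation (1.6) with n = 2: u_i = e_1 M^{i-1} y_1
    return e1 @ np.linalg.matrix_power(M, i - 1) @ y1

print([u(i) for i in range(1, 10)])  # 1, 1, 2, 3, 5, 8, 13, 21, 34
```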

Even if the functions $F_i$ are non-linear (but analytic), the solution to the system can still be formulated in terms of powers of an appropriately chosen transition matrix, in analogy with the linear counterpart [14]. The method of describing dynamical systems (recursions in particular) in terms of infinite-dimensional linear algebra is referred to as Carleman embedding or Carleman linearization [15] [16].

Theorem 1.2. [9] [16] Let $k = 1$, and let $F_1 = F_1(u_{i-1}^1)$ be analytic in $u_{i-1}^1$. Let the (infinite) auxiliary vectors be defined as:

$$y_i = [1, u_i^1, (u_i^1)^2, (u_i^1)^3, \dots]^T, \qquad e_j = [\delta_{0,j}, \delta_{1,j}, \delta_{2,j}, \dots] \quad (1.7)$$

Then, the solution can be written in the form:

$$u_i^1 = e_1 M^i y_0 \quad (1.8)$$

where $y_0$ is specified by the initial conditions, and the infinite-dimensional matrix $M$ is given by the coefficients of the Maclaurin expansions of the powers of $F_1$:

$$M_{ab} = \frac{1}{b!} \frac{d^b}{dx^b} \left( F_1(x) \right)^a \Big|_{x=0} \quad (1.9)$$

Theorem 1.2 allows an analytic function to be iterated by computing powers of the infinite transition matrix associated with the function. In practice, however, it is troublesome to give an explicit expression for the powers of the matrix $M$ [17]. Nevertheless, in some cases the transition matrix can be diagonalized and the corresponding eigenvectors can be found [16]. We focus on systems of recurrence relations (1.1) with the $F_i$ restricted to be finite-degree polynomials. The article is divided into two main sections.
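As an illustration of Theorem 1.2, the following sketch (our code, not the article's; the map, the parameter $r = 1.5$ and the truncation size are arbitrary choices) builds the $N \times N$ truncation of $M$ for a polynomial map by reading off the coefficients of $F(x)^a$, and compares $M^i$ acting on $y_0$ with direct iteration. Because the example has $c_0 = 0$, the matrix is upper triangular and truncation only discards the tail of the series.

```python
import numpy as np

def carleman(coeffs, N):
    """M[a, b] = coefficient of x^b in F(x)^a, i.e. (1.9) for polynomial F."""
    M = np.zeros((N, N))
    row = np.zeros(N); row[0] = 1.0           # coefficients of F^0 = 1
    for a in range(N):
        M[a] = row
        new = np.zeros(N)                     # multiply by F, drop powers >= N
        for j, c in enumerate(coeffs):
            new[j:] += c * row[:N - j]
        row = new
    return M

r = 1.5
M = carleman([0.0, r, -r], N=12)              # logistic map, c_0 = 0
u0, i = 0.01, 5
y0 = u0 ** np.arange(12)                      # y_0 = [1, u_0, u_0^2, ...]
carleman_u = (np.linalg.matrix_power(M, i) @ y0)[1]

x = u0
for _ in range(i):
    x = r * x - r * x * x                     # direct iteration
print(carleman_u, x)                          # agree up to the truncation error
```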

Firstly, the $k = 1$ (uni-variable) depth-one case is discussed, leading to an exact expression for the solution in terms of the initial condition and the coefficients of the polynomial $F$. As it turns out, in most cases the transition matrix can be diagonalized by a suitable choice of variables.

The method of Carleman embedding is then applied to the multi-variable depth-one recurrence. With minor modifications of the auxiliary vectors, the transition matrix can be compactly written as a block matrix. Again, $M$ is diagonalized and the eigenvectors are found. The main theorem of the article is given, solving the recursion.

Finally, we discuss the general case (1.1). Provided that the $F_i$ are polynomial in the $u_i^k$, the system of Equations (1.1) is embedded in an infinite linear space. It is shown that by introducing appropriate auxiliary variables, any multi-variable arbitrary-depth recurrence can be reduced to a multi-variable depth-one recurrence. Therefore, the recurrence (1.1) can be solved using the main theorem presented in Section 3.

Throughout the article, we assume that the functions $F_i$ are polynomial in all variables. For convenience, the first elements of vectors are indexed from 0 instead of 1. A similar rule applies to matrices.

2. Uni-Variable Recurrences of Depth One

In the uni-variable case, there is only one sequence $u_i^1 \equiv u_i$ and one function $F_1 \equiv F$ (we drop the unnecessary indices for simplicity). Furthermore, $F$ is a function of a single variable. Recurrence relations of this kind can be written as:

$$u_i = F(u_{i-1}) = \sum_{j=0}^{m} c_j (u_{i-1})^j \quad (2.1)$$

where the $c_j$ are constant coefficients of the polynomial $F$. The recurrence (2.1) is of special interest, as it includes many well-known nonlinear chaotic maps, e.g. the logistic map, which is used as an example at the end of this section [18].

Using Theorem 1.2 and the multinomial expansion theorem [19], the infinite-dimensional transition matrix can be obtained:

$$T_{ab} = \frac{1}{b!} \frac{d^b}{dx^b} (F(x))^a \Big|_{x=0} = \frac{1}{b!} \frac{d^b}{dx^b} \left( \sum_{j=0}^{m} c_j x^j \right)^a \Bigg|_{x=0} = \frac{1}{b!} \frac{d^b}{dx^b} \sum_{\sum_{l=0}^{m} k_l = a} \binom{a}{k_0, k_1, \dots, k_m} \prod_{t=0}^{m} (c_t x^t)^{k_t} \Bigg|_{x=0} \quad (2.2)$$

where the $k_l$ are assumed to be non-negative integers. The condition $\sum_{l=0}^{m} k_l = a$ under the sum denotes the fact that the sum runs only over those non-negative integers $k_l$ that add up to $a$. This further simplifies to:

$$T_{ab} = \frac{1}{b!} \frac{d^b}{dx^b} \sum_{\sum_{l=0}^{m} k_l = a} \binom{a}{k_0, k_1, \dots, k_m} \prod_{t=0}^{m} (c_t x^t)^{k_t} \Bigg|_{x=0} = \sum_{\substack{\sum_{l=0}^{m} k_l = a \\ \sum_{l=0}^{m} l k_l = b}} \binom{a}{k_0, k_1, \dots, k_m} \prod_{t=0}^{m} c_t^{k_t} = \sum_{\substack{\sum_{l=0}^{m} k_l = a \\ \sum_{l=0}^{m} l k_l = b}} \frac{a!}{\prod_{s=0}^{m} k_s!} \prod_{t=0}^{m} c_t^{k_t} \quad (2.3)$$

Again, the conditions under the sum narrow down the possible combinations of $k_l$ to those that obey $\sum_{l=0}^{m} k_l = a$, $\sum_{l=0}^{m} l k_l = b$. If no such integers exist, the sum vanishes and $T_{ab} = 0$. The transition matrix (2.3), together with the auxiliary vectors from Equation (1.7), defines the Carleman embedding of the recurrence (2.1) in an infinite-dimensional linear space. We now prove several facts that will become useful later on in calculating powers of $T_{ab}$.
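The double condition under the sum in (2.3) can be checked mechanically. The sketch below is ours (the test polynomial is an arbitrary choice, standard numpy assumed): it enumerates the admissible multi-indices and compares the result against the coefficients of $F(x)^a$ obtained by repeated convolution.

```python
from itertools import product
from math import factorial
import numpy as np

def T_entry(c, a, b):
    """Formula (2.3): sum over k with sum(k) = a and sum(l*k_l) = b."""
    m = len(c) - 1
    total = 0.0
    for k in product(range(a + 1), repeat=m + 1):   # crude enumeration
        if sum(k) == a and sum(l * kl for l, kl in enumerate(k)) == b:
            coef = factorial(a)
            for kl in k:
                coef //= factorial(kl)              # a! / (k_0! ... k_m!)
            total += coef * np.prod([c[t] ** kt for t, kt in enumerate(k)])
    return total

c = [0.0, 0.5, -1.0, 2.0]                           # example F, our choice
Fa = np.array([1.0])                                # coefficients of F^a
for a in range(4):
    for b in range(len(Fa)):
        assert abs(T_entry(c, a, b) - Fa[b]) < 1e-9
    Fa = np.convolve(Fa, c)                         # coefficients of F^{a+1}
print("formula (2.3) matches direct expansion")
```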

Theorem 2.1. Let $T_{ab}$ be the infinite-dimensional transition matrix corresponding to the recurrence (2.1). Then, the following holds true:

1) $T_{0b} = \delta_{0,b}$

2) If $c_0 = 0$, $T_{ab}$ is upper triangular

Proof. (1) We obtain:

$$T_{0b} = \sum_{\substack{\sum_{l=0}^{m} k_l = 0 \\ \sum_{l=0}^{m} l k_l = b}} \binom{0}{k_0, k_1, \dots, k_m} \prod_{t=0}^{m} c_t^{k_t} \quad (2.4)$$

However, since $k_l \ge 0$ and $\sum_{l=0}^{m} k_l = 0$, all of the integers $k_l$ have to vanish. Therefore $\sum_{l=0}^{m} l k_l = b = 0$. If $b \ne 0$, there are no $k_l$ satisfying the conditions under the sum, and $T_{0b} = 0$. If $b = 0$:

$$T_{00} = \binom{0}{0, \dots, 0} \prod_{t=0}^{m} c_t^0 = 1 \quad (2.5)$$

(2) If $c_0 = 0$, then the lowest non-zero power of $x$ appearing in the polynomial $F(x)$ is $x^1$. By extension, the lowest power of $x$ appearing in the power $F^a$ is of order $a$. As a result:

$$T_{ab} = \frac{1}{b!} \frac{d^b}{dx^b} \left( \sum_{j=1}^{m} c_j x^j \right)^a \Bigg|_{x=0} = \frac{1}{b!} \frac{d^b}{dx^b} \left( c_1^a x^a + \cdots \right) \Big|_{x=0} = 0 \quad (2.6)$$

for $b < a$.

It is worth noting that the above-mentioned polynomials have also been studied in the setting of graphs [20]. The fact that the transition matrix can take the upper triangular form makes diagonalization feasible. One is always free to work with new shifted variables $u'_i = u_i - d$, which satisfy a new recurrence:

$$u'_i = F'(u'_{i-1}) = \sum_{j=0}^{m} c'_j (u'_{i-1})^j \quad (2.7)$$

The coefficients $c'_j$ can be obtained by substituting $u_i = u'_i + d$ into (2.1):

$$u'_i + d = \sum_{j=0}^{m} c_j (u'_{i-1} + d)^j = \sum_{j=0}^{m} c_j \sum_{l=0}^{j} \binom{j}{l} d^{j-l} (u'_{i-1})^l = \sum_{l=0}^{m} \sum_{j=l}^{m} c_j \binom{j}{l} d^{j-l} (u'_{i-1})^l \quad (2.8)$$

which implies:

$$c'_0 = \sum_{j=0}^{m} c_j d^j - d, \qquad c'_l = \sum_{j=l}^{m} c_j \binom{j}{l} d^{j-l} \;\text{ for } l \ne 0 \quad (2.9)$$

Therefore, the new coefficients $c'_j$ are functions of the old coefficients $c_j$ and the shift parameter $d$. It turns out that there always exists $d$ such that the transition matrix of the transformed recurrence becomes upper triangular.

Lemma 2.2. Let the recurrence on $u_i$ be defined as in (2.1). For every $F(u_{i-1})$ there exists $d$ such that the transition matrix $T'_{ab}$ corresponding to the new sequence $u'_i = u_i - d$ is upper triangular.

Proof. The new sequence $u'_i$ is obtained by performing a shift transformation with a parameter $d$. As already derived in (2.7), the recurrence of the shifted sequence is:

$$u'_i = \sum_{j=0}^{m} c'_j (u'_{i-1})^j \quad (2.10)$$

where the constant term (the term multiplying the 0-th power of $u'_{i-1}$) reads:

$$c'_0 = \sum_{j=0}^{m} c_j d^j - d \quad (2.11)$$

Theorem 2.1 states that the transition matrix $T'_{ab}$ corresponding to the shifted recurrence is upper triangular if the constant term $c'_0$ vanishes. This leads to the following polynomial equation in $d$:

$$0 = \sum_{j=0}^{m} c_j d^j - d \quad (2.12)$$

By the fundamental theorem of algebra, there always exists $d \in \mathbb{C}$ such that $c'_0 = 0$ [21]. Therefore, the transition matrix $T'_{ab}$ can always be brought to an upper triangular form by a suitable shift of variables.

As a result, we can consider only the recurrences satisfying $c_0 = 0$ without loss of generality.
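Computationally, the shift amounts to finding a fixed point of $F$ and applying (2.9). The sketch below is ours (it reuses Example 2.3 ahead of its formal statement and assumes standard numpy; numpy.roots expects the highest-degree coefficient first):

```python
import numpy as np
from math import comb

c = [0.0, 1.0, 2.0, 1.0]                      # F(x) = x^3 + 2x^2 + x
m = len(c) - 1

fixed = list(c); fixed[1] -= 1.0              # polynomial c(d) - d, eq. (2.12)
roots = np.roots(fixed[::-1])
d = roots[np.argmin(abs(roots + 2))].real     # pick the root d = -2

cp = [sum(c[j] * comb(j, l) * d ** (j - l) for j in range(l, m + 1))
      for l in range(m + 1)]                  # formula (2.9) for l != 0
cp[0] = sum(c[j] * d ** j for j in range(m + 1)) - d
print(np.round(cp, 10))                       # [0, 5, -4, 1]: matches (2.15)
```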

Because the transition matrix is upper triangular, the diagonal elements $T_{aa}$ are its eigenvalues [22]. Using (2.2):

$$\lambda_a = T_{aa} = \frac{1}{a!} \frac{d^a}{dx^a} \left( \sum_{j=1}^{m} c_j x^j \right)^a \Bigg|_{x=0} = \frac{1}{a!} \frac{d^a}{dx^a} \left( c_1^a x^a + \cdots \right) \Big|_{x=0} = c_1^a \quad (2.13)$$

The transition matrix is diagonalizable if all its eigenvalues are different, i.e. when $c_1^a = c_1^b \Rightarrow a = b$ [23]. Since in general $c_1 \in \mathbb{C}$, this condition can be rewritten as $c_1 \ne 0 \wedge (|c_1| \ne 1 \vee \arg(c_1) \notin \pi\mathbb{Q})$, i.e. $c_1$ is neither zero nor a root of unity. It should be noted that the right-hand side of Equation (2.11) may have more than one distinct root $d$, which means that there exist recurrences that can be shift-transformed from a $c'_1$ violating this condition to a $c'_1$ satisfying it.

Example 2.3. Let $F(x) = x^3 + 2x^2 + x$. The recurrence reads:

$$u_i = F(u_{i-1}) = (u_{i-1})^3 + 2(u_{i-1})^2 + u_{i-1} \quad (2.14)$$

The linear coefficient is $c_1 = 1$, hence there is no guarantee that the corresponding transition matrix is diagonalizable. Nevertheless, by performing the shift transformation with $d = -2$, we obtain (using formulas (2.9)):

$$u'_i = (u'_{i-1})^3 - 4(u'_{i-1})^2 + 5 u'_{i-1} \quad (2.15)$$

with $c'_0 = 0$ and $c'_1 = 5$, so that $c'_1 \ne 0$ and $|c'_1| \ne 1$.

The above example shows that the condition $c_1 \ne 0 \wedge (|c_1| \ne 1 \vee \arg(c_1) \notin \pi\mathbb{Q})$ on the original coefficients is sufficient, but not necessary, for the diagonalization of $T_{ab}$: some transition matrices that are hard to deal with can be diagonalized after a shift of variables, and thus the recurrence can be solved.

The next two Lemmas concern the procedure of diagonalization of infinite upper triangular matrices. The set of linearly independent eigenvectors is obtained. Even though the matrix is infinite, the eigenvector corresponding to the $a$-th eigenvalue $\lambda_a$ has at most $a+1$ non-zero components. An explicit expression for the inverse of an upper triangular matrix is given (provided that it exists), which allows for the calculation of the inverse of the corresponding modal matrix. The transition matrix is then diagonalized, and the powers of the matrix are subsequently calculated.

Lemma 2.4. Let $M_{ab}$ be an infinite upper triangular matrix with $\lambda_a = M_{aa} \ne 0$ for all $a$ and with distinct eigenvalues, i.e. $\lambda_a = \lambda_b \Rightarrow a = b$. Let $v_a^b$ denote the $b$-th component of the (not necessarily normalized) eigenvector corresponding to the $a$-th eigenvalue $\lambda_a$. Then:

1) $v_a^b = 0$ for $b > a$;

2) $v_a^a = 1$;

3) for $l_0 < l_{p+1}$:

$$v_{l_{p+1}}^{l_0} = \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{M_{l_j, l_{j+1}}}{M_{l_j, l_j} - \lambda_{l_{p+1}}} \quad (2.16)$$

Proof. The eigenvector equation $M v_a = \lambda_a v_a$ written in matrix form reads:

$$\begin{bmatrix} M_{00} - \lambda_a & M_{01} & \cdots & M_{0,a-1} & M_{0a} & M_{0,a+1} & \cdots \\ 0 & \ddots & & \vdots & \vdots & \vdots & \\ 0 & 0 & \cdots & M_{a-1,a-1} - \lambda_a & M_{a-1,a} & M_{a-1,a+1} & \cdots \\ 0 & 0 & \cdots & 0 & 0 & M_{a,a+1} & \cdots \\ 0 & 0 & \cdots & 0 & 0 & M_{a+1,a+1} - \lambda_a & \cdots \\ \vdots & & & & & & \ddots \end{bmatrix} \begin{bmatrix} v_a^0 \\ \vdots \\ v_a^{a-1} \\ v_a^a \\ v_a^{a+1} \\ \vdots \end{bmatrix} = 0 \quad (2.17)$$

Let us consider the finite upper triangular system of equations that arises from truncating the above matrix at size $(a+1) \times (a+1)$:

$$\begin{bmatrix} M_{00} - \lambda_a & M_{01} & \cdots & M_{0a} \\ 0 & \ddots & & \vdots \\ 0 & 0 & M_{a-1,a-1} - \lambda_a & M_{a-1,a} \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_a^0 \\ \vdots \\ v_a^{a-1} \\ v_a^a \end{bmatrix} = 0 \quad (2.18)$$

Since all of the eigenvalues are different, the diagonal elements satisfy $M_{bb} - \lambda_a = \lambda_b - \lambda_a \ne 0$ for $b \ne a$, and the matrix $M$ is diagonalizable. The determinant of (2.18) vanishes, and thus the solution is non-trivial and unique up to a scaling factor, as expected. The explicit expression for $v_a^b$ as a function of $v_a^a$ can be obtained by the back substitution method [24]:

$$v_{l_{p+1}}^{l_0} = v_{l_{p+1}}^{l_{p+1}} \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{M_{l_j, l_{j+1}}}{M_{l_j, l_j} - \lambda_{l_{p+1}}} \quad (2.19)$$

For simplicity, we set the scaling factor $v_{l_{p+1}}^{l_{p+1}} = 1$ for all $l_{p+1}$. It can be seen that $l_0 > l_{p+1}$ would contradict $l_0 < l_1 < \cdots < l_p < l_{p+1}$, hence (2.19) vanishes and identity (1) holds (for the eigenvectors of the truncated matrix). Since we know that $v_a^b = 0$ for $b > a$, the eigenvectors of the truncated matrix (2.18), with zeros substituted for the rest of the components, are also solutions to the infinite Equation (2.17). Therefore, expressions (1), (2), (3) and (2.16) also hold for the matrix $M$.
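Lemma 2.4 is easy to verify numerically on a finite truncation. The sketch below is ours (random test matrix, standard numpy assumed): the $a$-th eigenvector is built by exactly the back substitution used in the proof and checked against the eigenvalue equation.

```python
import numpy as np

def eigvec(M, a):
    """Eigenvector of upper triangular M for eigenvalue M[a, a], v[a] = 1."""
    lam = M[a, a]
    v = np.zeros(M.shape[0]); v[a] = 1.0
    for b in range(a - 1, -1, -1):            # back substitution, row b
        v[b] = -M[b, b + 1:a + 1] @ v[b + 1:a + 1] / (M[b, b] - lam)
    return v

rng = np.random.default_rng(0)
M = np.triu(rng.normal(size=(6, 6))) + np.diag(np.arange(1.0, 7.0)) * 10
for a in range(6):                            # diagonal entries are distinct
    v = eigvec(M, a)
    assert np.allclose(M @ v, M[a, a] * v)
print("Lemma 2.4 back substitution verified")
```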

The following Lemma gives an explicit expression for the inverse of an upper triangular matrix:

Lemma 2.5. Let $U$ be an infinite upper triangular matrix with complex entries:

$$U = \begin{bmatrix} U_{00} & U_{01} & U_{02} & U_{03} & \cdots \\ 0 & U_{11} & U_{12} & U_{13} & \cdots \\ 0 & 0 & U_{22} & U_{23} & \cdots \\ 0 & 0 & 0 & U_{33} & \cdots \\ \vdots & & & & \ddots \end{bmatrix} \quad (2.20)$$

and let $U_{nn} \ne 0$ for all $n$. The explicit expression for the inverse of $U$ is given by:

$$U^{-1}_{km} = \begin{cases} 0 & \text{for } k > m \\ \dfrac{1}{U_{kk}} & \text{for } k = m \end{cases} \quad (2.21)$$

and:

$$U^{-1}_{l_0, l_{p+1}} = \frac{1}{U_{l_{p+1}, l_{p+1}}} \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{U_{l_j, l_{j+1}}}{U_{l_j, l_j}} \quad (2.22)$$

for $l_0 < l_{p+1}$.

Proof. The inverse of an upper triangular matrix is also upper triangular [25], therefore the equation $U U^{-1} = I$ reads:

$$\begin{bmatrix} U_{00} & U_{01} & U_{02} & U_{03} & \cdots \\ 0 & U_{11} & U_{12} & U_{13} & \cdots \\ 0 & 0 & U_{22} & U_{23} & \cdots \\ 0 & 0 & 0 & U_{33} & \cdots \\ \vdots & & & & \ddots \end{bmatrix} \begin{bmatrix} U^{-1}_{00} & U^{-1}_{01} & U^{-1}_{02} & U^{-1}_{03} & \cdots \\ 0 & U^{-1}_{11} & U^{-1}_{12} & U^{-1}_{13} & \cdots \\ 0 & 0 & U^{-1}_{22} & U^{-1}_{23} & \cdots \\ 0 & 0 & 0 & U^{-1}_{33} & \cdots \\ \vdots & & & & \ddots \end{bmatrix} = I \quad (2.23)$$

For the diagonal elements, we have:

$$U^{-1}_{nn} = \frac{1}{U_{nn}} \quad (2.24)$$

Elements further off the diagonal can be obtained by the back substitution algorithm. Using (2.23):

$$U_{nn} U^{-1}_{n,n+1} + U_{n,n+1} U^{-1}_{n+1,n+1} = 0 \quad (2.25)$$

which together with (2.24) gives:

$$U^{-1}_{n,n+1} = -U_{n,n+1} \frac{U^{-1}_{n+1,n+1}}{U_{nn}} = -\frac{U_{n,n+1}}{U_{nn} U_{n+1,n+1}} \quad (2.26)$$

By repeating the same process for the rest of the elements of $U^{-1}_{km}$, the relation (2.22) for $k < m$ is obtained.
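The same back substitution translates directly into code. The sketch below is ours (random test matrix, standard numpy assumed) and compares the column-by-column construction against numpy's general-purpose inverse:

```python
import numpy as np

def triu_inv(U):
    """Inverse of upper triangular U via (2.24)-(2.26)."""
    n = U.shape[0]
    V = np.zeros_like(U)
    for m in range(n):
        V[m, m] = 1.0 / U[m, m]               # equation (2.24)
        for k in range(m - 1, -1, -1):        # back substitution up the column
            V[k, m] = -U[k, k + 1:m + 1] @ V[k + 1:m + 1, m] / U[k, k]
    return V

rng = np.random.default_rng(1)
U = np.triu(rng.normal(size=(7, 7)), 1) + np.diag(np.arange(1.0, 8.0))
assert np.allclose(triu_inv(U), np.linalg.inv(U))
print("triangular inverse matches numpy")
```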

The powers of the transition matrix can be calculated by diagonalizing $T$:

$$T^n = (P D P^{-1})^n = P D^n P^{-1} \quad (2.27)$$

where $D$ denotes the infinite matrix with the eigenvalues of $T$ as diagonal entries, while $P$ is the infinite modal matrix composed of the eigenvectors of $T$:

$$D = \mathrm{diag}[\lambda_0, \lambda_1, \lambda_2, \dots], \qquad P = [v_0, v_1, v_2, \dots] \quad (2.28)$$

We now prove the main theorem of this section:

Theorem 2.6. Let $u_i$, $i \ge 0$ be a sequence of numbers satisfying the recurrence relation (2.1). Let $F(u_{i-1}) = \sum_{j=0}^{m} c_j (u_{i-1})^j$ be polynomial in $u_{i-1}$ such that the corresponding polynomial equation:

$$\sum_{j=0}^{m} c_j d^j = d \quad (2.29)$$

has a root $d$ satisfying:

$$\sum_{j=1}^{m} j c_j d^{j-1} \ne 0 \;\wedge\; \left( \left| \sum_{j=1}^{m} j c_j d^{j-1} \right| \ne 1 \;\vee\; \arg\left( \sum_{j=1}^{m} j c_j d^{j-1} \right) \notin \pi\mathbb{Q} \right) \quad (2.30)$$

Then, the solution to the recurrence (2.1) is given by:

$$u_i = d + \sum_{l=1}^{\infty} \left[ \sum_{j=1}^{l} v_j^1 \, (c'_1)^{ij} \, P^{-1}_{jl} \right] (u_0 - d)^l \quad (2.31)$$

where:

$$P^{-1}_{l_0, l_{p+1}} = \begin{cases} 0 & \text{for } l_0 > l_{p+1} \\ 1 & \text{for } l_0 = l_{p+1} \\ \displaystyle \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{v_{l_{j+1}}^{l_j}}{v_{l_j}^{l_j}} & \text{for } l_0 < l_{p+1} \end{cases}$$

$$v_{l_{p+1}}^{l_0} = \begin{cases} 0 & \text{for } l_0 > l_{p+1} \\ 1 & \text{for } l_0 = l_{p+1} \\ \displaystyle \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{T_{l_j, l_{j+1}}}{T_{l_j, l_j} - \lambda_{l_{p+1}}} & \text{for } l_0 < l_{p+1} \end{cases}$$

$$T_{ab} = \sum_{\substack{\sum_{l=0}^{m} k_l = a \\ \sum_{l=0}^{m} l k_l = b}} \frac{a!}{\prod_{s=0}^{m} k_s!} \prod_{t=0}^{m} (c'_t)^{k_t}, \qquad c'_0 = 0, \quad c'_l = \sum_{j=l}^{m} c_j \binom{j}{l} d^{j-l} \;\text{ for } l \ne 0 \quad (2.32)$$

Proof. Because Equation (2.29) is equivalent to requiring $c'_0 = 0$ (where $c'_0$ is the free term of the polynomial $F'$ obtained after the shift of variables by $d$), the variables $u'_i = u_i - d$ are the ones in which the free term vanishes. In the same way, from (2.9) we have:

$$c'_1 = \sum_{j=1}^{m} c_j \binom{j}{1} d^{j-1} = \sum_{j=1}^{m} j c_j d^{j-1} \quad (2.33)$$

Therefore, condition (2.30) expresses the fact that all of the powers of $c'_1$ have different values.

The transition matrix $T$ is given by (2.3):

$$T_{ab} = \sum_{\substack{\sum_{l=0}^{m} k_l = a \\ \sum_{l=0}^{m} l k_l = b}} \frac{a!}{\prod_{s=0}^{m} k_s!} \prod_{t=0}^{m} (c'_t)^{k_t} \quad (2.34)$$

By Lemma 2.2, $c'_0 = 0$ and the infinite transition matrix is upper triangular, while (2.30) implies that $T$ is diagonalizable. Furthermore, by Lemma 2.4 the $a$-th eigenvector has at most $a+1$ non-zero elements, and the (infinite) matrix $P$ can be chosen to be upper triangular (the inverse of $P$ is then upper triangular as well, Lemma 2.5). The powers of the transition matrix $T$ can be obtained from (2.27). Hence, using (1.8), (2.34) and Lemma 2.4:

$$u'_i = e_1 T^i y_0 = e_1 P D^i P^{-1} y_0 = \begin{bmatrix} 0 & 1 & 0 & \cdots \end{bmatrix} \begin{bmatrix} v_0^0 & v_1^0 & v_2^0 & \cdots \\ 0 & v_1^1 & v_2^1 & \cdots \\ 0 & 0 & v_2^2 & \cdots \\ \vdots & & & \ddots \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & \cdots \\ 0 & c'_1 & 0 & \cdots \\ 0 & 0 & c_1'^{\,2} & \cdots \\ \vdots & & & \ddots \end{bmatrix}^i \begin{bmatrix} P^{-1}_{00} & P^{-1}_{01} & P^{-1}_{02} & \cdots \\ 0 & P^{-1}_{11} & P^{-1}_{12} & \cdots \\ 0 & 0 & P^{-1}_{22} & \cdots \\ \vdots & & & \ddots \end{bmatrix} \begin{bmatrix} 1 \\ u'_0 \\ (u'_0)^2 \\ \vdots \end{bmatrix} \quad (2.35)$$

where the $P^{-1}_{ab}$ denote the elements of the inverse of $P$, which can be calculated using Lemma 2.5, while the formula for $v_j^l$ is given in (2.16). The $i$-th power of the diagonal matrix is just the matrix with the $i$-th powers of its elements on the diagonal. Carrying out the matrix multiplication, we obtain:

$$u'_i = \sum_{l=1}^{\infty} (u'_0)^l \sum_{j=1}^{l} v_j^1 \left( c_1'^{\,j} \right)^i P^{-1}_{jl} = \sum_{l=1}^{\infty} \left[ \sum_{j=1}^{l} v_j^1 \, c_1'^{\,ij} \, P^{-1}_{jl} \right] (u'_0)^l \quad (2.36)$$

By retrieving the original sequence $u_i = u'_i + d$ and substituting back the original coefficients $c_j$ for the shifted parameters $c'_j$, we arrive at the result.

Theorem 2.6 allows for solving a large family of nonlinear recurrences. It is apparent from (2.31) that the general solution has the qualitative form:

$$u_i = \sum_{j=0}^{\infty} f_j(i) \, u_0^j \quad (2.37)$$

where the $f_j$ are some (in general complex-valued) functions of the number of iterations $i$. Therefore, Theorem 2.6 can be used to determine the form of the functions $f_j$. As an example, the results are applied to the well-known logistic mapping:

Example 2.7. The logistic map is a recurrence relation of the form [26]:

$$u_i = r u_{i-1} - r u_{i-1}^2 \quad (2.38)$$

with the coefficient $r \in \mathbb{R}$. Conditions (2.29) and (2.30) are trivially fulfilled for $r \ne 0, \pm 1$ and $d = 0$. We obtain (truncating to the first four rows and columns):

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & \frac{1}{1-r} & \frac{2}{r^3-r^2-r+1} \\ 0 & 0 & 1 & \frac{2}{1-r} \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & \frac{1}{r-1} & \frac{2r}{r^3-r^2-r+1} \\ 0 & 0 & 1 & \frac{2}{r-1} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.39)$$

Thus, by Theorem 2.6 the solution takes the form:

$$u_i = r^i u_0 + \frac{r^i - r^{2i}}{r-1} u_0^2 + \frac{2r \cdot r^i - 2(r+1) r^{2i} + 2 r^{3i}}{r^3 - r^2 - r + 1} u_0^3 + \cdots \quad (2.40)$$

It can be seen that the "coefficient functions" $f_j$ in front of the powers of the initial value $u_0$ become progressively more complicated as the order of $u_0^j$ increases.
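A quick numerical check of (2.40), written by us: for a small $u_0$ and $|r^i u_0|$ well below 1, the first three terms of the series already match direct iteration of the logistic map to high accuracy (the values of $r$, $u_0$ and $i$ are arbitrary choices).

```python
r, u0, i = 1.5, 0.001, 6

series = (r**i * u0
          + (r**i - r**(2*i)) / (r - 1) * u0**2
          + (2*r*r**i - 2*(r + 1)*r**(2*i) + 2*r**(3*i))
            / (r**3 - r**2 - r + 1) * u0**3)

x = u0
for _ in range(i):
    x = r*x - r*x*x                            # direct iteration of (2.38)

print(series, x)                               # agree up to O(u0^4) terms
```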

3. Multi-Variable Recurrences of Depth One

The treatment of polynomial multi-variable recurrences of depth one follows the same structure as Section 2. With appropriately chosen auxiliary vectors, the recurrence can be Carleman-linearized and represented by a transition matrix. Let:

$$z^i = [u_i^1, u_i^2, \dots, u_i^k]^T \quad (3.1)$$

be the vector composed of the $k$ variables $u_i^k$. Then, a general polynomial recurrence of depth one in $k$ variables can be compactly written in the form:

$$u_i^{p+1} = z_p^i = F_{p+1}(u_{i-1}^1, u_{i-1}^2, \dots, u_{i-1}^k) = C_p^0 + \sum_{j=1}^{m} \sum_{l_1, \dots, l_j = 0}^{k-1} C_p^j{}_{l_1, \dots, l_j} \prod_{s=1}^{j} z_{l_s}^{i-1} = C_p^0 + \sum_{l_1=0}^{k-1} C_p^1{}_{l_1} z_{l_1}^{i-1} + \sum_{l_1=0}^{k-1} \sum_{l_2=0}^{k-1} C_p^2{}_{l_1, l_2} z_{l_1}^{i-1} z_{l_2}^{i-1} + \cdots \quad (3.2)$$

where the symbols $C_p^j{}_{l_1, \dots, l_j}$ represent $(j+1)$-dimensional arrays. It is apparent that the description in terms of $C_p^j{}_{l_1, \dots, l_j}$ is somewhat redundant: every permutation of the lower right indices $l_1, \dots, l_j$ multiplies the same combination of powers of the variables $u_i^k$. For convenience, we choose $C_p^j{}_{l_1, \dots, l_j}$ to vanish unless the lower right indices are in non-decreasing order, i.e. $C_p^j{}_{l_1, \dots, l_j} = 0$ if $\neg(l_1 \le \cdots \le l_j)$. As a result, the symbol $C_p^j{}_{l_1, \dots, l_j}$ has only $k \binom{k+j-1}{j}$ independent components for each $j$, when all of the lower indices are taken into account [27].

Theorem 3.1. Let the polynomial multi-variable depth-one recurrence be defined as in (3.2), and let the Kronecker powers of $z^i$ be defined as in [13]:

$$(z^i)^{\otimes\alpha} = [u_i^1 (z^i)^{\otimes(\alpha-1)}, u_i^2 (z^i)^{\otimes(\alpha-1)}, \dots, u_i^k (z^i)^{\otimes(\alpha-1)}]^T, \qquad (z^i)^{\otimes 0} := 1 \quad (3.3)$$

Then, the Carleman linearization of the system can be defined on the infinite auxiliary vectors:

$$y^i = [(z^i)^{\otimes 0}, (z^i)^{\otimes 1}, (z^i)^{\otimes 2}, \dots]^T \quad (3.4)$$

and the solution to the recurrence (3.2) can be written as:

$$u_i^k = e_k T^i y^0 \quad (3.5)$$

where $y^0$ is defined by the initial conditions and the infinite-dimensional transition matrix has the block form:

$$T_{ab} = \begin{bmatrix} 1 & 0 & 0 & \cdots \\ C_p^0 & C_p^1{}_{l_1} & C_p^2{}_{l_1,l_2} & \cdots \\ C_p^0 \otimes C_p^0 & C_p^0 \otimes C_p^1{}_{l_1} + C_p^1{}_{l_1} \otimes C_p^0 & \cdots & \\ \vdots & & \ddots & \end{bmatrix} \quad (3.6)$$

The Kronecker product in (3.6) acts on the subscripts only.

Proof. Since $y^i$ is composed of consecutive Kronecker powers of $z^i$, any polynomial $F(u_i^1, u_i^2, \dots, u_i^k)$ in $k$ variables can be rewritten as a linear combination of the elements of $y^i$. The recurrence is defined solely by the arrays $C_p^j{}_{l_1,\dots,l_j}$, therefore the transition matrix consists of the numbers appearing in $C_p^j{}_{l_1,\dots,l_j}$. The elements of the vector $y^i$ can be identified with products of the elements of $z^i$. From the definitions (3.3), (3.4) and the properties of the Kronecker product we have (the $l$-th component belongs to the Kronecker block of degree $S = \lfloor \log_k(1 + l(k-1)) \rfloor$, and its factors are read off as the base-$k$ digits of its offset within the block):

$$y_l^i = \begin{cases} 1 & \text{for } l = 0 \\ \displaystyle \prod_{s=1}^{S} u_i^{\left( \left\lfloor \left( l - \frac{k^{S}-1}{k-1} \right) k^{s-S} \right\rfloor \,\%\, k \right) + 1} & \text{for } l \ge 1 \end{cases}, \qquad \prod_{j=1}^{s} u_i^{l_j} = y^i_{\frac{k^s-1}{k-1} + \sum_{j=1}^{s} k^{s-j}(l_j-1)} = y^i_{\sum_{j=1}^{s} k^{s-j} l_j} \quad (3.7)$$

The product of the elements of $z^i$ can be expressed in terms of the elements of $z^{i-1}$:

$$\prod_{j=0}^{k-1} (z_j^i)^{t_j} = \prod_{j=0}^{k-1} \left( F_{j+1}(z^{i-1}) \right)^{t_j} \quad (3.8)$$

Analogously to (1.9), the infinite transition matrix can be defined as (with $S(l) = \lfloor \log_k(1 + l(k-1)) \rfloor$ as in (3.7)):

$$T_{ab} = R_b \left( \prod_{s=1}^{S(b)} \frac{\partial}{\partial u_{i-1}^{\left( \left\lfloor \left( b - \frac{k^{S(b)}-1}{k-1} \right) k^{s-S(b)} \right\rfloor \% k \right)+1}} \right) \left( \prod_{s=1}^{S(a)} F_{\left( \left\lfloor \left( a - \frac{k^{S(a)}-1}{k-1} \right) k^{s-S(a)} \right\rfloor \% k \right)+1}(u_{i-1}^1, \dots, u_{i-1}^k) \right) \Bigg|_{u_{i-1}^k = 0} \quad (3.9)$$

The constants $R_b$ do not depend on the $F_i$, and are defined as the product:

$$R_b = \frac{1}{c_b} \prod_{s=1}^{k} \frac{1}{a_s!} \quad (3.10)$$

where $a_s$ is the number of times $u_{i-1}^s$ appears in $y_b^{i-1}$, while $c_b$ is the number of times the element $y_b^i$ appears in $y^i$. Equivalently, $T_{ab}$ can be written in the matrix form:

$$\begin{bmatrix} 1 & 0 & \cdots & 0 & \cdots \\ C_0^0 & C_0^1{}_0 & \cdots & C_0^1{}_{k-1} & \cdots \\ \vdots & \vdots & & \vdots & \\ C_{k-1}^0 & C_{k-1}^1{}_0 & \cdots & C_{k-1}^1{}_{k-1} & \cdots \\ C_0^0 C_0^0 & C_0^0 C_0^1{}_0 + C_0^0 C_0^1{}_0 & \cdots & C_0^0 C_0^1{}_{k-1} + C_0^0 C_0^1{}_{k-1} & \cdots \\ \vdots & \vdots & & \vdots & \\ C_{k-1}^0 C_{k-1}^0 & C_{k-1}^0 C_{k-1}^1{}_0 + C_{k-1}^0 C_{k-1}^1{}_0 & \cdots & C_{k-1}^0 C_{k-1}^1{}_{k-1} + C_{k-1}^0 C_{k-1}^1{}_{k-1} & \cdots \end{bmatrix} \quad (3.11)$$

By replacing the groups of elements in (3.11) with the block-matrix notation of Kronecker powers of the arrays $C_p^j{}_{l_1,\dots,l_j}$, we arrive at (3.6).

For $k = 1$, expression (3.6) reduces to (2.2). Furthermore, the analogue of Theorem 2.1 can be proven. If $C_p^0 = 0$ for all $p$, then all of the Kronecker products involving $C_p^0$ vanish. Since all of the block matrices below the diagonal in (3.6) include $C_p^0$ in the Kronecker product, the lower part of the block transition matrix vanishes and the matrix becomes block upper triangular. The block matrices on the diagonal of $T_{ab}$ are the Kronecker powers of the array $C_p^1{}_l$ standing at the linear term. The Kronecker powers of $C^1$ are defined as in [13]:

$$(C^1)^{\otimes\alpha} = \begin{bmatrix} C_0^1{}_0 (C^1)^{\otimes(\alpha-1)} & C_0^1{}_1 (C^1)^{\otimes(\alpha-1)} & \cdots & C_0^1{}_{k-1} (C^1)^{\otimes(\alpha-1)} \\ C_1^1{}_0 (C^1)^{\otimes(\alpha-1)} & C_1^1{}_1 (C^1)^{\otimes(\alpha-1)} & \cdots & C_1^1{}_{k-1} (C^1)^{\otimes(\alpha-1)} \\ \vdots & \vdots & & \vdots \\ C_{k-1}^1{}_0 (C^1)^{\otimes(\alpha-1)} & C_{k-1}^1{}_1 (C^1)^{\otimes(\alpha-1)} & \cdots & C_{k-1}^1{}_{k-1} (C^1)^{\otimes(\alpha-1)} \end{bmatrix}, \qquad (C^1)^{\otimes 0} = 1 \quad (3.12)$$

Therefore, the transition matrix can be brought to the upper triangular form if $C_p^0 = 0$ for all $p$ and $C_p^1{}_l = 0$ for $p > l$. Even though general recurrences do not fulfill those conditions, the variables can (in some cases) be transformed in such a way that the new transition matrix is upper triangular.

Lemma 3.2. Let the recurrence on $u_i^k$ be defined as in (3.2). There exists a linear transformation of the variables of $F_k$ such that the transition matrix $T'_{ab}$ corresponding to the new sequence is upper triangular if and only if there exists a $k$-dimensional vector $B$ satisfying:

$$B_p = \sum_{s=0}^{m} \frac{1}{s!} \sum_{l_1, \dots, l_s = 0}^{k-1} \sum_{\sigma} C_p^s{}_{l_{\sigma_1}, \dots, l_{\sigma_s}} \prod_{t=1}^{s} B_{l_t} \quad (3.13)$$

Proof. The general (invertible) linear transformation of the $k$ variables $u_i^k$ can be written in the matrix form:

$$z^i = A z'^i + B \iff z'^i = A^{-1}(z^i - B) \quad (3.14)$$

with $\det(A) \ne 0$. The new recurrence defined on the primed variables $z'^i$ satisfies:

$$z_p'^i = C_p'^0 + \sum_{j=1}^{m} \sum_{l_1, \dots, l_j = 0}^{k-1} C_p'^j{}_{l_1, \dots, l_j} \prod_{s=1}^{j} z_{l_s}'^{i-1} \quad (3.15)$$

where:

$$C_p'^0 = -\sum_{j=0}^{k-1} A^{-1}_{pj} B_j + \sum_{s=0}^{m} \sum_{j, l_1, \dots, l_s = 0}^{k-1} A^{-1}_{pj} \hat{C}_j^s{}_{l_1, \dots, l_s} \prod_{t=1}^{s} B_{l_t}$$

$$C_p'^q{}_{x_1, \dots, x_q} = \sum_{s=q}^{m} \binom{s}{q} \sum_{j, l_1, \dots, l_s = 0}^{k-1} A^{-1}_{pj} \hat{C}_j^s{}_{l_1, \dots, l_s} \left( \prod_{t_1=1}^{q} A_{l_{t_1} x_{t_1}} \right) \left( \prod_{t_2=q+1}^{s} B_{l_{t_2}} \right)$$

$$\hat{C}_j^s{}_{l_1, \dots, l_s} = \frac{1}{s!} \sum_{\sigma} C_j^s{}_{l_{\sigma_1}, \dots, l_{\sigma_s}} \quad (3.16)$$

The symbol $\hat{C}_j^s{}_{l_1, \dots, l_s}$ denotes the symmetrized array, equal to $C_j^s{}_{l_1, \dots, l_s}$ averaged over all of the permutations $\sigma$ of the right subscripts.

It is trivial that the composition of linear transformations is a linear transformation. We first perform a shift of the variables by the vector $B$. From (3.16) we have:

$$C_p'^0 = -B_p + \sum_{s=0}^{m} \frac{1}{s!} \sum_{l_1, \dots, l_s = 0}^{k-1} \sum_{\sigma} C_p^s{}_{l_{\sigma_1}, \dots, l_{\sigma_s}} \prod_{t=1}^{s} B_{l_t} \quad (3.17)$$

Setting $C_p'^0 = 0$, the above equation reduces to a set of coupled polynomial equations. Unfortunately, the resulting system of equations is not always solvable. Assuming that (3.17) has a solution, the other arrays transform according to (3.16).

Subsequently, a linear transformation is performed with $B = 0$. The constant term $C_p''^0 = C_p'^0 = 0$ (i.e. it still vanishes, regardless of the choice of $A$), while the linear term is:

$$C_p''^1{}_t = \sum_{j, l = 0}^{k-1} A^{-1}_{pj} C_j'^1{}_l A_{lt} \quad (3.18)$$

Similarly, there always exists a suitable choice of the $k \times k$ matrix $A$ such that $C_p''^1{}_t$ becomes upper triangular in the lower indices [13], and thus the transition matrix can be brought to an upper triangular form provided that (3.13) has a solution.

If a shift vector $B$ satisfying (3.13) exists, it is not necessarily unique. The situation is analogous to the one found for the uni-variable recurrence. Different choices of $B$ satisfying (3.13) with $C_p'^0 = 0$ will lead to $C_p'^1{}_l$ with different eigenvalues. Therefore, it is suitable to choose $B$ such that it satisfies the diagonalizability conditions stated in Lemma 3.3.

It is worth noting that even though the arrays $C_p^j{}_{l_1,\dots,l_j}$ vanish unless the lower right indices are in non-decreasing order, the same may not be true for the transformed arrays $C_p'^j{}_{l_1,\dots,l_j}$. Therefore, after transforming the coefficient arrays with the linear transformation (3.16), it is necessary to upper triangularize the transformed coefficients so that they also satisfy $C_p'^j{}_{l_1,\dots,l_j} = 0$ if $\neg(l_1 \le \cdots \le l_j)$.

Since the eigenvalues of the Kronecker product of two matrices are the products of the eigenvalues of the two matrices, we have:

$$\mathrm{Spec}\left( (C^1)^{\otimes\alpha} \right) = \mathrm{Spec}(C^1)^{\alpha} \quad (3.19)$$

where the right-hand side represents the Cartesian power of the set of eigenvalues of $C^1$ [24]. As a result, a general eigenvalue is of the form:

$$\lambda = \lambda_0^{a_0} \lambda_1^{a_1} \cdots \lambda_{k-1}^{a_{k-1}} \quad (3.20)$$

for some non-negative integers $a_p$, with $\lambda_p \in \mathrm{Spec}(C^1)$. In the same way as in (3.7), the $l$-th eigenvalue of the transition matrix can be written as:

$$\lambda_l = \begin{cases} 1 & \text{for } l = 0 \\ \displaystyle \prod_{s=1}^{S(l)} \lambda_{\left\lfloor \left( l - \frac{k^{S(l)}-1}{k-1} \right) k^{s-S(l)} \right\rfloor \% k} & \text{for } l \ge 1 \end{cases} \quad (3.21)$$

The matrix is diagonalizable if all of its eigenvalues are different, thus a sufficient condition for the existence of the diagonal form can be constructed.

Although this is true, a problem arises due to the redundancy of the description in terms of the auxiliary vectors (3.4). Since the $y^i$ are composed of Kronecker powers of $z^i$, there are components that are redundant, i.e. that represent the same product of the variables $u_i^k$. For example, if $k = 2$, the infinite auxiliary vectors are:

$$y^i = [1, u_i^1, u_i^2, (u_i^1)^2, u_i^1 u_i^2, u_i^2 u_i^1, (u_i^2)^2, \dots]^T \quad (3.22)$$

and it is apparent that the 4-th and 5-th components are the same.

Due to the redundancy, repeated copies of the same eigenvalues appear in the upper triangular form of the transition matrix (3.6). Lemmas 2.4 and 2.5 are applicable only if the eigenvalues of $T_{ab}$ are non-degenerate, hence the Lemmas cannot be used directly to diagonalize the transition matrix. Nevertheless, the special choice of $C_p^j{}_{l_1,\dots,l_j}$ allows one to "reduce" the transition matrix and get rid of the undesired copies.

The redundant terms arise from the permutations of the lower right indices of $C_p^j{}_{l_1,\dots,l_j}$. By definition (the beginning of Section 3), the arrays vanish unless the lower indices form a non-decreasing sequence. Consequently, all of the columns in $T_{ab}$ associated with copies of the same power of the variables $u_i^k$ vanish except for the diagonal element. For example, since the 4-th and 5-th elements of the auxiliary vectors (3.22) coincide, all of the elements $T_{a5}$, $a \ne 5$ vanish. This observation allows one to get rid of the redundant columns (and rows) of $T_{ab}$, hence making the calculation of the powers of the transition matrix feasible. In other words, the redundancy has to be taken into account when assessing the diagonalizability of $T_{ab}$.

In order to differentiate between the quantities related to the redundant (original) and non-redundant systems, tilde notation is introduced. The quantities related to the system without redundancy (vectors without redundant components and matrices without redundant rows and columns) are denoted with a tilde above the symbol. For example, the equivalent of the auxiliary vector (3.22) for the non-redundant Carleman embedding is:

$$\tilde{y}^i = [1, u_i^1, u_i^2, (u_i^1)^2, u_i^1 u_i^2, (u_i^2)^2, \dots]^T \quad (3.23)$$

Similarly, the transition matrix for $\tilde{y}^i$ is denoted by $\tilde{T}_{ab}$. Although it is the non-redundant Carleman embedding defined by $\tilde{T}_{ab}$ and $\tilde{y}^i$ that is eventually diagonalized, it is hard to express it using a concise formula. Equivalents of expression (3.4) for $\tilde{y}^i$ and (3.6) for $\tilde{T}_{ab}$ have not been found. As opposed to the redundant system, the transition matrix $\tilde{T}_{ab}$ cannot be simply expressed in terms of Kronecker products. On the other hand, it is much easier to calculate the powers of $\tilde{T}_{ab}$ than those of $T_{ab}$. The following can be said about the diagonalizability of the "non-redundant" transition matrix:
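The mapping between $y$ and $\tilde{y}$ is easy to generate programmatically. In the sketch below (our code, not the article's), each component of the degree-$s$ Kronecker block is a word over $\{0, \dots, k-1\}$, and a component is kept in the non-redundant embedding exactly when its word is non-decreasing:

```python
from itertools import product

k, max_power = 2, 3
words = [()]                                   # degree-0 component: the 1
for s in range(1, max_power + 1):
    words += list(product(range(k), repeat=s)) # Kronecker block of degree s

keep = [i for i, w in enumerate(words) if list(w) == sorted(w)]

labels = {(): "1"}
for w in words[1:]:
    labels[w] = "*".join(f"u^{l + 1}" for l in w)

print([labels[words[i]] for i in keep])
# ['1', 'u^1', 'u^2', 'u^1*u^1', 'u^1*u^2', 'u^2*u^2', ...]
```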

Lemma 3.3. A sufficient condition for the infinite upper triangular transition matrix $\tilde{T}_{ab}$ to be diagonalizable is for the corresponding $k \times k$ matrix $C^1$ to have eigenvalues $\lambda_p$ such that the system:

$$\begin{cases} x_0 + x_1 \log_{|\lambda_0|}|\lambda_1| + \cdots + x_{k-1} \log_{|\lambda_0|}|\lambda_{k-1}| = 0 \\ \mathrm{Arg}(\lambda_0) x_0 + \mathrm{Arg}(\lambda_1) x_1 + \cdots + \mathrm{Arg}(\lambda_{k-1}) x_{k-1} = 2\pi t \end{cases} \quad (3.24)$$

has no non-trivial integer solutions $x_0, \dots, x_{k-1}, t \in \mathbb{Z}$. In the case of two variables ($k = 2$), this can be further simplified to:

$$\log_{|\lambda_0|}|\lambda_1| \notin \mathbb{Q} \;\vee\; \mathrm{Arg}(\lambda_1) - \mathrm{Arg}(\lambda_0) \log_{|\lambda_0|}|\lambda_1| \notin \pi\mathbb{Q} \quad (3.25)$$

Proof. As stated before, the redundancy in the auxiliary vectors $y^i$ implies that there will always be an eigenvalue with algebraic multiplicity of at least two. However, since all of the elements of the corresponding column except the diagonal one vanish, a sufficient condition for diagonalization can still be formulated based on the tilded transition matrix $\tilde{T}_{ab}$.

The condition for $\tilde{T}_{ab}$ to have different eigenvalues can be written as:

$$\lambda_0^{a_0} \lambda_1^{a_1} \cdots \lambda_{k-1}^{a_{k-1}} = \lambda_0^{a'_0} \lambda_1^{a'_1} \cdots \lambda_{k-1}^{a'_{k-1}} \;\Rightarrow\; \forall p \; a_p = a'_p \quad (3.26)$$

Dividing by the primed side:

$$\lambda_0^{a_0 - a'_0} \lambda_1^{a_1 - a'_1} \cdots \lambda_{k-1}^{a_{k-1} - a'_{k-1}} = 1 \quad (3.27)$$

Therefore, the condition is satisfied if and only if the only solution to (3.27) is $a_p - a'_p = 0$ for all $p$. Rewriting the eigenvalues in the polar form $\lambda_p = r_p e^{i\theta_p}$:

$$\begin{cases} r_0^{a_0 - a'_0} r_1^{a_1 - a'_1} \cdots r_{k-1}^{a_{k-1} - a'_{k-1}} = 1 \\ e^{i\theta_0(a_0 - a'_0)} e^{i\theta_1(a_1 - a'_1)} \cdots e^{i\theta_{k-1}(a_{k-1} - a'_{k-1})} = 1 \end{cases} \quad (3.28)$$

and taking the logarithm of both sides of (3.28), we obtain a set of linear equations:

$$\begin{cases} (a_0 - a'_0) + (a_1 - a'_1) \log_{r_0} r_1 + \cdots + (a_{k-1} - a'_{k-1}) \log_{r_0} r_{k-1} = 0 \\ \theta_0 (a_0 - a'_0) + \theta_1 (a_1 - a'_1) + \cdots + \theta_{k-1} (a_{k-1} - a'_{k-1}) = 2\pi t, \quad t \in \mathbb{Z} \end{cases} \quad (3.29)$$

There always exists the trivial solution $a_p - a'_p = 0$ for all $p$.

In the case $k = 2$, the equations reduce to:

$$\begin{cases} (a_0 - a'_0) + (a_1 - a'_1) \log_{r_0} r_1 = 0 \\ \theta_0 (a_0 - a'_0) + \theta_1 (a_1 - a'_1) = 2\pi t, \quad t \in \mathbb{Z} \end{cases} \quad (3.30)$$

The upper equation has non-trivial integer solutions if and only if $\log_{r_0} r_1 \in \mathbb{Q}$. Substituting the result into the lower equation, we conclude that the above system of equations does not have a non-trivial integer solution if:

$$\log_{r_0} r_1 \notin \mathbb{Q} \;\vee\; \theta_1 - \theta_0 \log_{r_0} r_1 \notin \pi\mathbb{Q} \quad (3.31)$$
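In floating point, irrationality cannot be decided, so condition (3.31) can only be tested up to a finite denominator. The helper below is our heuristic sketch, not part of the article's method; it assumes $|\lambda_0| \ne 1$ and searches for small rational relations only.

```python
from math import log, pi, isclose
from cmath import polar

def looks_diagonalizable(l0, l1, qmax=50):
    """Approximate test of (3.31) for k = 2, up to denominator qmax."""
    r0, t0 = polar(l0)                        # assumes r0 != 1
    r1, t1 = polar(l1)
    x = log(r1) / log(r0)                     # log_{r0} r1
    for q in range(1, qmax + 1):
        if isclose(x * q, round(x * q), abs_tol=1e-9):   # x = p/q rational?
            y = (t1 - t0 * x) / pi
            return not isclose(y * q, round(y * q), abs_tol=1e-9)
    return True                               # no small rational relation found

print(looks_diagonalizable(2, 3))             # True: log_2(3) is irrational
print(looks_diagonalizable(2, 4))             # False: 4 = 2^2 collides
```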

If the transition matrix $\tilde{T}_{ab}$ can be brought into an upper triangular form, Lemmas 2.4 and 2.5 can be used to diagonalize it. This allows one to give an explicit expression for the powers of $\tilde{T}_{ab}$, thus solving the recurrence. We now prove the main theorem of the article:

Theorem 3.4. Let $u_i^n$, $i \ge 0$, $1 \le n \le k$ be $k$ sequences of numbers satisfying the recurrence relation (3.2), and let $A$ and $B$ denote the parameters of a linear transformation that brings the transition matrix to an upper triangular form, i.e. a $k \times k$ matrix $A$ and a $k$-dimensional vector $B$ such that:

$$B_p = \sum_{s=0}^{m} \frac{1}{s!} \sum_{l_1, \dots, l_s = 0}^{k-1} \sum_{\sigma} C_p^s{}_{l_{\sigma_1}, \dots, l_{\sigma_s}} \prod_{t=1}^{s} B_{l_t} \quad (3.32)$$

Furthermore, let the $k \times k$ array $C_p''^1{}_x$ be upper triangular, where:

$$C_p''^1{}_x = \sum_{j, l = 0}^{k-1} A^{-1}_{pj} C_j'^1{}_l A_{lx}, \qquad C_p'^1{}_{l_1} = \sum_{s=1}^{m} \sum_{l_2, \dots, l_s = 0}^{k-1} \frac{1}{(s-1)!} \sum_{\sigma} C_p^s{}_{l_{\sigma_1}, \dots, l_{\sigma_s}} \prod_{t=2}^{s} B_{l_t} \quad (3.33)$$

Let the eigenvalues $\lambda_p$ of $C_p''^1{}_x$ (by the eigenvalues of the array we understand the eigenvalues of the corresponding matrix created by considering only the two lower indices) be such that the system:

$$\begin{cases} x_0 + x_1 \log_{|\lambda_0|}|\lambda_1| + \cdots + x_{k-1} \log_{|\lambda_0|}|\lambda_{k-1}| = 0 \\ \mathrm{Arg}(\lambda_0) x_0 + \mathrm{Arg}(\lambda_1) x_1 + \cdots + \mathrm{Arg}(\lambda_{k-1}) x_{k-1} = 2\pi t \end{cases} \quad (3.34)$$

has no non-trivial integer solutions $x_0, \dots, x_{k-1}, t$. Then the solution to the recurrence is given in terms of the initial condition $\tilde{y}'^0$ by:

$$u_i^k = B_{k-1} + \sum_{s=1}^{k} \sum_{l=1}^{\infty} \sum_{j=1}^{l} A_{k-1, s-1} \, \tilde{v}_j'^s \, (\tilde{T}'_{jj})^i \, \tilde{P}'^{-1}_{jl} \, \tilde{y}_l'^0 \quad (3.35)$$

where:

$$\tilde{P}'^{-1}_{l_0, l_{p+1}} = \begin{cases} 0 & \text{for } l_0 > l_{p+1} \\ 1 & \text{for } l_0 = l_{p+1} \\ \displaystyle \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{\tilde{v}_{l_{j+1}}'^{l_j}}{\tilde{v}_{l_j}'^{l_j}} & \text{for } l_0 < l_{p+1} \end{cases}$$

$$\tilde{v}_{l_{p+1}}'^{l_0} = \begin{cases} 0 & \text{for } l_0 > l_{p+1} \\ 1 & \text{for } l_0 = l_{p+1} \\ \displaystyle \sum_{p=0}^{l_{p+1}-l_0-1} \sum_{l_0 < l_1 < \cdots < l_p < l_{p+1}} (-1)^{p+1} \prod_{j=0}^{p} \frac{\tilde{T}'_{l_j, l_{j+1}}}{\tilde{T}'_{l_j, l_j} - \lambda_{l_{p+1}}} & \text{for } l_0 < l_{p+1} \end{cases} \quad (3.36)$$

Proof. Conditions (3.32) and (3.33) ensure that after the linear transformation $z'^i = A^{-1}(z^i - B)$ the corresponding transition matrix $T'_{ab}$ is upper triangular. Similarly, condition (3.34) is simply the sufficient condition for the diagonalizability of $\tilde{T}'_{ab}$ stated in Lemma 3.3.

Starting from Theorem 3.1, we have $u_i'^k = e_k T'^i y'^0$. However, as mentioned before, due to the redundancy of the description in terms of $T'_{ab}$, the corresponding $\tilde{T}'_{ab}$ is used instead. Furthermore, since there is no redundancy in the first $k+1$ elements of $y'^i$, the solution can be equivalently written as:

$$u_i'^k = e_k T'^i y'^0 = e_k \tilde{T}'^i \tilde{y}'^0 = \sum_{l=1}^{\infty} \sum_{j=1}^{l} \tilde{v}_j'^k \, (\tilde{T}'_{jj})^i \, \tilde{P}'^{-1}_{jl} \, \tilde{y}_l'^0 \quad (3.37)$$

where the symbols with a tilde are related to $\tilde{T}'_{ab}$, free from the redundancy mentioned before, and the prime refers to the quantities related to the recurrence obtained after the linear transformation of variables. Transforming (3.37) back to the original sequences:

$$u_i^k = \sum_{s=1}^{k} A_{k-1, s-1} u_i'^s + B_{k-1} = B_{k-1} + \sum_{s=1}^{k} \sum_{l=1}^{\infty} \sum_{j=1}^{l} A_{k-1, s-1} \, \tilde{v}_j'^s \, (\tilde{T}'_{jj})^i \, \tilde{P}'^{-1}_{jl} \, \tilde{y}_l'^0 \quad (3.38)$$

Thus, we arrive at expression (3.35). The eigenvectors and the inverse of the infinite modal matrix $\tilde{P}'^{-1}$ are given by Lemmas 2.4 and 2.5.

The solutions to the recurrence (3.2) have the general form:

$$u_i = \sum_{j_1, \dots, j_k = 0}^{\infty} f_{j_1, \dots, j_k}(i) \prod_{s=1}^{k} (u_0^s)^{j_s} \quad (3.39)$$

which closely resembles Formula (2.37). A noticeable difference is that for the multi-variable recurrence, the solution takes the form of a multi-variable (instead of uni-variable) power series in the initial conditions.

Example 3.5. As an example, we consider the simple recurrence defined by:

$$\begin{cases} u_i^1 = 8 u_{i-1}^1 + 10 u_{i-1}^2 + (u_{i-1}^1)^2 + 3 u_{i-1}^1 u_{i-1}^2 + (u_{i-1}^2)^2 \\ u_i^2 = -3 u_{i-1}^1 - 3 u_{i-1}^2 + (u_{i-1}^1)^2 - u_{i-1}^1 u_{i-1}^2 + (u_{i-1}^2)^2 \end{cases} \quad (3.40)$$

The arrays $C_p^j{}_{l_1,\dots,l_j}$ can be obtained by comparing (3.40) with (3.2):

$$C_p^0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad C_p^1{}_l = \begin{bmatrix} 8 & 10 \\ -3 & -3 \end{bmatrix}, \quad C_0^2{}_{l_1, l_2} = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}, \quad C_1^2{}_{l_1, l_2} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix} \quad (3.41)$$

In order to bring the transition matrix into the upper triangular form, the variables are transformed by a $2 \times 2$ matrix $A$:

$$A = \begin{bmatrix} 1 & 2 \\ -3 & -5 \end{bmatrix} \quad (3.42)$$

to obtain the new recurrence:

$$\begin{cases} u_i'^1 = 2 u_{i-1}'^1 + 87 (u_{i-1}'^1)^2 + 67 u_{i-1}'^1 u_{i-1}'^2 + 13 (u_{i-1}'^2)^2 \\ u_i'^2 = 3 u_{i-1}'^2 - 212 (u_{i-1}'^1)^2 - 164 u_{i-1}'^1 u_{i-1}'^2 - 32 (u_{i-1}'^2)^2 \end{cases} \quad (3.43)$$

with $C_p'^0 = 0$ and:

$$C_p'^1{}_l = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \quad C_0'^2{}_{l_1, l_2} = \begin{bmatrix} 87 & 67 \\ 0 & 13 \end{bmatrix}, \quad C_1'^2{}_{l_1, l_2} = \begin{bmatrix} -212 & -164 \\ 0 & -32 \end{bmatrix} \quad (3.44)$$

The eigenvalues satisfy conditions (3.34), therefore $\tilde{T}'_{ab}$ can be diagonalized and the powers of the transition matrix can be calculated. The infinite transition matrix (truncated here to its first 13 rows and columns) is:

$$T'_{ab} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 87 & 67 & 0 & 13 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & -212 & -164 & 0 & -32 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 & 0 & 348 & 268 & 0 & 52 & 0 & 0 \\ 0 & 0 & 0 & 0 & 6 & 0 & 0 & -424 & -67 & 0 & 137 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 6 & 0 & -424 & -67 & 0 & 137 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 9 & 0 & -1272 & 0 & -984 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 18 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 18 \end{bmatrix} \quad (3.45)$$

The 5-th, 9-th and 11-th rows and columns are examples of the arising redundancy. Lemmas 2.4 and 2.5 can now be applied to the reduced transition matrix $\tilde{T}'_{ab}$, which is obtained by eliminating all of the redundant rows and columns (shown here up to the quadratic block; the cubic entries of rows 3-5 are truncated):

$$\tilde{T}'_{ab} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 87 & 67 & 13 \\ 0 & 0 & 3 & -212 & -164 & -32 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 6 & 0 \\ 0 & 0 & 0 & 0 & 0 & 9 \end{bmatrix} \quad (3.46)$$

This can be further rewritten in the diagonalized form:

$$\tilde{T}'_{ab} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 87 & 201 & 39 \\ 0 & 0 & 1 & -424 & -656 & -112 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 12 & 0 \\ 0 & 0 & 0 & 0 & 0 & 21 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 6 & 0 \\ 0 & 0 & 0 & 0 & 0 & 9 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -\frac{87}{2} & -\frac{67}{4} & -\frac{13}{7} \\ 0 & 0 & 1 & 212 & \frac{164}{3} & \frac{16}{3} \\ 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{12} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{21} \end{bmatrix} \quad (3.47)$$

The matrices to the left and right of the diagonal matrix are $P$ and $P^{-1}$, respectively. The powers of $\tilde{T}'_{ab}$ can now be easily calculated. Using (3.5), we finally arrive at:

$$\begin{aligned} u_i'^1 &= 2^i u_0'^1 + \tfrac{87}{2}(4^i - 2^i)(u_0'^1)^2 + \tfrac{67}{4}(6^i - 2^i) u_0'^1 u_0'^2 + \tfrac{13}{7}(9^i - 2^i)(u_0'^2)^2 + \cdots \\ u_i'^2 &= 3^i u_0'^2 - 212(4^i - 3^i)(u_0'^1)^2 - \tfrac{164}{3}(6^i - 3^i) u_0'^1 u_0'^2 - \tfrac{16}{3}(9^i - 3^i)(u_0'^2)^2 + \cdots \end{aligned} \quad (3.48)$$

The above solution to the primed recurrence (3.43) can now be transformed back to the original variables, giving a rather complicated expression for the solution to the recurrence (3.40):

$$\begin{aligned} u_i^1 = {} & (6 \cdot 3^i - 5 \cdot 2^i) u_0^1 + 10(3^i - 2^i) u_0^2 \\ & + \left( \tfrac{87}{7} 9^i - \tfrac{307}{4} 6^i + \tfrac{413}{2} 4^i - 192 \cdot 3^i + \tfrac{1395}{28} 2^i \right) (u_0^1)^2 \\ & + \left( \tfrac{290}{7} 9^i - \tfrac{3377}{12} 6^i + 826 \cdot 4^i - \tfrac{2440}{3} 3^i + \tfrac{6365}{28} 2^i \right) u_0^1 u_0^2 \\ & + \left( \tfrac{725}{21} 9^i - \tfrac{1535}{6} 6^i + 826 \cdot 4^i - \tfrac{2608}{3} 3^i + \tfrac{3705}{14} 2^i \right) (u_0^2)^2 + \cdots \end{aligned}$$

$$\begin{aligned} u_i^2 = {} & -3(3^i - 2^i) u_0^1 - (5 \cdot 3^i - 6 \cdot 2^i) u_0^2 \\ & + \left( \tfrac{15}{7} 9^i + \tfrac{53}{4} 6^i - \tfrac{163}{2} 4^i + 96 \cdot 3^i - \tfrac{837}{28} 2^i \right) (u_0^1)^2 \\ & + \left( \tfrac{50}{7} 9^i + \tfrac{583}{12} 6^i - 326 \cdot 4^i + \tfrac{1220}{3} 3^i - \tfrac{3819}{28} 2^i \right) u_0^1 u_0^2 \\ & + \left( \tfrac{125}{21} 9^i + \tfrac{265}{6} 6^i - 326 \cdot 4^i + \tfrac{1304}{3} 3^i - \tfrac{2223}{14} 2^i \right) (u_0^2)^2 + \cdots \end{aligned} \quad (3.49)$$

Comparing (3.49) with the general form of the solution (3.39), the coefficient functions can be obtained.
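The truncated solution (3.48) can be sanity-checked numerically. In the sketch below (our code; the initial point and iteration count are arbitrary small choices), the primed system (3.43) is iterated directly, and the residual against the quadratic truncation is of third order in the initial data, i.e. negligible compared to the retained terms:

```python
x0, y0, i = 1e-6, -2e-6, 3                    # small initial point u'_0
X, Y = x0, y0
for _ in range(i):                            # iterate (3.43) directly
    X, Y = (2*X + 87*X*X + 67*X*Y + 13*Y*Y,
            3*Y - 212*X*X - 164*X*Y - 32*Y*Y)

s1 = (2**i*x0 + 87/2*(4**i - 2**i)*x0*x0      # quadratic truncation of (3.48)
      + 67/4*(6**i - 2**i)*x0*y0 + 13/7*(9**i - 2**i)*y0*y0)
s2 = (3**i*y0 - 212*(4**i - 3**i)*x0*x0
      - 164/3*(6**i - 3**i)*x0*y0 - 16/3*(9**i - 3**i)*y0*y0)
print(X - s1, Y - s2)                         # residuals cubic in (x0, y0)
```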

4. Notes on Multi-Variable Recurrences of Arbitrary Depth

Theorem 3.4 is also applicable to arbitrary-depth recursions. For any finite-depth recursion, auxiliary variables can be introduced so that the original recurrence is reduced to a multi-variable recurrence of depth one. The method closely follows the one used for finite-depth linear recurrences [4].

Let the depth-$n$ recurrence be of the form (1.1). A set of $k \times n$ auxiliary variables is defined by:

$$u_i^{l + jk} = u_{i-j}^{l} \quad (4.1)$$

for $0 \le j < n$ and $1 \le l \le k$. Equation (1.1) now takes the form:

$$\begin{cases} u_i^1 = F_1(u_{i-1}^1, \dots, u_{i-1}^{kn}) \\ \;\;\vdots \\ u_i^k = F_k(u_{i-1}^1, \dots, u_{i-1}^{kn}) \\ u_i^{k+1} = u_{i-1}^1 \\ \;\;\vdots \\ u_i^{kn} = u_{i-1}^{k(n-1)} \end{cases} \quad (4.2)$$

which is a multi-variable depth-one recurrence, already discussed in Section 3; a minimal sketch in code follows.
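The sketch below is ours (the quadratic depth-two recurrence is an arbitrary example): it applies the reduction (4.1)-(4.2) with $k = 1$, $n = 2$ and confirms that the depth-one system reproduces the original trajectory.

```python
# u_i = u_{i-1}^2 + u_{i-2} rewritten on (v, w) = (u_i, u_{i-1}), as in (4.2).
def step(v, w):
    # v' = F(v, w) = v^2 + w,  w' = v  (the auxiliary copy from (4.1))
    return v*v + w, v

u = [0.1, 0.2]                                 # initial conditions u_0, u_1
for i in range(2, 8):
    u.append(u[i-1]**2 + u[i-2])               # depth-two recursion directly

v, w = u[1], u[0]
for _ in range(6):
    v, w = step(v, w)                          # depth-one system
print(v, u[7])                                 # identical trajectories
```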

5. Further Remarks

In Section 2, an explicit formula for the solution of a wide range of uni-variable depth-one recurrences is given. Although the final expression is rather complicated, it can be used to derive the coefficient functions $f_j(i)$ in (2.37). Even if the recurrence does not fulfill the diagonalizability condition, there is a chance that it can be reformulated in terms of new variables. The transformation should be chosen such that the corresponding transition matrix has distinct eigenvalues, so that Lemmas 2.4 and 2.5 can be applied. The shift transformation is of particular importance; nevertheless, any transformation that leaves the recurrence polynomial could be used. Transformations by a higher-degree invertible polynomial, such as $u'_i = u_i^3 + 1$, are just one of many examples.

Although the method allows tackling many different polynomial recurrence relations, there are some notable shortcomings. The diagonalizability conditions in Sections 2 and 3 are only sufficient, not necessary, for the transition matrix to be diagonalizable. Therefore, the method may not be applicable to all systems that are solvable by the diagonalization approach. In addition, transition matrices which are known to be non-diagonalizable are yet another group of cases where Theorems 2.6 and 3.4 cannot be used. Possibly, an analogous method could be formulated in which Jordanization is used to calculate the powers of the transition matrix instead of diagonalization.

Theorem 3.4 utilizes the non-redundant system (with $\tilde{T}$) instead of the one defined in Theorem 3.1. This is dictated by the fact that the transition matrix defined for the auxiliary vectors (3.4) necessarily contains redundant copies of its eigenvalues. This may lead to difficulties with evaluating the expansion (3.39) for large $j_1, \dots, j_k$, as only the general rule for constructing $\tilde{T}$ is given, as opposed to an explicit expression as in the case of $T$, i.e. (3.9).

It can be noted that Examples 2.7 and 3.5 belong to a family of recurrences for which no shift is needed. The logistic map automatically fulfills the assumptions for $r \ne 0, \pm 1$, while the recurrence relation (3.40) needs to be transformed only by the matrix $A$, with $B = 0$. Those examples were chosen on purpose, as the solution of the transformed series can be truncated and the calculations simplify significantly. In general, however, if one wants to obtain the exact solution to the original recurrence with a non-zero shift parameter, all of the terms of the solution corresponding to the transformed recurrence have to be kept, which often makes the calculations cumbersome.

Even though the Carleman linearization method does not provide a way to approach every polynomial recurrence systematically, a large group of relations of the form (1.1) fulfill the conditions and the method can be applied. Furthermore, it is straightforward to generalize the method to systems of recurrences (Theorem 3.4), as well as to systems of arbitrary-depth recurrences, as discussed in Section 4.

Improvements to the Carleman linearization of polynomial recurrences may include research on the diagonalizability of the transition matrices, possibly resulting in less-restrictive diagonalizability conditions. In addition, polynomial transformations may play an important role in further extending the group of recurrences the method is applicable to.

Acknowledgements

The author is grateful to Daniel Gagliardi for valuable remarks, and to the anonymous referees for devoting their time and effort to the manuscript.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Borovkov, A.A. (2013) Probability Theory. Springer, London.
https://doi.org/10.1007/978-1-4471-5201-9
[2] Mladenović, P. (2019) Combinatorics: A Problem-Based Approach. Springer, Cham.
https://doi.org/10.1007/978-3-030-00831-4
[3] Andrica, D. and Bagdasar, O. (2020) Recurrent Sequences: Key Results, Applications, and Problems. Springer, Cham.
https://doi.org/10.1007/978-3-030-51502-7
[4] Everest, G., Poorten, A., Shparlinski, I. and Ward, T. (2003) Recurrence Sequences. American Mathematical Society, Providence.
https://doi.org/10.1090/surv/104
[5] Zhang, X., Shi, Y. and Chen, G. (2009) Constructing Chaotic Polynomial Maps. International Journal of Bifurcation and Chaos, 19, 531-543.
https://doi.org/10.1142/S0218127409023172
[6] Grosjean, N. and Huillet, T. (2016) Some Combinatorial Aspects of Discrete Non-Linear Population Dynamics. Chaos, Solitons & Fractals, 93, 71-79.
https://doi.org/10.1016/j.chaos.2016.10.004
[7] Han, D.D., Min, L.Q., Zang, H.Y. and Yang, X.P. (2019) Robust Chaos of Cubic Polynomial Discrete Maps with Application to Pseudorandom Number Generators. Mathematical Problems in Engineering, 2019, Article ID: 8250903.
https://doi.org/10.1155/2019/8250903
[8] Wang, C.F. and Ding, Q. (2019) A Class of Quadratic Polynomial Chaotic Maps and Their Fixed Points Analysis. Entropy, 21, 658-671.
https://doi.org/10.3390/e21070658
[9] Rabinovich, S., Berkolaiko, G. and Havlin, S. (1996) Solving Nonlinear Recursions. Journal of Mathematical Physics, 37, Article No. 5828.
https://doi.org/10.1063/1.531702
[10] Shang, Y. (2012) A Brief Note on an Exponential Recursive Sequence. International Journal of Open Problems in Computer Science and Mathematics, 5, 5 p.
https://doi.org/10.12816/0006093
[11] Cadilhac, M., Mazowiecki, F., Paperman, C., Pilipczuk, M. and Sénizergues, G. (2021) On Polynomial Recursive Sequences. Theory of Computing Systems.
https://doi.org/10.1007/s00224-021-10046-9
[12] Cull, P., Flahive, M. and Robson, R. (2005) Difference Equations: From Rabbits to Chaos. Springer, New York.
[13] Hogben, L. (2014) Handbook of Linear Algebra. 2nd Edition, Chapman and Hall/CRC, New York.
https://doi.org/10.1201/b16113
[14] Berkolaiko, G., Rabinovich, S. and Havlin, S. (1998) Analysis of Carleman Representation of Analytical Recursions. Journal of Mathematical Analysis and Applications, 224, 81-90.
https://doi.org/10.1006/jmaa.1998.5986
[15] Kowalski, K. and Steeb, W.H. (1991) Nonlinear Dynamical Systems and Carleman Linearization. World Scientific, Singapore.
https://doi.org/10.1142/1347
[16] Gralewicz, P. and Kowalski, K. (2002) Continuous Time Evolution from Iterated Maps and Carleman Linearization. Chaos, Solitons & Fractals, 14, 563-572.
https://doi.org/10.1016/S0960-0779(01)00222-3
[17] Rabinovich, S., Berkolaiko, G., Buldyrev, S., Shehter, A. and Havlin, S. (1995) ‘Logistic Map’: An Analytical Solution. Physica A: Statistical Mechanics and its Applications, 218, 457-460.
https://doi.org/10.1016/0378-4371(95)00163-2
[18] Belozyorov, V.Y. and Volkova, S.A. (2016) Role of Logistic and Ricker’s Maps in Appearance of Chaos in Autonomous Quadratic Dynamical Systems. Nonlinear Dynamics, 83, 719-729.
https://doi.org/10.1007/s11071-015-2360-2
[19] Tauber, S. (1963) On Multinomial Coefficients. The American Mathematical Monthly, 70, 1058-1063.
https://doi.org/10.1080/00029890.1963.11992172
[20] Shang, Y. (2011) A Remark on the Chromatic Polynomials of Incomparability Graphs of Posets. International Journal of Pure and Applied Mathematics, 67, 159-164.
[21] Basu, S. and Velleman, D.J. (2017) On Gauss’s First Proof of the Fundamental Theorem of Algebra. The American Mathematical Monthly, 124, 688-694.
https://doi.org/10.4169/amer.math.monthly.124.8.688
[22] Herrero, D.A. (1991) Triangular Operators. Bulletin of the London Mathematical Society, 23, 513-554.
https://doi.org/10.1112/blms/23.6.513
[23] Bronson, R. and Costa, G.B. (2020) Matrix Methods: Applied Linear Algebra and Sabermetrics. Academic Press, London.
[24] Ford, W. (2015) Numerical Linear Algebra with Applications: Using MATLAB. Academic Press, London.
[25] Baliarsingh, P. and Dutta, S. (2015) On an Explicit Formula for Inverse of Triangular Matrices. Journal of the Egyptian Mathematical Society, 23, 297-302.
https://doi.org/10.1016/j.joems.2014.06.001
[26] Strogatz, S.H. (2015) Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press, Boca Raton.
[27] Comon, P., Golub, G., Lim, L. and Mourrain, B. (2008) Symmetric Tensors and Symmetric Tensor Rank. SIAM Journal on Matrix Analysis and Applications, 30, 1254-1279.
https://doi.org/10.1137/060661569
