Carleman Linearization and Systems of Arbitrary Depth Polynomial Recursions
1. Introduction
Recurrence relations arise in various fields of mathematics and prove to be a powerful tool for studying probability [1] and combinatorics [2]. It is often the case that the construction of a solution to a mathematical problem reduces to solving a recurrence relation. The theory of linear recurrences is well studied, leading to various results such as the celebrated Binet's formula or the characteristic polynomial method [3] [4]. The dynamics of non-linear recurrence sequences are of special interest, as there are various examples of non-linear (in particular, polynomial) maps exhibiting chaotic behaviour. The examples include the logistic, quadratic and exponential maps, as well as the Riccati recurrence [5]-[10].
Definition 1.1. [11] Let $\{x^{(1)}_m\}_{m=0}^{\infty}, \ldots, \{x^{(k)}_m\}_{m=0}^{\infty}$ be $k$ sequences of complex numbers such that for every $m \geq 0$ and every $p \in \{1, \ldots, k\}$ the following relation holds:

$x^{(p)}_{m+n} = F_p\big(x^{(1)}_{m+n-1}, \ldots, x^{(k)}_{m+n-1}, \ldots, x^{(1)}_m, \ldots, x^{(k)}_m\big)$ (1.1)

Then, we say that the sequences $\{x^{(p)}_m\}$ are a solution to the depth-$n$ system of recurrences defined by the functions $F_1, \ldots, F_k$.
In the case when the functions $F_p$ are linear, the system of Equations (1.1) can be rewritten in matrix form [12]. If the auxiliary vectors $v_m$ and $e_p$ are defined as:

$v_m = \big(x^{(1)}_{m+n-1}, \ldots, x^{(k)}_{m+n-1}, \ldots, x^{(1)}_m, \ldots, x^{(k)}_m\big)^T$ (1.2)

$e_p = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ with the single 1 at the $p$-th position, (1.3)

then the system of recurrences (1.1) can be rewritten as a recurrence relation for the sequence of vectors $v_m$:

$v_{m+1} = M v_m$ (1.4)

where the $nk \times nk$ matrix $M$ is called the transition matrix associated with (1.1). By induction, the solution to (1.4) is given by powers of the matrix $M$:

$v_m = M^m v_0$ (1.5)

where $v_0$ is defined by the initial conditions. Equation (1.5) can then be multiplied by the vectors $e_p^T$ from the left, hence extracting the original sequences from $v_m$:

$x^{(p)}_{m+n-1} = e_p^T M^m v_0$ (1.6)
The above method of solving linear recurrences is fundamentally equivalent to the method of characteristic polynomials [13].
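As a concrete illustration of Equations (1.2)-(1.6), the following minimal sketch (a hypothetical example, not taken from the article) applies the matrix method to the Fibonacci recurrence, a $k = 1$, depth-two linear case:

```python
import numpy as np

# Fibonacci: x_{m+2} = x_{m+1} + x_m, a k = 1, depth-two linear recurrence.
# Auxiliary vector v_m = (x_{m+1}, x_m)^T as in (1.2).
M = np.array([[1, 1],
              [1, 0]])            # transition matrix of (1.4)
v0 = np.array([1, 0])             # initial conditions x_1 = 1, x_0 = 0

# x_m is read off from v_m = M^m v_0, cf. (1.5)-(1.6).
for m in range(8):
    vm = np.linalg.matrix_power(M, m) @ v0
    print(m, vm[1])               # prints x_m: 0 1 1 2 3 5 8 13
```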
Even if the functions $F_p$ are non-linear (but analytic), the solution to the system can still be formulated in terms of powers of an appropriately chosen transition matrix, in analogy with the linear counterpart [14]. The method of describing dynamical systems (recursions in particular) in terms of infinite-dimensional linear algebra is referred to as Carleman embedding or Carleman linearization [15] [16].
Theorem 1.2. [9] [16] Let $x_{m+1} = F(x_m)$, and let $F$ be analytic in $\mathbb{C}$. Let the (infinite) auxiliary vectors be defined as:

$v_m = \big(1, x_m, x_m^2, x_m^3, \ldots\big)^T$ (1.7)

Then, the solution can be written in the form:

$v_m = M^m v_0$ (1.8)

where $v_0$ is specified by the initial conditions, and the infinite-dimensional matrix $M$ is given by the coefficients of the Maclaurin expansion of the powers of $F$:

$\big[F(x)\big]^a = \sum_{b=0}^{\infty} M_{ab}\, x^b$ (1.9)
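The construction (1.7)-(1.9) can be sketched numerically by truncating the infinite matrix. The snippet below is a minimal illustration, assuming a hypothetical analytic map $F(x) = \sin x$ (not an example from the article); row $a$ of the truncated $M$ collects the Maclaurin coefficients of $F(x)^a$:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)          # hypothetical analytic map
N = 6                  # truncation order

# Row a of M holds the Maclaurin coefficients of F(x)^a, cf. (1.9).
M = sp.zeros(N, N)
for a in range(N):
    expansion = sp.expand(sp.series(F**a, x, 0, N).removeO())
    for b in range(N):
        M[a, b] = expansion.coeff(x, b)

# Truncation check of (1.8): v_1 ~= M v_0 on the leading components.
x0 = 0.1
v0 = sp.Matrix([x0**a for a in range(N)])
v1 = sp.Matrix([float(F.subs(x, x0))**a for a in range(N)])
print([float(e) for e in (M @ v0 - v1)])    # residuals of order x0^N
```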
Theorem 1.2 allows an analytic function to be iterated by computing powers of the infinite transition matrix associated with the function. In practice, however, it is troublesome to give an explicit expression for the powers of the matrix $M$ [17]. Nevertheless, in some cases the transition matrix can be diagonalized and the corresponding eigenvectors can be found [16]. We focus on systems of recurrence relations (1.1) with the $F_p$ restricted to be finite-degree polynomials. The article is divided into two main sections.
Firstly, the $k = 1$ (uni-variable) depth-one case is discussed, leading to an exact expression for the solution in terms of the initial condition and the coefficients of the polynomial $F$. As it turns out, in most cases the transition matrix can be diagonalized by a suitable choice of variables.

The method of the Carleman embedding is then applied to the multi-variable depth-one recurrence. With minor modifications of the auxiliary vectors, the transition matrix can be compactly written as a block matrix. Again, $M$ is diagonalized and the eigenvectors are found. The main theorem of the article is given, solving the recursion.
Finally, we discuss the general case (1.1). Provided that each $F_p$ is polynomial in all of its arguments, the system of Equations (1.1) is embedded in an infinite linear space. It is shown that by introducing appropriate auxiliary variables, any multi-variable recurrence of arbitrary depth can be reduced to a multi-variable depth-one recurrence. Therefore, the recurrence (1.1) can be solved using the main theorem presented in Section 3.
Throughout the article, we assume that the functions $F_p$ are polynomial in all variables. For convenience, the first elements of vectors are indexed from 0 instead of 1. A similar rule applies to matrices.
2. Uni-Variable Recurrences of Depth One
In the uni-variable case, there is only one sequence $\{x_m\}_{m=0}^{\infty}$ and one function $F$ (we drop the unnecessary indices for simplicity). Furthermore, $F$ is a function of a single variable. Recurrence relations of this kind can be written as:

$x_{m+1} = F(x_m) = \sum_{j=0}^{N} a_j x_m^j$ (2.1)

where $a_0, \ldots, a_N$ are the constant coefficients of the polynomial $F$. The recurrence (2.1) is of special interest, as it includes many well-known nonlinear chaotic maps, e.g. the logistic map, which is used as an example at the end of this section [18].
Using Theorem 1.2 and the multinomial expansion theorem [19], the infinite-dimensional transition matrix can be obtained:

$\big[F(x)\big]^a = \Big(\sum_{j=0}^{N} a_j x^j\Big)^a = \sum_{k_0+\cdots+k_N=a} \binom{a}{k_0, \ldots, k_N} \Big(\prod_{j=0}^{N} a_j^{k_j}\Big) x^{\sum_{j=0}^{N} j k_j}$ (2.2)

where $k_0, \ldots, k_N$ are assumed to be non-negative integers. The condition $k_0+\cdots+k_N=a$ denotes the fact that the bigger sum runs only over such non-negative integers $k_j$ that sum up to $a$. This further simplifies to:

$M_{ab} = \sum_{\substack{k_0+\cdots+k_N=a \\ k_1+2k_2+\cdots+Nk_N=b}} \binom{a}{k_0, \ldots, k_N} \prod_{j=0}^{N} a_j^{k_j}$ (2.3)

Again, the conditions under the bigger sum narrow down the possible combinations of the $k_j$ to those that obey $\sum_j k_j = a$ and $\sum_j j k_j = b$. If no such integers exist, the sum vanishes and $M_{ab} = 0$. The transition matrix (2.3), together with the auxiliary vectors from Equation (1.7), defines the Carleman embedding of the recurrence (2.1) in the infinite-dimensional linear space. We now prove several facts that will become useful later on in calculating powers of $M$.
Theorem 2.1. Let $M$ be the infinite-dimensional transition matrix corresponding to the recurrence (2.1). Then, the following holds true:

1) $M_{0b} = \delta_{0b}$;

2) If $a_0 = 0$, $M$ is upper triangular.

Proof. (1) We obtain:

$M_{0b} = \sum_{\substack{k_0+\cdots+k_N=0 \\ \sum_j j k_j = b}} \binom{0}{k_0, \ldots, k_N} \prod_{j=0}^{N} a_j^{k_j}$ (2.4)

However, since $k_j \geq 0$ and $\sum_j k_j = 0$, all of the integers $k_j$ have to vanish. Therefore $b = \sum_j j k_j = 0$. If $b > 0$, there are no $k_j$ satisfying the conditions under the sum, and $M_{0b} = 0$. If $b = 0$:

$M_{00} = \binom{0}{0, \ldots, 0} \prod_{j=0}^{N} a_j^{0} = 1$ (2.5)

(2) If $a_0 = 0$, then the lowest non-zero power of $x$ appearing in the polynomial $F(x)$ is at least $x^1$. By extension, the lowest power of $x$ appearing in the power $[F(x)]^a$ is of order $a$. As a result:

$M_{ab} = 0$ (2.6)

for $b < a$.
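A short numerical sketch of (2.3) and Theorem 2.1: the rows of the truncated transition matrix can be generated by repeated polynomial multiplication, which is equivalent to the multinomial sum in (2.3). The coefficients below are a hypothetical example:

```python
import numpy as np

def carleman_matrix(coeffs, size):
    """Truncated transition matrix (2.3) for x_{m+1} = sum_j coeffs[j] * x_m**j.
    Row a holds the coefficients of F(x)^a."""
    M = np.zeros((size, size))
    power = np.zeros(size)
    power[0] = 1.0                                  # F(x)^0 = 1
    for a in range(size):
        M[a, :] = power
        power = np.convolve(power, coeffs)[:size]   # multiply by F once more
    return M

# Hypothetical example with a_0 = 0: F(x) = x + 2x^2.
M = carleman_matrix(np.array([0.0, 1.0, 2.0]), 5)
print(np.allclose(M[0], [1, 0, 0, 0, 0]))   # Theorem 2.1 (1): first row
print(np.allclose(M, np.triu(M)))           # Theorem 2.1 (2): upper triangular
```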
It is worth noting that the above-mentioned polynomials have also been studied in the setting of graphs [20]. The fact that the transition matrix can take the upper triangular form makes diagonalization feasible. One is always free to work with new shifted variables $y_m = x_m - d$, which satisfy a new recurrence:

$y_{m+1} = \sum_{j=0}^{N} b_j y_m^j$ (2.7)

The coefficients $b_j$ can be obtained by substituting $x_m = y_m + d$ into (2.1):

$y_{m+1} + d = \sum_{j=0}^{N} a_j (y_m + d)^j$ (2.8)

which implies:

$b_j = \sum_{l=j}^{N} \binom{l}{j} a_l d^{l-j} - d\,\delta_{j0}$ (2.9)

Therefore, the new coefficients $b_j$ are functions of the old coefficients $a_j$ and the shift parameter $d$. It turns out that there always exists $d$ such that the transition matrix of the transformed recurrence becomes upper triangular.
Lemma 2.2. Let the recurrence on $\{x_m\}$ be defined as in (2.1). For every such recurrence there exists $d \in \mathbb{C}$ such that the transition matrix $T$ corresponding to the new sequence $y_m = x_m - d$ is upper triangular.

Proof. The new sequence $\{y_m\}$ is obtained by performing a shift transformation with a parameter $d$. As already derived in (2.7), the recurrence of the shifted sequence is:

$y_{m+1} = \sum_{j=0}^{N} b_j y_m^j$ (2.10)

where the constant term (the term near the 0-th power of $y_m$) reads:

$b_0 = \sum_{j=0}^{N} a_j d^j - d = F(d) - d$ (2.11)

Theorem 2.1 states that the transition matrix $T$ corresponding to the shifted recurrence is upper triangular if the constant term $b_0$ vanishes. This leads to the following polynomial equation in $d$:

$F(d) - d = \sum_{j=0}^{N} a_j d^j - d = 0$ (2.12)

By the fundamental theorem of algebra, there always exists $d \in \mathbb{C}$ such that $F(d) - d = 0$ [21]. Therefore, the transition matrix can always be brought to an upper triangular form by a suitable shift of variables.

As a result, we can consider only the recurrences satisfying $a_0 = 0$ without loss of generality.
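The shift of Lemma 2.2 is straightforward to carry out numerically; the sketch below (with a hypothetical quadratic $F$) finds a root of (2.12) and builds the shifted coefficients via (2.9):

```python
import numpy as np
from math import comb

def shift_coefficients(a, d):
    """Coefficients b_j of the shifted recurrence, cf. (2.9), for y_m = x_m - d."""
    N = len(a) - 1
    b = np.array([sum(comb(l, j) * a[l] * d**(l - j) for l in range(j, N + 1))
                  for j in range(N + 1)])
    b[0] -= d
    return b

# Hypothetical example: F(x) = 2 - 3x + x^2, i.e. (a_0, a_1, a_2) = (2, -3, 1).
a = np.array([2.0, -3.0, 1.0])
# Roots of F(d) - d = 0, cf. (2.12); np.roots expects the highest degree first.
d = np.roots([1.0, -3.0 - 1.0, 2.0])[0]
b = shift_coefficients(a, d)
print(d, b)    # b[0] vanishes up to rounding, as guaranteed by Lemma 2.2
```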
Because the transition matrix is upper triangular, the diagonal elements $T_{aa}$ are its eigenvalues [22]. Using (2.2):

$\lambda_a = T_{aa} = a_1^a$ (2.13)

The transition matrix is diagonalizable if all of its eigenvalues are different, i.e. when $\lambda_a \neq \lambda_{a'}$ for $a \neq a'$ [23]. Since in general $a_1 \in \mathbb{C}$, this condition can be rewritten as $0 < |a_1| \neq 1$. It should be noted that the right-hand side of Equation (2.11) may have more than one distinct root, which means that there exist recurrences for which one root $d$ leads to $|b_1| \in \{0, 1\}$ while another leads to $0 < |b_1| \neq 1$.
Example 2.3. Let the recurrence read:

(2.14)

The linear coefficient satisfies $|a_1| \in \{0, 1\}$, hence there is no guarantee that the corresponding transition matrix is diagonalizable. Nevertheless, by performing the shift transformation with a root $d$ of (2.12), we obtain (using formulas (2.9)) a recurrence:

(2.15)

with $0 < |b_1| \neq 1$.
The above example shows that the condition $0 < |a_1| \neq 1$ is sufficient, but not necessary, for the diagonalizability of $T$. Some transition matrices that are hard to deal with can be diagonalized after a shift of variables, and thus the recurrence can be solved.

The next two Lemmas regard the procedure of diagonalization of infinite upper triangular matrices. The set of linearly independent eigenvectors is obtained. Even though the matrix is infinite, the eigenvector corresponding to the $a$-th eigenvalue $\lambda_a$ has at most $a+1$ non-zero components. An explicit expression for the inverse of an upper triangular matrix is given (provided that it exists), which allows for the calculation of the inverse of the corresponding modal matrix. The transition matrix is then diagonalized, and the powers of the matrix are subsequently calculated.
Lemma 2.4. Let $M$ be an infinite upper triangular matrix with complex entries and different eigenvalues, i.e. $M_{aa} \neq M_{bb}$ for $a \neq b$. Let $v^{(a)}_b$ denote the $b$-th component of the (not necessarily normalized) eigenvector corresponding to the $a$-th eigenvalue $\lambda_a = M_{aa}$. Then:

1) $v^{(a)}_b = 0$ for $b > a$;

2) $v^{(a)}_a = 1$;

3) for $b < a$:

$v^{(a)}_b = \frac{1}{\lambda_a - M_{bb}} \sum_{c=b+1}^{a} M_{bc}\, v^{(a)}_c$ (2.16)

Proof. The eigenvector equation $M v^{(a)} = \lambda_a v^{(a)}$ in the matrix form reads:

$\sum_{c \geq b} M_{bc}\, v^{(a)}_c = \lambda_a v^{(a)}_b$ (2.17)

Let us consider the finite upper triangular system of equations that arises from truncating the above matrix at $b = a$:

(2.18)

Since all of the eigenvalues are different, the diagonal elements satisfy $M_{bb} - \lambda_a \neq 0$ for $b \neq a$, and the matrix $M$ is diagonalizable. The determinant of (2.18) vanishes, and thus the solution is not unique (up to a scaling factor), as expected. The explicit expression for $v^{(a)}_b$ as a function of the components $v^{(a)}_c$ with $c > b$ can be obtained by the back substitution method [24]:

$v^{(a)}_b = \frac{1}{\lambda_a - M_{bb}} \sum_{c=b+1}^{a} M_{bc}\, v^{(a)}_c$ (2.19)

For simplicity, we set the scaling factor $v^{(a)}_a = 1$ for all $a$. It can be seen that a non-zero component $v^{(a)}_b$ with $b > a$ would contradict $M_{bb} \neq \lambda_a$, hence the components (2.19) vanish for $b > a$ and the identity (1) holds (for the eigenvectors of the truncated matrix). Since we know that $v^{(a)}_b = 0$ for $b > a$, the eigenvectors of the truncated matrix (2.18) with zeros substituted for the rest of the components are also a solution to the infinite Equation (2.17). Therefore, expressions (1), (2), (3) and (2.16) also hold for the matrix $M$.
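The back substitution (2.16) translates directly into code; a minimal sketch on a small hypothetical upper triangular matrix:

```python
import numpy as np

def eigenvector(M, a):
    """Eigenvector of an upper triangular M for the eigenvalue M[a, a],
    via the back substitution formula (2.16) of Lemma 2.4."""
    lam = M[a, a]
    v = np.zeros(M.shape[0], dtype=complex)
    v[a] = 1.0                                  # scaling choice, item (2)
    for b in range(a - 1, -1, -1):              # back substitution, item (3)
        v[b] = M[b, b + 1:a + 1] @ v[b + 1:a + 1] / (lam - M[b, b])
    return v

M = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])
v = eigenvector(M, 2)
print(np.allclose(M @ v, M[2, 2] * v))          # True
```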
The following formula is an explicit expression for the inverse of an upper triangular matrix:

Lemma 2.5. Let $U$ be an infinite upper triangular matrix with complex entries:

$U_{ab} = 0 \quad \text{for } a > b$ (2.20)

and let $U_{aa} \neq 0$ for all $a$. The explicit expression for the inverse of $U$ is given by:

$\big(U^{-1}\big)_{aa} = \frac{1}{U_{aa}}$ (2.21)

and:

$\big(U^{-1}\big)_{ab} = -\frac{1}{U_{bb}} \sum_{c=a}^{b-1} \big(U^{-1}\big)_{ac}\, U_{cb}$ (2.22)

for $a < b$.

Proof. The inverse of an upper triangular matrix is also upper triangular [25], therefore the equation $U^{-1} U = I$ reads:

$\sum_{c=a}^{b} \big(U^{-1}\big)_{ac}\, U_{cb} = \delta_{ab}$ (2.23)

For the diagonal elements, we have:

$\big(U^{-1}\big)_{aa}\, U_{aa} = 1$ (2.24)

Elements further off the diagonal can be obtained by the back substitution algorithm. Using (2.23) with $b = a + 1$:

$\big(U^{-1}\big)_{aa}\, U_{a,a+1} + \big(U^{-1}\big)_{a,a+1}\, U_{a+1,a+1} = 0$ (2.25)

which together with (2.24) gives:

$\big(U^{-1}\big)_{a,a+1} = -\frac{U_{a,a+1}}{U_{aa}\, U_{a+1,a+1}}$ (2.26)

By repeating the same process for the rest of the elements of $U^{-1}$, the relation (2.22) for $a < b$ is obtained.
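Formulas (2.21)-(2.22) likewise give a direct algorithm; a minimal sketch for a truncated matrix:

```python
import numpy as np

def triu_inverse(U):
    """Inverse of a (truncated) upper triangular matrix via (2.21)-(2.22)."""
    n = U.shape[0]
    V = np.zeros((n, n), dtype=complex)
    for a in range(n):
        V[a, a] = 1.0 / U[a, a]                           # (2.21)
    for b in range(n):                                    # column by column
        for a in range(b - 1, -1, -1):
            V[a, b] = -(V[a, a:b] @ U[a:b, b]) / U[b, b]  # (2.22)
    return V

U = np.triu(np.random.rand(5, 5)) + np.eye(5)        # well-conditioned test case
print(np.allclose(triu_inverse(U) @ U, np.eye(5)))   # True
```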
The powers of the transition matrix can be calculated by diagonalizing $T$:

$T^i = P D^i P^{-1}$ (2.27)

where $D$ denotes the infinite matrix with the eigenvalues of $T$ as diagonal entries, while $P$ is the infinite modal matrix composed of the eigenvectors of $T$:

$P_{ba} = v^{(a)}_b, \qquad D_{ab} = \lambda_a \delta_{ab}$ (2.28)
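The decomposition (2.27)-(2.28) can be verified numerically on a truncated transition matrix; the sketch below assumes the helper functions carleman_matrix, eigenvector and triu_inverse from the earlier snippets, with a hypothetical choice $b_1 = 0.5$:

```python
import numpy as np

T = carleman_matrix(np.array([0.0, 0.5, 0.3]), 6)   # b_0 = 0, |b_1| != 0, 1
P = np.column_stack([eigenvector(T, a) for a in range(6)])  # modal matrix (2.28)
D = np.diag(np.diag(T))                             # eigenvalues b_1^a

i = 7
lhs = np.linalg.matrix_power(T, i)
rhs = P @ np.linalg.matrix_power(D, i) @ triu_inverse(P)    # (2.27)
print(np.allclose(lhs, rhs))                        # True
```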
We now prove the main theorem of this section:
Theorem 2.6. Let $\{x_m\}_{m=0}^{\infty}$ be a sequence of numbers satisfying the recurrence relation (2.1). Let $F$ be polynomial in $x_m$ such that the corresponding polynomial equation:

$\sum_{j=0}^{N} a_j d^j - d = 0$ (2.29)

has a root $d$ satisfying:

$0 < \Big|\sum_{j=1}^{N} j a_j d^{j-1}\Big| \neq 1$ (2.30)

Then, the solution to the recurrence (2.1) is given by:

$x_i = d + \sum_{b=1}^{\infty} \Big(\sum_{a=1}^{b} P_{1a}\, b_1^{ai}\, \big(P^{-1}\big)_{ab}\Big) (x_0 - d)^b$ (2.31)

where:

$b_j = \sum_{l=j}^{N} \binom{l}{j} a_l d^{l-j} - d\,\delta_{j0}$ (2.32)

and the matrices $P$, $P^{-1}$ are obtained from the transition matrix of the shifted recurrence via Lemmas 2.4 and 2.5.

Proof. Because Equation (2.29) is equivalent to requiring $b_0 = 0$ ($b_0$ is the free term of the polynomial obtained after the shift of variables by $d$), the variables $y_m = x_m - d$ are the ones in which the free term vanishes. In the same way, from (2.9) we have:

$b_1 = \sum_{j=1}^{N} j a_j d^{j-1}$ (2.33)

Therefore condition (2.30) expresses the fact that all of the powers of $b_1$ have different values.

The transition matrix $T$ is (2.3):

$T_{ab} = \sum_{\substack{k_1+\cdots+k_N=a \\ k_1+2k_2+\cdots+Nk_N=b}} \binom{a}{k_1, \ldots, k_N} \prod_{j=1}^{N} b_j^{k_j}$ (2.34)

By Lemma 2.2, $b_0 = 0$ and the infinite transition matrix is upper triangular, while (2.30) implies that $T$ is diagonalizable. Furthermore, by Lemma 2.4 the $a$-th eigenvector has at most $a+1$ non-zero elements, and the (infinite) matrix $P$ can be chosen to be upper triangular (the inverse of $P$ is then upper triangular as well, Lemma 2.5). The powers of the transition matrix $T$ can be obtained from (2.27). Hence, using (1.8), (2.34) and Lemma 2.4:

$y_i = \big(T^i v_0\big)_1 = \big(P D^i P^{-1} v_0\big)_1$ (2.35)

where $(P^{-1})_{ab}$ denote the elements of the inverse of $P$ that can be calculated using Lemma 2.5, while the formula for $P_{ba} = v^{(a)}_b$ is given in (2.16). The $i$-th power of the diagonal matrix is just a matrix with the $i$-th power of its elements on the diagonal. Carrying out the matrix multiplication, we obtain:

$y_i = \sum_{b=1}^{\infty} \Big(\sum_{a=1}^{b} P_{1a}\, b_1^{ai}\, \big(P^{-1}\big)_{ab}\Big) y_0^b$ (2.36)

By retrieving the original sequence $x_i = y_i + d$ and substituting back the original coefficients $a_j$ for the shifted parameters $b_j$, we arrive at the result.
Theorem 2.6 allows for solving a large family of nonlinear recurrences. It is apparent from (2.31) that the general solution has the qualitative form:

$x_i = \sum_{b=0}^{\infty} c_b(i)\, x_0^b$ (2.37)

where the $c_b(i)$ are some (in general complex) functions of the number of iterations $i$. Therefore, Theorem 2.6 can be used to determine the form of the functions $c_b(i)$. As an example, the results are applied to the well-known logistic mapping:
Example 2.7. The logistic map is a recurrence relation of the form [26]:

$x_{m+1} = r x_m (1 - x_m)$ (2.38)

with the coefficient $r \in \mathbb{C}$. Conditions (2.29) and (2.30) are trivially fulfilled for $d = 0$ and $0 < |r| \neq 1$. We obtain:

$b_0 = 0, \qquad b_1 = r, \qquad b_2 = -r$ (2.39)

Thus, by Theorem 2.6 the solution takes the form:

(2.40)

It can be seen that the "coefficient functions" $c_b(i)$ in front of the powers of the initial value $x_0$ become progressively more complicated as the order of $x_0^b$ increases.
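The closed form can be cross-checked against direct iteration; a minimal numerical sketch, assuming the hypothetical parameter $r = 0.5$ (so that $0 < |r| \neq 1$) and the carleman_matrix helper from the earlier sketch:

```python
import numpy as np

r, size, i = 0.5, 40, 10
T = carleman_matrix(np.array([0.0, r, -r]), size)   # (2.38): a_1 = r, a_2 = -r

x0 = 0.2
v0 = x0 ** np.arange(size)                          # truncated vector (1.7)
xi = (np.linalg.matrix_power(T, i) @ v0)[1]         # Carleman prediction of x_i

x = x0
for _ in range(i):                                  # direct iteration of (2.38)
    x = r * x * (1 - x)
print(xi, x)                                        # the two values agree closely
```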
3. Multi-Variable Recurrences of Depth One
The treatment of polynomial multi-variable recurrences of depth one follows the same structure as Section 2. With appropriately chosen auxiliary vectors, the recurrence can be Carleman-linearized and represented by a transition matrix. Let:

$x_m = \big(x^{(1)}_m, x^{(2)}_m, \ldots, x^{(k)}_m\big)^T$ (3.1)

be a vector composed of the $k$ variables $x^{(p)}_m$. Then, a general polynomial recurrence of depth one in $k$ variables can be compactly written in the form:

$x^{(p)}_{m+1} = A^{(0)}_p + \sum_{j=1}^{N} \sum_{q_1, \ldots, q_j = 1}^{k} A^{(j)}_{p q_1 \cdots q_j}\, x^{(q_1)}_m \cdots x^{(q_j)}_m$ (3.2)

where the symbols $A^{(j)}$ represent $j+1$ dimensional arrays. It is apparent that the description in terms of the $A^{(j)}$ is somewhat redundant: every permutation of the lower right indices $q_1, \ldots, q_j$ multiplies the same combination of powers of the variables $x^{(p)}_m$. For convenience, we choose $A^{(j)}_{p q_1 \cdots q_j}$ to vanish unless the lower right indices are in non-decreasing order, i.e. $A^{(j)}_{p q_1 \cdots q_j} = 0$ if $q_l > q_{l+1}$ for some $l$. As a result, the symbol $A^{(j)}$ has only $k \binom{k+j-1}{j}$ independent components, when all of the lower indices are taken into account [27].
Theorem 3.1. Let the polynomial multi-variable depth-one recurrence be defined as in (3.2), and let the Kronecker powers of $x_m$ be defined as in [13]:

$x_m^{[j]} = \underbrace{x_m \otimes x_m \otimes \cdots \otimes x_m}_{j \text{ times}}, \qquad x_m^{[0]} = 1$ (3.3)

Then, the Carleman linearization of the system can be defined on the infinite auxiliary vectors:

$v_m = \big(x_m^{[0]}, x_m^{[1]}, x_m^{[2]}, \ldots\big)^T$ (3.4)

and the solution to the recurrence (3.2) can be written as:

$v_m = M^m v_0$ (3.5)

where $v_0$ is defined by the initial conditions and the infinite-dimensional transition matrix is composed of blocks:

$M_{ab} = \sum_{j_1 + \cdots + j_a = b} A^{(j_1)} \otimes A^{(j_2)} \otimes \cdots \otimes A^{(j_a)}$ (3.6)

The Kronecker product in (3.6) acts on the subscripts only.
Proof. Since $v_m$ is composed of consecutive Kronecker powers of $x_m$, any polynomial in the $k$ variables can be rewritten as a linear combination of the elements of $v_m$. The recurrence is defined solely by the arrays $A^{(j)}$, therefore the transition matrix consists of the numbers appearing in the $A^{(j)}$. The elements of the vector $x_m^{[a]}$ can be identified with products of elements of $x_m$. From the definitions (3.3), (3.4) and the properties of the Kronecker product we have:

$\big(x_m^{[a]}\big)_{q_1 q_2 \cdots q_a} = x^{(q_1)}_m x^{(q_2)}_m \cdots x^{(q_a)}_m$ (3.7)

The product of the elements of $x_{m+1}$ can be expressed in terms of the elements of $x_m$:

$x^{(q_1)}_{m+1} \cdots x^{(q_a)}_{m+1} = \prod_{l=1}^{a} \bigg( A^{(0)}_{q_l} + \sum_{j=1}^{N} \sum_{p_1, \ldots, p_j} A^{(j)}_{q_l p_1 \cdots p_j}\, x^{(p_1)}_m \cdots x^{(p_j)}_m \bigg)$ (3.8)

Analogously to (1.9), the infinite transition matrix can be defined as:

(3.9)

The constants appearing in (3.9) do not depend on $x_m$ and are defined as a product:

(3.10)

where the first factor is the number of times a given product of variables appears in $x_m^{[a]}$, while the second is the number of times the corresponding element of the arrays $A^{(j)}$ appears in the expansion of (3.8). Equivalently, the matrix elements can be written in the matrix form:

(3.11)

By replacing the groups of elements from (3.11) with the block-matrix notation of Kronecker powers of the arrays $A^{(j)}$, we arrive at (3.6).
For $k = 1$, expression (3.6) reduces to (2.2). Furthermore, the analogue of Theorem 2.1 can be proven. If $A^{(0)}_p = 0$ for all $p$, then all of the Kronecker products involving $A^{(0)}$ vanish. Since all of the block matrices below the diagonal in (3.6) include $A^{(0)}$ in the Kronecker product, the lower part of the block transition matrix vanishes and the matrix becomes block upper triangular. The block matrices on the diagonal of $M$ are the Kronecker powers of the array $A^{(1)}$ standing near the linear term. The Kronecker powers of $A^{(1)}$ are defined as in [13]:

$\big(A^{(1)}\big)^{[a]} = \underbrace{A^{(1)} \otimes A^{(1)} \otimes \cdots \otimes A^{(1)}}_{a \text{ times}}$ (3.12)

Therefore, the transition matrix can be brought to the upper triangular form if $A^{(0)} = 0$ and the matrix $A^{(1)}$ is upper triangular. Even though general recurrences do not fulfill those conditions, the variables can (in some cases) be transformed in such a way that the new transition matrix is upper triangular.
Lemma 3.2. Let the recurrence on $x_m$ be defined as in (3.2). There exists a linear transformation of the variables $x_m$ such that the transition matrix $M'$ corresponding to the new sequence is upper triangular if and only if there exists a $k$-dimensional vector $B$ satisfying:

$B_p = A^{(0)}_p + \sum_{j=1}^{N} \sum_{q_1, \ldots, q_j} A^{(j)}_{p q_1 \cdots q_j} B_{q_1} \cdots B_{q_j}$ (3.13)

Proof. The general (invertible) linear transformation of the $k$ variables can be written in the matrix form:

$x'_m = A (x_m - B)$ (3.14)

with $\det A \neq 0$. The new recurrence defined on the primed variables $x'_m$ satisfies:

$x'^{(p)}_{m+1} = A'^{(0)}_p + \sum_{j=1}^{N} \sum_{q_1, \ldots, q_j} A'^{(j)}_{p q_1 \cdots q_j}\, x'^{(q_1)}_m \cdots x'^{(q_j)}_m$ (3.15)

where:

(3.16)

The transformed arrays in (3.16) involve the symmetrized array, equal to $A^{(j)}$ averaged over all of the permutations of the right subscripts.

It is trivial that a composition of linear transformations is a linear transformation. We first perform a shift of the variables by the vector $B$. From (3.16) we have:

$A'^{(0)}_p = A^{(0)}_p + \sum_{j=1}^{N} \sum_{q_1, \ldots, q_j} A^{(j)}_{p q_1 \cdots q_j} B_{q_1} \cdots B_{q_j} - B_p$ (3.17)

Setting $A'^{(0)}_p = 0$, the above equation reduces to the set of coupled polynomial equations (3.13). Unfortunately, the resulting system of equations is not always solvable. Assuming that (3.17) has a solution, the other arrays transform according to (3.16).

Subsequently, a linear transformation is performed with the matrix $A$ alone. The constant term still vanishes, i.e. $A'^{(0)} = 0$ regardless of the choice of $A$, while the linear term transforms as:

$A'^{(1)} \mapsto A A'^{(1)} A^{-1}$ (3.18)

There always exists a suitable choice of the $k \times k$ dimensional matrix $A$ such that the linear term becomes upper triangular in its lower indices [13], and thus the transition matrix can be brought to an upper triangular form provided (3.13) has a solution.
If the shift vector $B$ satisfying (3.13) exists, it is not necessarily unique. The situation is analogous to the one found for the uni-variable recurrence. Different choices of $B$ satisfying (3.13) will lead to transition matrices with different eigenvalues. Therefore, it is suitable to choose $B$ such that the diagonalizability conditions stated in Lemma 3.3 are satisfied.

It is worth noting that even though the arrays $A^{(j)}$ vanish unless the lower right indices are in non-decreasing order, the same may not be true for the transformed arrays $A'^{(j)}$. Therefore, after transforming the coefficient arrays with the linear transformation (3.14), it is necessary to re-impose this convention on the transformed coefficients, so that they also satisfy $A'^{(j)}_{p q_1 \cdots q_j} = 0$ if $q_l > q_{l+1}$ for some $l$.
Since the eigenvalues of the Kronecker product of two matrices are the products of the eigenvalues of the two matrices, we have:

$\operatorname{spec}\Big(\big(A^{(1)}\big)^{[a]}\Big) = \Big(\operatorname{spec} A^{(1)}\Big)^{a}$ (3.19)

where the right-hand side represents the Cartesian power of the set of eigenvalues of $A^{(1)}$ [24]. As a result, the general eigenvalue is of the form:

$\lambda = \lambda_1^{n_1} \lambda_2^{n_2} \cdots \lambda_k^{n_k}$ (3.20)

for some numbers $n_p \in \mathbb{N}$ with $n_1 + \cdots + n_k = a$. In the same way as in (3.7), the $l$-th eigenvalue of the transition matrix can be written as:

(3.21)

The matrix is diagonalizable if all of its eigenvalues are different, thus a sufficient condition for the existence of the diagonal form can be constructed.
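The Kronecker eigenvalue property behind (3.19)-(3.20) is easy to confirm numerically; a minimal sketch with a random $3 \times 3$ matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3))
lam = np.linalg.eigvals(A1)

# Eigenvalues of A1 (x) A1 are the pairwise products of eigenvalues, cf. (3.19).
kron_eigs = np.sort_complex(np.linalg.eigvals(np.kron(A1, A1)))
products = np.sort_complex(np.array([l1 * l2 for l1 in lam for l2 in lam]))
print(np.allclose(kron_eigs, products))   # True
```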
Although this is true, a problem arises due to the redundancy of the description in terms of the auxiliary vectors (3.4). Since the $v_m$ are composed of Kronecker powers of $x_m$, there are components that are redundant, i.e. that represent the same product of the variables $x^{(p)}_m$. For example, if $k = 2$, the infinite auxiliary vectors are:

$v_m = \big(1,\; x^{(1)}_m,\; x^{(2)}_m,\; x^{(1)}_m x^{(1)}_m,\; x^{(1)}_m x^{(2)}_m,\; x^{(2)}_m x^{(1)}_m,\; x^{(2)}_m x^{(2)}_m,\; \ldots\big)^T$ (3.22)

and it is apparent that the 4-th and 5-th components are the same.
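The redundancy in (3.22) can be made explicit by listing which product of variables each component represents; a minimal sketch for $k = 2$, up to the second Kronecker power:

```python
import itertools

# Each component of (3.22) is labelled by the tuple of variable indices
# it represents; () stands for the constant 1.
k = 2
v = [()]
for j in (1, 2):
    v += list(itertools.product(range(1, k + 1), repeat=j))
print(v)
# [(), (1,), (2,), (1, 1), (1, 2), (2, 1), (2, 2)]
# components 4 and 5 both represent x^(1) x^(2): the redundancy of (3.22)

# The non-redundant vector keeps only non-decreasing index tuples
# (the "tilde" quantities introduced below):
v_tilde = [t for t in v if all(t[i] <= t[i + 1] for i in range(len(t) - 1))]
print(v_tilde)    # [(), (1,), (2,), (1, 1), (1, 2), (2, 2)]
```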
Due to the redundancy, repeating copies of the same eigenvalues appear in the upper triangular form of the transition matrix (3.6). Lemmas 2.4 and 2.5 are applicable only if the eigenvalues are non-degenerate, hence the Lemmas cannot be used directly to diagonalize the transition matrix. Nevertheless, the special choice of the arrays $A^{(j)}$ allows one to "reduce" the transition matrix and get rid of the undesired copies.

The redundant terms arise from the permutations of the lower right indices of the arrays $A^{(j)}$. By definition (the beginning of Section 3) the arrays vanish for a decreasing sequence of lower indices. All of the columns in $M$ associated with the copies of the same power of variables vanish except for the diagonal element. For example, since the 4-th and 5-th elements of the auxiliary vectors (3.22) are the same, all of the elements $M_{a5}$ with $a \neq 5$ vanish. This observation allows one to get rid of the redundant columns (and rows) of $M$, hence making the calculation of the powers of the transition matrix feasible. In other words, the redundancy has to be taken into account when assessing the diagonalizability of $M$.
In order to differentiate between quantities related to the redundant (original) and the non-redundant system, tilde notation is introduced. The quantities related to the system without redundancy (vectors without redundant components and matrices without redundant rows and columns) are denoted with a tilde above the symbol. For example, the equivalent of the auxiliary vector (3.22) for the non-redundant Carleman embedding is:

$\tilde{v}_m = \big(1,\; x^{(1)}_m,\; x^{(2)}_m,\; \big(x^{(1)}_m\big)^2,\; x^{(1)}_m x^{(2)}_m,\; \big(x^{(2)}_m\big)^2,\; \ldots\big)^T$ (3.23)

Similarly, the transition matrices for $v_m$ and $\tilde{v}_m$ are denoted by $M$ and $\tilde{M}$ respectively. Although it is the non-redundant Carleman embedding defined by $\tilde{v}_m$ and $\tilde{M}$ that is eventually diagonalized, it is hard to express it using a concise formula. Equivalents of expression (3.4) for $\tilde{v}_m$ and (3.6) for $\tilde{M}$ have not been found. As opposed to the redundant system, the transition matrix $\tilde{M}$ cannot be simply expressed in terms of Kronecker products. On the other hand, it is much easier to calculate powers of $\tilde{M}$ rather than $M$. The following can be said about the diagonalizability of the "non-redundant" transition matrix:
Lemma 3.3. A sufficient condition for the infinite upper triangular transition matrix $\tilde{M}$ to be diagonalizable is for the corresponding $k \times k$ dimensional matrix $A^{(1)}$ to have eigenvalues $\lambda_1, \ldots, \lambda_k$ such that the equation:

$\lambda_1^{n_1} \lambda_2^{n_2} \cdots \lambda_k^{n_k} = \lambda_1^{n'_1} \lambda_2^{n'_2} \cdots \lambda_k^{n'_k}$ (3.24)

has no (non-trivial) solution $n_p, n'_p \in \mathbb{N}$ with $(n_1, \ldots, n_k) \neq (n'_1, \ldots, n'_k)$. In the case of two variables ($k = 2$), this can be further simplified to:

(3.25)

Proof. As stated before, the redundancy in the auxiliary vectors (3.4) implies that there will always be an eigenvalue with algebraic multiplicity of at least two. However, since all of the elements of the corresponding column except the diagonal one vanish, a sufficient condition for diagonalization can still be formulated based on the tilded transition matrix $\tilde{M}$.

The condition for $\tilde{M}$ to have different eigenvalues can be written as:

$\lambda_1^{n_1} \cdots \lambda_k^{n_k} \neq \lambda_1^{n'_1} \cdots \lambda_k^{n'_k} \quad \text{for } (n_1, \ldots, n_k) \neq (n'_1, \ldots, n'_k)$ (3.26)

Dividing by the primed side:

$\lambda_1^{n_1 - n'_1} \cdots \lambda_k^{n_k - n'_k} \neq 1$ (3.27)

Therefore, the condition is satisfied if and only if the only solution of $\lambda_1^{m_1} \cdots \lambda_k^{m_k} = 1$ with $m_p = n_p - n'_p \in \mathbb{Z}$ is $m_p = 0$ for all $p$. By rewriting the eigenvalues in the polar form $\lambda_p = r_p e^{i\varphi_p}$:

$r_1^{m_1} \cdots r_k^{m_k}\, e^{i(m_1 \varphi_1 + \cdots + m_k \varphi_k)} = 1$ (3.28)

and taking the logarithm of both sides of (3.28), we obtain a set of linear equations:

$m_1 \ln r_1 + \cdots + m_k \ln r_k = 0, \qquad m_1 \varphi_1 + \cdots + m_k \varphi_k = 2\pi l, \quad l \in \mathbb{Z}$ (3.29)

There always exists the trivial solution $m_1 = \cdots = m_k = l = 0$.

In the case $k = 2$, the equations reduce to:

$m_1 \ln r_1 + m_2 \ln r_2 = 0, \qquad m_1 \varphi_1 + m_2 \varphi_2 = 2\pi l$ (3.30)

The upper equation has non-trivial integer solutions if and only if the ratio $\ln r_1 / \ln r_2$ is rational. Substituting the result into the lower equation, we conclude that the above system of equations does not have a non-trivial integer solution if:

(3.31)
If the transition matrix $\tilde{M}$ can be brought into an upper triangular form, Lemmas 2.4 and 2.5 can be used to diagonalize it. This allows one to give an explicit expression for the powers of $\tilde{M}$, thus solving the recurrence. We now prove the main theorem of the article:
Theorem 3.4. Let $\{x^{(1)}_m\}, \ldots, \{x^{(k)}_m\}$ be $k$ sequences of numbers satisfying the recurrence relation (3.2), and let $A$ and $B$ denote the parameters of a linear transformation that brings the transition matrix to an upper triangular form, i.e. a $k \times k$ matrix $A$ and a $k$-dimensional vector $B$ such that:

$B_p = A^{(0)}_p + \sum_{j=1}^{N} \sum_{q_1, \ldots, q_j} A^{(j)}_{p q_1 \cdots q_j} B_{q_1} \cdots B_{q_j}$ (3.32)

Furthermore, let the $k \times k$ array $A'^{(1)}$ be upper triangular, where:

$A'^{(1)} = A J A^{-1}, \qquad J_{pq} = \frac{\partial F_p}{\partial x^{(q)}}\bigg|_{x = B}$ (3.33)

Let the eigenvalues $\lambda_1, \ldots, \lambda_k$ of the array $A'^{(1)}$ (by the eigenvalues of the array we understand the eigenvalues of the corresponding matrix created by considering only the two lower indices) be such that the equation:

$\lambda_1^{n_1} \cdots \lambda_k^{n_k} = \lambda_1^{n'_1} \cdots \lambda_k^{n'_k}$ (3.34)

has no non-trivial solutions $n_p, n'_p \in \mathbb{N}$. Then the solution to the recurrence is given in terms of the initial condition $x_0$ by:

(3.35)

where:

(3.36)
Proof. Conditions (3.32) and (3.33) ensure that after the linear transformation $x'_m = A(x_m - B)$ the corresponding transition matrix $M'$ is upper triangular. Similarly, condition (3.34) is simply the sufficient condition for the diagonalization of $\tilde{M}'$ stated in Lemma 3.3.

Starting from Theorem 3.1, we have $v'_m = (M')^m v'_0$. However, as mentioned before, due to the redundancy of the description in terms of $v'_m$, the corresponding non-redundant pair $\tilde{v}'_m$, $\tilde{M}'$ is used instead. Furthermore, since there is no redundancy in the first $k+1$ elements of $\tilde{v}'_m$, the solution can be equivalently written as:

$\tilde{v}'_m = \tilde{P} \tilde{D}^m \tilde{P}^{-1} \tilde{v}'_0$ (3.37)

where the symbols with a tilde are related to the embedding free from the redundancy mentioned before. The prime refers to the quantities related to the recurrence obtained after the linear transformation of variables. Transforming (3.37) back to the original sequences:

(3.38)

Thus, we arrive at expression (3.35). The eigenvectors and the inverse of the infinite modal matrix $\tilde{P}$ are given in Lemmas 2.4 and 2.5.
The solutions to the recurrence (3.2) have the general form:

$x^{(p)}_i = \sum_{b_1, \ldots, b_k} c^{(p)}_{b_1 \cdots b_k}(i)\, \big(x^{(1)}_0\big)^{b_1} \cdots \big(x^{(k)}_0\big)^{b_k}$ (3.39)

which closely resembles Formula (2.37). A noticeable difference is that for the multi-variable recurrence, the solution takes the form of a multi-variable (instead of uni-variable) power series in the initial condition.
Example 3.5. As an example, we consider the simple recurrence defined by:

(3.40)

The arrays $A^{(j)}$ can be obtained by comparing (3.40) with (3.2):

(3.41)

In order to bring the transition matrix into the upper triangular form, the variables are transformed by a $2 \times 2$ matrix $A$:

(3.42)

to obtain the new recurrence:

(3.43)

with $B = 0$ and:

(3.44)

The eigenvalues satisfy conditions (3.34), therefore $\tilde{M}'$ can be diagonalized and the powers of the transition matrix can be calculated. The infinite transition matrix is:

(3.45)

The 5-th, 9-th and 11-th rows and columns are examples of the arising redundancy. Lemmas 2.4 and 2.5 can now be applied to the reduced transition matrix $\tilde{M}'$, which is obtained by eliminating all of the redundant rows and columns:

(3.46)

This can be further rewritten in the diagonalized form:

(3.47)

The matrices to the left and right of the diagonal matrix are $\tilde{P}$ and $\tilde{P}^{-1}$, respectively. The powers of $\tilde{M}'$ can now be easily calculated. Using (3.5), we finally arrive at:

(3.48)

The above solution to the primed recurrence (3.43) can now be transformed back to the original variables, therefore giving a rather complicated expression for the solution to the recurrence (3.40):

(3.49)

Comparing (3.49) with the general form of the solution (3.39), the coefficient functions can be obtained.
4. Notes on Multi-Variable Recurrences of Arbitrary Depth
Theorem 3.4 is also applicable to arbitrary-depth recursions. For any finite-depth recursion, auxiliary variables can be introduced and the original recurrence can be reduced to a multi-variable recurrence of depth one. The method closely follows the one used for finite-depth linear recurrences [4].

Let the depth-$n$ recurrence be of the form (1.1). The set of $nk$ variables $y^{(p,l)}_m$ (of which $(n-1)k$ are new auxiliary variables) is defined by:

$y^{(p,l)}_m = x^{(p)}_{m+l}, \qquad l = 0, 1, \ldots, n-1$ (4.1)

for $p = 1, \ldots, k$. Equation (1.1) now takes the form:

$y^{(p,l)}_{m+1} = y^{(p,l+1)}_m \quad (l < n-1), \qquad y^{(p,n-1)}_{m+1} = F_p\big(y^{(1,n-1)}_m, \ldots, y^{(k,n-1)}_m, \ldots, y^{(1,0)}_m, \ldots, y^{(k,0)}_m\big)$ (4.2)

which is a multi-variable depth-one recurrence, already discussed in Section 3.
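A minimal sketch of the reduction (4.1)-(4.2), assuming the hypothetical depth-two scalar recurrence $x_{m+2} = x_{m+1}^2 + x_m$:

```python
import numpy as np

# Reduced depth-one system in y_m = (y0, y1) = (x_m, x_{m+1}), cf. (4.1)-(4.2).
def step(y):
    y0, y1 = y
    return np.array([y1, y1**2 + y0])

x = [0.1, 0.2]                      # direct iteration of the depth-two form
for m in range(6):
    x.append(x[m + 1]**2 + x[m])

y = np.array([0.1, 0.2])            # iteration of the reduced system
for _ in range(6):
    y = step(y)
print(x[6], y[0])                   # the two trajectories coincide
```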
5. Further Remarks
In Section 2, an explicit formula for the solution of a wide range of uni-variable depth-one recurrences is given. Although the final expression is rather complicated, it can be used to derive the coefficient functions $c_b(i)$ in (2.37). Even if the recurrence does not fulfill the diagonalizability condition, there is a chance that it can be reformulated in terms of new variables. The transformation should be chosen such that the corresponding transition matrix has different eigenvalues and Lemmas 2.4 and 2.5 can be applied. The shift transformation is of particular importance; nevertheless, any transformation that leaves the recurrence polynomial could be used. Transformations by a higher-degree invertible polynomial are just one of many examples.
Although the method allows tackling many different polynomial recurrence relations, there are some notable shortcomings. The diagonalizability conditions in Sections 2 and 3 are only sufficient and not necessary for the transition matrix to be diagonalizable. Therefore, the method may not be applicable to all systems that are solvable by the diagonalization approach. In addition, transition matrices which are known to be non-diagonalizable are yet another group of cases where Theorems 2.6 and 3.4 cannot be used. Possibly, an analogous method could be formulated, where Jordanization is used to calculate the powers of the transition matrix instead of diagonalization.
Theorem 3.4 utilizes the non-redundant system (with $\tilde{v}_m$ and $\tilde{M}$) instead of the one defined in Theorem 3.1. This is dictated by the fact that the transition matrix defined for the auxiliary vectors (3.4) necessarily contains redundant copies of its eigenvalues. This may lead to difficulties with evaluating the expansion (3.39) for large powers of the initial condition, as only the general rule for constructing $\tilde{M}$ is given, as opposed to an explicit expression as in the case of $M$, i.e. (3.9).
It can be noted that Examples 2.7 and 3.5 belong to a family of recurrences for which no shift is needed. The logistic map automatically fulfills the assumptions with $d = 0$, while the recurrence relation (3.40) needs to be transformed only by the matrix $A$, with $B = 0$. Those examples were chosen on purpose, as the solution of the transformed series can be truncated and the calculations simplify significantly. In general, however, if one wants to obtain the exact solution to the original recurrence with a non-zero shift parameter, all of the terms of the solution corresponding to the transformed recurrence have to be kept, which often makes the calculations cumbersome.
Even though the Carleman linearization method does not provide a way to systematically approach the topic of polynomial recurrences, a large group of relations (1.1) fulfill the conditions and the method can be applied. Furthermore, it is straightforward to generalize the method to systems of recurrences (Theorem 3.4), as well as systems of arbitrary-depth recurrences, as discussed in Section 4.
Improvements to the Carleman linearization of polynomial recurrences may include research on the diagonalizability of the transition matrices, possibly resulting in less-restrictive diagonalizability conditions. In addition, polynomial transformations may play an important role in further extending the group of recurrences the method is applicable to.
Acknowledgements
The author is grateful to Daniel Gagliardi for valuable remarks, and to the anonymous referees for devoting their time and effort to the manuscript.