1. Introduction
We start with the following equation describing the discrete-time motion of a particle of mass $m$ in the presence of a random potential and a viscosity force proportional to the velocity:
$$m(V_n - V_{n-1}) = -\Lambda_n V_{n-1} + F_n.$$
Here the $d$-vector $V_n$ is the velocity at time $n$, the $d \times d$ matrix $\Lambda_n$ represents an anisotropic damping coefficient, and the $d$-vector $F_n$ is a random force applied at time $n$. The above equation is a discrete-time counterpart of the Langevin SDE [1,2]. Applications of the Langevin equation with a random non-Gaussian term are addressed, for instance, in [3,4]. Setting $A_n = I - m^{-1}\Lambda_n$ and $B_n = m^{-1}F_n$, we obtain:
$$V_n = A_n V_{n-1} + B_n, \qquad n \in \mathbb{N}. \qquad (1)$$
The random walk associated with this equation is given by
$$X_0 = 0, \qquad X_n = \sum_{k=1}^{n} V_k, \quad n \in \mathbb{N}. \qquad (2)$$
Similar models of random motion in dimension one, with i.i.d. forces, were considered in [5-8]; see also [9,10] and references therein. See, for instance, [11-14] for interesting examples of applications of Equation (1) with i.i.d. coefficients in various areas.
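The recursion and the associated walk can be illustrated numerically. The sketch below assumes that Equation (1) has the standard affine form $V_n = A_n V_{n-1} + B_n$ and that Equation (2) defines the walk $X_n = V_1 + \cdots + V_n$; it treats the scalar case $d = 1$, and the distributions chosen for $A_n$ and $B_n$ are illustrative placeholders, not the paper's.

```python
import random

# Minimal sketch (scalar case d = 1), assuming the affine form
# V_n = A_n * V_{n-1} + B_n of Equation (1) and the random walk
# X_n = V_1 + ... + V_n of Equation (2).  The distributions of
# A_n and B_n below are illustrative choices, not the paper's.

def simulate_walk(n_steps, v0=0.0, seed=0):
    rng = random.Random(seed)
    v, x = v0, 0.0
    for _ in range(n_steps):
        a = rng.uniform(0.1, 0.9)                  # damping factor A_n, |a| < 1
        b = (1.0 - rng.random()) ** (-1.0 / 1.5)   # heavy-tailed force B_n (Pareto, index 1.5)
        v = a * v + b                              # velocity update, Equation (1)
        x += v                                     # position update, Equation (2)
    return v, x

v, x = simulate_walk(10_000)
```

Because the damping factors are bounded below one, the velocity does not blow up, while the heavy tail of the force occasionally produces large jumps of the walk.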
In this paper we will assume that the coefficients $(A_n, B_n)_{n \in \mathbb{N}}$ are induced (in the sense of the following definition) by certain Gibbs states.
Definition 1. Coefficients $(A_n, B_n)_{n \in \mathbb{N}}$ are said to be induced by random variables $(x_n)_{n \in \mathbb{N}}$, each valued in a finite set $\mathcal{D}$, if there exists a sequence $(Y_n(i))_{n \in \mathbb{N},\, i \in \mathcal{D}}$ of independent random pairs which is independent of $(x_n)_{n \in \mathbb{N}}$ and is such that, for each fixed $i \in \mathcal{D}$, the pairs $(Y_n(i))_{n \in \mathbb{N}}$ are i.i.d. and $(A_n, B_n) = Y_n(x_n)$.
The randomness of $(A_n, B_n)$ is due to two factors:
1) The random environment $(x_n)_{n \in \mathbb{N}}$, which describes a "state of Nature"; and, given the realization of $(x_n)_{n \in \mathbb{N}}$,
2) The "intrinsic" randomness of the system's characteristics, which is captured by the random variables introduced in Definition 1.
Note that when $(x_n)_{n \in \mathbb{N}}$ is a finite Markov chain, $(A_n, B_n)_{n \in \mathbb{N}}$ is a Hidden Markov Model. See, for instance, [15] for a survey of HMMs and their applications. Heavy-tailed HMMs as random coefficients of multivariate linear time-series models have been considered, for instance, in [16,17]. In the context of financial time series, $x_n$ can be interpreted as an exogenous factor determined by the current state of the underlying economy. The environment changes due to seasonal effects, responses to the news, dynamics of the market, etc. When $x_n$ is a function of the state of a Markov chain, the stochastic difference Equation (1) is a formal analogue of the Langevin equation with regime switches, which was studied in [18]. The notion of regime shifts or regime switches traces back to [19,20], where it was proposed in order to explain the cyclical behavior of certain macroeconomic variables.
In this paper we consider environments $(x_n)_{n \in \mathbb{N}}$ that belong to the following class of random processes:
Definition 2 ([21]). A C-chain is a stationary random process $(x_n)_{n \in \mathbb{Z}}$ taking values in a finite set (alphabet) $\mathcal{D}$
such that the following holds:
i) For any $i \in \mathcal{D}$, $P(x_0 = i) > 0$.
ii) For any $i \in \mathcal{D}$ and any sequence $\bar{j} = (j_1, j_2, \ldots) \in \mathcal{D}^{\mathbb{N}}$, the following limit exists:
$$p(i \,|\, \bar{j}) = \lim_{n \to \infty} P(x_0 = i \,|\, x_{-1} = j_1, \ldots, x_{-n} = j_n),$$
where the right-hand side is a regular version of the conditional probabilities.
iii) Let
$$\gamma_n = \sup \left\{ \left| \frac{p(i \,|\, \bar{j})}{p(i \,|\, \bar{k})} - 1 \right| : i \in \mathcal{D},\ \bar{j}, \bar{k} \in \mathcal{D}^{\mathbb{N}},\ j_m = k_m \text{ for } 1 \le m \le n \right\}.$$
Then, $\gamma_n \to 0$ exponentially fast as $n \to \infty$.
C-chains form an important subclass of chains with complete connections/chains of infinite order [22-24]. They can be described as exponentially mixing full shifts, and alternatively defined as the essentially unique random process with a given transition function (g-measure) [25]. Stationary distributions of these processes are Gibbs states in the sense of Bowen [21,26]. For any C-chain there exists a Markovian representation [21,25], that is, a stationary irreducible Markov chain $(z_n)_{n \in \mathbb{Z}}$ in a countable state space $\mathcal{Z}$ and a function $\pi: \mathcal{Z} \to \mathcal{D}$ such that $(x_n)_{n \in \mathbb{Z}} \stackrel{\mathcal{D}}{=} (\pi(z_n))_{n \in \mathbb{Z}}$, where $\stackrel{\mathcal{D}}{=}$ means equivalence of distributions. Chains of infinite order are well suited for modeling long-range dependence with fading memory, and in this sense constitute a natural generalization of finite-state Markov chains [24,27-30].
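The idea of a chain with fading memory can be made concrete with a small sampler: the probability of the next symbol depends on the entire past, with the influence of the symbol $m$ steps back decaying geometrically. The kernel below is an illustrative choice (it is not the paper's construction, and no claim is made that it satisfies all conditions of Definition 2), but it keeps conditional probabilities bounded away from zero and lets old symbols matter less and less.

```python
import random

# Sketch of a binary chain of infinite order with exponentially fading
# memory: the chance of emitting symbol 1 depends on the whole past,
# with the influence of the symbol m steps back decaying like r**m.
# The kernel is an illustrative choice, not the paper's.

def sample_chain(n, r=0.5, seed=0):
    rng = random.Random(seed)
    past = []
    for _ in range(n):
        # weighted fraction of past ones; recent symbols weighted most
        weights = [r ** (len(past) - m) for m in range(len(past))]
        bias = (sum(w * s for w, s in zip(weights, past)) / sum(weights)) if past else 0.5
        p_one = 0.25 + 0.5 * bias   # conditional probabilities stay in [0.25, 0.75]
        past.append(1 if rng.random() < p_one else 0)
    return past

chain = sample_chain(2000)
```

Because the weights decay geometrically, two pasts that agree on a long recent stretch produce nearly the same conditional law, which is the qualitative content of the mixing condition iii).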
We will further assume that the vectors $B_n$ are multivariate regularly varying. Recall that a function $f: (0, \infty) \to (0, \infty)$ is said to be regularly varying of index $\alpha \in \mathbb{R}$ if $f(t) = t^{\alpha} L(t)$ for some function $L$
such that $\lim_{t \to \infty} L(\lambda t)/L(t) = 1$ for any positive real $\lambda$ (i.e., $L$ is a slowly varying function). Let $\overline{\mathbb{R}}^d_0 = [-\infty, +\infty]^d \setminus \{0\}$.
Definition 3 ([31]). A random vector $X \in \mathbb{R}^d$ is regularly varying with index $\alpha > 0$ if there exist a regularly varying function $f$ and a Radon measure $\mu$ in the space $\overline{\mathbb{R}}^d_0$ such that
$$t \, P\bigl( f(t)^{-1} X \in \cdot \,\bigr) \xrightarrow{v} \mu(\cdot)$$
as $t \to \infty$, where $\xrightarrow{v}$ denotes the vague convergence and $\mu$ is not identically zero.
We denote by $\mathcal{R}_d(\alpha, f)$ the set of all $d$-vectors regularly varying with index $\alpha$ associated with the function $f$.
The corresponding limiting measure $\mu$ is called the measure of regular variation associated with $X$.
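In dimension one, the index of regular variation can be estimated from data. The sketch below applies the classical Hill estimator, a standard tool not taken from this paper, to i.i.d. Pareto samples with known index $\alpha = 1.5$; the sample sizes and the choice of the number of order statistics are illustrative.

```python
import math
import random

# Sketch: empirical check of the index of regular variation via the
# classical Hill estimator, applied to i.i.d. Pareto samples with
# known index alpha = 1.5.  Sample sizes are illustrative.

def pareto_sample(n, alpha, seed=0):
    rng = random.Random(seed)
    # inverse-transform sampling: P(X > x) = x^{-alpha} for x >= 1
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def hill_estimate(xs, k):
    """Reciprocal of the mean log-spacing of the k largest order statistics."""
    xs = sorted(xs, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return k / sum(logs)

alpha_hat = hill_estimate(pareto_sample(20_000, alpha=1.5), k=1_000)
```

With $k = 1000$ upper order statistics out of $20{,}000$ samples, the estimate concentrates near the true index $1.5$, with fluctuations of order $\alpha/\sqrt{k}$.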
We next summarize our assumptions on the coefficients $A_n$ and $B_n$. Let $\|x\| = \max_{1 \le i \le d} |x_i|$ and $\|A\| = \max_{1 \le i, j \le d} |a_{ij}|$
for, respectively, a vector $x = (x_1, \ldots, x_d)$ and a matrix $A = (a_{ij})$.
Assumption 1. Let $(x_n)_{n \in \mathbb{N}}$ be a stationary C-chain defined on a finite state space $\mathcal{D}$, and suppose that $(A_n, B_n)_{n \in \mathbb{N}}$ is induced by $(x_n)_{n \in \mathbb{N}}$. Assume in addition that:
A1) where for
A2) The spectral radius is strictly between zero and one.
A3) There exist a constant $\alpha > 0$ and a regularly varying function $f$ such that $B_n \in \mathcal{R}_d(\alpha, f)$ for all $n \in \mathbb{N}$, with associated measure of regular variation $\mu$.
2. Statement of Results
For any (random) initial vector $V_0$, the sequence $V_n$ converges in distribution, as $n \to \infty$, to the random series
$$V = \sum_{n=1}^{\infty} A_1 A_2 \cdots A_{n-1} B_n,$$
which is distributed as the unique initial value making $(V_n)_{n \ge 0}$ into a stationary sequence [32]. The following result, whose proof is omitted, is a "Gibbsian" version of a "Markovian" [16, Theorem 1]. The claim can be established by following the line of argument in [16] nearly verbatim, exploiting the Markov representation of C-chains obtained in [21].
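The convergence of the series defining the stationary limit can be checked numerically: when the damping factors are bounded away from one, the products of coefficients shrink geometrically and the tail of the series becomes negligible. The scalar sketch below uses illustrative distributions, and the precise indexing of the series in the paper may differ.

```python
import random

# Sketch: the partial sums of the series
#   V = B_1 + A_1*B_2 + A_1*A_2*B_3 + ...
# stabilize because the running product of damping factors shrinks
# geometrically.  Scalar case with illustrative distributions.

def partial_sum(n_terms, seed=0):
    rng = random.Random(seed)
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        b = (1.0 - rng.random()) ** (-1.0 / 1.5)   # heavy-tailed term B_n
        total += prod * b
        prod *= rng.uniform(0.1, 0.9)              # damping factor A_n
    return total

tail = abs(partial_sum(400) - partial_sum(200))
```

With the same seed, the first 200 terms of both partial sums coincide, so `tail` measures only the contribution of terms 201-400, which is bounded by $0.9^{200}$ times a sum of heavy-tailed terms and is numerically negligible.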
Theorem 1. Let Assumption 1 hold. Then with associated measure of regular variation
where stands for and
In a slightly more general setting, the existence of the limiting velocity suggests the following law of large numbers, whose short proof is included in Section 3.1.
Theorem 2. Let Assumption 1 hold with A3) replaced by the condition $E(\|B_0\|) < \infty$. Then
$$\lim_{n \to \infty} \frac{X_n}{n} = E(V), \quad a.s.$$
Let $V^{(1)}, V^{(2)}, \ldots$ denote independent copies of $V$, and let $(c_n)_{n \in \mathbb{N}}$ be a sequence of vectors such that the sequence of processes
$$W_n(t) = \frac{1}{a_n} \left( \sum_{k=1}^{[nt]} V^{(k)} - [nt]\, c_n \right), \quad t \ge 0,$$
converges in law, as $n \to \infty$, in the Skorokhod space $D\bigl([0, \infty), \mathbb{R}^d\bigr)$
to a Lévy process $W = (W(t))_{t \ge 0}$
with stationary independent increments, $W(1)$ being distributed according to a stable law of index $\alpha$ whose domain of attraction includes the law of $V$. Here the normalizing constants $a_n$ are determined by the function $f$ and the index $\alpha$ introduced in A3). For an explicit form of the centering sequence $(c_n)$ and the characteristic function of $W$ see, for instance, [33] or [34]. Remark that one can set $c_n = 0$ if $\alpha \in (0, 1)$ and $c_n = E(V)$ if $\alpha \in (1, 2)$.
For each $n \in \mathbb{N}$, define a process $X_n(\cdot)$ in $D\bigl([0, \infty), \mathbb{R}^d\bigr)$ by setting
$$X_n(t) = \frac{1}{a_n} \bigl( X_{[nt]} - [nt]\, c_n \bigr), \quad t \ge 0. \qquad (3)$$
Theorem 3. Let Assumption 1 hold with $\alpha \in (0, 2)$. Then the sequence of processes $X_n(\cdot)$ converges weakly in $D\bigl([0, \infty), \mathbb{R}^d\bigr)$, as $n \to \infty$, to the Lévy process described above.
It follows from Definition 3 (see, for instance, [31])
that if $X \in \mathcal{R}_d(\alpha, f)$, then the following limit exists for any vector $y \in \mathbb{R}^d$:
$$K_y = \lim_{t \to \infty} \frac{P(y \cdot X > t)}{P(\|X\| > t)}. \qquad (4)$$
Theorem 4. Assume that the conditions of Theorem 3 hold. If $\alpha = 1$, assume in addition that the law of $B_0$ is symmetric for any state of the environment. Let $K_y$ be defined by Equation (4) with $X = V$. Then, for any $y$ such that $K_y > 0$, we have
a.s. (5)
In particular,
a.s.
If either Assumption 1 holds with $\alpha > 2$, or the second-moment condition $E(\|B_0\|^2) < \infty$
is assumed instead of A3), then, in view of Equation (6), a Gaussian counterpart of Theorem 3 can be obtained as a direct consequence of general CLTs for uniformly mixing sequences (see, for instance, [35, Theorem 20.1] and [36, Corollary 2]) applied to the sequence $(V_n)_{n \in \mathbb{N}}$. If the latter condition holds, then a law of the iterated logarithm in the usual form follows from Equation (5) and, for instance, [37, Theorem 5] applied to the sequence $(V_n)_{n \in \mathbb{N}}$.
We remark that, in the case of an i.i.d. additive component, results similar to ours are obtained in [7] for a mapping more general than Equation (1).
3. Proofs
3.1. Proof of Theorem 2
It follows from the definition of the random walk and Equation (1) that
(6)
Note that the moment condition of Theorem 2 implies $\sum_{n=1}^{\infty} P(\|B_n\| > \varepsilon n) < \infty$ for every $\varepsilon > 0$.
It then follows from the Borel-Cantelli lemma that
$$\lim_{n \to \infty} \frac{\|B_n\|}{n} = 0, \quad a.s.$$
Furthermore, we have
Thus the law of large numbers for $X_n$ follows from the ergodic theorem applied to the sequence $(V_n)_{n \in \mathbb{N}}$. □
3.2. Proof of Theorem 3
Only the second term on the right-most side of Equation (5) contributes to the asymptotic behavior of $X_n$. The proof rests on the application of Corollary 5.9 in [34] to the partial sums $\sum_{k=1}^{n} V_k$. In view of condition iii) in Definition 2 and the decomposition shown in Equation (6), we only need to verify that the following "local dependence" condition (which is condition (5.13) in [34]) holds for the sequence $(V_n)_{n \in \mathbb{N}}$:
The above convergence to zero follows from the mixing condition iii) in Definition 2 and the regular variation, as $t \to \infty$, of the marginal distribution tail.
□
3.3. Proof of Theorem 4
For let be the number of occurrences of in the set That is,
Define recursively and
(with the usual convention that the greatest lower bound over an empty set is equal to infinity). For let
where
Denote
Further, for each let if whereas if let
Then and hence
It follows from the decomposition given by Equation (6) along with the Borel-Cantelli lemma that for any
a.s.
Let Then
It follows, for instance, from Theorem 5 in [37] that if then for any the following limit exists and the identity holds with probability one:
Therefore (since the normalizing function is regularly varying with the same index), in order to complete the proof of Theorem 4 it suffices to show that, for any vector that satisfies the condition of the theorem, we have
a.s.
We first observe that, by the law of the iterated logarithm for heavy-tailed i.i.d. sequences (see Theorems 1.6.6 and 3.9.1 in [33]),
, a.s.
for any $\varepsilon > 0$ and $i \in \mathcal{D}$. Since, by the ergodic theorem,
a.s., this yields
, a.s., and hence
, a.s.
On the other hand, if Theorem 3.9.1 in [33] implies that for any and any such that we have
a.s.
To conclude the proof of the theorem it thus remains to show that for any any and all
(7)
where, for the events are defined as follows:
.
For let and define
.
Then
The Ruelle-Perron-Frobenius theorem (see [26]) implies that the sequence satisfies the large deviation principle (by the Gärtner-Ellis theorem), and hence
for some constants
and Furthermore, for any
and there exists a constant such that (see [33, p. 177]). Therefore, since we can choose such that with suitable
and A standard argument using the Borel-Cantelli lemma then implies the identity in Equation (7). □