The Table Auto-Regressive Moving-Average Model for (Categorical) Stationary Series: Mathematical Perspectives (Invertibility; Maximum Likelihood Estimation)

Abstract

Once invertibility for a causal TARMA series is defined and accompanied by conditions on the probability parameters of the model, the focus turns to the maximum likelihood estimators. Under the coexistence of the essential causality and invertibility, the estimators are shown to converge to the true parameter values and to follow asymptotically the Gaussian distribution, with a variance matrix that identifies with a classic result. Realistic examples are simulated, and the simplifications attempted include the derivation of the non-parametric chi-square test extension for stationary TAR series.


1. Introduction

Scientific progress, from defining a valid time series model to making it useful in practice, depends on the consolidation of inference results. With regard to non-linear stationary time series, the general TARMA model ([1]) has recently promised unprecedented flexibility: given the mild requirement to categorize the variables' values, its competence to express a conditional or joint probability dependence of any serial order is unquestionable. It will thus be the aim of this paper to bond the theory with the most deserving TARMA sample properties. Before that, a brief taste of other results and popular models in the field is offered.

The DARMA model undoubtedly triggered significant interest among scientists in discrete time series analysis: the reader may skim through [2] to get acquainted with its definition and some asymptotic properties. A recent fix to replication issues of that older model can be found in [3]. Being an additive model, though, it should be crystal clear that the simplicity and parsimony that it (and the pieces following it) contributed to discrete stationary series modelling are due to its inarguable inability to manage any better than the marginal and covariance dependence (which is not the case for the TARMA).

A valuable equivalence has been established in the past between the analysis of Bernoulli variables and strictly stationary series obeying any law ([4]): this saves the trouble of generalizing the Gaussian ARMA, as in [5], to a family of infinitely divisible distributions. Nevertheless, most of the derivations concern Gaussian stationarity, such as [6], which deals with the alternative (via 0-1 data) estimation of autoregressive parameterizations: binary data have traditionally benefited from a rich bibliography anyway.

An advance towards maximum likelihood estimation, together with some solid statements, can be found in [7], but that dense and short paper uses an approximation of fixed Markovian or auto-regressive order; the inclusion of a moving-average part only presents itself in the Gaussian ARMA(1, 1) illustration, yet it is exactly the conversion of the AR to the ARMA that makes the problem challenging, even more so when the distribution is not Gaussian. In the department of optimal estimator properties, it is worth reading [8] to view the well-known ARMA efficiency result.

Notable attempts have taken place for count series as well; besides the usual modelling, [9] reproduced standard discrete-time renewal process inference results. On the other hand, there are the INARMA models; for example, the asymptotic behaviour of the INAR(1) (with random coefficient) has been studied extensively by [10]. A recent review on the subject of count time series is [11]. One can always turn a count variable into a categorical one, which encourages the reader to elect the TARMA model as superior for the series' stationarity. For bilinear models (and 'any'-valued variables) there has been a plethora of inference results too.

Though a dividing wall is often raised between the time series and the Markov chain schemes, the causal TAR model embodies the homogeneous Markov dependence attached to a unique stationary distribution. Hence, [12] were the original contributors for the sake of Markov chain inference. Later, [13] restricted the derivations to a basic two-state chain dependence and managed explicit efficiency-related results, since (for two parameters only) the inverse of the relevant square matrix can be computed with ease.

In this paper, the main objective is to perform the TARMA parameter inference, filling the gap of estimation under infinite Markovian dependence: Section 2 summarizes the TARMA definition together with some raw statements concerning the use of the model parameters for a stationary series. Section 3 then newly defines the invertible TARMA series, and Section 3.1 devises a condition that safeguards invertibility, in parallel with the existing definition and condition for causality. Section 3.2 continues the invertibility topic, delivering results on the limit of probabilities conditioned on a finite past as it extends to the countably infinite past. All of Section 4 then establishes the two grand properties of the maximum likelihood estimators: the weak convergence to the parameters is in Section 4.1, while Section 4.2 uses numerous lemmas and a theorem to conclude with the asymptotic normality, as stated explicitly at the beginning of Section 6; there, the normally distributed estimator vector is transformed into a chi-square statistic to test a null hypothesis on the specified TARMA model fitting stationarity. The purely theoretical presentations are quickly recalled (Sections 2, 3), established (Sections 3, 4 and 6) or discussed (Section 6). In addition, the simulation Section 5 examines the performance of the TARMA estimators (and how to compute the estimates) for small sample sizes, and returns favorable conclusions as well.

2. Reminders

Previously in [1], a strictly stationary time series $\{X_t, t \in \mathbb{Z}\}$ was considered: the variables can be of any (bounded or unbounded) range, which, for the serial-stationarity clothing, must be assigned to $(k+1)$ categories (for fixed $k \in \mathbb{N}$) with conventional (or natural, if that is the case) category codes, say $0$ and $v_1, \ldots, v_k \neq 0$. The $X$-variables are built on a multivariate sequence of independent-in-time and identically distributed $((k+1)^{p+q} \times 1)$ random vectors: for each $\mathbf{i} = (i_1, \ldots, i_p)$, $\mathbf{j} = (j_1, \ldots, j_q)$, $i_1, \ldots, i_p, j_1, \ldots, j_q = 0, v_1, \ldots, v_k$, a univariable $I_t(\mathbf{i}|\mathbf{j})$ (with $I_t(\mathbf{0}_p|\mathbf{0}_q) \equiv I_t$) with the same range as $X$ is considered at time $t$, and marginally it has to hold that $\{I_t(\mathbf{i}|\mathbf{j})\} \sim \text{IID}$ with probabilities

$$\pi_x(\mathbf{i}|\mathbf{j}) = \mathbb{P}\big(I_t(\mathbf{i}|\mathbf{j}) = x\big) \in (0,1), \quad x = 0, v_1, \ldots, v_k, \qquad \sum_{x = 0, v_1, \ldots, v_k} \pi_x(\mathbf{i}|\mathbf{j}) = 1$$

(and $\pi_x(\mathbf{0}_p|\mathbf{0}_q) \equiv \pi_x$).

It is recalled that a multivariate IID time series is such that the random vectors are independent in time only, i.e., any two univariables indexed at different times are independent: the variables' dependence at the same time within the same vector will be referred to as interdependence, with interindependence being a special case; hence the $\pi_x(\mathbf{i}|\mathbf{j})$ above are marginal probabilities that do not necessarily suffice to determine the joint dependence at the same timing.

In the general setting, $\{I_t(\mathbf{i}|\mathbf{j}) = x\}$ will simplify to the event that the variable $I_t(\mathbf{i}|\mathbf{j})$ belongs to the category with code $x = 0, v_1, \ldots, v_k$. The Markov chain terminology of a state space, say $S := \{0, v_1, \ldots, v_k\}$ and $S^{\mathbb{N}}$, will be avoided deliberately. The definition introduced in [1] is recalled here.

For fixed $p, q \geq 0$ (such that $p + q \in \mathbb{N}$), $\{X_t, t \in \mathbb{Z}\}$ is a Table Auto-Regressive Moving-Average process of order $(k, p, q)$, and it is written $\{X_t\} \sim \text{TARMA}(k, p, q)$, if (it is causal and invertible and) it holds that

$$X_t := I_t\big((X_{t-1}, \ldots, X_{t-p}) \,\big|\, (I_{t-1}, \ldots, I_{t-q})\big), \quad \text{for every } t \in \mathbb{Z}. \tag{1}$$

In explanation, $X_t$ is set equal to one of the $I_t(\cdot|\cdot)$ (on the original range); nevertheless, which $I$ (out of the $(k+1)^{p+q}$) contributes its value depends on the previous $X_{t-1}, \ldots, X_{t-p}, I_{t-1}, \ldots, I_{t-q}$, i.e., on which category each one of these $(p+q)$ variable realizations falls into.

As an example of a TARMA(1, 1), the real-valued variables may be categorized into $k + 1 = 3$ groups, say $(-\infty, -3]$, $(-3, 3)$ and $[3, \infty)$ with codes "$-3$", "$0$" and "$3$", respectively; then the $(9 \times 1)$ random vectors $I_t(i|j)$, $i, j = -3, 0, 3$, are considered with fixed distribution for every $t \in \mathbb{Z}$, and then over time with $I_t(i|j)$ being independent of $I_{t^*}(i^*|j^*)$ for $t \neq t^*$. The $X_t$ is defined to be equal to $I_t(-3|-3)$ if both $X_{t-1}$ and $I_{t-1}(0|0)$ are less than or equal to $-3$, or equal to $I_t(0|-3)$ if $-3 < X_{t-1} < 3$ and $I_{t-1}(0|0) \leq -3$, or ..., or equal to $I_t(3|3)$ if both $X_{t-1}$ and $I_{t-1}(0|0)$ are greater than or equal to 3.

Hopefully, the example above clarifies that it is not necessarily a count series that is under study. This is an 'occurrence or non-occurrence' analysis of categorical variables and, for the serial evolution, it matters not whether $X_t$ is set equal to the exact value or to the relevant category code: it is a probability-of-occurrence analysis and it works subject to the categorization (together with $k$, of course).
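
For concreteness, a minimal simulation sketch of definition (1) in the simplest binary case ($k = 1$, $p = q = 1$, under interindependence) follows; the probability values are borrowed from the simulation study of Section 5, while the function and variable names are this sketch's own, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# pi[(i, j)] = P(I_t(i|j) = 1); values taken from Table 1 of Section 5.
pi = {(0, 0): 0.165, (1, 0): 0.152, (0, 1): 0.125, (1, 1): 0.108}

def simulate_tarma111(T, burn=200):
    """Generate a binary TARMA(1,1,1) path via equation (1): at each t the
    four indicators I_t(i|j) are drawn interindependently, and the category
    codes of X_{t-1} and I_{t-1} select which one becomes X_t."""
    x_prev, i_prev = 0, 0                 # arbitrary starting categories
    path = []
    for t in range(burn + T):
        draws = {key: int(rng.random() < p) for key, p in pi.items()}
        x_t = draws[(x_prev, i_prev)]     # X_t := I_t(X_{t-1} | I_{t-1})
        i_prev = draws[(0, 0)]            # I_t = I_t(0|0) enters next condition
        x_prev = x_t
        if t >= burn:
            path.append(x_t)
    return np.array(path)

print(simulate_tarma111(20))
```

The burn-in period is a practical device for letting the path forget the arbitrary initial categories; the paper's own generation scheme, via the causal representation, appears in Section 5.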

Then (1) can be re-arranged as

$$f_x(X_t) := \sum_{i_1, \ldots, i_p, j_1, \ldots, j_q = 0, v_1, \ldots, v_k} f_x\big(I_t(\mathbf{i}|\mathbf{j})\big) \prod_{l=1}^{p} f_{i_l}(X_{t-l}) \prod_{n=1}^{q} f_{j_n}(I_{t-n}), \quad \text{for } x = 0, v_1, \ldots, v_k, \tag{2}$$

where for $x, y = 0, v_1, \ldots, v_k$, the index functions

$$f_x(y) = \frac{\prod_{x^* = 0, v_1, \ldots, v_k,\ x^* \neq x} (y - x^*)}{\prod_{x^* = 0, v_1, \ldots, v_k,\ x^* \neq x} (x - x^*)} = \begin{cases} 1, & \text{if } y = x, \\ 0, & \text{if } y = 0, v_1, \ldots, v_k,\ y \neq x, \end{cases}$$

have been set.
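
As a side illustration, the Lagrange-product form of the index function is trivially coded; a hedged sketch (the function name and the default binary codes are this sketch's own):

```python
def f(x, y, codes=(0, 1)):
    """Index function f_x(y): 1 if y falls in the category with code x,
    0 if y equals any other category code (Lagrange-product form)."""
    num = den = 1.0
    for c in codes:
        if c != x:
            num *= (y - c)
            den *= (x - c)
    return num / den

assert f(1, 1) == 1.0 and f(1, 0) == 0.0
```

In practice a plain equality test does the same job; the product form matters only for the algebraic manipulations that follow.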

For (2) to hold for every $t \in \mathbb{Z}$, at the very least the assumption of uniqueness of a stationary distribution solution should be applicable; in fact, $\{X_t, t \in \mathbb{Z}\}$ should not be considered at all, unless it is a (table) causal process (based on $\{I_t\}$) according to the definition in [1]. The causality element will be revisited here together with the new element of invertibility.

Since the definition of this model has been the focus of past work, the reader is encouraged to look there for examples and form a clearer picture ([1]). The understanding and appreciation of the TARMA definition must have already taken place to proceed with this paper's inference-related goals. For the reader who wishes to derive the auto-covariance function of a TARMA series, it is highlighted that the second-moment stationarity is a special case of the all-moments stationarity: the work of [1] has handed over a methodology to compute all joint probabilities, and hence the auto-covariance function is a special case, as it demands only the joint probabilities of two variables. Not only are those derivations outside the scope of this paper, but also taming the all- rather than just second-order stationarity is the great TARMA contribution over other models.

From this point on, it will be taken for granted (and without a formal proof) that the TARMA construction may represent any strictly stationary time series, particularly under the convention, if there needs to be one, of tabulating the variables' values. Together with a solution to the identification issues (of $k$ and $p, q$), the TARMA theory is a contribution to the non-parametric version of statistical science. When $q = 0$ the dependence is $p$th-order Markovian, while for $q \geq 1$ a valid Markovian dependence of infinite order is achieved; when $p = 0$, a $q$-dependent strictly stationary series is concerned, and the joint dependence extends to infinity for $p \geq 1$. Hence it will be considered ([1]) that, subject to the categorization, the TARMA can be the all-moments stationary analogue of the ARMA model for covariance stationary series.

Some Basic Requirements

The probabilities $\pi$ will take on the role of parameters of the model, for which the estimation results will be established. The conditions in [1] are recalled:

(C1): $\{I_t(\mathbf{i}|\mathbf{j}), t \in \mathbb{Z}\}$ are jointly interindependent series, i.e., it holds that

$$\mathbb{P}\big(I_t(\mathbf{i}_1|\mathbf{j}_1) = x_1, \ldots, I_t(\mathbf{i}_n|\mathbf{j}_n) = x_n\big) = \prod_{m=1}^{n} \mathbb{P}\big(I_t(\mathbf{i}_m|\mathbf{j}_m) = x_m\big), \quad t \in \mathbb{Z},$$

for any $n = 2, \ldots, (k+1)^{p+q}$, $(\mathbf{i}_m, \mathbf{j}_m) = (i_{m,1}, \ldots, i_{m,p}, j_{m,1}, \ldots, j_{m,q})$ with $i_{m,1}, \ldots, i_{m,p}, j_{m,1}, \ldots, j_{m,q} = 0, v_1, \ldots, v_k$, $m = 1, \ldots, n$, $(\mathbf{i}_{m_1}, \mathbf{j}_{m_1}) \neq (\mathbf{i}_{m_2}, \mathbf{j}_{m_2})$ for $m_1, m_2 = 1, \ldots, n$ ($m_1 \neq m_2$), and $x_1, \ldots, x_n = 0, v_1, \ldots, v_k$.

Condition (C1) enforces the $I_t(\cdot|\cdot)$ variables (at the same time $t$) to be independent: otherwise, the form of their interdependence should accompany (1), in order to properly define a TARMA process. Under an interdependence scenario, the computation of all joint TARMA probabilities relies heavily on

$$\pi^*_{x|y}(\mathbf{i}|\mathbf{j}) := \mathbb{P}\big(I_t(\mathbf{i}|\mathbf{j}) = x \,\big|\, I_t = y\big), \quad x, y = 0, v_1, \ldots, v_k, \ (\mathbf{i}, \mathbf{j}) \neq \mathbf{0}_{p+q}$$

(as well as $\pi$) according to [1]. Hence in this case, the $\pi^*$ are also parameters for an apposite TARMA definition.

In the absence of (C1), an "$h$ to $h+1$" way to build up the interdependence may work by first considering $\pi_x$ and then gradually building, for $n = 1, \ldots, p+q$ and every $l_1, \ldots, l_n = 1, \ldots, p+q$ ($l_1 < \cdots < l_n$), $i_{l_1}, \ldots, i_{l_n} = v_1, \ldots, v_k$, the conditional probabilities $\mathbb{P}\big(I_t(0_{l_1 - 1}, i_{l_1}, \ldots, i_{l_n}, 0_{p+q-l_n}) = x_n \,\big|\, I_t = x, \ldots\big)$: the "$\ldots$" in the condition sets all $I_t(0_{l_1^* - 1}, i_{l_1^*}, \ldots, i_{l_{n^*}^*}, 0_{p+q-l_{n^*}^*})$, $n^* = 1, \ldots, n-1$, $l_1^*, \ldots, l_{n^*}^* = l_1, \ldots, l_n$ ($l_1^* < \cdots < l_{n^*}^*$). In that way, one gets a feel of how (C1) can be replaced, especially when other attributes need to be accomplished.

Other conditions in [1] are listed below:

(C2): $\{X_t, t \in \mathbb{Z}\}$ is not a deterministic process, i.e.,

$$\mathbb{P}(X_t = x \,|\, X_{t-1} = x_1, X_{t-2} = x_2, \ldots) \in (0, 1)$$

for all $x, x_n = 0, v_1, \ldots, v_k$, $n \in \mathbb{N}$.

(C3): $\{X_t, t \in \mathbb{Z}\}$ is not overparameterized, i.e., in the simplest of cases, it cannot be that $\pi_x(\mathbf{i}|\mathbf{j}) = \pi_x(\mathbf{i}'|\mathbf{j}')$, $x = 0, v_1, \ldots, v_k$, for $(\mathbf{i}, \mathbf{j}) \neq (\mathbf{i}', \mathbf{j}')$.

It would be misleading to say that the process $\{X_t\}$ is of order $(p, q)$ with $(k+1)$ categories when in fact fewer parameters are active (due to identical distributions), so this is safeguarded by (C3): the reader may reflect on the extreme example where all distributions are identical and $\{X_t\}$ is in fact an IID process! Nevertheless, the usual (opposite) notion that the different $I$-distributions should be as alike as possible will be fundamental for the process validity according to the other conditions (causality/invertibility), as in the next section.

Finally, it is stressed that the assumption of a convoluted multivariate IID time series, which is attached to the definition of the TARMA, is the necessary building block for the inference that will be presented in this paper, in the same way that the univariate IID error series is for the well-known ARMA.

3. Invertibility

In [1], the process $\{X_t, t \in \mathbb{Z}\}$ was called table causal (based on $\{I_t, t \in \mathbb{Z}\}$), which also implies that it is strictly stationary. After setting

$$\begin{aligned} d^h f_x\big(I_t(0_{l_1-1}, i_{l_1}, 0_{l_2-l_1-1}, i_{l_2}, \ldots, i_{l_h}, 0_{p+q-l_h})\big) :={} & f_x\big(I_t(0_{l_1-1}, i_{l_1}, 0_{l_2-l_1-1}, i_{l_2}, \ldots, i_{l_h}, 0_{p+q-l_h})\big) \\ & - \sum_{n=1}^{h} f_x\big(I_t(0_{l_1-1}, i_{l_1}, \ldots, i_{l_{n-1}}, 0_{l_{n+1}-l_{n-1}-1}, i_{l_{n+1}}, \ldots, i_{l_h}, 0_{p+q-l_h})\big) + \cdots \\ & + (-1)^{h-1} \sum_{n=1}^{h} f_x\big(I_t(0_{l_n-1}, i_{l_n}, 0_{p+q-l_n})\big) + (-1)^h f_x(I_t) \end{aligned}$$

for $h = 1, \ldots, p+q$, $l_1, \ldots, l_h = 1, \ldots, p+q$ ($l_1 < \cdots < l_h$), $i_{l_1}, \ldots, i_{l_h} = v_1, \ldots, v_k$, it can be derived from (2) that

$$\begin{aligned} f_x(I_t) = f_x(X_t) & - \sum_{h_p=1}^{p} \ \sum_{\substack{l_1, \ldots, l_{h_p} = 1 \\ l_1 < \cdots < l_{h_p}}}^{p} \ \sum_{i_{l_1}, \ldots, i_{l_{h_p}} = v_1, \ldots, v_k} d^{h_p} f_x\Big(I_t\big((0_{l_1-1}, i_{l_1}, \ldots, i_{l_{h_p}}, 0_{p-l_{h_p}}) \,\big|\, 0_q\big)\Big) \Big(\prod_{r=1}^{h_p} f_{i_{l_r}}(X_{t-l_r})\Big) \\ & - \sum_{h_q=1}^{q} \sum_{h_p=0}^{p} \ \sum_{\substack{l_1, \ldots, l_{h_p} = 1 \\ l_1 < \cdots < l_{h_p}}}^{p} \ \sum_{\substack{n_1, \ldots, n_{h_q} = 1 \\ n_1 < \cdots < n_{h_q}}}^{q} \ \sum_{i_{l_1}, \ldots, i_{l_{h_p}}, j_{n_1}, \ldots, j_{n_{h_q}} = v_1, \ldots, v_k} d^{h_p+h_q} f_x\Big(I_t\big((0_{l_1-1}, i_{l_1}, \ldots, i_{l_{h_p}}, 0_{p-l_{h_p}}) \,\big|\, (0_{n_1-1}, j_{n_1}, \ldots, j_{n_{h_q}}, 0_{q-n_{h_q}})\big)\Big) \Big(\prod_{r=1}^{h_q} f_{j_{n_r}}(I_{t-n_r})\Big) \Big(\prod_{r=1}^{h_p} f_{i_{l_r}}(X_{t-l_r})\Big): \end{aligned} \tag{3}$$

similarly it was done for causality ([1]), with the representation

$$f_x(X_t) = f_x(I_t) + \sum_{h} \ \sum_{\substack{n_1, \ldots, n_h \\ n_1 < \cdots < n_h}} \ \sum_{j_{n_1}, \ldots, j_{n_h} = v_1, \ldots, v_k} f_x\Psi_{t,(n_1,\ldots,n_h)}(j_{n_1}, \ldots, j_{n_h}) \Big(\prod_{r=1}^{h} f_{j_{n_r}}(I_{t-n_r})\Big). \tag{4}$$

Here, a new substitution of $\big(\prod_{r=1}^{h_q} f_{j_{n_r}}(I_{t-n_r})\big)$ and so on will also lead to a countably infinite representation (of the upcoming form (5)). An adequate definition for invertibility follows.

Definition 3.1: The process $\{X_t, t \in \mathbb{Z}\}$ as defined in (1) will be called invertible, in the sense that it can be written

$$f_x(X_t) = f_x(I_t) + \sum_{h} \ \sum_{\substack{l_1, \ldots, l_h \\ l_1 < \cdots < l_h}} \ \sum_{i_{l_1}, \ldots, i_{l_h} = v_1, \ldots, v_k} f_x\Phi_{t,(l_1,\ldots,l_h)}(i_{l_1}, \ldots, i_{l_h}) \Big(\prod_{r=1}^{h} f_{i_{l_r}}(X_{t-l_r})\Big) \tag{5}$$

for $x = v_1, \ldots, v_k$ and $t \in \mathbb{Z}$; the random variable $f_x\Phi_{t,(l_1,\ldots,l_h)}(i_{l_1}, \ldots, i_{l_h})$ ($h \geq 1$) is independent of $I_{t+n}(\cdot|\cdot)$, $n \in \mathbb{N}$, and of $I_{t-l}(\cdot|\cdot)$, $l \geq l_h$, $l \in \mathbb{N}$ (it is a function of $I_{t-l}(\mathbf{i}|\mathbf{j})$, $0 \leq l \leq l_h - 1$), and remains unchanged over $t \in \mathbb{Z}$.

Additionally, it must hold that the probabilities

$$\mathbb{P}(X_t = x \,|\, X_{t-l} = i_l, l \in \mathbb{N}), \quad i_l = 0, v_1, \ldots, v_k, \ l \in \mathbb{N}, \tag{6}$$

can be uniquely determined from (5).

Remark 1: For $h, h^*, h_L \in \mathbb{N}$ with $h_L \leq h^* \leq h$, and $l_1, \ldots, l_h \in \mathbb{N}$ ($l_1 < \cdots < l_h$), consider $\{l_1^*, \ldots, l_{h_L}^* \leq l_{h^*}\} \subseteq \{l_1, \ldots, l_h\}$, $l_1^* < \cdots < l_{h_L}^*$, and any $i_{l_1}, \ldots, i_{l_h} = v_1, \ldots, v_k$, and write the events

$$B_{(t,h),((l_1,\ldots,l_h),(i_{l_1},\ldots,i_{l_h}))} := \{I_{t-l_1} = i_{l_1}, \ldots, I_{t-l_h} = i_{l_h}, \ I_{t-l} = 0, \ l \in \mathbb{N}, \ l \neq l_1, \ldots, l_h\},$$

$$C_{(t,h),((l_1,\ldots,l_h),(i_{l_1},\ldots,i_{l_h}))} := \{X_{t-l_1} = i_{l_1}, \ldots, X_{t-l_h} = i_{l_h}, \ X_{t-l} = 0, \ l \in \mathbb{N}, \ l \neq l_1, \ldots, l_h\}$$

and the probability of interest

$$\mathbb{P}\big(f_x\Phi_{t,(l_1^*,\ldots,l_{h_L}^*)}(i_{l_1^*}, \ldots, i_{l_{h_L}^*}) \leq y \,\big|\, C_{(t,h),((l_1,\ldots,l_h),(i_{l_1},\ldots,i_{l_h}))}\big).$$

As opposed to the probability

$$\mathbb{P}\big(f_x\Psi_{t,(l_1^*,\ldots,l_{h_L}^*)}(i_{l_1^*}, \ldots, i_{l_{h_L}^*}) \leq y \,\big|\, B_{(t,h),((l_1,\ldots,l_h),(i_{l_1},\ldots,i_{l_h}))}\big) \equiv \mathbb{P}\big(f_x\Psi_{t,(l_1^*,\ldots,l_{h_L}^*)}(i_{l_1^*}, \ldots, i_{l_{h_L}^*}) \leq y \,\big|\, I_{t-l_1} = i_{l_1}, \ldots, I_{t-l_{h^*}} = i_{l_{h^*}}, \ I_{t-l} = 0, \ 1 \leq l < l_{h^*}, \ l \neq l_1, \ldots, l_{h^*}\big)$$

from the "causality" topic in [1], where it is used that $\{I_t(\mathbf{i}|\mathbf{j})\} \sim \text{IID}$ (the $f_x\Psi_{t,(l_1,\ldots,l_h)}$ are functions of $I_{t-l}(\mathbf{i}|\mathbf{j})$, $0 \leq l < l_h$), in the case of "invertibility" $\{X_t\}$ is a serially dependent series. Nevertheless, a simplification is in order, i.e., it holds that

$$\begin{aligned} & \mathbb{P}\big(f_x\Phi_{t,(l_1^*,\ldots,l_{h_L}^*)}(i_{l_1^*}, \ldots, i_{l_{h_L}^*}) \leq y \,\big|\, C_{(t,h),((l_1,\ldots,l_h),(i_{l_1},\ldots,i_{l_h}))}\big) \\ ={} & \mathbb{P}\big(f_x\Phi_{t,(l_1^*,\ldots,l_{h_L}^*)}(i_{l_1^*}, \ldots, i_{l_{h_L}^*}) \leq y \,\big|\, \Phi^*_{(t-1,h),((l_1-1,\ldots,l_h-1),(i_{l_1},\ldots,i_{l_h}))} = 0, \ \Phi^*_{(t-l_1,h-1),((l_2-l_1,\ldots,l_h-l_1),(i_{l_2},\ldots,i_{l_h}))} = i_{l_1}, \ldots, \\ & \qquad \Phi^*_{(t-l_{h-1},1),(l_h-l_{h-1},\, i_{l_h})} = i_{l_{h-1}}, \ \Phi^*_{(t-l_h+1,1),(1,\, i_{l_h})} = 0, \ I_{t-l_h} = i_{l_h}\big) \end{aligned} \tag{7}$$

as can be shown (together with the notation $\Phi^*$).

3.1. Conditions Relating to Causality and Invertibility

To perform the TARMA parameter inference, there will be a need to somehow "squeeze" the random coefficients $f_x\Phi_{t,(l_1,\ldots,l_h)}$ to become smaller as $l_h \to \infty$.

To manage convergence results, first set the remainder

$$\begin{aligned} f_x R_{t,>l} :={} & \sum_{r=l+1}^{\infty} \sum_{i_r = v_1, \ldots, v_k} f_x\Phi_{t,r}(i_r)\, f_{i_r}(X_{t-r}) + \sum_{r_1=1}^{l} \sum_{r_2=l+1}^{\infty} \sum_{i_{r_1}, i_{r_2} = v_1, \ldots, v_k} f_x\Phi_{t,(r_1,r_2)}(i_{r_1}, i_{r_2})\, f_{i_{r_1}}(X_{t-r_1})\, f_{i_{r_2}}(X_{t-r_2}) \\ & + \sum_{\substack{r_1, r_2 = l+1 \\ r_1 < r_2}}^{\infty} \sum_{i_{r_1}, i_{r_2} = v_1, \ldots, v_k} f_x\Phi_{t,(r_1,r_2)}(i_{r_1}, i_{r_2})\, f_{i_{r_1}}(X_{t-r_1})\, f_{i_{r_2}}(X_{t-r_2}) + \cdots \end{aligned}$$

from the "invertible" representation (5). Similarly, it has been set by [1] that

$$\begin{aligned} f_x r_{t,>l} :={} & \sum_{r=l+1}^{\infty} \sum_{j_r = v_1, \ldots, v_k} f_x\Psi_{t,r}(j_r)\, f_{j_r}(I_{t-r}) + \sum_{r_1=1}^{l} \sum_{r_2=l+1}^{\infty} \sum_{j_{r_1}, j_{r_2} = v_1, \ldots, v_k} f_x\Psi_{t,(r_1,r_2)}(j_{r_1}, j_{r_2})\, f_{j_{r_1}}(I_{t-r_1})\, f_{j_{r_2}}(I_{t-r_2}) \\ & + \sum_{\substack{r_1, r_2 = l+1 \\ r_1 < r_2}}^{\infty} \sum_{j_{r_1}, j_{r_2} = v_1, \ldots, v_k} f_x\Psi_{t,(r_1,r_2)}(j_{r_1}, j_{r_2})\, f_{j_{r_1}}(I_{t-r_1})\, f_{j_{r_2}}(I_{t-r_2}) + \cdots \end{aligned}$$

from the “causal” representation (4).

Consider generic constants $C > 0$ and $\alpha \in (0,1)$. The conditions are presented below:

(C4): The parameters of the TARMA equation are such that it holds that

$$E\big\{|f_x R_{t,>l}| \,\big|\, X_{t-n} = x_n, n \in \mathbb{N}\big\} \leq C\,\alpha^{l+1}, \quad \text{for } x = v_1, \ldots, v_k, \ t \in \mathbb{Z}, \ l \geq 0 \text{ and any } x_n = 0, v_1, \ldots, v_k, \ n \in \mathbb{N}. \tag{8}$$

(C5): The parameters of the TARMA equation are such that it holds that

$$E|f_x r_{t,>l}| \leq C\,\alpha^{l+1}, \quad \text{for } x = v_1, \ldots, v_k, \ t \in \mathbb{Z}, \ l \geq 0. \tag{9}$$

To justify (C4), see that it can be written that

$$f_x(X_t) = f_x(I_t) + \sum_{l \geq 0} \big(f_x R_{t,>l} - f_x R_{t,>l+1}\big),$$

so that the conditional probability of interest (6) is bounded by

$$\mathbb{P}(I_t = x \,|\, X_{t-n} = i_n, n \in \mathbb{N}) + \sum_{l \geq 0} C^* \alpha^{l+1},$$

where a converging geometric series is involved and, under causality, $\mathbb{P}(I_t = x \,|\, X_{t-n} = i_n, n \in \mathbb{N}) \equiv \pi_x$; straight from (8), it holds that $E\{|f_x R_{t,>l} - f_x R_{t,>l+1}| \,|\, X_{t-n} = i_n, n \in \mathbb{N}\} \leq E\{|f_x R_{t,>l}| \,|\, X_{t-n} = i_n\} + E\{|f_x R_{t,>l+1}| \,|\, X_{t-n} = i_n\}$, so $C^* := C(1+\alpha)$ has been inserted.

Regarding (C5) and how it secures that all joint probabilities can be safely bounded, the answer is easy to show.

Since (C4) (as opposed to (C5)) involves conditional expectations, it might be wished, for $|f_x R_{t,>l}|$ given "...", to simplify the condition "..." from $\{X_{t-n} = x_n, n \in \mathbb{N}\}$ to $\{X_{t-n} = x_n, n = 1, \ldots, l\}$. According to the argument laid out above, it is in fact the absolute value of

$$f_x R_{t,>l} - f_x R_{t,>l+1} \equiv \sum_{i_{l+1} = v_1, \ldots, v_k} f_x\Phi_{t,l+1}(i_{l+1})\, f_{i_{l+1}}(X_{t-(l+1)}) + \sum_{r_1=1}^{l} \sum_{i_{r_1}, i_{l+1} = v_1, \ldots, v_k} f_x\Phi_{t,(r_1,l+1)}(i_{r_1}, i_{l+1})\, f_{i_{r_1}}(X_{t-r_1})\, f_{i_{l+1}}(X_{t-(l+1)}) + \cdots$$

(instead of that of $f_x R_{t,>l}$) that is needed, which is "contained" within the random coefficients $f_x\Phi_{t,(l_1,\ldots,l_h)}$, $l_1 < \cdots < l_h \leq l+1$ (which are functions of $I_{t-n}(\cdot|\cdot)$, $n = 0, 1, \ldots, l$). In the case that $X_{t-n} = 0$, $n \geq l+1$, it is clear from Remark 1 that (under causality) a simplification is possible, though not one concerning the variables $X$ in the condition: otherwise, it had better not be attempted. The interested reader may look at Section 3.2 to verify what happens in the general case given the requirement in (C4) as it stands.

For causality, due to the difficulties in determining the random coefficients $f_x\Psi_t$ (as functions of $I_{t-l}(\cdot|\cdot)$, $l \geq 0$), [1] resorted to a representation alternative to (4): then, based on the new form (rather than (4)), a condition relating to causality was salvaged. The equivalent representation for invertibility may be demonstrated with a Proposition 1, and Propositions 2 and 3 may lead to an invertibility-related condition. Nevertheless, it should be stressed here that Proposition 2 (and, consequently, Proposition 3) is established using the prerequisite of causality; this is because an "invertible" representation relies on the index variables $f(X_{t-l})$ (not $f(I_{t-l})$) and, under causality, it can be certified that $X_{t-l}$, $l \in \mathbb{N}$, is independent of $I_t(\cdot|\cdot)$.

Furthermore, remember that the causality consideration is attached to the definition of a TARMA process, as it guarantees probability stationarity, which is what this theory is about: so this comes before the results for the inference are sought. On the contrary, the contribution of invertibility will shine when the weak convergence (consistency) of the maximum likelihood estimators for the relevant TARMA probabilities is established. Nevertheless, it is not wished to undermine the value of invertibility as compared to that of causality: after all, both types of coefficients $f_x\Psi_t$ and $f_x\Phi_t$ are built as functions of the $I(\cdot|\cdot)$ variables from the present and past, using a similar mechanism. The conditions obtained seem to complement rather than contradict each other too, so it is clear that causality and invertibility need to work together for the TARMA body to stand straight.

The reader needs to become acquainted with the following notation: it is set, for $x = v_1, \ldots, v_k$, that

$$\gamma_x^{(\nu)} := \begin{cases} \max\Big\{ E\big|d^{\nu} f_x\big(I_t(0_p \,|\, (0_{n_1-1}, j_{n_1}, \ldots, j_{n_\nu}, 0_{q-n_\nu}))\big)\big| \Big\}, & \text{if } \nu = 1, \ldots, q, \\ \max_{\mathbf{i},\mathbf{j}}\{\pi_x(\mathbf{i}|\mathbf{j})\}, & \text{if } \nu = 0, \end{cases}$$

as well as, for each $h_p = 1, \ldots, p$, $\nu = 0, 1, \ldots, q$, that

$$\gamma_{p,x}^{(\nu)}(h_p) := \max\Big\{ E\big|d^{h_p+\nu} f_x\big(I_t((0_{l_1-1}, i_{l_1}, \ldots, i_{l_{h_p}}, 0_{p-l_{h_p}}) \,|\, (0_{n_1-1}, j_{n_1}, \ldots, j_{n_\nu}, 0_{q-n_\nu}))\big)\big| \Big\},$$

where both maxima are taken over any $n_1, \ldots, n_\nu = 1, \ldots, q$ ($n_1 < \cdots < n_\nu$), $j_{n_1}, \ldots, j_{n_\nu} = v_1, \ldots, v_k$; the second also over any $l_1, \ldots, l_{h_p} = 1, \ldots, p$ ($l_1 < \cdots < l_{h_p}$), $i_{l_1}, \ldots, i_{l_{h_p}} = v_1, \ldots, v_k$. It is defined for $x = v_1, \ldots, v_k$ that

$$\gamma_x := \max_{\nu = 1, \ldots, q} \Big\{ \gamma_x^{(\nu)} + \sum_{h_p=1}^{p} \binom{p}{h_p} \gamma_{p,x}^{(\nu)}(h_p) \Big\} \quad \text{and}$$

$$\gamma^* := \max_{\nu = 0, \ldots, q} \Big\{ \Big(\sum_{x = v_1, \ldots, v_k} \gamma_x^{(\nu)}\Big) + \sum_{h_p=1}^{p} \binom{p}{h_p} \Big(\sum_{x = v_1, \ldots, v_k} \gamma_{p,x}^{(\nu)}(h_p)\Big) \Big\}.$$

By relaxing the sum of maxima into a maximum of sums, $\sum_{x = v_1, \ldots, v_k} \gamma_x^{(0)}$ is replaced by $\max_{\mathbf{i},\mathbf{j}}\{1 - \pi_0(\mathbf{i}|\mathbf{j})\}$. More research is welcome on the subject of the equivalence with (C4), which was demonstrated to be sufficient for invertibility.

Remark 2: (i) Under (C4) and (C5), all joint probabilities and all probabilities $\mathbb{P}(X_t = x \,|\, X_{t-n} = x_n, n \in \mathbb{N})$, $x, x_n = 0, v_1, \ldots, v_k$, are contained (away from infinity): it can be concluded that any joint probability is away from zero (hence away from one as well), because if any such probability, say $\mathbb{P}(X_t(a))$, were 0, it would have to be that any conditional probability as above, with $X_{t^*}(a) \in \{X_{t-n} = x_n, n \in \mathbb{N}\}$ for some $t^*$, would not be bounded and properly defined; then it can be concluded (by division of a joint non-zero probability over a joint non-infinite probability) that all conditional probabilities are strictly larger than zero (hence smaller than one as well). Consequently, it will be taken that (C4) and (C5) can suffice for (C2).

(ii) For the $\pi$ that yield a causal and invertible TARMA series $\{X_t\}$, it can be arranged under (8) in (C4) (or (9) in (C5)) that

$$\Big|\frac{\partial}{\partial \pi}\, E\big(f_x R_{t,>l} \,\big|\, X_{t-n} = x_n, n \in \mathbb{N}\big)\Big| \leq C\,\alpha^{l+1} \quad \Big(\text{or } \Big|\frac{\partial}{\partial \pi}\, E\big(f_x r_{t,>l}\big)\Big| \leq C\,\alpha^{l+1}\Big) \tag{10}$$

for some $0 < C < \infty$ and $\alpha \in (0,1)$, and

$$\Big|\frac{\partial^2}{\partial \pi\, \partial \pi^\tau}\, E\big(f_x R_{t,>l} \,\big|\, X_{t-n} = x_n, n \in \mathbb{N}\big)\Big| \leq C\,\alpha^{l+1} \quad \Big(\text{or } \Big|\frac{\partial^2}{\partial \pi\, \partial \pi^\tau}\, E\big(f_x r_{t,>l}\big)\Big| \leq C\,\alpha^{l+1}\Big) \tag{11}$$

under (8) (or (9)): this can be shown.

It is taken for granted that the derivatives $\frac{\partial}{\partial \pi}\, \mathbb{P}(X_t = x \,|\, X_{t-n} = x_n, n \in \mathbb{N}; \pi)$ and $\frac{\partial^2}{\partial \pi\, \partial \pi^\tau}\, \mathbb{P}(X_t = x \,|\, X_{t-n} = x_n, n \in \mathbb{N}; \pi)$ exist.

3.2. Random Coefficient Modelling Based on the Past

For $l, n \geq 0$, $n \leq l$, and a set $\{l_1^*, \ldots, l_n^*\} \subseteq \{1, \ldots, l\}$, it is written (for fixed $i_{l_1^*}, \ldots, i_{l_n^*} = v_1, \ldots, v_k$, though this is omitted from the symbol $N$) that

$$N_{t,n}^{\,l} := \{X_{t-l_1^*} = i_{l_1^*}, \ldots, X_{t-l_n^*} = i_{l_n^*}, \ X_{t-l^*} = 0, \ l^* = 1, \ldots, l, \ l^* \neq l_1^*, \ldots, l_n^*\}.$$

Eventually it can be shown that

$$\lim_{l \to \infty} \mathbb{P}\Big(\big|\mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l}, X_{t-l-l^*}, l^* \in \mathbb{N}) - \mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l})\big| \leq C\,\alpha^{l+1}\Big) = 1 \tag{12}$$

($n, l_1^*, \ldots, l_n^*, i_{l_1^*}, \ldots, i_{l_n^*}$ remain unchanged as $l \to \infty$).

Remark 3: From (10) and (11) and in the same manner, it will be considered occasionally that

$$\lim_{l \to \infty} \mathbb{P}\Big(\Big|\frac{\partial}{\partial \pi}\, \mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l}) - \frac{\partial}{\partial \pi}\, \mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l}, X_{t-l^*}, l^* \geq l+1)\Big| \leq C\,\alpha^{l+1}\Big) = 1 \tag{13}$$

and

$$\lim_{l \to \infty} \mathbb{P}\Big(\Big|\frac{\partial^2}{\partial \pi\, \partial \pi^\tau}\, \mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l}) - \frac{\partial^2}{\partial \pi\, \partial \pi^\tau}\, \mathbb{P}(X_t = x \,|\, N_{t,n}^{\,l}, X_{t-l^*}, l^* \geq l+1)\Big| \leq C\,\alpha^{l+1}\Big) = 1, \tag{14}$$

respectively. A justification may be offered.

4. Maximum Likelihood Estimation

First, the parameter space, say $\Theta$, must be considered; the fixed $k$, categories $v_1, \ldots, v_k$ and order $p, q \geq 0$, $p + q \in \mathbb{N}$, are attached to it:

The parameter space $\Theta$ is a set that includes all candidate parameter vectors $\pi \in \Theta$ that model a process of interest $\{X_t, t \in \mathbb{Z}\}$ using a TARMA($k, p, q$) equation with a predetermined form of interdependence, such that conditions (C3), (C4) and (C5) are satisfied (plus any extra requirements added in this section).

Now, suppose that $\{X_1, \ldots, X_T\}$ have been made available from (1), and the target is to estimate the true parameters of the model, say $\pi_0 \in \Theta$, based on these observations. The likelihood, expressed as a function of $\pi$, takes the form

$$L(\pi) = \prod_{t=1}^{T} \tilde{p}_{t,t-1}(\pi),$$

where

$$\tilde{p}_{t,0}(\pi) := \mathbb{P}(X_t = x; \pi), \quad \text{if } X_t = x, \text{ where } x = 0, v_1, \ldots, v_k,$$

and for general $i \in \mathbb{N}$, it is written

$$\tilde{p}_{t,i}(\pi) := \mathbb{P}(X_t = x \,|\, X_{t-1}, \ldots, X_{t-i}; \pi), \quad \text{if } X_t = x, \text{ where } x = 0, v_1, \ldots, v_k;$$

the $\tilde{p}_{t,i}$ are functions of $X_{t-1}, \ldots, X_{t-i}$ as well as $X_t$, and it is considered that these (conditional) probabilities result from $\{X_t, t \in \mathbb{Z}\}$ as if it had been generated by $\pi \in \Theta$. Of course, the natural logarithm of the likelihood may be taken,

$$l(\pi) = \sum_{t=1}^{T} \ln\big(\tilde{p}_{t,t-1}(\pi)\big),$$

as the maximum likelihood estimators $\hat{\pi}$ then satisfy the equations

$$\frac{\partial}{\partial \pi}\, l(\pi)\Big|_{\pi = \hat{\pi}} = \mathbf{0} \quad \text{or} \quad \sum_{t=1}^{T} \frac{1}{\tilde{p}_{t,t-1}(\hat{\pi})} \frac{\partial}{\partial \pi}\, \tilde{p}_{t,t-1}(\pi)\Big|_{\pi = \hat{\pi}} = \mathbf{0}. \tag{15}$$
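
Before proceeding, a hedged numerical illustration may help fix ideas: for the simplest causal case of a binary TAR(1) ($k = 1$, $p = 1$, $q = 0$), the conditional probabilities entering $L(\pi)$ are available in closed form and the log-likelihood $l(\pi)$ can be maximized directly over the two free parameters. The routine below is a sketch under those assumptions; its names are this sketch's own, not the paper's.

```python
import numpy as np

def tar1_loglik(params, x):
    """Exact log-likelihood of a binary TAR(1), with
    params = (pi, pi1): pi = P(X_t=1 | X_{t-1}=0), pi1 = P(X_t=1 | X_{t-1}=1).
    The first factor uses the stationary marginal probability."""
    pi, pi1 = params
    p_marg = pi / (1.0 + pi - pi1)            # stationary P(X_t = 1)
    ll = np.log(p_marg if x[0] == 1 else 1.0 - p_marg)
    for prev, cur in zip(x[:-1], x[1:]):
        p = pi1 if prev == 1 else pi          # conditional probability of X_t = 1
        ll += np.log(p if cur == 1 else 1.0 - p)
    return ll
```

Maximizing this function, for example over a fine grid of the two parameters, mirrors the search that Section 5 performs for the TARMA(1, 1, 1) likelihood.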

4.1. Weak Convergence of Estimators

For $\pi \in \Theta$ and any $t \in \mathbb{Z}$, it is written

$$p_t(\pi) := \mathbb{P}(X_t = x \,|\, X_{t-i}, i \in \mathbb{N}; \pi), \quad \text{if } X_t = x, \text{ where } x = 0, v_1, \ldots, v_k,$$

which is a function of $X_{t-i}$, $i \geq 0$, and the (conditional) distribution law is generated by $\pi \in \Theta$.

According to (12) (thanks to invertibility) and for any $\pi \in \Theta$ and $x_1, \ldots, x_T = 0, v_1, \ldots, v_k$, it holds as $T \to \infty$ that

$$\left( \frac{\mathbb{P}(X_1 = x_1; \pi)}{\mathbb{P}(X_1 = x_1 \,|\, X_{1-n}, n \in \mathbb{N}; \pi)} \cdot \frac{\mathbb{P}(X_2 = x_2 \,|\, X_1 = x_1; \pi)}{\mathbb{P}(X_2 = x_2 \,|\, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi)} \cdots \frac{\mathbb{P}(X_T = x_T \,|\, X_{T-1} = x_{T-1}, \ldots, X_1 = x_1; \pi)}{\mathbb{P}(X_T = x_T \,|\, X_{T-1} = x_{T-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi)} \right)^{1/T} \xrightarrow{P} 1, \tag{16}$$

because the geometric mean weighs with (rather than just multiplies) the previous values: as the new ratios get closer to 1, so must the mean itself. The geometric mean from $(T+1)$ observations is a weighted product mean of the geometric mean of the $T$ observations and the $(T+1)$th ratio (with an adjusting constant $C_A$ for the distance of the probability ratios from one; by allowing the exponential rate for the geometric mean of ratios, this becomes apparent from the bound $\{C_A^T\, \alpha^{(T+1)T/2}\}^{1/T} \to 0$ as $T \to \infty$).

Once the fixed sample series $x_1, \ldots, x_T$ has been collected (this has been generated by $\pi_0$), remember that the maximum likelihood estimate $\hat{\pi}$ is set in such a way that it is true for any $\pi \in \Theta$ that

$$\prod_{t=1}^{T} \mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1; \hat{\pi}) \geq \prod_{t=1}^{T} \mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1; \pi),$$

where the (conditional) probabilities (the distribution law of $X_1, \ldots, X_T$) are calculated as if $\pi$ were the real parameters that generated this realization. Thanks to (16) (and invertibility), this becomes

$$\left( \prod_{t=1}^{T} \frac{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \hat{\pi})}{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi)} \right)^{1/T} \geq 1$$

with probability that tends to one as $T \to \infty$. Thinking now about the random variables (rather than the realizations), it can be written that

$$\left( \prod_{t=1}^{T} \frac{p_t(\hat{\pi})}{p_t(\pi)} \right)^{1/T} = \sum_{x_1, \ldots, x_T = 0, v_1, \ldots, v_k} \left\{ \left( \prod_{t=1}^{T} \frac{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \hat{\pi})}{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi)} \right)^{1/T} \right\} f_{x_1}(X_1) \cdots f_{x_T}(X_T),$$

where $X_1, \ldots, X_T$ are the same random variables for both numerator and denominator, with realizations as in $f_{x_1}(X_1) \cdots f_{x_T}(X_T)$ (these have been generated by the real $\pi_0$), and it is verified that

$$\lim_{T \to \infty} \mathbb{P}\left( \left( \prod_{t=1}^{T} \frac{p_t(\hat{\pi})}{p_t(\pi)} \right)^{1/T} \geq 1 \right) = 1 \quad \text{for any } \pi \in \Theta. \tag{17}$$

On the other hand, it is true for any $\pi \in \Theta$ that

$$E\left( \prod_{t=1}^{T} \frac{p_t(\pi)}{p_t(\pi_0)} \right) = E\left( E\left( \prod_{t=1}^{T} \frac{p_t(\pi)}{p_t(\pi_0)} \,\Big|\, X_{1-n}, n \in \mathbb{N}; \pi_0 \right) \right)$$

and it is essential that it is the "real" $\pi_0$ that generates the random variables and governs the (conditional) expectation, i.e.,

$$E\left( \prod_{t=1}^{T} \frac{p_t(\pi)}{p_t(\pi_0)} \,\Big|\, X_{1-n}, n \in \mathbb{N}; \pi_0 \right) = \sum_{x_1, \ldots, x_T = 0, v_1, \ldots, v_k} \mathbb{P}(X_1 = x_1, \ldots, X_T = x_T \,|\, X_{1-n}, n \in \mathbb{N}; \pi_0) \prod_{t=1}^{T} \frac{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi)}{\mathbb{P}(X_t = x_t \,|\, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1, X_{1-n}, n \in \mathbb{N}; \pi_0)} = \sum_{x_1, \ldots, x_T = 0, v_1, \ldots, v_k} \mathbb{P}(X_1 = x_1, \ldots, X_T = x_T \,|\, X_{1-n}, n \in \mathbb{N}; \pi) = 1:$$

the last statement is true since these are probabilities adding to one (regardless of the $\pi \in \Theta$). Then it holds that

$$E\left( \prod_{t=1}^{T} \frac{p_t(\pi)}{p_t(\pi_0)} \right) = 1 \quad \text{for any } \pi \in \Theta.$$

As in Jensen's inequality (use the concave function $g(y) = y^{1/T}$, $y > 0$, with $g''(y) = (1/T)\big((1-T)/T\big)\, y^{(1/T)-2} < 0$ for $T > 1$), this last statement can be transformed to

$$E\left\{ \left( \prod_{t=1}^{T} \frac{p_t(\pi)}{p_t(\pi_0)} \right)^{1/T} \right\} \leq 1 \quad \text{for any } \pi \in \Theta, \tag{18}$$

with the equality holding if and only if $\pi = \pi_0$ (it is considered here that $p_t(\pi_1) = p_t(\pi_2)$, $\pi_1, \pi_2 \in \Theta$, can only result from $\pi_1 = \pi_2$).

In order to conclude, one can combine (17), which writes

$$\lim_{T \to \infty} \mathbb{P}\left( \left( \prod_{t=1}^{T} \frac{p_t(\hat{\pi})}{p_t(\pi_0)} \right)^{1/T} \geq 1 \right) = 1,$$

with (18), which writes

$$E\left\{ \left( \prod_{t=1}^{T} \frac{p_t(\hat{\pi})}{p_t(\pi_0)} \right)^{1/T} \right\} \leq 1,$$

and they are both satisfied when

$$\left( \prod_{t=1}^{T} p_t(\hat{\pi}) \right)^{1/T} - \left( \prod_{t=1}^{T} p_t(\pi_0) \right)^{1/T} \xrightarrow{P} 0 \quad \text{as } T \to \infty$$

or, in other words, when

$$\hat{\pi} \xrightarrow{P} \pi_0 \quad \text{as } T \to \infty. \tag{19}$$

4.2. Asymptotic Distribution

Thanks to a Taylor expansion, (15) can be turned into

$$\begin{aligned} & \sum_{t=1}^{T} \frac{1}{\tilde{p}_{t,t-1}(\pi_0)} \frac{\partial}{\partial \pi_i}\, \tilde{p}_{t,t-1}(\pi)\Big|_{\pi=\pi_0} \\ & + \sum_{t=1}^{T} \sum_{j} \left\{ \frac{1}{\tilde{p}_{t,t-1}(\pi_0)} \frac{\partial^2}{\partial \pi_j \partial \pi_i}\, \tilde{p}_{t,t-1}(\pi)\Big|_{\pi=\pi_0} - \frac{1}{\tilde{p}_{t,t-1}^2(\pi_0)} \Big(\frac{\partial}{\partial \pi_j}\, \tilde{p}_{t,t-1}(\pi)\Big|_{\pi=\pi_0}\Big) \Big(\frac{\partial}{\partial \pi_i}\, \tilde{p}_{t,t-1}(\pi)\Big|_{\pi=\pi_0}\Big) \right\} (\hat{\pi}_j - \pi_{j,0}) + \sum_{t=1}^{T} E_{t,i}(\hat{\pi}) = 0, \end{aligned}$$

where $i$ (and $j$) are indexes that refer to all the different scalar components of $\pi \in \Theta$ and the $E_{t,i}(\hat{\pi})$ are scalars; to understand their role better, consider the expansion for fixed $t$ and define the function

$$e_{t,i}(\pi) := \frac{1}{\tilde{p}_{t,t-1}(\pi)} \frac{\partial}{\partial \pi_i}\, \tilde{p}_{t,t-1}(\pi) - \sum_{x=0,1} \frac{\partial^x}{\partial \pi^x} \Big( \frac{1}{\tilde{p}_{t,t-1}(\pi)} \frac{\partial}{\partial \pi_i}\, \tilde{p}_{t,t-1}(\pi) \Big)\Big|_{\pi=\pi_0} \frac{(\pi - \pi_0)^x}{x!},$$

with "$x = 0$" applying no derivative at all to the function ($(\pi - \pi_0)^0 \equiv 1$) and "$x = 1$" being the usual row vector of first derivatives (times a column vector): then it is clear that $e(\pi_0) = \mathbf{0}$. Due to (19) and the continuity of the function $e$ at $\pi_0$, it can be concluded that $e(\hat{\pi}) \xrightarrow{P} \mathbf{0}$ as $T \to \infty$ (without worrying about the $(t, t-1)$ label, i.e., the convergence to zero takes place anyway for $T \to \infty$). It will then be taken for granted that

$$\frac{\sum_{t=1}^{T} E_{t,i}(\hat{\pi})}{\cdots} \xrightarrow{P} 0 \quad \text{as } T \to \infty, \tag{20}$$

where "$\cdots$" is the convenient divisor $T$ or even $T^{1/2}$. Note that (20) has been justified without the presumption of existence of derivatives of the conditional probabilities under study higher than the second.

After omitting the extra terms, all the equations are stacked to come up with

$$\sum_{t=1}^{T} \left( \frac{1}{\tilde{p}_{t,t-1}^2(\pi_0)}\, \tilde{\delta}_t \tilde{\delta}_t^\tau - \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{D}_t \right) (\hat{\pi} - \pi_0) = \sum_{t=1}^{T} \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{\delta}_t,$$

where $\tilde{\delta}_t$ is the column vector of $\frac{\partial}{\partial \pi_i}\, \tilde{p}_{t,t-1}(\pi)\big|_{\pi=\pi_0}$, $i = 1, 2, \ldots$, $\tilde{D}_t$ is the matrix with elements $\frac{\partial^2}{\partial \pi_i \partial \pi_j}\, \tilde{p}_{t,t-1}(\pi)\big|_{\pi=\pi_0}$, $i, j = 1, 2, \ldots$, and "$\tau$" stands for the transpose operator. It is re-written that

$$T^{1/2}(\hat{\pi} - \pi_0) = \left\{ \frac{1}{T} \sum_{t=1}^{T} \left( \frac{1}{\tilde{p}_{t,t-1}^2(\pi_0)}\, \tilde{\delta}_t \tilde{\delta}_t^\tau - \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{D}_t \right) \right\}^{-1} T^{-1/2} \sum_{t=1}^{T} \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{\delta}_t. \tag{21}$$

Next, consider $\delta_t$ to be the column vector of $\frac{\partial}{\partial \pi_i}\, p_t(\pi)\big|_{\pi=\pi_0}$, $i = 1, 2, \ldots$.

Lemma 4.1: It holds that

$$T^{-1/2} \sum_{t=1}^{T} \left( \frac{1}{p_t(\pi_0)}\, \delta_t - \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{\delta}_t \right) \xrightarrow{P} \mathbf{0} \quad \text{as } T \to \infty.$$

Proof: This may be presented.

Once Lemma 4.1 has been established, Equation (21) can be replaced by

$$T^{1/2}(\hat{\pi} - \pi_0) = \left\{ \frac{1}{T} \sum_{t=1}^{T} \left( \frac{1}{\tilde{p}_{t,t-1}^2(\pi_0)}\, \tilde{\delta}_t \tilde{\delta}_t^\tau - \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{D}_t \right) \right\}^{-1} T^{-1/2} \sum_{t=1}^{T} \frac{1}{p_t(\pi_0)}\, \delta_t. \tag{22}$$

To proceed further, consider $D_t$ to be the matrix with elements $\frac{\partial^2}{\partial \pi_i \partial \pi_j}\, p_t(\pi)\big|_{\pi=\pi_0}$, $i, j = 1, 2, \ldots$.

Lemma 4.2: It holds that

$$\frac{1}{T} \sum_{t=1}^{T} \left( \left( \frac{1}{\tilde{p}_{t,t-1}(\pi_0)}\, \tilde{D}_t - \frac{1}{\tilde{p}_{t,t-1}^2(\pi_0)}\, \tilde{\delta}_t \tilde{\delta}_t^\tau \right) - \left( \frac{1}{p_t(\pi_0)}\, D_t - \frac{1}{p_t^2(\pi_0)}\, \delta_t \delta_t^\tau \right) \right) \xrightarrow{P} \mathbf{O}$$

as $T \to \infty$, where $\mathbf{O}$ (in bold for emphasis) is the square matrix of zeros.

Proof: This may be presented.

Once Lemma 4.2 has been established, Equation (22) can be replaced by

$$T^{1/2}(\hat{\pi} - \pi_0) = \left\{ \frac{1}{T} \sum_{t=1}^{T} \left( \frac{1}{p_t^2(\pi_0)}\, \delta_t \delta_t^\tau - \frac{1}{p_t(\pi_0)}\, D_t \right) \right\}^{-1} T^{-1/2} \sum_{t=1}^{T} \frac{1}{p_t(\pi_0)}\, \delta_t. \tag{23}$$

Theorem 4.3: It holds that

$$T^{-1/2} \sum_{t=1}^{T} \frac{1}{p_t(\pi_0)}\, \delta_t \xrightarrow{D} \mathcal{N}\left( \mathbf{0},\ \mathrm{Var}\Big( \frac{1}{p_t(\pi_0)}\, \delta_t \Big) \right) \quad \text{as } T \to \infty.$$

Proof: This may be presented.

Lemma 4.4: It holds that

$$\frac{1}{T} \sum_{t=1}^{T} \frac{1}{p_t^2(\pi_0)}\, \delta_t \delta_t^\tau \xrightarrow{P} \mathrm{Var}\Big( \frac{1}{p_t(\pi_0)}\, \delta_t \Big) \quad \text{as } T \to \infty.$$

Proof: This may be presented.

Lemma 4.5: It holds that

$$\frac{1}{T} \sum_{t=1}^{T} \frac{1}{p_t(\pi_0)}\, D_t \xrightarrow{P} \mathbf{O} \quad \text{as } T \to \infty.$$

Proof: This may be presented.

5. Empirical Illustrations

This section serves practice: for special cases of TARMA models the maximum likelihood estimation takes place, so that (i) it is refreshed how to compute the joint probabilities to insert into the likelihood (and estimate) and, more importantly, (ii) it is examined how well the estimators perform for moderate sample sizes. For both (i) and (ii), particular interest lies in $q \geq 1$, when "naïve" moment estimates cannot be computed directly from the data, as opposed to the more traditional TAR cases. At this point, it is recalled that the TAR is the special case of the TARMA model with AR parameters only, for which explicit results have been obtained via the Markov chains. The TARMA with the inclusion of MA parts is a parsimonious way to achieve the infinite order of a TAR model, and that is where the paper's interest lies.

Hence the groundwork equation

$$X_t = I_t\, (1 - X_{t-1})(1 - I_{t-1}) + I_t(1|0)\, X_{t-1} (1 - I_{t-1}) + I_t(0|1)\, (1 - X_{t-1})\, I_{t-1} + I_t(1|1)\, X_{t-1}\, I_{t-1} \tag{24}$$

will, in general, govern a series $\{X_t\} \sim \text{TARMA}(1,1,1)$ (with realizations in $\{0,1\}$). By writing $\psi(i|j) := \mathbb{P}(X_t = i, I_t = j)$, $i, j = 0, 1$, then from the parameters $\pi = \mathbb{P}(I_t = 1)$ and $\pi_{|v}(i|j) = \mathbb{P}(I_t(i|j) = 1 \,|\, I_t = v)$, $(i,j) \neq (0,0)$, $v = 0, 1$ (with $\pi(i|j) = \mathbb{P}(I_t(i|j) = 1) \equiv \pi\, \pi_{|1}(i|j) + (1 - \pi)\, \pi_{|0}(i|j)$), [1] has contributed a methodology that eventually (under causality) computes the $\psi$ as in

$$\psi(1|1) = \frac{\pi\, A}{\big(1 + \pi - \pi\,\pi_{|1}(1|1)\big) A + \big(\pi - \pi\,\pi_{|1}(1|0)\big) B + \big(\pi - \pi\,\pi_{|1}(0|1)\big) \Gamma}, \tag{25}$$

where, purely for a readable display, the shorthand

$$A := \big(1 - (1-\pi)\pi_{|0}(1|0)\big)\big(1 - \pi(1 - \pi_{|1}(0|1))\big) - (1-\pi)\pi_{|0}(0|1)\; \pi\big(1 - \pi_{|1}(1|0)\big),$$

$$B := \big(1 - \pi(1 - \pi_{|1}(0|1))\big)(1-\pi)\pi_{|0}(1|1) + (1-\pi)\pi_{|0}(0|1)\; \pi\big(1 - \pi_{|1}(1|1)\big),$$

$$\Gamma := \pi\big(1 - \pi_{|1}(1|0)\big)(1-\pi)\pi_{|0}(1|1) + \big(1 - (1-\pi)\pi_{|0}(1|0)\big)\; \pi\big(1 - \pi_{|1}(1|1)\big)$$

has been used; this is followed by

$$\psi(1|0) = \frac{B}{A}\; \psi(1|1) \tag{26}$$

and

$$\psi(0|1) = \frac{\Gamma}{A}\; \psi(1|1)$$

(and $\psi(0|0) = 1 - \psi(1|0) - \psi(0|1) - \psi(1|1)$). $\quad$ (27)

Additionally, after writing for $h \in \mathbb{N}$ that $\psi^{(h)}\big((i_1, \ldots, i_{h+1}) \,|\, j\big) := \mathbb{P}(X_t = i_1, \ldots, X_{t-h} = i_{h+1}, I_t = j)$ (and $\psi^{(0)}(i|j) \equiv \psi(i|j)$), the methodology also contributes the recursive formulae

$$\psi^{(h+1)}\big((1, 0, i_2, \ldots, i_{h+1}) \,|\, 1\big) = \pi\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 0\big) + \pi\, \pi_{|1}(0|1)\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 1\big), \tag{28}$$

$$\psi^{(h+1)}\big((1, 1, i_2, \ldots, i_{h+1}) \,|\, 1\big) = \sum_{j=0,1} \pi\, \pi_{|1}(1|j)\, \psi^{(h)}\big((1, i_2, \ldots, i_{h+1}) \,|\, j\big),$$

$$\psi^{(h+1)}\big((1, 0, i_2, \ldots, i_{h+1}) \,|\, 0\big) = (1-\pi)\, \pi_{|0}(0|1)\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 1\big),$$

$$\psi^{(h+1)}\big((1, 1, i_2, \ldots, i_{h+1}) \,|\, 0\big) = \sum_{j=0,1} (1-\pi)\, \pi_{|0}(1|j)\, \psi^{(h)}\big((1, i_2, \ldots, i_{h+1}) \,|\, j\big),$$

and the recursive formulae

$$\psi^{(h+1)}\big((0, 0, i_2, \ldots, i_{h+1}) \,|\, 1\big) = \pi\, \big(1 - \pi_{|1}(0|1)\big)\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 1\big),$$

$$\psi^{(h+1)}\big((0, 1, i_2, \ldots, i_{h+1}) \,|\, 1\big) = \sum_{j=0,1} \pi\, \big(1 - \pi_{|1}(1|j)\big)\, \psi^{(h)}\big((1, i_2, \ldots, i_{h+1}) \,|\, j\big),$$

$$\psi^{(h+1)}\big((0, 0, i_2, \ldots, i_{h+1}) \,|\, 0\big) = (1-\pi)\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 0\big) + (1-\pi)\big(1 - \pi_{|0}(0|1)\big)\, \psi^{(h)}\big((0, i_2, \ldots, i_{h+1}) \,|\, 1\big),$$

$$\psi^{(h+1)}\big((0, 1, i_2, \ldots, i_{h+1}) \,|\, 0\big) = \sum_{j=0,1} (1-\pi)\big(1 - \pi_{|0}(1|j)\big)\, \psi^{(h)}\big((1, i_2, \ldots, i_{h+1}) \,|\, j\big), \tag{29}$$

from which the probabilities

$$\mathbb{P}(X_t = i_0, X_{t-1} = i_1, \ldots, X_{t-(h+1)} = i_{h+1}) = \sum_{j=0,1} \psi^{(h+1)}\big((i_0, i_1, \ldots, i_{h+1}) \,|\, j\big)$$

can be computed for $i_0, i_1, \ldots, i_{h+1} = 0, 1$.
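
The recursions (28)-(29) translate directly into a forward pass over an observed sample, which is how the exact likelihood used below can be computed. The following is a hedged sketch for a binary TARMA(1, 1, 1): `pi` stands for $\mathbb{P}(I_t = 1)$, `pi_c[(v, i, j)]` for $\pi_{|v}(i|j)$, and `psi0` for the pair $(\psi(x_1|0), \psi(x_1|1))$ as computed from (25)-(27) (not re-derived here); all names are this sketch's own.

```python
import math

def tarma11_loglik(x, pi, pi_c, psi0):
    """Exact log-likelihood of a binary TARMA(1,1,1) sample x[0..T-1]:
    psi[j] carries P(X_t = x_t, ..., X_1 = x_1, I_t = j) and is updated
    with the transition probabilities behind the recursions (28)-(29)."""
    psi = list(psi0)                          # psi0[j] = psi(x[0] | j)
    for prev, cur in zip(x[:-1], x[1:]):
        new = [0.0, 0.0]
        for v, p_v in ((1, pi), (0, 1.0 - pi)):
            for j in (0, 1):
                if (prev, j) == (0, 0):       # the indicator I_t(0|0) is I_t itself
                    q = 1.0 if cur == v else 0.0
                else:                         # P(I_t(prev|j) = cur | I_t = v)
                    pc = pi_c[(v, prev, j)]
                    q = pc if cur == 1 else 1.0 - pc
                new[v] += p_v * q * psi[j]    # sum over the state j of I_{t-1}
        psi = new
    return math.log(psi[0] + psi[1])
```

For instance, with `prev = 0`, `cur = 1` and `v = 1` the update reproduces (28): the $j = 0$ term contributes $\pi\, \psi^{(h)}(\cdot|0)$ and the $j = 1$ term contributes $\pi\, \pi_{|1}(0|1)\, \psi^{(h)}(\cdot|1)$.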

To generate $X_t$, $t = 1, \ldots, T$, it has been used that

$$X_t = Y_t + \sum_{i \in \mathbb{N}} \Big( \prod_{j=0}^{i-1} (DY)_{t-j} \Big) Y_{t-i}, \quad Y_t := I_t(0|1)\, I_{t-1} + I_t (1 - I_{t-1}), \quad (DY)_t := \big(I_t(1|1) - I_t(0|1)\big) I_{t-1} + \big(I_t(1|0) - I_t\big)(1 - I_{t-1}),$$

with the sum being truncated at a finite constant $c = 20$ for $t = 1$, and so on. The two presented conditions for causality and invertibility write together that the quantities

$$\pi + E|d f_1(I_t(0|1))|, \qquad E|d f_1(I_t(1|0))| + E|d^2 f_1(I_t(1|1))|,$$

$$\max_{i,j=0,1}\{\pi(i|j)\} + E|d f_1(I_t(1|0))|, \qquad E|d f_1(I_t(0|1))| + E|d^2 f_1(I_t(1|1))|$$

are all less than 1 (the $d^h f_1(\cdot)$, $h = 1, 2$, notation was re-introduced at the very beginning of Section 3). Hence by imposing the strong requirement $\max_{i,j=0,1}\{\pi(i|j)\} < 1/6$, the series is secured to be where it should be (without worrying about the interdependence scenario).

Firstly, the case $\pi = 0.165$, $\pi(1|0) = 0.152$, $\pi(0|1) = 0.125$ and $\pi(1|1) = 0.108$ (under interindependence) has been studied, for a number of observations $T = 6, 10, 14$ (the maximum sample size 14 has been picked for the purely technical reason of storing the $2^{T+1}$ probabilities for the last recursion in one vector; otherwise a modification must take place in the code). A search over the four parameters $0 < \pi(\cdot|\cdot) < 0.1667$, in steps $\pi(\cdot|\cdot) \to \pi(\cdot|\cdot) + 0.01$, has materialized, and for each attempted value the probabilities (25), (26), (27) and the recursions (28)-(29) must be computed: the likelihood is none other than the one of these final probabilities indicated by the positions of the zeros and ones in the sample series (the same position regardless of the parameter values under search). Then the attempted parameter values that offered the biggest of those probabilities are taken to be the maximum likelihood estimates; a sketch of this search follows.
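
A hedged sketch of this grid search, reusing the `tarma11_loglik` routine sketched above, is given below; `stationary_psi` stands for a routine evaluating (25)-(27) and is assumed here rather than supplied, and `x` is the observed 0-1 sample.

```python
import itertools
import numpy as np

grid = np.arange(0.01, 0.1667, 0.01)          # step 0.01 on (0, 1/6), as in the text
best_ll, best_params = -np.inf, None
for p, p10, p01, p11 in itertools.product(grid, repeat=4):
    # Interindependence: pi_{|v}(i|j) = pi(i|j) for v = 0, 1.
    pi_c = {(v, i, j): pc
            for v in (0, 1)
            for (i, j), pc in {(1, 0): p10, (0, 1): p01, (1, 1): p11}.items()}
    psi0 = stationary_psi(p, pi_c, x[0])      # hypothetical: evaluates (25)-(27)
    ll = tarma11_loglik(x, p, pi_c, psi0)
    if ll > best_ll:
        best_ll, best_params = ll, (p, p10, p01, p11)
```

The search visits $16^4$ parameter combinations, which is the computational burden of the recursions alluded to in the text.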

Meanwhile, an opportunity for a comparison with the TAR model (of order $p = 2$) with the same parameters ($\pi = 0.165$, $\pi(10) = 0.152$, $\pi(01) = 0.125$ and $\pi(11) = 0.108$) has arisen, i.e., $\{X_1, \ldots, X_T\}$ have now been born from

$$X_t = I_t\, (1 - X_{t-1})(1 - X_{t-2}) + I_t(10)\, X_{t-1} (1 - X_{t-2}) + I_t(01)\, (1 - X_{t-1})\, X_{t-2} + I_t(11)\, X_{t-1}\, X_{t-2}.$$

Note that it can be derived that $X_t = I_t + \sum_{n \in \mathbb{N}} C_{t,n}^{(n-1)} I_{t-n}$ with $C_{t,1}^{(0)} := D I_t(10) \equiv I_t(10) - I_t$, $C_{t,2}^{(0)} := D I_t(01) \equiv I_t(01) - I_t$, $C_{t,(1,2)}^{(0)} := D^2 I_t(11) \equiv I_t(11) - I_t(10) - I_t(01) + I_t$, and for $n \in \mathbb{N}$

$$C_{t,n+1}^{(n)} := C_{t,n+1}^{(n-1)} + C_{t,n}^{(n-1)}\, D I_{t-n}(10) + C_{t,(n,n+1)}^{(n-1)}\, I_{t-n}(10), \qquad C_{t,n+2}^{(n)} := C_{t,n}^{(n-1)}\, D I_{t-n}(01),$$

$$C_{t,(n+1,n+2)}^{(n)} := C_{t,n}^{(n-1)}\, D^2 I_{t-n}(11) + C_{t,(n,n+1)}^{(n-1)}\, D^* I_{t-n}(11), \qquad D^* I_t(11) := I_t(11) - I_t(10).$$

Computing the probabilities $\mathbb{P}(X_t = i, X_{t-1} = j)$, $i, j = 0, 1$ (from the same methodology and under causality) is straightforward, while the condition for causality now requires $\big[\big(1 + \beta^* \sum_{m=0}^{1} \sum_{m_1=0}^{m} 1\big)^2 - 1\big] < 1$, i.e., $(1 + 3\beta^*)^2 - 1 < 1$ or $\beta^* < \frac{\sqrt{2} - 1}{3} \approx 0.138$, where

$$\beta^* := \max\Big\{\pi,\ \max\big\{E|D I_t(10)|,\ E|D I_t(01)|\big\},\ E|D^2 I_t(11)|\Big\}:$$

so it might be that it is violated (the search is no different than for the TARMA above).

According to Table 1, it is indeed remarkable how the performances for the two models almost coincide. That is an encouraging sign for the TARMA, as it tempts the researcher to dare to include moving-average parts and benefit from the rare flexibility, if they are willing to take on the extra computational burden of the recursions: exact likelihoods can be computed as functions of the parameters and, even for small sample sizes, the estimators seem to perform quite satisfactorily. It is highlighted that the precision here is as returned by the code, with no formal justification attached to it (for a smaller $R = 100$ repetitions, the differences to the table were still minimal). From Table 1 it looks like there is a better bias (and MSE) performance of the $\pi$ estimator over the other three, so this is investigated further.

The Table 2 picture still favors the $\pi$ estimator with a low absolute bias: the two $\pi(0|1)$, $\pi(1|1)$ estimators score the highest bias, naturally resulting from their real values being set quite high. Similarly, for both numerical cases the $\pi(1|0)$ estimator, with the lowest real value, escapes the high bias values, without outperforming $\pi$ in the same department though. The conclusions for the variance can be reversed. At this stage, it is highlighted that the easily obtainable (conditional

Table 1. Approximate (from $R = 1000$ replications) bias, variance and mean squared error of the maximum likelihood estimators based on $T$ consecutive observations from the TARMA(1, 1) and TAR(2) models ($k = 1$, Bernoulli variables) with real values $\pi = 0.165$, $\pi(1|0) = 0.152$, $\pi(0|1) = 0.125$ and $\pi(1|1) = 0.108$ (under interindependence).

Table 2. Approximate (from $R = 10000$ replications) bias, variance and mean squared error of the maximum likelihood estimators based on $T = 6$ consecutive observations from the TARMA(1, 1) model ($k = 1$, Bernoulli variables) with real values, for "Case 1.1", $\pi = 0.105$, $\pi(1|0) = 0.081$, $\pi(0|1) = 0.148$, $\pi(1|1) = 0.152$, or, for "Case 1.2", $\pi = 0.165$, $\pi(1|0) = 0.081$, $\pi(0|1) = 0.152$ and $\pi(1|1) = 0.148$ (both under interindependence).

likelihood) asymptotic result (for the estimators $\hat{\pi}(1), \hat{\pi}$ of $\pi(1) := \mathbb{P}(X_t = 1 \,|\, X_{t-1} = 1)$, $\pi := \mathbb{P}(X_t = 1 \,|\, X_{t-1} = 0)$, respectively) regarding the $\{X_t\} \sim \text{TAR}(1)$ model, i.e.,

$$T^{1/2} \begin{pmatrix} \hat{\pi}(1) - \pi(1) \\ \hat{\pi} - \pi \end{pmatrix} \xrightarrow{D} \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} \pi(1)\big(1 - \pi(1)\big)/E\{X_t\} & 0 \\ 0 & \pi(1 - \pi)/E\{1 - X_t\} \end{pmatrix} \right)$$

has also been supported by simulation indications (for sample sizes $T = 10, 20, 30$ and $R = 100$), but those will not be presented here; a sketch of such a check follows.
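
That TAR(1) check is easy to reproduce; a hedged Monte Carlo sketch (with hypothetical parameter values, and the conditional maximum likelihood estimates computed as transition frequencies) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_tar1(T, pi, pi1):
    """Binary TAR(1): P(X_t=1 | X_{t-1}=0) = pi, P(X_t=1 | X_{t-1}=1) = pi1."""
    x = np.empty(T, dtype=int)
    x[0] = int(rng.random() < pi / (1.0 + pi - pi1))     # stationary start
    for t in range(1, T):
        x[t] = int(rng.random() < (pi1 if x[t - 1] else pi))
    return x

def cond_mle(x):
    """Conditional MLEs: transition frequencies out of states 1 and 0."""
    n1 = x[:-1].sum()
    n0 = len(x) - 1 - n1
    p1_hat = (x[1:] * x[:-1]).sum() / n1                 # estimates P(X_t=1 | X_{t-1}=1)
    p_hat = (x[1:] * (1 - x[:-1])).sum() / n0            # estimates P(X_t=1 | X_{t-1}=0)
    return p1_hat, p_hat

pi, pi1, T, R = 0.3, 0.6, 500, 2000                      # hypothetical values
est = np.array([cond_mle(sim_tar1(T, pi, pi1)) for _ in range(R)])
# T * (sample variances) should approach pi1(1-pi1)/E{X_t} and pi(1-pi)/E{1-X_t}.
print(T * est.var(axis=0))
```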

The investigation is not over yet, as it is desirable to smooth out the differences in the bias of the estimators for the TARMA(1, 1) model, and the solution to that could be an "agreement" between the different $I(\cdot|\cdot)$ variables (at the same time $t$) via the presence of interdependence. Indeed, according to Table 3 and for the single small sample size ($T = 6$), "Case 2.1" exhibits low biases in all but the $-0.11726$ approximation from the 1000 estimates of $\pi_{|1}(0|1)$; similarly, for "Case 2.2" and excluding the highest (absolute) bias for $\pi$ (its true value has been set high), almost all other estimates are approximated to be near their real value. The variance results for both cases are very impressive too. Note that the conditions for causality-invertibility might not have been followed faithfully in Table 3 (hence the real value $\pi = 0.25$ in "Case 2.2"), and the "agreement" search has been set to $\geq 0.85$ for $\pi_{|1}(\cdot|\cdot)$, or $\leq 0.15$ for $\pi_{|0}(\cdot|\cdot)$ (together with a search $\pi \leq 0.1667$ that justifies the $-0.11655$ bias in "Case 2.2").

Again, it is noted that the results obtained from fewer simulations (Table 2: $R = 100, 1000$; Table 3: $R = 100$) are hardly any different. The simulation indications of this section favor the addition of moving-average parts for the modelling of strict stationarity, if one is willing to be slightly inconvenienced by a more sophisticated code for the likelihood computation. There

Table 3. Approximate (from $R = 1000$ replications) bias, variance and mean squared error of the maximum likelihood estimators based on $T = 6$ consecutive observations from the TARMA(1, 1) model ($k = 1$, Bernoulli variables) with real values, for "Case 2.1", $\pi = 0.165$, $\pi_{|1}(1|0) = 0.95$, $\pi_{|0}(1|0) = 0.03$, $\pi_{|1}(0|1) = 0.98$, $\pi_{|0}(0|1) = 0.05$, $\pi_{|1}(1|1) = 0.966$, $\pi_{|0}(1|1) = 0.044$, or, for "Case 2.2", $\pi = 0.25$, $\pi_{|1}(1|0) = 0.95$, $\pi_{|0}(1|0) = 0.03$, $\pi_{|1}(0|1) = 0.98$, $\pi_{|0}(0|1) = 0.05$ and $\pi_{|1}(1|1) = 0.966$, $\pi_{|0}(1|1) = 0.044$ (the interdependence is implied for both).

have been no signs of distinction between the auto-regressive and the moving-average estimators' merits: not only does this strengthen the views of Sections 4.1 and 4.2, but it also brings to mind the classic Gaussian ARMA gem of [8], that the moving-average estimation transforms in theory to an auto-regression-like situation.

6. Conclusions and Extending to the χ² Test for Stationarity

Straight from (23), Theorem 4.3 and Lemmas 4.4, 4.5, there is the important derivation

$$T^{1/2} (\hat{\pi} - \pi_0) \xrightarrow{D} \mathcal{N}\left( \mathbf{0},\ \mathrm{Var}\Big( \frac{1}{p_t(\pi_0)}\, \delta_t \Big)^{-1} \right) \quad \text{as } T \to \infty \tag{30}$$

for the maximum likelihood estimators of the parameters of a process that owes its appearance to a causal and invertible TARMA model. The normal distribution convergence (30) implies the chi-square distribution convergence

$$T\, (\hat{\pi} - \pi_0)^\tau\, \mathrm{Var}\Big( \frac{1}{p_t(\pi_0)}\, \delta_t \Big)\, (\hat{\pi} - \pi_0) \xrightarrow{D} \chi^2_{df}, \tag{31}$$

where $df$ equals the size of the parameter vector.
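
For a small illustration of how (31) would be used in practice, assuming an approximation $V$ of the variance matrix were available (its estimation is discussed below), a Wald-type statistic and $p$-value could be computed as in the following sketch; the routine is this sketch's own, not the paper's.

```python
import numpy as np
from scipy import stats

def tarma_chi2_test(pi_hat, pi_0, V, T):
    """Statistic (31): T (pi_hat - pi_0)' V (pi_hat - pi_0), referred to a
    chi-square distribution with df = number of parameters, where V
    approximates Var(p_t(pi_0)^{-1} delta_t)."""
    d = np.asarray(pi_hat, dtype=float) - np.asarray(pi_0, dtype=float)
    stat = float(T * d @ V @ d)
    pval = float(stats.chi2.sf(stat, df=d.size))
    return stat, pval
```

A large statistic (small $p$-value) rejects the hypothesized stationary hyper-model, exactly as described in the closing paragraphs.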

The asymptotic result (31) can be a real asset when testing whether a sample series has been generated by a stationary ($\mathbb{Z}$-indexed) process: the null hypothesis will be of the form

H0: $\{X_t\}$ is stationary with "distribution" ...,

where "distribution" is used in the wide sense of a hyper-model that might specify as much information as is allowed to the researcher (for example, marginal distribution, pairwise distribution, ..., or marginal and conditional distributions, ..., with or without knowledge of parameters, etc.). Under H0, a proper identification of $k$ (this might not be necessary if the variables are discrete, with "Bernoulli" being the best of cases), as well as of $p$ and $q$ (some faint suggestions for setting $p = 0$ or $q = 0$ might be found in [1]), must take place to proceed. Identification issues must still be resolved in the case that the researcher is not testing an assumption, but opts for the alternative task of point and interval estimation. For either inference route taken, further research is mandatory in order to approximate the variance matrix $\mathrm{Var}(p_t^{-1}\delta_t)$. The reader might wish to look at the special case of a causal TAR model which, due to its simplicity, is believed here to deserve support as the most appropriate test for the non-parametric time series stationarity.

In a more structured context, an equivalent to the [14] ARMA methodology of identification, estimation, validation and forecasting can now become reality for strictly stationary processes: any distribution (heavy-tailed, with extreme values, asymmetric or with any level of skewness) can be tamed by a proper categorization, and then $p$ and $q$ need to be selected as well. This paper has contributed the complete guide for the second step with the best of all methods of estimation, by putting on the table all the necessary properties of the TARMA maximum likelihood estimators.

In order to use the TARMA equation, the sample series needs to have been produced by a stationary process. Any sign of persistent tendency, such as trends, seasonality or cycles, forbids the use of the TARMA: joint distributions (of equal "windows") must look alike in time. For second-order stationarity, [15] give answers on how the series can be reduced to an ARMA; or there is a whole philosophy of modelling those tendencies directly. For strict stationarity, one can still resort to (31), testing and computing whether the sample series is close enough to a hypothesis of hyper-dependence: a large value of the statistic, as compared to a chi-square quantile, will suggest that the hypothesis collapses. Otherwise, there are parametric/theoretical approaches that work under specific distributional assumptions.

To wrap it up, it is strongly believed that a precious link has been solidified between the "multiplicative linear" (as called in [1]), strictly stationary, time series and the non-parametric inferential statistics. The well-known $\chi^2$ test for the distribution of a variable that has generated a random sample can now be extended to the $\chi^2$ test for the stationarity-principled hyper-model dominating a $\mathbb{Z}$-indexed process that has generated a sample series: as in the former case, when extra (beyond the prespecified) parameters need to be estimated, it might be worth investigating the reduction of the degrees of freedom in the statistic's $\chi^2$ distribution. Regardless of various such minor issues that might be studied further, the main achievement of this paper is that it has dealt quite satisfactorily with a complex problem: the inference for any stationary time series that can be clothed by a causal and invertible TARMA equation.

Acknowledgments

This is for the author’s beloved father (deceased).

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Dimitriou-Fakalou, C. (2019) The Table Auto-Regressive Moving-Average Model for (Categorical) Stationary Series: Statistical Properties (Causality; from the All Random to the Conditional Random). Journal of Nonparametric Statistics, 31, 31-63.
https://doi.org/10.1080/10485252.2018.1527912
[2] Jacobs, P.A. and Lewis, P.A.W. (1978) Discrete Time Series Generated by Mixtures II: Asymptotic Properties. Journal of the Royal Statistical Society: Series B, 40, 222-228.
https://doi.org/10.1111/j.2517-6161.1978.tb01667.x
[3] Möller, T.A. and Weiß, C.H. (2020) Generalized Discrete Autoregressive Moving-Average Models. Applied Stochastic Models in Business and Industry, 36, 641-659.
https://doi.org/10.1002/asmb.2520
[4] Lomnicki, Z.A. and Zaremba, S.K. (1955) Some Applications of Zero-One Processes. Journal of the Royal Statistical Society: Series B, 17, 243-255.
https://doi.org/10.1111/j.2517-6161.1955.tb00198.x
[5] Joe, H. (1996) Time Series Models with Univariate Margins in the Convolution-Closed Infinitely Divisible Class. Journal of Applied Probability, 33, 664-677.
https://doi.org/10.1017/S0021900200100105
[6] Kedem, B. (1980) Estimation of the Parameters in Stationary Autoregressive Processes after Hard Limiting. Journal of the American Statistical Association, 75, 146-153.
https://doi.org/10.1080/01621459.1980.10477445
[7] Azzalini, A. (1983) Maximum Likelihood Estimation of Order m for Stationary Stochastic Processes. Biometrika, 70, 381-387.
https://doi.org/10.1093/biomet/70.2.381
[8] Hannan, E.J. (1973) The Asymptotic Theory of Linear Time Series Models. Journal of Applied Probability, 10, 130-145.
https://doi.org/10.1017/S0021900200042145
[9] Cui, Y. and Lund, R.B. (2009) A New Look at Time Series of Counts. Biometrika, 96, 781-792.
https://doi.org/10.1093/biomet/asp057
[10] Roitershtein, A. and Zhong, Z. (2013) On Random Coefficient INAR(1) Processes. Science China Mathematics, 56, 177-200.
https://doi.org/10.1007/s11425-012-4547-z
[11] Davis, R.A., Fokianos, K., Holan, S.H., Joe, H., Livsey, J., Lund, R., Pipiras, V. and Ravishanker, N. (2021) Count Time Series: A Methodological Review. Journal of the American Statistical Association, 116, 1533-1547.
https://doi.org/10.1080/01621459.2021.1904957
[12] Anderson, T.W. and Goodman, L.A. (1957) Statistical Inference about Markov Chains. The Annals of Mathematical Statistics, 28, 89-110.
https://doi.org/10.1214/aoms/1177707039
[13] Klotz, J. (1973) Statistical Inference in Bernoulli Trials with Dependence. The Annals of Statistics, 1, 373-379.
https://doi.org/10.1214/aos/1176342377
[14] Box, G.E.P. and Jenkins, G.M. (1970) Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco.
[15] Brockwell, P.J. and Davis, R.A. (1991) Time Series: Theory and Methods. 2nd Edition, Springer-Verlag, New York.
https://doi.org/10.1007/978-1-4419-0320-4
