Certain locally optimal tests for deterministic components in vector time series have associated sampling distributions determined by a linear combination of Beta variates. Such distributions are nonstandard and must be tabulated by Monte Carlo simulation. In this paper, we provide closed-form expressions for the mean and variance of several multivariate test statistics, moments that can be used to approximate unknown distributions. In particular, we find that the two-moment Inverse Gaussian approximation provides a simple and fast method to compute accurate quantiles and p-values in small and asymptotic samples. To illustrate the scope of this approximation, we review some standard tests for deterministic trends and/or seasonal patterns in VARIMA and structural time series models.

A wide class of test statistics for detecting the presence of deterministic components in univariate linear time series models can be derived following the [

Multivariate versions of these tests have been only derived in the framework of structural time series models by [

In this paper, we use the results of [

The paper is organized as follows. In Section 2, we summarize the main results of the [

Consider the multivariate linear regression model

Y = XP + E,  vec(E) ~ N(0, S ⊗ W(d)), (1)

where Y and E are T × m random matrices, X is a T × k fixed design matrix, P is a k × m matrix of parameters, W(d) and S are T × T and m × m positive definite matrices, respectively, and d is the parameter of interest determining whether or not the columns of E are i.i.d. T-dimensional errors, i.e., W(d_{0}) = I_{T}. [ showed that the locally best invariant (LBI) test statistic for testing H_{0}: d = d_{0} against the one-sided alternative H_{1}: d > d_{0} has the following general expression

L = tr[Ê′KÊ(Ê′Ê)^{–1}], (2)

where tr is the trace operator, Ê = MY is the residual matrix in the ordinary least squares regression of Y on X, M = I_{T} – X(X′X)^{–1}X′, and K is the first derivative dW(d)/dd evaluated at d = d_{0}. Invariance is defined against the group of transformations Y → YP + XA for an arbitrary k × m matrix A and a positive definite m × m matrix P. Thus, without loss of generality, it can be assumed that S = I_{m}. From [, the locally best invariant unbiased (LBIU) test statistic for testing H_{0}: d = d_{0} against the two-sided alternative H_{1}: d ≠ d_{0} is given by (2) but with K being the second derivative d^{2}W(d)/dd^{2} evaluated at d = d_{0}.

The null distribution of L can be characterized by rewriting it as

L = Σ_{t=1}^{T–k} λ_{t}B_{t}, (3)

where λ_{t} are the non-null eigenvalues of the product matrix MK, e_{t} ~ N(0, I_{m}), B_{t} ~ Beta(m/2, (T – k – m)/2) and B_{1} + … + B_{T–k} = m; see, e.g., [. The mean and variance of L are then given by

E(L) = m tr(MK)/(T – k)

and

Var(L) = c{tr[(MK)^{2}]/(T – k) – [tr(MK)/(T – k)]^{2}},

with c = 2m(T – k – m)/[(T – k – 1)(T – k + 2)].

In the next section, we give formulae to compute tr(MK) and tr[(MK)^{2}] for some useful test statistics.
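As a computational companion to the moment expressions above, the sketch below evaluates E(L) and Var(L) directly from the non-null eigenvalues of MK. The function names (`moments_L`, `rw_eigenvalues`) are illustrative, not from the paper; the eigenvalues λ_{t} = [4sin^{2}(tπ/2n)]^{–1} used in the helper are those given later for the seasonal random-walk test.

```python
import math

def moments_L(eigvals, m, T, k):
    """Mean and variance of L = sum_t lambda_t * B_t, where eigvals are
    the T - k non-null eigenvalues of MK and B_t ~ Beta(m/2, (T-k-m)/2)."""
    q = T - k                          # number of non-null eigenvalues
    assert len(eigvals) == q
    s1 = sum(eigvals)                  # tr(MK)
    s2 = sum(l * l for l in eigvals)   # tr((MK)^2)
    c = 2 * m * (q - m) / ((q - 1) * (q + 2))
    mean = m * s1 / q
    var = c * (s2 / q - (s1 / q) ** 2)
    return mean, var

def rw_eigenvalues(n, k):
    """Eigenvalues for the seasonal random-walk test: lambda_t =
    [4 sin^2(t*pi/(2n))]^(-1), t = 1, ..., n-1, each with multiplicity k."""
    lams = [1.0 / (4 * math.sin(t * math.pi / (2 * n)) ** 2)
            for t in range(1, n)]
    return [l for l in lams for _ in range(k)]
```

For example, with n = 3, k = 1, m = 1 the eigenvalues are {1, 1/3}, giving E(L) = 2/3 and Var(L) = 1/18.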

[

y_{t} = a_{t} + u_{t},  a_{t} = a_{t–k} + v_{t},  t = 1, …, T, (4)

where the vector time series y_{t} = (y_{1t}, …, y_{mt})¢ is decomposed into the sum of a vector seasonal random walk a_{t} = (a_{1t}, …, a_{mt})¢ plus a vector Gaussian white noise u_{t} = (u_{1t}, …, u_{mt})¢ ~ N(0, S), the vector Gaussian white noise v_{t} = (v_{1t}, …, v_{mt})¢ ~ N(0, rS) is assumed to be independent of u_{t}, and the parameter r > 0 quantifies the degree of stochasticity of a_{t}. Without loss of generality we assume that the seasonal period k is even and that the dataset is balanced, T = nk.

Defining the T × m matrices Y = [y_{1}, …, y_{T}]′, A = [a_{1}, …, a_{T}]′, U = [u_{1}, …, u_{T}]′ and V = [v_{1}, …, v_{T}]′, (4) can be written in matrix form as

Y = (i_{n} ⊗ I_{k})A_{0} + (C_{n} ⊗ I_{k})V + U, (5)

where A_{0} = [a_{–k+1}, …, a_{0}]′ is a k × m matrix of initial conditions, ⊗ denotes the Kronecker or tensor product, and D_{n} is an n × n lower bidiagonal matrix with 1s on the main diagonal and –1s on the first sub-diagonal, which can be horizontally partitioned as D_{n} = [d_{n}|Ñ′_{n}]′, where d_{n} = (1, 0, …, 0)′ and Ñ_{n} is the (n – 1) × n first-order differencing matrix. If A_{0} is assumed to be fixed, it follows that (4) is a special case of (1) with

X = i_{n} ⊗ I_{k},  P = A_{0},  W(r) = I_{T} + r(C_{n}C′_{n} ⊗ I_{k}),

where i_{n} = C_{n}d_{n} is the n × 1 vector of ones, C_{n} = D_{n}^{–1} is the n × n lower triangular matrix of ones, and X is a T × k matrix of seasonal dummy variables.
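A minimal simulation sketch of model (4) may help fix ideas. The function name `simulate_seasonal_rw` is illustrative, and for simplicity it assumes S = I_{m} and zero initial conditions A_{0} = 0, which is not a restriction made by the paper.

```python
import numpy as np

def simulate_seasonal_rw(n, k, m, rho, seed=0):
    """Simulate y_t = a_t + u_t with a_t = a_{t-k} + v_t (model (4)),
    u_t ~ N(0, I_m), v_t ~ N(0, rho * I_m), zero initial conditions."""
    rng = np.random.default_rng(seed)
    T = n * k
    a = np.zeros((T + k, m))               # rows -k+1, ..., 0 hold A_0 = 0
    v = np.sqrt(rho) * rng.standard_normal((T, m))
    u = rng.standard_normal((T, m))
    for t in range(T):
        a[t + k] = a[t] + v[t]             # seasonal random walk recursion
    return a[k:] + u                       # Y, a T x m data matrix

# balanced dataset with n = 20 'years', seasonal period k = 4, m = 2 series
Y = simulate_seasonal_rw(n=20, k=4, m=2, rho=0.5)
```

Setting rho = 0 yields the purely deterministic-seasonality null, under which Y is white noise around the (here zero) seasonal means.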

The LBI test statistic for testing the null hypothesis of deterministic seasonality (H_{0}: r = 0) against the alternative of seasonal random walk (H_{1}: r > 0), denoted by RW_{m,k,n}, is given by (2) with

K = C_{n}C′_{n} ⊗ I_{k}

and Ê being the residual matrix in the multivariate regression of Y on the full set of k seasonal dummies. To compute the first two moments of RW_{m,k,n}, we find that the mean and mean-square of the eigenvalues of MK are given by

tr(MK)/(T – k) = (n + 1)/6

and

tr[(MK)^{2}]/(T – k) = (n + 1)(2n^{2} + 7)/180,

which suggests correcting the RW_{m,k,n} test statistic by a factor depending on the sample size so that it converges to a non-degenerate limiting distribution. Some candidates are n, mn or m(n – k). It should be noted that RW_{m,k,n}/mn has asymptotic mean 1/6 and variance 1/(45mk).
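The eigenvalue moments just discussed can be checked numerically against the trigonometric eigenvalues λ_{t} = [4sin^{2}(tπ/2n)]^{–1} reported later in the paper. The closed forms (n + 1)/6 and (n + 1)(2n^{2} + 7)/180 below follow from standard cosecant power-sum identities; the function names are illustrative.

```python
import math

def eig_moments(n):
    """Mean and mean-square of the non-null eigenvalues of MK for the RW
    test, computed by brute force; lambda_t = [4 sin^2(t*pi/(2n))]^(-1),
    t = 1, ..., n - 1 (the multiplicity k cancels in the averages)."""
    lams = [1.0 / (4 * math.sin(t * math.pi / (2 * n)) ** 2)
            for t in range(1, n)]
    mean = sum(lams) / (n - 1)
    meansq = sum(l * l for l in lams) / (n - 1)
    return mean, meansq

def eig_moments_closed(n):
    """Closed-form counterparts via sum csc^2 = 2(n^2-1)/3 and
    sum csc^4 = 8(n^2-1)(2n^2+7)/90 over t*pi/(2n), t = 1, ..., n-1."""
    return (n + 1) / 6, (n + 1) * (2 * n ** 2 + 7) / 180
```

Dividing the mean by n confirms the asymptotic mean 1/6 of RW_{m,k,n}/mn.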

[ obtained a relevant modification of the previous test by adding a seasonal drift to the random walk a_{t} in (4)

y_{t} = a_{t} + u_{t},  a_{t} = a_{t–k} + β_{t} + v_{t},  β_{t} = β_{t–k}, (7)

whose matrix form is given by

Y = (i_{n} ⊗ I_{k})A_{0} + (t_{n} ⊗ I_{k})B_{0} + (C_{n} ⊗ I_{k})V + U,

where B_{0} = [β_{–k+1}, …, β_{0}]′ is a k × m fixed matrix of initial conditions for β_{t}. If A_{0} is fixed, (7) is a special case of (1) with X = [i_{n} ⊗ I_{k}, t_{n} ⊗ I_{k}], P = [A_{0}, B_{0}] and W(r) = I_{T} + r(C_{n}C′_{n} ⊗ I_{k}), where t_{n} = (1, 2, …, n)′. It is now clear that the inclusion of the vector seasonal drift affects only the mean vector of the sampling distribution of Y, but not its covariance matrix. Therefore, the LBI statistic for testing the null hypothesis of a vector deterministic seasonal linear trend (H_{0}: r = 0) against a vector seasonal drifted random walk (H_{1}: r > 0) in (7), say DRW_{m,k,n}, is computed as RW_{m,k,n}, with Ê now being the residual matrix in the multivariate regression of Y on k seasonal dummies and k seasonal linear trends. We find for DRW_{m,k,n} that the mean and mean-square of the eigenvalues of MK are given by

and

and so DRW_{m,k,n}/mn has asymptotic mean 1/15 and variance 11/(6300mk).

We also obtain another relevant modification of the RW_{m,k,n} test statistic by including the time index t as a regressor in (4)

y_{t} = b_{0}t + a_{t} + u_{t},  a_{t} = a_{t–k} + v_{t}, (8)

or in matrix form,

Y = (i_{n} ⊗ I_{k})A_{0} + t_{T}b_{0} + (C_{n} ⊗ I_{k})V + U,

which is a special case of (1) with X = [i_{n} ⊗ I_{k}, t_{T}], P = [A_{0}, b_{0}] and W(r) as in (7). By the same token, the LBI statistic for testing H_{0}: r = 0 against H_{1}: r > 0 in (8), say TRW_{m,k,n}, is computed as RW_{m,k,n}, with Ê now being the residual matrix in the multivariate regression of Y on k seasonal dummies and a regular linear trend. We find for TRW_{m,k,n} that the mean and mean-square of the eigenvalues of MK are given by

and

It should be noted that DRW_{1,1,n} = TRW_{1,1,n} is the [ test statistic. The asymptotic mean and variance of DRW_{1,1,n}/n are 1/15 and 11/6300, which agree with those obtained by [

The multivariate structural model (4) can be written as a vector seasonal IMA(1,1)_{k} process

(1 – B^{k})y_{t} = (1 – ΘB^{k})a_{t}, (9)

where B is the backshift operator such that B^{k}y_{t} = y_{t–k}, a_{t} = (a_{1t}, …, a_{mt})′ is a Gaussian white noise with m × m positive definite covariance matrix W_{a}, and the parameter Θ is restricted to be positive so that ρ = (1 – Θ)^{2}/Θ > 0. Process (9) is said to be invertible when Θ < 1 and strictly non-invertible when Θ = 1. In the latter case, the cancellation of the matrix polynomials on both sides of the equation reveals the presence of deterministic seasonality. Noting that ρ(Θ) = ρ(1/Θ), the one-sided testing problem H_{0}: r = 0 versus H_{1}: r > 0 is equivalent to the two-sided one H_{0}: Θ = 1 versus H_{1}: Θ ≠ 1. Hence, the LBI test statistic RW_{m,k,n} for a null variance ratio in (4) is the LBIU test statistic for strict non-invertibility in (9). Note that RW_{1,k,n} is the [
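The equivalence between the one-sided and two-sided problems rests on the symmetry ρ(Θ) = ρ(1/Θ) of the variance-ratio map, which is easy to verify directly (the function name is illustrative):

```python
def rho(theta):
    """Variance ratio implied by the MA parameter of the reduced form (9):
    rho = (1 - theta)**2 / theta, so rho = 0 iff theta = 1, and the map
    is symmetric under theta -> 1/theta."""
    return (1.0 - theta) ** 2 / theta
```

Thus any Θ ≠ 1 and its reciprocal are observationally equivalent in terms of ρ, so a one-sided alternative in ρ corresponds to a two-sided alternative in Θ.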

Analogously, it can be proved that DRW_{m,k,n} and TRW_{m,k,n} are the LBIU test statistics of H_{0}: Θ = 1 versus H_{1}: Θ ≠ 1 in the reduced form of (7),

(1 – B^{k})y_{t} = β_{t} + (1 – ΘB^{k})a_{t},

and of (8),

(1 – B^{k})y_{t} = kb_{0} + (1 – ΘB^{k})a_{t},

respectively. Note that DRW_{1,k,n} is closely related to the [, and TRW_{1,k,n} is the [ test in the seasonal IMA(1,1)_{k} model.

[

where x_{jt} (j = 1, …, k) are a full set of seasonal dummy variables, β_{jt} is either a time-varying parameter if d_{j} = 1 or a nuisance constant parameter if d_{j} = 0, u_{t} ~ N(0, S) and v_{jt} ~ N(0, (r/k)S) are mutually and serially uncorrelated vector errors, and r is here divided by k for comparison purposes given that (10) reduces to (4) when d_{1} + … + d_{k} = k. Assuming that the initial conditions β_{1,0}, …, β_{k,0} are fixed, (10) is a special case of (1) with X = [x_{1}, …, x_{k}]′, P = [β_{1,0}, …, β_{k,0}]′ and W(r′) = I_{T} + r′(d_{1}A_{1} + … + d_{k}A_{k}), where x_{j} = (x_{j1}, …, x_{jT})′, A_{j} = (x_{j}x′_{j}) ∘ (C_{T}C′_{T}), and ∘ denotes the Hadamard product. The LBI test statistic for testing the null hypothesis of deterministic seasonality (H_{0}: d_{1} + … + d_{k} = k) against the alternative hypothesis of mixed deterministic-stochastic seasonality (H_{1}: d_{1} + … + d_{k} = r < k), say SD_{m,k,n}(r), is given by (2) with K = (d_{1}A_{1} + … + d_{k}A_{k})/k, which coincides with RW_{m,k,n} when r = k. Noting that MA_{j}MA_{i} = 0 for j ≠ i, we find that the eigenvalues of MK have mean and mean-square given by

and

Analogously, when the explanatory variables in (10) are trigonometric seasonal variables (x_{1t} = 1, x_{jt} = cos(jtπ/k) for j even, and x_{jt} = sin[(j – 1)tπ/k] for j odd and j > 1), it is convenient to assume that v_{jt} ~ N(0, r_{j}S), where r_{j} = a_{j}r/k^{2} with a_{j} = 1 (j = 1, k) and a_{j} = 2 (j = 2, …, k – 1). Now, as before, (10) reduces to (4) when r = k. Here, we can focus our attention on testing the deterministic or stochastic nature of the local level β_{1t}, the (j/2)-th harmonic β_{jt}cos(πjt/k) + β_{j+1,t}sin(πjt/k) (j = 2, 4, …, k), or any combination of these k/2 harmonics. Taking the (j/2)-th harmonic as an illustration, the LBI test statistic for testing the null hypothesis of deterministic seasonality (H_{0}: d_{1} + … + d_{k} = k) against the alternative of mixed deterministic-stochastic seasonality (H_{1}: d_{1} + … + d_{k} = 2) is given by (2) with K = a_{j}(A_{j} + A_{j+1})/k^{2} with A_{j} as defined before. We find that the eigenvalues of MK have mean and mean-square given by

and

where b_{j} = 0 (j = 1, k) and b_{j} = 1 (j = 2, …, k – 1). Note that these expressions are also valid to compute the mean and variance of the LBI test statistic for a deterministic level in the presence of deterministic seasonality, H_{0}: r_{1} = 0 versus H_{1}: r_{1} > 0, which is closely related to the KPSS test with seasonal dummies proposed by [. The statistic [_{1,k,n} for the dual testing problem of deterministic seasonality in the presence of a deterministic level was generalized to the multivariate case by [, with K = (a_{2}A_{2} + … + a_{k}A_{k})/k^{2}. We find that the eigenvalues of MK have mean and mean-square given by

and

Following [, the asymptotic null distribution of RW_{m,k,n}/mn under H_{0}: r = 0 is

RW_{m,k,n}/mn →_{d} (1/mk) Σ_{j=1}^{∞} ξ_{j}/(π^{2}j^{2}), (11)

where ξ_{j} ~ i.i.d. χ^{2}(mk). [ showed that RW_{1,1,n}/n follows the same limiting distribution as the Cramér–von Mises goodness-of-fit test statistic. Hence, (11) is the average of mk copies of the Cramér–von Mises distribution, denoted by CvM(mk)/mk. [ suggested matching the first two moments, μ = r/6 and μ^{3}/λ = r/45, and we obtain that the fitted IG(r/6, 45r^{2}/6^{3}) distribution has third and fourth cumulants given by κ_{3} = 8r/900 and κ_{4} = 8r/1350, which seem to be quite close to those of the CvM(r) distribution. The accuracy of the IG approximation is illustrated by comparing it with the exact asymptotic distribution of RW_{1,1,n}/n evaluated by the [

f(x; μ, λ) = [λ/(2πx^{3})]^{1/2} exp[–λ(x – μ)^{2}/(2μ^{2}x)],

with x > 0, μ = 1/6 and λ = 45m/6^{3}. We can see that the IG approximation provides a very good fit on both tails of the distribution. Therefore, accurate asymptotic p-values for RW_{m,k,n}/mn can be computed from the cdf of the IG(μ, λ) distribution.
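Since the Inverse Gaussian cdf has a closed form in terms of the standard normal cdf (the Chhikara–Folks formula), approximate p-values need no simulation. The sketch below is an illustration, not the paper's code; it matches the moments μ = 1/6 and variance 1/(45mk) of RW_{m,k,n}/mn, so λ = μ^{3} · 45mk.

```python
import math

def norm_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """Closed-form cdf of the Inverse Gaussian IG(mu, lam) distribution:
    F(x) = Phi(s(x/mu - 1)) + exp(2*lam/mu) * Phi(-s(x/mu + 1)),
    with s = sqrt(lam/x)."""
    if x <= 0.0:
        return 0.0
    s = math.sqrt(lam / x)
    return (norm_cdf(s * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-s * (x / mu + 1.0)))

def rw_pvalue(stat, m, k):
    """Approximate asymptotic p-value of RW/(m*n), matching mean 1/6
    and variance 1/(45*m*k), i.e. lam = mu**3 * 45 * m * k."""
    mu = 1.0 / 6.0
    lam = mu ** 3 * 45.0 * m * k
    return 1.0 - ig_cdf(stat, mu, lam)
```

Larger values of the corrected statistic thus map to smaller p-values, as expected for a one-sided test against stochastic seasonality.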

The goodness of fit in the asymptotic case leads us to ask whether the approximation is also good in finite samples. To evaluate the exact null distribution of RW_{m,k,n}/mn from (3)-(6), we must determine the eigenvalues of the matrix MK. To this end, it is convenient to note that the projection matrix M can be alternatively written as M = [Ñ′_{n}(Ñ_{n}Ñ′_{n})^{–1}Ñ_{n}] ⊗ I_{k}. Hence, the non-null eigenvalues of MK are those of (Ñ_{n}Ñ′_{n})^{–1} ⊗ I_{k}, i.e., the reciprocals of the eigenvalues of the tridiagonal matrix Ñ_{n}Ñ′_{n}: λ_{t} = [4sin^{2}(tπ/2n)]^{–1} (t = 1, 2, …, n – 1), each one with multiplicity k. In the case m = 1, (3)-(6) can be expressed as a ratio of quadratic forms in normal variables whose distribution was tabulated by [
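The trigonometric form of the eigenvalues can be confirmed numerically: Ñ_{n}Ñ′_{n} is the (n – 1) × (n – 1) tridiagonal matrix with 2 on the diagonal and –1 off it, whose eigenvalues are 4sin^{2}(tπ/2n). A small check (illustrative code, not from the paper):

```python
import numpy as np

def nabla(n):
    """(n-1) x n first-order differencing matrix, rows x_{i+1} - x_i."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

n = 8
G = nabla(n) @ nabla(n).T                  # tridiagonal: 2 diag, -1 off
eigs = np.sort(np.linalg.eigvalsh(G))
closed = np.sort(4.0 * np.sin(np.arange(1, n) * np.pi / (2 * n)) ** 2)
```

Taking reciprocals of `eigs` recovers the λ_{t} entering (3), each repeated k times in the full matrix MK.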

To assess the accuracy of the IG approximation in small samples, we simply compare the approximate p-values with the nominal sizes for n = 10, 20, 30, 50, k = 1, 2, 3, and m = 1, 2, 3, 4, 5. In general, the approximate p-values agree closely with the nominal sizes even in small samples, with mean absolute errors less than 0.003. Such discrepancies do not seem to be relevant in practical applications. Similar results have been found for DRW_{m,k,n} and TRW_{m,k,n}. The results of this simulation study are not presented here due to space restrictions but are available from the authors on request.

We have presented seasonal extensions of the [
