
This paper proposes generalised Wald-type tests for hypotheses of nonlinear restrictions. We circumvent the problem of singularity of the covariance matrix associated with the usual Wald test by proposing a generalised inverse procedure, together with an alternative simple procedure which can be approximated by a suitable chi-square distribution. New threshold values are derived to estimate the rank of the covariance matrix.

Analyses of economic data often entail testing hypotheses that imply complex nonlinear restrictions on subsets of parameters, and it is thus desirable to employ econometric methods that are flexible both in their application and in their implementation for typical data sets. In this paper, we are interested in testing the null hypothesis H0 of the following form

H0 : g(θ) = 0 (1)

against the alternative hypothesis

H1 : g(θ) ≠ 0 (2)

where θ ∈ Θ is the parameter vector; the parameter space Θ is a k-dimensional compact subset of ℝ^k, and g(θ) is a continuously differentiable mapping from ℝ^k to ℝ^l.

In the econometrics literature, Wald tests are commonly used to test this null hypothesis, since the generality of their formulation affords the testing of several interesting economic hypotheses which might present formidable difficulties for other procedures (see, e.g. [

Under regularity conditions, the asymptotic null distribution of the Wald statistic is chi-squared, and this is the distribution one usually uses to carry out hypothesis tests. On the other hand, under nonregular conditions, Wald tests in general fail to have a limiting chi-squared distribution [

Another example is the Granger non-causality test. As shown in [

This paper develops an asymptotic theory for the Wald test in the case where the asymptotic covariance matrix is degenerate. That case occurs when the functional restriction representing the hypothesis of interest fails to satisfy the regularity condition that its Jacobian matrix has full rank at the true parameter values, and/or the asymptotic covariance matrix of the parameter estimator is singular.

[

It is well known that the rank of a matrix is equal to the number of its nonzero eigenvalues. [

In this paper, we investigate the asymptotic distribution of the eigenvalues, and new threshold values are proposed to test whether or not the eigenvalues are significantly different from zero. This makes it possible to determine the rank of the covariance matrix. One important property of the eigenvalues is that they are sufficient statistics, invariant with respect to multiplication of the matrix from the left and right by any nonsingular matrices.

The paper is organised as follows. Section 2 develops the general expression for the test statistics. Section 3 proposes new threshold values for the eigenvalues, and conclusions are given in Section 4.

In this section, we develop general expressions for Wald test statistics for nonlinear restrictions. Let θ̂ be an estimator of θ based on a sample of size n. We make the following assumptions regarding the estimator θ̂.

Assumption (i) θ̂ = θ + O_p(n^{−1/2}),

(ii) √n (θ̂ − θ) →d N_k(0, Ω), where Ω is finite and nonzero but possibly singular,

(iii) a consistent estimator Ω̂ of Ω is available.

The asymptotic normality of θ̂ implies that, under the null hypothesis, √n g(θ̂) is asymptotically normally distributed with a finite asymptotic covariance matrix Σ:

√n g(θ̂) →d N_l(0, Σ) (3)

where Σ = G(θ) Ω G(θ)′ is of dimension (l × l) and G(θ) = ∂g(θ)/∂θ′ is the l × k Jacobian matrix of g = (g_1, ⋯, g_l)′.

To establish the limiting distribution of the standard Wald test, we assume the following two regularity conditions.

C1. The Jacobian matrix G ( θ ) is of full rank for all θ in the parameter space.

C2. The asymptotic covariance matrix Ω is nonsingular; it is symmetric and positive definite.

Let Σ̂ be a consistent estimator of Σ, obtained by replacing G(θ) and Ω by their consistent estimators G(θ̂) and Ω̂ respectively. Hence, under the two regularity conditions C1 and C2, the Wald statistic for testing (1) is given by

W = n g(θ̂)′ Σ̂⁻¹ g(θ̂) (4)

and W is asymptotically distributed as chi-square with l degrees of freedom under H0.
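As an illustration, the statistic in Equation (4) can be computed directly from a parameter estimate, a covariance estimate, and the restriction function with its Jacobian. The following sketch (in Python with NumPy/SciPy) assumes C1 and C2 hold; the restriction g(θ) = θ1 θ2 and the numerical values are hypothetical:

```python
import numpy as np
from scipy import stats

def wald_test(theta_hat, omega_hat, g, G, n):
    """Standard Wald test of H0: g(theta) = 0, Equation (4).

    Assumes C1 (full-rank Jacobian) and C2 (nonsingular Omega),
    so that Sigma = G(theta) Omega G(theta)' is invertible.
    """
    g_val = g(theta_hat)                       # restriction evaluated at theta_hat
    G_val = G(theta_hat)                       # (l x k) Jacobian
    sigma_hat = G_val @ omega_hat @ G_val.T    # consistent estimate of Sigma
    W = n * g_val @ np.linalg.solve(sigma_hat, g_val)
    p_value = stats.chi2.sf(W, df=g_val.size)  # chi-square with l d.o.f. under H0
    return W, p_value

# Hypothetical example: k = 2, single restriction g(theta) = theta1 * theta2
theta_hat = np.array([0.5, 2.0])
omega_hat = np.eye(2)                          # assumed covariance estimate
g = lambda th: np.array([th[0] * th[1]])
G = lambda th: np.array([[th[1], th[0]]])      # Jacobian of g
W, p = wald_test(theta_hat, omega_hat, g, G, n=100)
```

A small p-value leads to rejection of the nonlinear restriction at the usual significance levels.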

Now, when the covariance matrix Σ is singular, i.e. if at least one of the regularity conditions C1 and C2 is not satisfied, the Wald statistic may not have an asymptotic chi-square distribution. The singularity of Σ arises in three possible situations: first, the matrix of first-order partial derivatives of the restrictions, G(θ), has reduced rank; secondly, the asymptotic covariance matrix Ω is degenerate; and finally, a combination of the two: rank(G(θ)) < l and Ω singular.

If the function g(θ) involves products of the elements of θ, then the matrix of first-order partial derivatives G(θ) is likely to have reduced rank over part of the parameter space. Such functions are relatively common in time series analysis. For instance, impulse responses and related quantities of interest in a vector autoregressive (VAR) analysis involve products of the VAR coefficients. Similarly, such functions arise in analyzing multi-step causality in VAR models [

The singularity of the asymptotic covariance matrix of the parameter estimator occurs, for instance, in VAR processes. If the process is nonstationary, i.e. some variables are integrated or cointegrated, the multivariate least squares (LS) estimator of the coefficients has a singular asymptotic distribution. This singularity problem has also been noted and discussed in the context of the long-run impact matrix (see [

In that case, it is usual practice to resort to a generalized inverse procedure when inverting the singular matrix. That is, we have under the null hypothesis,

W⁺ = n g(θ̂)′ Σ̂⁺ g(θ̂) (5)

where Σ̂⁺ is the Moore-Penrose generalized inverse of the matrix Σ̂. Under the additional condition rank(Σ̂) = rank(Σ) = r < l, the Wald statistic W⁺ still has an asymptotic chi-square distribution, with r degrees of freedom (see [

To compute this statistic, the Moore-Penrose generalized inverse can be obtained using the spectral decomposition of Σ̂. Let λ_i be the i-th eigenvalue of Σ and u_i the corresponding i-th (orthonormal) eigenvector, satisfying u_i′ u_j = δ_ij, where δ_ij is the usual Kronecker delta (i, j = 1, ⋯, l). The eigenvalue-eigenvector equation is Σ u_i = λ_i u_i, so that u_i′ Σ u_i = λ_i, i = 1, ⋯, l, and collecting these together we obtain U′ Σ U = Λ, or Σ = U Λ U′, where Λ = diag(λ_1, ⋯, λ_l), with λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_l, is the (l × l) diagonal matrix of ordered eigenvalues and U = [u_1, u_2, ⋯, u_l] is the (l × l) matrix whose columns are the orthonormal eigenvectors of Σ. That is, U′U = I_l and UU′ = I_l, where I_l is the identity matrix of dimension l × l. If λ_i = 0 for i = r + 1, ⋯, l, so that rank(Σ) = r, we can write

Σ = U_1 Λ_1 U_1′,  (l × l),

U = [U_1, U_2],  with U_1 of dimension (l × r) and U_2 of dimension (l × (l − r)),

Λ = [ Λ_1  0 ; 0  0 ],  the block-diagonal matrix with Λ_1 in the upper-left (r × r) block and zeros elsewhere,

where Λ_1 = diag(λ_1, ⋯, λ_r), with λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_r > 0, is the (r × r) diagonal matrix of the ordered positive eigenvalues of Σ, and the columns u_i (i = 1, ⋯, r) of U_1 are the eigenvectors corresponding to the positive eigenvalues.

The Moore-Penrose generalized inverse of Σ is Σ⁺ = U_1 Λ_1⁻¹ U_1′, consistently estimated by Σ̂⁺ = Û_1 Λ̂_1⁻¹ Û_1′, which can be used to construct W⁺. Hence the Wald statistic

W⁺ = n g(θ̂)′ Û_1 Λ̂_1⁻¹ Û_1′ g(θ̂) (6)

is asymptotically chi-squared with r degrees of freedom under H0, where r = rank(Σ).
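A minimal sketch of the generalised statistic of Equation (6), computed via the spectral decomposition of Σ̂ keeping only the r largest eigenvalues; the rank-1 matrix Σ̂ used in the example is hypothetical:

```python
import numpy as np
from scipy import stats

def generalised_wald(g_val, sigma_hat, n, r):
    """Generalised Wald statistic W+ of Equation (6).

    Uses the spectral decomposition Sigma_hat = U Lambda U' and keeps only
    the r largest eigenvalues, so Sigma_hat^+ = U1 Lambda1^{-1} U1'.
    r is the (pre-tested) rank of Sigma.
    """
    eigvals, eigvecs = np.linalg.eigh(sigma_hat)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # re-sort in decreasing order
    lam1 = eigvals[order][:r]                      # Lambda1: r largest eigenvalues
    U1 = eigvecs[:, order][:, :r]                  # corresponding eigenvectors
    W_plus = n * g_val @ U1 @ np.diag(1.0 / lam1) @ U1.T @ g_val
    p_value = stats.chi2.sf(W_plus, df=r)          # chi-square with r d.o.f. under H0
    return W_plus, p_value

# Hypothetical singular Sigma_hat of rank 1 (l = 2): eigenvalues 4 and 0
sigma_hat = np.array([[2.0, 2.0], [2.0, 2.0]])
g_val = np.array([0.1, 0.1])
W_plus, p = generalised_wald(g_val, sigma_hat, n=100, r=1)
```

Note that the chi-square reference distribution now has r rather than l degrees of freedom, in line with the reduced rank of Σ.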

This test statistic has fewer degrees of freedom than the usual Wald test, since r = rank(Σ) ≤ l. Hence it has improved power, in particular when superfluous restrictions are removed. [

However, the problem associated with this test statistic is the determination of the number of nonzero eigenvalues of the matrix Σ, that is, the rank of Σ, since in the construction of the generalised Wald test statistic the rank of the variance matrix estimate should equal the rank of the true variance matrix. In this section, we propose a sequential pretest procedure. We develop test procedures for

H0′ : λ_i = 0, i = r + 1, ⋯, l (7)

against the alternative hypothesis

H1′ : λ_i > 0

where the λ_i are the eigenvalues of the matrix Σ. The null hypothesis H0′ is equivalent to the hypothesis rank(Σ) = r. To decide whether the eigenvalues are nonzero or not, [

Let us appeal to matrix perturbation theory and write

Σ̂ = Σ + εB (8)

where εB, with small ε, is the error matrix Σ̂ − Σ, i.e. the matrix perturbation. Results from matrix perturbation theory indicate how much the eigenvalues and eigenvectors of Σ̂ can differ from those of Σ for small ε. Assume that the estimator Σ̂ of Σ is root-n consistent and has a limiting normal distribution; then the perturbation term εB is of order O_p(n^{−1/2}) and can be seen as a zero-mean Gaussian random matrix [

Let λ̂_1 ≥ λ̂_2 ≥ ⋯ ≥ λ̂_l be the ordered eigenvalues of the perturbed matrix Σ̂ and û_1, û_2, ⋯, û_l the corresponding eigenvectors; the eigenvalue-eigenvector equation

Σ̂ û_i = λ̂_i û_i, i = 1, ⋯, l

can be written as

(Σ + εB) û_i = λ̂_i û_i.

Following the spectral decomposition of Σ , the eigenvalue-eigenvector equation can be equivalently expressed as

(U Λ U′ + εB) û_i = λ̂_i û_i.

Now, premultiplying by U′, we obtain

(Λ + ε U′BU) U′û_i = λ̂_i (U′û_i)

that is, ε U′BU is the perturbation of the diagonal matrix Λ, and λ̂_i is an eigenvalue of Λ + ε U′BU corresponding to the eigenvector U′û_i. The first-order approximation of λ̂_i is given by

λ̂_i = λ_i + ε u_i′ B u_i + O_p(ε²).

A similar result is given by ( [

λ̂_i = λ_i + n^{−1/2} u_i′ B u_i + O_p(n^{−1}). (9)

Hence for a zero eigenvalue of Σ, that is λ_i = 0, we get

λ̂_i = n^{−1/2} u_i′ B u_i + O_p(n^{−1}) (10)

which corresponds to the i-th smallest eigenvalue of the estimated matrix Σ̂.
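The first-order approximation (9) can be checked numerically: perturb a known Σ by n^{−1/2} B and compare the exact eigenvalues of Σ̂ with λ_i + n^{−1/2} u_i′ B u_i. A sketch with a hypothetical diagonal Σ (so that u_i = e_i and u_i′ B u_i is the i-th diagonal element of B) and a random symmetric B:

```python
import numpy as np

# Hypothetical Sigma with known spectrum: eigenvalues 3, 1, 0 and U = I
Sigma = np.diag([3.0, 1.0, 0.0])
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
B = (B + B.T) / 2                         # symmetric perturbation matrix
n = 1_000_000
Sigma_hat = Sigma + B / np.sqrt(n)        # Equation (8) with eps = n^{-1/2}

# Exact eigenvalues of the perturbed matrix, in decreasing order
exact = np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]
# First-order approximation: lambda_i + n^{-1/2} * B_ii
approx = np.sort(np.diag(Sigma) + np.diag(B) / np.sqrt(n))[::-1]
err = np.max(np.abs(exact - approx))      # remainder term, O_p(n^{-1})
```

The maximum discrepancy `err` is of the order 1/n, much smaller than the O_p(n^{−1/2}) first-order term, as (9) predicts.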

The asymptotic distribution of λ̂_i is obtained from the following equation:

√n (λ̂_i − λ_i) = u_i′ B u_i + o_p(1). (11)

Since the elements of the perturbation term B are asymptotically normally distributed with mean zero, u_i′ B u_i, which is a linear combination of the elements of B, is also asymptotically normally distributed with mean zero, E(u_i′ B u_i) = 0 for all i, and finite variance σ_i² = Var(u_i′ B u_i) = (u_i′ ⊗ u_i′) Cov(vec(B)) (u_i ⊗ u_i), where ⊗ denotes the Kronecker product and vec(·) stacks the columns of a matrix. Moreover, √n (λ̂_i − λ_i) has the same asymptotic distribution as u_i′ B u_i, so √n (λ̂_i − λ_i) converges in distribution to a normal variate with mean zero and finite variance σ_i²:

√n (λ̂_i − λ_i) →d N(0, σ_i²). (12)

In particular, under H0′ : λ_i = 0, the corresponding small eigenvalue λ̂_i of Σ̂ is asymptotically normally distributed with mean zero and finite asymptotic variance σ_i²/n.
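The variance expression σ_i² = (u_i′ ⊗ u_i′) Cov(vec(B)) (u_i ⊗ u_i) can be verified by simulation, using the identity u_i′ B u_i = (u_i′ ⊗ u_i′) vec(B). A sketch with a hypothetical covariance matrix for vec(B):

```python
import numpy as np

rng = np.random.default_rng(1)
l = 3
# Hypothetical covariance of vec(B): a random positive definite matrix
A = rng.standard_normal((l * l, l * l))
cov_vecB = A @ A.T
u = np.full(l, 1.0 / np.sqrt(l))          # a fixed unit-length eigenvector u_i

# Theoretical variance: sigma_i^2 = (u' kron u') Cov(vec B) (u kron u)
kron_u = np.kron(u, u)
var_theory = kron_u @ cov_vecB @ kron_u

# Monte Carlo: draw vec(B) ~ N(0, cov_vecB); then u' B u = (u' kron u') vec(B)
L = np.linalg.cholesky(cov_vecB)
samples = (L @ rng.standard_normal((l * l, 200_000))).T @ kron_u
rel_err = abs(samples.var() - var_theory) / var_theory
```

With 200,000 draws the Monte Carlo variance agrees with the theoretical value to within a fraction of a percent.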

In [

Having fully defined the statistical properties of λ̂_i, we shall propose a one-sided test statistic to decide whether an eigenvalue should be declared zero or not.

Let σ̂_i² be a consistent estimator of σ_i². According to the asymptotic distribution of λ̂_i, we reject the null hypothesis H0′ if λ̂_i > z_α σ̂_i/√n, where z_α is the upper α percentile of the standard normal distribution.

We have thus defined an adequate threshold value. To assess the test statistic, approximate p-values are computed with reference to the standard normal distribution.

The test procedure is based on sequential testing of the smallest eigenvalues of Σ̂. If the null hypothesis H0′ is accepted for the smallest l − r eigenvalues and rejected for the r-th largest eigenvalue, then the rank of Σ is found to be r. Then, in the construction of the generalised Wald test statistic W⁺, we keep only the largest r eigenvalues of Σ̂.

More precisely, the rank of Σ is detected sequentially using the pretest procedure. Since the eigenvalues of the estimated Σ̂ are ordered, and tests for individual eigenvalues are considered, it is more appropriate to use an upward testing procedure. Starting with the smallest eigenvalue λ̂_l of Σ̂, we carry out tests with progressively larger eigenvalues until the test rejects the null hypothesis that the eigenvalue considered is zero.
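The upward testing procedure just described can be sketched as follows; the function name `estimate_rank` and its inputs (ordered eigenvalues of Σ̂ and consistent estimates σ̂_i) are hypothetical names for illustration:

```python
import numpy as np
from scipy import stats

def estimate_rank(eigvals_hat, sigma_hats, n, alpha=0.05):
    """Upward sequential pretest for rank(Sigma).

    eigvals_hat : eigenvalues of Sigma_hat sorted in decreasing order
    sigma_hats  : consistent estimates of sigma_i, the asymptotic standard
                  deviation of sqrt(n) * lambda_hat_i under H0' (Equation (12))
    Tests H0': lambda_i = 0 starting from the smallest eigenvalue and moving
    upward; the rank is the index of the first rejection.
    """
    l = len(eigvals_hat)
    z_alpha = stats.norm.ppf(1 - alpha)            # one-sided critical value
    for i in range(l - 1, -1, -1):                 # from smallest to largest
        threshold = z_alpha * sigma_hats[i] / np.sqrt(n)
        if eigvals_hat[i] > threshold:             # reject lambda_i = 0
            return i + 1                           # number of nonzero eigenvalues
    return 0                                       # all eigenvalues declared zero

# Hypothetical example: two clearly nonzero eigenvalues, one near zero
rank = estimate_rank(np.array([5.0, 2.0, 0.001]),
                     sigma_hats=np.array([1.0, 1.0, 1.0]),
                     n=400, alpha=0.05)
```

Once the rank r is estimated this way, it is plugged into the construction of W⁺, which then retains only the r largest eigenvalues of Σ̂.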

In sum, the sequential testing procedure proposed in this paper consists of the following two steps:

Step 1: Determine the rank of Σ by testing H0′ : λ_i = 0 with the above pretest (upward testing) procedure.

Step 2: If Σ is found to be of full rank, test the null hypothesis H0 : g(θ) = 0 with W (Equation (4)).

Otherwise, test the null H0 with W⁺, using the spectral decomposition of Σ̂⁺ (Equation (6)).

In this paper, we have proposed an asymptotic theory for Wald-type tests of nonlinear restrictions. The Wald statistic is known to be asymptotically chi-square distributed under the null hypothesis, provided that the Jacobian of the restriction function describing the null hypothesis has full rank and the asymptotic covariance matrix of the parameter estimator is nonsingular. There are, however, examples of important models and hypotheses in economics which do not satisfy these regularity conditions, and for which the Wald statistic fails to have a limiting chi-squared distribution. We derive a generalised Wald test statistic which guarantees the asymptotic chi-square distribution under nonregular conditions.

The limiting distribution of the Wald statistic for testing nonlinear restrictions depends substantially on whether or not the asymptotic covariance matrix is singular, and the determination of its rank is of great importance in this context. It is well known that the rank of a matrix is equal to the number of its nonzero eigenvalues. Using perturbation theory, we derive the asymptotic distribution of the eigenvalues. We propose two new threshold values for the eigenvalues which make it possible to decide whether the eigenvalues are nonzero or not.

Ratsimalahelo, Z. (2017) Generalised Wald Type Test of Nonlinear Restrictions. Open Access Library Journal, 4: e3923. https://doi.org/10.4236/oalib.1103923