Some Applications of Higher Moments of the Linear Gaussian White Noise Process

The linear Gaussian white noise process is an independent and identically distributed (iid) sequence with zero mean and finite variance, with distribution N(0, σ²). Hence, if X_1, X_2, ..., X_n is a realization of such an iid sequence, this paper studies in detail the covariance structure of X_1^d, X_2^d, ..., X_n^d, d = 1, 2, .... By this study, it is shown that: 1) all powers of a linear Gaussian white noise process are iid but not normally distributed, and 2) the higher moments (variance and kurtosis) of X_t^d, d = 2, 3, ..., can be used to distinguish between the linear Gaussian white noise process and other processes with a similar covariance structure.


Introduction
The objective of estimation procedures is to produce residuals (the estimated noise sequence) with no apparent deviations from stationarity and, in particular, with no dependence among the residuals. If there is no dependence among the residuals, we can regard them as observations of independent random variables; there is no further modeling to be done except to estimate their mean and variance. If there is significant dependence among the residuals, then we need to look for a model for the noise sequence that accounts for the dependence [1].
In this paper, we examine the covariance structure of powers of the noise sequence when the noise sequence is assumed to consist of independent and identically distributed normal (Gaussian) random variates with mean zero and finite variance, and we study the relationship between the residuals and their powers.
The stochastic process X_t, t ∈ T, is said to be strictly stationary if its distribution function is time invariant; that is, the probability measure for the sequence X_t is the same as that for X_{t+k} for all k. A series is said to be weakly or covariance stationary if it satisfies the following three conditions: 1) E(X_t) = μ is constant for all t; 2) Var(X_t) = σ² is constant and finite for all t; 3) Cov(X_t, X_{t+k}) depends only on the lag k.
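The three covariance-stationarity conditions can be checked empirically on a simulated Gaussian white noise series. The sketch below is our own illustration (the helper name `sample_autocovariance` is an assumption, not from the paper): the sample mean should be near zero, the lag-0 autocovariance near σ², and the lag-1 autocovariance near zero.

```python
import random

def sample_autocovariance(x, k):
    """Sample autocovariance at lag k (divisor n, the usual biased estimator)."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n

random.seed(1)
sigma = 2.0
x = [random.gauss(0.0, sigma) for _ in range(100_000)]

mean_x = sum(x) / len(x)               # condition 1: close to 0
gamma0 = sample_autocovariance(x, 0)   # condition 2: close to sigma**2 = 4
gamma1 = sample_autocovariance(x, 1)   # condition 3: close to 0 at lag 1
```

With 100,000 observations, sampling fluctuations are small, so all three quantities sit close to their theoretical values.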
If the process is covariance stationary, all the variances are the same and all the covariances depend only on the difference between t_1 and t_2. The autocovariances are γ(k) = E[(X_t − μ)(X_{t+k} − μ)], k = 0, ±1, ±2, ..., the autocorrelations are ρ(k) = γ(k)/γ(0), and the partial autocorrelation function is denoted φ_kk. For iid data and large n, the sample autocorrelations ρ̂(k) are approximately N(0, 1/n). The most popular test for (1.11) is the [4] portmanteau test, which admits the form
Q(m) = n(n + 2) Σ_{k=1}^{m} ρ̂²(k)/(n − k),
where m is the so-called lag truncation number [5] and is (typically) assumed to be fixed [6]. Under the iid assumption, Q(m) is approximately chi-square distributed with m degrees of freedom. Several values of m are often used, and simulation studies suggest that the choice m ≈ ln(n) provides better power performance [8].
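The Ljung-Box portmanteau statistic is short to implement; the sketch below is a minimal version of it (function names `acf` and `ljung_box` are our own), using the m ≈ ln(n) choice suggested in the text.

```python
import math
import random

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
    return ck / c0

def ljung_box(x, m):
    """Ljung-Box statistic Q(m); approximately chi-square(m) under the iid hypothesis."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, m + 1))

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(500)]
m = round(math.log(len(x)))   # m ≈ ln(500) = 6
q = ljung_box(x, m)           # compare against the chi-square(6) 5% critical value 12.59
```

For iid Gaussian data, Q(m) behaves like a chi-square(m) variate, so values far above the critical value signal dependence.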
Another portmanteau test, formulated by [9], can be used as a further test of the iid hypothesis, since if the data are iid, then the squared data are also iid. It is based on the same statistic as the Ljung-Box test, with the sample autocorrelations of the data replaced by the sample autocorrelations of the squared data. According to [6], the methodology for testing for white noise can be roughly divided into two categories: time domain tests and frequency domain tests. Other time domain tests include the turning point test, the difference-sign test and the rank test [1]. A further time domain approach is to fit an autoregressive model to the data and choose the order which minimizes the AICC statistic; a selected order equal to zero suggests that the data are white noise [1].
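To illustrate why applying the portmanteau statistic to the squared data matters, the sketch below (our own construction, not from the paper; the ARCH(1) parameters are illustrative assumptions) simulates a series whose levels are uncorrelated but whose squares are strongly correlated, so only the squared-data statistic detects the dependence.

```python
import random

def acf(x, k):
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    return (sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n) / c0

def ljung_box(x, m):
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, m + 1))

def mcleod_li(x, m):
    """McLeod-Li-type statistic: Ljung-Box applied to the squared data."""
    return ljung_box([v * v for v in x], m)

# ARCH(1): x_t = z_t * sqrt(0.2 + 0.5 * x_{t-1}^2) -- uncorrelated levels,
# correlated squares (parameters chosen for illustration only).
random.seed(2)
x, prev = [], 0.0
for _ in range(2000):
    prev = random.gauss(0.0, 1.0) * (0.2 + 0.5 * prev * prev) ** 0.5
    x.append(prev)

q_squares = mcleod_li(x, 6)   # far above the chi-square(6) 5% critical value 12.59
```

The squared-data statistic is huge for this series even though the levels pass as white noise, which is exactly the situation the squared-data test is designed to catch.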
Let f(λ) be the normalized spectral density of X_t, t ∈ Z. The normalized spectral density function of the linear Gaussian white noise process is constant, f(λ) = 1/(2π), −π ≤ λ ≤ π, and the hypotheses H_0 and H_1 have equivalent frequency domain expressions in terms of f(λ). In the frequency domain, [10] proposed test statistics based on the famous U_p and T_p processes [6], and a rigorous theoretical treatment of their limiting distributions was provided by [11]. Some contributions to the frequency domain tests can be found in [12] and [13], among others. This study will concentrate on the time domain approach only.
A stochastic process X_t, t ∈ Z, may have the covariance structure of the linear Gaussian white noise process without being Gaussian white noise; processes of this kind are discussed in [16] [17].

Variances of Powers of the Linear Gaussian White Noise Process
Theorem 2.2: Let X_t, t ∈ Z, be a linear Gaussian white noise process with mean zero and variance σ² given by Equation (2.1). Using the normal moments E(X_t^{2k}) = (2k − 1)!! σ^{2k} and E(X_t^{2k+1}) = 0, the variance of X_t^d is
Var(X_t^d) = (2d − 1)!! σ^{2d} for odd d, and
Var(X_t^d) = [(2d − 1)!! − ((d − 1)!!)²] σ^{2d} for even d.
Case I: The standard deviations of X_t^d for various values of d and σ are plotted in Figure 1. From Figure 1, we note that for fixed σ, an increase in d leads to an exponential increase in the standard deviation.
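The variance formula above can be checked numerically. The sketch below (helper names `double_factorial` and `var_power` are our own) computes Var(X^d) exactly from the normal moment formula and confirms it by Monte Carlo for d = 3.

```python
import random

def double_factorial(n):
    """n!! with the convention (-1)!! = 0!! = 1."""
    return 1 if n <= 0 else n * double_factorial(n - 2)

def var_power(d, sigma):
    """Var(X^d) for X ~ N(0, sigma^2): E[X^(2d)] - (E[X^d])^2."""
    ex2d = double_factorial(2 * d - 1) * sigma ** (2 * d)
    exd = 0.0 if d % 2 else double_factorial(d - 1) * sigma ** d
    return ex2d - exd ** 2

# Monte Carlo check for d = 3, sigma = 1: Var(X^3) = 5!! = 15.
random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
cubes = [v ** 3 for v in xs]
mc_mean = sum(cubes) / len(cubes)
mc_var = sum((v - mc_mean) ** 2 for v in cubes) / (len(cubes) - 1)
```

For σ = 1 the exact values are Var(X¹) = 1, Var(X²) = 2, Var(X³) = 15, Var(X⁴) = 96, which shows the explosive growth plotted in Figure 1.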
The specific objective of this paper is to investigate whether powers of X_t, t ∈ Z, are also iid and to determine the distribution of these higher powers. The analytical proofs are provided in Section 2.3.

Covariances of Powers of the Linear Gaussian White Noise Process
Powers of the linear Gaussian white noise process, X_t^d, d = 2, 3, ..., are also white noise processes (iid) but are not normally distributed. Proof: since the X_t are independent and identically distributed, so are the X_t^d. However, the probability distribution function (p.d.f.) of X_t^d, obtained by the change-of-variable technique and by one form of the fundamental theorem of calculus [17], is not that of a normal random variable.
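As a concrete instance of the non-normality claim: for X ~ N(0, 1), the square Y = X² follows the chi-square distribution with one degree of freedom, whose CDF is F(y) = erf(√(y/2)). The sketch below (our own check, not the paper's derivation) compares the empirical CDF of simulated squared data with this formula.

```python
import math
import random

def chi2_1_cdf(y):
    """CDF of the chi-square(1) distribution: F(y) = erf(sqrt(y/2))."""
    return math.erf(math.sqrt(y / 2.0))

random.seed(7)
y = [random.gauss(0.0, 1.0) ** 2 for _ in range(50_000)]

# Empirical CDF of the squared data at y = 1 versus the chi-square(1) CDF.
empirical = sum(1 for v in y if v <= 1.0) / len(y)
theoretical = chi2_1_cdf(1.0)   # = P(|X| <= 1) for standard normal X, about 0.6827
```

The close agreement confirms that the squares are identically distributed with a skewed, distinctly non-normal law.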

Coefficient of Symmetry and Kurtosis for Powers of the Linear Gaussian White Noise Process
Non-normality of higher powers of X_t can also be confirmed by the coefficients of symmetry (skewness) and kurtosis. The kurtosis values for d = 1, 2, 3, 4, 5 and 6 are given in Table 2.
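The kurtosis of X_t^d can be computed exactly from the normal moments E(X^{2k}) = (2k − 1)!! σ^{2k}. The sketch below (helper names are our own) forms the central moments of X^d and returns μ₄/μ₂²; it reproduces the normal value 3 at d = 1 and 15 at d = 2, and shows the rapid growth with d.

```python
def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def raw_moment(d, k, sigma=1.0):
    """E[(X^d)^k] = E[X^(d*k)] for X ~ N(0, sigma^2)."""
    p = d * k
    return 0.0 if p % 2 else double_factorial(p - 1) * sigma ** p

def kurtosis_power(d, sigma=1.0):
    """Kurtosis mu4 / mu2^2 of X^d for X ~ N(0, sigma^2)."""
    m1, m2 = raw_moment(d, 1, sigma), raw_moment(d, 2, sigma)
    m3, m4 = raw_moment(d, 3, sigma), raw_moment(d, 4, sigma)
    mu2 = m2 - m1 ** 2
    mu4 = m4 - 4 * m3 * m1 + 6 * m2 * m1 ** 2 - 3 * m1 ** 4
    return mu4 / mu2 ** 2
```

Being scale-free, the kurtosis does not depend on σ, so only the power d drives the increase.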

Checking for Normality
If the noise process is Gaussian (that is, if all of its joint distributions are normal), then stronger conclusions can be drawn when a model is fitted to the data. We have shown that all powers of the linear Gaussian process are non-normal. The only reasonable test is the one that enables us to check whether the observations are from an iid normal sequence. The Jarque-Bera (JB) test [18] [19] [20] for normality can be used. The JB test is based on the fact that the normal distribution (with any mean or variance) has a skewness coefficient of zero and a kurtosis coefficient of three. We can test whether these two conditions hold against a suitable alternative, and the JB test statistic is
JB = (n/6)[S² + (K − 3)²/4],
where S is the sample skewness and K is the sample kurtosis; under the null hypothesis of normality, JB is asymptotically chi-square distributed with 2 degrees of freedom.
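A minimal implementation of the JB statistic is sketched below (our own code, using the conventional biased moment estimators). Applying it to simulated Gaussian data and to the squares of the same data illustrates the point of this section: the levels look normal, the squares emphatically do not.

```python
import random

def jarque_bera(x):
    """JB = (n/6) * (S^2 + (K - 3)^2 / 4); approx. chi-square(2) under normality."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    skew = (sum((v - m) ** 3 for v in x) / n) / m2 ** 1.5
    kurt = (sum((v - m) ** 4 for v in x) / n) / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(3)
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
jb_levels = jarque_bera(x)                    # moderate: consistent with normality
jb_squares = jarque_bera([v * v for v in x])  # very large: X^2 is far from normal
```

The 5% critical value of the chi-square(2) distribution is 5.99; the squared series exceeds it by orders of magnitude.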

White Noise Testing
We have shown that the sample autocorrelations of X_1^d, X_2^d, ..., X_n^d are those of a white noise series if X_1, X_2, ..., X_n are iid. We will adopt the Ljung-Box test by replacing the sample autocorrelations of the data X_1, X_2, ..., X_n with those of X_1^d, X_2^d, ..., X_n^d. The hypothesis of iid data is then rejected at level α if the observed statistic exceeds the 1 − α quantile of the chi-square distribution with m degrees of freedom, where m is the value of d used in the trend analysis. Table 3 gives the accuracy measures for the trend analysis of the standard deviation and the kurtosis coefficient of X_t^d, and Table 4 gives detailed results for optimality.
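The procedure just described, the Ljung-Box statistic applied to each power series, can be collected into one helper. The sketch below is our own (function names and the default m = 6 are assumptions, not the paper's settings).

```python
import random

def acf(x, k):
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    return (sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n) / c0

def ljung_box(x, m):
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, m + 1))

def power_white_noise_stats(x, d_max=3, m=6):
    """Ljung-Box statistic for each power series x^d, d = 1, ..., d_max."""
    return {d: ljung_box([v ** d for v in x], m) for d in range(1, d_max + 1)}

random.seed(11)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]
stats = power_white_noise_stats(x)   # each entry is compared with chi-square critical values
```

For iid Gaussian input all three statistics should be unremarkable; a large value at some power d flags dependence that the level series alone would miss.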

Determining the Optimal Value of d
When d = 4, the quadratic growth curve performs better than the exponential curve, with minimal residual; both curves fitted positive values at different data points. We also observed from Table 3 that the quadratic growth curve outperforms the exponential growth curve, and the resulting quadratic curve yielded zero residual. The implication is that we obtain a perfect fit at the data points when d = 3 for the quadratic curve only; hence, the optimal value of d is 3 when we use the standard deviation curve. When d = 3, the quadratic growth curve again outperforms the exponential growth curve, and the resulting quadratic curve yielded zero residual, as for the standard deviation curve. The implication of these results is that we obtain a perfect fit at the data points when d = 3 for the quadratic curve only. Hence, the optimal value of d is 3. Therefore, we recommend that, in order to stop the variance from exploding, the data points should not be raised to a power greater than three.
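The growth curve comparison can be illustrated with ordinary least squares. The sketch below is our own, not the authors' Minitab analysis (the helper names, the range d = 1, ..., 4 and σ = 1 are assumptions): it fits a quadratic, and an exponential via a log-linear fit, to the theoretical standard deviations of X^d and compares residual sums of squares.

```python
import math

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def sd_power(d):
    """Standard deviation of X^d for X ~ N(0, 1)."""
    exd = 0.0 if d % 2 else double_factorial(d - 1)
    return math.sqrt(double_factorial(2 * d - 1) - exd ** 2)

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations (Gaussian elimination)."""
    k = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

ds = [1, 2, 3, 4]
sds = [sd_power(d) for d in ds]

# Quadratic model: s = c0 + c1*d + c2*d^2.
c = polyfit(ds, sds, 2)
sse_quad = sum((c[0] + c[1] * d + c[2] * d * d - s) ** 2 for d, s in zip(ds, sds))

# Exponential model fitted as a line in log space: ln s = a + b*d.
a, b1 = polyfit(ds, [math.log(s) for s in sds], 1)
sse_exp = sum((math.exp(a + b1 * d) - s) ** 2 for d, s in zip(ds, sds))
```

On these four points the quadratic's residual sum of squares is markedly smaller than the exponential's, consistent with the quadratic curve being preferred in the trend analysis.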

On the Use of Higher Moments for the Acceptability of the Linear Gaussian White Noise Process
We have shown that if X_t, t ∈ Z, is a linear Gaussian white noise process, the variances and kurtosis coefficients of its powers take the theoretical values given in Table 5 and Table 6 respectively. It is also clear from Equation (2.24) that the kurtosis itself is a function of variances. We therefore insist that, for a stochastic process to be accepted as a linear Gaussian white noise process, its sample variances must agree with these theoretical values. In view of this, we suggest that the following two null hypotheses be tested before a stochastic process is accepted as a linear Gaussian white noise process: that the variance of X_t² equals its theoretical value, and that the variance of X_t³ equals its theoretical value.
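A sketch of the suggested variance checks is given below (our own code; the helper names are assumptions). It uses the classical chi-square variance statistic (n − 1)s²/σ₀², compared with chi-square(n − 1) critical values; note that this statistic's exact null distribution assumes normal data, so for the non-normal power series it is only an approximate diagnostic.

```python
import random

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def theoretical_var(d, sigma=1.0):
    """Var(X^d) for X ~ N(0, sigma^2)."""
    exd = 0.0 if d % 2 else double_factorial(d - 1) * sigma ** d
    return double_factorial(2 * d - 1) * sigma ** (2 * d) - exd ** 2

def chi2_variance_stat(y, var0):
    """(n - 1) * s^2 / var0, compared with chi-square(n - 1) critical values."""
    n = len(y)
    m = sum(y) / n
    s2 = sum((v - m) ** 2 for v in y) / (n - 1)
    return (n - 1) * s2 / var0

random.seed(5)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
stat_d2 = chi2_variance_stat([v ** 2 for v in x], theoretical_var(2))
stat_d3 = chi2_variance_stat([v ** 3 for v in x], theoretical_var(3))
# Under the white noise hypothesis both statistics should be near n - 1 = 1999.
```

Statistics far from n − 1 indicate that the variance of the corresponding power disagrees with the linear Gaussian white noise value, and the process should be rejected.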

Results
For an illustration, six (6) random series were simulated using Minitab 16; each simulated series was shown to be iid but not normally distributed (see Table 7).
I. S. Iwueze et al.
The values of the chi-square test statistic for testing (3.12) and (3.13) are also shown in Table 7. We observed that the null hypothesis is rejected at level α = 5% for two of the simulated series and is not rejected for the other four. The results clearly show that testing the variances of the higher powers Y_t = X_t^d, d = 2, 3, is a necessary condition for accepting the linear Gaussian white noise process.

Conclusion
We have been able to show that if X_t, t ∈ Z, is iid, then all powers of X_t are also iid but non-normal. Hence, we computed the kurtosis of some higher powers of X_t and established that an increase in the power of X_t leads to an exponential increase in the kurtosis. We recommend that stochastic processes (white noise processes) and processes with a similar covariance structure should be checked for normality, subjected to white noise testing, and tested for equality of the variances of their higher powers with the theoretical values of Table 1.