On the Use of Second and Third Moments for the Comparison of Linear Gaussian and Simple Bilinear White Noise Processes

Abstract

The linear Gaussian white noise process (LGWNP) is an independent and identically distributed (iid) sequence with zero mean and finite variance, with distribution $N(0, \sigma^2)$. Some processes, such as the simple bilinear white noise process (SBWNP), have the same covariance structure as the LGWNP. How can these two processes be distinguished and/or compared? Let $X_t, t \in \mathbb{Z}$, be a realization of the SBWNP. This paper studies in detail the covariance structure of the powers $X_t^2$ and $X_t^3$. It is shown that: 1) the covariance structure of $X_t^2$ is non-normal and equivalent to that of a linear ARMA(2, 1) model; 2) the covariance structure of $X_t^3$ is that of an iid sequence; 3) the variance of $X_t^3$ can be used to compare the SBWNP and the LGWNP.


Arimie, C., Iwueze, I., Ijomah, M. and Onyemachi, E. (2018) On the Use of Second and Third Moments for the Comparison of Linear Gaussian and Simple Bilinear White Noise Processes. Open Journal of Statistics, 8, 562-583. doi: 10.4236/ojs.2018.83037.

1. Introduction

A stochastic process $\{X_t, t \in \mathbb{Z}\}$, where $\mathbb{Z} = \{\ldots, -1, 0, 1, \ldots\}$, is called a white noise or purely random process if it has finite mean and finite variance and all its autocovariances are zero except at lag zero. In many applications, $X_t, t \in \mathbb{Z}$ is assumed to be normally distributed with mean zero and variance $\sigma^2 < \infty$, and the series is then called a linear Gaussian white noise process with the following properties [1] - [7].

$E(X_t) = \mu$ (1.1)

$R(0) = \mathrm{var}(X_t) = E\left[(X_t - \mu)^2\right] = \sigma^2$ (1.2)

$R(k) = \mathrm{cov}(X_t, X_{t+k}) = E\left[(X_t - \mu)(X_{t+k} - \mu)\right] = \begin{cases} \sigma^2, & k = 0 \\ 0, & \text{otherwise} \end{cases}$ (1.3)

$\rho(k) = \mathrm{corr}(X_t, X_{t+k}) = \dfrac{R(k)}{R(0)} = \begin{cases} 1, & k = 0 \\ 0, & \text{otherwise} \end{cases}$ (1.4)

$\phi_{kk} = \mathrm{corr}(X_t, X_{t+k} \mid X_{t+1}, X_{t+2}, \ldots, X_{t+k-1}) = 0 \quad \forall k$ (1.5)

where $R(k)$ is the autocovariance function at lag $k$, $\rho(k)$ is the autocorrelation function at lag $k$ and $\phi_{kk}$ is the partial autocorrelation function at lag $k$.

In other words, a stochastic process $X_t, t \in \mathbb{Z}$ is called a linear Gaussian white noise if $X_t, t \in \mathbb{Z}$ is a sequence of independent and identically distributed (iid) random variables with finite mean and finite variance. Under the assumption that the sample $X_1, X_2, \ldots, X_n$ is an iid sequence, we compute the sample autocorrelations as

$\hat{\rho}_X(k) = \dfrac{\sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{n} (X_t - \bar{X})^2}$ (1.6)

where

$\bar{X} = \dfrac{1}{n} \sum_{t=1}^{n} X_t$ (1.7)

The iid hypothesis is commonly tested with the Ljung and Box [8] statistic

$Q_{LB}(m) = n(n+2) \sum_{k=1}^{m} \dfrac{\left[\hat{\rho}_X(k)\right]^2}{n-k}$ (1.8)

where $Q_{LB}(m)$ is asymptotically a chi-squared random variable with $m$ degrees of freedom.

Several values of $m$ are often used, and simulation studies suggest that the choice $m \approx \ln(n)$ provides better power performance [9].

If the data are iid, the squared data $X_1^2, X_2^2, \ldots, X_n^2$ are also iid [10]. Another portmanteau test, formulated by McLeod and Li [10], is based on the same form of statistic as that of Ljung and Box [8]:

$Q_{ML}(m) = n(n+2) \sum_{k=1}^{m} \dfrac{\left[\hat{\rho}_{X^2}(k)\right]^2}{n-k}$ (1.9)

where the sample autocorrelations of the data are replaced by the sample autocorrelations of the squared data, $\hat{\rho}_{X^2}(k)$.
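For concreteness, (1.6) through (1.9) can be computed in a few lines. The sketch below is our own illustration in Python (not the software used in this study), with helper names of our choosing; it applies the Ljung-Box and McLeod-Li statistics to a simulated iid Gaussian sample.

import numpy as np
from scipy.stats import chi2

def sample_acf(x, max_lag):
    # Sample autocorrelations (1.6) for lags 1..max_lag
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[:n - k] * xc[k:]) / denom for k in range(1, max_lag + 1)])

def ljung_box(x, m):
    # Ljung-Box statistic (1.8) and its asymptotic chi-square(m) p-value
    n = len(x)
    rho = sample_acf(x, m)
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))
    return q, 1.0 - chi2.cdf(q, df=m)

def mcleod_li(x, m):
    # McLeod-Li statistic (1.9): the same form applied to the squared data
    return ljung_box(np.asarray(x, dtype=float) ** 2, m)

rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, size=1000)   # a linear Gaussian white noise sample
m = int(np.log(len(e)))               # m close to ln(n), as suggested in [9]
print("Q_LB:", ljung_box(e, m))
print("Q_ML:", mcleod_li(e, m))

Both statistics should be insignificant here, since an iid Gaussian sample and its squares are both iid.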

As noted by Iwueze et al. [11], a stochastic process $X_t, t \in \mathbb{Z}$ may have the covariance structure (1.1) through (1.5) even when it is not a linear Gaussian white noise process. Iwueze et al. [11] therefore provided additional properties of the linear Gaussian white noise process so that it can be identified and distinguished from other processes with the same covariance structure (1.1) through (1.5).

Let $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, where $X_t, t \in \mathbb{Z}$ is the linear Gaussian white noise process. The mean $E(Y_t) = E(X_t^d)$, the variance $\mathrm{var}(Y_t) = \mathrm{var}(X_t^d)$ and the autocovariances $R_Y(k) = \mathrm{cov}(Y_t, Y_{t-k}) = \mathrm{cov}(X_t^d, X_{t-k}^d)$ were obtained in [11] as

$E(Y_t) = E(X_t^d) = \begin{cases} \sigma^{2m}(2m-1)!!, & d = 2m, \ m = 1, 2, \ldots \\ 0, & d = 2m+1, \ m = 0, 1, 2, \ldots \end{cases}$ (1.10)

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^d) = \begin{cases} \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right], & d = 2m \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1), & d = 2m+1 \end{cases}$ (1.11)

$R_Y(l) = R_{X_t^d}(l) = \begin{cases} \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right], & d = 2m, \ l = 0 \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1), & d = 2m+1, \ l = 0 \\ 0, & l \neq 0 \end{cases}$ (1.12)

where

$(2m-1)!! = \prod_{k=1}^{m}(2k-1)$ (1.13)

It is clear from (1.12) that when $X_t, t \in \mathbb{Z}$ are iid, the powers $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ are also iid. Iwueze et al. [11] also showed that the probability density function (pdf) of $Y_t = X_t^2$ is that of a gamma distribution with parameters $\alpha = \frac{1}{2}$ and $\beta = 2\sigma^2$; that is, $Y_t = X_t^2 \sim G(\alpha, \beta)$ with $\alpha = \frac{1}{2}$, $\beta = 2\sigma^2$ when $X_t \sim N(0, \sigma^2)$. They concluded that all powers of a linear Gaussian white noise process are iid but not normally distributed.

Using the coefficients of symmetry and kurtosis, Iwueze et al. [11] confirmed the non-normality of $Y_t = X_t^d$, $d = 2, 3, \ldots$. Table 1 gives the mean, variance, coefficient of symmetry ($\beta_1$) and kurtosis ($\beta_2$), defined as follows:

$\beta_1 = \dfrac{\mu_3(d)}{\left(\mu_2(d)\right)^{3/2}}$ (1.14)

$\beta_2 = \dfrac{\mu_4(d)}{\left(\mu_2(d)\right)^{2}}$ (1.15)

where

$\mu_2(d) = E\left[\left(X_t^d - E(X_t^d)\right)^2\right] = \mathrm{var}(X_t^d)$ (1.16)

Table 1. Mean, variance, coefficient of symmetry ($\beta_1$) and kurtosis ($\beta_2$) for $Y_t = X_t^d$, $d = 1, 2, 3, \ldots, 6$, when $X_t \sim N(0, \sigma^2)$.

Source: Iwueze et al. (2017).

$\mu_3(d) = E\left[\left(X_t^d - E(X_t^d)\right)^3\right]$ (1.17)

$\mu_4(d) = E\left[\left(X_t^d - E(X_t^d)\right)^4\right]$ (1.18)

Using the standard deviations (with $\sigma^2 = 1$) and the kurtosis of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, Iwueze et al. [11] determined the optimal value of $d$ to be three ($d = 3$). Hence, for effective comparison of the linear Gaussian white noise process with any stochastic process having a similar covariance structure, the powers $Y_t = X_t^d$, $d = 1, 2, 3$ must be used.
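The closed forms (1.10) through (1.13) are easy to verify by simulation. The short Python sketch below is our own illustration (the function names are ours); it compares Monte Carlo estimates of the mean and variance of $X_t^d$ with the double-factorial expressions for $d = 2$ and $d = 3$ when $\sigma^2 = 1$.

import numpy as np

def double_factorial_odd(m):
    # (2m-1)!! = prod_{k=1}^{m} (2k-1), see (1.13)
    return int(np.prod(np.arange(1, 2 * m, 2))) if m > 0 else 1

def mean_var_of_power(d, sigma2):
    # Mean and variance of X^d for X ~ N(0, sigma2), per (1.10)-(1.11)
    if d % 2 == 0:                       # d = 2m
        m = d // 2
        mean = sigma2 ** m * double_factorial_odd(m)
        var = sigma2 ** (2 * m) * (double_factorial_odd(2 * m) - double_factorial_odd(m) ** 2)
    else:                                # d = 2m + 1
        m = (d - 1) // 2
        mean = 0.0
        var = sigma2 ** (2 * m + 1) * double_factorial_odd(2 * m + 1)
    return mean, var

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=200_000)   # sigma^2 = 1
for d in (2, 3):
    y = x ** d
    print(d, "simulated:", round(float(y.mean()), 3), round(float(y.var()), 3),
          "theoretical:", mean_var_of_power(d, 1.0))

For $d = 2$ the theoretical pair is $(1, 2)$ and for $d = 3$ it is $(0, 15)$, matching $\mathrm{Var}(X_t^3) = 15\sigma^6$ used later in Section 3.3.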

The most commonly used white noise process is the linear Gaussian white noise process. It is a major outcome of any estimation procedure and is used in checking the adequacy of fitted models. The linear Gaussian white noise process also plays a significant role as a basic building block in the construction of linear and non-linear time series models. The major problem, however, is that many non-linear processes exhibit the same covariance structure (Equation (1.1) through Equation (1.5)) as the linear Gaussian white noise process. One such class of non-linear models is the bilinear models.

The study of bilinear models was introduced by Granger and Andersen [12] and Subba Rao [13]. Granger and Andersen [14] established that all series generated by the simple bilinear model

$X_t = \beta X_{t-k} e_{t-j} + e_t, \quad k > j$ (1.19)

appear to be second-order white noise, where $\beta$ is a constant and $e_t, t \in \mathbb{Z}$ is a sequence of independent, identically distributed normal random variables with $E(e_t) = 0$ and $E(e_t^2) = \sigma^2 < \infty$. Guegan [15] studied the existence problem for a simple bilinear process $X_t, t \in \mathbb{Z}$ satisfying

$X_t = \beta X_{t-2} e_{t-1} + e_t$ (1.20)

Martins [16] obtained the autocorrelation function of the process $X_t^2, t \in \mathbb{Z}$ for the simple bilinear model defined by (1.19) when $e_t, t \in \mathbb{Z}$ is iid with a Gaussian distribution. Martins [16] also studied the third-order moment structure of (1.19) with non-independent shocks. More recently, properties of the simple bilinear model (1.19) were addressed by Malinski and Bielinska [17], Malinski and Figwer [18] and Malinski [19]. Iwueze [20] studied the more general bilinear white noise model

$X_t = \left(\sum_{j=1}^{m} \beta_j X_{t-q-j}\right) e_{t-q} + e_t$ (1.21)

where $e_t, t \in \mathbb{Z}$ is as defined in (1.19). Iwueze [20] showed the following.

1) The series $X_t, t \in \mathbb{Z}$ satisfying (1.21) is strictly stationary, ergodic and unique.

2) The series $X_t, t \in \mathbb{Z}$ satisfying (1.21) is invertible.

3) The series $X_t, t \in \mathbb{Z}$ satisfying (1.21) has the same covariance structure as the linear Gaussian white noise process.

4) The covariance structure of (1.21) is

$\mu = E(X_t) = 0$ (1.22)

$R(k) = \begin{cases} \dfrac{\sigma^2}{1 - \sum_{j=1}^{m} \sigma^2 \beta_j^2}, & k = 0 \\ 0, & \text{otherwise} \end{cases}$ (1.23)

5) The series satisfying (1.21) is invertible if

$2 \sum_{j=1}^{m} \beta_j^2 \sigma^2 < 1$ (1.24)

For the simple bilinear model (1.19), it follows that

$R(k) = \begin{cases} \dfrac{\sigma^2}{1 - \sigma^2\beta^2}, & k = 0, \ \sigma^2\beta^2 < 1 \\ 0, & \text{otherwise} \end{cases}$ (1.25)

and the invertibility condition is

$\sigma^2\beta^2 < \dfrac{1}{2}$ (1.26)

It is worth noting that the stationarity condition

$\sigma^2\beta^2 < 1$ (1.27)

is independent of the structure (k, n) of model (1.19) [19]. Our study in this paper concentrates on model (1.20). The purpose of this paper is to meet the following goals for the simple bilinear model satisfying (1.20); a short simulation sketch of (1.20) follows the list below.

1) Determine $\mathrm{Var}(X_t^d)$, $d = 2, 3$, for the simple bilinear model (1.20).

2) Determine the covariance structure of $X_t^d$, $d = 2, 3$, when $X_t, t \in \mathbb{Z}$ satisfies (1.20).

3) Determine for what values of $\beta$ the simple bilinear white noise process will be identified as a linear Gaussian white noise process.

4) Determine for what values of $\beta$ the simple bilinear model will be normally distributed.
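As a point of reference for these goals, the sketch below (our own Python illustration, with arbitrary choices of seed, burn-in and $\beta$; it is not the Fortran 77 program used later in the paper) simulates (1.20) and checks that its sample variance and sample autocorrelations behave as the white noise covariance structure (1.25) predicts.

import numpy as np

def simulate_sbwnp(beta, n, sigma=1.0, burn=500, seed=0):
    # Simulate X_t = beta * X_{t-2} * e_{t-1} + e_t with e_t ~ N(0, sigma^2)
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(2, n + burn):
        x[t] = beta * x[t - 2] * e[t - 1] + e[t]
    return x[burn:]                      # drop the burn-in so the series is approximately stationary

beta, sigma = 0.4, 1.0                   # sigma^2 * beta^2 = 0.16 satisfies the invertibility condition (1.26)
x = simulate_sbwnp(beta, n=20_000, sigma=sigma)
print("sample variance:", round(float(x.var()), 3),
      "theoretical R(0):", round(sigma**2 / (1 - sigma**2 * beta**2), 3))

# The sample autocorrelations at the first few lags should all be close to zero,
# exactly as for a linear Gaussian white noise process.
xc = x - x.mean()
acf = [float(np.sum(xc[:-k] * xc[k:]) / np.sum(xc ** 2)) for k in range(1, 6)]
print("sample ACF, lags 1-5:", np.round(acf, 3))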

The rest of the paper is organized into four sections in order to achieve these goals. Section 2 derives the covariance structure of $Y_t = X_t^d$, $d = 1, 2, 3$ when $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim \text{iid } N(0, \sigma^2)$; Section 3 presents the methodology; Section 4 gives the results and discussion; and Section 5 is the conclusion.

2. Covariance Structure of $Y_t = X_t^d$, $d = 1, 2, 3$, When $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim \text{iid } N(0, \sigma^2)$

Theorem 2.1.

Let $e_t, t \in \mathbb{Z}$ be the linear Gaussian white noise process with $E(e_t) = 0$ and $E(e_t^2) = \sigma^2 < \infty$. Suppose there exists a stationary and invertible process $X_t, t \in \mathbb{Z}$ satisfying $X_t = \beta X_{t-2} e_{t-1} + e_t$ for every $t \in \mathbb{Z}$ and some constant $\beta$. Then $Y_t = X_t^2$ has the following properties:

$E(Y_t) = \mu_Y = \dfrac{\sigma^2}{1 - \sigma^2\beta^2}; \quad \sigma^2\beta^2 < 1$ (2.1)

$R_Y(k) = \mathrm{cov}(Y_t, Y_{t-k}) = \begin{cases} \dfrac{2\sigma^4}{(1 - \sigma^2\beta^2)^2 (1 - 3\sigma^4\beta^4)}, & \sigma^2\beta^2 < \dfrac{1}{\sqrt{3}}, \ k = 0 \\ \dfrac{2\sigma^6\beta^2}{(1 - \sigma^2\beta^2)^2}, & \sigma^2\beta^2 < 1, \ k = 1 \\ \sigma^2\beta^2 R_Y(k-2), & k = 2, 3, \ldots \end{cases}$ (2.2)

$\rho_Y(k) = \dfrac{R_Y(k)}{R_Y(0)} = \begin{cases} 1, & k = 0 \\ \sigma^2\beta^2(1 - 3\sigma^4\beta^4), & k = 1 \\ \sigma^2\beta^2 \rho_Y(k-2), & k = 2, 3, \ldots \end{cases}$ (2.3)

Moreover, $Y_t = X_t^2, t \in \mathbb{Z}$ has the same covariance structure as the linear ARMA(2, 1) process (2.4):

$X_t = \lambda + \phi_1 X_{t-1} + \phi_2 X_{t-2} + \theta_1 a_{t-1} + a_t, \quad \phi_1 = 0$ (2.4)

where $a_t$ is a sequence of independent and identically distributed random variables with $E(a_t) = 0$ and $\mathrm{Var}(a_t) = \sigma_1^2 < \infty$.

Proof:

Let

$Y_t = X_t^2 = (\beta X_{t-2} e_{t-1} + e_t)^2 = \beta^2 X_{t-2}^2 e_{t-1}^2 + e_t^2 + 2\beta X_{t-2} e_{t-1} e_t$

$E(Y_t) = E(X_t^2) = \beta^2 E(X_{t-2}^2) E(e_{t-1}^2) + E(e_t^2) + 2\beta E(X_{t-2}) E(e_{t-1}) E(e_t)$

$E(Y_t) = E(X_t^2) = \beta^2 E(X_t^2) E(e_t^2) + E(e_t^2) = \sigma^2\beta^2 E(X_t^2) + \sigma^2$

$(1 - \sigma^2\beta^2) E(X_t^2) = \sigma^2$

$\mu_Y = E(X_t^2) = \dfrac{\sigma^2}{1 - \sigma^2\beta^2}; \quad \sigma^2\beta^2 < 1$ (2.5)

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^2) = E(X_t^4) - \left[E(X_t^2)\right]^2$

$X_t^4 = \beta^4 X_{t-2}^4 e_{t-1}^4 + 4\beta^3 X_{t-2}^3 e_{t-1}^3 e_t + 6\beta^2 X_{t-2}^2 e_{t-1}^2 e_t^2 + 4\beta X_{t-2} e_{t-1} e_t^3 + e_t^4$

$E(X_t^4) = 3\sigma^4\beta^4 E(X_t^4) + 6\sigma^4\beta^2 E(X_t^2) + 3\sigma^4$

$(1 - 3\sigma^4\beta^4) E(X_t^4) = \dfrac{6\sigma^6\beta^2}{1 - \sigma^2\beta^2} + 3\sigma^4$

$E(X_t^4) = \dfrac{3\sigma^4(1 + \sigma^2\beta^2)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)}, \quad \sigma^4\beta^4 < \dfrac{1}{3}$ (2.6)

Now,

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^2) = E(X_t^4) - \left[E(X_t^2)\right]^2 = \dfrac{3\sigma^4(1 + \sigma^2\beta^2)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)} - \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right)^2 = \dfrac{3\sigma^4(1 + \sigma^2\beta^2)(1 - \sigma^2\beta^2) - \sigma^4(1 - 3\sigma^4\beta^4)}{(1 - \sigma^2\beta^2)^2(1 - 3\sigma^4\beta^4)}$ (2.7)

Hence,

$R_Y(0) = \mathrm{Var}(Y_t) = \mathrm{Var}(X_t^2) = \dfrac{2\sigma^4}{(1 - \sigma^2\beta^2)^2(1 - 3\sigma^4\beta^4)}, \quad \sigma^2\beta^2 < \dfrac{1}{\sqrt{3}}$ (2.8)

$R_Y(k) = E[Y_t Y_{t-k}] - \mu_Y^2 = E[X_t^2 X_{t-k}^2] - \mu_Y^2, \quad k = 0, 1, 2, \ldots$

$Y_t Y_{t-1} = X_t^2 X_{t-1}^2 = \beta^2 X_{t-2}^2 X_{t-1}^2 e_{t-1}^2 + 2\beta X_{t-2} X_{t-1}^2 e_{t-1} e_t + X_{t-1}^2 e_t^2$

$E[Y_t Y_{t-1}] = \beta^2 E[X_{t-2}^2 X_{t-1}^2 e_{t-1}^2] + \sigma^2 E(X_{t-1}^2)$

$E[Y_t Y_{t-1}] = \beta^2 E[X_{t-1}^2 X_t^2 e_t^2] + \sigma^2 E(X_t^2)$

$X_{t-1}^2 X_t^2 e_t^2 = X_{t-1}^2 (\beta^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-2} e_{t-1} e_t + e_t^2) e_t^2$

$X_{t-1}^2 X_t^2 e_t^2 = \beta^2 X_{t-2}^2 X_{t-1}^2 e_{t-1}^2 e_t^2 + 2\beta X_{t-2} X_{t-1}^2 e_{t-1} e_t^3 + X_{t-1}^2 e_t^4$

By the assumption of stationarity,

$E[X_{t-1}^2 X_t^2 e_t^2] = \sigma^2\beta^2 E[X_{t-1}^2 X_t^2 e_t^2] + 3\sigma^4 E(X_t^2)$

$(1 - \sigma^2\beta^2) E[X_{t-1}^2 X_t^2 e_t^2] = 3\sigma^4 \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right)$

$E[X_{t-1}^2 X_t^2 e_t^2] = \dfrac{3\sigma^6}{(1 - \sigma^2\beta^2)^2}, \quad \sigma^2\beta^2 < 1$ (2.9)

$E[Y_t Y_{t-1}] = \beta^2 \left[\dfrac{3\sigma^6}{(1 - \sigma^2\beta^2)^2}\right] + \sigma^2 \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right) = \dfrac{\sigma^4(1 + 2\sigma^2\beta^2)}{(1 - \sigma^2\beta^2)^2}$ (2.10)

Hence,

$R_Y(1) = E(Y_t Y_{t-1}) - E^2(Y_t) = \dfrac{\sigma^4(1 + 2\sigma^2\beta^2)}{(1 - \sigma^2\beta^2)^2} - \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right)^2 = \dfrac{2\sigma^6\beta^2}{(1 - \sigma^2\beta^2)^2}$ (2.11)

$Y_t Y_{t-2} = X_t^2 X_{t-2}^2 = (\beta^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-2} e_{t-1} e_t + e_t^2) X_{t-2}^2$

$Y_t Y_{t-2} = \beta^2 X_{t-2}^4 e_{t-1}^2 + 2\beta X_{t-2}^3 e_{t-1} e_t + X_{t-2}^2 e_t^2$

$E[Y_t Y_{t-2}] = \sigma^2\beta^2 E(X_{t-2}^4) + \sigma^2 E(X_{t-2}^2)$

$E[Y_t Y_{t-2}] = \sigma^2\beta^2 E(Y_{t-2}^2) + \sigma^2 E(Y_t)$

$E[Y_t Y_{t-2}] = \sigma^2\beta^2 E(Y_t^2) + \sigma^2 \mu_Y$

$R_Y(2) + \mu_Y^2 = \sigma^2\beta^2 \left[R_Y(0) + \mu_Y^2\right] + \sigma^2 \mu_Y$ (2.12)

$R_Y(2) = \sigma^2\beta^2 R_Y(0) + \sigma^2\beta^2 \mu_Y^2 + \sigma^2 \mu_Y - \mu_Y^2 = \sigma^2\beta^2 R_Y(0) + \sigma^2 \mu_Y - \mu_Y^2 (1 - \sigma^2\beta^2)$

Note that

$\mu_Y = E(Y_t) = E(X_t^2) = \dfrac{\sigma^2}{1 - \sigma^2\beta^2}$

$(1 - \sigma^2\beta^2)\, \mu_Y = \sigma^2$

$1 - \sigma^2\beta^2 = \dfrac{\sigma^2}{\mu_Y}$ (2.13)

Hence

$R_Y(2) = \sigma^2\beta^2 R_Y(0) + \sigma^2 \mu_Y - \mu_Y^2 \left(\dfrac{\sigma^2}{\mu_Y}\right) = \sigma^2\beta^2 R_Y(0) + \sigma^2 \mu_Y - \sigma^2 \mu_Y = \sigma^2\beta^2 R_Y(0)$ (2.14)

We have shown that

$\sigma^2\beta^2 \mu_Y^2 + \sigma^2 \mu_Y - \mu_Y^2 = 0$ (2.15)

Similarly,

$Y_t Y_{t-3} = X_t^2 X_{t-3}^2 = (\beta^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-2} e_{t-1} e_t + e_t^2) X_{t-3}^2$

$Y_t Y_{t-3} = \beta^2 X_{t-3}^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-3}^2 X_{t-2} e_{t-1} e_t + X_{t-3}^2 e_t^2$

$E[Y_t Y_{t-3}] = \sigma^2\beta^2 E[X_{t-2}^2 X_{t-1}^2] + \sigma^2 E(X_t^2) = \sigma^2\beta^2 E[Y_t Y_{t-1}] + \sigma^2 E(Y_t)$

$R_Y(3) + \mu_Y^2 = \sigma^2\beta^2 \left[R_Y(1) + \mu_Y^2\right] + \sigma^2 \mu_Y \ \Rightarrow \ R_Y(3) = \sigma^2\beta^2 R_Y(1) + \sigma^2\beta^2 \mu_Y^2 + \sigma^2 \mu_Y - \mu_Y^2 = \sigma^2\beta^2 R_Y(1)$ (2.16)

Generally,

$R_Y(k) = \sigma^2\beta^2 R_Y(k-2), \quad k = 2, 3, \ldots$ (2.17)

Hence,

$R_Y(k) = \begin{cases} \dfrac{2\sigma^4}{(1 - \sigma^2\beta^2)^2(1 - 3\sigma^4\beta^4)}, & \sigma^2\beta^2 < \dfrac{1}{\sqrt{3}}, \ k = 0 \\ \dfrac{2\sigma^6\beta^2}{(1 - \sigma^2\beta^2)^2}, & \sigma^2\beta^2 < 1, \ k = 1 \\ \sigma^2\beta^2 R_Y(k-2), & k = 2, 3, \ldots \end{cases}$ (2.18)

and

$\rho_Y(k) = \begin{cases} 1, & k = 0 \\ \sigma^2\beta^2(1 - 3\sigma^4\beta^4), & k = 1 \\ \sigma^2\beta^2 \rho_Y(k-2), & k = 2, 3, \ldots \end{cases}$ (2.19)

With this result, it is clear that when $X_t, t \in \mathbb{Z}$ is defined by (1.20), $Y_t = X_t^2$ has the same covariance structure as the linear ARMA(2, 1) process. Its linear equivalence is

$Y_t = \lambda + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \theta_1 a_{t-1} + a_t, \quad \phi_1 = 0$ (2.20)

where $a_t$ is a purely random process with $E(a_t) = 0$ and $\mathrm{Var}(a_t) = \sigma_1^2 < \infty$. Table 2 compares $Y_t = X_t^2$ with its linear ARMA(2, 1) equivalence.
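A quick numerical check of Theorem 2.1 (our own sketch, under the same simulation assumptions as before) is to square a simulated series from (1.20) and compare the sample autocorrelations of $Y_t = X_t^2$ with the recursion (2.3).

import numpy as np

def theoretical_rho_y(beta, sigma, max_lag):
    # rho_Y(k) from (2.3): rho_Y(0) = 1, rho_Y(1) = sigma^2 beta^2 (1 - 3 sigma^4 beta^4),
    # and rho_Y(k) = sigma^2 beta^2 * rho_Y(k-2) for k >= 2
    a = sigma ** 2 * beta ** 2
    rho = [1.0, a * (1.0 - 3.0 * a ** 2)]
    for k in range(2, max_lag + 1):
        rho.append(a * rho[k - 2])
    return np.array(rho[:max_lag + 1])

def sample_acf(x, max_lag):
    xc = np.asarray(x, dtype=float) - np.mean(x)
    d = np.sum(xc ** 2)
    return np.array([1.0] + [float(np.sum(xc[:-k] * xc[k:]) / d) for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
beta, sigma, n = 0.4, 1.0, 100_000       # sigma^2 beta^2 = 0.16 < 1/sqrt(3), so R_Y(0) exists
e = rng.normal(0.0, sigma, size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = beta * x[t - 2] * e[t - 1] + e[t]

y = x ** 2
print("theoretical rho_Y, lags 0-4:", np.round(theoretical_rho_y(beta, sigma, 4), 3))
print("sample ACF of X^2, lags 0-4:", np.round(sample_acf(y, 4), 3))

The decaying, non-zero autocorrelations of $X_t^2$ are exactly what distinguish the SBWNP from a LGWNP, whose squares would be iid.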

Theorem 2.2.

Let $e_t, t \in \mathbb{Z}$ be the linear Gaussian white noise process with $E(e_t) = 0$ and $E(e_t^2) = \sigma^2 < \infty$. Suppose there exists a stationary and invertible process $X_t, t \in \mathbb{Z}$ satisfying $X_t = \beta X_{t-2} e_{t-1} + e_t$ for every $t \in \mathbb{Z}$ and some constant $\beta$. Then the mean and autocovariances of $Y_t = X_t^3, t \in \mathbb{Z}$ are

$E(Y_t) = \mu_Y = 0$ (2.21)

Table 2. Covariance analysis of $Y_t = X_t^2$ when $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim N(0, \sigma^2)$, and its linear ARMA(2, 1) equivalence.

$R_Y(k) = \begin{cases} \dfrac{15\sigma^6(1 + 2\sigma^2\beta^2 + 6\sigma^4\beta^4 + 3\sigma^6\beta^6)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)(1 - 15\sigma^6\beta^6)}, & \sigma^2\beta^2 < \dfrac{1}{\sqrt[3]{15}}, \ k = 0 \\ 0, & k \neq 0 \end{cases}$ (2.22)

$\rho_Y(k) = \begin{cases} 1, & k = 0 \\ 0, & k \neq 0 \end{cases}$ (2.23)

The covariance structure of $Y_t = X_t^3, t \in \mathbb{Z}$ is therefore that of a linear white noise process.

Proof:

Let

$Y_t = X_t^3 = (\beta X_{t-2} e_{t-1} + e_t)^3 = \beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3$ (2.24)

$E(Y_t) = E(X_t^3) = \mu_Y = \beta^3 E(X_{t-2}^3 e_{t-1}^3) + 3\sigma^2\beta E(X_{t-2} e_{t-1}) = \beta^3 E(X_{t-1}^3 e_t^3) + 3\sigma^2\beta E(X_{t-1} e_t) = 0$ (2.25)

$Y_t^2 = X_t^6 = (\beta X_{t-2} e_{t-1} + e_t)^6 = \beta^6 X_{t-2}^6 e_{t-1}^6 + 6\beta^5 X_{t-2}^5 e_{t-1}^5 e_t + 15\beta^4 X_{t-2}^4 e_{t-1}^4 e_t^2 + 20\beta^3 X_{t-2}^3 e_{t-1}^3 e_t^3 + 15\beta^2 X_{t-2}^2 e_{t-1}^2 e_t^4 + 6\beta X_{t-2} e_{t-1} e_t^5 + e_t^6$ (2.26)

$E(Y_t^2) = \beta^6 E(X_{t-2}^6 e_{t-1}^6) + 6\beta^5 E(X_{t-2}^5 e_{t-1}^5 e_t) + 15\beta^4 E(X_{t-2}^4 e_{t-1}^4 e_t^2) + 20\beta^3 E(X_{t-2}^3 e_{t-1}^3 e_t^3) + 15\beta^2 E(X_{t-2}^2 e_{t-1}^2 e_t^4) + 6\beta E(X_{t-2} e_{t-1} e_t^5) + E(e_t^6)$
$= \beta^6 E(X_{t-2}^6 e_{t-1}^6) + 15\sigma^2\beta^4 E(X_{t-2}^4 e_{t-1}^4) + 45\sigma^4\beta^2 E(X_{t-2}^2 e_{t-1}^2) + 15\sigma^6$
$= 15\sigma^6\beta^6 E(X_t^6) + 45\sigma^6\beta^4 E(X_t^4) + 45\sigma^6\beta^2 E(X_t^2) + 15\sigma^6$
$= 15\sigma^6\beta^6 E(Y_t^2) + 45\sigma^6\beta^4 \left[\dfrac{3\sigma^4(1 + \sigma^2\beta^2)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)}\right] + 45\sigma^6\beta^2 \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right) + 15\sigma^6$

$(1 - 15\sigma^6\beta^6) E(Y_t^2) = 45\sigma^6\beta^4 \left[\dfrac{3\sigma^4(1 + \sigma^2\beta^2)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)}\right] + 45\sigma^6\beta^2 \left(\dfrac{\sigma^2}{1 - \sigma^2\beta^2}\right) + 15\sigma^6$
$= \dfrac{1}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)} \left[135\sigma^{10}\beta^4(1 + \sigma^2\beta^2) + 45\sigma^8\beta^2(1 - 3\sigma^4\beta^4) + 15\sigma^6(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)\right]$
$= \dfrac{1}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)} \left[135\sigma^{10}\beta^4 + 135\sigma^{12}\beta^6 + 45\sigma^8\beta^2 - 135\sigma^{12}\beta^6 + 15\sigma^6 - 15\sigma^8\beta^2 - 45\sigma^{10}\beta^4 + 45\sigma^{12}\beta^6\right]$
$= \dfrac{1}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)} \left[90\sigma^{10}\beta^4 + 30\sigma^8\beta^2 + 15\sigma^6 + 45\sigma^{12}\beta^6\right]$
$= \dfrac{15\sigma^6(1 + 2\sigma^2\beta^2 + 6\sigma^4\beta^4 + 3\sigma^6\beta^6)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)}, \quad \sigma^2\beta^2 < \dfrac{1}{\sqrt[3]{15}}$ (2.27)

$E(Y_t^2) = R_Y(0) + \mu_Y^2$ (2.28)

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^3) = R_Y(0) = E(Y_t^2) - \mu_Y^2 = \dfrac{15\sigma^6(1 + 2\sigma^2\beta^2 + 6\sigma^4\beta^4 + 3\sigma^6\beta^6)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)(1 - 15\sigma^6\beta^6)}, \quad \sigma^2\beta^2 < \dfrac{1}{\sqrt[3]{15}}$ (2.29)

Some Results

$E(X_{t-1} X_t e_t) = \sigma^2 E(X_t) = 0$

Proof:

$X_{t-1} X_t e_t = X_{t-1} [\beta X_{t-2} e_{t-1} + e_t] e_t = \beta X_{t-2} X_{t-1} e_{t-1} e_t + X_{t-1} e_t^2$

$E(X_{t-1} X_t e_t) = \sigma^2 E(X_{t-1}) = \sigma^2 E(X_t) = 0$

$E(X_{t-1} X_t^2 e_t) = 2\sigma^2\beta E(X_{t-1} X_t e_t) = 0$

Proof:

$X_{t-1} X_t^2 e_t = X_{t-1} [\beta^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-2} e_{t-1} e_t + e_t^2] e_t = \beta^2 X_{t-2}^2 X_{t-1} e_{t-1}^2 e_t + 2\beta X_{t-2} X_{t-1} e_{t-1} e_t^2 + X_{t-1} e_t^3$

$E(X_{t-1} X_t^2 e_t) = 2\beta\sigma^2 E(X_{t-2} X_{t-1} e_{t-1}) = 2\beta\sigma^2 E(X_{t-1} X_t e_t) = 0$

$E(X_{t-1}^2 X_t e_t^2) = \sigma^2\beta E(X_{t-1} X_t^2 e_t) = 0$

Proof:

$X_{t-1}^2 X_t e_t^2 = X_{t-1}^2 [\beta X_{t-2} e_{t-1} + e_t] e_t^2 = \beta X_{t-2} X_{t-1}^2 e_{t-1} e_t^2 + X_{t-1}^2 e_t^3$

$E(X_{t-1}^2 X_t e_t^2) = \sigma^2\beta E(X_{t-2} X_{t-1}^2 e_{t-1}) = \sigma^2\beta E(X_{t-1} X_t^2 e_t) = 0$

$E(X_{t-1} X_t^3 e_t) = 3\sigma^2\beta^2 E(X_{t-1}^2 X_t e_t^2) = 0$

Proof:

$X_{t-1} X_t^3 e_t = X_{t-1} [\beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3] e_t = \beta^3 X_{t-2}^3 X_{t-1} e_{t-1}^3 e_t + 3\beta^2 X_{t-2}^2 X_{t-1} e_{t-1}^2 e_t^2 + 3\beta X_{t-2} X_{t-1} e_{t-1} e_t^3 + X_{t-1} e_t^4$

$E(X_{t-1} X_t^3 e_t) = 3\sigma^2\beta^2 E(X_{t-2}^2 X_{t-1} e_{t-1}^2) = 3\sigma^2\beta^2 E(X_{t-1}^2 X_t e_t^2) = 0$

Now,

$Y_t Y_{t-1} = X_t^3 X_{t-1}^3 = [\beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3] X_{t-1}^3 = \beta^3 X_{t-2}^3 X_{t-1}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 X_{t-1}^3 e_{t-1}^2 e_t + 3\beta X_{t-2} X_{t-1}^3 e_{t-1} e_t^2 + X_{t-1}^3 e_t^3$

$E(Y_t Y_{t-1}) = \beta^3 E(X_{t-2}^3 X_{t-1}^3 e_{t-1}^3) + 3\sigma^2\beta E(X_{t-2} X_{t-1}^3 e_{t-1}) = \beta^3 E(X_{t-1}^3 X_t^3 e_t^3) + 3\sigma^2\beta E(X_{t-1} X_t^3 e_t) = \beta^3 E(X_{t-1}^3 X_t^3 e_t^3)$

$X_{t-1}^3 X_t^3 e_t^3 = X_{t-1}^3 [\beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3] e_t^3 = \beta^3 X_{t-2}^3 X_{t-1}^3 e_{t-1}^3 e_t^3 + 3\beta^2 X_{t-2}^2 X_{t-1}^3 e_{t-1}^2 e_t^4 + 3\beta X_{t-2} X_{t-1}^3 e_{t-1} e_t^5 + X_{t-1}^3 e_t^6$

$E(X_{t-1}^3 X_t^3 e_t^3) = 3\beta^2 (3\sigma^4) E(X_{t-2}^2 X_{t-1}^3 e_{t-1}^2) = 9\sigma^4\beta^2 E(X_{t-2}^2 X_{t-1}^3 e_{t-1}^2) = 9\sigma^4\beta^2 E(X_{t-1}^2 X_t^3 e_t^2)$

Hence,

$E(Y_t Y_{t-1}) = \beta^3 \left[9\sigma^4\beta^2 E(X_{t-1}^2 X_t^3 e_t^2)\right] = 9\sigma^4\beta^5 E(X_{t-1}^2 X_t^3 e_t^2)$

Now

$X_{t-1}^2 X_t^3 e_t^2 = X_{t-1}^2 [\beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3] e_t^2 = \beta^3 X_{t-2}^3 X_{t-1}^2 e_{t-1}^3 e_t^2 + 3\beta^2 X_{t-2}^2 X_{t-1}^2 e_{t-1}^2 e_t^3 + 3\beta X_{t-2} X_{t-1}^2 e_{t-1} e_t^4 + X_{t-1}^2 e_t^5$

$E(X_{t-1}^2 X_t^3 e_t^2) = \sigma^2\beta^3 E(X_{t-2}^3 X_{t-1}^2 e_{t-1}^3) + 3\beta(3\sigma^4) E(X_{t-2} X_{t-1}^2 e_{t-1}) = \sigma^2\beta^3 E(X_{t-1}^3 X_t^2 e_t^3) + 9\sigma^4\beta E(X_{t-1} X_t^2 e_t) = \sigma^2\beta^3 E(X_{t-1}^3 X_t^2 e_t^3)$

But,

$E(Y_t Y_{t-1}) = 9\sigma^4\beta^5 E(X_{t-1}^2 X_t^3 e_t^2) = 9\sigma^4\beta^5 \left(\sigma^2\beta^3 E(X_{t-1}^3 X_t^2 e_t^3)\right) = 9\sigma^6\beta^8 E(X_{t-1}^3 X_t^2 e_t^3)$

Now,

$X_{t-1}^3 X_t^2 e_t^3 = X_{t-1}^3 [\beta^2 X_{t-2}^2 e_{t-1}^2 + 2\beta X_{t-2} e_{t-1} e_t + e_t^2] e_t^3 = \beta^2 X_{t-2}^2 X_{t-1}^3 e_{t-1}^2 e_t^3 + 2\beta X_{t-2} X_{t-1}^3 e_{t-1} e_t^4 + X_{t-1}^3 e_t^5$

$E(X_{t-1}^3 X_t^2 e_t^3) = 2\beta(3\sigma^4) E(X_{t-2} X_{t-1}^3 e_{t-1}) = 6\sigma^4\beta E(X_{t-1} X_t^3 e_t) = 0$

Hence,

$E(Y_t Y_{t-1}) = 9\sigma^6\beta^8 \left[6\sigma^4\beta E(X_{t-1} X_t^3 e_t)\right] = 54\sigma^{10}\beta^9 E(X_{t-1} X_t^3 e_t) = 0$

$R_Y(1) = 0$, when $Y_t = X_t^3$.

$Y_t Y_{t-2} = X_t^3 X_{t-2}^3 = [\beta^3 X_{t-2}^3 e_{t-1}^3 + 3\beta^2 X_{t-2}^2 e_{t-1}^2 e_t + 3\beta X_{t-2} e_{t-1} e_t^2 + e_t^3] X_{t-2}^3 = \beta^3 X_{t-2}^6 e_{t-1}^3 + 3\beta^2 X_{t-2}^5 e_{t-1}^2 e_t + 3\beta X_{t-2}^4 e_{t-1} e_t^2 + X_{t-2}^3 e_t^3$

$E(Y_t Y_{t-2}) = 0$

$R_Y(2) = 0$, when $Y_t = X_t^3$.

Generally, $R_Y(k) = 0$ for all $k \neq 0$ when $Y_t = X_t^3$.

Therefore, given $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim N(0, \sigma^2)$ and $Y_t = X_t^3$, the following hold: $E(Y_t) = E(X_t^3) = 0$,

$R_Y(k) = \begin{cases} \dfrac{15\sigma^6(1 + 2\sigma^2\beta^2 + 6\sigma^4\beta^4 + 3\sigma^6\beta^6)}{(1 - \sigma^2\beta^2)(1 - 3\sigma^4\beta^4)(1 - 15\sigma^6\beta^6)}, & \sigma^2\beta^2 < \dfrac{1}{\sqrt[3]{15}}, \ k = 0 \\ 0, & k \neq 0 \end{cases}$

$\rho_Y(k) = \begin{cases} 1, & k = 0 \\ 0, & k \neq 0 \end{cases}$

The covariance structure of $Y_t = X_t^3, t \in \mathbb{Z}$ therefore identifies the process as linear white noise.
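Theorem 2.2 can be checked numerically in the same way (again our own sketch under the same simulation assumptions): the cubes of a series generated by (1.20) should show sample autocorrelations near zero at every non-zero lag, and a sample variance close to (2.29).

import numpy as np

def var_x_cubed(beta, sigma):
    # Theoretical Var(X_t^3) from (2.29); requires sigma^2 * beta^2 < 15 ** (-1/3)
    a, s = sigma ** 2 * beta ** 2, sigma ** 2
    num = 15.0 * s ** 3 * (1.0 + 2.0 * a + 6.0 * a ** 2 + 3.0 * a ** 3)
    den = (1.0 - a) * (1.0 - 3.0 * a ** 2) * (1.0 - 15.0 * a ** 3)
    return num / den

rng = np.random.default_rng(5)
beta, sigma, n = 0.3, 1.0, 200_000
e = rng.normal(0.0, sigma, size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = beta * x[t - 2] * e[t - 1] + e[t]

y = x ** 3
print("sample Var(X^3):", round(float(y.var()), 2),
      "theoretical:", round(var_x_cubed(beta, sigma), 2))

yc = y - y.mean()
acf = [float(np.sum(yc[:-k] * yc[k:]) / np.sum(yc ** 2)) for k in range(1, 6)]
print("sample ACF of X^3, lags 1-5:", np.round(acf, 3))   # expected to be approximately zero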

3. Methodology

3.1. Normality Checking

The Jarque-Bera (JB) test [21] [22] [23] is used to determine for which values of $\beta$ the simple bilinear model (1.20) is normally distributed. The JB test statistic is

$\mathrm{JB} = n \left( \dfrac{\hat{\gamma}_1^2}{6} + \dfrac{(\hat{\gamma}_2 - 3)^2}{24} \right)$ (3.1)

where

$\hat{\gamma}_1 = \dfrac{\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^3}{\left(\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^2\right)^{3/2}}$ (3.2)

$\hat{\gamma}_2 = \dfrac{\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^4}{\left(\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^2\right)^{2}}$ (3.3)

$n$ is the sample size, while $\hat{\gamma}_1$ and $\hat{\gamma}_2$ are the sample skewness and kurtosis coefficients. The asymptotic null distribution of JB is $\chi^2$ with 2 degrees of freedom.
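Written out directly, (3.1) through (3.3) become the short routine below (our own sketch; scipy.stats also provides a comparable jarque_bera function).

import numpy as np
from scipy.stats import chi2

def jarque_bera_test(x):
    # JB statistic (3.1) built from the sample skewness (3.2) and kurtosis (3.3)
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = x - x.mean()
    m2 = np.mean(c ** 2)
    g1 = np.mean(c ** 3) / m2 ** 1.5      # sample skewness, (3.2)
    g2 = np.mean(c ** 4) / m2 ** 2        # sample kurtosis, (3.3)
    jb = n * (g1 ** 2 / 6.0 + (g2 - 3.0) ** 2 / 24.0)
    return jb, 1.0 - chi2.cdf(jb, df=2)   # asymptotic chi-square(2) p-value

rng = np.random.default_rng(7)
print("Gaussian sample:", jarque_bera_test(rng.normal(size=1000)))
print("Skewed sample  :", jarque_bera_test(rng.exponential(size=1000)))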

3.2. White Noise Test

The modified Ljung-Box test statistic [11], given by

$Q^*(m) = n(n+2) \sum_{k=1}^{m} \dfrac{\left[\hat{\rho}_{X^d}(k)\right]^2}{n-k}$ (3.4)

is used to test the iid hypothesis for $X_t^d$, $d = 1, 2, 3$, for the simple bilinear model (1.20). It is important to note from Theorem 2.1 that $X_t^2$ has an ARMA(2, 1) structure, while from Theorem 2.2, $X_t^3$ is iid. This test therefore looks for the values of $\beta$ at which $X_t^2$ and $X_t^3$ are jointly iid, which in turn determines the values of $\beta$ for which the simple bilinear model (1.20) cannot be distinguished from the linear Gaussian white noise process (LGWNP). The hypothesis of iid data is rejected at level $\alpha$ if the observed $Q^*(m)$ is larger than the $1 - \alpha$ quantile of the $\chi^2(m)$ distribution, where $m \approx \ln(n)$ [9].

3.3. Use of Chi-Square Test for Comparison of the Simple Bilinear White Noise Process and the Linear Gaussian White Noise Process

From Theorem 2.2, the third power of the simple bilinear process is iid. A test is still needed to confirm that the simple bilinear process (1.20) is not a linear Gaussian white noise process (LGWNP). For the LGWNP $X_t, t \in \mathbb{Z}$: $E(X_t) = \mu$, $\mathrm{var}(X_t) = \sigma^2 < \infty$ and $\mathrm{var}(X_t^3) = 15\sigma^6$. To show that the simple bilinear process (1.20) is not a LGWNP, we test the hypothesis

$H_0: \sigma^2_{X_t^3} = 15\sigma^6_{X_t}$ (3.5)

against the alternative hypothesis

$H_1: \sigma^2_{X_t^3} \neq 15\sigma^6_{X_t}$ (3.6)

The chi-square test [24] [25] can be used to perform this test. The chi-square test statistic is

$\chi^2_{\mathrm{cal}} = \dfrac{(n-1)\, S^2_{X_t^3}}{15 \sigma^6_{X_t}}$ (3.7)

where $S^2_{X_t^3}$ is the sample variance of $X_t^3$ for $X_t, t \in \mathbb{Z}$ following (1.20), $\sigma^2_{X_t}$ is an estimate of the true variance of the simple bilinear process (1.20), and $n$ is the number of observations of the series. The null hypothesis is rejected at level $\alpha$ if the observed value of $\chi^2_{\mathrm{cal}}$ is larger than the $1 - \frac{\alpha}{2}$ quantile of the chi-square distribution with $n - 1$ degrees of freedom. It should be noted that this test works well when the underlying population $X_t, t \in \mathbb{Z}$ is normally distributed.
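Under one reading of (3.5) through (3.7) (our own sketch; here the estimate of $\sigma^2_{X_t}$ is simply the sample variance of the series, which is an assumption on our part), the comparison can be coded as follows.

import numpy as np
from scipy.stats import chi2

def third_moment_variance_test(x, alpha=0.05):
    # Test H0: Var(X^3) = 15 * (Var(X))^3, following (3.5)-(3.7).
    # For a LGWNP with variance sigma^2, Var(X^3) = 15 sigma^6, so a large
    # statistic indicates the series is not a linear Gaussian white noise process.
    x = np.asarray(x, dtype=float)
    n = len(x)
    sigma2_hat = x.var(ddof=1)                  # estimate of Var(X_t)
    s2_cubed = (x ** 3).var(ddof=1)             # sample variance of X_t^3
    stat = (n - 1) * s2_cubed / (15.0 * sigma2_hat ** 3)
    critical = chi2.ppf(1.0 - alpha / 2.0, df=n - 1)
    return stat, critical, stat > critical      # True means H0 is rejected

rng = np.random.default_rng(9)
gaussian = rng.normal(0.0, 1.0, size=1000)
print("LGWNP:", third_moment_variance_test(gaussian))

# A simple bilinear white noise series (1.20) with beta = 0.4
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.4 * x[t - 2] * e[t - 1] + e[t]
print("SBWNP:", third_moment_variance_test(x[1000:]))    # the first 1000 points serve as burn-in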

4. Results and Discussion

One thousand random deviates $e_t, t \in \mathbb{Z}$ satisfying $e_t \sim N(0, 1)$ were simulated using the Minitab 16 software. Only one simulated series, shown in Appendix I, was used for demonstration in the study because of space constraints. The estimates of the descriptive statistics (mean, variance, skewness ($\gamma_1$) and kurtosis ($\gamma_2$)) and of the other tests (Jarque-Bera (JB) test, modified Ljung-Box test ($Q^*$) and calculated chi-square test statistic) for the powers $e_t^d$, $d = 1, 2, 3$, of this series are shown in Table 3. The results of the JB, $Q^*$ and chi-square tests identify $e_t, t \in \mathbb{Z}$ as a LGWNP at the 5% level of significance.

The simulated LGWNP series was then used to generate the SBWNP $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim N(0, 1)$, for $-0.60 \le \beta \le 0.60$ (values satisfying the existence of $E(X_t^3)$), using a Fortran 77 program. The estimates of the descriptive statistics and of the test statistics (JB, $Q^*$ and the calculated chi-square statistic) are shown in Table 4. The values of the JB statistic show that the SBWNP is normally distributed for $0.56 \le \beta \le 0.60$. Similarly, the values of $Q^*$ and the calculated chi-square statistic show that the SBWNP is iid and can be identified as a LGWNP for some $\beta$ values. The values of $\beta$ for which the SBWNP is identified as a LGWNP are summarized in Table 5.

5. Conclusion

We have established the covariance structure of $X_t^d$, $d = 1, 2, 3$; $t \in \mathbb{Z}$, for $X_t$ satisfying (1.20).

Table 3. Descriptive statistics and estimates of the test statistics for rejecting the null hypothesis of equality of the variances of the higher moments, for the simulated series $X_t = e_t$, $e_t \sim N(0, 1)$, as a linear Gaussian white noise process.

Table 4. Descriptive statistics and estimates of the test statistics for the simulated bilinear series $X_t = \beta X_{t-2} e_{t-1} + e_t$, $e_t \sim N(0, 1)$, $-0.60 \le \beta \le 0.60$.

Table 5. Values of $\beta$ for which the SBWNP is identified as a LGWNP at the 0.05 and 0.10 $\alpha$ levels.

We have also determined the values of $\beta$ for which the simple bilinear model (1.20) is normally distributed and the values of $\beta$ for which the process can or cannot be distinguished from a LGWNP. We recommend that, for proper comparison of the SBWNP with the LGWNP, the SBWNP be subjected to the normality test, the white noise test, and the test of equality of the variance of its third power with the corresponding theoretical value for the LGWNP.

Appendix I

Simulated random series $e_t$, $e_t \sim N(0, 1)$ (read across).

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Nagpaul, P.S. (2005) Time Series Analysis in Win IDAMS, New Delhi, India.
https://pdfs.semanticscholar.org/ddb0/14582fd074d682aec17151ff4d0833aa9b10.pdf
[2] Greene, W.H. (2005) Econometric Analysis. 5th Edition, Pearson Education Inc., London.
[3] Brooks, C. (2013) Introductory Econometrics for Finance. 3rd Edition, Cambridge University Press, Cambridge.
[4] Chatfield, C. (2004) Time Series Forecasting. Chapman & Hall/CRC, New York.
[5] Pollock, D.S.G. (2008) Stationary Stochastic Processes. Econometric Theory.
http://www.le.ac.uk/users/dsgp1/COURSES/MESOMET/ECMETXT/11mesmet.PDF
[6] Granger, C.W.J. and Newbold, P. (1977) Forecasting Economic Time Series. Academic Press, New York.
[7] Brockwell, P.J. and Davis, R.A. (2002) Introduction to Time Series and Forecasting. 2nd Edition, Springer, New York.
https://doi.org/10.1007/b97391
[8] Ljung, G.M. and Box, G.E.P. (1978) On a Measure of Lack of Fit in Time Series Models. Biometrika, 65, 297-303.
https://doi.org/10.1093/biomet/65.2.297
[9] Tsay, R.S. (2002) Analysis of Financial Time Series. John Wiley & Sons, New York.
https://doi.org/10.1002/0471264105
[10] McLeod, A.I. and Li, W.K. (1983) Diagnostic Checking ARMA Time Series Models Using Squared-Residual Autocorrelations. Journal of Time Series Analysis, 4, 269-273.
https://doi.org/10.1111/j.1467-9892.1983.tb00373.x
[11] Iwueze, I.S., Arimie, C.O., Iwu, H.C. and Onyemachi, E. (2017) Some Applications of Higher Moments of the Linear Gaussian White Noise Process. Applied Mathematics, 8, 1918-1938.
https://doi.org/10.4236/am.2017.812136
[12] Granger, C.W.J. and Andersen, A.P. (1978) An Introduction to Bilinear Time Series Models. Vandenhoeck & Ruprecht, Göttingen.
[13] Subba Rao, T. (1978) The Estimation of Parameters of Bilinear Time Series Model. Technical Report, No 79, Department of Mathematics, University of Manchester, Manchester.
[14] Granger, C.W.J. and Andersen, A. (1978) On the Invertibility of Time Series Models. Stochastic Processes and Their Applications, 8, 87-92.
https://doi.org/10.1016/0304-4149(78)90069-8
[15] Guegan, D. (1981) Etude d’un Modèle non Linéaire, le Modèle Superdiagonal d’Ordre 1. Comptes Rendus de l’Académie des Sciences—Series I, 293, 95-98.
[16] Martins, C.M. (1999) A Note on the Third-Order Moment Structure of a Bilinear Model with Non-Independent Shocks. Portugaliae Mathematica, 56, 115-125.
[17] Malinski, L. and Bielinska, E. (2010) Statistical Analysis of Minimum Prediction Error Variance in the Identification of a Simple Bilinear Time Series Model. Advances in System Science, 9, 183-188.
[18] Malinski, L. and Figwer, J. (2011) On Stationarity of Elementary Bilinear Time-Series. 2011 16th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, 22-25 August 2011, 157-161.
[19] Malinski, L. (2016) Identification of Stable Elementary Bilinear Time-Series Model. Archives of Control Sciences, 26, 577-595.
[20] Iwueze, I.S. (1988) Bilinear White Noise Processes. Nigerian Journal of Mathematics and Applications, 1, 51-63.
[21] Jarque, C.M. and Bera, A.K. (1980) Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals. Economics Letters, 6, 255-259.
https://doi.org/10.1016/0165-1765(80)90024-5
[22] Jarque, C.M. and Bera, A.K. (1981) Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals: Monte Carlo Evidence. Economics Letters, 7, 313-318.
https://doi.org/10.1016/0165-1765(81)90035-5
[23] Jarque, C.M. and Bera, A.K. (1987) A Test for Normality of Observations and Regression Residuals. International Statistical Review, 55, 163-172.
https://doi.org/10.2307/1403192
[24] Snedecor, G.W. and Cochran, W.G. (1989) Statistical Methods. 8th Edition, Iowa State University Press, Ames.
[25] Milton, J.S. and Jesse, C.A. (1995) Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences. McGraw-Hill Inc., New York.
