The Rate of Asymptotic Normality of Frequency Polygon Density Estimation for Spatial Random Fields

Abstract

This paper investigates the convergence rate of asymptotic normality of the frequency polygon density estimator under mixing random fields, including the strongly mixing condition and some weaker mixing conditions. A Berry-Esseen bound for the frequency polygon is established and convergence rates of asymptotic normality are derived. In particular, for the optimal bin width $b_{opt} = C\hat{\mathbf{n}}^{-1/5}$, it is shown that the convergence rate of asymptotic normality reaches $\hat{\mathbf{n}}^{-2/(5(1+3N))}$ when the mixing coefficient tends to zero exponentially fast.

1. Introduction

Denote the integer lattice points in the $N$-dimensional Euclidean space by $\mathbf{Z}^N$, $N \ge 1$. Let $\{X_{\mathbf{i}} : \mathbf{i} \in \mathbf{Z}^N\}$ be a strictly stationary random field with common density $f(x)$ on the real line $\mathbf{R}$. Throughout this paper, let $\|\mathbf{i}\| = (i_1^2 + i_2^2 + \cdots + i_N^2)^{1/2}$ and $\hat{\mathbf{i}} = i_1 i_2 \cdots i_N$, let $\mathbf{i} \le \mathbf{j}$ denote $i_k \le j_k$ $(1 \le k \le N)$ for $\mathbf{i} = (i_1, i_2, \ldots, i_N) \in \mathbf{Z}^N$ and $\mathbf{j} = (j_1, j_2, \ldots, j_N) \in \mathbf{Z}^N$, and let $\mathbf{1} = (1, 1, \ldots, 1) \in \mathbf{Z}^N$. The limit process $\mathbf{n} \to \infty$ denotes

$$\min\{n_i; 1 \le i \le N\} \to \infty \quad \text{and} \quad n_i / n_j \le C \ (1 \le i, j \le N)$$

for some constant $C > 0$.

For a set of sites $S \subset \mathbf{Z}^N$, $\mathcal{B}(S) = \mathcal{B}(X_{\mathbf{i}}; \mathbf{i} \in S)$ denotes the $\sigma$-field generated by the random variables $(X_{\mathbf{i}}; \mathbf{i} \in S)$. $\mathrm{Card}(S)$ denotes the cardinality of $S$, and $\mathrm{dist}(S, S')$ denotes the Euclidean distance between $S$ and $S'$, that is, $\mathrm{dist}(S, S') = \min\{\|\mathbf{i} - \mathbf{j}\|; \mathbf{i} \in S, \mathbf{j} \in S'\}$. We will use the following mixing coefficient:

$$\alpha(\mathcal{B}(S), \mathcal{B}(S')) = \sup\{|P(A \cap B) - P(A)P(B)| : A \in \mathcal{B}(S), B \in \mathcal{B}(S')\} \le C h(\mathrm{Card}(S), \mathrm{Card}(S')) \varphi(\mathrm{dist}(S, S')), \quad (1)$$

where $C$ is some positive constant, $\varphi(u) \downarrow 0$ as $u \to \infty$, and $h(n, m)$ is a symmetric positive function, nondecreasing in each variable.

If $h \equiv 1$, then $\{X_{\mathbf{i}} : \mathbf{i} \in \mathbf{Z}^N\}$ is called strongly mixing. In Carbon et al. [1], it is assumed that $h$ satisfies either

$$h(n, m) \le \min\{n, m\}, \quad (2)$$

or

$$h(n, m) \le (n + m + 1)^{\tilde{k}}, \quad (3)$$

where $\tilde{k} \ge 1$. Conditions (2) and (3) were also used by Neaderhouser [2] and Takahata [3], respectively, and are weaker than the strong mixing condition.

In recent years, there has been growing interest in statistical problems for random fields, because spatial data are modeled as finite observations of random fields. For asymptotic properties of kernel density estimators for spatial random fields, one can refer to Tran [4], Hallin et al. [5] [6], Cheng et al. [7], El Machkouri [8] [9], and Wang and Woodroofe [10], among others. For spatial regression models, see Biau and Cadre [11], Lu and Chen [12], Hallin et al. [13], Gao et al. [14], Carbon et al. [15], and Dabo-Niang and Yao [16].

The purpose of this paper is to investigate the convergence rate of asymptotic normality of the frequency polygon density estimator for mixing random fields. The frequency polygon has the advantage of being conceptually and computationally simple. Furthermore, Scott [17] showed that the rate of convergence of the frequency polygon is superior to that of the histogram for smooth densities, and similar to that of kernel estimators. In recent years, the frequency polygon estimator has received increasing attention. For example, key references for non-spatial random variables are Scott [17], Beirlant et al. [18], Carbon et al. [19], Yang [20], Xing et al. [21], etc. For spatial random fields, the references on frequency polygons are Carbon [24], Carbon et al. [1], Bensaïd and Dabo-Niang [22] and El Machkouri [23]. For continuously indexed random fields, Bensaïd and Dabo-Niang [22] derived the integrated mean squared error of the frequency polygon and the optimal uniform strong rate of convergence. For discretely indexed random fields, Carbon [24] obtained the optimal bin width, which asymptotically minimizes the integrated error, together with the rate of uniform convergence; Carbon et al. [1] derived the asymptotic normality of the frequency polygon under the mixing conditions that the function $h$ in (1) satisfies (2) or (3); and El Machkouri [23] established the asymptotic normality of the frequency polygon for strongly mixing coefficients (that is, $h \equiv 1$). However, the convergence rate of this asymptotic normality has not been discussed in this literature. In this paper, we prove a Berry-Esseen bound for the frequency polygon and derive the convergence rate of asymptotic normality under weaker mixing conditions, which include the strongly mixing condition.

This paper is organized as follows: the next section presents the main results; Section 3 gives some lemmas, which will be used later; Section 4 provides the proofs of the theorems. Throughout this paper, the letter $C$ denotes positive constants whose values are unimportant and may vary, but do not depend on $\mathbf{n}$.

2. Main Results

Suppose that we observe $\{X_{\mathbf{i}}\}$ on the rectangular region $\{\mathbf{i} : \mathbf{1} \le \mathbf{i} \le \mathbf{n}\}$. Consider a partition $\cdots < x_{-2} < x_{-1} < x_0 < x_1 < x_2 < \cdots$ of the real line into equal intervals $I_k = [(k-1)b_n, kb_n)$ of length $b_n$, where $b_n$ is the bin width and $k = 0, \pm 1, \pm 2, \ldots$. For $x \in [(k_0 - 1/2)b_n, (k_0 + 1/2)b_n)$, consider the two adjacent histogram bins $I_{k_0}$ and $I_{k_0+1}$. Denote the numbers of observations falling in these intervals by $v_{k_0}$ and $v_{k_0+1}$, respectively. Then the values of the histogram in these two bins are given by

$$f_{k_0} = v_{k_0} / (\hat{\mathbf{n}} b_n), \quad f_{k_0+1} = v_{k_0+1} / (\hat{\mathbf{n}} b_n). \quad (4)$$

Thus the frequency polygon estimator of the density function $f(x)$ is defined as

$$f_n(x) = \left( \frac{1}{2} + k_0 - \frac{x}{b_n} \right) f_{k_0} + \left( \frac{1}{2} - k_0 + \frac{x}{b_n} \right) f_{k_0+1} \quad (5)$$

for $x \in [(k_0 - 1/2)b_n, (k_0 + 1/2)b_n)$.
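To make the construction concrete, here is a minimal Python sketch (purely illustrative and not part of the theory; the function names are ours, and the sample is simulated as i.i.d. data rather than drawn from a spatial random field) that evaluates the frequency polygon (5) from a one-dimensional sample.

```python
import numpy as np

def frequency_polygon(data, b_n, x):
    """Evaluate the frequency polygon estimator (5) at the points x.

    data : 1-d array of observations (treated as a flat sample of size n^)
    b_n  : bin width
    x    : array of evaluation points
    """
    data = np.asarray(data)
    x = np.asarray(x, dtype=float)
    n_hat = data.size
    # k0 is chosen so that x lies in [(k0 - 1/2) b_n, (k0 + 1/2) b_n)
    k0 = np.floor(x / b_n + 0.5).astype(int)

    def hist_value(k):
        # histogram height f_k = v_k / (n^ b_n) on the bin I_k = [(k-1) b_n, k b_n)
        counts = np.array([np.sum(((kk - 1) * b_n <= data) & (data < kk * b_n))
                           for kk in k])
        return counts / (n_hat * b_n)

    f_k0, f_k1 = hist_value(k0), hist_value(k0 + 1)
    w = 0.5 + k0 - x / b_n          # linear interpolation weight from (5)
    return w * f_k0 + (1.0 - w) * f_k1

# Usage: estimate a standard normal density from simulated data.
rng = np.random.default_rng(0)
sample = rng.standard_normal(2000)
grid = np.linspace(-3.0, 3.0, 7)
print(frequency_polygon(sample, b_n=0.4, x=grid))
```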

The curve estimated by the frequency polygon is not smooth, but it approaches a smooth density curve as the interpolation length $b_n$ tends to zero. So we always assume that $b_n \to 0$ as $\mathbf{n} \to \infty$. In addition, we need the following basic assumptions.

Assumption (A1). The density $f(x)$ has a bounded derivative, and for all $\mathbf{i} \ne \mathbf{j}$ and some constant $M > 0$,

$$|f_{\mathbf{j}|\mathbf{i}}(y|x)| \le M,$$

where $f_{\mathbf{j}|\mathbf{i}}(y|x)$ is the conditional density of $X_{\mathbf{j}}$ given $X_{\mathbf{i}}$.

Assumption (A2). The random field $\{X_{\mathbf{i}} : \mathbf{i} \in \mathbf{Z}^N\}$ satisfies (1) with $\varphi(u) = O(u^{-\theta})$ for some $\theta > 2N$.

Under Assumption (A2), we can take $\beta$ such that $N\theta^{-1} < \beta < 1/2$; then $\sum_{i=1}^{\infty} i^{N-1} \varphi^{\beta}(i) < \infty$. Carefully checking the proof of Theorem 3.1 in Carbon et al. [1], we find that conditions (2) and (3) are not used there; in fact, the proof only uses the positive constant $h(1,1)$. Therefore, by Theorem 3.1 in Carbon et al. [1], we obtain the following result on the asymptotic variance.

Proposition 1. Suppose that Assumptions (A1) and (A2) are satisfied. Then, for $x \in [(k_0 - 1/2)b_n, (k_0 + 1/2)b_n)$, we have

$$\hat{\mathbf{n}} b_n \mathrm{Var}(f_n(x)) = \sigma_n^2(x) + o(1), \quad (6)$$

where

$$\sigma_n^2(x) = \left[ \frac{1}{2} + 2\left( k_0 - \frac{x}{b_n} \right)^2 \right] f(x). \quad (7)$$

It should be noted that, as in Remark 3 of El Machkouri [23], the asymptotic variance $\sigma_n^2(x)$ should be $\left(1/2 + 2(k_0 - x/b_n)^2\right) f(x)$ rather than $\left(1/2 + (2k_0 - x/b_n)^2\right) f(x)$.

Let $S_n = (\hat{\mathbf{n}} b_n)^{1/2} [f_n(x) - E f_n(x)] \sigma_n^{-1}(x)$, let $F_{S_n}(u) = P(S_n < u)$, and let $\Phi(u)$ denote the distribution function of $N(0,1)$. Now we give our main results as follows.

Theorem 1. Suppose that Assumptions (A1) and (A2) hold. Assume that there exist integers $p = p_{\mathbf{n}}$ and $q = q_{\mathbf{n}}$ such that

$$\tau_{1,\mathbf{n}} \to 0, \quad \tau_{2,\mathbf{n}} \to 0, \quad \tau_{3,\mathbf{n}} \to 0, \quad (8)$$

where $\tau_{1,\mathbf{n}} = q p^{-1}$, $\tau_{2,\mathbf{n}} = (\hat{\mathbf{n}} b_n)^{-1/2} p^N$ and $\tau_{3,\mathbf{n}} = (\hat{\mathbf{n}} b_n^{-1})^{1/2} q^{-\theta} h(\hat{\mathbf{n}}, p^N)$. Then, for $x$ such that $f(x) > 0$ and as $\mathbf{n} \to \infty$, we have

$$\sup_{u \in \mathbf{R}} |F_{S_n}(u) - \Phi(u)| = O(\tau_{\mathbf{n}}), \quad (9)$$

where $\tau_{\mathbf{n}} = \tau_{1,\mathbf{n}}^{1/3} + \tau_{2,\mathbf{n}} + \tau_{3,\mathbf{n}}^{1/2} + \tau_{4,\mathbf{n}}^{1/3}$ and $\tau_{4,\mathbf{n}} = b_n^{-1} p^N q^{-\theta}$.

Remark 1. In the theorem above, we do not need to assume that $\tau_{4,\mathbf{n}} \to 0$, because $0 \le \tau_{4,\mathbf{n}} \le C \tau_{2,\mathbf{n}} \tau_{3,\mathbf{n}} \to 0$ by (8).

Theorem 1 provides a general Berry-Esseen bound for the frequency polygon estimator. Specific bounds can be obtained by choosing different $b_n$, $p$ and $q$.

Theorem 2. Suppose that Assumptions (A1) and (A2) hold. Let $b_n = C \hat{\mathbf{n}}^{-\nu}$ for some $\nu \in (0,1)$. For some $\varepsilon \in (0,1)$, denote $\eta_1 = \frac{(1+\nu)N}{(1-\nu)\varepsilon} + \frac{2(1-\varepsilon)N}{(1+3N)\varepsilon}$, $\eta_2 = \eta_1 + \frac{(\varepsilon+3N)N}{(1+3N)\varepsilon}$ and $\eta_3 = \eta_1 + \frac{4N\tilde{k}}{(1-\nu)\varepsilon}$.

1) If $h \equiv 1$ and

$$\theta \ge \max\{2N, \eta_1\}, \quad (10)$$

2) or if (2) is satisfied and

$$\theta \ge \max\{2N, \eta_2\}, \quad (11)$$

3) or if (3) is satisfied and

$$\theta \ge \max\{2N, \eta_3\}, \quad (12)$$

then, for $x$ such that $f(x) > 0$ and as $\mathbf{n} \to \infty$, we have

$$\sup_{u \in \mathbf{R}} |F_{S_n}(u) - \Phi(u)| = O\left( \hat{\mathbf{n}}^{-\frac{(1-\nu)(1-\varepsilon)}{2(1+3N)}} \right). \quad (13)$$

Carbon [24] proved that, when $\theta > 2N + 3/2$, the optimal bin width for the asymptotic mean square error is

$$b_{opt} = 2 \left( \frac{15}{49 R_2(f)} \right)^{1/5} \hat{\mathbf{n}}^{-1/5}, \quad (14)$$

where $R_2(f) = \int [f''(x)]^2 dx$. For the optimal bin width, it is easy to get the following result from Theorem 2.
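As a purely illustrative aside (not from the paper), the following Python snippet evaluates (14) for a standard normal density, for which $R_2(f) = \int [f''(x)]^2 dx = 3/(8\sqrt{\pi})$; for other densities $R_2(f)$ would have to be computed or estimated.

```python
import numpy as np

def optimal_bin_width(n_hat, R2):
    """Optimal bin width (14): b_opt = 2 * (15 / (49 * R2))**(1/5) * n_hat**(-1/5)."""
    return 2.0 * (15.0 / (49.0 * R2)) ** 0.2 * n_hat ** (-0.2)

# For the standard normal density, R2(f) = 3 / (8 * sqrt(pi)) ~ 0.2116.
R2_normal = 3.0 / (8.0 * np.sqrt(np.pi))
for n_hat in (100, 1000, 10000):
    print(n_hat, round(optimal_bin_width(n_hat, R2_normal), 3))
# prints bin widths of roughly 0.86, 0.54 and 0.34
```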

Corollary 1. Suppose that Assumptions (A1) and (A2) hold and $h \equiv 1$. Let $b_n = C\hat{\mathbf{n}}^{-1/5}$. 1) If

$$\theta \ge \max\left\{ 2N, \frac{3N}{2\varepsilon} + \frac{2(1-\varepsilon)N}{(1+3N)\varepsilon} \right\} \quad (15)$$

for some $\varepsilon \in (0,1)$, then, for $x$ such that $f(x) > 0$,

$$\sup_{u \in \mathbf{R}} |F_{S_n}(u) - \Phi(u)| = O\left( \hat{\mathbf{n}}^{-\frac{2(1-\varepsilon)}{5(1+3N)}} \right). \quad (16)$$

2) If $\varphi(u)$ tends to zero exponentially fast as $u$ tends to infinity, then, for $x$ such that $f(x) > 0$,

$$\sup_{u \in \mathbf{R}} |F_{S_n}(u) - \Phi(u)| = O\left( \hat{\mathbf{n}}^{-\frac{2}{5(1+3N)}} \right). \quad (17)$$
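For concreteness, a worked evaluation of the exponent in (17) (not stated in the original) for the lowest dimensions:

$$N = 1: \ \hat{\mathbf{n}}^{-\frac{2}{5(1+3)}} = \hat{\mathbf{n}}^{-1/10}, \qquad N = 2: \ \hat{\mathbf{n}}^{-\frac{2}{5(1+6)}} = \hat{\mathbf{n}}^{-2/35}.$$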

Remark 2. The asymptotic normality of the frequency polygon under strong mixing conditions was established by Carbon et al. [1] and El Machkouri [23]. As far as we know, however, the convergence rate of this asymptotic normality has not been studied before. Our conclusions make an effort in this direction.

3. Lemmas

In the later proofs, we need upper bounds for the covariance and variance of dependent variables. The following two lemmas give such bounds for the covariance and the variance, respectively.

Lemma 1 (Roussas and Ioannides [25]). Suppose that $\xi$ and $\eta$ are $\mathcal{B}(S)$-measurable and $\mathcal{B}(S')$-measurable random variables, respectively. If $|\xi| \le C_1$ a.s. and $|\eta| \le C_2$ a.s., then

$$|E(\xi\eta) - (E\xi)(E\eta)| \le 4 C_1 C_2 \alpha(\mathrm{dist}(S, S')). \quad (18)$$

Let

$$Y_{\mathbf{i},k} = I((k-1)b_n \le X_{\mathbf{i}} < k b_n), \quad \tilde{Y}_{\mathbf{i},k} = Y_{\mathbf{i},k} - E Y_{\mathbf{i},k}. \quad (19)$$

Lemma 2 (Gao et al. [26]). Let Assumptions (A1) and (A2) be satisfied. Suppose that the integer vectors $\mathbf{a} = (a_1, a_2, \ldots, a_N)$, $\mathbf{m} = (m_1, m_2, \ldots, m_N)$ and $\mathbf{n} = (n_1, n_2, \ldots, n_N)$ satisfy $0 \le a_i < a_i + m_i \le n_i$ for $1 \le i \le N$. Then there exists a positive constant $C$, not depending on $\mathbf{n}$, $\mathbf{a}$ and $\mathbf{m}$, such that

$$E\left( \sum_{\mathbf{a} \le \mathbf{i} \le \mathbf{a}+\mathbf{m}} \tilde{Y}_{\mathbf{i},k} \right)^2 \le C \hat{\mathbf{m}} b_n. \quad (20)$$

Lemma 3 (Lemma 3.7 in Yang [27]). Suppose that $\{\zeta_n : n \ge 1\}$ and $\{\eta_n : n \ge 1\}$ are two sequences of random variables, $\{\gamma_n : n \ge 1\}$ is a sequence of positive constants, and $\gamma_n \to 0$. If

$$\sup_u |F_{\zeta_n}(u) - \Phi(u)| = O(\gamma_n), \quad (21)$$

then for any $\varepsilon > 0$,

$$\sup_u |F_{\zeta_n + \eta_n}(u) - \Phi(u)| = O(\gamma_n + \varepsilon + P(|\eta_n| \ge \varepsilon)). \quad (22)$$

4. Proofs

Proof of Theorem 1. We use the "small" and "big" block methodology, similar to that of Carbon et al. [1]. For $x \in ((k_0 - 1/2)b_n, (k_0 + 1/2)b_n)$, define

$$Z_{\mathbf{i},k_0} = b_n^{-1/2} \left\{ \left( \frac{1}{2} + k_0 - \frac{x}{b_n} \right) Y_{\mathbf{i},k_0} + \left( \frac{1}{2} - k_0 + \frac{x}{b_n} \right) Y_{\mathbf{i},k_0+1} \right\} \quad (23)$$

and $\tilde{Z}_{\mathbf{i},k_0} = Z_{\mathbf{i},k_0} - E Z_{\mathbf{i},k_0}$. Then

$$S_n(x) = \hat{\mathbf{n}}^{-1/2} \sum_{\mathbf{1} \le \mathbf{i} \le \mathbf{n}} \tilde{Z}_{\mathbf{i},k_0}. \quad (24)$$

Now we decompose $S_n(x)$ into a sum of large blocks and a sum of small blocks. Following the blocking method, we assume $q < p$ and that $p, q$ satisfy (8). Assume that, for some integer vector $\mathbf{r} = (r_1, r_2, \ldots, r_N)$, we have $n_1 = r_1(p+q), \ldots, n_N = r_N(p+q)$. If this is not the case, there is a remainder term in the block decomposition, but it does not change the proof much. For $\mathbf{1} \le \mathbf{j} \le \mathbf{r}$, let

$$U(1, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+1; \\ 1 \le k \le N}}^{(j_k-1)(p+q)+p} \tilde{Z}_{\mathbf{i},k_0},$$

$$U(2, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+1; \\ 1 \le k \le N-1}}^{(j_k-1)(p+q)+p} \; \sum_{i_N = (j_N-1)(p+q)+p+1}^{j_N(p+q)} \tilde{Z}_{\mathbf{i},k_0},$$

$$U(3, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+1; \\ 1 \le k \le N-2}}^{(j_k-1)(p+q)+p} \; \sum_{i_{N-1} = (j_{N-1}-1)(p+q)+p+1}^{j_{N-1}(p+q)} \; \sum_{i_N = (j_N-1)(p+q)+1}^{(j_N-1)(p+q)+p} \tilde{Z}_{\mathbf{i},k_0},$$

$$U(4, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+1; \\ 1 \le k \le N-2}}^{(j_k-1)(p+q)+p} \; \sum_{i_{N-1} = (j_{N-1}-1)(p+q)+p+1}^{j_{N-1}(p+q)} \; \sum_{i_N = (j_N-1)(p+q)+p+1}^{j_N(p+q)} \tilde{Z}_{\mathbf{i},k_0},$$

and so on. Note that

$$U(2^N - 1, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+p+1; \\ 1 \le k \le N-1}}^{j_k(p+q)} \; \sum_{i_N = (j_N-1)(p+q)+1}^{(j_N-1)(p+q)+p} \tilde{Z}_{\mathbf{i},k_0}.$$

Finally,

$$U(2^N, \mathbf{n}, \mathbf{j}) = \hat{\mathbf{n}}^{-1/2} \sum_{\substack{i_k = (j_k-1)(p+q)+p+1; \\ 1 \le k \le N}}^{j_k(p+q)} \tilde{Z}_{\mathbf{i},k_0}.$$

For each integer $i \in [1, 2^N]$, define

$$T(i, \mathbf{n}) = \sum_{\mathbf{1} \le \mathbf{j} \le \mathbf{r}} U(i, \mathbf{n}, \mathbf{j}), \quad (25)$$

and let $B_n = \sum_{i=2}^{2^N} T(i, \mathbf{n})$. Then

$$S_n(x) = T(1, \mathbf{n}) + B_n. \quad (26)$$

Enumerate the random variables $\{U(1, \mathbf{n}, \mathbf{j}) : \mathbf{1} \le \mathbf{j} \le \mathbf{r}\}$ in an arbitrary manner and refer to them as $V_1, V_2, \ldots, V_{\hat{\mathbf{r}}}$. Note that $|V_i| \le C \hat{\mathbf{n}}^{-1/2} p^N b_n^{-1/2}$. Using Theorem 4 in Rio [28] or Lemma 4.5 in Carbon et al. [29] [30], there exist independent random variables $\tilde{V}_1, \tilde{V}_2, \ldots, \tilde{V}_{\hat{\mathbf{r}}}$, independent of $V_1, V_2, \ldots, V_{\hat{\mathbf{r}}}$, such that $\tilde{V}_i$ has the same law as $V_i$ and

$$E|V_i - \tilde{V}_i| \le C \hat{\mathbf{n}}^{-1/2} p^N b_n^{-1/2} h((\hat{\mathbf{r}}-1)p^N, p^N) \varphi(q) \le C \hat{\mathbf{n}}^{-1/2} p^N b_n^{-1/2} h(\hat{\mathbf{n}}, p^N) \varphi(q). \quad (27)$$

Let $\tilde{T}(1, \mathbf{n}) = \sum_{i=1}^{\hat{\mathbf{r}}} \tilde{V}_i$ and $A_n = T(1, \mathbf{n}) - \tilde{T}(1, \mathbf{n})$. Thus

$$S_n(x) = \tilde{T}(1, \mathbf{n}) + A_n + B_n. \quad (28)$$

By Lemma 3, it is sufficient to show that

$$P(|A_n| > \tau_{3,\mathbf{n}}^{1/2}) = O(\tau_{3,\mathbf{n}}^{1/2}), \quad (29)$$

$$P(|B_n| > \tau_{1,\mathbf{n}}^{1/3} + \tau_{4,\mathbf{n}}^{1/3}) = O(\tau_{1,\mathbf{n}}^{1/3} + \tau_{4,\mathbf{n}}^{1/3}), \quad (30)$$

and

$$\sup_{u \in \mathbf{R}} |F_{\tilde{T}(1,\mathbf{n})}(u) - \Phi(u)| = O(\tau_{2,\mathbf{n}}). \quad (31)$$

Obviously, from (27),

$$P(|A_n| > \tau_{3,\mathbf{n}}^{1/2}) \le C \tau_{3,\mathbf{n}}^{-1/2} \sum_{i=1}^{\hat{\mathbf{r}}} E|V_i - \tilde{V}_i| \le C \tau_{3,\mathbf{n}}^{-1/2} \hat{\mathbf{r}} \hat{\mathbf{n}}^{-1/2} p^N b_n^{-1/2} h(\hat{\mathbf{n}}, p^N) \varphi(q) \le C \tau_{3,\mathbf{n}}^{-1/2} (\hat{\mathbf{n}} b_n^{-1})^{1/2} q^{-\theta} h(\hat{\mathbf{n}}, p^N) = C \tau_{3,\mathbf{n}}^{1/2}, \quad (32)$$

which implies (29). Now consider

$$P(|B_n| > \tau_{1,\mathbf{n}}^{1/3} + \tau_{4,\mathbf{n}}^{1/3}) \le \sum_{i=2}^{2^N} P(|T(i, \mathbf{n})| > \tau_{1,\mathbf{n}}^{1/3} + \tau_{4,\mathbf{n}}^{1/3}) \le C (\tau_{1,\mathbf{n}}^{1/3} + \tau_{4,\mathbf{n}}^{1/3})^{-2} \sum_{i=2}^{2^N} E T^2(i, \mathbf{n}). \quad (33)$$

Note that

$$E T^2(2, \mathbf{n}) = \hat{\mathbf{r}} E U^2(2, \mathbf{n}, \mathbf{j}) + \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \mathrm{Cov}(U(2, \mathbf{n}, \mathbf{j}), U(2, \mathbf{n}, \mathbf{j}')) =: \Lambda_1 + \Lambda_2. \quad (34)$$

By Lemma 2,

$$\Lambda_1 \le C \hat{\mathbf{r}} \hat{\mathbf{n}}^{-1} b_n^{-1} p^{N-1} q b_n \le C p^{-1} q = C \tau_{1,\mathbf{n}}. \quad (35)$$

Define $J(2, \mathbf{n}, \mathbf{j}) = \{\mathbf{i} : (j_k-1)(p+q)+1 \le i_k \le (j_k-1)(p+q)+p, 1 \le k \le N-1, \ (j_N-1)(p+q)+p+1 \le i_N \le j_N(p+q)\}$. By Lemma 1,

$$\begin{aligned} |\Lambda_2| &\le \hat{\mathbf{n}}^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(2,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(2,\mathbf{n},\mathbf{j}')} |\mathrm{Cov}(\tilde{Z}_{\mathbf{i},k_0}, \tilde{Z}_{\mathbf{i}',k_0})| \\ &\le C \hat{\mathbf{n}}^{-1} b_n^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(2,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(2,\mathbf{n},\mathbf{j}')} \varphi(\|\mathbf{i} - \mathbf{i}'\|) \le C \hat{\mathbf{n}}^{-1} b_n^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(2,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(2,\mathbf{n},\mathbf{j}')} \varphi(\|\mathbf{j} - \mathbf{j}'\| q) \\ &\le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N-2} q^2 \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} (\|\mathbf{j} - \mathbf{j}'\| q)^{-\theta} \le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N-2} q^2 \hat{\mathbf{r}} q^{-\theta} \sum_{\mathbf{1} \le \mathbf{j} \le \mathbf{r}} \|\mathbf{j}\|^{-\theta} \\ &\le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N} \hat{\mathbf{r}} q^{-\theta} (q p^{-1})^2 \le C b_n^{-1} p^N q^{-\theta} = C \tau_{4,\mathbf{n}}. \quad (36) \end{aligned}$$

Combining (34)-(36), we have

$$E T^2(2, \mathbf{n}) \le C(\tau_{1,\mathbf{n}} + \tau_{4,\mathbf{n}}). \quad (37)$$

Similarly, $E T^2(i, \mathbf{n}) \le C(\tau_{1,\mathbf{n}} + \tau_{4,\mathbf{n}})$ for $3 \le i \le 2^N$. Thus, we obtain (30) from (33).

Finally, we show (31). Clearly,

$$\mathrm{Var}(T(1, \mathbf{n})) = \hat{\mathbf{r}} \mathrm{Var}(V_1) + \sum_{\substack{1 \le t, t' \le \hat{\mathbf{r}} \\ t \ne t'}} \mathrm{Cov}(V_t, V_{t'}). \quad (38)$$

Define $J(1, \mathbf{n}, \mathbf{j}) = \{\mathbf{i} : (j_k-1)(p+q)+1 \le i_k \le (j_k-1)(p+q)+p, 1 \le k \le N\}$. Recalling (36), we have

$$\begin{aligned} \left| \sum_{\substack{1 \le t, t' \le \hat{\mathbf{r}} \\ t \ne t'}} \mathrm{Cov}(V_t, V_{t'}) \right| &\le \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} |\mathrm{Cov}(U(1, \mathbf{n}, \mathbf{j}), U(1, \mathbf{n}, \mathbf{j}'))| \le \hat{\mathbf{n}}^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(1,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(1,\mathbf{n},\mathbf{j}')} |\mathrm{Cov}(\tilde{Z}_{\mathbf{i},k_0}, \tilde{Z}_{\mathbf{i}',k_0})| \\ &\le C \hat{\mathbf{n}}^{-1} b_n^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(1,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(1,\mathbf{n},\mathbf{j}')} \varphi(\|\mathbf{i} - \mathbf{i}'\|) \le C \hat{\mathbf{n}}^{-1} b_n^{-1} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} \; \sum_{\mathbf{i} \in J(1,\mathbf{n},\mathbf{j}), \, \mathbf{i}' \in J(1,\mathbf{n},\mathbf{j}')} \varphi(\|\mathbf{j} - \mathbf{j}'\| q) \\ &\le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N} \sum_{\substack{\mathbf{1} \le \mathbf{j}, \mathbf{j}' \le \mathbf{r} \\ \mathbf{j} \ne \mathbf{j}'}} (\|\mathbf{j} - \mathbf{j}'\| q)^{-\theta} \le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N} \hat{\mathbf{r}} q^{-\theta} \sum_{\mathbf{1} \le \mathbf{j} \le \mathbf{r}} \|\mathbf{j}\|^{-\theta} \le C \hat{\mathbf{n}}^{-1} b_n^{-1} p^{2N} \hat{\mathbf{r}} q^{-\theta}, \quad (39) \end{aligned}$$

and by Lemma 2

$$\hat{\mathbf{r}} \mathrm{Var}(V_1) = \hat{\mathbf{r}} \mathrm{Var}(U(1, \mathbf{n}, \mathbf{1})) \le C \hat{\mathbf{r}} \hat{\mathbf{n}}^{-1} p^N \le C. \quad (40)$$

Combining (38)-(40) yields $\mathrm{Var}(T(1, \mathbf{n})) = \hat{\mathbf{r}} \mathrm{Var}(V_1) + o(1)$ and $\mathrm{Var}(T(1, \mathbf{n})) \le C$, so that $\mathrm{Var}(S_n) = \mathrm{Var}(T(1, \mathbf{n}) + B_n) = \mathrm{Var}(T(1, \mathbf{n})) + o(1)$, since $E B_n^2 \to 0$. Hence

$$\mathrm{Var}(\tilde{T}(1, \mathbf{n})) = \hat{\mathbf{r}} \mathrm{Var}(\tilde{V}_1) = \hat{\mathbf{r}} \mathrm{Var}(V_1) = \mathrm{Var}(T(1, \mathbf{n})) + o(1) = \mathrm{Var}(S_n) + o(1) = \sigma_n^2(x) + o(1). \quad (41)$$

Let $\Delta_n = \{\mathrm{Var}(\tilde{T}(1, \mathbf{n}))\}^{-3/2} \sum_{i=1}^{\hat{\mathbf{r}}} E|\tilde{V}_i|^3$. Note that $\mathrm{Var}(\tilde{T}(1, \mathbf{n})) \ge \sigma_n^2(x)/2 \ge f(x)/4$ for $x \in ((k_0-1/2)b_n, (k_0+1/2)b_n)$. From (40), we have

$$\Delta_n \le C \sum_{i=1}^{\hat{\mathbf{r}}} E|\tilde{V}_i|^3 \le C (\hat{\mathbf{n}} b_n)^{-1/2} p^N \hat{\mathbf{r}} \mathrm{Var}(\tilde{V}_1) \le C (\hat{\mathbf{n}} b_n)^{-1/2} p^N \to 0, \quad (42)$$

which yields (31) by the Berry-Esseen theorem. This completes the proof.

Proof of Theorem 2. In Theorem 1, take $p = [\hat{\mathbf{n}}^{\rho}]$ and $q = [\hat{\mathbf{n}}^{\tau}]$, where $\rho = \frac{(1-\nu)(\varepsilon+3N)}{2N(1+3N)}$ and $\tau = \frac{(1-\nu)\varepsilon}{2N}$ for $0 < \nu < 1$ and $0 < \varepsilon < 1$. Note that $b_n = C \hat{\mathbf{n}}^{-\nu}$. Then

$$\tau_{1,\mathbf{n}} = q p^{-1} = \hat{\mathbf{n}}^{-\frac{3(1-\nu)(1-\varepsilon)}{2(1+3N)}}, \quad (43)$$

$$\tau_{2,\mathbf{n}} = (\hat{\mathbf{n}} b_n)^{-1/2} p^N = \hat{\mathbf{n}}^{-\frac{(1-\nu)(1-\varepsilon)}{2(1+3N)}}, \quad (44)$$

$$\tau_{4,\mathbf{n}} = b_n^{-1} p^N q^{-\theta} = \hat{\mathbf{n}}^{-\left[ \theta\tau - \nu - \frac{(1-\nu)(\varepsilon+3N)}{2(1+3N)} \right]}. \quad (45)$$

First consider case (1), that is, $h \equiv 1$ and condition (10) holds. In this case, we have

$$\tau_{3,\mathbf{n}} = (\hat{\mathbf{n}} b_n^{-1})^{1/2} q^{-\theta} h(\hat{\mathbf{n}}, p^N) = \hat{\mathbf{n}}^{-\frac{(1-\nu)\varepsilon\theta - (1+\nu)N}{2N}}. \quad (46)$$

Condition (10) implies that $\theta \ge \frac{(1+\nu)N}{(1-\nu)\varepsilon} + \frac{2(1-\varepsilon)N}{(1+3N)\varepsilon}$. Combining this with (45) and (46), we get

$$\tau_{3,\mathbf{n}}^{1/2} = O\left( \hat{\mathbf{n}}^{-\frac{(1-\nu)(1-\varepsilon)}{2(1+3N)}} \right), \quad (47)$$

$$\tau_{4,\mathbf{n}}^{1/3} = O\left( \hat{\mathbf{n}}^{-\frac{(1-\nu)(1-\varepsilon)}{2(1+3N)}} \right). \quad (48)$$

From (43), (44), (47) and (48), it is easy to see that

$$\tau_{\mathbf{n}} = \tau_{1,\mathbf{n}}^{1/3} + \tau_{2,\mathbf{n}} + \tau_{3,\mathbf{n}}^{1/2} + \tau_{4,\mathbf{n}}^{1/3} = O\left( \hat{\mathbf{n}}^{-\frac{(1-\nu)(1-\varepsilon)}{2(1+3N)}} \right). \quad (49)$$

This gives the desired result (13). For cases (2) and (3), the proofs are similar to that of case (1). This completes the proof.

5. Conclusion

The frequency polygon estimator has the advantage of simple computation. It saves computational cost for large data sets, so it is a valuable method worth studying. In the existing literature, the asymptotic normality of the frequency polygon estimator has been studied, but its convergence rate had not been established. This paper proves a Berry-Esseen bound for the frequency polygon and derives the convergence rate of asymptotic normality under weaker mixing conditions. In particular, for the optimal bin width $b_{opt} = C\hat{\mathbf{n}}^{-1/5}$, it is shown that the convergence rate of asymptotic normality reaches $\hat{\mathbf{n}}^{-2/(5(1+3N))}$ when the mixing coefficient tends to zero exponentially fast. These conclusions show that the asymptotic normality of the frequency polygon estimator also has a good convergence rate under dependent samples. Therefore, when the sample size is large, the normal distribution can be used to give a good confidence interval estimate.
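As a purely illustrative sketch of that last point (not from the paper; it treats $f_n(x)$ as a plug-in for $f(x)$ in $\sigma_n^2(x)$ from (7), and ignores bias and the effect of spatial dependence on the plug-in step), a normal-approximation pointwise confidence interval could be formed as follows.

```python
import numpy as np
from scipy.stats import norm

def fp_confidence_interval(data, b_n, x, level=0.95):
    """Pointwise normal-approximation CI for f(x) based on the frequency polygon.

    Uses f_n(x) +/- z * sigma_n(x) / sqrt(n_hat * b_n), with sigma_n^2(x) from (7)
    estimated by plugging f_n(x) in place of f(x).  Illustrative only.
    """
    data = np.asarray(data)
    n_hat = data.size
    k0 = int(np.floor(x / b_n + 0.5))
    # histogram heights of the two bins adjacent to x
    v_k0 = np.sum(((k0 - 1) * b_n <= data) & (data < k0 * b_n))
    v_k1 = np.sum((k0 * b_n <= data) & (data < (k0 + 1) * b_n))
    f_k0, f_k1 = v_k0 / (n_hat * b_n), v_k1 / (n_hat * b_n)
    w = 0.5 + k0 - x / b_n
    fn_x = w * f_k0 + (1.0 - w) * f_k1                   # estimator (5)
    sigma2 = (0.5 + 2.0 * (k0 - x / b_n) ** 2) * fn_x    # plug-in version of (7)
    z = norm.ppf(0.5 + level / 2.0)
    half_width = z * np.sqrt(sigma2 / (n_hat * b_n))
    return fn_x - half_width, fn_x + half_width

rng = np.random.default_rng(1)
sample = rng.standard_normal(5000)
print(fp_confidence_interval(sample, b_n=0.35, x=0.0))
```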

Acknowledgements

This research was supported by the Natural Science Foundation of China (11461009) and the Scientific Research Project of the Guangxi Colleges and Universities (KY2015YB345).

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Carbon, M., Francq, C. and Tran, L.T. (2010) Asymptotic Normality of Frequency Polygons for Random Fields. Journal of Statistical Planning and Inference, 140, 502-514.
https://doi.org/10.1016/j.jspi.2009.07.028
[2] Neaderhouser, C.C. (1980) Convergence of Block Spins Defined on Random Fields. Journal of Statistical Physics, 22, 673-684.
https://doi.org/10.1007/BF01013936
[3] Takahata, H. (1983) On the Rates in the Central Limit Theorem for Weakly Dependent Random Fields. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 62, 477-480.
[4] Tran, L.T. (1990) Kernel Density Estimation on Random Fields. Journal of Multivariate Analysis, 34, 37-53.
https://doi.org/10.1016/0047-259X(90)90059-Q
[5] Hallin, M., Lu, Z. and Tran, L.T. (2001) Density Estimation for Spatial Linear Processes. Bernoulli, 7, 657-668.
https://doi.org/10.2307/3318731
[6] Hallin, M., Lu, Z. and Tran, L.T. (2004) Kernel Density Estimation for Spatial Processes: The L1 Theory. Journal of Multivariate Analysis, 88, 61-75.
https://doi.org/10.1016/S0047-259X(03)00060-5
[7] Cheng, T.L., Ho, H.C. and Lu, X. (2008) A Note on Asymptotic Normality of Kernel Estimation for Linear Random Fields on Z2. Journal of Theoretical Probability, 21, 267-286.
https://doi.org/10.1007/s10959-008-0146-x
[8] El Machkouri, M. (2011) Asymptotic Normality for the Parzen-Rosenblatt Density Estimator for Strongly Mixing Random Fields. Statistical Inference for Stochastic Processes, 14, 73-84.
https://doi.org/10.1007/s11203-011-9052-4
[9] El Machkouri, M. (2014) Kernel Density Estimation for Stationary Random Fields. ALEA—Latin American Journal of Probability and Mathematical Statistics, 11, 259-279.
[10] Wang, Y. and Woodroofe, M. (2014) On the Asymptotic Normality of Kernel Density Estimators for Causal Linear Random Fields. Journal of Multivariate Analysis, 123, 201-213.
https://doi.org/10.1016/j.jmva.2013.09.008
[11] Biau, G. and Cadre, B. (2004) Nonparametric Spatial Prediction. Statistical Inference for Stochastic Processes, 7, 327-349.
https://doi.org/10.1023/B:SISP.0000049116.23705.88
[12] Lu, Z. and Chen, X. (2004) Spatial Kernel Regression Estimation: Weak Consistency. Statistics & Probability Letters, 68, 125-136.
https://doi.org/10.1016/j.spl.2003.08.014
[13] Hallin, M., Lu, Z. and Tran, L.T. (2004) Local Linear Spatial Regression. Annals of Statistics, 32, 2469-2500.
https://doi.org/10.1214/009053604000000850
[14] Gao, J., Lu, Z. and Tjostheim, D. (2006) Estimation in Semi-Parametric Spatial Regression. Annals of Statistics, 34, 1395-1435.
https://doi.org/10.1214/009053606000000317
[15] Carbon, M., Francq, C. and Tran, L.T. (2007) Kernel Regression Estimation for Random Fields. Journal of Statistical Planning and Inference, 137, 778-798.
https://doi.org/10.1016/j.jspi.2006.06.008
[16] Dabo-Niang, S. and Yao, A.F. (2007) Kernel Regression Estimation for Continuous Spatial Processes. Mathematical Methods of Statistics, 16, 298-317.
https://doi.org/10.3103/S1066530707040023
[17] Scott, D.W. (1985) Frequency Polygons: Theory and Application. Journal of the American Statistical Association, 80, 348-354.
https://doi.org/10.1080/01621459.1985.10478121
[18] Beirlant, J., Berlinet, A. and Gyorfi, L. (1999) On Piecewise Linear Density Estimators. Statistica Neerlandica, 53, 287-308.
https://doi.org/10.1111/1467-9574.00113
[19] Carbon, M., Garel, B. and Tran, L.T. (1997) Frequency Polygons for Weakly Dependent Processes. Statistics & Probability Letters, 33, 1-13.
https://doi.org/10.1016/S0167-7152(96)00104-6
[20] Yang, X. (2015) Frequency Polygon Estimation of Density Function for Dependent Samples. Journal of the Korean Statistical Society, 44, 530-537.
https://doi.org/10.1016/j.jkss.2015.01.006
[21] Xing, G.D., Yang, S.C. and Liang, X. (2015) On the Uniform Consistency of Frequency Polygons for ψ-Mixing Samples. Journal of the Korean Statistical Society, 44, 179-186.
https://doi.org/10.1016/j.jkss.2014.07.001
[22] Bensaïd, N. and Dabo-Niang, S. (2010) Frequency Polygons for Continuous Random Fields. Statistical Inference for Stochastic Processes, 13, 55-80.
https://doi.org/10.1007/s11203-009-9038-7
[23] El Machkouri, M. (2013) On the Asymptotic Normality of Frequency Polygons for Strongly Mixing Spatial Processes. Statistical Inference for Stochastic Processes, 16, 193-206.
https://doi.org/10.1007/s11203-013-9086-x
[24] Carbon, M. (2006) Polygone des fréquences pour des champs aléatoires. Comptes Rendus Mathematique, 342, 693-696.
https://doi.org/10.1016/j.crma.2006.02.019
[25] Roussas, G.G. and Ioannides, D.A. (1987) Moment Inequalities for Mixing Sequences of Random Variables. Stochastic Analysis and Applications, 5, 61-120.
https://doi.org/10.1080/07362998708809108
[26] Gao, J., Lu, Z. and Tjostheim, D. (2008) Moment Inequalities for Spatial Processes. Statistics and Probability Letters, 78, 687-697.
https://doi.org/10.1016/j.spl.2007.09.032
[27] Yang, S.C. (2003) Uniformly Asymptotic Normality of the Regression Weighted Estimator for Negatively Associated Samples. Statistics and Probability Letters, 62, 101-110.
https://doi.org/10.1016/S0167-7152(02)00427-3
[28] Rio, E. (1995) The Functional Law of the Iterated Logarithm for Stationary Strongly Mixing Sequences. Annals of Probability, 23, 1188-1203.
https://doi.org/10.1214/aop/1176988179
[29] Carbon, M., Tran, L.T. and Wu, B. (1997) Kernel Density Estimation for Random Fields (Density Estimation for Random Fields). Statistics & Probability Letters, 36, 115-125.
https://doi.org/10.1016/S0167-7152(97)00054-0
[30] Carbon, M., Hallin, M. and Tran, L.T. (1996) Kernel Density Estimation for Random Fields: The L1 Theory. Journal of Nonparametric Statistics, 6, 157-170.
https://doi.org/10.1080/10485259608832669
