Limit Distribution of the φ-Divergence Based Change Point Estimator

Abstract

The assumption of stationarity is often too restrictive, especially for long time series. This paper studies the change point problem through a change point estimator based on the φ-divergence, which provides a rich set of distance-like measures between pairs of distributions. The change point problem is considered in the following sub-fields: divergence estimation, testing for homogeneity between two samples, and estimating the time of change. The asymptotic distribution of the change point estimator is obtained as the limiting distribution of a stochastic process within given bounds, using asymptotic results from likelihood theory. The distribution is found to converge to that of a standardized Brownian bridge process.


1. Introduction

Many real-world data are made up of consecutive regimes separated by abrupt changes [1]. Statistical research has shown that, with time, the underlying data generating processes undergo occasional sudden changes [2]. As a result, the assumption of stationarity is often too strong and frequently violated. Stationarity in the strict sense implies time-invariance of the distribution underlying the process. The overall behavior of observations can change over time due to internal systemic changes in distribution dynamics or due to external factors. Modeling time series processes using stationary methods to capture their time-evolving dependence will most likely result in a crude approximation, as abrupt changes fail to be accounted for [3]. Reviewed literature reveals that a single model may not be appropriate for a non-stationary series, and as such various parametric and non-parametric change point estimation methods have been proposed [4] [5] [6] [7] [8]. However, they are limited in different ways, and their suitability depends on the underlying assumptions. A point of interest in all aspects of life is to detect and estimate these changes, as their implications are crucial.

A time series containing change points is assumed to be piecewise stationary, implying that some characteristics of the process change abruptly at unknown points in time. Parametric tests for a change point are mainly based on likelihood ratio statistics, with estimation based on the maximum likelihood method; general results can be found in [4]. In its simplest form, change point detection is the name given to the problem of estimating the point at which the statistical properties of a sequence of observations change [9]. Change point problems can be classified as off-line, which deals only with a fixed sample, or on-line, which considers new information as it is observed. Off-line change point problems deal with fixed sample sizes which are first observed before detection and estimation of change points are carried out. [10] introduced the change point problem within the off-line setting. Since this pioneering work, methodologies for change point detection have been widely researched, with methods extending to techniques for higher order moments within time series data. Change point analysis methods are applicable in a wide range of fields including, but not limited to, climate (climate change), quality management, medicine, finance, and genetics.

For a given set of data $x_1, \ldots, x_n$, a change point is said to occur when there exists a time $\tau \in \{1, \ldots, n-1\}$ such that the statistical properties of $x_1, \ldots, x_\tau$ and $x_{\tau+1}, \ldots, x_n$ are different. If $\tau$ is known, then the two samples need only be compared. However, if $\tau$ is unknown, then it has to be analyzed through change point analysis, which entails both detection and estimation of the change point (change time). The null hypothesis of no change is tested against the alternative that there exists a time at which the distribution characteristics of the series changed. Considering a change in model parameters, the problem is stated as

$$H_0: \theta_1 = \theta_2 = \cdots = \theta_n = \theta_0 \ (\text{unknown}) \quad \text{versus} \quad H_1: \theta_1 = \cdots = \theta_\tau \neq \theta_{\tau+1} = \cdots = \theta_n \quad (1)$$

where τ is unknown and needs to be estimated.

If $\tau < n$, then the process distribution has changed and $\tau$ is referred to as the change point. Assume that there exists $\lambda \in (0,1)$ such that $\tau$ satisfies

$$\tau = \lambda n \quad (2)$$

i.e. $\lambda$ is the fraction that divides the data process at the change point and $n$ is the number of observations in the data set. Hypothesis (1) can then be restated as

$$H_0: \tau = n \ (\lambda = 1) \qquad H_1: \tau < n \ (0 < \lambda < 1) \quad (3)$$

At a given level of significance, if the null hypothesis is rejected, then the process $X$ is said to be locally piecewise stationary and can be approximated by a sequence of stationary processes that may share certain features, such as the general functional form of the distribution. With the assumption that the change time is unknown, [4] gives eight limiting conditions that yield the null distribution of the likelihood ratio test statistic as the supremum of a standardized Brownian bridge. [11] applied these results within a non-parametric framework and obtained similar results. [12] apply the likelihood ratio test within a parametric framework, on the assumption that data are drawn from extreme value distributions. Under the von Mises condition, their test statistic converges weakly in distribution to the supremum of a squared standardized Brownian bridge.

The rest of this paper is organized as follows: Section 2 gives an overview of the change point estimator based on the φ-divergence. Section 3 provides key results for the limit distribution of the divergence based change point estimator, Section 4 gives some simulation results, and finally Section 5 gives the conclusion.

2. Single Change Point Detection and Estimation

The change point problem is addressed by using a 'distance' function between distributions. Given a distance function, a test statistic is constructed that guarantees a non-negative distance between any two distributions based on a sample of size $n$. Consider a parametric model $\{f_\theta : \theta \in \Theta\}$, where $\Theta$ is the parameter space, defined on a data set of size $n$. Let $X_1, \ldots, X_n$ be random variables with probability densities $f(x;\theta_1), \ldots, f(x;\theta_n)$ with respect to a σ-finite measure $\mu$, with $F(x;\theta)$ generating distinct measures for distinct $\theta \in \Theta$.

Definition 2.1 (φ-divergence). Let $F_{\theta_1}$ and $F_{\theta_2}$ be two probability distributions. Define the φ-divergence between the two distributions as

$$D_\phi(F_{\theta_1}, F_{\theta_2}) = D_\phi(\theta_1, \theta_2)$$

The broader family of f-divergences (φ-divergences) takes the general form

$$D_\phi(F_{\theta_1}, F_{\theta_2}) = D_\phi(\theta_1, \theta_2) = \int \phi\!\left(\frac{dF_{\theta_1}}{dF_{\theta_2}}\right) dF_{\theta_2} = \int_x f_{\theta_2}(x)\, \phi\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) d\mu(x) = E_{\theta_2}\!\left[\phi\!\left(\frac{f_{\theta_1}(X)}{f_{\theta_2}(X)}\right)\right], \quad \phi \in \Phi \quad (4)$$

where $\Phi$ is the class of all convex functions $\phi(t)$, $t > 0$, satisfying $\phi(1) = 0$. To avoid indeterminate expressions at $t = 0$, the following conventions for the functions $\phi$ in the general definition of φ-divergence statistics are given in [13]:

$$0\,\phi\!\left(\frac{0}{0}\right) = 0, \qquad 0\,\phi\!\left(\frac{p}{0}\right) = p \lim_{u \to \infty} \frac{\phi(u)}{u} \quad (5)$$

Assumption 1. The function $\phi \in \Phi$, $\phi: [0, \infty) \to (-\infty, +\infty]$, is convex and continuous. Its restriction to $(0, \infty)$ is finite and twice continuously differentiable, with $\phi(1) = \phi'(1) = 0$ and $\phi''(1) = 1$.

Different choices of $\phi$ result in many divergences that play important roles in statistics. In general $D_\phi(\theta_1, \theta_2) \neq D_\phi(\theta_2, \theta_1)$; hence divergence measures are not distance measures but quantify a difference between two probability measures, whence the term "pseudo-distance". More generally, a divergence measure is a function of two probability density (or distribution) functions which takes non-negative values and equals zero only when the two arguments (distributions) are the same. A divergence measure grows larger as the two distributions move further apart. Hence, a large divergence implies departure from the null hypothesis.
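To make the definition concrete, the following minimal sketch (not from the paper) numerically evaluates $D_\phi$ for a few common choices of $\phi$, here between two hypothetical normal densities standing in for the pre- and post-change distributions; the function names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def phi_divergence(phi, f1, f2, lo=-20.0, hi=20.0):
    """Numerically compute D_phi(F1, F2) = integral of f2(x) * phi(f1(x)/f2(x)) dx."""
    integrand = lambda x: f2(x) * phi(f1(x) / f2(x))
    val, _ = integrate.quad(integrand, lo, hi)
    return val

f1 = norm(loc=0.0, scale=1.0).pdf   # hypothetical density before the change
f2 = norm(loc=1.0, scale=1.5).pdf   # hypothetical density after the change

# Each phi below is convex with phi(1) = 0, as required by the class Phi.
choices = {
    "Kullback-Leibler":  lambda t: t * np.log(t),
    "Reverse KL":        lambda t: -np.log(t),
    "Squared Hellinger": lambda t: (np.sqrt(t) - 1.0) ** 2,
    "Pearson chi^2":     lambda t: (t - 1.0) ** 2,
}
for name, phi in choices.items():
    print(f"{name}: {phi_divergence(phi, f1, f2):.4f}")
```

All four values are non-negative and vanish only when the two densities coincide, illustrating the pseudo-distance property.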

Based on the divergence (4), a change point estimator can be constructed as

$$D(\lambda) = \arg\max_{a < \tau < b} \lambda(1 - \lambda)\, \frac{2}{\phi''(1)}\, D_\phi(\hat\theta_1, \hat\theta_2) \quad (6)$$

To test for a possible change in the distribution of $X$, it is natural to compare the distribution of the first $\tau$ observations to that of the last $(n - \tau)$, since the location of the change time is unknown. When $\tau$ is near the boundary points, an estimate computed from a large number of observations $(n - \tau)$ is compared to an estimate computed from a small number of observations $\tau$. This may result in erratic behavior of the test statistic due to instability of the parameter estimates [6]. [14] provides the following result.

Theorem 1. Suppose that $\lambda$ maximizes the test statistic over $[0,1]$. Then under the null hypothesis,

$$\sup_{\lambda \in [\varepsilon, 1-\varepsilon]} D(\lambda) = O_p(1) \ \text{ for every } \varepsilon > 0, \qquad \sup_{\lambda \in [0,1]} D(\lambda) \xrightarrow{\ p\ } \infty \quad (7)$$

For the proof of Theorem 1, see [14].

If $\lambda$ is not bounded away from zero and one, the test statistic does not converge in distribution. However, fixed critical values can be obtained for increasing sample sizes when $\lambda$ is bounded away from zero and one, which yields significant power gains when the change point lies in the trimmed set $\Lambda$. Let $\varepsilon > 0$ be small enough such that $\lambda \in (\varepsilon, 1 - \varepsilon)$.

Suppose there exist constants $a, b$ such that unique maximum likelihood estimates $\hat\theta_1, \hat\theta_2$ exist for all $a \le \tau \le b$. The test statistic is then maximized over $\lambda$ such that

$$D_{n\tau} = \max_{\varepsilon n < \tau < (1-\varepsilon)n} \frac{\tau}{n}\left(1 - \frac{\tau}{n}\right) \frac{2}{\phi''(1)}\, D_\phi(\hat\theta_1, \hat\theta_2) \quad (8)$$

where $a = \varepsilon n$ and $b = (1 - \varepsilon)n$.

The trimming parameter is usually taken to satisfy 0 < ε < 0.5 [15].

Let $N(\varepsilon) = \{n\varepsilon, n\varepsilon + 1, \ldots, (1-\varepsilon)n\}$ be the set of all values over which the test statistic (8) is maximized. The change time $\tau$ is estimated by the least value of $\tau$ that maximizes the test statistic (8):

$$\hat\tau = \min\left\{\tau : D_{n\tau} = \max_{\tau \in N(\varepsilon)} \frac{\tau}{n}\left(1 - \frac{\tau}{n}\right) \frac{2}{\phi''(1)}\, D_\phi(\hat\theta_1, \hat\theta_2)\right\} \quad (9)$$

where $\hat\theta_1, \hat\theta_2$ are the parameter estimates of $\theta_1$ and $\theta_2$ respectively, both dependent on the change point $\tau$: $\hat\theta_1$ is estimated from the observations before the change point and $\hat\theta_2$ from the observations after it. The difference between the two estimators gives an indication of the difference between the two samples, and hence of departure from the null hypothesis.
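The following sketch (illustrative, not the authors' code) implements the trimmed scan of (8)-(9) for an assumed normal working model, where the maximum likelihood estimates and the KL divergence are available in closed form; the weight $\tau(n-\tau)/n$ follows the scaling that appears in (19) below.

```python
import numpy as np

def kl_normal(m1, s1, m2, s2):
    """Closed-form KL divergence between N(m1, s1^2) and N(m2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def change_point_estimate(x, eps=0.1):
    n = len(x)
    taus = np.arange(int(np.ceil(eps * n)), int(np.floor((1 - eps) * n)))
    stats = []
    for tau in taus:
        m1, s1 = x[:tau].mean(), x[:tau].std()   # MLEs before tau
        m2, s2 = x[tau:].mean(), x[tau:].std()   # MLEs after tau
        w = tau * (n - tau) / n                  # weight as in (19)
        stats.append(w * 2.0 * kl_normal(m1, s1, m2, s2))  # phi''(1) = 1
    stats = np.asarray(stats)
    tau_hat = taus[np.argmax(stats)]             # least maximizer, as in (9)
    return tau_hat, stats.max()

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 250), rng.normal(1, 2, 250)])
print(change_point_estimate(x))  # estimate should land near 250
```

Here `np.argmax` returns the first (least) maximizer, matching the convention in (9).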

3. Main Result

Consider a second order Taylor expansion of $D_\phi(\hat\theta_1, \hat\theta_2)$ about the true parameter values $\theta_1, \theta_2$. For $i = 1, \ldots, d$,

$$D_\phi(\hat\theta_1, \hat\theta_2) = D_\phi(\theta_1, \theta_2) + \sum_{i=1}^d \frac{\partial D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i}} (\hat\theta_{1i} - \theta_{1i}) + \sum_{i=1}^d \frac{\partial D_\phi(\theta_1, \theta_2)}{\partial \theta_{2i}} (\hat\theta_{2i} - \theta_{2i}) + \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{1j}} (\hat\theta_{1i} - \theta_{1i})(\hat\theta_{1j} - \theta_{1j}) + \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{2i} \partial \theta_{2j}} (\hat\theta_{2i} - \theta_{2i})(\hat\theta_{2j} - \theta_{2j}) + \sum_{i=1}^d \sum_{j=1}^d \frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{2j}} (\hat\theta_{1i} - \theta_{1i})(\hat\theta_{2j} - \theta_{2j}) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \quad (10)$$

Under the null hypothesis $\theta_1 = \theta_2$, so $D_\phi(\theta_1, \theta_2) = 0$ and

$$\frac{\partial D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i}} = \int \phi'\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}}\, d\mu(x) = \phi'(1) \int \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}}\, d\mu(x) = \phi'(1)\, \frac{\partial}{\partial \theta_{1i}} \int f_{\theta_1}(x)\, d\mu(x) = 0 \quad (11)$$

This follows from Assumption 1 ($\phi'(1) = 0$) and the fact that $f_{\theta_1}(x)$ is a probability density, so its integral with respect to $\mu$ is constant and the derivative of that integral vanishes.

$$\frac{\partial D_\phi(\theta_1, \theta_2)}{\partial \theta_{2i}} = \int \left[\phi\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2i}} - \phi'\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) \frac{f_{\theta_1}(x)}{f_{\theta_2}(x)} \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2i}}\right] d\mu(x) = \left[\phi(1) - \phi'(1)\right] \int \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2i}}\, d\mu(x) = 0 \quad (12)$$

$$\frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{1j}} = \int \phi''\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}} \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1j}} \frac{1}{f_{\theta_2}(x)}\, d\mu(x) = \phi''(1) \int \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}} \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1j}} \frac{1}{f_{\theta_1}(x)}\, d\mu(x) = \int \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}} \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1j}} \frac{1}{f_{\theta_1}(x)}\, d\mu(x)$$

since $\theta_1 = \theta_2$ and by Assumption 1 ($\phi''(1) = 1$). Similarly,

$$\frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{2i} \partial \theta_{2j}} = \phi''(1) \int \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2i}} \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2j}} \frac{1}{f_{\theta_1}(x)}\, d\mu(x) = \int \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2i}} \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2j}} \frac{1}{f_{\theta_1}(x)}\, d\mu(x) \quad (13)$$

again evaluated at $\theta_1 = \theta_2$ and using Assumption 1. For the cross term,

$$\frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{2j}} = -\int \phi''\!\left(\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right) \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}} \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2j}} \frac{f_{\theta_1}(x)}{f_{\theta_2}(x)^2}\, d\mu(x) = -\phi''(1) \int \frac{\partial f_{\theta_1}(x)}{\partial \theta_{1i}} \frac{\partial f_{\theta_2}(x)}{\partial \theta_{2j}} \frac{1}{f_{\theta_1}(x)}\, d\mu(x) = -\frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{1j}} \quad (14)$$

By definition of the Fisher information matrix,

$$\frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{1j}} = \frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{2i} \partial \theta_{2j}} = I(\theta), \qquad \frac{\partial^2 D_\phi(\theta_1, \theta_2)}{\partial \theta_{1i} \partial \theta_{2j}} = -I(\theta) \quad (15)$$

Equation (10) reduces to

$$\frac{1}{2}(\hat\theta_1 - \theta_1)^\top I(\theta_0)(\hat\theta_1 - \theta_1) + \frac{1}{2}(\hat\theta_2 - \theta_2)^\top I(\theta_0)(\hat\theta_2 - \theta_2) - (\hat\theta_1 - \theta_1)^\top I(\theta_0)(\hat\theta_2 - \theta_2) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \quad (16)$$

Further,

$$\frac{2}{\phi''(1)}\, D_\phi(\hat\theta_1, \hat\theta_2) = (\hat\theta_1 - \theta_1)^\top I(\theta_0)(\hat\theta_1 - \theta_1) + (\hat\theta_2 - \theta_2)^\top I(\theta_0)(\hat\theta_2 - \theta_2) - 2(\hat\theta_1 - \theta_1)^\top I(\theta_0)(\hat\theta_2 - \theta_2) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) = (\hat\theta_1 - \hat\theta_2)^\top I(\theta_0)(\hat\theta_1 - \hat\theta_2) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \quad (17)$$
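As a worked numerical check (not from the paper) of the expansion (17), consider an exponential model with rate $\theta$, where $I(\theta) = 1/\theta^2$ and the KL divergence is available in closed form; the quadratic form should match $\frac{2}{\phi''(1)} D_\phi$ up to the $o(\|\cdot\|^2)$ remainder as the perturbation shrinks.

```python
import numpy as np

theta0 = 2.0
I0 = 1.0 / theta0**2   # Fisher information of Exp(rate = theta) at theta0

for delta in [0.5, 0.1, 0.01]:
    th1, th2 = theta0 + delta, theta0 - delta
    # closed-form KL(Exp(th1) || Exp(th2)) = log(th1/th2) + th2/th1 - 1
    exact = 2.0 * (np.log(th1 / th2) + th2 / th1 - 1.0)   # 2/phi''(1) * D_phi
    quad = (th1 - th2) * I0 * (th1 - th2)                 # Wald quadratic form
    print(f"delta={delta:5.2f}  2*KL={exact:.6f}  quadratic form={quad:.6f}")
# The two columns agree increasingly well as the perturbation shrinks,
# consistent with the o(||.||^2) remainder in (17).
```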

From Equation (17), after scaling by the factor $\frac{\tau(n-\tau)}{n}$ that standardizes the variance of $\hat\theta_1 - \hat\theta_2$, we obtain

$$\frac{\tau(n - \tau)}{n} \left\{ (\hat\theta_1 - \hat\theta_2)^\top \hat{I}(\theta_0) (\hat\theta_1 - \hat\theta_2) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \right\} \quad (18)$$

From (8) and Equations (10)-(18), the test statistic can be expressed as

$$D_{n\tau} = \max_{\tau \in N(\varepsilon)} \frac{\tau(n - \tau)}{n} \left\{ (\hat\theta_1 - \hat\theta_2)^\top \hat{I}(\theta_0) (\hat\theta_1 - \hat\theta_2) + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \right\} \quad (19)$$

Let,

$$\max_{\tau \in N(\varepsilon)} \frac{\tau(n - \tau)}{n} (\hat\theta_1 - \hat\theta_2)^\top \hat{I}(\theta_0) (\hat\theta_1 - \hat\theta_2) = W_{n\tau}$$

Then,

$$\max_{\tau \in N(\varepsilon)} D_{n\tau} = \max_{\tau \in N(\varepsilon)} W_{n\tau} + o(\|\hat\theta_1 - \theta_1\|^2) + o(\|\hat\theta_2 - \theta_2\|^2) \quad (20)$$

Consider the second and third terms on the RHS.

$$\|\hat\theta_1 - \theta_1\|^2 = (\hat\theta_1 - \theta_1)^\top (\hat\theta_1 - \theta_1), \qquad \|\hat\theta_2 - \theta_2\|^2 = (\hat\theta_2 - \theta_2)^\top (\hat\theta_2 - \theta_2) \quad (21)$$

For $n \to \infty$,

$$o(\|\hat\theta_1 - \theta_1\|^2) = o_p(1), \qquad o(\|\hat\theta_2 - \theta_2\|^2) = o_p(1) \quad (22)$$

The distribution of $D_{n\tau}$ therefore coincides asymptotically with that of $W_{n\tau}$, since the second and third terms of (20) are $o_p(1)$.

The change point test statistic thus reduces to a trimmed maximal Wald-type statistic. Consider the following conditions [16] [17].

(C1) Regularity: interchanges of derivative and integral operations are valid, so that

$$E\left\{\frac{\partial}{\partial\theta} \log f(x;\theta)\right\} = 0 \quad (23)$$

$$E\left\{-\frac{\partial^2}{\partial\theta^2} \log f(x;\theta)\right\} = I(\theta) \quad (24)$$

(C2) For $i, j = 1, \ldots, d$, the derivatives

$$\frac{\partial}{\partial\theta_i} f(x;\theta), \qquad \frac{\partial^2}{\partial\theta_i \partial\theta_j} f(x;\theta)$$

exist almost everywhere and satisfy

$$\left|\frac{\partial}{\partial\theta_i} f(x;\theta)\right| \le H_i(x), \qquad \left|\frac{\partial^2}{\partial\theta_i \partial\theta_j} f(x;\theta)\right| \le G_{ij}(x)$$

where

$$\int H_i(x)\, d\mu(x) < \infty, \qquad \int G_{ij}(x)\, d\mu(x) < \infty$$

i.e. the first and second partial derivatives of $f(x;\theta)$ with respect to $\theta$ are bounded by functions with finite integrals.

(C3) For $i, j = 1, \ldots, d$, the derivatives

$$\frac{\partial}{\partial\theta_i} \log f(x;\theta), \qquad \frac{\partial^2}{\partial\theta_i \partial\theta_j} \log f(x;\theta)$$

exist almost everywhere and are such that the information matrix $I(\theta)$ exists, is positive definite throughout $\Theta$, and is continuous in $\theta$. Hence

$$I^{-1}(\theta) = I^{-1/2}(\theta)\, I^{-1/2}(\theta)$$

for some non-singular matrix $I^{-1/2}(\theta)$.

A positive definite matrix is always non-singular (its determinant is positive), implying that the matrix is invertible, i.e. $I^{-1}(\theta) = \Sigma$, the asymptotic variance-covariance matrix.
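A small numerical sketch of this factorization claim (the $2 \times 2$ matrix below is a hypothetical example, not from the paper): a positive definite information matrix is invertible, and its inverse admits a symmetric non-singular square root $I^{-1/2}(\theta)$.

```python
import numpy as np

I_theta = np.array([[2.0, 0.3],
                    [0.3, 1.0]])              # hypothetical I(theta), d = 2

Sigma = np.linalg.inv(I_theta)                # variance-covariance matrix Sigma
w, V = np.linalg.eigh(Sigma)                  # eigendecomposition of Sigma
I_inv_half = V @ np.diag(np.sqrt(w)) @ V.T    # symmetric square root of Sigma

print(np.linalg.det(I_theta) > 0)                    # True: positive definite
print(np.allclose(I_inv_half @ I_inv_half, Sigma))   # True: I^{-1/2} I^{-1/2} = Sigma
```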

(C4) There are constants $a, b$ such that unique maximum likelihood estimates $\hat\theta_1, \hat\theta_2$ can be found for each $a \le \tau \le b$.

(C5) There is an open subset $\Theta_0 \subseteq \Theta$ containing $\theta_0$ such that the derivatives $g_i(x;\theta)$, $g_{ij}(x;\theta)$, $g_{ijk}(x;\theta)$, $1 \le i, j, k \le d$, exist and are continuous in $\theta$ for all $x$ and all $\theta \in \Theta_0$.

(C6) As $\delta \to 0$,

$$E\left\{\sup_{h:\, |h| \le \delta} \left|\frac{\partial^2}{\partial\theta^2} \log f(x;\theta + h) - \frac{\partial^2}{\partial\theta^2} \log f(x;\theta)\right|\right\} = \upsilon_\delta \to 0 \quad (25)$$

Theorem 2. Under the null hypothesis, and provided conditions (C1)-(C6) hold, the asymptotic distribution of the test statistic is given by

$$W_{n\tau} \xrightarrow{\ D\ } S_{p,\varepsilon} \quad (26)$$

as $n \to \infty$, with $p = \dim\theta$, where

$$S_{p,\varepsilon} = \sup_{t \in [\varepsilon, 1-\varepsilon]} B_p^2(t)$$

for $B_p(t) = \dfrac{\{W_p(t)^\top W_p(t)\}^{1/2}}{\{t(1-t)\}^{1/2}}$, where $W_p(t)$ is a $p$-dimensional Brownian bridge process.
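A Monte Carlo sketch (not the paper's code) of this limit law: simulate the $p$-dimensional Brownian bridge on a grid, form the standardized squared process $B_p^2(t)$, and take the supremum over the trimmed interval; the empirical quantiles then approximate the critical values of $S_{p,\varepsilon}$. Grid size, replication count and seed are arbitrary choices.

```python
import numpy as np

def sim_sup_bridge(p=2, eps=0.1, n_grid=1000, n_rep=20000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_grid + 1) / n_grid               # grid points in (0, 1]
    mask = (t >= eps) & (t <= 1 - eps)                  # trimmed interval
    sups = np.empty(n_rep)
    for r in range(n_rep):
        dW = rng.normal(0.0, np.sqrt(1.0 / n_grid), (p, n_grid))
        W = np.cumsum(dW, axis=1)                       # p-dimensional Wiener path
        B = W - np.outer(W[:, -1], t)                   # bridge: W(t) - t W(1)
        Bsq = (B[:, mask] ** 2).sum(axis=0) / (t[mask] * (1 - t[mask]))
        sups[r] = Bsq.max()                             # sup of B_p^2(t) on [eps, 1-eps]
    return sups

sups = sim_sup_bridge()
print(np.quantile(sups, [0.90, 0.95, 0.99]))            # approximate critical values
```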

The following result holds for approximating the distribution function via the inverted Laplace transformation:

$$P\left(\sup_{\varepsilon < t < 1-\varepsilon} B_d^2(t) \ge u\right) \approx \frac{1}{\Gamma(d/2)} \left(\frac{u}{2}\right)^{d/2} e^{-u/2} \left(\log\frac{(1-\varepsilon)^2}{\varepsilon^2} \left(1 - \frac{d}{u}\right) + \frac{2}{u} + O(u^{-2})\right) \quad (27)$$

For the proof, see [18]. Rather than considering a fixed trimming value $\varepsilon$ for all sample sizes, the approach of [4] [11] is followed, in which the trimming parameter is a function of the sample size $n$.
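Assuming the reconstruction of (27) above, the critical value $C_\alpha$ can be obtained by numerically inverting the tail approximation; the following sketch does so with a root finder, with the bracket endpoints as ad hoc choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def tail_prob(u, d, eps):
    """Approximate P(sup B_d^2(t) >= u) over [eps, 1-eps], as in (27)."""
    lead = (u / 2.0) ** (d / 2.0) * np.exp(-u / 2.0) / gamma(d / 2.0)
    return lead * (np.log((1 - eps) ** 2 / eps ** 2) * (1 - d / u) + 2.0 / u)

def critical_value(alpha, d, eps):
    # root of tail_prob(u) = alpha; the tail is ~0.37 near u = d and ~0 for large u
    return brentq(lambda u: tail_prob(u, d, eps) - alpha, d + 1e-6, 100.0)

for alpha in [0.10, 0.05, 0.01]:
    print(alpha, round(critical_value(alpha, d=2, eps=0.05), 3))
```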

Critical Values

At any given level of significance, the asymptotic critical values of the test can be estimated, depending on the dimension $d$ of the parameter space. For a bivariate parameter space, i.e. $d = 2$, and $\alpha \in \{10\%, 5\%, 1\%\}$, the asymptotic critical values are presented in Table 1 for sample sizes $n \in \{50, 200, 500, 1000\}$.

Table 1. Asymptotic critical values.

4. Simulation Study

In this section, change point estimation for the generalized Pareto (GP) distribution is considered. For any given finite set of data, at least one of the following occurs at a change point τ: the shape parameter ξ changes by a non-zero quantity; the scale parameter σ changes by a non-zero quantity; or both ξ and σ change by non-zero quantities. A simple change point problem can be formulated in the following way:

$$H_0: X_t \sim GP(\xi_1, \sigma_1) \ \text{ for } t = 1, \ldots, n \qquad H_1: X_t \sim GP(\xi_1, \sigma_1) \ \text{ for } t \le \tau, \quad X_t \sim GP(\xi_2, \sigma_2) \ \text{ for } t > \tau \quad (28)$$

where $\xi_1 \neq \xi_2$, $\sigma_1 \neq \sigma_2$.

Assumption: This work assumes that both parameters of the GP distribution (shape and scale) change at the same time.

Let $\phi(t) = -\log(t)$. The resulting divergence is the Kullback-Leibler (KL) divergence. The KL divergence between two GP distributions is given as

$$D_{KL}(\hat\theta_2; \hat\theta_1) = \log\left(\frac{\hat\sigma_1}{\hat\sigma_2}\right) - (1 + \hat\xi_2) + \left(\frac{1}{\hat\sigma_1} + \frac{\hat\xi_1}{\hat\sigma_1}\right) \int_0^\infty \left(1 + \frac{\hat\xi_2}{\hat\sigma_2} x\right)^{-\frac{1}{\hat\xi_2}} \left(1 + \frac{\hat\xi_1}{\hat\sigma_1} x\right)^{-1} dx \quad (29)$$

The KL divergence is a function of the parameters of two densities (before and after the change point).
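A numerical sketch (not the paper's code) of (29): the integral term is evaluated by quadrature and cross-checked against a direct Monte Carlo estimate of $E_{\theta_2}[\log f_{\theta_2}(X) - \log f_{\theta_1}(X)]$, using scipy's genpareto, which parameterizes the shape as c = ξ and the scale as σ. The parameter values are those of (31) below.

```python
import numpy as np
from scipy import integrate
from scipy.stats import genpareto

def kl_gpd(xi2, s2, xi1, s1):
    """KL divergence between GP(xi2, s2) and GP(xi1, s1), per (29)."""
    integrand = lambda x: (1 + xi2 * x / s2) ** (-1 / xi2) / (1 + xi1 * x / s1)
    integral, _ = integrate.quad(integrand, 0, np.inf)
    return np.log(s1 / s2) - (1 + xi2) + (1 + xi1) / s1 * integral

xi1, s1 = 1.0, 0.1    # parameters before the change, as in (31)
xi2, s2 = 3.0, 0.35   # parameters after the change

print(kl_gpd(xi2, s2, xi1, s1))

# Monte Carlo cross-check of the same quantity
rng = np.random.default_rng(0)
x = genpareto.rvs(c=xi2, scale=s2, size=200_000, random_state=rng)
mc = np.mean(genpareto.logpdf(x, c=xi2, scale=s2)
             - genpareto.logpdf(x, c=xi1, scale=s1))
print(mc)
```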

The change point estimator thus becomes

$$D_{\tau,n} = \max_{\varepsilon n < \tau < (1-\varepsilon)n} \frac{\tau}{n}\left(1 - \frac{\tau}{n}\right) \frac{2}{\phi''(1)}\, D_{KL}(\hat\theta_1; \hat\theta_2) \quad (30)$$

For the simulation study, the following model is considered under the alternative hypothesis:

$$H_1: X_t \sim GP(1, 0.1) \ \text{ for } t \le \tau, \qquad X_t \sim GP(3, 0.35) \ \text{ for } t > \tau \quad (31)$$

The change point τ is fixed at n/2 for n = 100, 200, 500 and 1000. At the 5% level of significance the change point is estimated, and the results are presented in Figures 1-3. The change point process $D_{n\tau}$ takes a hill shape, with the peak observed at the detected change point. To estimate the change time, the estimated critical value is superimposed on the graph, and the change point is taken as the maximizer of the process, provided the maximum exceeds the critical bound (the critical value at the given level of significance). The estimated time of change is also superimposed on the respective plots of the time series data.
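For reference, a simulation sketch (not the authors' code) of this setting: generate a GP sample with a change at τ = n/2, fit the GP parameters on each side of every candidate τ by maximum likelihood (location fixed at zero), and maximize the weighted KL divergence over the trimmed range; the seed and trimming value are arbitrary choices, and the weight follows the scaling in (19).

```python
import numpy as np
from scipy import integrate
from scipy.stats import genpareto

def kl_gpd(xi2, s2, xi1, s1):
    # KL divergence between GP(xi2, s2) and GP(xi1, s1), per (29)
    integrand = lambda x: (1 + xi2 * x / s2) ** (-1 / xi2) / (1 + xi1 * x / s1)
    integral, _ = integrate.quad(integrand, 0, np.inf)
    return np.log(s1 / s2) - (1 + xi2) + (1 + xi1) / s1 * integral

def fit_gpd(x):
    c, _, s = genpareto.fit(x, floc=0.0)     # MLE of (xi, sigma), location fixed at 0
    return c, s

n, tau_true, eps = 500, 250, 0.1
rng = np.random.default_rng(42)
x = np.concatenate([
    genpareto.rvs(c=1.0, scale=0.1, size=tau_true, random_state=rng),
    genpareto.rvs(c=3.0, scale=0.35, size=n - tau_true, random_state=rng),
])

stats = {}
for tau in range(int(eps * n), int((1 - eps) * n)):
    xi1, s1 = fit_gpd(x[:tau])
    xi2, s2 = fit_gpd(x[tau:])
    stats[tau] = (tau * (n - tau) / n) * 2.0 * kl_gpd(xi2, s2, xi1, s1)

tau_hat = max(stats, key=stats.get)          # first (least) maximizer, as in (9)
print(tau_hat, stats[tau_hat])               # estimate expected near tau_true = 250
```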

Figure 1 shows the evolution of the change point test statistic on the left panel and the time series data on the right panel. The hypothesis test gives $D_{n\tau} = 24.86981 > C_{0.05} = 12.835$. H0 is therefore rejected, and it is concluded that a change point exists in the time series data. The largest divergence is estimated at $\hat\tau = 88$ against the true change point $\tau = 100$. For Figure 2, the hypothesis test gives $D_{n\tau} = 18.525111 > C_{0.05} = 13.3777$. H0 is therefore rejected and a change point is declared in the time series. The largest divergence is estimated at $\hat\tau = 245$ against the true change point $\tau = 250$. Figure 3 shows the evolution of the change point process and the corresponding time series data; here $D_{n\tau} = 19.42708$, which rejects H0 at the $\alpha = 5\%$ level of significance. The estimated change point is $\hat\tau = 494$ against the true change point $\tau = 500$.

Figure 1. Change point test process (left panel) and time series data (right panel) for n = 200 and τ = n/2.

Figure 2. Change point test process (left panel) and time series data (right panel) for n = 500 and τ = n/2.

Figure 3. Change point test process (left panel) and time series data (right panel) for n = 1000 and τ = n/2.

The change point estimator is next examined when the true change point is no longer located in the middle of the sample but is fixed towards the boundary points, at n/3, for n = 200, 500, 1000. The results are shown in Figures 4-6.

Figure 4 shows the evolution of the change point test statistic on the left panel and the time series data on the right panel. The hypothesis test gives $D_{n\tau} = 13.9438 > C_{0.05} = 12.835$; H0 is therefore rejected, and it is concluded that a change point exists at the respective level of significance. The largest divergence is estimated at $\hat\tau = 70$ against the true change point $\tau = 66$. For Figure 5, the hypothesis test gives $D_{n\tau} = 16.7829 > C_{0.05}$; H0 is therefore rejected and a change point is declared. The largest divergence is estimated at $\hat\tau = 164$ against the true change point $\tau = 166$. Figure 6 has $D_{n\tau} = 14.6752$, which rejects H0 at the 5% level of significance. The estimated change point is $\hat\tau = 330$ against the true change point $\tau = 333$.

The asymptotic power of the change point test is now examined. The most commonly used criterion for assessing the optimality of a statistical test involves fixing the false alarm probability (type I error) and maximizing the detection probability (minimizing the type II error). The power of the test at a given level against a particular alternative is defined as the probability of rejecting the null hypothesis when the alternative is actually true:

$$p(\alpha) = P(D_{n\tau} > C_\alpha \mid H_1) \quad (32)$$

For power or sample-size computation, not only the distribution of the test statistic under the null hypothesis is needed, but also its distribution under the alternative hypothesis. This is beyond the scope of this work, hence the reliance on simulation results. $D_{n\tau}$ is estimated for 1000 replicates of simulated data with fixed sample size $n = 500$. The behavior of the test as the change point approaches the data boundary points is analyzed using the empirical power function defined in (33) below.

Figure 4. Change point test process (left panel) and time series data (right panel) for n = 200 and τ = n/3.

Figure 5. Change point test process (left panel) and time series data (right panel) for n = 500 and τ = n/3.

Figure 6. Change point test process (left panel) and time series data (right panel) for n = 1000 and τ = n/3.

$$\hat{p}(\alpha) = \frac{1 + \#(D_{n\tau} > C_\alpha)}{1 + N} \quad (33)$$
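A sketch (not the authors' code) of the computation in (33): replicate the data-generating process, count exceedances of the critical value, and apply the add-one adjusted proportion. For speed, the normal-model scan from the earlier sketch stands in for the GP statistic; the alternative (a mean shift of 0.3) and the critical value 12.835 (the paper's 5% value for d = 2) are assumptions for illustration.

```python
import numpy as np

def kl_normal(m1, s1, m2, s2):
    # closed-form KL divergence between N(m1, s1^2) and N(m2, s2^2)
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def scan_stat(x, eps=0.1):
    # trimmed maximal weighted divergence, weight as in (19)
    n = len(x)
    best = 0.0
    for tau in range(int(eps * n), int((1 - eps) * n)):
        m1, s1 = x[:tau].mean(), x[:tau].std()
        m2, s2 = x[tau:].mean(), x[tau:].std()
        best = max(best, (tau * (n - tau) / n) * 2.0 * kl_normal(m1, s1, m2, s2))
    return best

rng = np.random.default_rng(7)
N, n, C_alpha = 1000, 500, 12.835    # N replicates (reduce for a quicker run)
rejections = sum(
    scan_stat(np.concatenate([rng.normal(0.0, 1.0, n // 2),
                              rng.normal(0.3, 1.0, n - n // 2)])) > C_alpha
    for _ in range(N)
)
print((1 + rejections) / (1 + N))    # add-one power estimate, as in (33)
```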

The results in Table 2 indicate that the change point test is most powerful when the change point is located at the middle of the data set and less powerful when the change point is located towards the boundary points. This behavior is attributed to comparing an estimate computed from a small sample (say the first τ observations) to one computed from a larger sample (say the last n − τ observations). Small sample sizes result in erratic behavior of the test statistic due to instability of the parameter estimates. This implies that the test is more likely to miss a change point when it is located towards the boundary points of a given data set, as shown by the power function graph in Figure 7.

Figure 7. The 95% power function for n = 500.

Table 2. Change point power estimates of the test with n = 500.

5. Conclusion

In this work, a divergence based (pseudo-distance) estimator has been used to estimate a change in the parameters of a given parametric distribution. The change point estimator is the first point at which there is maximal sample evidence of a change, characterized by the maximum divergence exceeding a critical bound. By applying the standard likelihood regularity conditions, the distribution of the pseudo-distance based estimator is found to converge to that of a Brownian bridge process on a given interval. The distribution of the pseudo-distance based change point process is found to coincide with that of a maximal trimmed Wald-type test statistic under the null hypothesis of no change point. The distribution does not depend on the choice of the function φ, and the result therefore applies within a parametric framework to other choices of φ corresponding to other statistical divergences. Further work can be done on the theoretical power properties of this particular change point estimator.

Acknowledgements

The first author thanks the Pan-African University Institute of Basic Sciences, Technology and Innovation (PAUSTI) for funding this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Truong, C., Oudre, L. and Vayatis, N. (2018) A Review of Change Point Detection Methods. arXiv: 1801.00718.
[2] Brodsky, E. and Darkhovsky, B.S. (2013) Nonparametric Methods in Change Point Problems. Vol. 243, Springer Science & Business Media, Berlin.
[3] Korkas, K.K. and Fryzlewicz, P. (2017) Multiple Change Point Detection for Non-Stationary Time Series Using Wild Binary Segmentation. Statistica Sinica, 27, 287-311.
https://doi.org/10.5705/ss.202015.0262
[4] Csörgő, M. and Horváth, L. (1997) Limit Theorems in Change-Point Analysis. Vol. 18, John Wiley & Sons, Hoboken.
[5] Cheng, L., AghaKouchak, A., Gilleland, E. and Katz, R.W. (2014) Non-Stationary Extreme Value Analysis in a Changing Climate. Climate Change, 127, 353-369.
https://doi.org/10.1007/s10584-014-1254-5
[6] Jarusková, D. and Rencová, M. (2008) Analysis of Annual Maximal and Minimal Temperatures for Some European Cities by Change Point Methods. Environmetrics, 19, 221-223.
https://doi.org/10.1002/env.865
[7] Dupuis, D., Sun, Y. and Wang, H.J. (2015) Detecting Change Point in Extremes. Statistics and Its Interface, 8, 19-31.
https://doi.org/10.4310/SII.2015.v8.n1.a3
[8] Dette, H. and Wu, W. (2018) Change Point Analysis in Non-Stationary Processes—A Mass Excess Approach. arXiv: 1801.09874.
[9] Killick, R. and Eckley, I.A. (2014) changepoint: An R Package for Changepoint Analysis. Journal of Statistical Software, 58, 1-19.
https://doi.org/10.18637/jss.v058.i03
[10] Page, E. (1955) A Test for a Change in a Parameter Occurring at an Unknown Point. Biometrika, 42, 523-527.
https://doi.org/10.1093/biomet/42.3-4.523
[11] Gichuhi, A.W. (2008) Nonparametric Change Point Analysis for Bernoulli Random Variables Based on Neural Networks.
[12] Dierckx, G. and Teugels, J.L. (2010) Change Point Analysis of Extremes. Environmetrics, 21, 661-686.
https://doi.org/10.1002/env.1041
[13] Pardo, L. (2018) Statistical Inference Based on Divergence Measures. Chapman and Hall/CRC, New York.
https://doi.org/10.1201/9781420034813
[14] Andrews, D.W. (1993) Tests for Parameter Instability and Structural Change with Unknown Change Point. Econometrica, 61, 821-856.
https://doi.org/10.2307/2951764
[15] Horváth, L., Miller, C. and Rice, G. (2019) A New Class of Change Point Test Statistics of Rényi Type. Journal of Business & Economic Statistics, 38, 570-579.
https://doi.org/10.1080/07350015.2018.1537923
[16] Hawkins Jr., D.L. (1983) Sequential Detection Procedures for Autoregressive Processes. Technical Report, North Carolina State University, Raleigh.
[17] Batsidis, A., Martin, N., Pardo, L. and Zografos, K. (2016) φ-Divergence Based Procedure for Parametric Change Point Problems. Methodology and Computing in Applied Probability, 18, 21-35.
https://doi.org/10.1007/s11009-014-9398-3
[18] Estrella, A. (2003) Critical Values and P Values of Bessel Processes Distributions: Computation and Application to Structural Break Tests. Econometric Theory, 19, 1128-1143.
https://doi.org/10.1017/S0266466603196107
