
In this article, we study a Kolmogorov-Smirnov type goodness-of-fit test for inhomogeneous Poisson processes with an unknown translation (shift) parameter. The null hypothesis and the alternative are composite and concern the intensity measure of the inhomogeneous Poisson process; the intensity function is regular. For this shift-parameter model, we propose a test which is asymptotically partially distribution free and consistent. We show that, under the null hypothesis, the limit distribution of the test statistic does not depend on the unknown parameter.

One of the central themes of statistical theory and practice is the quality of goodness-of-fit (GoF) tests. The problem of constructing GoF tests in the i.i.d. case is well studied in [

In classical mathematical statistics, [

[

In applications, the hypotheses to be tested are often of a more complex nature. The first works on the problems of goodness-of-fit testing of composite hypotheses concerning classical statistics are due to [

It follows that the critical values change from one null hypothesis to another: different values of the parameter yield different critical values, often within the same parametric family. The distribution-free property is therefore crucial in applications, since the critical values are calculated only once for any distribution defined under the hypothesis to be tested. To work around this problem, [

The martingale approach of [

We will consider the same model as [

Suppose that we observe $n$ independent inhomogeneous Poisson processes $X^{(n)} = (X_1,\dots,X_n)$, where $X_j = \{X_j(t),\ t\in\mathbb{R}\}$, $j=1,\dots,n$, are trajectories of Poisson processes with the mean function $\Lambda(t) = \mathbb{E}X_j(t) = \int_{-\infty}^{t}\lambda(s)\,ds$. Here $\lambda(\cdot)\ge 0$ is the corresponding intensity function.

Let us recall the construction of the GoF test of Kolmogorov-Smirnov type in the case of a simple null hypothesis. The class of tests $(\bar\Psi_n)_{n\ge1}$ of asymptotic size $\varepsilon\in(0,1)$ is
$$\mathcal{K}_\varepsilon = \Big\{\bar\Psi_n: \lim_{n\to\infty}\mathbb{E}_0\bar\Psi_n = \varepsilon\Big\}.$$

Suppose that the basic hypothesis is simple, say, $H_0: \Lambda(\cdot) = \Lambda_0(\cdot)$, where $\Lambda_0(\cdot)$ is a known continuous and differentiable function satisfying $\Lambda_0(\infty)<\infty$. The alternative is composite (nonparametric): $H_1: \Lambda(\cdot)\ne\Lambda_0(\cdot)$. Then we can introduce the Kolmogorov-Smirnov (K-S) type statistic
$$\tilde\Gamma_n = \sqrt{\frac{n}{\Lambda_0(\infty)}}\,\sup_{t\in\mathbb{R}}\big|\hat\Lambda_n(t)-\Lambda_0(t)\big|,$$

where $\hat\Lambda_n(t) = \frac{1}{n}\sum_{j=1}^{n}X_j(t)$ is the empirical mean function of the Poisson processes. It can be verified that under $H_0$ this statistic converges to the following limit:
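As an illustration, the empirical mean function $\hat\Lambda_n$ can be computed from simulated data. The sketch below (in Python) uses an illustrative Gaussian-shaped intensity $\lambda_0(t) = e^{-t^2/2}$, an assumption of ours rather than an example from the paper: each process is generated by drawing a Poisson number of events with mean $\Lambda_0(\infty) = \sqrt{2\pi}$ and placing the events i.i.d. with density $\lambda_0/\Lambda_0(\infty)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_processes(n, total_mass, sample_point):
    """Draw n i.i.d. inhomogeneous Poisson processes on R.

    Each process is a sorted array of event times: the event count is
    Poisson(total_mass) and, given the count, the events are i.i.d. with
    density lambda0 / total_mass."""
    counts = rng.poisson(total_mass, size=n)
    return [np.sort(sample_point(c)) for c in counts]

# Illustrative intensity (our assumption): lambda0(t) = exp(-t^2/2), so
# Lambda0(inf) = sqrt(2*pi) and the normalized density is standard normal.
TOTAL = np.sqrt(2 * np.pi)
procs = simulate_processes(1000, TOTAL, lambda c: rng.normal(size=c))

def empirical_mean(procs, t):
    """Lambda_hat_n(t) = (1/n) * sum_j X_j(t), with X_j(t) = #{events <= t}."""
    return np.mean([np.searchsorted(p, t) for p in procs])

# At t = +inf the empirical mean estimates Lambda0(inf) = sqrt(2*pi) ~ 2.5066.
print(empirical_mean(procs, np.inf))
```

With $n = 1000$ processes the estimate of $\Lambda_0(\infty)$ is accurate to a few percent, consistent with the $\sqrt{n}$-rate used throughout the paper.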

$$\tilde\Gamma_n \Rightarrow \Gamma \equiv \sup_{0\le s\le 1}|W(s)|,$$

where $W(s)$, $0\le s\le1$, is a standard Wiener process. Therefore the K-S type test $\tilde\Psi_n(X^n) = \mathbb{1}\{\tilde\Gamma_n > c_\varepsilon\}$ with the threshold $c_\varepsilon$ defined by the equation $\mathbb{P}(\Gamma > c_\varepsilon) = \varepsilon$ belongs to $\mathcal{K}_\varepsilon$. This test is asymptotically distribution free (ADF) (see, e.g., [
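Since $\Gamma = \sup_{0\le s\le1}|W(s)|$, the threshold $c_\varepsilon$ can be approximated by Monte Carlo. The following sketch is our illustration (not code from the paper): it discretizes Wiener paths by cumulative Gaussian increments and takes the empirical $(1-\varepsilon)$-quantile of the supremum.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_threshold(eps, n_paths=4000, n_steps=1000):
    """Monte Carlo approximation of c_eps solving
    P(sup_{0<=s<=1} |W(s)| > c_eps) = eps.

    Wiener paths are approximated by cumulative sums of N(0, dt) increments."""
    dt = 1.0 / n_steps
    incs = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    sups = np.abs(np.cumsum(incs, axis=1)).max(axis=1)
    return np.quantile(sups, 1.0 - eps)

c05 = ks_threshold(0.05)
print(c05)  # close to the known 95% quantile of sup|W|, about 2.24
```

The discretization slightly underestimates the true supremum; refining `n_steps` reduces this bias.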

Let us consider the case of the parametric null hypothesis. It can be formulated as follows. We have to test the null hypothesis
$$H_0: \Lambda(\cdot)\in\mathcal{L}(\Theta) = \big\{\Lambda(t)=\Lambda_0(t-\vartheta),\ \vartheta\in\Theta,\ t\in\mathbb{R}\big\}$$
against the alternative $H_1: \Lambda(\cdot)\notin\mathcal{L}(\Theta)$. Here $\Lambda_0(\vartheta,\cdot) = \Lambda_0(\cdot-\vartheta)$ is a known mean function of the Poisson process depending on the unknown parameter $\vartheta\in\Theta\subset\mathbb{R}$. Note that under $H_0$ there exists a true value $\vartheta_0\in\Theta$ such that the mean function of the observed Poisson process is $\Lambda(t) = \Lambda_0(\vartheta_0,t)$, $t\in\mathbb{R}$.

The K-S type GoF test can be constructed in a similar way. Introduce the normalized process $\hat u_n(t) \equiv u_n(\hat\vartheta_n,t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(\hat\vartheta_n,t)\big)$, $t\in\mathbb{R}$, where $\hat\vartheta_n$ is the maximum likelihood estimator (MLE) of the unknown parameter $\vartheta$, which is (under the hypothesis $H_0$) consistent and asymptotically normal: $\sqrt{n}(\hat\vartheta_n-\vartheta_0) \Rightarrow \xi$.

Therefore, if we propose a goodness-of-fit test based on this statistic, say $\Phi_n(X^n) = \mathbb{1}\{\bar\Gamma_n > c_\varepsilon\}$, then to find the threshold $c_\varepsilon$ such that $\Phi_n\in\mathcal{K}_\varepsilon$ we have to solve the equation $\mathbb{P}_{\vartheta_0}(\Gamma > c_\varepsilon) = \varepsilon$. The goal of this work is to show that if the unknown parameter $\vartheta\in\Theta$ is a shift parameter, then it is possible to construct a test statistic $\hat\Gamma_n$ whose limit distribution does not depend on $\vartheta_0$. The test will be uniformly consistent against the class of alternatives

$$H_1^\rho: \Lambda(\cdot)\in\mathcal{L}_\rho = \Big\{\Lambda(\cdot): \inf_{\vartheta\in\Theta}\sup_{t\in\mathbb{R}}\big|\Lambda(t)-\Lambda_0(t-\vartheta)\big| > \rho\Big\}.$$

Here ρ > 0 is some given number.

The mean function under the null hypothesis is
$$\Lambda_0(\vartheta,t) = \int_{-\infty}^{t}\lambda_0(s-\vartheta)\,ds, \qquad t\in\mathbb{R},$$
and the proposed test statistic is
$$\hat\Gamma_n = \sqrt{\frac{n}{\Lambda_0(\hat\vartheta_n,\infty)}}\,\sup_{t\in\mathbb{R}}\big|\hat\Lambda_n(t)-\Lambda_0(\hat\vartheta_n,t)\big|.$$

We show that $\hat\Gamma_n \Rightarrow \Gamma$, where $\Gamma = \Gamma(\Lambda_0)$, i.e. the distribution of the random variable $\Gamma(\Lambda_0)$ does not depend on $\vartheta_0$. Recall that the function $\Lambda_0(t)$, $t\in\mathbb{R}$, is known, and therefore the solution $c_\varepsilon = c_\varepsilon(\Lambda_0)$ can be calculated before the experiment using, say, numerical simulations.
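Because $\Lambda_0$ is known, the law of the limiting random variable $\Gamma(\Lambda_0)$ can indeed be simulated before the experiment. A minimal sketch under the illustrative choice $\lambda_0(t) = e^{-t^2/2}$ (our assumption, for which $\dot\lambda_0(t)/\lambda_0(t) = -t$ and $I_0 = \sqrt{2\pi}$): the increments of $W(\Lambda_0(\cdot))$ are simulated as independent $\mathcal{N}(0,\lambda_0(t)\,dt)$ variables on a grid.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative intensity (our assumption): lambda0(t) = exp(-t^2/2).
# Then dot-lambda0/lambda0 = -t and I0 = int t^2 e^{-t^2/2} dt = sqrt(2*pi).
t = np.linspace(-6, 6, 2001)
dt = t[1] - t[0]
lam = np.exp(-t**2 / 2)
score = -t                        # dot-lambda0 / lambda0
I0 = np.sqrt(2 * np.pi)

def sample_gamma0():
    """One draw of Gamma_0 = sup_t |W(Lambda0(t)) - lambda0(t) I0^{-1} J|,
    where J = int (dot-lambda0/lambda0) dW(Lambda0)."""
    dW = rng.normal(scale=np.sqrt(lam * dt))   # increments of W(Lambda0(.))
    W = np.cumsum(dW)
    J = np.sum(score * dW)
    return np.max(np.abs(W - lam / I0 * J))

draws = np.array([sample_gamma0() for _ in range(2000)])
c_eps = np.quantile(draws, 0.95)   # threshold for asymptotic size 0.05
print(c_eps)
```

The resulting quantile depends only on the known function $\Lambda_0$, in agreement with the partial distribution-freeness claimed in the text.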

We are given $n$ independent observations $X^{(n)} = (X_1,\dots,X_n)$ of inhomogeneous Poisson processes $X_j = \{X_j(t),\ t\in\mathbb{R}\}$ with the mean function $\Lambda(t) = \mathbb{E}X_j(t)$, $t\in\mathbb{R}$. We have to construct a GoF test for the hypothesis testing problem with the parametric null hypothesis $H_0$. More precisely, we suppose that under $H_0$ the mean function $\Lambda(t)$ is absolutely continuous, $\dot\Lambda(t) = \lambda_0(\vartheta_0,t)$. Here $\vartheta_0$ is the true value, and the intensity function is $\lambda_0(\vartheta_0,t) = \lambda_0(t-\vartheta_0)$, $\vartheta\in\Theta\subset\mathbb{R}$, where $\Theta = (\alpha,\beta)$, $0<\alpha<\beta<\infty$. Therefore, if we denote $\Lambda_0(t) = \int_{-\infty}^{t}\lambda_0(\nu)\,d\nu$, $t\in\mathbb{R}$, then the mean function under the null hypothesis is $\Lambda(t) = \Lambda_0(\vartheta_0,t) = \Lambda_0(t-\vartheta_0)$.

It is convenient to use two different functions Λ 0 ( ϑ , t ) and Λ 0 ( t ) and we hope that such notation will not be misleading.

Therefore, we have the parametric null hypothesis
$$H_0: \Lambda(\cdot)\in\mathcal{L}(\Theta),$$
where the parametric family is
$$\mathcal{L}(\Theta) = \big\{\Lambda(\cdot): \Lambda(t)=\Lambda_0(t-\vartheta),\ t\in\mathbb{R},\ \vartheta\in\Theta\big\}.$$
Here $\Lambda_0(\cdot)$ is a known absolutely continuous function with the properties $\Lambda_0(-\infty)=0$ and $\Lambda_0(\infty)<\infty$.

In this work, we denote by $\dot f(\vartheta,t)$ the derivative with respect to $\vartheta$ of any function $f(\vartheta,t)$ ($\vartheta\in\Theta$, $t\in\mathbb{R}$).

We consider the class of tests of asymptotic level $\varepsilon$:
$$\mathcal{K}_\varepsilon = \Big\{\bar\Psi_n: \lim_{n\to\infty}\mathbb{E}_\vartheta\bar\Psi_n = \varepsilon,\ \vartheta\in\Theta\Big\}.$$

The test studied in this work is based on the following statistic of K-S type:
$$\hat\Gamma_n = \sqrt{n}\,\sup_{t\in\mathbb{R}}\big|\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big|,$$
where $\hat\vartheta_n$ is the MLE.
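To make the construction concrete, the following sketch computes $\hat\Gamma_n$ on simulated data under an illustrative model of our own choosing: $\lambda_0(t) = e^{-t^2/2}$ with true shift $\vartheta_0 = 1$. For this particular $\lambda_0$ the log-likelihood $\sum_j\sum_x \ln\lambda_0(x-\vartheta)$ (the term $\Lambda_0(\infty)$ is shift free) is maximized by the mean of the pooled event times, so the MLE is explicit.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

# Illustrative model (our assumption): lambda0(t) = exp(-t^2/2), theta0 = 1.
TOTAL = np.sqrt(2 * np.pi)          # Lambda0(inf)
theta0, n = 1.0, 2000
counts = rng.poisson(TOTAL, size=n)
events = np.sort(np.concatenate(
    [rng.normal(loc=theta0, size=c) for c in counts]))

theta_hat = events.mean()           # explicit MLE of the shift for this lambda0

def Lambda0(s):
    """Lambda0(s) = TOTAL * Phi(s) for the Gaussian-shaped intensity."""
    return TOTAL * 0.5 * (1.0 + erf(s / np.sqrt(2.0)))

# Empirical mean Lambda_hat_n(t) = (#pooled events <= t) / n, on a grid.
grid = np.linspace(theta_hat - 6, theta_hat + 6, 4001)
emp = np.searchsorted(events, grid, side="right") / n
gamma_hat = np.sqrt(n) * np.max(
    np.abs(emp - np.array([Lambda0(x - theta_hat) for x in grid])))
print(theta_hat, gamma_hat)
```

Under $H_0$ the value of `gamma_hat` stays of order one as $n$ grows, which is what the limit theorem below formalizes.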

As we use the asymptotic properties of the MLE ϑ ^ n , we need some regularity conditions.

Conditions C

● $C_1$: The function $\lambda_0(\cdot)\in L_2(\mathbb{R})$ is strictly positive and three times continuously differentiable.

● $C_2$: Its derivatives belong to $L_2(\mathbb{R})$, and the Fisher information
$$I_n(\vartheta) = n\int_{-\infty}^{+\infty}\frac{\dot\lambda_0^2(t-\vartheta)}{\lambda_0(t-\vartheta)}\,dt = n\int_{-\infty}^{+\infty}\frac{\dot\lambda_0^2(s)}{\lambda_0(s)}\,ds \equiv n I_0,$$
where $I_0>0$ does not depend on $\vartheta$.

● $C_3$: The derivative $\dot\lambda_0(\cdot)\in L_1(\mathbb{R})$.

● $C_4$: For any $\nu>0$ we have
$$\inf_{|\vartheta-\vartheta_0|>\nu}\big\|\lambda_0(\cdot-\vartheta)-\lambda_0(\cdot-\vartheta_0)\big\|_\vartheta > 0.$$
Here $\|\cdot\|_\vartheta$ is the usual $L_\infty(\mathbb{R})$ norm, defined as $\|f(\cdot)\|_\vartheta = \sup_{t\in\mathbb{R}}|f(t)|$.
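The shift invariance of the Fisher information in condition $C_2$ is easy to verify numerically. The sketch below is our illustration, with the assumed intensity $\lambda_0(t) = e^{-t^2/2}$ (so that $I_0 = \int t^2 e^{-t^2/2}\,dt = \sqrt{2\pi}$); the integral is evaluated by the trapezoid rule for two values of $\vartheta$.

```python
import numpy as np

# Illustrative intensity (our assumption): lambda0(t) = exp(-t^2/2),
# so dot-lambda0(t) = -t * exp(-t^2/2) and dot-lambda0^2/lambda0 = t^2 e^{-t^2/2}.
t = np.linspace(-12, 12, 48001)
dt = t[1] - t[0]

def fisher(theta):
    """I(theta) = int dot-lambda0(t-theta)^2 / lambda0(t-theta) dt  (trapezoid rule)."""
    s = t - theta
    f = s**2 * np.exp(-s**2 / 2)
    return (f[:-1] + f[1:]).sum() * dt / 2

i0, i1 = fisher(0.0), fisher(1.3)
print(i0, i1)  # both equal I0 = sqrt(2*pi) ~ 2.5066, independent of theta
```

The substitution $s = t-\vartheta$ removes $\vartheta$ from the integral, which the two numerical values confirm.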

Note that, under these conditions, the MLE $\hat\vartheta_n$ is consistent and asymptotically normal,
$$\sqrt{n}(\hat\vartheta_n-\vartheta) \Rightarrow \mathcal{N}(0,\,I_0^{-1}),$$
and the moments converge: for any $p>0$,
$$n^{p/2}\,\mathbb{E}_\vartheta|\hat\vartheta_n-\vartheta|^p \to \mathbb{E}|\zeta|^p, \qquad \zeta\sim\mathcal{N}(0,\,I_0^{-1}).$$

Moreover, it admits the representation (see [
$$\hat\vartheta_n = \vartheta - \frac{1}{\sqrt{n}}\,I_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(t-\vartheta)}{\lambda_0(t-\vartheta)}\,dW_n(t) + O(n^{-3/4}), \qquad (2.1)$$
where $W_n(t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta)\big)$. For the proofs see [
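The asymptotic normality $\sqrt{n}(\hat\vartheta_n-\vartheta)\Rightarrow\mathcal{N}(0,I_0^{-1})$ can be illustrated by Monte Carlo. In the sketch below (our assumption: $\lambda_0(t) = e^{-t^2/2}$, so $I_0 = \sqrt{2\pi}$ and the MLE of the shift is the mean of the pooled event times), the empirical variance of $\sqrt{n}(\hat\vartheta_n-\vartheta_0)$ is compared with $I_0^{-1}\approx 0.3989$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of sqrt(n)(theta_hat - theta) => N(0, 1/I0) under the
# illustrative model lambda0(t) = exp(-t^2/2), for which I0 = sqrt(2*pi)
# and the MLE of the shift is the pooled mean of event times.
I0 = np.sqrt(2 * np.pi)
n, theta0, reps = 500, 0.7, 4000
errs = np.empty(reps)
for r in range(reps):
    N = rng.poisson(n * I0)                   # total event count over n processes
    pooled = rng.normal(loc=theta0, size=N)   # pooled event times
    errs[r] = np.sqrt(n) * (pooled.mean() - theta0)

print(errs.mean(), errs.var())  # mean near 0, variance near 1/I0 ~ 0.3989
```

The empirical variance matches $I_0^{-1}$ closely, as the representation (2.1) predicts.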

Let us introduce the following random variable
$$\Gamma_0 = \sup_{t\in\mathbb{R}}\Big|W(\Lambda_0(t)) - \lambda_0(t)\,I_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(s)}{\lambda_0(s)}\,dW(\Lambda_0(s))\Big|,$$
where $W(\cdot)$ is a standard Wiener process.

The main result of this work is the following theorem.

Theorem 3.1. Let the conditions C be fulfilled. Then the test
$$\hat\Phi_n(X^{(n)}) = \mathbb{1}\{\hat\Gamma_n > c_\varepsilon\}$$
belongs to the class $\mathcal{K}_\varepsilon$.

Proof.

Let us consider n independent observations X ( n ) = ( X 1 , ⋯ , X n ) of inhomogeneous Poisson processes X j = { X j ( t ) , t ∈ ℝ } .

We have to show that $\lim_{n\to\infty}\mathbb{E}_\vartheta\hat\Phi_n(X^n) = \varepsilon$, $\vartheta\in\Theta$.

We have

$$\mathbb{E}_\vartheta\hat\Phi_n(X^n) = \mathbb{E}_\vartheta\mathbb{1}\{\hat\Gamma_n>c_\varepsilon\} = \mathbb{P}_\vartheta\Big[\sup_{t\in\mathbb{R}}\big|\sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big)\big|>c_\varepsilon\Big] = \mathbb{P}_\vartheta\Big[\sup_{t\in\mathbb{R}}|u_n(t)|>c_\varepsilon\Big],$$
where we put $u_n(t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big)$.

The parametric empirical process is defined by
$$u_n(t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)\big) - \sqrt{n}\big(\Lambda_0(t-\hat\vartheta_n)-\Lambda_0(t-\vartheta_0)\big) = W_n(t) - \sqrt{n}\big(\Lambda_0(t-\hat\vartheta_n)-\Lambda_0(t-\vartheta_0)\big). \qquad (3.2)$$

Since the function $\Lambda_0(t-\vartheta)$ is differentiable on $\Theta$, the formula of finite increments applied to $\Lambda_0$ on $[\vartheta_0,\hat\vartheta_n]$ gives
$$\Lambda_0(t-\hat\vartheta_n)-\Lambda_0(t-\vartheta_0) = \dot\Lambda_0(t-\tilde\vartheta_n)\,(\hat\vartheta_n-\vartheta_0) + o\big(\dot\Lambda_0(t-\tilde\vartheta_n)(\hat\vartheta_n-\vartheta_0)\big),$$
where $\tilde\vartheta_n$ is an intermediate point between $\vartheta_0$ and $\hat\vartheta_n$.

According to (3.2), we have the representation
$$u_n(t) = W_n(t) - \dot\Lambda_0(t-\tilde\vartheta_n)\,\sqrt{n}(\hat\vartheta_n-\vartheta_0) - o\big(\dot\Lambda_0(t-\tilde\vartheta_n)\,\sqrt{n}(\hat\vartheta_n-\vartheta_0)\big) = W_n(t) + \dot\Lambda_0(t-\tilde\vartheta_n)\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s) + r_n(t), \qquad (3.3)$$
where
$$r_n(t) = O\big(n^{-1/4}\,\dot\Lambda_0(t-\tilde\vartheta_n)\big) - o\big(\dot\Lambda_0(t-\tilde\vartheta_n)\,\sqrt{n}(\hat\vartheta_n-\vartheta_0)\big)$$
is the remainder.

Let us put $h(v) = I_0^{-1}\,\frac{\dot\lambda_0(v)}{\lambda_0(v)}$, $W_n(t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)\big)$, and denote by $\vartheta_0$ the true value. Then relation (2.1) becomes
$$\hat\vartheta_n-\vartheta_0 = -\frac{1}{\sqrt{n}}\Big(\int_{-\infty}^{\infty}h(t-\vartheta_0)\,dW_n(t) + O(n^{-1/4})\Big),$$
and we have
$$\sqrt{n}(\hat\vartheta_n-\vartheta_0) = -\int_{-\infty}^{\infty}h(t-\vartheta_0)\,dW_n(t) + O(n^{-1/4}).$$

Therefore,

$$u_n(t) = W_n(t) + \dot\Lambda_0(t-\tilde\vartheta_n)\,v_n + r_n(t), \qquad (3.4)$$
where we have set $v_n = \int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s)$. Since $\tilde\vartheta_n$ lies between $\vartheta_0$ and the consistent estimator $\hat\vartheta_n$, it converges to $\vartheta_0$; moreover, $r_n(t)$ converges in probability to $0$. Under these considerations we can rewrite $u_n(t)$ as
$$u_n(t) = W_n(t) + \dot\Lambda_0(t-\vartheta_0)\,v_n. \qquad (3.5)$$
Furthermore, we put
$$\hat u_n(t) = W_n(t) + \dot\Lambda_0(t-\hat\vartheta_n)\,\hat v_n, \qquad (3.6)$$
where $\hat v_n = \int_{-\infty}^{\infty}h(s-\hat\vartheta_n)\,dW_n(s)$.

The intensity function $\lambda(\vartheta_0,t) = \lambda_0(t-\vartheta_0)$ is strictly positive. It was shown that the process $W_n(\cdot)$ is asymptotically (in the sense of weak convergence) the composition of a Brownian motion with $\Lambda(\vartheta_0,t)$, which we denote $W(\Lambda(\vartheta_0,t))$, $\Lambda(\vartheta_0,t)\in[0,\Lambda(\vartheta_0,+\infty)]$. In other words, $W_n(t)$ converges weakly to the process $W(\Lambda_0(t-\vartheta_0))$ in the space $D[0,\Lambda_0(+\infty)]$.

We introduce the stochastic process
$$\hat u(t) = W\big(\Lambda_0(t-\vartheta_0)\big) + \dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW\big(\Lambda_0(s-\vartheta_0)\big). \qquad (3.7)$$
It is easy to see that if we change the variables $t-\vartheta_0 = u$ and $s-\vartheta_0 = v$ in the integrals, then we obtain the equality
$$\sup_{t\in\mathbb{R}}|\hat u(t)| = \sup_{u\in\mathbb{R}}\Big|W(\Lambda_0(u)) + \dot\Lambda_0(u)\int_{-\infty}^{\infty}h(v)\,dW(\Lambda_0(v))\Big| = \sup_{u\in\mathbb{R}}\Big|W(\Lambda_0(u)) - \lambda_0(u)\,I_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(v)}{\lambda_0(v)}\,dW(\Lambda_0(v))\Big| = \Gamma_0.$$

The proof of the theorem is based on the following fundamental lemma.

Lemma 3.2. Let the conditions C be satisfied. Then the process $\hat u_n(t)$, $t\in\mathbb{R}$, converges weakly in the space $D(-\infty,\infty)$ to the process $\hat u(t)$ as $n\to\infty$. Since the map $T(f) = \sup_{t\in\mathbb{R}}|f(t)|$ is continuous on $D(-\infty,\infty)$ in the sense of the Skorokhod distance, the random variable $T(\hat u_n) = \sup_{t\in\mathbb{R}}|\hat u_n(t)|$ converges weakly to the random variable $T(\hat u) = \sup_{t\in\mathbb{R}}|\hat u(t)|$. In other words, we have
$$\Gamma_n = \sup_{t\in\mathbb{R}}|\hat u_n(t)| \Rightarrow \sup_{t\in\mathbb{R}}|\hat u(t)| = \Gamma_0.$$

To prove Lemma 3.2, we need the following lemmas.

Lemma 3.3. Let the conditions C be satisfied. Then the following convergence holds:
$$\hat u_n(t) - u_n(t) = o_{\mathbb{P}}(1).$$

Proof of Lemma 3.3. We need two relations:
$$\sup_{t\in\mathbb{R}}\big|\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)\big| = o_{\mathbb{P}}(1), \qquad (3.8)$$
$$\int_{-\infty}^{\infty}\big[h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big]\,dW_n(t) = o_{\mathbb{P}}(1). \qquad (3.9)$$

Indeed, for the first relation: since the consistent estimator $\hat\vartheta_n$ converges to the true value $\vartheta_0$ and $\dot\Lambda_0(\cdot)$ is a continuous function, $\dot\Lambda_0(t-\hat\vartheta_n)$ converges in probability to $\dot\Lambda_0(t-\vartheta_0)$ for all $t\in\mathbb{R}$. Furthermore, by condition $C_1$, the function $\dot\Lambda_0(t-\vartheta_0)$ is bounded. Hence we obtain relation (3.8).

Further, for the second relation, we have
$$\mathbb{E}_{\vartheta_0}\Big(\int_{-\infty}^{\infty}\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)\,dW_n(t)\Big)^2 = \int_{-\infty}^{\infty}\mathbb{E}_{\vartheta_0}\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)^2\,d\Lambda_0(t-\vartheta_0) = \int_{-\infty}^{\infty}\mathbb{E}_{\vartheta_0}\big((\hat\vartheta_n-\vartheta_0)\,\dot h(t-\tilde\vartheta_n)\big)^2\,d\Lambda_0(t-\vartheta_0) \le C^2\,\mathbb{E}_{\vartheta_0}(\hat\vartheta_n-\vartheta_0)^2\int_{-\infty}^{\infty}d\Lambda_0(t-\vartheta_0).$$

Recall that $\mathbb{E}_{\vartheta_0}\big|\sqrt{n}(\hat\vartheta_n-\vartheta_0)\big|^2 \to I_0^{-1}$ (the moment convergence above with $p=2$), $\Lambda_0(-\infty)=0$ and $\Lambda_0(\infty)<\infty$; therefore
$$C^2\,\mathbb{E}_{\vartheta_0}(\hat\vartheta_n-\vartheta_0)^2\int_{-\infty}^{\infty}d\Lambda_0(t-\vartheta_0) \longrightarrow 0 \quad\text{as } n\to+\infty.$$
Hence
$$\mathbb{E}_{\vartheta_0}\Big(\int_{-\infty}^{\infty}\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)\,dW_n(t)\Big)^2 \longrightarrow 0 \quad\text{as } n\to+\infty,$$
which proves relation (3.9).

Now we can evaluate the difference $\hat u_n(t)-u_n(t)$. We have
$$\hat u_n(t)-u_n(t) = W_n(t)+\dot\Lambda_0(t-\hat\vartheta_n)\,\hat v_n - W_n(t)-\dot\Lambda_0(t-\vartheta_0)\,v_n = \dot\Lambda_0(t-\hat\vartheta_n)\,[\hat v_n-v_n] + \big[\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)\big]\,v_n.$$

Since $\dot\Lambda_0(\cdot-\hat\vartheta_n)$ is a uniformly consistent estimator of $\dot\Lambda_0(\cdot-\vartheta_0)$ on $\mathbb{R}$, we have $\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0) = o_{\mathbb{P}}(1)$.

Further, relation (3.9) gives
$$\hat v_n - v_n = \int_{-\infty}^{\infty}\big[h(s-\hat\vartheta_n)-h(s-\vartheta_0)\big]\,dW_n(s) = o_{\mathbb{P}}(1).$$
The relation $\dot\Lambda_0(t-\hat\vartheta_n) = \dot\Lambda_0(t-\vartheta_0)+o_{\mathbb{P}}(1) < \infty$ implies that $\dot\Lambda_0(t-\hat\vartheta_n) = O_{\mathbb{P}}(1)$, and
$$\mathbb{E}_{\vartheta_0}(v_n)^2 = \mathbb{E}_{\vartheta_0}\Big(\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s)\Big)^2 = \int_{-\infty}^{\infty}h(s-\vartheta_0)^2\,d\Lambda_0(s-\vartheta_0) < \infty$$
implies that $v_n = O_{\mathbb{P}}(1)$.

Therefore Lemma 3.3 is proved.

Lemma 3.4. Let the conditions C be satisfied. Then the finite-dimensional distributions of the process $\hat u_n(t)$, $t\in\mathbb{R}$, converge to those of the process $\hat u(t)$, $t\in\mathbb{R}$, as $n\to\infty$.

Proof of Lemma 3.4. The proof is based on the central limit theorem for stochastic integrals (see, e.g., Kutoyants [

We compare the characteristic functions of $\hat u_n(t)$ and $\hat u(t)$, defined as
$$\phi_n(\mu) = \mathbb{E}_{\vartheta_0}\exp\{i\mu\,\hat u_n(t)\} = \mathbb{E}_{\vartheta_0}\exp\big\{i\mu W_n(t) + i\mu\,\dot\Lambda_0(t-\hat\vartheta_n)\,\hat v_n\big\}, \qquad (3.10)$$
$$\phi_0(\mu) = \mathbb{E}_{\vartheta_0}\exp\{i\mu\,\hat u(t)\} = \mathbb{E}_{\vartheta_0}\exp\Big\{i\mu W(\Lambda_0(t-\vartheta_0)) + i\mu\,\dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,dW(\Lambda_0(s))\Big\}. \qquad (3.11)$$

Indeed, we have
$$W_n(t) = \sqrt{n}\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)\big) = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big[X_j(t)-\Lambda_0(t-\vartheta_0)\big] = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}\mathbb{1}\{s<t\}\,d\pi_j(s), \qquad (3.12)$$
where we put $\pi_j(t) = X_j(t)-\Lambda_0(t-\vartheta_0)$.

On the other hand, we have
$$\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,dW_n(s) = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,d\pi_j(s). \qquad (3.13)$$

Taking into account the expressions (3.12) and (3.13), we have the representation of $\hat u_n(t)$:
$$\hat u_n(t) = W_n(t)+\dot\Lambda_0(t-\hat\vartheta_n)\,\hat v_n = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)\,h(s-\hat\vartheta_n)\big]\,d\pi_j(s). \qquad (3.14)$$

Thus we can calculate the characteristic function as follows:
$$\phi_n(\mu) = \exp\Big\{n\int_{-\infty}^{+\infty}\Big[\exp\Big\{\frac{i\mu}{\sqrt{n}}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)\,h(s-\hat\vartheta_n)\big]\Big\} - 1 - \frac{i\mu}{\sqrt{n}}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)\,h(s-\hat\vartheta_n)\big]\Big]\,\lambda_0(s-\vartheta_0)\,ds\Big\}. \qquad (3.15)$$

By the Taylor formula
$$e^{i\phi}-1-i\phi = \frac{(i\phi)^2}{2}+o(\phi^2),$$
we have, as $n\to\infty$,
$$\phi_n(\mu) \to \exp\Big\{-\frac{\mu^2}{2}\int_{-\infty}^{+\infty}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\vartheta_0)\,h(s-\vartheta_0)\big]^2\,\lambda_0(s-\vartheta_0)\,ds\Big\}. \qquad (3.16)$$

The last expression (3.16) coincides with
$$\mathbb{E}_{\vartheta_0}\exp\Big\{i\mu W(\Lambda_0(t-\vartheta_0)) + i\mu\,\dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,dW(\Lambda_0(s))\Big\},$$
which is the characteristic function defined in (3.11).

Therefore, we have the convergence of the one-dimensional distributions. In the general case, the verification of the convergence is entirely similar.

Lemma 3.5. For any $n\in\mathbb{N}$ and any $t_1,t_2\in\mathbb{R}$, we have
$$\mathbb{E}_{\vartheta_0}|u_n(t_1)-u_n(t_2)|^2 \le C\,|t_1-t_2|.$$

Proof of Lemma 3.5. For any $n\in\mathbb{N}$ and any $t_1,t_2\in\mathbb{R}$ (say $t_1\ge t_2$), we have
$$\mathbb{E}_{\vartheta_0}|u_n(t_1)-u_n(t_2)|^2 = \mathbb{E}_{\vartheta_0}\Big|W_n(t_1)+\dot\Lambda_0(t_1-\vartheta_0)\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s) - W_n(t_2)-\dot\Lambda_0(t_2-\vartheta_0)\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s)\Big|^2$$
$$\le 2\,\mathbb{E}_{\vartheta_0}|W_n(t_1)-W_n(t_2)|^2 + 2\,\mathbb{E}_{\vartheta_0}\Big|\big[\dot\Lambda_0(t_1-\vartheta_0)-\dot\Lambda_0(t_2-\vartheta_0)\big]\int_{-\infty}^{\infty}h(s-\vartheta_0)\,dW_n(s)\Big|^2$$
$$= 2\big(\Lambda_0(t_1-\vartheta_0)-\Lambda_0(t_2-\vartheta_0)\big) + 2\big[\dot\Lambda_0(t_1-\vartheta_0)-\dot\Lambda_0(t_2-\vartheta_0)\big]^2\int_{-\infty}^{\infty}h(s-\vartheta_0)^2\,d\Lambda_0(s-\vartheta_0)$$
$$\le 2\int_{t_2-\vartheta_0}^{t_1-\vartheta_0}\lambda_0(s)\,ds + 2\Big(\int_{t_2-\vartheta_0}^{t_1-\vartheta_0}\dot\lambda_0(\tau)\,d\tau\Big)^2\int_{-\infty}^{\infty}h(s-\vartheta_0)^2\,\lambda_0(s-\vartheta_0)\,ds$$
$$\le 2\,|t_1-t_2|\,\sup_{s\in\mathbb{R}}|\lambda_0(s)| + 2\,|t_1-t_2|^2\Big(\sup_{s\in\mathbb{R}}|\dot\lambda_0(s)|\Big)^2\int_{-\infty}^{\infty}h(u)^2\,\lambda_0(u)\,du \le C'|t_1-t_2| + C''|t_1-t_2|^2 \le C\,|t_1-t_2|.$$

Note that the two lemmas above are not sufficient to establish the weak convergence of the process $u_n$ in the space $D(-\infty,\infty)$, and hence the convergence of the random variable $T(u_n)$. However, the increments of the process $u_n$ being independent, the convergence of $u_n$ on finite intervals $[A,B]\subset\mathbb{R}$ (that is, convergence in the Skorokhod space $D[A,B]$ of functions on $[A,B]$ without discontinuities of the second kind) follows from ( [

Lemma 3.6. For any $\varepsilon>0$, we have
$$\lim_{\kappa\to0}\lim_{n\to\infty}\sup_{|t_1-t_2|<\kappa}\mathbb{P}\big\{|u_n(t_1)-u_n(t_2)|>\varepsilon\big\} = 0.$$

Proof of Lemma 3.6. By the Bienaymé-Chebyshev inequality and Lemma 3.5, we have
$$\mathbb{P}_{\vartheta_0}\big\{|u_n(t_1)-u_n(t_2)|>\varepsilon\big\} \le \frac{1}{\varepsilon^2}\,\mathbb{E}_{\vartheta_0}|u_n(t_1)-u_n(t_2)|^2 \le \frac{C}{\varepsilon^2}\,|t_1-t_2| \le \frac{C\kappa}{\varepsilon^2} \longrightarrow 0 \quad\text{as } \kappa\to0.$$

Therefore Lemma 3.2 is proved.

So, the last ingredient of the proof of Theorem 3.1 is the following estimate on the tails of the process u n ( t ) .

Lemma 3.7. Let the conditions C be satisfied. For any $\varepsilon>0$ there exist $T>0$ and $n_0$ such that for all $n\ge n_0$ we have
$$\mathbb{P}_{\vartheta_0}\Big(\sup_{|s|>T}|u_n(s)|>\varepsilon\Big) \le \varepsilon. \qquad (3.17)$$

Proof of Lemma 3.7. We have
$$\mathbb{P}_{\vartheta_0}\Big(\sup_{|s|>T}|u_n(s)|>\varepsilon\Big) \le \mathbb{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big) + \mathbb{P}_{\vartheta_0}\Big(\sup_{s<-T}|u_n(s)|>\varepsilon\Big). \qquad (3.18)$$
For the first term we have
$$\mathbb{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big) \le \frac{K\,\mathbb{E}_{\vartheta_0}u_n^2(s)}{\varepsilon^2}.$$
A direct calculation allows verifying that
$$\sup_s \mathbb{E}_{\vartheta_0}\hat u_n^2(s) \le C_1,$$
where the constant $C_1>0$ does not depend on $n$. Hence
$$\mathbb{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big) \le \frac{K C_1}{\varepsilon^2} \to 0.$$

For the second term of (3.18), in a similar manner, we obtain the bound
$$\mathbb{P}_{\vartheta_0}\Big(\sup_{s<-T}|u_n(s)|>\varepsilon\Big) \le \frac{K' C_2}{\varepsilon^2} \to 0.$$
This convergence allows us to say that, for $n\ge n_0$ with some $n_0$, we obtain the estimate (3.17).

Proposition 3.8. Let the conditions C be satisfied. Then the test
$$\hat\Phi_n(X^{(n)}) = \mathbb{1}\{\hat\Gamma_n>c_\varepsilon\}$$
is consistent under the alternative $H_1$, that is,
$$\beta(\hat\Phi_n,\Lambda) \to 1 \quad\text{as } n\to\infty,$$
and it is uniformly consistent under the alternatives $H_1^\rho$, that is,
$$\inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\beta(\hat\Phi_n,\Lambda) \to 1 \quad\text{as } n\to\infty.$$

Proof of Proposition 3.8. Under the alternative $H_1$, the power $\beta(\hat\Phi_n,\Lambda)$ is
$$\beta(\hat\Phi_n,\Lambda) = \mathbb{P}(\text{reject } H_0 \mid H_0\ \text{is false}) = \mathbb{P}(\hat\Gamma_n>c_\varepsilon \mid H_1) = \mathbb{P}_\Lambda(\hat\Gamma_n>c_\varepsilon).$$
We can write
$$\mathbb{P}_\Lambda(\hat\Gamma_n>c_\varepsilon) = \mathbb{P}_\Lambda\big(\sqrt{n}\,\|\hat\Lambda_n(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\|_{\hat\vartheta_n}>c_\varepsilon\big) \ge \mathbb{P}_\Lambda\big(\sqrt{n}\,\|\Lambda(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\|_{\hat\vartheta_n}-\sqrt{n}\,\|\Lambda(\cdot)-\hat\Lambda_n(\cdot)\|_{\hat\vartheta_n}>c_\varepsilon\big)$$
$$= \mathbb{P}_\Lambda\big(\|W_n(\cdot)\|_{\hat\vartheta_n} < \sqrt{n}\,\|\Lambda(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\|_{\hat\vartheta_n}-c_\varepsilon\big) \ge \mathbb{P}_\Lambda\big(\|W_n(\cdot)\|_{\hat\vartheta_n} < \sqrt{n}\,g-c_\varepsilon\big) \longrightarrow \mathbb{P}\Big\{\sup_{u\in\mathbb{R}}|W(u)|<\infty\Big\} = 1 \quad\text{as } n\to\infty,$$
where we have put
$$g = \inf_{\vartheta\in\Theta}\|\Lambda(\cdot)-\Lambda_0(\cdot-\vartheta)\|_\vartheta > 0.$$

Therefore the Kolmogorov-Smirnov type test is consistent against this alternative. The proof presented above also allows verifying the uniform consistency of the test against the alternative $H_1^\rho$. Indeed, we have
$$\inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\beta(\hat\Phi_n,\Lambda) \ge \mathbb{P}_\Lambda\big(\|W_n(\cdot)\|_{\hat\vartheta_n} < \sqrt{n}\,g_\rho-c_\varepsilon\big) \longrightarrow 1 \quad\text{as } n\to\infty,$$
where $g_\rho = \inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\inf_{\vartheta\in\Theta}\|\Lambda(\cdot)-\Lambda_0(\cdot-\vartheta)\|_\vartheta > 0$.

Proposition 3.8 is thus proved.
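The growth of the statistic under an alternative, which drives the consistency argument above, can be seen in a small simulation. The setup below is our own illustration: the null family is $\Lambda_0(t-\vartheta)$ with the assumed profile $\lambda_0(t) = e^{-t^2/2}$, while the data are generated with a wider profile, so that $\Lambda(\cdot)\notin\mathcal{L}(\Theta)$ and $\hat\Gamma_n$ behaves like $\sqrt{n}\,g$.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(5)

# Data from an alternative: same total mass, but event profile N(1, 1.5^2),
# which is not a shift of the null profile N(theta, 1).
TOTAL = np.sqrt(2 * np.pi)
n = 2000
counts = rng.poisson(TOTAL, size=n)
events = np.sort(np.concatenate(
    [rng.normal(loc=1.0, scale=1.5, size=c) for c in counts]))

theta_hat = events.mean()   # MLE under the (misspecified) null family

def Lambda0(s):
    """Null mean function: Lambda0(s) = TOTAL * Phi(s)."""
    return TOTAL * 0.5 * (1.0 + erf(s / np.sqrt(2.0)))

grid = np.linspace(theta_hat - 8, theta_hat + 8, 4001)
emp = np.searchsorted(events, grid, side="right") / n
gamma_hat = np.sqrt(n) * np.max(
    np.abs(emp - np.array([Lambda0(x - theta_hat) for x in grid])))
print(gamma_hat)  # grows like sqrt(n)*g, far above any fixed threshold
```

With $n = 2000$ the statistic is roughly an order of magnitude above typical null thresholds, illustrating the consistency of the test.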

This work is devoted to the Kolmogorov-Smirnov test in the case of observations of non-homogeneous Poisson processes. The main results are obtained in the situation where, under the null hypothesis, the intensity functions of the observed inhomogeneous Poisson processes depend on an unknown parameter.

As the GoF test studied in this work is mainly based on the maximum likelihood estimator (MLE), we present the asymptotic properties of the MLE in the large-sample regime. The conditions for consistency and asymptotic normality are given.

We have studied the Kolmogorov-Smirnov test for inhomogeneous Poisson processes with a parametric null hypothesis, where the unknown parameter is the translation parameter. The construction of the test is based on the MLE of this parameter, and the main result is that, due to the structure of the statistic, substituting the estimator for the unknown parameter leads to a limit distribution of the test statistic which does not depend on the unknown parameter.

In this work, we studied the Kolmogorov-Smirnov GoF test based on the sup-metric in the case of the translation parameter. It is natural to ask: what happens if we take the $L_2(\mathbb{R})$ metric?

The authors declare no conflicts of interest regarding the publication of this paper.

Wandji Tanguep, E.D. and Njamen Njomen, D.A. (2021) Kolmogorov-Smirnov APF Test for Inhomogeneous Poisson Processes with Shift Parameter. Applied Mathematics, 12, 322-335. https://doi.org/10.4236/am.2021.124023