Minimum penalized Hellinger distance for model selection in small samples

In statistical modeling, the Akaike information criterion (AIC) is a widely known and extensively used tool for model choice. The $\phi$-divergence test statistic is a more recently developed tool for statistical model selection. The popularity of divergence-based criteria is tempered, however, by their known lack of robustness in small samples. In this paper, penalized minimum Hellinger distance type statistics are considered and some of their properties are established. The limit laws of the estimates and test statistics are given under both the null and the alternative hypotheses, and approximations of the power functions are deduced. A model selection criterion based on these divergence measures is developed for parametric inference. Our interest is in the problem of testing for the choice between two models using informational type statistics, when independent samples are drawn from a discrete population. We discuss the asymptotic properties and the performance of the new test procedures and investigate their small sample behavior.


Introduction
Comprehensive surveys of Pearson chi-square type statistics have been provided by many authors, such as Cochran (1952), Watson (1956) and Moore (1978, 1986), in particular on quadratic forms in the cell frequencies. More recently, Andrews (1988a, 1988b) extended the Pearson chi-square testing method to non-dynamic parametric models, i.e., to models with covariates. Because Pearson chi-square statistics provide natural measures of the discrepancy between the observed data and a specific parametric model, they have also been used for discriminating among competing models. Such a situation is frequent in the social sciences, where many competing models are proposed to fit a given sample. A well known difficulty is that each chi-square statistic tends to become large, without an increase in its degrees of freedom, as the sample size increases. As a consequence, goodness-of-fit tests based on Pearson type chi-square statistics will generally reject the correct specification of every competing model. To circumvent this difficulty, a popular method for model selection, similar in spirit to the use of the Akaike (1973) Information Criterion (AIC), consists in considering that the lower the chi-square statistic, the better the model. This selection rule, however, does not take into account the random variations inherent in the values of the statistics.
We propose here a procedure for taking into account the stochastic nature of these differences so as to assess their significance; addressing this issue is the main purpose of this paper. We shall propose some convenient asymptotically standard normal tests for model selection based on $\phi$-divergence type statistics. Following Vuong (1989, 1993), the procedures considered here test the null hypothesis that the competing models are equally close to the data generating process (DGP) against the alternative hypothesis that one model is closer to the DGP, where closeness of a model is measured according to the discrepancy implicit in the $\phi$-divergence type statistic used. Thus the outcomes of our tests provide information on the strength of the statistical evidence for the choice of a model based on its goodness-of-fit. The model selection approach proposed here differs from those of Cox (1961, 1962) and Akaike (1974) for non-nested hypotheses. The difference is that the present approach is based on the discrepancy implicit in the divergence type statistics used, while other approaches, such as Vuong's (1989) tests for model selection, rely on the Kullback-Leibler (1951) information criterion (KLIC). Beran (1977) showed that by using the minimum Hellinger distance estimator, one can simultaneously obtain asymptotic efficiency and robustness in the presence of outliers. The works of Simpson (1989) and Lindsay (1994) have shown that, in hypothesis testing, robust alternatives to the likelihood ratio test can be generated by using the Hellinger distance. We consider a general class of estimators that is very broad and contains most of the estimators currently used in practice when forming divergence type statistics. This covers the cases studied in Harris and Basu (1994), Basu et al. (1996) and Basu and Basu (1998), where the penalized Hellinger distance is used. The remainder of this paper is organized as follows. Section 2 introduces the basic notations and definitions. Section 3 gives a short overview of divergence measures. Section 4 investigates the asymptotic distribution of the penalized Hellinger distance. In Section 5, some applications to testing hypotheses are proposed. Section 6 presents some simulation results. Section 7 concludes the paper.

Definitions and notation
In this section, we briefly present the basic assumptions on the model and the parameter estimators, and we define our generalized divergence type statistics. We consider a discrete statistical model: let $X_1, X_2, \dots, X_n$ be an independent random sample from a discrete population with support $\mathcal{X} = \{1, \dots, m\}$. Let $P = (p_1, \dots, p_m)^T$ be a probability vector, i.e. $P \in \Omega_m$, where $\Omega_m$ is the simplex of probability $m$-vectors. We consider a parametric model $\mathcal{P} = \{P_\theta = (p_1(\theta), \dots, p_m(\theta))^T : \theta \in \Theta\}$, which may or may not contain the true distribution $P$, where $\Theta$ is a compact subset of the $k$-dimensional Euclidean space (with $k < m - 1$). If $\mathcal{P}$ contains $P$, then there exists $\theta_0 \in \Theta$ such that $P_{\theta_0} = P$, and the model $\mathcal{P}$ is said to be correctly specified.
We are interested in testing
$$H_0 : P \in \mathcal{P} \ (\text{with true parameter } \theta_0) \quad \text{versus} \quad H_1 : P \in \Omega_m \setminus \mathcal{P}.$$
By $\|\cdot\|$ we denote the usual Euclidean norm, and we interpret probability distributions on $\mathcal{X}$ as row vectors in $\mathbb{R}^m$. For simplicity we restrict ourselves to unknown true parameters $\theta_0$ satisfying the classical regularity conditions given by Birch (1964): 1. The true $\theta_0$ is an interior point of $\Theta$, and $p_{i\theta_0} > 0$ for $i = 1, \dots, m$. Thus $P_{\theta_0}$ is an interior point of the set $\Omega_m$.
2. The mapping $P : \Theta \longrightarrow \Omega_m$ is totally differentiable at $\theta_0$, so that the partial derivatives of $p_i$ with respect to each $\theta_j$ exist at $\theta_0$ and $p_i(\theta)$ has a linear approximation at $\theta_0$; the $m \times k$ Jacobian matrix $J(\theta_0) = \left( \partial p_i(\theta_0)/\partial \theta_j \right)_{i,j}$ is of full rank (i.e. of rank $k$, with $k < m$).
Under the hypothesis that $P \in \mathcal{P}$, there exists an unknown parameter $\theta_0$ such that $P = P_{\theta_0}$, and the problem of point estimation appears in a natural way. Let $n$ be the sample size. We can estimate the distribution $P_{\theta_0} = (p_1(\theta_0), p_2(\theta_0), \dots, p_m(\theta_0))^T$ by the vector of observed frequencies $\widehat{P} = (\widehat{p}_1, \dots, \widehat{p}_m)^T$, a measurable mapping from $\mathcal{X}^n$ to $\Omega_m$. This nonparametric estimator is defined by
$$\widehat{p}_i = \frac{1}{n} \sum_{j=1}^n \mathbf{1}\{X_j = i\}, \qquad i = 1, \dots, m. \tag{2.1}$$
We can now define the class of $\phi$-divergence type statistics considered in this paper.
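As a quick illustration, the estimator (2.1) is straightforward to compute; the following Python sketch builds $\widehat{P}$ from a sample on $\mathcal{X} = \{1, \dots, m\}$ (the function and variable names are illustrative choices, not notation from the paper).

```python
# A minimal sketch of the nonparametric estimate (2.1): the vector of observed
# cell frequencies from an i.i.d. sample on X = {1, ..., m}.
import numpy as np

def observed_frequencies(sample, m):
    """p_hat[i-1] = #{j : X_j = i} / n for i = 1, ..., m."""
    sample = np.asarray(sample)
    counts = np.bincount(sample, minlength=m + 1)[1:m + 1]  # cells 1..m
    return counts / sample.size

# Example: the estimate lies in the simplex Omega_m and sums to one.
p_hat = observed_frequencies([1, 2, 2, 3, 3, 3], m=4)  # -> [1/6, 1/3, 1/2, 0]
```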

A brief review of φ-divergences
Many different measures quantifying the degree of discrimination between two probability distributions have been studied in the past; see, e.g., Morales et al. (1997, 1998), Zografos (1994, 1998) and Bar-Hen (1996). Consider two populations $X$ and $Y$ which, according to some classification criterion, can be grouped into $m$ classes $x_1, x_2, \dots, x_m$ and $y_1, y_2, \dots, y_m$ with probabilities $P = (p_1, p_2, \dots, p_m)$ and $Q = (q_1, q_2, \dots, q_m)$ respectively. Then
$$D_\phi(P, Q) = \sum_{i=1}^m q_i \, \phi\!\left(\frac{p_i}{q_i}\right) \tag{3.2}$$
is the $\phi$-divergence between $P$ and $Q$ (see Csiszár, 1967), for every $\phi$ in the set $\Phi$ of real convex functions defined on $[0, \infty[$. The function $\phi$ is assumed to satisfy the following regularity conditions: $\phi : [0, +\infty[ \longrightarrow \mathbb{R} \cup \{\infty\}$ is convex and continuous, with the conventions $0\,\phi(0/0) = 0$ and $0\,\phi(p/0) = p \lim_{u \to \infty} \phi(u)/u$. Its restriction to $]0, +\infty[$ is finite, twice continuously differentiable in a neighborhood of $u = 1$, with $\phi(1) = \phi'(1) = 0$ and $\phi''(1) = 1$ (cf. Liese and Vajda (1987)). We shall also be interested in parametric estimators of $P_{\theta_0}$, which can be obtained by means of various point estimators $\widehat{\theta}$ of $\theta_0$. It is convenient to measure the difference between the observed frequencies $\widehat{P}$ and the expected frequencies $P_\theta$. A minimum divergence estimator of $\theta$ is a minimizer of $D_\phi(\widehat{P}, P_\theta)$, where $\widehat{P}$ is a nonparametric distribution estimate. In our case, where the data come from a discrete distribution, the empirical distribution defined in (2.1) can be used.
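For concreteness, the divergence (3.2) together with its zero-cell conventions can be sketched as follows. The argument `phi_limit`, holding the value of $\lim_{u\to\infty}\phi(u)/u$, is an assumption of this sketch, not notation from the paper.

```python
# A sketch of D_phi(P, Q) = sum_i q_i * phi(p_i / q_i) under the conventions
# 0*phi(0/0) = 0 and 0*phi(p/0) = p * lim_{u->inf} phi(u)/u.
import numpy as np

def phi_divergence(p, q, phi, phi_limit=np.inf):
    p, q = np.asarray(p, float), np.asarray(q, float)
    total = 0.0
    for pi, qi in zip(p, q):
        if qi > 0:
            total += qi * phi(pi / qi)   # ordinary term
        elif pi > 0:
            total += pi * phi_limit      # 0 * phi(p/0) convention
        # p_i = q_i = 0 contributes 0 by convention
    return total

# Modified Kullback-Leibler: phi(x) = -log(x) + x - 1, with lim phi(u)/u = 1.
klm = lambda p, q: phi_divergence(p, q, lambda x: -np.log(x) + x - 1, phi_limit=1.0)
```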
In particular, if we take $\phi_1(x) = -4\left[\sqrt{x} - \tfrac{1}{2}(x + 1)\right]$ in (3.2), we get the Hellinger distance between the distributions $P$ and $P_\theta$, given by
$$D_{\phi_1}(P, P_\theta) = 2 \sum_{i=1}^m \left( \sqrt{p_i} - \sqrt{p_{i\theta}} \right)^2. \tag{3.4}$$
Liese and Vajda (1987), Lindsay (1994) and Morales et al. (1995) introduced the so-called minimum $\phi$-divergence estimate, defined by
$$\widehat{\theta}_\phi = \arg\min_{\theta \in \Theta} D_\phi(\widehat{P}, P_\theta). \tag{3.5}$$
In particular, if we take $\phi(x) = -\log x + x - 1$, we get $D_\phi = KL_m$, where $KL_m$ is the modified Kullback-Leibler divergence.
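A minimal sketch of the minimum Hellinger distance estimate (3.5) with $\phi = \phi_1$ follows, assuming a user-supplied map `model` from $\theta$ to $(p_1(\theta), \dots, p_m(\theta))$; the use of scipy's Nelder-Mead optimizer is an illustrative choice, not the paper's prescription.

```python
# A sketch of (3.5): minimize the Hellinger distance (3.4) over theta.
import numpy as np
from scipy.optimize import minimize

def hellinger(p, q):
    """(3.4): 2 * sum_i (sqrt(p_i) - sqrt(q_i))^2."""
    return 2.0 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def min_hellinger_estimate(p_hat, model, theta0):
    """model(theta) returns the m-vector (p_1(theta), ..., p_m(theta))."""
    objective = lambda theta: hellinger(p_hat, model(theta))
    return minimize(objective, x0=np.atleast_1d(theta0), method="Nelder-Mead").x
```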
Beran (1977) first pointed out that the minimum Hellinger distance estimator (MHDE) of $\theta$, defined by $\widehat{\theta}_H = \arg\min_{\theta \in \Theta} D_{\phi_1}(\widehat{P}, P_\theta)$, has robustness properties. Further results were given by Tamura and Boos (1986), Simpson (1987, 1989), Donoho and Liu (1988) and Basu et al. (1997); see these references for more details on this method of estimation. Simpson, however, noted that the small sample performance of the Hellinger deviance test at some discrete models, such as the Poisson, is somewhat unsatisfactory, in the sense that the test requires a very large sample size for the chi-square approximation to be useful (Simpson (1989), Table 3). One possibility for avoiding this problem is to use the penalized Hellinger distance (see Harris and Basu (1994)). The penalized Hellinger distance family between the probability vectors $\widehat{P}$ and $P_\theta$ is defined by
$$PHD^h(\widehat{P}, P_\theta) = 2\left[\sum_{i \in \mathcal{S}} \left( \sqrt{\widehat{p}_i} - \sqrt{p_{i\theta}} \right)^2 + h \sum_{i \in \mathcal{S}^c} p_{i\theta}\right], \tag{3.7}$$
where $h$ is a real positive number, $\mathcal{S} = \{i : \widehat{p}_i \neq 0\}$ and $\mathcal{S}^c = \{i : \widehat{p}_i = 0\}$. Note that when $h = 1$, this generates the ordinary Hellinger distance (Simpson, 1989), since the contribution of the empty cells to (3.4) is exactly $2\sum_{i \in \mathcal{S}^c} p_{i\theta}$. One motivation for using the penalized Hellinger distance is that a suitable choice of $h$ may lead to an estimate more robust than the MLE. A model selection criterion can be designed to estimate an expected overall discrepancy, a quantity which reflects the degree of similarity between a fitted approximating model and the generating (true) model. Estimation of Kullback's information (see Kullback-Leibler (1951)) is the key to deriving the Akaike Information Criterion (AIC, Akaike (1974)). Motivated by these developments, we propose, by analogy with the approach introduced by Vuong (1993), a new information criterion based on the $\phi$-divergences. In our test, the null hypothesis is that the competing models are equally close to the data generating process (DGP), where closeness of a model is measured according to the discrepancy implicit in the penalized Hellinger divergence.
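The penalized distance (3.7) is equally direct to implement. The sketch below assumes the factor-2 normalization of (3.4), so that `h=1` recovers the ordinary Hellinger distance.

```python
# A sketch of the penalized Hellinger distance (3.7): empty cells contribute
# h * p_i(theta) instead of p_i(theta); h = 1 gives back (3.4).
import numpy as np

def penalized_hellinger(p_hat, p_theta, h=0.5):
    p_hat, p_theta = np.asarray(p_hat, float), np.asarray(p_theta, float)
    nonempty = p_hat > 0
    fit = np.sum((np.sqrt(p_hat[nonempty]) - np.sqrt(p_theta[nonempty])) ** 2)
    penalty = h * np.sum(p_theta[~nonempty])   # empty cells, weighted by h
    return 2.0 * (fit + penalty)
```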

Asymptotic distribution of the penalized Hellinger distance
Hereafter, we focus on asymptotic results. We assume that the true parameter $\theta_0$ and the mapping $P : \Theta \longrightarrow \Omega_m$ satisfy conditions 1-6 of Birch (1964). We consider the $m$-vector $P_\theta = (p_{1\theta}, \dots, p_{m\theta})^T$ and the $m \times k$ Jacobian matrix $J(\theta) = \left( \partial p_{i\theta}/\partial \theta_j \right)_{i,j}$. These matrices are considered at points $\theta \in \Theta$ where the derivatives exist and all the coordinates $p_j(\theta)$ are positive.
The stochastic convergence of random vectors $X_n$ to a random vector $X$ is denoted by $X_n \xrightarrow{P} X$ and $X_n \xrightarrow{L} X$ (convergence in probability and in law, respectively). Instead of $c_n X_n \xrightarrow{P} 0$ for a sequence of positive numbers $c_n$, we may write $X_n = o_p(c_n^{-1})$.
We need the following result to prove Theorem 4.3.

Proposition. Let $\phi \in \Phi$, let $P : \Theta \to \Omega_m$ be twice continuously differentiable in a neighborhood of $\theta_0$, and assume that conditions 1-5 of Section 2 hold. Let $I_{\theta_0}$ be the $k \times k$ Fisher information matrix and let $\widehat{\theta}_{PH}$ be a minimizer of (3.7). Then, applying the central limit theorem, the limiting distribution of $\sqrt{n}\left( \widehat{\theta}_{PH} - \theta_0 \right)$ is normal with mean zero and covariance matrix $I_{\theta_0}^{-1}$. For simplicity, we write $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ instead of $PHD^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$.

Theorem 4.3 Under the assumptions of the Proposition above and under $H_0 : P \in \mathcal{P}$, the statistic $2n D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ converges in law to a chi-square distribution with $m - k - 1$ degrees of freedom.
Proof. A first order Taylor expansion of $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ around $P_{\theta_0}$ gives the leading quadratic term. In the same way as in Morales et al. (1995), it can be established that, from (4.12) and (4.13), $2n D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ and a quadratic form in $\sqrt{n}(\widehat{P} - P_{\theta_0})$ have the same asymptotic distribution, where $I$ denotes the $m \times m$ identity matrix. Furthermore, applying the central limit theorem, $\sqrt{n}(\widehat{P} - P_{\theta_0})$ is asymptotically normal, which yields the stated chi-square limit. The case of interest to us here is testing the hypothesis $H_0 : P \in \mathcal{P}$. Our proposal is based on the penalized divergence test statistic $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$, where $\widehat{P}$ and $\widehat{\theta}_{PH}$ have been introduced in (2.1) and (3.7) respectively.
Using arguments similar to those developed by Basu et al. (1996), under the assumptions of Theorem 4.3 and the hypothesis $H_0 : P = P_\theta$, the asymptotic distribution of $2n D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ when $h = 1$ is chi-square with $m - k - 1$ degrees of freedom. Since the other members of the penalized Hellinger distance family differ from the ordinary Hellinger distance only at the empty cells, they too have the same asymptotic distribution (see Simpson 1989, and Basu, Harris and Basu 1996, among others).
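In practice the resulting goodness-of-fit test reduces to comparing $2n D_H^h$ with a chi-square quantile; a minimal sketch (anticipating the critical region of Section 5, with illustrative function names):

```python
# Reject H_0 : P in the model when 2 n D exceeds the chi-square quantile
# with m - k - 1 degrees of freedom.
from scipy.stats import chi2

def gof_reject(d_value, n, m, k, alpha=0.05):
    """d_value = D_H^h(p_hat, P_theta_hat); returns True if H_0 is rejected."""
    return 2.0 * n * d_value > chi2.ppf(1.0 - alpha, df=m - k - 1)
```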
Consider now the case when the model is wrong, i.e. $H_1 : P \neq P_\theta$. We introduce the following regularity assumptions:

$(A_1)$ There exists $\theta_1 = \arg\inf_{\theta \in \Theta} PHD^h(P, P_\theta)$, attained at a unique interior point of $\Theta$;

$(A_2)$ the asymptotic covariance matrix of $\sqrt{n}(\widehat{P} - P)$ is as in (4.10), with $\Lambda_{11} = \Sigma_P$.

Theorem 4.4 Under $H_1 : P \neq P_\theta$, and assuming that conditions $(A_1)$ and $(A_2)$ hold, we have
$$\sqrt{n}\left( D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) - D_H^h(P, P_{\theta_1}) \right) \xrightarrow{L} N(0, \sigma_h^2),$$
where $\sigma_h^2$ is the asymptotic variance determined by $\Lambda_{11}$ and the derivatives of the divergence. From assumptions $(A_1)$ and $(A_2)$, the result follows.

Applications for testing hypothesis
The estimate $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ can be used to perform statistical tests.

Test of goodness-of-fit
For completeness, we look at $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ in the usual way, i.e., as a goodness-of-fit statistic. Recall that here $\widehat{\theta}_{PH}$ is the minimum penalized Hellinger distance estimator of $\theta$. Since $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ is a consistent estimator of $D_H^h(P, P_\theta)$, the null hypothesis when using this statistic is $H_0 : D_H^h(P, P_\theta) = 0$. Hence, if $H_0$ is rejected, one can infer that the parametric model $P_\theta$ is misspecified. Since $D_H^h(P, P_\theta)$ is non-negative and takes the value zero only when $P = P_\theta$, the tests are defined through the critical region
$$\left\{ 2n D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) > q_{\alpha,k} \right\},$$
where $q_{\alpha,k}$ is the $(1-\alpha)$-quantile of the chi-square distribution with $m - k - 1$ degrees of freedom.

Remark 5.1 Theorem 4.4 can be used to give the following approximation to the power of the test of $H_0 : D_H^h(P, P_\theta) = 0$.
The approximated power function is
$$\beta_n(P) = 1 - F_n\!\left( \frac{\sqrt{n}}{\sigma_h} \left( \frac{q_{\alpha,k}}{2n} - D_H^h(P, P_\theta) \right) \right), \tag{5.17}$$
where $q_{\alpha,k}$ is the $(1 - \alpha)$-quantile of the $\chi^2$ distribution with $m - k - 1$ degrees of freedom and $F_n$ is a sequence of distribution functions tending uniformly to the standard normal distribution $F(x)$. Note that if $D_H^h(P, P_\theta) > 0$, then for any fixed size $\alpha$ the probability of rejecting $H_0 : D_H^h(P, P_\theta) = 0$ with the rejection rule $2n D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) > q_{\alpha,k}$ tends to one as $n \to \infty$.
An interesting application of formula (5.17) is obtaining the approximate sample size $n$ guaranteeing a power $\beta$ for a given alternative $P$. If we wish the power to be equal to $\beta^*$, we must solve the equation
$$\beta^* = 1 - F_n\!\left( \frac{\sqrt{n}}{\sigma_h} \left( \frac{q_{\alpha,k}}{2n} - D_H^h(P, P_\theta) \right) \right).$$
It is not difficult to check that the required $n^*$ is the solution of a quadratic equation in $\sqrt{n}$, given by
$$n^* = \frac{a + b + \sqrt{a(a + 2b)}}{2\left( D_H^h(P, P_\theta) \right)^2},$$
with $a = \sigma_h^2 \left( F^{-1}(1 - \beta^*) \right)^2$ and $b = q_{\alpha,k}\, D_H^h(P, P_\theta)$, and the required size is $n_0 = [n^*] + 1$, where $[\cdot]$ denotes "integer part of".
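A sketch of this sample size rule follows, assuming the asymptotic standard deviation $\sigma_h$ of Theorem 4.4 is available; the formula as written is valid for $\beta^* > 1/2$, where $F^{-1}(1 - \beta^*) \le 0$.

```python
# n_0 = [n*] + 1 with n* = (a + b + sqrt(a(a+2b))) / (2 D^2), as above.
import numpy as np
from scipy.stats import norm, chi2

def required_sample_size(d_value, sigma_h, m, k, alpha=0.05, beta_star=0.9):
    z = norm.ppf(1.0 - beta_star)                      # F^{-1}(1 - beta*), <= 0 here
    a = (sigma_h * z) ** 2
    b = chi2.ppf(1.0 - alpha, df=m - k - 1) * d_value  # q_{alpha,k} * D
    n_star = (a + b + np.sqrt(a * (a + 2.0 * b))) / (2.0 * d_value ** 2)
    return int(n_star) + 1                             # n_0 = [n*] + 1
```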

Test for model selection
As mentioned above, when one chooses a particular divergence type statistic $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$, with $\widehat{\theta}_{PH}$ the corresponding minimum penalized Hellinger distance estimator of $\theta$, one actually evaluates the goodness-of-fit of the parametric model $P_\theta$ according to the discrepancy $D_H^h(P, P_\theta)$ between the true distribution $P$ and the specified model $P_\theta$. Thus it is natural to define the best model among a collection of competing models as the model that is closest to the true distribution according to the discrepancy $D_H^h(P, P_\theta)$.
In this paper we consider the problem of selecting between two models. Let $G_\mu = \{G(\cdot \mid \mu);\ \mu \in \Gamma\}$ be another model, where $\Gamma$ is a $q$-dimensional parameter space in $\mathbb{R}^q$. In a similar way, we can define the minimum penalized Hellinger distance estimator $\widehat{\mu}_{PH}$ of $\mu$ and the corresponding discrepancy $D_H^h(P, G_\mu)$ for the model $G_\mu$.
Our special interest is the situation in which a researcher has two competing parametric models $P_\theta$ and $G_\mu$ and wishes to select the better of the two based on the discrimination statistics between the observations and the models, defined respectively by $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ and $D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}})$. Consider, then, the two competing parametric models $P_\theta$ and $G_\mu$ with the given discrepancy $D_H^h(P, \cdot)$.

Definition 5.2
$H_0^{eq} : D_H^h(P, P_\theta) = D_H^h(P, G_\mu)$ means that the two models are equivalent;
$H_{P_\theta} : D_H^h(P, P_\theta) < D_H^h(P, G_\mu)$ means that $P_\theta$ is better than $G_\mu$;
$H_{G_\mu} : D_H^h(P, P_\theta) > D_H^h(P, G_\mu)$ means that $P_\theta$ is worse than $G_\mu$.

Remark 5.3 1) It is not required that the same divergence type statistic be used in forming $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}})$ and $D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}})$. Choosing different discrepancies for evaluating competing models is, however, hardly justified.
2) This definition does not require that either of the competing models be correctly specified. On the other hand, a correctly specified model must be at least as good as any other model.
The indicator $D_H^h(P, P_\theta) - D_H^h(P, G_\mu)$ is unknown, but from the previous section it can be estimated by the difference $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) - D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}})$. This difference converges to zero under the null hypothesis $H_0^{eq}$, but converges to a strictly negative or positive constant when $H_{P_\theta}$ or $H_{G_\mu}$ holds. These properties justify the use of $D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) - D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}})$ as a model selection indicator, and the common procedure of selecting the model with the highest goodness-of-fit. As argued in the introduction, however, it is important to take into account the random nature of this difference so as to assess its significance. To do so, we consider the asymptotic distribution of $\sqrt{n}\left( D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) - D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}}) \right)$ under $H_0^{eq}$. Our major task is to propose tests for model selection, i.e., for the null hypothesis $H_0^{eq}$ against the alternatives $H_{P_\theta}$ or $H_{G_\mu}$. We use the next lemma, with $\widehat{\theta}_{PH}$ and $\widehat{\mu}_{PH}$ the corresponding minimum penalized Hellinger distance estimators of $\theta$ and $\mu$. Using $\widehat{P}$ and $P_\theta$ defined earlier, we consider the vector of the two divergence statistics and its joint limiting distribution. Proof.
The result follows from a first order Taylor expansion.
The quantities $Q_\theta$, $Q_\mu$ and $\Lambda^*$ are consistently estimated by their sample analogues $\widehat{K}_\theta$, $\widehat{K}_\mu$, $\widehat{Q}_\theta$, $\widehat{Q}_\mu$ and $\widehat{\Lambda}^*$; hence $\Gamma^2$ is consistently estimated by $\widehat{\Gamma}^2$. Next we define the model selection statistic and give its asymptotic distribution under the null and the alternative hypotheses. Let
$$HI^h = \frac{\sqrt{n}\left( D_H^h(\widehat{P}, P_{\widehat{\theta}_{PH}}) - D_H^h(\widehat{P}, G_{\widehat{\mu}_{PH}}) \right)}{\widehat{\Gamma}},$$
where $HI^h$ stands for the penalized Hellinger indicator. The following theorem provides the limit distribution of $HI^h$ under the null and the alternative hypotheses.
Theorem 5.5 Under $H_0^{eq}$, with $P_{\theta_1} = G_{\mu_1}$ and $P_{\widehat{\theta}_{PH}} = G_{\widehat{\mu}_{PH}}$ asymptotically, applying the central limit theorem and assumptions $(A_1)$-$(A_2)$, we immediately obtain $HI^h \xrightarrow{L} N(0, 1)$; under $H_{P_\theta}$, $HI^h \to -\infty$ in probability, and under $H_{G_\mu}$, $HI^h \to +\infty$ in probability.
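Operationally, the model selection test is a two-sided comparison of $HI^h$ with standard normal quantiles. The following sketch assumes a consistent estimate `sigma_hat` of the normalizer $\widehat{\Gamma}$, whose construction is model-specific and not detailed in code here.

```python
# Three-way decision based on HI^h = sqrt(n) * (d_p - d_g) / sigma_hat.
import numpy as np
from scipy.stats import norm

def select_model(d_p, d_g, sigma_hat, n, alpha=0.05):
    """d_p = D_H^h(p_hat, P_theta_hat), d_g = D_H^h(p_hat, G_mu_hat)."""
    hi = np.sqrt(n) * (d_p - d_g) / sigma_hat
    z = norm.ppf(1.0 - alpha / 2.0)
    if hi < -z:
        return "prefer P_theta"   # H_{P_theta}: P_theta is closer to the DGP
    if hi > z:
        return "prefer G_mu"      # H_{G_mu}: G_mu is closer to the DGP
    return "equivalent"           # H_0^eq is not rejected
```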
Computational results

Example
To illustrate the model selection procedure discussed in the preceding section, we consider an example. We need to define the competing models, the estimation method used for each competing model, and the penalized Hellinger type statistic measuring the departure of each proposed parametric model from the true data generating process. For our competing models, we consider the problem of choosing between the family of Poisson distributions and the family of geometric distributions. The Poisson distribution $P(\lambda)$ is parameterized by $\lambda$ and has density
$$f(x, \lambda) = \frac{e^{-\lambda}\lambda^x}{x!} \quad \text{for } x \in \mathbb{N},$$
and zero otherwise. The geometric distribution $G(p)$ is parameterized by $p$ and has density $g(x, p) = (1 - p)^{x - 1} p$ for $x \in \mathbb{N}^*$ and zero otherwise.
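A sketch of sampling from the mixture DGP used below, $m(\pi) = \pi P(\lambda) + (1 - \pi) G(p)$, follows; the values of $\lambda$ and $p$ in later calls are placeholders, since the experiments' exact parameters are not restated here.

```python
# Draw n observations from m(pi): Poisson(lam) w.p. pi, Geometric(p) w.p. 1 - pi.
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, pi, lam, p):
    from_poisson = rng.random(n) < pi
    return np.where(from_poisson,
                    rng.poisson(lam, size=n),
                    rng.geometric(p, size=n))  # geometric support {1, 2, ...}
```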
We use the minimum penalized Hellinger distance statistic to evaluate the discrepancy of the proposed model from the true data generating process.
The data generating process is the mixture $m(\pi) = \pi P(\lambda) + (1 - \pi) G(p)$, where $\pi \in [0, 1]$ is a value specific to each set of experiments. In each set of experiments several random samples are drawn from this mixture of distributions. The sample size varies from 20 to 300, and for each sample size the number of replications is 1000. In each set of experiments we choose two values of the parameter, $h = 1$ and $h = 1/2$, where $h = 1$ corresponds to the classical Hellinger distance. The aim is to compare the accuracy of the model selection depending on the parameter setting chosen. To obtain a good fit by the proposed method for the chosen parameters of these two distributions, we note that most of the mass is concentrated between 0 and 10. Therefore the chosen partition has eight cells, $[C_0, C_1[ = [0, 1[, \dots, [C_6, C_7[ = [6, 7[$, with $[C_7, C_8[ = [7, +\infty[$ as the last cell. We choose the values $\pi = 0.00$, $0.25$, $0.535$, $0.75$, $1.00$. Although our proposed model selection procedure does not require that the data generating process belong to either of the competing models, we consider the two limiting cases $\pi = 1.00$ and $\pi = 0.00$, as they correspond to the correctly specified cases. To investigate the case where both competing models are misspecified but not at equal distance from the DGP, we consider the cases $\pi = 0.25$, $\pi = 0.75$ and $\pi = 0.535$. In the cases $\pi = 0.75$ and $\pi = 0.25$, the DGP is respectively a Poisson slightly contaminated by a geometric distribution and a geometric slightly contaminated by a Poisson distribution. In the last case, $\pi = 0.535$ is the value for which the Poisson family, measured by $D_H^h(\widehat{P}, P_{\widehat{\lambda}_{PH}})$, and the geometric family, measured by $D_H^h(\widehat{P}, G_{\widehat{p}_{PH}})$, are approximately at equal distance from the mixture $m(\pi)$ according to the penalized Hellinger distance with the above cells. In the first two sets of experiments ($\pi = 0.00$ and $\pi = 1.00$), where one model is correctly specified, we use the labels 'correct', 'incorrect' and 'indecisive' when a choice is made. The first halves of Tables 1-5 confirm our asymptotic results. They all show that the minimum penalized Hellinger estimators $\widehat{\lambda}_{PH}$ and $\widehat{p}_{PH}$ converge to their pseudo-true values in the misspecified cases, and to their true values in the correctly specified cases, as the sample size increases. With respect to our indicator $HI^h$, it diverges to $-\infty$ or $+\infty$ at the approximate rate of $\sqrt{n}$, except in Table 5. In the latter case the $HI^h$ statistic converges, as expected, to zero, which is the mean of the asymptotic $N(0, 1)$ distribution under our null hypothesis of equivalence.
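For illustration, one replication of such an experiment can be sketched as follows, reusing `sample_mixture` and `penalized_hellinger` from the earlier snippets; cell probabilities for each family are obtained by summing the pmf over the cells $[0, 1[, \dots, [6, 7[, [7, +\infty[$, and the $\lambda$, $p$ values are again placeholders.

```python
import numpy as np
from scipy.stats import poisson, geom
from scipy.optimize import minimize_scalar

N_CELLS = 8  # cells [0,1[, [1,2[, ..., [6,7[, [7, +inf[

def cell_freqs(x):
    x = np.minimum(x, N_CELLS - 1)           # the last cell absorbs [7, +inf[
    return np.bincount(x, minlength=N_CELLS) / x.size

def poisson_cells(lam):
    pr = poisson.pmf(np.arange(N_CELLS), lam)
    pr[-1] = 1.0 - poisson.cdf(N_CELLS - 2, lam)
    return pr

def geometric_cells(p):
    pr = geom.pmf(np.arange(N_CELLS), p)      # geom.pmf(0, p) = 0
    pr[-1] = 1.0 - geom.cdf(N_CELLS - 2, p)
    return pr

def fit_family(p_hat, cells, bounds, h=0.5):
    res = minimize_scalar(lambda t: penalized_hellinger(p_hat, cells(t), h),
                          bounds=bounds, method="bounded")
    return res.x, res.fun

x = sample_mixture(n=50, pi=0.535, lam=3.0, p=0.3)       # placeholder lam, p
p_hat = cell_freqs(x)
lam_PH, d_pois = fit_family(p_hat, poisson_cells, (1e-2, 20.0))
p_PH, d_geom = fit_family(p_hat, geometric_cells, (1e-2, 0.99))
# d_pois - d_geom feeds the HI^h indicator of the previous section.
```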
With the exception of Tables 1 and 2, we observe a large percentage of incorrect decisions; this is because both models are then incorrectly specified. In contrast, turning to the second halves of Tables 1-2, we first note that the percentage of correct choices using the $HI^h$ statistic steadily increases and ultimately converges to 100%. The preceding comments on the second halves of Tables 1 and 2 also apply to the second halves of Tables 3 and 4. In all of Tables 1-4, the results confirm the relative domination, in small samples, of the model selection procedure based on the penalized Hellinger statistic ($h = 1/2$) over the one based on the classical Hellinger statistic ($h = 1$), in terms of percentages of correct decisions. Table 5 also confirms our asymptotic results: as the sample size increases, the percentage of rejection of both models converges, as it should, to 100%.
In Figures 1, 3, 5, 7 and 9 we plot the histogram of the datasets and overlay the fitted geometric and Poisson distributions. When the DGP is correctly specified (Figure 1), the Poisson distribution has a reasonable chance of being distinguished from the geometric distribution.
Similarly, in Figure 3, as can be seen, the geometric distribution closely approximates the data. In Figures 5 and 7 the two distributions are close, but the geometric (Figure 5) and the Poisson (Figure 7) appear much closer to the data sets. When $\pi = 0.535$ (Figure 9: histogram of the DGP $0.465 \times \mathrm{Geom} + 0.535 \times \mathrm{Pois}$ with $n = 50$), the fitted Poisson and geometric distributions are similar, being slightly symmetric about the axis passing through the mode of the data distribution. This follows from the fact that these two distributions are equidistant from the DGP, and they would be difficult to distinguish from the data in practice.
The preceding results in the tables, together with Theorem 5.5, are confirmed in Figures 2, 4, 6 and 8: the Hellinger indicator for the model selection procedure based on the penalized Hellinger divergence statistic with $h = 0.5$ (light bars) dominates the procedure obtained with $h = 1$ (dark bars), corresponding to the ordinary Hellinger distance. As expected, our divergence statistic $HI^h$ diverges to $-\infty$ (Figures 2 and 8) or to $+\infty$ (Figures 4 and 6) more rapidly when we use the penalized Hellinger distance test than with the classical Hellinger distance test. Figure 10, a comparative barplot of $HI^h$ as a function of $n$, allows a comparison with the asymptotic $N(0, 1)$ approximation under our null hypothesis of equivalence: the indicator $HI^{1/2}$, based on the penalized Hellinger distance, is closer to the mean of the $N(0, 1)$ than the indicator $HI^1$.

Conclusion
In this paper we investigated the problem of model selection using divergence type statistics. Specifically, we proposed some asymptotically standard normal and chi-square tests for model selection based on divergence type statistics that use the corresponding minimum penalized Hellinger estimator. Our tests are based on testing whether the competing models are equally close to the true distribution, against the alternative hypothesis that one model is closer than the other, where closeness of a model is measured according to the discrepancy implicit in the divergence type statistics used. The penalized Hellinger divergence criterion outperforms classical criteria for model selection based on the ordinary Hellinger distance, especially in small samples; the difference is expected to be minimal for large sample sizes. Our work can be extended in several directions. One extension is to use random instead of fixed cells. Random cells arise when the boundaries of each cell $c_i$ depend on some unknown parameter vector $\gamma$, which must be estimated. For various examples, see, e.g., Andrews (1988b). For instance, with appropriate random cells, the asymptotic distribution of a Pearson type statistic may become independent of the true parameter $\theta_0$ under correct specification. In view of this latter result, it is expected that our model selection tests based on penalized Hellinger divergence measures will remain asymptotically normal or chi-square distributed.