
Minimum Hellinger distance (MHD) estimation is extended to a simulated version in which the model density function is replaced by a density estimate based on a random sample drawn from the model distribution. The method does not require a closed-form expression for the density function and appears suitable for models lacking such an expression, for which likelihood methods might be difficult to implement. Even though only consistency is shown in this paper and the asymptotic distribution remains an open question, our simulation study suggests that the method has the potential to generate simulated minimum Hellinger distance (SMHD) estimators with high efficiencies. The method can be used as an alternative to methods based on moments, methods based on empirical characteristic functions, or the expectation-maximization (EM) algorithm.

In actuarial science or finance, we often encounter the problem of fitting distributions to data where the distributions have no closed-form expressions for their densities. These distributions are often infinitely divisible, and they happen to be the distributions of the regularly spaced increments of Lévy processes. Besides infinitely divisible distributions, mixture distributions created using a mixing mechanism also provide examples of continuous densities without a closed-form expression. These types of distributions are often encountered in actuarial science. A few examples will be provided as illustrations subsequently.

Likelihood methods might be difficult to implement in such cases, due to the lack of a closed-form expression for the density function. To handle such a situation, we can consider the following approaches:

1) Expectation-maximization (EM) algorithm. The EM algorithm can be used only under special conditions, as it requires certain conditional distributions, and these conditional distributions might be difficult to obtain; see McNeil, Frey and Embrechts [

2) Method of moments. Even though the model density has no closed form, if the model moments can be expressed in closed form, then the method of moments can be used. The main drawback of the method of moments is that the estimators thus obtained might be neither efficient nor robust for models with three or more parameters, as the estimators will depend on a polynomial of degree three or higher, making the method very sensitive to contaminated data; see Küchler and Tappe [

3) The k-L procedure. Even if the density has no closed form, if the model characteristic function has a closed-form expression, then we can select points from the real and imaginary parts of the empirical characteristic function and match them with their model counterparts at the chosen points. This is the k-L procedure as proposed by Feuerverger and McDunnough [

4) Indirect inference. These methods are based on simulations and require two steps. First, a proxy model is chosen to obtain estimators, which are in general biased. Second, the bias is removed using simulations. See Garcia, Renault and Veredas [

When implementing these methods for distributions without closed-form densities, there are some drawbacks which motivate us in this paper to extend minimum Hellinger distance methods originally proposed by Beran [

$$Q_n(\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_\theta^S(x)]^{1/2}\right)^2 dx \qquad (1)$$

to obtain the simulated minimum Hellinger distance (SMHD) estimators, where $f_n(x)$ is an empirical density estimate based on the observed data with the property $f_n(x)\to_p f_{\theta_0}(x)$, where $\theta_0$ is the true vector of parameters. This consistency property implies $\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\theta_0}(x)]^{1/2}\right)^2 dx\to_p 0$ as $n\to\infty$; see section 3 (page 224) of Tamura and Boos [

Clearly, the new method proposed here will avoid the problem of arbitrariness in the choice of points for the k-L procedure based on characteristic functions. Unlike indirect inference, the proposed method does not need a proxy model. Furthermore, the estimators obtained using the proposed method might be more robust and efficient than method of moments estimators. Moreover, unlike the EM algorithm, the proposed method does not require conditional distributions, which can be difficult to obtain.

It appears that the proposed method, which combines simulation with the Hellinger distance in an original way, adds to the set of statistical techniques that can be useful for financial and actuarial data, many of which do not yet receive much attention in the actuarial literature. SMHD methods depend on being able to draw samples from the parametric family; in general, this is indeed possible. Consequently, SMHD methods also add to the existing literature on simulated inference, which is relatively new; see comments by Davidson and MacKinnon [

The new method is built on the classical version (version D) of Hellinger distance as proposed by Beran [

$$Q_n(\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_\theta(x)]^{1/2}\right)^2 dx \qquad (2)$$

to obtain the minimum Hellinger distance (MHD) estimators. The MHD estimators are known to have nice robustness properties, with a breakdown point greater than 0. Also, they are consistent under conditions that are, in general, less stringent than those required for the consistency of maximum likelihood (ML) estimators. However, more restrictions are placed upon the underlying parametric family for the MHD estimators to attain full efficiency, such as assuming that $f_\theta(x)$ has a compact support. Despite this drawback, simulation studies often show that the methods perform well across many models. For a literature review of Hellinger distance (HD) methods, see chapters 3 and 10 of the book by Basu, Shioya and Park [

In this paper, we introduce a simulated version of HD methods and show that the SMHD estimators are consistent. However, the question of asymptotic normality remains unresolved. Further work should generate results on asymptotic distributions for the SMHD estimators, to be presented in a subsequent paper. Here, the methods are presented with fewer technicalities and we relate them to traditional likelihood methods; in doing so, we wish to encourage practitioners to use these methods in their applied work. In the next paragraphs, we consider a few examples illustrating the types of distributions without closed-form expressions often encountered in finance and actuarial science where the new simulated method can be particularly useful.

Example 1

We present here the class of normal mean-variance mixture distributions where the random variable X can be represented using equality in distribution as

$$X\stackrel{d}{=}\theta+\mu W+\sigma\sqrt{W}\,Z, \qquad (3)$$

where

1) θ , μ and σ are parameters with − ∞ < θ < ∞ , − ∞ < μ < ∞ , and σ > 0 ;

2) W is a nonnegative random variable with an infinitely divisible (ID) distribution;

3) Z follows a standard normal distribution N ( 0 , 1 ) and is independent of W .

The generalized hyperbolic, variance-gamma, and normal-inverse Gaussian distributions belong to this class; see McNeil, Frey and Embrechts [

The moment generating function (mgf) of $X$ can be obtained and is given by $M_X(s)=e^{\theta s}M_W\!\left(\mu s+\tfrac{1}{2}\sigma^2 s^2\right)$, where the moment generating functions of $X$ and $W$ are given respectively by $M_X(s)$ and $M_W(s)$. Distributions of the increments observed at regular intervals of a subordinated Brownian motion process belong to this class. It can easily be seen that the density function of $X$ depends on the density function of $W$; consequently, the density function of $X$ might not have a closed-form expression in general. Closely related to the variance-gamma distribution is the generalized normal-Laplace (GNL) distribution, which is introduced by Reed [
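The representation above makes sampling straightforward even when the density of $X$ has no closed form. The following Python sketch (the paper's own computations use R; all parameter values here are hypothetical, with $W$ taken to be gamma so that $X$ is variance-gamma-like) draws from Equation (3) and checks the sample mean against $\theta+\mu E(W)$:

```python
import numpy as np

rng = np.random.default_rng(2024)

def r_mean_variance_mixture(n, theta, mu, sigma, draw_w, rng):
    """Draw n variates via Equation (3): X = theta + mu*W + sigma*sqrt(W)*Z."""
    w = draw_w(n, rng)                 # nonnegative mixing variable W
    z = rng.standard_normal(n)         # Z ~ N(0, 1), independent of W
    return theta + mu * w + sigma * np.sqrt(w) * z

# variance-gamma-like illustration: W ~ Gamma(shape=2, scale=1), so E(W) = 2
x = r_mean_variance_mixture(
    100_000, theta=0.0, mu=0.1, sigma=0.5,
    draw_w=lambda n, r: r.gamma(2.0, 1.0, size=n), rng=rng)

print(x.mean())   # E(X) = theta + mu * E(W) = 0.2 for these choices
```

Any nonnegative infinitely divisible variate can be plugged in for `draw_w`, which is exactly the flexibility the SMHD method exploits.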

Example 2

A random variable X follows a GNL distribution if it can be represented as

$$X\stackrel{d}{=}\rho\mu+\sigma\sqrt{\rho}\,Z+\frac{1}{\alpha}G_1-\frac{1}{\beta}G_2, \qquad (4)$$

where

1) the parameters are μ , σ , ρ , α and β , with − ∞ < μ < ∞ , σ > 0 , ρ > 0 , α > 0 , and β > 0 ;

2) the random variables $G_1$ and $G_2$ are independent and follow a common gamma distribution with density function $g(x;\rho)=\frac{1}{\Gamma(\rho)}x^{\rho-1}e^{-x}$, $x>0$, $\rho>0$;

3) Z follows a standard normal distribution, N ( 0 , 1 ) , with Z being independent of G 1 and G 2 .

The distribution is infinitely divisible and can display asymmetry and fatter tails than the normal distribution. It will be symmetric if $\alpha=\beta$. The vector of parameters is $\theta=(\mu,\sigma,\rho,\alpha,\beta)'$, and the mgf of $X$ can be obtained using the representation given by Equation (4) and is given by

$$M_X(s)=e^{\rho\left(\mu s+\frac{1}{2}\sigma^2 s^2\right)}\left(\frac{\alpha}{\alpha-s}\right)^{\rho}\left(\frac{\beta}{\beta+s}\right)^{\rho}. \qquad (5)$$

From the cumulant generating function, the mean and variance are given respectively by

$$E(X)=\rho\left(\mu+\frac{1}{\alpha}-\frac{1}{\beta}\right) \qquad (6)$$

and

$$V(X)=\rho\left(\sigma^2+\frac{1}{\alpha^2}+\frac{1}{\beta^2}\right). \qquad (7)$$

Higher cumulants are

$$\kappa_r=\rho\,(r-1)!\left(\frac{1}{\alpha^r}+(-1)^r\frac{1}{\beta^r}\right)\quad\text{for }r>2. \qquad (8)$$
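Since the SMHD method only requires sampling from the model, the representation given by Equation (4) is all that is needed to simulate GNL data. A minimal Python sketch (hypothetical parameter values; the paper itself works in R) draws from Equation (4) and checks the sample mean and variance against Equations (6) and (7):

```python
import numpy as np

rng = np.random.default_rng(7)

def rgnl(n, mu, sigma, rho, alpha, beta, rng):
    """Simulate the GNL distribution through the representation of Equation (4)."""
    z = rng.standard_normal(n)
    g1 = rng.gamma(rho, 1.0, size=n)   # G1 ~ Gamma(rho, 1)
    g2 = rng.gamma(rho, 1.0, size=n)   # G2 ~ Gamma(rho, 1), independent of G1
    return rho * mu + sigma * np.sqrt(rho) * z + g1 / alpha - g2 / beta

mu, sigma, rho, alpha, beta = 0.0, 0.008, 0.5, 30.0, 30.0
x = rgnl(500_000, mu, sigma, rho, alpha, beta, rng)

mean_theory = rho * (mu + 1.0 / alpha - 1.0 / beta)                   # Equation (6)
var_theory = rho * (sigma ** 2 + 1.0 / alpha ** 2 + 1.0 / beta ** 2)  # Equation (7)
print(x.mean(), mean_theory)
print(x.var(), var_theory)
```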

Due to the lack of a closed-form expression for the density function, Reed [

For more on Lévy processes and infinitely divisible distributions used in finance, see chapter 6 of the book by Schoutens [

Assume that we have a random sample of observations X 1 , ⋯ , X n and they are independent and identically distributed as the random variable X which is continuous with model density given by f θ ( x ) . The vector of parameters is denoted by θ = ( θ 1 , ⋯ , θ m ) ′ . In his seminal paper, Beran [

$$Q_n(\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_\theta(x)]^{1/2}\right)^2 dx. \qquad (9)$$

Beran [

The breakdown points of the estimators are around $\frac{1}{2}$, and it is well known that the sample mean has a breakdown point of 0. See Hogg, McKean and Craig [

$$Q_n(\theta)=2-2\int_{-\infty}^{\infty}[f_n(x)]^{1/2}[f_\theta(x)]^{1/2}dx \qquad (10)$$

and, using the Cauchy-Schwarz inequality, $\int_{-\infty}^{\infty}[f_n(x)]^{1/2}[f_\theta(x)]^{1/2}dx\le 1$, we find

$$0\le Q_n(\theta)\le 2. \qquad (11)$$

Moreover, since $\int_{-\infty}^{\infty}[f_n(x)]^{1/2}[f_\theta(x)]^{1/2}dx=1$ if and only if $f_n(x)=f_\theta(x)$ almost everywhere, it follows that $Q_n(\theta)=0$ if and only if $f_n(x)=f_\theta(x)$ almost everywhere.

The objective function is stable and bounded. This might explain, intuitively, why minimizing such an objective function yields estimators that are also stable and therefore robust in some sense.

Kernel density estimators are often used to define f n ( x ) . One of the simplest kernel density estimators is the rectangular kernel density estimator which generalizes the usual histogram estimator. In general, kernel density estimators have the form

$$f_n(x)=\frac{1}{nh_n}\sum_{i=1}^{n}\omega\!\left(\frac{x-x_i}{h_n}\right), \qquad (12)$$

where

a) h n is the bandwidth with the property that h n → 0 and n h n → ∞ as n → ∞ ;

b) ω ( x ) is a density function.

The property specified by a) guarantees the consistency of f n ( x ) ; see Corollary 6.4.1 given by Lehmann [

For the rectangular kernel density, the following density, symmetric around 0, is chosen: $\omega(x)=\frac{1}{2}$ for $-1<x<1$. The kernel $\omega(x)$ has a compact support. The density estimate at $x$ is then the average of rectangles located within $h_n$ units of $x$. For other kernels and their implementation using the package R, see chapter 10 of the book by Rizzo [
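As an illustration, the kernel density estimator of Equation (12) with the rectangular and triangular kernels can be sketched in a few lines of Python (the paper uses R's built-in `density` function; the normal-reference bandwidth below is one common default, not the paper's prescription):

```python
import numpy as np

def kde(x, data, h, kernel):
    """f_n(x) = (1 / (n h)) * sum_i w((x - x_i) / h), as in Equation (12)."""
    u = (np.atleast_1d(x)[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

rect = lambda u: np.where(np.abs(u) < 1.0, 0.5, 0.0)   # rectangular kernel
tri = lambda u: np.clip(1.0 - np.abs(u), 0.0, None)    # triangular kernel

rng = np.random.default_rng(1)
data = rng.standard_normal(5_000)
h = 1.06 * data.std() * len(data) ** (-0.2)   # normal-reference bandwidth (one common choice)

grid = np.linspace(-4.0, 4.0, 401)
fn = kde(grid, data, h, rect)

# a density estimate: nonnegative, and it integrates to (approximately) 1
dx = grid[1] - grid[0]
total = (fn[:-1] + fn[1:]).sum() * dx / 2.0   # trapezoid rule
print(total)
```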

If $f_\theta(x)$ has no closed-form expression but random samples can be drawn from the distribution with density $f_\theta(x)$, clearly we can use the same type of kernel density estimator, used to define $f_n(x)$, to estimate $f_{\theta_0}(x)$. In other words, in order to estimate $f_\theta(x)$, we similarly define $f_\theta^S(x)$ as the kernel density estimator based on a simulated random sample of size $U=\tau n$. Note that $U\to\infty$ as $n\to\infty$, and $\tau$ needs to be reasonably large so that there is little loss of efficiency due to simulations; we recommend $\tau\ge 10$.

Consequently, for the simulated version, we shall minimize the objective function given by

$$Q_n(\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_\theta^S(x)]^{1/2}\right)^2 dx \qquad (13)$$

to obtain the SMHD estimators.

For terminology, we shall call the classical version, which is deterministic in terms of $f_\theta(x)$, version D, and the simulated version, version S. Since $Q_n(\theta)$, as given by Equation (13), is not differentiable, a derivative-free direct simplex search method is recommended. R already has a built-in function for performing the Nelder-Mead simplex method, which is a derivative-free method for minimizing a function, as well as a built-in function for computing density estimates using various kernels. These features will facilitate the implementation of SMHD methods in applied work by practitioners. Furthermore, because the densities $f_n(x)$ and $f_\theta^S(x)$ based on a rectangular or triangular kernel are positive only on a finite interval and zero elsewhere, the integration needed to evaluate Equation (13) is easy to handle; a trapezoid quadrature method will suffice to find the SMHD estimators. Note that for the simulated version, we still have

$$0\le Q_n(\theta)\le 2. \qquad (14)$$

As data are also smoothed, intuitively, these features will again make the simulated version robust.
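To illustrate the whole procedure, the sketch below carries out SMHD estimation on a toy Gaussian model, where sampling is trivial and the answer is known. It follows the recipe in the text: a rectangular-kernel density estimate for both $f_n$ and $f_\theta^S$, the same seed across values of $\theta$, $\tau=10$, trapezoid quadrature for Equation (13), and Nelder-Mead minimization (here via scipy rather than R's built-in optimizer; the bandwidths and the parameterization $\sigma=e^{\eta}$ are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

rng_data = np.random.default_rng(3)
data = rng_data.normal(loc=1.0, scale=2.0, size=500)   # observed sample, true (mu, sigma) = (1, 2)

def kde_on_grid(sample, grid, h):
    # rectangular-kernel density estimate, Equation (12)
    u = (grid[:, None] - sample[None, :]) / h
    return np.where(np.abs(u) < 1.0, 0.5, 0.0).mean(axis=1) / h

def Q_n(eta, data, grid, tau=10, seed=12345):
    # simulated Hellinger objective, Equation (13); eta = (mu, log sigma)
    mu, sigma = eta[0], np.exp(eta[1])
    rng = np.random.default_rng(seed)                  # same seed for every theta
    sim = rng.normal(mu, sigma, size=tau * len(data))  # simulated sample of size U = tau * n
    h_n = 1.06 * data.std() * len(data) ** (-0.2)
    h_u = 1.06 * sim.std() * len(sim) ** (-0.2)
    d2 = (np.sqrt(kde_on_grid(data, grid, h_n))
          - np.sqrt(kde_on_grid(sim, grid, h_u))) ** 2
    dx = grid[1] - grid[0]
    return (d2[:-1] + d2[1:]).sum() * dx / 2.0         # trapezoid quadrature

grid = np.linspace(data.min() - 3.0, data.max() + 3.0, 400)
res = minimize(Q_n, x0=np.array([0.5, 0.5]), args=(data, grid), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # should land near the true values 1 and 2
```

Keeping the seed fixed makes $Q_n(\theta)$ a deterministic function of $\theta$, which is what allows a generic simplex search to minimize it.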

The paper is organized as follows. In Section 2, we will look into the asymptotic properties of MHD estimators. More precisely, we shall briefly review the asymptotic properties of the classical MHD estimators in Section 2.1 and establish the consistency of SMHD estimators in Section 2.2. Also in Section 2.2, an estimator for the Fisher information matrix is proposed with the use of SMHD estimators. In Section 3, we use a limited simulation study to compare the efficiencies of the SMHD estimators with those of method of moments estimators, using the GNL distribution. Despite being limited, the study seems to show that the SMHD estimators are more efficient than the method of moments estimators. This seems to point to the potential of SMHD methods to generate estimators with good efficiency and further justify their use in actuarial science and finance.

MHD estimators can be seen to be consistent in general for version D and version S. In fact, the conditions are even less restrictive than the conditions for maximum likelihood estimators to be consistent. Since we aim for applications, we only consider asymptotic properties under the strict parametric model, i.e., assuming the observations come from the parametric density family f θ ( x ) , where θ ∈ Ω , and the parameter space Ω is assumed to be compact.

Let

$$\left\|f_1^{1/2}-f_2^{1/2}\right\|=\left[\int_{-\infty}^{\infty}\left([f_1(x)]^{1/2}-[f_2(x)]^{1/2}\right)^2 dx\right]^{1/2}, \qquad (15)$$

where $f_1(x)$ and $f_2(x)$ are density functions. Note that $\|\cdot\|$ is a norm in the density functional space and respects the triangle inequality.

Tamura and Boos [

it is sufficient for the MHD estimators given by the vector θ ^ obtained by minimizing Equation (10) to be consistent, i.e., θ ^ → p θ 0 , assuming the parameter space Ω is compact. See Theorem 3.1 by Tamura and Boos [

However, for asymptotic normality, they require more stringent conditions to be as efficient as ML estimators. They are found in Theorem 4 given by Beran [

a compact support $K$ for both $\frac{\partial\log f_\theta(x)}{\partial\theta}$ and $f_\theta(x)$. Despite these restrictions, empirical studies often show that the estimators have high efficiencies in many models even when the condition of compact support for the parametric family is not met. The regularity conditions of Beran's Theorem 4, restricted to the strict parametric model, are stated in Theorem 1 below. We also require the vector of true parameters $\theta_0$ to be in $\Omega$, where $\Omega$ is compact. Theorem 1 can be viewed as a corollary of Theorem 4 as given by Beran [

Theorem 1

Suppose

1) The kernel density ω ( x ) is symmetric about 0 and has a compact support.

2) The function ω ( x ) is twice differentiable and its second derivative is bounded on the compact support.

3) $\frac{\partial\log f_\theta(x)}{\partial\theta}$ and $f_\theta(x)$ have a compact support $K$, and $f_\theta(x)>0$ on $K$.

4) $f_\theta(x)$ is twice absolutely continuous, with its second derivative with respect to $x$ bounded.

5) $\lim_{n\to\infty}n^{1/2}c_n=\infty$, $\lim_{n\to\infty}n^{1/2}c_n^2=0$, and $\lim_{n\to\infty}c_n=0$.

6) There exists a positive constant $s$, which might depend on $f_{\theta_0}(x)$, such that $\sqrt{n}(s_n-s)$ is bounded in probability.

Then $\sqrt{n}(\hat\theta-\theta_0)\to_L N\left(0,I(\theta_0)^{-1}\right)$, where $I(\theta_0)$ is the Fisher information matrix with elements given by

$$E\!\left(\frac{\partial\log f_\theta(x)}{\partial\theta_j}\,\frac{\partial\log f_\theta(x)}{\partial\theta_i}\right)=-E\!\left(\frac{\partial^2\log f_\theta(x)}{\partial\theta_j\,\partial\theta_i}\right),\quad i=1,\cdots,m,\ j=1,\cdots,m, \qquad (16)$$

which are assumed to exist.

We give only an outline establishing the results of Theorem 1 and focus on the strict parametric model for applications, with the aim of helping practitioners in applied fields follow more easily the arguments needed to develop the new method subsequently.

Note that, besides the rectangular kernel, the triangular kernel with $\omega(x)=1-|x|$ for $-1\le x\le 1$ and the Epanechnikov kernel with $\omega(x)=\frac{3}{4}(1-x^2)$ for $-1\le x\le 1$ meet conditions 1 and 2 as required by Theorem 1 and are available in the package R.

For establishing asymptotic normality results for the estimators as indicated by Theorem 1, we can consider a Taylor expansion of the system of equations

$D(\hat\theta)=\left.\frac{\partial Q_n(\theta)}{\partial\theta}\right|_{\theta=\hat\theta}=0$ around the true vector of parameters $\theta_0$. The system of equations implies

$$D(\hat\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\hat\theta}(x)]^{1/2}\right)\frac{\partial f_{\hat\theta}(x)/\partial\theta}{\sqrt{f_{\hat\theta}(x)}}\,dx=0 \qquad (17)$$

with $\frac{\partial f_{\hat\theta}(x)}{\partial\theta}=\left.\frac{\partial f_\theta(x)}{\partial\theta}\right|_{\theta=\hat\theta}$ and $f_{\hat\theta}(x)=\left.f_\theta(x)\right|_{\theta=\hat\theta}$.

We proceed to perform a Taylor expansion by noting

$$D(\theta_0)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\theta_0}(x)]^{1/2}\right)\frac{\partial f_{\theta_0}(x)/\partial\theta}{\sqrt{f_{\theta_0}(x)}}\,dx, \qquad (18)$$

$$\dot D(\theta_0)=\left.\frac{\partial D(\theta)}{\partial\theta}\right|_{\theta=\theta_0}=-\frac{1}{2}\int_{-\infty}^{\infty}\left(\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\right)\left(\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\right)'f_{\theta_0}(x)\,dx+o_p(1), \qquad (19)$$

assuming D ( θ ) is differentiable with respect to θ and

$\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\theta_0}(x)]^{1/2}\right)\frac{\partial s_{\theta_0}(x)}{\partial\theta}\,dx\to_p 0$, with $s_{\theta_0}(x)=\frac{\partial f_{\theta_0}(x)/\partial\theta}{\sqrt{f_{\theta_0}(x)}}$, using the compact support assumption for $\{f_\theta\}$. As a result, we can write

$$\dot D(\theta_0)=-\frac{1}{2}I(\theta_0)+o_p(1). \qquad (20)$$

Therefore, with the regularity conditions met, we will have the representation

$$\sqrt{n}(\hat\theta-\theta_0)=-[\dot D(\theta_0)]^{-1}\sqrt{n}\,D(\theta_0)+o_p(1), \qquad (21)$$

where $o_p(1)$ is a remainder term which converges to 0 in probability; this can be re-expressed using the following equality, which holds in law,

$$\sqrt{n}(\hat\theta-\theta_0)\stackrel{d}{=}2[I(\theta_0)]^{-1}\sqrt{n}\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\theta_0}(x)]^{1/2}\right)\frac{\partial f_{\theta_0}(x)/\partial\theta}{\sqrt{f_{\theta_0}(x)}}\,dx. \qquad (22)$$

Using the argument given by Beran [

$$\sqrt{n}\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_{\theta_0}(x)]^{1/2}\right)\frac{\partial f_{\theta_0}(x)/\partial\theta}{\sqrt{f_{\theta_0}(x)}}\,dx=\sqrt{n}\int_{-\infty}^{\infty}\frac{1}{2}\,\frac{f_n(x)-f_{\theta_0}(x)}{\sqrt{f_{\theta_0}(x)}}\,\frac{\partial f_{\theta_0}(x)/\partial\theta}{\sqrt{f_{\theta_0}(x)}}\,dx+o_p(1). \qquad (23)$$

This can be viewed as a form of generalized delta method to establish equality of the left-hand side and the right-hand side of Equation (23).

Consequently, Equation (22) can be re-expressed, using the equality in distribution, as

$$\sqrt{n}(\hat\theta-\theta_0)\stackrel{d}{=}[I(\theta_0)]^{-1}\sqrt{n}\int_{-\infty}^{\infty}\left(f_n(x)-f_{\theta_0}(x)\right)\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dx. \qquad (24)$$

Note that

$$\int_{-\infty}^{\infty}\left(f_n(x)-f_{\theta_0}(x)\right)\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dx=\int_{-\infty}^{\infty}f_n(x)\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dx \qquad (25)$$

as, in general, $\int_{-\infty}^{\infty}f_{\theta_0}(x)\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dx=0$. Furthermore,

$$\sqrt{n}\int_{-\infty}^{\infty}f_n(x)\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dx=\sqrt{n}\int_{-\infty}^{\infty}\frac{\partial\log f_{\theta_0}(x)}{\partial\theta}\,dF_n(x)+o_p(1), \qquad (26)$$

where F n ( x ) is the commonly used sample distribution function. This allows the following representation:

$$\sqrt{n}(\hat\theta-\theta_0)\stackrel{d}{=}[I(\theta_0)]^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\partial\log f_{\theta_0}(x_i)}{\partial\theta}. \qquad (27)$$

Therefore, $\sqrt{n}(\hat\theta-\theta_0)\to_L N\left(0,I(\theta_0)^{-1}\right)$.

For the simulated version, i.e., version S, we can only obtain results for consistency and they will be given in the next section. As for asymptotic normality, we cannot conclude for the time being whether or not conditions of Theorem 7.1 given by Newey and McFadden [

For version S, we minimize

$$Q_n(\theta)=\int_{-\infty}^{\infty}\left([f_n(x)]^{1/2}-[f_\theta^S(x)]^{1/2}\right)^2 dx. \qquad (28)$$

We recommend using the same seed across different values of $\theta$ if possible, and a simulated sample size $U=\tau n$ so that $U\to\infty$ at the same rate as $n\to\infty$. These recommendations conform with other simulated methods of inference, such as the method of simulated moments as discussed by Davidson and MacKinnon [

Let

$$\|G_n(\theta)\|=\left(Q_n(\theta)\right)^{1/2}, \qquad (29)$$

with $Q_n(\theta)$ as defined by Equation (28). The following theorem, which is essentially Theorem 3.1 given by Pakes and Pollard [

Theorem 2

Suppose

1) The parameter space Ω is compact, and θ 0 ∈ Ω .

2) $\hat\theta_S$ minimizes $\|G_n(\theta)\|$ or, equivalently, $Q_n(\theta)$.

3) For each $\delta>0$, $\sup_{\|\theta-\theta_0\|>\delta}\|G_n(\theta)\|^{-1}$ is bounded in probability, where $\|\cdot\|$ denotes the norm being used.

Then $\hat\theta_S\to_p\theta_0$.

Clearly, we have consistency for $\hat\theta_S$, as $0\le Q_n(\theta)\le 2$ and $Q_n(\theta)\to_p 0$ only at $\theta=\theta_0$.

For the time being, we cannot assert that θ ^ S follows a multivariate normal distribution asymptotically as we cannot verify the regularity conditions of Theorem 7.1 given by Newey and McFadden [

asymptotic covariance to be $\left(1+\frac{1}{\tau}\right)V$, with $V$ being the asymptotic covariance matrix of the estimators without using simulations. Conforming with other simulated methods, which typically give the same type of asymptotic covariance formula, we recommend choosing $\tau\ge 10$ to minimize the loss of efficiency due to simulations. The matrix $\left(1+\frac{1}{\tau}\right)V$, where $V=I(\theta_0)^{-1}$, can be viewed as a form of benchmark for the approximate asymptotic covariance matrix of $\hat\theta_S$ if indeed asymptotic normality can be shown. In the absence of a rigorous proof, we have to rely on simulations to evaluate the efficiency of $\hat\theta_S$, just as for version D when the support of the distribution is not compact. Further asymptotic results to be obtained in the future will be presented in a subsequent paper.

Since we have estimates for densities, it is natural that we can estimate the Fisher information matrix. Clearly, if the model density has a closed-form expression, then the following matrix

$$\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\partial f_{\hat\theta_S}(x_i)/\partial\theta}{f_{\hat\theta_S}(x_i)}\right)\left(\frac{\partial f_{\hat\theta_S}(x_i)/\partial\theta}{f_{\hat\theta_S}(x_i)}\right)' \qquad (30)$$

can be used to estimate $I(\theta_0)$. If $f_{\hat\theta_S}(x_i)$ is not available in closed form, we can use the kernel density estimate of $f_{\hat\theta_S}(x_i)$ and, following a method given by Pakes and Pollard [

$$\frac{\Delta f_{\hat\theta_S}(x_i)}{\Delta\theta_j}=\frac{f_{\hat\theta_S+\epsilon_n e_j}(x_i)-f_{\hat\theta_S}(x_i)}{\epsilon_n}, \qquad (31)$$

with $\epsilon_n\to 0$ at the rate $\epsilon_n=o(n^{-\delta})$, where $\delta\le\frac{1}{2}$, to estimate $\frac{\partial f_{\hat\theta_S}(x_i)}{\partial\theta_j}$, $j=1,\cdots,m$, assuming $f_{\hat\theta_S}(x_i)>0$, $i=1,\cdots,n$. The vector $e_j$ is a unit vector with 1 in its $j$-th place and 0 elsewhere. Replacing $f_{\hat\theta_S}(x_i)$ and $\frac{\partial f_{\hat\theta_S}(x_i)}{\partial\theta_j}$ by these estimates will give an estimator for the information matrix. An estimate of the information matrix is useful as the information matrix is related to the Cramér-Rao lower bound.
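A sketch of this information-matrix estimate in Python, using a closed-form normal density in place of the kernel estimate so that the answer is known (for $N(\mu,\sigma)$, the information matrix with respect to $(\mu,\sigma)$ is $\mathrm{diag}(1/\sigma^2,\,2/\sigma^2)$); the finite-difference step $\epsilon$ and the sample size are illustrative choices:

```python
import numpy as np

def info_matrix_estimate(density, theta_hat, x, eps):
    """Outer-product estimate (30) of I(theta_0), with the forward
    differences of Equation (31) replacing exact derivatives."""
    m = len(theta_hat)
    f0 = density(x, theta_hat)                   # assumed > 0 at every x_i
    scores = np.empty((len(x), m))
    for j in range(m):
        e_j = np.zeros(m)
        e_j[j] = eps
        # finite-difference estimate of the score d log f / d theta_j
        scores[:, j] = (density(x, theta_hat + e_j) - f0) / (eps * f0)
    return scores.T @ scores / len(x)

def normal_pdf(x, theta):
    mu, sigma = theta
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=200_000)
I_hat = info_matrix_estimate(normal_pdf, np.array([0.0, 1.0]), x, eps=1e-5)
print(I_hat)   # should be close to diag(1, 2)
```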

In this study, we shall compare the efficiencies of the SMHD estimators with those of the moment estimators for the case with $\alpha=\beta$, i.e., the GNL distribution with only four parameters. Reed [

obtained using the central empirical moments $m_r=\frac{1}{n}\sum_{i=1}^{n}(X_i-\bar X)^r$, $r=2,\cdots,6$, as they follow the same type of relationships which exist between model cumulants $\kappa_r$ and model central moments. Let $\mu_r=E(X-\mu)^r$, $r\ge 2$, and $\mu=E(X)$. The following relationships can be found in Stuart and Ord [

$$\begin{aligned}
\kappa_1&=\mu,\\
\kappa_2&=\mu_2,\\
\kappa_3&=\mu_3,\\
\kappa_4&=\mu_4-3\mu_2^2,\\
\kappa_5&=\mu_5-10\mu_3\mu_2,\\
\kappa_6&=\mu_6-15\mu_4\mu_2-10\mu_3^2+30\mu_2^3. \qquad (32)
\end{aligned}$$

Explicitly, the moment estimators are

$$\tilde\alpha=\tilde\beta=\left(\frac{20k_4}{k_6}\right)^{1/2},\quad \tilde\rho=\frac{100}{3}\,\frac{k_4^3}{k_6^2},\quad \tilde\mu=\frac{k_1}{\tilde\rho},\quad\text{and}\quad \tilde\sigma^2=\frac{k_2}{\tilde\rho}-\frac{2}{\tilde\alpha^2}. \qquad (33)$$

Reed [
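The moment estimators of Equation (33) are easy to code directly from the sample cumulants. The Python sketch below (hypothetical parameter values; the truncation of negative $\tilde\sigma^2$ at zero follows the convention described later in the text) checks them on a large simulated symmetric GNL sample:

```python
import numpy as np

def mm_estimates(x):
    """Moment estimators of Equation (33) for the symmetric GNL (alpha = beta)."""
    xbar = x.mean()
    m = {r: np.mean((x - xbar) ** r) for r in range(2, 7)}
    k1 = xbar                                            # sample cumulants via (32)
    k2 = m[2]
    k4 = m[4] - 3.0 * m[2] ** 2
    k6 = m[6] - 15.0 * m[4] * m[2] - 10.0 * m[3] ** 2 + 30.0 * m[2] ** 3
    alpha = np.sqrt(20.0 * k4 / k6)
    rho = (100.0 / 3.0) * k4 ** 3 / k6 ** 2
    mu = k1 / rho
    sigma2 = max(k2 / rho - 2.0 / alpha ** 2, 0.0)       # truncate at 0 if negative
    return mu, sigma2, rho, alpha

# check on a large simulated symmetric GNL sample, Equation (4) with alpha = beta
rng = np.random.default_rng(11)
n, mu0, s0, rho0, a0 = 2_000_000, 0.0, 0.008, 0.3, 30.0
x = (rho0 * mu0 + s0 * np.sqrt(rho0) * rng.standard_normal(n)
     + rng.gamma(rho0, 1.0, n) / a0 - rng.gamma(rho0, 1.0, n) / a0)
mu_t, sigma2_t, rho_t, alpha_t = mm_estimates(x)
print(mu_t, sigma2_t, rho_t, alpha_t)
```

Even with two million observations, the estimates of $\rho$ and $\alpha$ remain noisy because they depend on the fourth and sixth sample cumulants, which illustrates the sensitivity of the method of moments discussed earlier.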

A limited simulation study using parameters for the symmetric GNL distribution with four parameters, focusing on parameters in the ranges $\mu=0$, $\sigma=0.008$, $0.1\le\rho\le 5.0$, $30\le\alpha\le 40$, has been carried out, and the relevant results are summarized in the table below.

We noticed that the method of moments estimator for $\sigma^2$ is often negative; we set it equal to zero whenever this is the case, and the comparisons of efficiencies use this version of the method of moments estimator. The density estimate is based on the built-in function of the package R with a rectangular kernel and the default bandwidth based on the normal distribution. The overall asymptotic relative efficiency (ARE) used for comparisons is

$$ARE=\frac{MSE(\hat\mu_S)+MSE(\hat\sigma_S)+MSE(\hat\alpha_S)+MSE(\hat\rho_S)}{MSE(\tilde\mu)+MSE(\tilde\sigma)+MSE(\tilde\alpha)+MSE(\tilde\rho)}, \qquad (34)$$

with $MSE(\hat\theta)$ being the commonly used mean square error of the estimator $\hat\theta$, estimated using $M=50$ samples. The values of the estimated AREs for different sets of parameters are displayed in the table below.

Despite the scope of the study being limited, it suggests that SMHD estimators perform much better overall than method of moments estimators for the ranges of parameters used in finance. The method of moments estimator of $\theta$ tends to perform better for small values of $\rho$ and deteriorates rapidly as $\rho$ grows larger, with $ARE\to 0$, even for various parameter values that we tested which lie outside the ranges indicated above and are not shown in the table.

| $\alpha\backslash\rho$ | 0.1 | 0.2 | 0.3 | 0.4 | 1.0 | 5.0 |
|---|---|---|---|---|---|---|
| 30 | 0.591517 | 0.230986 | 0.108643 | 0.091801 | 0.000000 | 0.002654 |
| 32 | 0.424564 | 0.086852 | 0.334888 | 0.034317 | 0.000003 | 0.000022 |
| 34 | 0.786910 | 0.382450 | 0.321474 | 0.059785 | 0.000011 | 0.014531 |
| 36 | 0.618782 | 0.991033 | 0.200291 | 0.001051 | 0.000020 | 0.000062 |
| 38 | 0.449053 | 0.317762 | 0.121347 | 0.069595 | 0.000001 | 0.005445 |
| 40 | 0.434306 | 0.472453 | 0.144194 | 0.008689 | 0.000000 | 0.000002 |

Note: Tabulated values are estimates of the asymptotic relative efficiencies of the SMHD estimators versus the MM estimators.

Individual ratios of mean square errors for some sets of parameters

$\theta=(\mu=0,\ \sigma=0.008,\ \alpha=30,\ \rho=0.1)'$:

$\frac{MSE(\hat\mu_S)}{MSE(\tilde\mu)}=22.9437$, $\frac{MSE(\hat\sigma_S)}{MSE(\tilde\sigma)}=0.8584$, $\frac{MSE(\hat\alpha_S)}{MSE(\tilde\alpha)}=0.5918$, $\frac{MSE(\hat\rho_S)}{MSE(\tilde\rho)}=0.0222$, $ARE=0.5915$

$\theta=(\mu=0,\ \sigma=0.008,\ \alpha=34,\ \rho=0.3)'$:

$\frac{MSE(\hat\mu_S)}{MSE(\tilde\mu)}=925.3334$, $\frac{MSE(\hat\sigma_S)}{MSE(\tilde\sigma)}=0.6064$, $\frac{MSE(\hat\alpha_S)}{MSE(\tilde\alpha)}=0.3240$, $\frac{MSE(\hat\rho_S)}{MSE(\tilde\rho)}=0.0151$, $ARE=0.3215$

$\theta=(\mu=0,\ \sigma=0.008,\ \alpha=40,\ \rho=1)'$:

$\frac{MSE(\hat\mu_S)}{MSE(\tilde\mu)}=1.3739$, $\frac{MSE(\hat\sigma_S)}{MSE(\tilde\sigma)}=0.0503$, $\frac{MSE(\hat\alpha_S)}{MSE(\tilde\alpha)}=0.0004$, $\frac{MSE(\hat\rho_S)}{MSE(\tilde\rho)}=0.0000$, $ARE=0.0000$

As SMHD estimators remain consistent under minimal regularity conditions, and despite the lack of results on asymptotic normality, the proposed method appears useful for fitting actuarial and financial models based on continuous infinitely divisible distributions arising from Lévy processes, or on continuous mixture distributions constructed using mixing operations, whenever it is not difficult to simulate from these distributions but their density functions have no closed-form expressions. In many models, the proposed method appears more efficient than traditional methods such as the method of moments. The proposed method is not difficult to implement, yet methods based on simulations do not seem to receive much attention in finance and actuarial science. They might be considered as additional robust statistical techniques for analyzing empirical data, especially when point estimation is the main interest.

The helpful comments of an anonymous referee and the kind support of the OJS staff, which led to an improvement in the presentation of the paper, are gratefully acknowledged.

Luong, A. and Bilodeau, C. (2017) Simulated Minimum Hellinger Distance Estimation for Some Continuous Financial and Actuarial Models. Open Journal of Statistics, 7, 743-759. https://doi.org/10.4236/ojs.2017.74052