Scientific Research


Simulated Minimum Cramér-Von Mises Distance Estimation for Some Actuarial and Financial Models

**Author(s)**


1. Introduction

In actuarial science and finance we often model losses or log-returns with distributions for which neither the distribution function nor the corresponding density function has a closed form expression, yet it is not complicated to draw random samples from these distributions. Likelihood methods are clearly complicated in such a situation.

For statistical inference in models with these features, we shall assume that we have independent and identically distributed (iid) observations ${X}_{1},\cdots ,{X}_{n}$ with the same distribution as $X$, whose model distribution and density functions are given respectively by ${F}_{\theta}\left(u\right)$ and ${f}_{\theta}\left(u\right)$. Neither ${F}_{\theta}\left(u\right)$ nor ${f}_{\theta}\left(u\right)$ has a closed form expression, but often the moment generating function (mgf) ${M}_{\theta}\left(s\right)$ does. The vector of parameters of interest is

$\theta ={\left({\theta}_{1},\cdots ,{\theta}_{m}\right)}^{\prime}$

The compound Poisson distribution used in actuarial science and the jump diffusion distribution used in finance are typical examples of these types of models. Furthermore, in many circumstances distributions derived from the increments of Lévy processes also display these characteristics, and it is of interest to make inferences about the vector of parameters. We shall illustrate the situation with Examples 1 and 2 below.

Example 1

In this example, we consider the compound Poisson gamma distribution, which is commonly used in actuarial science and arises from compound Poisson processes, themselves members of the class of Lévy processes.

The compound Poisson gamma distribution is the distribution of a random variable $X$ representable as a random sum, i.e.,

$X={\displaystyle {\sum}_{i=1}^{N}{Y}_{i}}$, with the ${Y}_{i}$ ’s iid with a common gamma distribution whose density function is ${f}_{Y}\left(y\right)=\frac{1}{\Gamma \left(\alpha \right){\beta}^{\alpha}}{y}^{\alpha -1}{\text{e}}^{-\frac{y}{\beta}},\text{\hspace{0.17em}}y>0,\text{\hspace{0.17em}}\alpha >1,\text{\hspace{0.17em}}\beta >0$ and whose moment generating function is ${M}_{Y}\left(s\right)=\frac{1}{{\left(1-\beta s\right)}^{\alpha}}$ for $s<\frac{1}{\beta}$ . The random variable $N$ follows a Poisson distribution with parameter $\lambda >0$, and the ${Y}_{i}$ ’s and $N$ are assumed to be independent.

Note that the moment generating function of $X$ is

${M}_{\theta}\left(s\right)={\text{e}}^{\lambda \left({M}_{Y}\left(s\right)-1\right)}$ (1)

and from the mgf ${M}_{\theta}\left(s\right)$ the first three cumulants can be found; they are given by

${c}_{1}=\alpha \beta \lambda ,\text{\hspace{0.17em}}{c}_{2}=\alpha {\beta}^{2}\lambda \left(1+\alpha \right),\text{\hspace{0.17em}}{c}_{3}=\alpha \lambda \left(\alpha +1\right)\left(\alpha +2\right){\beta}^{3}$ (2)

The vector of parameters is $\theta ={\left(\alpha ,\beta ,\lambda \right)}^{\prime}$ .
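As a concrete check of the moment structure above, the following sketch (our own illustrative code, with hypothetical parameter values) simulates the compound Poisson gamma random sum and compares the sample mean and variance with the first two cumulants of expression (2). It relies on the fact that a sum of $N$ iid gamma variates with shape $\alpha$ and scale $\beta$ is again gamma distributed with shape $N\alpha$ and scale $\beta$.

```python
import numpy as np

def rcompound_poisson_gamma(n, alpha, beta, lam, rng):
    """Draw n observations of X = sum_{i=1}^N Y_i with N ~ Poisson(lam)
    and Y_i ~ Gamma(shape=alpha, scale=beta); X = 0 when N = 0."""
    N = rng.poisson(lam, size=n)
    # A sum of N iid Gamma(alpha, beta) variates is Gamma(N*alpha, beta)
    draws = rng.gamma(np.maximum(N, 1) * alpha, beta)
    return np.where(N > 0, draws, 0.0)

rng = np.random.default_rng(123)
alpha, beta, lam = 2.0, 1.5, 3.0
x = rcompound_poisson_gamma(200_000, alpha, beta, lam, rng)

# First two cumulants from expression (2)
c1 = alpha * beta * lam
c2 = alpha * beta**2 * lam * (1 + alpha)
print(x.mean(), c1)   # sample mean close to c1
print(x.var(), c2)    # sample variance close to c2
```

Note the probability mass at the origin: whenever $N=0$ the simulated loss is exactly zero, which is the hybrid feature discussed below.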

It is not difficult to simulate from the distribution of $X$, but the density function of $X$ has no closed form; see Klugman et al. [1] (p. 143) for the series representation of this density. Strictly speaking, this is not a continuous model but a hybrid model, with a probability mass assigned to the origin. Furthermore, continuous distributions created using a mixing mechanism also lead to continuous mixture distributions without closed form densities, yet simulated samples can often be drawn from such distributions. These distributions are commonly used in actuarial science and are given by Klugman et al. [1] (pp. 62-65); see Luong [2] for other distributions with similar features used in actuarial science.

Lévy processes are also used in finance as alternative models to the classical Brownian motion. The distributions of the increments of these processes can be more flexible than the normal distribution: they can be asymmetric and have fatter tails. Consequently, they are more suitable for modeling log-returns of assets in finance. The double exponential jump diffusion distribution below illustrates such an alternative to the normal distribution, which is the distribution of the increments of a Brownian motion.

Example 2

The double exponential jump diffusion model is a special case of a larger class of jump diffusion models in which the distribution of the jumps follows an asymmetric Laplace distribution instead of the normal distribution used in the classical jump diffusion model introduced by Merton [3] . That distribution has six parameters and has been studied by Kou [4] and Kou and Wang [5] . A sub-model, the double exponential jump diffusion model with only five parameters, has been found very useful for modeling log-returns of stocks; see Tsay [6] (pp. 311-319). We shall call this model the KWT model. Exact pricing of European call options for this model is also possible with the use of some special functions. The distribution can be represented as the distribution of $X$ with

$X=Z+{\displaystyle {\sum}_{i=1}^{N}{Y}_{i}}$

The ${Y}_{i}$ ’s are iid with a common density function and mgf given respectively by

${f}_{Y}\left(y;\omega ,\eta \right)=\frac{1}{2\eta}{\text{e}}^{-\left|\frac{y-\omega}{\eta}\right|},-\infty <y<\infty ,-\infty <\omega <\infty ,\eta >0,$ (3)

${M}_{Y}\left(s\right)=\frac{{\text{e}}^{\omega s}}{1-{\eta}^{2}{s}^{2}}$ for $|s|<\frac{1}{\eta}$ . (4)

The distribution function of the double exponential distribution is

${F}_{Y}\left(y\right)=\frac{1}{2}{\text{e}}^{\frac{y-\omega}{\eta}}$ for $y\le \omega $ and ${F}_{Y}\left(y\right)=\frac{1}{2}+\frac{1}{2}\left(1-{\text{e}}^{-\frac{y-\omega}{\eta}}\right)$ for $y>\omega $ . (5)

Since this distribution function has an explicit expression, simulated samples from the double exponential distribution can be drawn using the inverse transform method.
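A minimal sketch of the inverse transform method for expression (5): solving ${F}_{Y}\left(y\right)=u$ gives $y=\omega +\eta \mathrm{log}\left(2u\right)$ for $u\le \frac{1}{2}$ and $y=\omega -\eta \mathrm{log}\left(2\left(1-u\right)\right)$ for $u>\frac{1}{2}$. The helper name and parameter values below are our own illustrative choices.

```python
import numpy as np

def laplace_inverse_cdf(u, omega, eta):
    """Invert the distribution function of expression (5):
    F(y) = (1/2) e^{(y-omega)/eta}      for y <= omega,
    F(y) = 1 - (1/2) e^{-(y-omega)/eta} for y >  omega."""
    u = np.asarray(u)
    return np.where(u <= 0.5,
                    omega + eta * np.log(2.0 * u),
                    omega - eta * np.log(2.0 * (1.0 - u)))

rng = np.random.default_rng(7)
omega, eta = 0.1, 0.4
y = laplace_inverse_cdf(rng.uniform(size=100_000), omega, eta)
print(y.mean())  # close to E(Y) = omega, expression (6)
print(y.var())   # close to V(Y) = 2 * eta^2, expression (6)
```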

Tsay [6] (p. 312) also gives additional properties of the distribution of $Y$ , i.e., the mean and variance are given respectively by

$E\left(Y\right)=\omega ,\text{\hspace{0.17em}}V\left(Y\right)=2{\eta}^{2}$ (6)

It is assumed that the ${Y}_{i}$ ’s, $Z$ and $N$ are independent, $N$ follows a Poisson distribution with parameter $\lambda $ and $Z$ has a normal distribution $N\left(\mu ,{\sigma}^{2}\right)$ .

It is easy to see that the mgf of $X$ is given by

${M}_{\theta}\left(s\right)={\text{e}}^{\mu s+\frac{1}{2}{\sigma}^{2}{s}^{2}}{\text{e}}^{\lambda \left(\frac{{\text{e}}^{\omega s}}{1-{\eta}^{2}{s}^{2}}-1\right)}$ . (7)

From ${M}_{\theta}\left(s\right)$ , the first five cumulants can be found and they are given by

${c}_{1}=\mu +\lambda \omega ,\text{\hspace{0.17em}}{c}_{2}={\sigma}^{2}+\lambda \left(2{\eta}^{2}+{\omega}^{2}\right),\text{\hspace{0.17em}}{c}_{3}=\lambda \left(6{\eta}^{2}\omega +{\omega}^{3}\right)$ (8)

and

${c}_{4}=\lambda \left(24{\eta}^{4}+12{\eta}^{2}{\omega}^{2}+{\omega}^{4}\right),\text{\hspace{0.17em}}{c}_{5}=\lambda \left(120{\eta}^{4}\omega +20{\eta}^{2}{\omega}^{3}+{\omega}^{5}\right).$ (9)

The vector of parameters is $\theta ={\left(\mu ,{\sigma}^{2},\lambda ,\omega ,\eta \right)}^{\prime}$ .
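To illustrate how easily simulated samples can be produced for this model, the following sketch (our own illustrative code, with hypothetical parameter values) draws from the representation $X=Z+{\sum}_{i=1}^{N}{Y}_{i}$ and checks the sample mean and variance against ${c}_{1}$ and ${c}_{2}$ of expression (8).

```python
import numpy as np

def rkwt(n, mu, sigma2, lam, omega, eta, rng):
    """Draw n observations of X = Z + sum_{i=1}^N Y_i with
    Z ~ N(mu, sigma2), N ~ Poisson(lam), Y_i ~ double exponential(omega, eta)."""
    z = rng.normal(mu, np.sqrt(sigma2), size=n)
    counts = rng.poisson(lam, size=n)
    jumps = rng.laplace(omega, eta, size=int(counts.sum()))
    # Assign each jump to its observation, then sum the jumps per observation
    total = np.bincount(np.repeat(np.arange(n), counts), weights=jumps, minlength=n)
    return z + total

rng = np.random.default_rng(2024)
mu, sigma2, lam, omega, eta = 0.05, 0.04, 1.0, -0.1, 0.3
x = rkwt(100_000, mu, sigma2, lam, omega, eta, rng)

# First two cumulants from expression (8)
c1 = mu + lam * omega
c2 = sigma2 + lam * (2 * eta**2 + omega**2)
print(x.mean(), c1)   # sample mean close to c1
print(x.var(), c2)    # sample variance close to c2
```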

For the models introduced in these examples, method of moments (MM) estimators of the parameters lack robustness and might not even be efficient: for models with more than two parameters, MM estimators depend on polynomials of degree three or higher and are therefore unstable in the presence of outliers. Procedures based on the empirical characteristic function, such as the procedures of Feuerverger and McDunnough [7] , involve an arbitrary choice of the points at which the empirical characteristic function is matched with its model counterpart; this motivates us in this paper to extend Cramér-von Mises estimation to a simulated version (version S). The classical minimum Cramér-von Mises (MCVM) estimators (version D) are given by the vector $\stackrel{^}{\theta}$ which minimizes the objective function

${Q}_{n}\left(\theta \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({F}_{n}\left({x}_{i}\right)-{F}_{\theta}\left({x}_{i}\right)\right)}^{2}}$ or equivalently (10)

${Q}_{n}\left(\theta \right)={\displaystyle {\int}_{-\infty}^{\infty}{\left({F}_{n}\left(u\right)-{F}_{\theta}\left(u\right)\right)}^{2}\text{d}{F}_{n}\left(u\right)}$ (11)

as given by Duchesne et al. [8] , where ${F}_{n}\left(u\right)$ and ${F}_{\theta}\left(u\right)$ are respectively the sample and model distribution functions. The MCVM estimators are known to be robust; ${F}_{n}\left(u\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}I\left[{x}_{i}\le u\right]}$ is the commonly used sample distribution function and $I[.]$ is the indicator function. Note that if it is easy to draw samples from ${F}_{\theta}\left(u\right)$ , we can similarly construct the simulated sample distribution function ${F}_{\theta}^{S}\left(u\right)$ using $S$ observations drawn from ${F}_{\theta}\left(u\right)$ and minimize instead the following objective function

${Q}_{n}\left(\theta \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({F}_{n}\left({x}_{i}\right)-{F}_{\theta}^{S}\left({x}_{i}\right)\right)}^{2}}$ (12)

to obtain estimators. We shall call these estimators simulated MCVM (SMCVM) estimators, denote them by the vector ${\stackrel{^}{\theta}}^{S}$ , and call this version, version S. The method is numerically relatively simple to implement using direct search simplex methods, which are derivative free. Packages like R already have built-in functions to minimize a function using the Nelder-Mead simplex method. The SMCVM method does not require a proxy model, unlike other simulated methods such as those based on indirect inference; see Garcia et al. [9] and Smith [10] . Therefore, the method appears to be useful for actuarial science and finance, where there is a need to analyze data using these types of distributions. It can also be viewed as a natural extension of the classical MCVM methods proposed by Hogg and Klugman [11] (p. 83), for which the asymptotic properties of the estimators have been established by Duchesne et al. [8] . Like the simulated minimum Hellinger distance (SMHD) method proposed by Luong and Bilodeau [12] , the new method is robust, and it is even easier to implement than the SMHD method as it makes use of sample distribution functions instead of density estimates. Furthermore, it can handle models like the compound Poisson model, which displays a probability mass at the origin, where the SMHD method might not be suitable; on the other hand, the SMCVM estimators might not be as efficient as the SMHD estimators for continuous models.
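The mechanics just described can be sketched in a few lines, assuming SciPy's Nelder-Mead implementation: the objective is expression (12), ${F}_{\theta}^{S}$ is rebuilt from the same seed at every evaluation, and the double exponential distribution of Example 2 stands in for a model one would normally have to simulate. All names and tuning values below are illustrative, not the authors' own implementation.

```python
import numpy as np
from scipy.optimize import minimize

def smcvm_fit(x, simulate, theta0, S, seed=42):
    """Minimize Q_n(theta) of expression (12) with the Nelder-Mead simplex.
    `simulate(theta, size, rng)` draws from F_theta; the same seed is reused
    at every evaluation so that Q_n is a deterministic function of theta."""
    x = np.sort(np.asarray(x))
    n = x.size
    Fn = np.arange(1, n + 1) / n                        # F_n at the ordered x_i

    def Q(theta):
        if theta[1] <= 0:                               # keep the scale parameter positive
            return 1e6
        rng = np.random.default_rng(seed)               # same seed for every theta
        ys = np.sort(simulate(theta, S, rng))
        FS = np.searchsorted(ys, x, side="right") / S   # F_theta^S at the x_i
        return np.mean((Fn - FS) ** 2)

    return minimize(Q, theta0, method="Nelder-Mead")

# Double exponential model with theta = (omega, eta), as in Example 2
def simulate_laplace(theta, size, rng):
    return rng.laplace(theta[0], theta[1], size=size)

rng = np.random.default_rng(0)
data = rng.laplace(0.5, 1.2, size=2_000)                # "observed" sample
fit = smcvm_fit(data, simulate_laplace, theta0=np.array([0.0, 1.0]), S=20_000)
print(fit.x)                                            # should be close to (0.5, 1.2)
```

The only model-specific ingredient is the `simulate` function, which is what makes the method attractive when ${F}_{\theta}$ has no closed form.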

The paper is organized as follows. Following the approach of section 3 of Pakes and Pollard [13] (pp. 1037-1043), who make use of the Euclidean space and Euclidean norm to establish asymptotic properties of estimators, the Hilbert space ${l}^{2}$ is used in this paper with a natural norm extending, respectively, the Euclidean space and the commonly used Euclidean norm. Asymptotic properties of both the MCVM and the SMCVM estimators can be established in a unified way by viewing the estimators as minimizers of the norm of a random function; this is done in section 2. This approach also facilitates the use of the available Theorems of section 3 of Pakes and Pollard [13] , as most of their results continue to hold in ${l}^{2}$ . The SMCVM estimators are shown to be consistent and asymptotically normal. Their asymptotic covariance matrix can be estimated using the influence function approach, which was used by Duchesne et al. [8] . An estimate of the covariance matrix is also given in section 2; having such an estimate makes hypothesis testing for the parameters easier to handle. Section 3 displays results of a limited simulation study using the compound Poisson gamma model and the double exponential jump diffusion model, in which we compare the SMCVM estimators with method of moments (MM) estimators. For both models, the SMCVM estimators appear to be much more efficient than the MM estimators under the overall relative efficiency criterion.

2. Asymptotic Properties of the SMCVM Estimators

2.1. The Space ${l}^{2}$ and Its Norm

We can make use of the elegant results of Theorems 3.1 and 3.2 in section 3 of Pakes and Pollard [13] (pp. 1037-1043) to investigate the asymptotic properties of the MCVM and simulated MCVM (SMCVM) estimators. In their seminal work on asymptotic results for estimators defined using simulations, Pakes and Pollard [13] consider estimators obtained by minimizing the Euclidean norm of a vector of random functions, where the vector of random functions belongs to a Euclidean space. To use their results to investigate MCVM estimation, it is more convenient to consider the infinite dimensional Hilbert space ${l}^{2}$ , which generalizes the Euclidean space, together with the norm $\Vert \text{\hspace{0.05em}}.\text{\hspace{0.05em}}\Vert $ defined below, which generalizes the Euclidean norm.

For an element ${x}_{l}={\left({x}_{l,1},{x}_{l,2},\cdots \right)}^{\prime}$ which belongs to ${l}^{2}$ , define

$\Vert {x}_{l}\Vert ={\left({\displaystyle {\sum}_{i=1}^{\infty}{x}_{l,i}^{2}}\right)}^{\frac{1}{2}}$ , assumed to be finite. Clearly, $\Vert \text{\hspace{0.05em}}.\text{\hspace{0.05em}}\Vert $ is a norm on ${l}^{2}$ and it generalizes the Euclidean norm naturally. Also, if a vector $u={\left({u}_{1},\cdots ,{u}_{p}\right)}^{\prime}$ has finite dimension p, and hence belongs to a Euclidean space, then it belongs to ${l}^{2}$ when identified with $u={\left({u}_{1},\cdots ,{u}_{p},0,0,\cdots \right)}^{\prime}$ . The space ${l}^{2}$ and the norm $\Vert \text{\hspace{0.05em}}.\text{\hspace{0.05em}}\Vert $ have been studied in functional analysis and real analysis; see Davidson and Donsig [14] (pp. 137-141), for example.

For an infinite matrix $A=\left({a}_{ij}\right),i=1,2,\cdots ,j=1,2,\cdots $ , define $\Vert A\Vert ={\left({\displaystyle {\sum}_{i=1}^{\infty}{\displaystyle {\sum}_{j=1}^{\infty}{a}_{ij}^{2}}}\right)}^{\frac{1}{2}}$ . With the space ${l}^{2}$ , most of the results of their Theorems in section 3 remain valid, and only some minor changes are needed.

For estimation, we assume that we have a random sample consisting of n iid observations ${X}_{1},\cdots ,{X}_{n}$ from a continuous parametric family with distribution ${F}_{\theta}\left(u\right)$ . We also assume that ${F}_{\theta}$ has no closed form expression but that simulated samples can be drawn from ${F}_{\theta}\left(u\right)$ . The commonly used sample distribution function is denoted by ${F}_{n}\left(u\right)$ . The vector of parameters is denoted by

$\theta ={\left({\theta}_{1},\cdots ,{\theta}_{m}\right)}^{\prime}$ .

Define the following vectors of random functions

${G}_{n}\left(\theta \right)={\left(\frac{{F}_{n}\left({x}_{1}\right)-{F}_{\theta}({x}_{1})}{\sqrt{n}},\cdots ,\frac{{F}_{n}\left({x}_{n}\right)-{F}_{\theta}\left({x}_{n}\right)}{\sqrt{n}},0,0,\cdots \right)}^{\prime}$ (13)

for version D and it is easy to see that

${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({F}_{n}\left({x}_{i}\right)-{F}_{\theta}\left({x}_{i}\right)\right)}^{2}}$ . (14)

Equivalently,

${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}={\displaystyle {\int}_{-\infty}^{\infty}{\left({F}_{n}\left(u\right)-{F}_{\theta}\left(u\right)\right)}^{2}\text{d}{F}_{n}}$ , (15)

if ${F}_{\theta}\left(u\right)$ has support on the real line and

${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}={\displaystyle {\int}_{0}^{\infty}{\left({F}_{n}\left(u\right)-{F}_{\theta}\left(u\right)\right)}^{2}\text{d}{F}_{n}}$ if ${F}_{\theta}\left(u\right)$ is the distribution of a nonnegative random variable. Using the set up of section 3 in Pakes and Pollard [13] , the classical MCVM estimators can be viewed as the vector of values which minimizes ${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}$ or $\Vert {G}_{n}\left(\theta \right)\Vert $ as defined by expression (15).

For the simulated version of MCVM estimation, i.e., version S, define

${G}_{n}\left(\theta \right)={\left(\frac{{F}_{n}\left({x}_{1}\right)-{F}_{\theta}^{S}\left({x}_{1}\right)}{\sqrt{n}},\cdots ,\frac{{F}_{n}\left({x}_{n}\right)-{F}_{\theta}^{S}\left({x}_{n}\right)}{\sqrt{n}},0,0,\cdots \right)}^{\prime}$ (16)

with ${F}_{\theta}^{S}\left(u\right)$ being the sample distribution function based on a simulated sample of size S drawn from ${F}_{\theta}\left(u\right)$ . Then the SMCVM estimators, given by the vector ${\stackrel{^}{\theta}}^{S}$ , are obtained by minimizing

${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({F}_{n}\left({x}_{i}\right)-{F}_{\theta}^{S}\left({x}_{i}\right)\right)}^{2}}$ . (17)

Clearly, both versions of MCVM estimation can be treated in a unified way using this set up; we also have $\Vert G\left(\theta \right)\Vert <\infty $ in probability. For both versions, let

$G\left(\theta \right)={\left(\frac{{F}_{{\theta}_{0}}\left({x}_{1}\right)-{F}_{\theta}\left({x}_{1}\right)}{\sqrt{n}},\cdots ,\frac{{F}_{{\theta}_{0}}\left({x}_{n}\right)-{F}_{\theta}\left({x}_{n}\right)}{\sqrt{n}},0,0,\cdots \right)}^{\prime}$ ,

${\Vert G\left(\theta \right)\Vert}^{2}={\displaystyle {\int}_{-\infty}^{\infty}{\left({F}_{{\theta}_{0}}\left(u\right)-{F}_{\theta}\left(u\right)\right)}^{2}\text{d}{F}_{n}\left(u\right)}$ and we have ${F}_{n}\left(u\right)\stackrel{p}{\to}{F}_{{\theta}_{0}}\left(u\right)$ , $\Vert {G}_{n}\left(\theta \right)\Vert -\Vert G\left(\theta \right)\Vert ={o}_{p}\left(1\right)$ , with ${o}_{p}\left(1\right)$ being an expression which converges to 0 in probability.

We shall restate Theorem 3.1 of Pakes and Pollard [13] (p. 1038), assuming the space ${l}^{2}$ and its norm as defined earlier are used, so that it is more suitable for MCVM estimation. Condition ii) of their Theorem 3.1, which requires ${G}_{n}\left({\theta}_{0}\right)={o}_{p}\left(1\right)$ , can be replaced by $\Vert {G}_{n}\left({\theta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ , as only this condition is used in their proof. Note that the set up for their Theorem is very general; we only need to verify their conditions for estimators obtained by minimizing an objective function of the form

${Q}_{n}\left(\theta \right)={\Vert {G}_{n}\left(\theta \right)\Vert}^{2}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Vert {G}_{n}\left(\theta \right)\Vert $ (18)

Theorem 1: Under the following conditions, the estimators given by the vector $\stackrel{\u02dc}{\theta}$ converges in probability to ${\theta}_{0}$ , the vector of the true parameters, i.e., $\stackrel{\u02dc}{\theta}\stackrel{p}{\to}{\theta}_{0}$ .

1) $\Vert {G}_{n}\left(\stackrel{\u02dc}{\theta}\right)\Vert \le {o}_{p}\left(1\right)+{\mathrm{inf}}_{\theta \in \Omega}\Vert {G}_{n}\left(\theta \right)\Vert $ , $\Omega $ is the parameter space assumed to be compact.

2) $\Vert {G}_{n}\left({\theta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ .

3) ${\mathrm{sup}}_{\Vert \theta -{\theta}_{0}\Vert >\delta}{\Vert {G}_{n}\left(\theta \right)\Vert}^{-1}={O}_{p}\left(1\right)$ for each $\delta >0$ , ${O}_{p}\left(1\right)$ is an expression bounded in probability.

Clearly, the SMCVM estimators, given by the vector ${\stackrel{^}{\theta}}^{S}$ which minimizes ${Q}_{n}\left(\theta \right)={\Vert {G}_{n}\left(\theta \right)\Vert}^{2}$ , satisfy conditions 1) and 2) of Theorem 1, since ${\Vert {G}_{n}\left(\theta \right)\Vert}^{2}\stackrel{p}{\to}0$ only at $\theta ={\theta}_{0}$ if the parametric family is well parameterized, which is the case in general. Note that the integrand of the integral defined by expression (11) is nonnegative and smaller than or equal to one. Therefore, in probability,

$0<{Q}_{n}\left(\theta \right)\le 1$ for $\Vert \theta -{\theta}_{0}\Vert >\delta ,\text{\hspace{0.17em}}\delta >0$

Condition 3) is satisfied in general, which implies consistency of the SMCVM estimators; we then have ${\stackrel{^}{\theta}}^{S}\stackrel{p}{\to}{\theta}_{0}$ . Note that since ${Q}_{n}\left(\theta \right)$ is always bounded, it is not surprising that it generates robust estimators. For more on robustness in the sense of bounded influence functions for the SMCVM estimators, see section 2.2.2. Also, observe that ${\stackrel{^}{\theta}}^{S}$ remains consistent even if the parametric model is only hybrid, i.e., with some discontinuity points, as in the case of the compound Poisson models. We now turn our attention to the question of asymptotic normality of ${\stackrel{^}{\theta}}^{S}$ : we first discuss informally the arguments used to establish it, and the formal arguments then follow from the proofs of Theorem 3.3 of Pakes and Pollard [13] (pp. 1040-1043). A version of their Theorem 3.3 is restated as Theorem 2 below.

Since ${Q}_{n}\left(\theta \right)={\left(\Vert {G}_{n}\left(\theta \right)\Vert \right)}^{2}$ is not differentiable, the traditional Taylor expansion argument cannot be used to establish asymptotic normality of estimators obtained by minimizing it. Here, we assume that $G\left(\theta \right)$ is differentiable with derivative matrix $\Gamma \left(\theta \right)$ , meaning Fréchet differentiable with respect to the norm $\Vert \text{\hspace{0.05em}}.\text{\hspace{0.05em}}\Vert $ of ${l}^{2}$ ; see Luenberger [15] for the notion of Fréchet differentiability, and chapter 3 of the same book for the notion of Hilbert space.

If the property of differentiability holds then we can define the random function ${Q}_{n}^{a}\left(\theta \right)$ to approximate ${Q}_{n}\left(\theta \right)$ with

${Q}_{n}^{a}\left(\theta \right)={\left(\Vert {L}_{n}\left(\theta \right)\Vert \right)}^{2},\text{\hspace{0.17em}}{L}_{n}\left(\theta \right)={G}_{n}\left({\theta}_{0}\right)+\Gamma \left({\theta}_{0}\right)\left(\theta -{\theta}_{0}\right)$ (19)

Let ${\stackrel{^}{\theta}}^{S}$ and ${\theta}^{*}$ be the vectors which minimize ${Q}_{n}\left(\theta \right)$ and ${Q}_{n}^{a}\left(\theta \right)$ respectively. The idea behind the proofs of asymptotic normality in Theorem 3.3 of Pakes and Pollard is that if the approximation of the original, non-differentiable objective function ${Q}_{n}\left(\theta \right)$ by the differentiable ${Q}_{n}^{a}\left(\theta \right)$ is of the right order, then the vector ${\stackrel{^}{\theta}}^{S}$ which minimizes ${Q}_{n}\left(\theta \right)$ and the vector ${\theta}^{*}$ which minimizes ${Q}_{n}^{a}\left(\theta \right)$ are asymptotically equivalent, i.e., we have:

1) $\sqrt{n}\left({\stackrel{^}{\theta}}^{S}-{\theta}_{0}\right)=\sqrt{n}\left({\theta}^{*}-{\theta}_{0}\right)+{o}_{p}\left(1\right)$ or, using equality in distribution, $\sqrt{n}\left({\stackrel{^}{\theta}}^{S}-{\theta}_{0}\right){=}^{d}\sqrt{n}\left({\theta}^{*}-{\theta}_{0}\right)$ , and it is easy to see that ${\theta}^{*}$ can be expressed explicitly as ${\theta}^{*}-{\theta}_{0}=-{\left({\Gamma}^{\prime}\Gamma \right)}^{-1}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right),\Gamma =\Gamma \left({\theta}_{0}\right)$ , since ${L}_{n}\left(\theta \right)$ is an affine function of $\theta $ .

2) ${Q}_{n}\left({\stackrel{^}{\theta}}^{S}\right)={Q}_{n}^{a}\left({\theta}^{*}\right)+{o}_{p}\left({n}^{-1}\right)$ , ${o}_{p}\left({n}^{-1}\right)$ is an expression converging to 0 in probability at a faster rate than ${n}^{-1}$ .
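Since ${L}_{n}\left(\theta \right)$ is affine, minimizing ${\Vert {L}_{n}\left(\theta \right)\Vert}^{2}$ is an ordinary least squares problem, which is where the explicit formula for ${\theta}^{*}$ comes from. A quick numerical confirmation, with arbitrary stand-ins for $\Gamma $ and ${G}_{n}\left({\theta}_{0}\right)$ :

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma = rng.normal(size=(50, 3))   # stand-in for Gamma: n rows, m = 3 columns
g = rng.normal(size=50)            # stand-in for G_n(theta_0)

# Closed form minimizer of ||g + Gamma @ delta||^2 over delta = theta - theta_0
delta_formula = -np.linalg.solve(Gamma.T @ Gamma, Gamma.T @ g)
# The same minimizer from a generic least squares solver
delta_lstsq = np.linalg.lstsq(Gamma, -g, rcond=None)[0]
print(np.allclose(delta_formula, delta_lstsq))  # True
```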

Note that the matrix $\Gamma \left({\theta}_{0}\right)$ is of rank m, with m columns and an infinite number of rows, given by

$\Gamma =\Gamma \left({\theta}_{0}\right)=\frac{1}{\sqrt{n}}\left({b}_{ij}\right)$ with ${b}_{ij}=-\frac{\partial {F}_{{\theta}_{0}}\left({x}_{i}\right)}{\partial {\theta}_{j}},i=1,\cdots ,n,j=1,\cdots ,m$

and

${b}_{ij}=0,i=n+1,\cdots ,j=1,\cdots ,m$

An estimate ${\stackrel{^}{\Gamma}}_{n}$ of the matrix $\Gamma =\Gamma \left({\theta}_{0}\right)$ is defined by expression (33) in section 2.2; consequently, we can estimate $\frac{\partial {F}_{{\theta}_{0}}\left({x}_{i}\right)}{\partial {\theta}_{j}}$ by $-\stackrel{^}{{b}_{ij}},i=1,\cdots ,n,j=1,\cdots ,m$ , using the corresponding elements ${\stackrel{^}{\Gamma}}_{n}\left(i,j\right)$ extracted from ${\stackrel{^}{\Gamma}}_{n}$ ,

$-\stackrel{^}{{b}_{ij}}=-\sqrt{n}{\stackrel{^}{\Gamma}}_{n}\left(i,j\right),i=1,\cdots ,n,j=1,\cdots ,m$ (20)

Under these conditions, it suffices to work with ${\theta}^{*}$ and ${Q}_{n}^{a}\left({\theta}^{*}\right)$ to derive the asymptotic distribution of ${\stackrel{^}{\theta}}^{S}$ . A regularity condition of their Theorem 3.3 ensuring that the approximation is of the right order, and which is the most difficult to check, is

${\mathrm{sup}}_{\Vert \theta -{\theta}_{0}\Vert \le {\delta}_{n}}\frac{\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert}{{n}^{-\frac{1}{2}}+\Vert {G}_{n}\left(\theta \right)\Vert +\Vert {G}_{n}\left({\theta}_{0}\right)\Vert}={o}_{p}\left(1\right)$

by Pakes and Pollard [13] (p. 1040).

A slightly more stringent condition, which obviously implies the above regularity condition, is

${\mathrm{sup}}_{\Vert \theta -{\theta}_{0}\Vert \le {\delta}_{n}}\sqrt{n}\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ . (21)

For simulated methods, in order for this condition to hold, independent samples for each $\theta $ cannot in general be used; see Pakes and Pollard [13] (p. 1048). Otherwise, only consistency can be guaranteed for estimators using version S; see section 2.2.2 for the same seed issue. For version S, the simulated samples are assumed to have size $U=\tau n$ , and the same seed is used across different values of $\theta $ to draw samples of size $U$ . These two assumptions are quite standard for simulation based inference; see section 9.6 on the method of simulated moments (MSM) in Davidson and MacKinnon [16] (p. 384), and also Smith [10] (p. S66) for this assumption for his simulated quasi-likelihood estimators. For the numerical optimization needed to find the minimum of ${Q}_{n}\left(\theta \right)$ , we rely on direct search simplex methods, which are derivative free. Chong and Zak [17] (pp. 273-278) provide a good overview of derivative free simplex algorithms.
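The effect of the same seed assumption can be seen directly on a one dimensional slice of the objective. In the sketch below (our own illustrative code; a location only double exponential model is used so that ${F}_{\theta}$ is easy to simulate), reusing one seed across a grid of $\theta $ values yields a slice of ${Q}_{n}$ regular enough for a simplex search, while redrawing with a fresh seed at each $\theta $ produces a much rougher slice, as measured by the summed absolute increments.

```python
import numpy as np

rng0 = np.random.default_rng(11)
x = np.sort(rng0.laplace(0.0, 1.0, size=500))    # observed sample, n = 500
Fn = np.arange(1, x.size + 1) / x.size
U = 4 * x.size                                   # simulated size U = tau * n, tau = 4

def Q(omega, seed):
    """Objective of expression (12) for a location parameter omega."""
    rng = np.random.default_rng(seed)
    ys = np.sort(rng.laplace(omega, 1.0, size=U))
    FS = np.searchsorted(ys, x, side="right") / U
    return np.mean((Fn - FS) ** 2)

grid = np.linspace(-0.5, 0.5, 101)
q_same = [Q(w, seed=3) for w in grid]                       # one seed for all omega
q_fresh = [Q(w, seed=100 + i) for i, w in enumerate(grid)]  # new seed at each omega

rough_same = np.abs(np.diff(q_same)).sum()    # roughness of each slice
rough_fresh = np.abs(np.diff(q_fresh)).sum()
print(rough_same, rough_fresh)                # the fresh-seed slice is rougher
```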

2.2. Asymptotic Normality

In this section, we state Theorem 2, which is essentially Theorem 3.3 of Pakes and Pollard [13] , and comment on the conditions needed to verify asymptotic normality of the MCVM estimators for versions D and S.

Theorem 2

Let $\stackrel{\u02dc}{\theta}$ be a vector of consistent estimators for ${\theta}_{0}$ , the unique vector which satisfies $G\left({\theta}_{0}\right)=0$ .

Under the following conditions:

1) The parameter space $\Omega $ is compact.

2) $\Vert {G}_{n}\left(\stackrel{\u02dc}{\theta}\right)\Vert \le {o}_{p}\left({n}^{-\frac{1}{2}}\right)+{\mathrm{inf}}_{\theta \in \Omega}\Vert {G}_{n}\left(\theta \right)\Vert $

3) $G(.)$ is differentiable at ${\theta}_{0}$ with a derivative matrix $\Gamma =\Gamma \left({\theta}_{0}\right)$ of full rank

4) ${\mathrm{sup}}_{\Vert \theta -{\theta}_{0}\Vert \le {\delta}_{n}}\sqrt{n}\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ for every sequence $\left\{{\delta}_{n}\right\}$ of positive numbers which converge to zero.

5) $\sqrt{n}\Vert {G}_{n}\left({\theta}_{0}\right)\Vert ={O}_{p}\left(1\right)$ , ${O}_{p}\left(1\right)$ is an expression bounded in probability.

6) ${\theta}_{0}$ is an interior point of the parameter space $\Omega $ , assumed to be compact.

Then, we have the following representation which will give the asymptotic distribution of $\stackrel{\u02dc}{\theta}$ in Corollary 1, i.e.,

$\sqrt{n}\left(\stackrel{\u02dc}{\theta}-{\theta}_{0}\right)=-{\left({\Gamma}^{\prime}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)+{o}_{p}\left(1\right)$ , (22)

or equivalently, using equality in distribution,

$\sqrt{n}\left(\stackrel{\u02dc}{\theta}-{\theta}_{0}\right){=}^{d}-{\left({\Gamma}^{\prime}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ . (23)

The proofs of these results follow from the arguments used to prove Theorem 3.3 of Pakes and Pollard [13] . For expression (22) or expression (23) to hold, only condition 5) of Theorem 2 is needed and used in their proofs, and there is no need to assume that $\sqrt{n}{G}_{n}\left({\theta}_{0}\right)$ has an asymptotic distribution. Clearly, MCVM and SMCVM estimators are obtained by minimizing $\Vert {G}_{n}\left(\theta \right)\Vert $ , hence they satisfy condition 2) of Theorem 2 with $\Vert {G}_{n}\left(\theta \right)\Vert $ as defined by expression (14) or expression (17), depending on whether version D or version S is considered.

Therefore, for version D,

$\sqrt{n}\left(\stackrel{\u02dc}{\theta}-{\theta}_{0}\right){=}^{d}-{\left({\Gamma}^{\prime}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ , ${G}_{n}\left(\theta \right)$ as defined by expression (13)

and for version S,

$\sqrt{n}\left({\stackrel{^}{\theta}}^{S}-{\theta}_{0}\right){=}^{d}-{\left({\Gamma}^{\prime}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ , ${G}_{n}\left(\theta \right)$ as defined by expression (16).

From the result of the Theorem, it is easy to obtain the main result of the following corollary, which gives the asymptotic covariance matrix of the estimators.

Corollary 1.

Let ${Y}_{n}=\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ , if ${Y}_{n}\stackrel{L}{\to}N\left(0,V\right)$ and $\left({\Gamma}^{\prime}\Gamma \right)\stackrel{p}{\to}A$ , $A$ is full rank and symmetric then $\sqrt{n}\left(\stackrel{\u02dc}{\theta}-{\theta}_{0}\right)\stackrel{L}{\to}N\left(0,D\right)$ with

$D={\left(A\right)}^{-1}V{\left(A\right)}^{-1}$ (24)

The matrices $D$ and $V$ depend on ${\theta}_{0}$ , and we adopt the notations $D=D\left({\theta}_{0}\right),V=V\left({\theta}_{0}\right)$ .

These results are proved by Pakes and Pollard [13] ; see the proofs of their Theorem 3.3. We just need to verify that these conditions are met for SMCVM estimation. Before verifying them for both versions of MCVM estimation, the following assumptions are needed to verify condition 4) of Theorem 2, which is the most difficult condition to verify. We define the following sequence of functions $\left\{{g}_{n}\left(\theta \right)\right\}$ , as it will be used later,

${g}_{n}\left(\theta \right)=n{\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert}^{2},n=1,2,\cdots $

Assumption 1

1) As $n\to \infty $ and $\theta \to {\theta}_{0}$ , for version S of CVM estimation

${E}_{|x}\left\{\left({F}_{{\theta}_{0}}^{S}\left(x\right)-{F}_{{\theta}_{0}}\left(x\right)\right)\left({F}_{\theta}^{S}\left(x\right)-{F}_{\theta}\left(x\right)\right)\right\}\to {E}_{|x}\left\{{\left({F}_{{\theta}_{0}}^{S}\left(x\right)-{F}_{{\theta}_{0}}\left(x\right)\right)}^{2}\right\}$ , (25)

${E}_{|x}\{.\}$ is the conditional expectation given $x$ of the expression inside the brackets.

2) The sequence of functions ${g}_{n}^{e}\left(\theta \right)$ is differentiable with continuous partial derivatives, where ${g}_{n}^{e}\left(\theta \right)=E\left({g}_{n}\left(\theta \right)\right)$ and the expectation is under ${\theta}_{0}$ ; using the usual conditioning argument, it can also be expressed as

$\begin{array}{c}{g}_{n}^{e}\left(\theta \right)={\displaystyle {\int}_{-\infty}^{\infty}(n{E}_{|x}\left\{{\left({F}_{{\theta}_{0}}^{S}\left(x\right)-{F}_{{\theta}_{0}}\left(x\right)\right)}^{2}\right\}+n{E}_{|x}\left\{{\left({F}_{\theta}^{S}\left(x\right)-{F}_{\theta}\left(x\right)\right)}^{2}\right\}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-2n{E}_{|x}\left\{\left({F}_{{\theta}_{0}}^{S}\left(x\right)-{F}_{{\theta}_{0}}\left(x\right)\right)\left({F}_{\theta}^{S}\left(x\right)-{F}_{\theta}\left(x\right)\right)\right\})\text{d}{F}_{{\theta}_{0}}\left(x\right)\end{array}$ (26)

For condition 1) of Assumption 1 to hold, we cannot use independent samples for different values of $\theta $ to draw simulated samples for version S of CVM estimation; otherwise

${E}_{|x}\left\{\left({F}_{{\theta}_{0}}^{S}\left(x\right)-{F}_{{\theta}_{0}}\left(x\right)\right)\left({F}_{\theta}^{S}\left(x\right)-{F}_{\theta}\left(x\right)\right)\right\}=0$ and ${g}_{n}^{e}\left(\theta \right)$ cannot converge to 0 in probability. This justifies that the same seed must be used to generate random samples for different values of $\theta $ .
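This common random numbers requirement can be sketched in code. The sampler below (our own illustration, not from the paper; all names are hypothetical) builds the simulated distribution function ${F}_{\theta}^{S}$ for the compound Poisson gamma model of Example 1, reseeding the generator with one fixed seed on every call so that evaluations at different values of $\theta $ share the same underlying random stream:

```python
import numpy as np

def simulated_cdf(theta, x_grid, U=10000, seed=12345):
    """Simulated CDF F^S_theta for the compound Poisson gamma model of
    Example 1. The same seed is reused for every theta (common random
    numbers), so F^S_theta and F^S_theta0 are positively correlated."""
    lam, alpha, beta = theta
    rng = np.random.default_rng(seed)        # same seed for every theta
    n_jumps = rng.poisson(lam, size=U)       # N_i ~ Poisson(lambda)
    jumps = rng.gamma(alpha, beta, size=n_jumps.sum())  # Y ~ Gamma(alpha, beta)
    # X_i = sum of its N_i gamma variates
    idx = np.repeat(np.arange(U), n_jumps)
    x = np.bincount(idx, weights=jumps, minlength=U)
    # empirical CDF of the simulated sample, evaluated on x_grid
    return (x[:, None] <= x_grid[None, :]).mean(axis=0)
```

Because the seed is fixed inside the function, two calls with the same $\theta $ return identical values, and small changes in $\theta $ produce smoothly correlated changes in ${F}_{\theta}^{S}$, which is what condition 1) of Assumption 1 requires.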

We shall proceed to check the regularity conditions for both versions of MCVM estimation. Note that saying $\Gamma \left(\theta \right)$ is the derivative of $G\left(\theta \right)$ in ${l}^{2}$ means that $\Gamma =\Gamma \left({\theta}_{0}\right)$ is the Fréchet derivative at $\theta ={\theta}_{0}$ with the property

$\Vert {G}_{n}\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)-\Gamma \left(\theta -{\theta}_{0}\right)\Vert =o\left(\Vert \theta -{\theta}_{0}\Vert \right)$

As in Euclidean space, the sufficient condition for differentiability here only requires that the partial derivatives $\frac{\partial {F}_{\theta}\left(x\right)}{\partial {\theta}_{j}}$ be continuous with respect to $\theta $ . For the notion of derivative in Hilbert space, see the Fréchet derivative in Luenberger [15] (pp. 171-177), which generalizes the notion of derivative in Euclidean space. Conditions (1-3) of Theorem 2 can be verified easily. Condition (4) of Theorem 2 will be met in general if Assumption 1 holds; see the Appendix for details and justifications.

We proceed to find the asymptotic distribution of $\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ . Using expression (22) and expression (23), we shall obtain the asymptotic covariance matrix of the MCVM estimators for both versions. For version D, the asymptotic covariance matrix has been obtained by Duchesne et al. [8] (p. 407), using the influence function approach with the statistical functional $R\left({F}_{n}\right)$ defined as

$R\left({F}_{n}\right)=-{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)={\displaystyle {\int}_{-\infty}^{\infty}\left({F}_{n}\left(u\right)-{F}_{{\theta}_{0}}\left(u\right)\right)\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial \theta}\text{d}{F}_{n}\left(u\right)},\text{\hspace{0.17em}}\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial \theta}={\frac{\partial {F}_{\theta}\left(u\right)}{\partial \theta}|}_{\theta ={\theta}_{0}}$

$I{C}_{1}\left(x\right)={\frac{\partial R\left({F}_{\u03f5}\right)}{\partial \u03f5}|}_{\u03f5=0},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{F}_{\u03f5}=F+\u03f5\left({\delta}_{x}-F\right)$ (27)

${\delta}_{x}$ is the degenerate distribution at the point $x$ . The influence function $I{C}_{1}\left(x\right)$ is bounded, which implies the MCVM estimators are robust for version D. We shall assume this property of bounded influence functions holds implicitly; we shall see that it also makes version S robust. Furthermore, based on standard results of robust estimation theory, the representations given by expressions (28) and (31) using influence functions are valid for the statistical functionals being considered. Now since $R\left({F}_{{\theta}_{0}}\right)=0$ ,

$\sqrt{n}R\left({F}_{n}\right)=-\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)=\sqrt{n}\left(R\left({F}_{n}\right)-R\left({F}_{{\theta}_{0}}\right)\right)=\frac{1}{\sqrt{n}}{\displaystyle {\sum}_{i=1}^{n}I{C}_{1}\left({x}_{i}\right)}+{o}_{p}\left(1\right)$ (28)

This is the influence function representation of $R\left({F}_{n}\right)=-{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)$ for version D, and we have $\sqrt{n}\left(\stackrel{^}{\theta}-{\theta}_{0}\right)\stackrel{L}{\to}N\left(0,{D}_{1}\right)$ with ${D}_{1}={\left(A\right)}^{-1}{V}_{1}{\left(A\right)}^{-1}$ for version D, where ${V}_{1}$ is the covariance matrix of $I{C}_{1}\left(x\right)$ and $I{C}_{1}\left(x\right)$ is given by expression (2.15) in Duchesne et al. [8] (p. 407), with

$I{C}_{1}\left(x\right)={\displaystyle {\int}_{-\infty}^{\infty}\left({\delta}_{x}\left(u\right)-{F}_{{\theta}_{0}}\left(u\right)\right)\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial \theta}\text{d}{F}_{{\theta}_{0}}\left(u\right)}$ (29)

$E\left(I{C}_{1}\left(x\right)\right)=0$ , since ${E}_{{\theta}_{0}}\left({\delta}_{x}\left(u\right)\right)={F}_{{\theta}_{0}}\left(u\right)$ .

Replacing $\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial \theta}$ by $\frac{\partial {F}_{\stackrel{^}{\theta}}\left(u\right)}{\partial \theta}$ and ${F}_{{\theta}_{0}}\left(u\right)$ by ${F}_{n}\left(u\right)$ in the above expression leads to approximating the vector $I{C}_{1}\left(x\right)$ by $\stackrel{\u02dc}{I{C}_{1}\left(x\right)}$ with its elements given by

$\stackrel{\u02dc}{I{C}_{1l}\left(x\right)}=\frac{1}{n}{\displaystyle {\sum}_{j=1}^{n}\left(I\left[{x}_{j}\ge x\right]-{F}_{n}\left({x}_{j}\right)\right)\frac{\partial {F}_{\stackrel{^}{\theta}}\left({x}_{j}\right)}{\partial {\theta}_{l}}},l=1,\cdots ,m$

An estimate for the covariance matrix ${V}_{1}$ can be defined as

$\stackrel{\u02dc}{{V}_{1}}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left(\stackrel{\u02dc}{I{C}_{1}\left({x}_{i}\right)}\right){\left(\stackrel{\u02dc}{I{C}_{1}\left({x}_{i}\right)}\right)}^{\prime}}.$ (30)

Using $\stackrel{\u02dc}{{V}_{1}}$ , an estimate for the asymptotic covariance matrix of $\stackrel{^}{\theta}$ can be constructed, see expression (2.15) and expression (2.13) given by Duchesne et al. [8] (pp. 406-407). Clearly, the results for version D as given by Duchesne et al. [8] can be reobtained using this unified approach.
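A minimal sketch of how the approximate influence curves and the estimate $\stackrel{\u02dc}{{V}_{1}}$ of expression (30) could be computed. The function name and the argument `dF` (a user-supplied $n\times m$ matrix whose $\left(j,l\right)$ entry approximates $\frac{\partial {F}_{\stackrel{^}{\theta}}\left({x}_{j}\right)}{\partial {\theta}_{l}}$) are our own and not from the paper:

```python
import numpy as np

def estimate_V1(x, dF):
    """Estimate V_1 (expression (30)) from the approximate influence
    curves IC_tilde.  x: data of length n; dF: (n, m) matrix of
    estimated partial derivatives dF_thetahat(x_j)/dtheta_l."""
    n, m = dF.shape
    # F_n(x_j): proportion of observations <= x_j
    Fn = np.searchsorted(np.sort(x), x, side="right") / n
    # indicator matrix: entry (i, j) = I[x_j >= x_i]
    indic = (x[None, :] >= x[:, None]).astype(float)
    # IC_tilde[i, l] = (1/n) sum_j (I[x_j >= x_i] - F_n(x_j)) dF[j, l]
    IC = (indic - Fn[None, :]) @ dF / n
    # V1_tilde = (1/n) sum_i IC(x_i) IC(x_i)'
    return IC.T @ IC / n
```

By construction the returned matrix is symmetric and positive semi-definite, as a covariance matrix estimate must be.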

Note that the property of asymptotic normality continues to hold even if the parametric model fails to be continuous and is only hybrid, as in the compound Poisson gamma case. Using the arguments of the next paragraph to establish asymptotic normality, the same conclusion can be reached for version S. The derivation of the asymptotic covariance matrix ${D}_{2}$ for the SCVM estimators is similar. We shall make use of the notion of bivariate statistical functional introduced by expression (1.6) of Reid [18] (pp. 80-81). This leads to defining the bivariate statistical functional $B\left({F}_{n},{F}_{{\theta}_{0}}^{S}\right)$ ,

$B\left({F}_{n},{F}_{{\theta}_{0}}^{S}\right)=-{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)={\displaystyle {\int}_{-\infty}^{\infty}\left({F}_{n}\left(u\right)-{F}_{{\theta}_{0}}^{S}\left(u\right)\right)\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial \theta}\text{d}{F}_{n}\left(u\right)}$

We have a representation which is similar to the representation given by expression (28) but using both $I{C}_{1}\left(x\right)$ and $I{C}_{2}\left(y\right)$ with

$I{C}_{2}\left(y\right)={\frac{\partial B\left({F}_{\u03f5},{F}_{\tau}\right)}{\partial \tau}|}_{\u03f5=0,\tau =0}$ , ${F}_{\u03f5}$ is as defined by expression (27) and ${F}_{\tau}$ is similarly defined with ${F}_{\tau}=F+\tau \left({\delta}_{y}-F\right)$ , ${\delta}_{y}$ is the degenerate distribution at $y$ and $0\le \tau \le 1$ . Note that $I{C}_{1}\left(x\right)$ as given by expression (29) can also be reobtained using the bivariate statistical functional with $I{C}_{1}\left(y\right)={\frac{\partial B\left({F}_{\u03f5},{F}_{\tau}\right)}{\partial \u03f5}|}_{\u03f5=0,\tau =0}$ .

Based on the expression defining $B\left({F}_{n},{F}_{{\theta}_{0}}^{S}\right)$ , we have $I{C}_{2}\left(y\right)=-I{C}_{1}\left(y\right)$ and $I{C}_{1}\left(x\right)$ is identical for version D and S. Therefore, for version S, we have the representation

$\begin{array}{l}\sqrt{n}B\left({F}_{n},{F}_{{\theta}_{0}}^{S}\right)\\ =-\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)=\frac{1}{\sqrt{n}}{\displaystyle {\sum}_{i=1}^{n}I{C}_{1}\left({x}_{i}\right)}+\frac{\sqrt{n}}{U}{\displaystyle {\sum}_{i=1}^{U}I{C}_{2}\left({y}_{i}\right)}+{o}_{p}\left(1\right)\end{array}$ . (31)

Note that the size of the random sample drawn from the model distribution is $U=\tau n$ , and the ${y}_{i}$ ’s are iid with the same distribution as the ${x}_{i}$ ’s, but the ${y}_{i}$ ’s are independent of the ${x}_{i}$ ’s since the simulated sample is independent of the original sample represented by the data. Therefore,

$\sqrt{n}{\Gamma}^{\prime}{G}_{n}\left({\theta}_{0}\right)\stackrel{L}{\to}N\left(0,V\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}V=\left(1+\frac{1}{\tau}\right){V}_{1}$ . (32)

It is also clear that the elements of ${\Gamma}^{\prime}\Gamma $ are given by ${a}_{ij}={\displaystyle {\int}_{-\infty}^{\infty}\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial {\theta}_{i}}\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial {\theta}_{j}}\text{d}{F}_{n}\left(u\right)},i=1,\cdots ,m,j=1,\cdots ,m$ which converge in probability to the corresponding elements ${\stackrel{\xaf}{a}}_{ij}$ of the matrix $A$ with

${\stackrel{\xaf}{a}}_{ij}={\displaystyle {\int}_{-\infty}^{\infty}\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial {\theta}_{i}}\frac{\partial {F}_{{\theta}_{0}}\left(u\right)}{\partial {\theta}_{j}}\text{d}{F}_{{\theta}_{0}}\left(u\right)},i=1,\cdots ,m,j=1,\cdots ,m$ , i.e. (33)

$A=\left({\stackrel{\xaf}{a}}_{ij}\right),i=1,\cdots m,j=1,\cdots ,m\text{\hspace{0.05em}}.$

2.3. An Estimate for the Covariance Matrix for SCVM Estimators

The asymptotic covariance matrix of ${\stackrel{^}{\theta}}^{S}$ can be estimated if we can estimate $\Gamma =\Gamma \left({\theta}_{0}\right)$ . Using a result given by Pakes and Pollard [13] (p. 1043), an estimate for $\Gamma $ is the matrix

${\stackrel{^}{\Gamma}}_{n}=\left[\frac{{G}_{n}\left({\stackrel{^}{\theta}}_{G}^{S}+{\u03f5}_{n}{e}_{1}\right)-{G}_{n}\left({\stackrel{^}{\theta}}_{G}^{S}\right)}{{\u03f5}_{n}},\cdots ,\frac{{G}_{n}\left({\stackrel{^}{\theta}}_{G}^{S}+{\u03f5}_{n}{e}_{m}\right)-{G}_{n}\left({\stackrel{^}{\theta}}_{G}^{S}\right)}{{\u03f5}_{n}}\right]$ (34)

${e}_{i}={\left(0,0,\cdots ,1,0,\cdots ,0\right)}^{\prime}$ with 1 occurring at the $i$ th entry of the vector ${e}_{i}$, and ${\u03f5}_{n}={n}^{-\delta}$ with $\delta \le \frac{1}{2}$; in general we can let $\delta =\frac{1}{2}$. Note that the columns of ${\stackrel{^}{\Gamma}}_{n}$ estimate the corresponding columns of $\Gamma \left({\theta}_{0}\right)$, with elements depending on $\frac{\partial {F}_{{\theta}_{0}}\left({x}_{i}\right)}{\partial {\theta}_{j}},i=1,\cdots ,n,j=1,\cdots ,m$ as mentioned in Section 2.2.
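The forward-difference construction of expression (34) can be sketched as follows. `G_n` stands for any user-supplied function mapping $\theta $ to the vector ${G}_{n}\left(\theta \right)$; the function and argument names are our own illustration:

```python
import numpy as np

def jacobian_estimate(G_n, theta_hat, n, delta=0.5):
    """Forward-difference estimate of Gamma (expression (34)):
    column j is (G_n(theta + eps_n e_j) - G_n(theta)) / eps_n,
    with step size eps_n = n^{-delta}, delta <= 1/2."""
    eps_n = n ** (-delta)
    m = len(theta_hat)
    base = np.asarray(G_n(theta_hat), dtype=float)
    cols = []
    for j in range(m):
        e_j = np.zeros(m)
        e_j[j] = 1.0                      # unit vector e_j
        shifted = np.asarray(G_n(theta_hat + eps_n * e_j), dtype=float)
        cols.append((shifted - base) / eps_n)
    return np.column_stack(cols)          # columns estimate those of Gamma
```

For a linear map ${G}_{n}\left(\theta \right)=B\theta $ the forward difference recovers $B$ exactly, which gives a quick sanity check on the implementation.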

Therefore, using the results of Corollary 1 we have the asymptotic distribution for version S

$\sqrt{n}\left({\stackrel{^}{\theta}}^{S}-{\theta}_{0}\right)\stackrel{L}{\to}N\left(0,{D}_{2}\right)$ with ${D}_{2}=\left(1+\frac{1}{\tau}\right){\left(A\right)}^{-1}{V}_{1}{\left(A\right)}^{-1}$ . (35)

The factor $1+\frac{1}{\tau}$ represents the loss of overall efficiency due to simulations and can be controlled if we let $\tau \ge 10$ . This factor is identical to the one for the simulated unweighted minimum chi-square method or the simulated quasi-likelihood method; see Pakes and Pollard [13] (p. 1049) and Smith [10] (p. S69). It suffices to estimate ${V}_{1}$; then we have an estimate for the asymptotic covariance matrix of the SCVM estimators, as $A$ can clearly be estimated by ${{\stackrel{^}{\Gamma}}^{\prime}}_{n}{\stackrel{^}{\Gamma}}_{n}$ .

Define $\stackrel{^}{I{C}_{1}}\left(x\right)$ with its elements given by

$\stackrel{^}{I{C}_{1l}}\left(x\right)=\frac{1}{n}{\displaystyle {\sum}_{j=1}^{n}\left(I\left[{x}_{j}\ge x\right]-{F}_{n}\left({x}_{j}\right)\right)\left(-\stackrel{^}{{b}_{jl}}\right)},l=1,\cdots ,m,$

$\stackrel{^}{{b}_{jl}}=-\stackrel{^}{\frac{\partial {F}_{{\theta}_{0}}\left({x}_{j}\right)}{\partial {\theta}_{l}}},j=1,\cdots ,n,l=1,\cdots ,m$ are as given by expression (20).

An estimate for ${V}_{1}$ for version S can then be defined as

$\stackrel{^}{{V}_{1}}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left(\stackrel{^}{I{C}_{1}}\left({x}_{i}\right)\right){\left(\stackrel{^}{I{C}_{1}}\left({x}_{i}\right)\right)}^{\prime}}$ . (36)

Consequently, an estimate $\stackrel{^}{{D}_{2}}$ for ${D}_{2}$ can be defined as

$\stackrel{^}{{D}_{2}}=\left(1+\frac{1}{\tau}\right){\left({{\stackrel{^}{\Gamma}}^{\prime}}_{n}{\stackrel{^}{\Gamma}}_{n}\right)}^{-1}\stackrel{^}{{V}_{1}}{\left({{\stackrel{^}{\Gamma}}^{\prime}}_{n}{\stackrel{^}{\Gamma}}_{n}\right)}^{-1}$ (37)

Clearly with $\stackrel{^}{{D}_{2}}$ available, it will facilitate hypothesis testing for the parameters of the model.
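Assembling expression (37) is then a few lines of matrix algebra; the sketch below (names are our own) takes the Jacobian estimate ${\stackrel{^}{\Gamma}}_{n}$ and the estimate $\stackrel{^}{{V}_{1}}$ as inputs:

```python
import numpy as np

def estimate_D2(Gamma_hat, V1_hat, tau):
    """Covariance estimate of expression (37):
    D2_hat = (1 + 1/tau) (Gamma' Gamma)^{-1} V1_hat (Gamma' Gamma)^{-1}."""
    A_hat = Gamma_hat.T @ Gamma_hat        # estimates the matrix A
    A_inv = np.linalg.inv(A_hat)
    return (1.0 + 1.0 / tau) * A_inv @ V1_hat @ A_inv
```

As a check, with ${\stackrel{^}{\Gamma}}_{n}=I$, $\stackrel{^}{{V}_{1}}=I$ and $\tau =10$ the estimate reduces to $1.1\,I$, reflecting the $1+\frac{1}{\tau}$ efficiency loss discussed above.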

3. Numerical Study

3.1. MM Estimation for the Compound Poisson Gamma Model

The MM method consists of matching the empirical cumulants with its model counterpart to form estimating equations and solutions will give the moment estimators. For the compound gamma model of example 1 this leads to the system of equations given by

${c}_{1n}=\stackrel{\xaf}{X}=\lambda \alpha \beta ,\text{\hspace{0.17em}}{s}^{2}=\lambda \alpha {\beta}^{2}\left(\alpha +1\right),\text{\hspace{0.17em}}{c}_{3n}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({X}_{i}-\stackrel{\xaf}{X}\right)}^{3}}=\lambda \alpha {\beta}^{3}\left(\alpha +1\right)\left(\alpha +2\right)$ .

The sample mean and variance are given respectively by $\stackrel{\xaf}{X}$ and ${s}^{2}$, and the moment estimators can be obtained explicitly. From these equations, let ${r}_{3n}=\frac{{c}_{3n}}{{s}^{2}}=\beta \left(\alpha +2\right)$ and ${r}_{2n}=\frac{{s}^{2}}{\stackrel{\xaf}{X}}=\beta \left(\alpha +1\right)$, which implies $\frac{{r}_{3n}}{{r}_{2n}}=\frac{\alpha +2}{\alpha +1}$; from this last equation we can solve for $\alpha $, which gives $\stackrel{^}{{\alpha}_{M}}=\frac{2{r}_{2n}-{r}_{3n}}{{r}_{3n}-{r}_{2n}}$, the MM estimator for $\alpha $. Since the parameter $\alpha \ge 1$, we might want to define the moment estimator as $\stackrel{\u02dc}{{\alpha}_{M}}=\mathrm{max}\left(\stackrel{^}{{\alpha}_{M}},1\right)$. It is not difficult to obtain $\stackrel{^}{{\beta}_{M}}=\frac{{r}_{2n}}{\stackrel{^}{{\alpha}_{M}}+1}$ and $\stackrel{^}{{\lambda}_{M}}=\frac{\stackrel{\xaf}{X}}{\stackrel{^}{{\alpha}_{M}}\stackrel{^}{{\beta}_{M}}}$, the corresponding MM estimators for $\beta $ and $\lambda $; when we also consider the nonnegativity constraints imposed on $\beta $ and $\lambda $, this leads to defining

$\stackrel{\u02dc}{{\beta}_{M}}=\mathrm{max}\left(\stackrel{^}{{\beta}_{M}},0\right)$ and $\stackrel{\u02dc}{{\lambda}_{M}}=\mathrm{max}\left(\stackrel{^}{{\lambda}_{M}},0\right)$.
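The closed-form MM estimators above translate directly into code. This sketch (the function name is our own) computes the unconstrained estimators $\left(\stackrel{^}{{\lambda}_{M}},\stackrel{^}{{\alpha}_{M}},\stackrel{^}{{\beta}_{M}}\right)$ from the first three sample cumulants; the constrained versions are then obtained by applying $\mathrm{max}\left(\cdot ,\cdot \right)$ as in the text:

```python
import numpy as np

def mm_compound_poisson_gamma(x):
    """Method-of-moments estimators (lambda, alpha, beta) for the
    compound Poisson gamma model, from the first three sample cumulants
    c1 = lambda*alpha*beta, c2 = lambda*alpha*beta^2*(alpha+1),
    c3 = lambda*alpha*beta^3*(alpha+1)*(alpha+2)  (Section 3.1)."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    s2 = ((x - xbar) ** 2).mean()
    c3 = ((x - xbar) ** 3).mean()
    r2 = s2 / xbar                      # = beta * (alpha + 1)
    r3 = c3 / s2                        # = beta * (alpha + 2)
    alpha = (2.0 * r2 - r3) / (r3 - r2)
    beta = r2 / (alpha + 1.0)
    lam = xbar / (alpha * beta)
    return lam, alpha, beta
```

On a large sample drawn from the model with known parameters, the returned values should lie close to the true $\left(\lambda ,\alpha ,\beta \right)$, although the third cumulant makes the estimators fairly noisy, which is part of what motivates the SCVM comparison.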

3.2. MM Estimation for the KWT Model

For the KWT model there are five parameters, so besides the first three empirical cumulants defined above we also need the fourth and fifth empirical cumulants, with

${c}_{4n}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({X}_{i}-\stackrel{\xaf}{X}\right)}^{4}}-3{s}^{4}$ , ${c}_{5n}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({X}_{i}-\stackrel{\xaf}{X}\right)}^{5}}-10{c}_{3n}{s}^{2}$ and matching ${c}_{1n}={c}_{1},{c}_{2n}={c}_{2},{c}_{3n}={c}_{3},{c}_{4n}={c}_{4},{c}_{5n}={c}_{5}$ will give the moment estimators as in the previous example. It might be easier to let $\delta ={\eta}^{2}$ and from these estimating equations, it is not difficult to see that the following two equations $\frac{{c}_{3n}}{{c}_{4n}}=\frac{{c}_{3}}{{c}_{4}}$ and $\frac{{c}_{5n}}{{c}_{4n}}=\frac{{c}_{5}}{{c}_{4}}$ depend only on $\delta $ and $\omega $ and can be solved numerically to obtain the MM estimators for $\delta $ and $\omega $ which are given respectively by $\stackrel{^}{{\delta}_{M}}$ and ${\stackrel{^}{\omega}}_{M}$ . Also, using the first three equations we obtain

$\stackrel{^}{{\lambda}_{M}}=\frac{{c}_{3n}}{{\stackrel{^}{\omega}}_{M}^{3}+6\stackrel{^}{{\delta}_{M}}{\stackrel{^}{\omega}}_{M}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{\sigma}}_{M}^{2}={c}_{2n}-\stackrel{^}{{\lambda}_{M}}\left(2\stackrel{^}{{\delta}_{M}}+{\stackrel{^}{\omega}}_{M}^{2}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{^}{{\mu}_{M}}={c}_{1n}-\stackrel{^}{{\lambda}_{M}}{\stackrel{^}{\omega}}_{M}$ .

We might want to redefine these MM estimators by imposing $\stackrel{^}{{\lambda}_{M}}\ge 0,{\stackrel{^}{\sigma}}_{M}^{2}\ge 0$ .

In the limited simulation study, we draw $M=100$ samples, each of size $n=1000$, and use $U=10000,\tau =10$ .

For the overall asymptotic relative efficiency (ARE) for the compound gamma model we use

$ARE=\frac{MSE\left({\stackrel{^}{\lambda}}^{S}\right)+MSE\left({\stackrel{^}{\alpha}}^{S}\right)+MSE\left({\stackrel{^}{\beta}}^{S}\right)}{MSE\left(\stackrel{\u02dc}{\lambda}\right)+MSE\left(\stackrel{\u02dc}{\alpha}\right)+MSE\left(\stackrel{\u02dc}{\beta}\right)}$ , the mean square errors (MSE) are estimated using random samples and displayed in Table 1. The mean square error of an estimator $\stackrel{^}{\pi}$ for ${\pi}_{0}$ is defined as

$MSE\left(\stackrel{^}{\pi}\right)=E{\left(\stackrel{^}{\pi}-{\pi}_{0}\right)}^{2}$ .
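The Monte Carlo versions of these quantities are straightforward; the helper names below are our own illustration. `mse` averages squared errors over the $M$ replications, and `overall_are` forms the ratio of summed mean square errors used throughout the tables:

```python
import numpy as np

def mse(estimates, true_value):
    """Monte Carlo estimate of MSE(pi_hat) = E(pi_hat - pi_0)^2,
    averaging over the M replications."""
    estimates = np.asarray(estimates, dtype=float)
    return float(np.mean((estimates - true_value) ** 2))

def overall_are(mse_scvm, mse_mm):
    """Overall ARE: sum of the SCVM mean square errors divided by
    the sum of the MM mean square errors."""
    return sum(mse_scvm) / sum(mse_mm)
```

An ARE below 1 therefore favors the SCVM estimators, which is the direction reported in Tables 1 and 2.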

The range of the parameters being considered is given by

$2\le \alpha \le 10,1\le \lambda \le 10,1\le \beta \le 10$ .

We find that the SCVM method is more efficient than the MM method; the order of ARE gained by using the SCVM method is illustrated with the results displayed in Table 1. We also tested various parameter values outside this range and obtained similar findings.

Table 1. Compound Poisson gamma model with $\beta =10$: asymptotic overall relative efficiency between SCVM estimators and MM estimators. The overall efficiency used for comparisons is $ARE=\frac{MSE\left({\stackrel{^}{\lambda}}^{S}\right)+MSE\left({\stackrel{^}{\alpha}}^{S}\right)+MSE\left({\stackrel{^}{\beta}}^{S}\right)}{MSE\left(\stackrel{\u02dc}{\lambda}\right)+MSE\left(\stackrel{\u02dc}{\alpha}\right)+MSE\left(\stackrel{\u02dc}{\beta}\right)}.$

Table 2. Model KWT ( $\mu =0.001,\sigma =0.001,\eta =0.02$ ): asymptotic overall relative efficiency between SCVM estimators and MM estimators. The overall efficiency used for comparisons is $ARE=\frac{MSE\left({\stackrel{^}{\mu}}^{S}\right)+MSE\left({\stackrel{^}{\sigma}}^{S}\right)+MSE\left({\stackrel{^}{\lambda}}^{S}\right)+MSE\left({\stackrel{^}{\omega}}^{S}\right)+MSE\left({\stackrel{^}{\eta}}^{S}\right)}{MSE\left(\stackrel{\u02dc}{\mu}\right)+MSE\left(\stackrel{\u02dc}{\sigma}\right)+MSE\left(\stackrel{\u02dc}{\lambda}\right)+MSE\left(\stackrel{\u02dc}{\omega}\right)+MSE\left(\stackrel{\u02dc}{\eta}\right)}.$

For the KWT model we use the corresponding asymptotic relative efficiency (ARE) and it is defined as

$ARE=\frac{MSE\left({\stackrel{^}{\mu}}^{S}\right)+MSE\left({\stackrel{^}{\sigma}}^{S}\right)+MSE\left({\stackrel{^}{\lambda}}^{S}\right)+MSE\left({\stackrel{^}{\omega}}^{S}\right)+MSE\left({\stackrel{^}{\eta}}^{S}\right)}{MSE\left(\stackrel{\u02dc}{\mu}\right)+MSE\left(\stackrel{\u02dc}{\sigma}\right)+MSE\left(\stackrel{\u02dc}{\lambda}\right)+MSE\left(\stackrel{\u02dc}{\omega}\right)+MSE\left(\stackrel{\u02dc}{\eta}\right)}$

The mean square errors (MSE) are similarly defined as in the case of the compound gamma model and again estimated using simulated samples. The ARE is a ratio with the total of mean square errors for the SCVM estimators appearing in the numerator and the total of mean square errors of MM estimators appearing in the denominator.

The key findings are illustrated using Table 2, and again the SCVM method seems to perform much better than the MM method for the common range of parameters used for modeling daily returns of stocks, with $0\le \lambda \le 0.010$, $0.005\le \omega \le 0.010$, $0\le \mu \le 0.001$ and $0\le \sigma \le 0.008$. The results displayed in Table 2 give an idea of the order of the overall efficiency gained by using the SCVM method; overall, the SCVM method is at least 100 times better than the MM method for the range of parameters being considered. Clearly, more numerical studies are needed, but we do not have the computer resources to conduct a larger scale study, being in a small department equipped with only laptop personal computers. Despite the limited nature of the study, it does point to better efficiency when using SCVM methods for models having at least three parameters, in general.

4. Conclusion

It appears that the SCVM method has the potential to generate more efficient estimators than the MM method, especially for models with more than two parameters. Like the SMHD method, it is robust, and it is easier to implement than the SMHD method as it is based on the sample distribution function instead of density estimates. It can handle continuous models with a few discontinuity points carrying probability masses, where the SMHD method might not be suitable, but it might be less efficient than the SMHD method for continuous models in general.

Acknowledgements

The helpful comments of an anonymous referee and the support of the staff of OJS, which led to an improvement of the presentation of the paper, are gratefully acknowledged.

Appendix

In this technical appendix, we shall prove that under the conditions of Assumption 1, condition 4 of Theorem 2 holds, i.e.,

${\mathrm{sup}}_{\Vert \theta -{\theta}_{0}\Vert \le {\delta}_{n}}\sqrt{n}\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert ={o}_{p}\left(1\right),$ i.e.,

$\sqrt{n}\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert \stackrel{p}{\to}0$ uniformly as $\theta \to {\theta}_{0}$ and $n\to \infty $

Recalling the sequence of functions ${g}_{n}\left(\theta \right)=n{\Vert {G}_{n}\left(\theta \right)-G\left(\theta \right)-{G}_{n}\left({\theta}_{0}\right)\Vert}^{2}$ defined earlier, it suffices to show

${g}_{n}\left(\theta \right)\stackrel{p}{\to}0$ uniformly as $\theta \to {\theta}_{0}$ and $n\to \infty $ .

Using Markov’s inequality, for any $\u03f5>0$ , we have

$P\left({g}_{n}\left(\theta \right)\ge \u03f5\right)\le \frac{{g}_{n}^{e}\left(\theta \right)}{\u03f5}$ with ${g}_{n}^{e}\left(\theta \right)=E\left({g}_{n}\left(\theta \right)\right)$ as given by expression (26).

Consequently, it suffices to have ${g}_{n}^{e}\left(\theta \right)\to 0$ uniformly as $\theta \to {\theta}_{0}$ and $n\to \infty $ . Clearly, under Assumption 1 we have ${g}_{n}^{e}\left(\theta \right)\to 0$ pointwise, but we need to strengthen this to uniform convergence for $\left\{{g}_{n}^{e}\left(\theta \right)\right\}$ . It therefore suffices to have equicontinuity for the sequence $\left\{{g}_{n}^{e}\left(\theta \right)\right\}$, as the domain of the sequence of functions is compact; see Rudin [19] (1976, p. 168). A sufficient condition for this property is the Lipschitz property, which is related to the differentiability of the sequence of functions; see Davidson and Donsig [14] (2009, p. 88). Since the parameter space is compact, if the sequence $\left\{{g}_{n}^{e}\left(\theta \right)\right\}$ is differentiable, hence Lipschitz, then together with Assumption 1 these properties imply equicontinuity for the sequence of functions $\left\{{g}_{n}^{e}\left(\theta \right)\right\}$ .

For the notion of stochastic equicontinuity, a stochastic version of equicontinuity which extends the classical notion for deterministic functions in real analysis, see Newey and McFadden [20] (1994, pp. 2136-2138); see also Section 7 of Newey and McFadden [20] (1994, pp. 2184-2193) on asymptotic normality with non-smooth objective functions.

Cite this paper

*Open Journal of Statistics*,

**7**, 815-833. doi: 10.4236/ojs.2017.75058.

[1] | Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss Models: From Data to Decisions. Wiley, New York. |

[2] |
Luong, A. (2016) Cramér-Von Mises Distance Estimation for Some Positive Infinitely Divisible Parametric Families with Actuarial Applications. Scandinavian Actuarial Journal, 2016, 530-549. https://doi.org/10.1080/03461238.2014.977817 |

[3] |
Merton, R.C. (1976) Option Pricing When the Underlying Stock Returns Are Discontinuous. Journal of Financial Economics, 3, 125-144.
https://doi.org/10.1016/0304-405X(76)90022-2 |

[4] |
Kou, S.G. (2002) A Jump Diffusion Model for Option Pricing. Management Science, 48, 1086-1101. https://doi.org/10.1287/mnsc.48.8.1086.166 |

[5] |
Kou, S.G. and Wang, H. (2004) Option Pricing under a Double Exponential Jump Diffusion Model. Management Science, 50, 1178-1192.
https://doi.org/10.1287/mnsc.1030.0163 |

[6] | Tsay, R.S. (2016) The Analysis of Financial Time Series. 2nd Edition, Wiley, New York. |

[7] | Feuerverger, A. and McDunnough, P. (1981) On the Efficiency of Empirical Characteristic Function Procedures. Journal of the Royal Statistical Society, Series B, 43, 20-27. |

[8] |
Duchesne, T., Rioux, J. and Luong, A. (1997) Minimum Cramer-Von Mises Methods for Complete and Grouped Data. Communications in Statistics, Theory and Methods, 26, 401-420. https://doi.org/10.1080/03610929708831923 |

[9] |
Garcia, R., Renault, E. and Veredas, D. (2011) Estimation of Stable Distributions by Indirect Inference. Journal of Econometrics, 161, 325-337.
https://doi.org/10.1016/j.jeconom.2010.12.007 |

[10] |
Smith Jr., A.A. (1993) Estimating Nonlinear Time Series Models Using Simulated Vector Autoregressions. Journal of Applied Econometrics, 8, S63-S84.
https://doi.org/10.1002/jae.3950080506 |

[11] |
Hogg, R.V. and Klugman, S.A. (1984) Loss Distributions. Wiley, New York.
https://doi.org/10.1002/9780470316634 |

[12] |
Luong, A. and Bilodeau, C. (2017) Simulated Minimum Hellinger Distance for Some Continuous Financial and Actuarial Models. Open Journal of Statistics, 7, 743-759. https://doi.org/10.4236/ojs.2017.74052 |

[13] |
Pakes, A. and Pollard, D. (1989) Simulation and the Asymptotics of Optimization Estimators. Econometrica, 57, 1027-1057. https://doi.org/10.2307/1913622 |

[14] | Davidson, K.R. and Donsig, A.P. (2009) Real Analysis and Applications: Theory and Practice. Springer, New York. |

[15] | Luenberger, D.G. (1968) Optimization by Vector Space Methods. Wiley, New York. |

[16] | Davidson, R. and MacKinnon, J.G. (2004) Econometric Theory and Methods. Oxford University Press, New York. |

[17] | Chong, E.K.P. and Zak, S.H. (2013) An Introduction to Optimization. 4th Edition, Wiley, New York. |

[18] |
Reid, N. (1981) Influence Functions for Censored Data. Annals of Statistics, 9, 78-92. https://doi.org/10.1214/aos/1176345334 |

[19] | Rudin, W. (1976) Principles of Mathematical Analysis. McGraw-Hill, New York. |

[20] | Newey, W.K. and McFadden, D. (1994) Large Sample Estimation and Hypothesis Testing. In: Engle, R.F. and McFadden, D., Eds., Handbook of Econometrics, Vol. 4, North Holland, Amsterdam. |

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.