A Special Weight for the Inverse Gaussian Mixing Distribution in a Normal Variance-Mean Mixture with Application

The Normal Variance-Mean Mixture (NVMM) provides a general framework for deriving models with desirable properties for modelling financial market variables such as exchange rates, equity prices, and interest rates measured over short time intervals, e.g. daily or weekly. Such data sets are characterized by non-normality: they are usually skewed, fat-tailed and exhibit excess kurtosis. The Generalised Hyperbolic Distribution (GHD) introduced by Barndorff-Nielsen (1977), which arises as a Normal variance-mean mixture with a Generalised Inverse Gaussian (GIG) mixing distribution, nests a number of special and limiting case distributions. The Normal Inverse Gaussian (NIG) distribution is obtained when the Inverse Gaussian is the mixing distribution, i.e., when the index parameter of the GIG is $-\frac{1}{2}$. The NIG is very popular because of its analytical tractability. In the mixing mechanism, the mixing distribution characterizes the prior information on the random variable of the conditional distribution. Therefore, considering finite mixture models is one way of extending the work. The GIG is a three-parameter distribution denoted by $GIG(\lambda,\delta,\gamma)$ and nests several special and limiting cases. When $\lambda=-\frac{1}{2}$ we obtain the Inverse Gaussian (IG) distribution. In this work, we consider a finite mixture of $GIG\left(-\frac{1}{2},\delta,\gamma\right)$ and $GIG\left(-\frac{3}{2},\delta,\gamma\right)$, show that the mixture is again a weighted Inverse Gaussian distribution, and use it to construct an NVMM. Due to the complexity of the likelihood, direct maximization is difficult. An EM-type algorithm is provided for the maximum likelihood estimation of the parameters of the proposed model. We adopt an iterative scheme which is not based on explicit solutions to the normal equations. This subtle approach reduces the computational difficulty: instead of solving the complicated quantities directly, we design an iterative scheme based on a representation of the normal equations. The algorithm is easily programmable and we obtained monotonic convergence for the data sets used.


Introduction
The Normal Inverse Gaussian (NIG) distribution introduced by Barndorff-Nielsen [1] is a popular normal variance-mean mixture, with an inverse Gaussian mixing distribution, and has been used to handle non-normally distributed data. Efforts have been made, and are still ongoing, to identify alternatives to the NIG distribution: Aas and Haff [2] [3] considered the skew Student's t distribution as an alternative, and Corlu and Corlu [4] used the Generalized Lambda distribution, a generalization of Tukey's Lambda.
Finite mixtures are more flexible (robust) than single distributions, a result that has been stated widely in the statistical literature.
This work makes such an attempt. The first motivation is that finite mixtures are more flexible than single distributions: Nadarajah, Zhang and Chan [5] note that finite mixtures of normal distributions are more flexible than a single normal distribution, finite mixtures of stable distributions are more flexible than a single stable distribution, and finite mixtures of Student's t distributions are more flexible than a single Student's t distribution.
The second motivation is that very few studies on continuous mixtures have used finite mixtures as mixing distributions, with the exception of the Lindley [6] distribution and its generalizations in Poisson mixtures (e.g., Sankaran [7]; Mahmoudi and Zakerzadeh [8]). The Lindley distribution and its generalizations are essentially finite gamma mixtures.
The GIG is a three-parameter distribution denoted by $GIG(\lambda,\delta,\gamma)$. In this work we consider the special cases with indices $\lambda=-\frac{1}{2}$ and $\lambda=-\frac{3}{2}$; their finite mixture is itself a weighted Inverse Gaussian distribution, which we use as the mixing distribution.

Normal Variance-Mean Mixture
A stochastic representation of a Normal Variance-Mean mixture is given by
$$X = \mu + \beta Z + \sqrt{Z}\,W, \qquad W \sim N(0,1),\; Z \sim g,$$
so that the unconditional density is
$$f(x) = \int_0^{\infty} N\!\left(x \mid \mu + \beta z,\, z\right) g(z)\,\mathrm{d}z,$$
with $N(x \mid \mu+\beta z, z)$ being the conditional pdf and $g(z)$ the mixing distribution.
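This representation also gives a direct simulation recipe: draw $Z$ from the mixing distribution, then draw $X$ conditionally normal. A minimal sketch in Python for the IG mixing case (i.e., the NIG), assuming the $IG(\delta,\gamma)$ parameterization used here, which has mean $\delta/\gamma$ and shape $\delta^{2}$; all names are illustrative:

```python
import numpy as np

def rnvmm_ig(n, mu, beta, delta, gamma, seed=0):
    """Simulate X = mu + beta*Z + sqrt(Z)*W with Z ~ IG(delta, gamma)
    and W ~ N(0, 1); with IG mixing this is the NIG distribution."""
    rng = np.random.default_rng(seed)
    # numpy's Wald sampler is parameterized by (mean, scale);
    # IG(delta, gamma) corresponds to mean delta/gamma, scale delta**2.
    z = rng.wald(delta / gamma, delta**2, size=n)
    w = rng.standard_normal(n)
    return mu + beta * z + np.sqrt(z) * w

x = rnvmm_ig(10_000, mu=0.0, beta=-0.5, delta=1.0, gamma=2.0)
print(x.mean(), x.var())  # compare with E[X] = mu + beta*delta/gamma
```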

Construction of the Mixed Model
Suppose the mixing distribution follows formula (3). Noting the integral identity below, the mixed model is obtained by integrating the conditional normal density against this mixing distribution. Here $K_\lambda(\omega)$ denotes the modified Bessel function of the third kind with index $\lambda$ evaluated at $\omega$.
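The Bessel factor arises from the standard identity (see Abramowitz and Stegun [12]):
$$\int_0^{\infty} z^{\lambda-1}\exp\left\{-\frac{1}{2}\left(\frac{\delta^{2}}{z}+\gamma^{2}z\right)\right\}\mathrm{d}z \;=\; 2\left(\frac{\delta}{\gamma}\right)^{\lambda} K_{\lambda}(\delta\gamma).$$
Applied with $\delta^{2}$ replaced by $\delta^{2}+(x-\mu)^{2}$ and $\gamma^{2}$ by $\beta^{2}+\gamma^{2}$, it produces Bessel terms of the form $K_{\lambda-\frac{1}{2}}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)$ with $\alpha^{2}=\beta^{2}+\gamma^{2}$ in the mixed density.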

with the following properties:
a) $K_{\lambda}(\omega) = K_{-\lambda}(\omega)$;
b) $K_{\lambda+1}(\omega) = \frac{2\lambda}{\omega}K_{\lambda}(\omega) + K_{\lambda-1}(\omega)$;
c) $K_{\frac{1}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}$;
d) $K_{\frac{3}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\frac{1}{\omega}\right)$;
e) $K_{\frac{5}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\frac{3}{\omega}+\frac{3}{\omega^{2}}\right)$;
f) $K_{\frac{7}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\frac{6}{\omega}+\frac{15}{\omega^{2}}+\frac{15}{\omega^{3}}\right)$;
g) $K_{\frac{9}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\frac{10}{\omega}+\frac{45}{\omega^{2}}+\frac{105}{\omega^{3}}+\frac{105}{\omega^{4}}\right)$;
h) $K_{\frac{11}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\frac{15}{\omega}+\frac{105}{\omega^{2}}+\frac{420}{\omega^{3}}+\frac{945}{\omega^{4}}+\frac{945}{\omega^{5}}\right)$;
which are necessary in deriving the properties and estimates of the proposed models. For more definitions and properties see Abramowitz and Stegun [12].
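These closed forms are easy to sanity-check numerically; a small sketch using scipy.special.kv, SciPy's modified Bessel function of the third kind:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind

omega = 2.5
half_integer = np.sqrt(np.pi / (2 * omega)) * np.exp(-omega) * (
    1 + 15/omega + 105/omega**2 + 420/omega**3 + 945/omega**4 + 945/omega**5
)
print(np.isclose(kv(5.5, omega), half_integer))     # property h)
print(np.isclose(kv(-0.5, omega), kv(0.5, omega)))  # property a)
# property b) with lambda = 1/2: K_{3/2} = (1/omega) K_{1/2} + K_{-1/2}
print(np.isclose(kv(1.5, omega), kv(0.5, omega) / omega + kv(-0.5, omega)))
```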

Parameter Estimation
Given a random sample $x_1,\ldots,x_n$ of size $n$ from our proposed model, the log-likelihood function is
$$\ell(\Theta) = \sum_{i=1}^{n}\log f(x_i;\Theta).$$
The derivatives of the log-likelihood function involve the Bessel function, and the parameters are difficult to separate, so direct maximization is not easy. In this work we adopt the Expectation-Maximization (EM) algorithm. The EM algorithm, introduced by Dempster et al. [13], is a powerful technique for maximum likelihood estimation with data containing missing values, or data that can be considered as containing missing values.
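To make the Bessel dependence concrete, here is the log-likelihood of the NIG special case, whose closed-form density is standard; the proposed mixture adds a weight term but has the same structure. A sketch using the $(\mu,\beta,\delta,\gamma)$ parameterization with $\alpha^{2}=\beta^{2}+\gamma^{2}$; the function name is illustrative:

```python
import numpy as np
from scipy.special import kv

def nig_loglik(x, mu, beta, delta, gamma):
    """Log-likelihood of an NIG(mu, beta, delta, gamma) sample,
    using the standard closed-form NIG density."""
    x = np.asarray(x, dtype=float)
    alpha = np.hypot(beta, gamma)      # sqrt(beta^2 + gamma^2)
    phi = np.hypot(delta, x - mu)      # sqrt(delta^2 + (x - mu)^2)
    logpdf = (np.log(alpha * delta / np.pi) + delta * gamma
              + beta * (x - mu) + np.log(kv(1.0, alpha * phi)) - np.log(phi))
    return logpdf.sum()
```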
Karlis [14] considered the mixing operation responsible for producing the missing values. He applied the algorithm to estimate the parameters of the Normal Inverse Gaussian distribution, with an iterative scheme based on explicit solutions to the normal equations. In our present work the iterative scheme is instead based on a representation of the normal equations. This subtle approach overcomes the computational difficulty of solving the complicated quantities numerically.
Assume that the complete data consist of an observed part $X$ and an unobserved part $Z$. The log-likelihood of the complete data then factorizes as
$$\ell_c(\Theta) = \sum_{i=1}^{n}\log f(x_i \mid z_i;\mu,\beta) + \sum_{i=1}^{n}\log g(z_i;\delta,\gamma),$$
so that the conditional (normal) part and the mixing part can be maximized separately.

M-Step for Conditional Distribution
Since the conditional distribution for the models considered is the normal distribution presented in formula (6), maximizing the expected complete-data log-likelihood with respect to $\mu$ and $\beta$ gives closed-form updates. Maximizing with respect to $\delta$ and $\gamma$, we do not solve the normal equations explicitly; instead we rearrange them into a fixed-point representation, substitute the $k$-th iterates on the right-hand side, and read off the $(k+1)$-th iterates. The log-likelihood at the $(k+1)$-th iteration is then evaluated to monitor convergence.
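For concreteness, a minimal sketch of this EM scheme for the NIG special case, following Karlis [14]; the proposed mixture model replaces the explicit $\delta,\gamma$ solutions below with the fixed-point representation described above. The posterior of $Z_i$ given $x_i$ is GIG, so the E-step expectations $s_i=E[Z_i\mid x_i]$ and $w_i=E[Z_i^{-1}\mid x_i]$ are Bessel-function ratios. Names are illustrative:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind

def nig_em(x, mu, beta, delta, gamma, max_iter=500, tol=1e-3):
    """EM for the NIG special case, following Karlis [14]. Sketch only."""
    x = np.asarray(x, dtype=float)
    n = x.size
    for _ in range(max_iter):
        # E-step: Z_i | x_i ~ GIG(-1, chi_i, psi) with
        #   chi_i = delta^2 + (x_i - mu)^2,  psi = beta^2 + gamma^2.
        chi = delta**2 + (x - mu)**2
        psi = beta**2 + gamma**2
        omega = np.sqrt(chi * psi)
        ratio = kv(0.0, omega) / kv(1.0, omega)      # K_0/K_1 (K_{-1} = K_1)
        s = np.sqrt(chi / psi) * ratio               # s_i = E[Z_i | x_i]
        w = np.sqrt(psi / chi) * ratio + 2.0 / chi   # w_i = E[1/Z_i | x_i]

        # M-step, conditional part: (mu, beta) solve the normal equations
        #   mu*sum(w) + beta*n = sum(w*x),  mu*n + beta*sum(s) = sum(x).
        A = np.array([[w.sum(), n], [n, s.sum()]])
        mu, beta = np.linalg.solve(A, np.array([(w * x).sum(), x.sum()]))

        # M-step, IG mixing part (explicit solution in the NIG case).
        s_bar = s.mean()
        delta_new = np.sqrt(n / (w.sum() - n / s_bar))
        gamma_new = delta_new / s_bar
        converged = max(abs(delta_new - delta), abs(gamma_new - gamma)) < tol
        delta, gamma = delta_new, gamma_new
        if converged:
            break
    return mu, beta, delta, gamma
```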

Application
Let $P_t$ denote the price process of a security at time $t$, in particular of a stock.
In order to allow comparison of investments in different securities, we investigate the rates of return defined by
$$X_t = \log P_t - \log P_{t-1}.$$
In this section we consider three data sets for the analysis: Range Resource Corporation (RRC), shares of Chevron Corporation (CVX) and the S&P 500 index. The period 3/01/2000 to 1/07/2013, with 702 observations for each data set, is considered. The histograms of the weekly log-returns in Figures 1-3 for the S&P 500, RRC and CVX show that the data are negatively skewed and exhibit heavy tails.
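A short sketch of how such log-returns and the descriptive statistics of Table 1 can be computed; the file name "prices.csv" and its column names are hypothetical placeholders for the weekly closing-price series:

```python
import numpy as np
import pandas as pd
from scipy import stats

# "prices.csv" is a hypothetical file of weekly closing prices with
# columns RRC, CVX, SP500 covering 3/01/2000 - 1/07/2013.
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)
returns = np.log(prices).diff().dropna()    # X_t = log P_t - log P_{t-1}

summary = pd.DataFrame({
    "mean": returns.mean(),
    "std": returns.std(),
    "skewness": returns.apply(stats.skew),
    "excess kurtosis": returns.apply(stats.kurtosis),  # 0 under normality
})
print(summary)
```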
The Q-Q plot shows that the normal distribution is not a good fit for the data, especially at the tails. This is typical of the other data sets as well. Table 1 provides descriptive statistics for the return series in consideration. We observe excess kurtosis, which indicates the leptokurtic behaviour of the returns: the log-returns have distributions with relatively heavier tails than the normal distribution. We also observe skewness in the data sets, which indicates that the two tails of the returns behave slightly differently. We use the formulation of Karlis [14] to obtain the initial values for the EM algorithm. For these data sets, the values obtained are given in Table 2.
The stopping criterion is
$$\left|\frac{\ell^{(k+1)}-\ell^{(k)}}{\ell^{(k)}}\right| < \text{tol},$$
where tol is the tolerance level. Table 3 shows the NIG maximum likelihood parameter estimates using the EM algorithm at $\text{tol} = 10^{-3}$. We then obtain the maximum likelihood parameter estimates for the proposed model via the EM algorithm. Tables 3-5 illustrate the monotonic convergence of the log-likelihood. The finite mixture is therefore flexible in determining the balance between the $GHD\!\left(-\frac{1}{2},\cdot\right)$ and $GHD\!\left(-\frac{3}{2},\cdot\right)$ components, depending on the nature of the data. Table 6 gives the values of the weights for the three data sets considered, based on the maximum likelihood estimates.

Conclusions
Two special cases of the Generalised Inverse Gaussian with indices $-\frac{1}{2}$ and $-\frac{3}{2}$ have been used to construct a finite mixture model. The mixture has been used as the mixing distribution of a Normal Variance-Mean mixture to obtain a Normal Weighted Inverse Gaussian model. The mean and variance of the proposed model have been obtained.
Three data sets, Range Resource Corporation (RRC), shares of Chevron Corporation (CVX) and the S&P 500 index, for the period 3/01/2000 to 1/07/2013 with 702 observations each, have been used for the analysis. An iterative scheme has been presented for parameter estimation by the EM algorithm, and it demonstrates monotonic convergence. The method-of-moments estimates for the NIG provided good starting values for the three data sets. The model fits the data sets well and is therefore a good alternative to the NIG.