A Finite Mixture of Generalised Inverse Gaussian Distributions with Indexes −1/2 and 3/2 as the Mixing Distribution for a Normal Variance Mean Mixture, with an Application

Mixture models have become more popular in modelling than standard distributions. The mixing distribution captures the variability of the random variable in the conditional distribution. Recent studies have focused on finite mixture models as mixing distributions in the mixing mechanism. In the present work, we consider a Normal Variance Mean mixture model in which the mixing distribution is a finite mixture of two special cases of the Generalised Inverse Gaussian distribution, with indexes −1/2 and 3/2. The parameters of the mixed model are obtained via the Expectation-Maximization (EM) algorithm; the iterative scheme is based on a representation of the normal equations. An application to financial data is presented.


Introduction
Mixture models provide a general framework for the construction of new distributions. Often a standard distribution has been used as the mixing distribution for the random variable. One way of extending this work is to consider (finite) mixture models as mixing distributions. Jorgensen, Seshadri and Whitmore [1] introduced a finite mixture of the inverse Gaussian distribution and its length-biased counterpart; related work includes Akman and Gupta [2] and Gupta and Akman [3]. Gupta and Kundu [4] showed that the model is versatile compared to the inverse Gaussian distribution.
The Lindley distribution, introduced by Lindley [5], is a finite mixture of an exponential and a gamma distribution. A detailed study of this one-parameter, two-component (finite) mixture was given by Ghitany et al. [6]. Sankaran [7] used it as a mixing distribution for the Poisson distribution. Shanker and Hagos [8] applied the resulting Poisson-Lindley distribution to biological science data.
The Generalised Inverse Gaussian (GIG) distribution is a three-parameter distribution based on the modified Bessel function of the third kind. Fisher [9] introduced the notion of a "weighted distribution", which was later elaborated by Patil and Rao [10]; see also Akman and Gupta [2] and Gupta and Akman [3]. In the present work, we show that a finite mixture of the GIG special cases with indexes −1/2 and 3/2 is itself a weighted Inverse Gaussian distribution. We use this model as the mixing distribution in a Normal Variance Mean mixture. The mean and variance of the mixed model are given. Parameter estimation is carried out using the Expectation-Maximization (EM) algorithm introduced by Dempster et al. [11]. An application to a financial data set gives satisfactory results.

Proposed Model
The Generalised Inverse Gaussian (GIG) distribution is a three-parameter model with density

$$f(x;\lambda,\delta,\gamma)=\frac{(\gamma/\delta)^{\lambda}}{2K_{\lambda}(\delta\gamma)}\,x^{\lambda-1}\exp\Big\{-\tfrac{1}{2}\big(\delta^{2}x^{-1}+\gamma^{2}x\big)\Big\},\qquad x>0,$$

where $K_{\lambda}(\cdot)$ is the modified Bessel function of the third kind with index $\lambda$. If $\lambda=n+\tfrac{1}{2}$, where $n$ is a positive integer, then the Bessel function has the closed form

$$K_{n+\frac{1}{2}}(\omega)=\sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left[1+\sum_{k=1}^{n}\frac{(n+k)!}{(n-k)!\,k!}\,(2\omega)^{-k}\right].$$

For more properties see Abramowitz and Stegun [12].
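As a quick numerical sanity check, the GIG density and the half-integer closed form of the Bessel function can be verified against SciPy's `kv`; the parameter values below are arbitrary illustrations, not estimates from the paper:

```python
import math
from scipy.special import kv           # modified Bessel function K_v
from scipy.integrate import quad

def kv_half_integer(n, w):
    """Closed-form K_{n+1/2}(w) from the finite-sum identity
    (Abramowitz & Stegun); integer n >= 0, w > 0."""
    s = sum(math.factorial(n + k) / (math.factorial(n - k) * math.factorial(k))
            * (2.0 * w) ** (-k) for k in range(n + 1))
    return math.sqrt(math.pi / (2.0 * w)) * math.exp(-w) * s

def gig_pdf(x, lam, delta, gamma):
    """GIG(lambda, delta, gamma) density in the parameterisation above."""
    return ((gamma / delta) ** lam / (2.0 * kv(lam, delta * gamma))
            * x ** (lam - 1.0)
            * math.exp(-0.5 * (delta ** 2 / x + gamma ** 2 * x)))

# the density should integrate to one (numerical quadrature)
total = quad(lambda x: gig_pdf(x, 1.5, 1.2, 0.9), 0, math.inf)[0]
```

The finite sum makes the half-integer cases (such as $\lambda=-\tfrac{1}{2}$ and $\tfrac{3}{2}$ below) available without special-function evaluations.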
The GIG has a number of special cases when the parameter $\lambda$ takes specific values. In particular, when $\lambda=-\tfrac{1}{2}$ we obtain the Inverse Gaussian distribution

$$f_{-1/2}(x)=\frac{\delta}{\sqrt{2\pi}}\,e^{\delta\gamma}\,x^{-3/2}\exp\Big\{-\tfrac{1}{2}\big(\delta^{2}x^{-1}+\gamma^{2}x\big)\Big\},\qquad x>0,$$

and when $\lambda=\tfrac{3}{2}$ we obtain the special case

$$f_{3/2}(x)=\frac{\gamma^{3}}{1+\delta\gamma}\,\frac{e^{\delta\gamma}}{\sqrt{2\pi}}\,x^{1/2}\exp\Big\{-\tfrac{1}{2}\big(\delta^{2}x^{-1}+\gamma^{2}x\big)\Big\},\qquad x>0,$$

which is a weighted Inverse Gaussian distribution with weight function proportional to $x^{2}$. The two-component finite mixture

$$g(z)=p\,f_{-1/2}(z)+(1-p)\,f_{3/2}(z),\qquad 0<p<1,$$

is again a weighted Inverse Gaussian distribution. The mean and variance of this weighted distribution follow from the component moments:

$$E[Z]=p\,E[Z_{-1/2}]+(1-p)\,E[Z_{3/2}],\qquad \operatorname{Var}(Z)=p\,E[Z_{-1/2}^{2}]+(1-p)\,E[Z_{3/2}^{2}]-\big(E[Z]\big)^{2}.$$
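The special cases and the mixture moments can be checked numerically. A minimal sketch, assuming hypothetical parameter values `d`, `g`, `p` chosen only for illustration:

```python
import math
from scipy.special import kv
from scipy.integrate import quad

def gig_pdf(x, lam, d, g):
    """GIG(lambda, delta, gamma) density."""
    return ((g / d) ** lam / (2.0 * kv(lam, d * g))
            * x ** (lam - 1.0) * math.exp(-0.5 * (d ** 2 / x + g ** 2 * x)))

def ig_pdf(x, d, g):
    """Closed-form Inverse Gaussian density, i.e. the lambda = -1/2 case."""
    return (d / math.sqrt(2.0 * math.pi) * math.exp(d * g)
            * x ** -1.5 * math.exp(-0.5 * (d ** 2 / x + g ** 2 * x)))

def gig_mean(lam, d, g):
    """E[Z] = (delta/gamma) K_{lam+1}(delta*gamma) / K_lam(delta*gamma)."""
    return (d / g) * kv(lam + 1, d * g) / kv(lam, d * g)

d, g, p = 1.3, 0.8, 0.4                 # hypothetical values
mix_pdf = lambda x: p * gig_pdf(x, -0.5, d, g) + (1 - p) * gig_pdf(x, 1.5, d, g)

# mixture mean two ways: numerically, and as the weighted component means
mean_numeric = quad(lambda x: x * mix_pdf(x), 0, math.inf)[0]
mean_formula = p * gig_mean(-0.5, d, g) + (1 - p) * gig_mean(1.5, d, g)
```

Note that `gig_mean(-0.5, d, g)` reduces to $\delta/\gamma$, the familiar Inverse Gaussian mean, since $K_{1/2}=K_{-1/2}$.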

Construction of the Mixed Model
Consider the Normal Variance Mean mixture in which, conditional on $Z=z$, $X\sim N(\mu+\beta z,\,z)$, and the random variable Z follows the weighted Inverse Gaussian distribution given in formula (10).
In general, the integral formulation for constructing Normal Weighted Inverse Gaussian (NWIG) distributions is

$$f(x)=\int_{0}^{\infty}\frac{1}{\sqrt{2\pi z}}\exp\Big\{-\frac{(x-\mu-\beta z)^{2}}{2z}\Big\}\,g(z)\,dz,$$

where $g(z)$ is the mixing density. One attractive feature of constructing distributions through mixtures is that properties of the mixed model can be expressed in terms of properties of the mixing distribution. In the Normal Variance Mean mixing mechanism we obtain

$$E[X]=\mu+\beta E[Z],\qquad \operatorname{Var}(X)=E[Z]+\beta^{2}\operatorname{Var}(Z).$$

This mixing mechanism has also been used by Barndorff-Nielsen [13] in constructing the Generalized Hyperbolic Distribution (GHD); Eberlein and Keller [14] worked on the hyperbolic distribution; Barndorff-Nielsen [15] introduced the Normal Inverse Gaussian (NIG) distribution; and Aas and Haff [16] considered the Generalized Hyperbolic skew Student's t distribution. Our objective is to construct a Normal Weighted Inverse Gaussian mixture: substituting the finite GIG mixture with indexes $-\tfrac{1}{2}$ and $\tfrac{3}{2}$ for $g(z)$ gives the mixed model, whose mean and variance follow from the expressions above with $E[Z]$ and $\operatorname{Var}(Z)$ as given in the previous section. The log-likelihood function is

$$\ell(\mu,\beta,\delta,\gamma)=\sum_{i=1}^{n}\log f(x_{i};\mu,\beta,\delta,\gamma).$$
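The integral construction and the moment identities can be verified by direct quadrature. A sketch assuming hypothetical parameter values (`mu`, `beta`, `d`, `g`, `p` are illustrations, not fitted estimates):

```python
import math
from scipy.special import kv
from scipy.integrate import quad

def gig_pdf(z, lam, d, g):
    return ((g / d) ** lam / (2.0 * kv(lam, d * g))
            * z ** (lam - 1.0) * math.exp(-0.5 * (d ** 2 / z + g ** 2 * z)))

mu, beta, d, g, p = 0.1, -0.3, 1.2, 1.0, 0.5    # hypothetical values
mix_pdf = lambda z: p * gig_pdf(z, -0.5, d, g) + (1 - p) * gig_pdf(z, 1.5, d, g)

def nvm_pdf(x):
    """Normal Variance Mean mixture density: X | Z=z ~ N(mu + beta*z, z)."""
    integrand = lambda z: (math.exp(-(x - mu - beta * z) ** 2 / (2.0 * z))
                           / math.sqrt(2.0 * math.pi * z) * mix_pdf(z))
    return quad(integrand, 0, math.inf)[0]

EZ = quad(lambda z: z * mix_pdf(z), 0, math.inf)[0]
# tails beyond +/-30 are negligible for these parameter values
norm = quad(nvm_pdf, -30, 30)[0]
EX = quad(lambda x: x * nvm_pdf(x), -30, 30)[0]
```

The checks confirm that the mixed density integrates to one and that $E[X]=\mu+\beta E[Z]$, exactly as the mixing mechanism predicts.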

Maximum Likelihood Estimation via Expectation-Maximization (EM) Algorithm
The EM algorithm is a powerful technique for maximum likelihood estimation with data containing missing values, or data that can be treated as containing missing values. It was introduced by Dempster et al. [11]. Assume the complete data consist of an observed part X and an unobserved part Z; the log-likelihood of the complete data then splits into a part involving the conditional density of X given Z and a part involving the mixing density of Z. Karlis [18] applied the EM algorithm to mixtures viewed in exactly this way: the conditional pdf corresponds to the observed data, while the mixing distribution is based on the unobserved data, the missing values.

M-Step for Conditional pdf
Since the conditional distribution of the mixed model is the normal distribution presented in formula (5.2), the M-step maximizes the expected complete-data log-likelihood. Maximizing with respect to δ and γ yields the normal-equation representations; both equations are quadratic in γ and δ, respectively.
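To illustrate why the normal equations reduce to quadratics, consider the simplified case of a plain Inverse Gaussian mixing density (the weighted mixture in this paper adds further terms). With posterior expectations $s_i=E[Z_i\mid x_i]$ and $w_i=E[Z_i^{-1}\mid x_i]$ from the E-step, the expected complete-data log-likelihood for the mixing part is $n\log\delta+n\delta\gamma-\tfrac{\delta^{2}}{2}\sum w_i-\tfrac{\gamma^{2}}{2}\sum s_i$ up to a constant; setting the derivatives to zero gives $\gamma=n\delta/\sum s_i$ and a quadratic for $\delta$:

```python
import math

def m_step_ig(s, w):
    """M-step update for (delta, gamma) under plain Inverse Gaussian mixing
    (a simplified sketch; the paper's weighted mixture adds extra terms).
    s[i] = E[Z_i | x_i], w[i] = E[1/Z_i | x_i] from the E-step."""
    n = len(s)
    S, W = sum(s), sum(w)
    # gamma = n*delta/S from d/dgamma = 0; substituting into d/ddelta = 0
    # gives the quadratic  delta^2 * (W - n^2/S) = n.
    delta = math.sqrt(n / (W - n ** 2 / S))
    gamma = n * delta / S
    return delta, gamma

# check: feeding the exact IG moments E[Z] = delta/gamma and
# E[1/Z] = gamma/delta + 1/delta^2 recovers the parameters exactly
d_hat, g_hat = m_step_ig([1.5 / 0.7] * 8, [0.7 / 1.5 + 1.0 / 1.5 ** 2] * 8)
```

The denominator $W-n^{2}/S$ is positive by the Jensen-type inequality $E[1/Z]\ge 1/E[Z]$, so the square root is well defined.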

E-Step
The E-step computes the posterior expectations of the unobserved mixing variable. Given the current parameter values, the posterior distribution of $Z_i$ given $X_i=x_i$ is again of Generalised Inverse Gaussian mixture form, so the required expectations $s_i=E[Z_i\mid x_i]$ and, similarly, $w_i=E[Z_i^{-1}\mid x_i]$ are ratios of modified Bessel functions. The posterior expectations for the k-th iteration, $s_i^{(k)}$ and $w_i^{(k)}$, are computed from the current estimates, and the iterative scheme alternates between this E-step and the M-step above until convergence.
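For a single GIG component the posterior expectations have closed forms: completing the square in the exponent shows that $Z\mid X=x\sim \mathrm{GIG}\big(\lambda-\tfrac{1}{2},\,\sqrt{\delta^{2}+(x-\mu)^{2}},\,\sqrt{\gamma^{2}+\beta^{2}}\big)$. A sketch with hypothetical parameter values, verified against direct integration of the posterior:

```python
import math
from scipy.special import kv
from scipy.integrate import quad

def posterior_expectations(x, lam, mu, beta, d, g):
    """E[Z|x] and E[1/Z|x] when Z ~ GIG(lam, d, g) a priori and
    X | Z=z ~ N(mu + beta*z, z): the posterior is GIG(lam - 1/2, d*, g*)."""
    d_star = math.sqrt(d ** 2 + (x - mu) ** 2)
    g_star = math.sqrt(g ** 2 + beta ** 2)
    lp, w = lam - 0.5, math.sqrt(d ** 2 + (x - mu) ** 2) * math.sqrt(g ** 2 + beta ** 2)
    s = (d_star / g_star) * kv(lp + 1, w) / kv(lp, w)   # E[Z | x]
    r = (g_star / d_star) * kv(lp - 1, w) / kv(lp, w)   # E[1/Z | x]
    return s, r

# numerical check of E[Z|x] against direct integration of the posterior
x, lam, mu, beta, d, g = 0.4, -0.5, 0.1, -0.2, 1.2, 1.0
joint = lambda z: (math.exp(-(x - mu - beta * z) ** 2 / (2.0 * z))
                   / math.sqrt(2.0 * math.pi * z)
                   * ((g / d) ** lam / (2.0 * kv(lam, d * g))
                      * z ** (lam - 1.0)
                      * math.exp(-0.5 * (d ** 2 / z + g ** 2 * z))))
norm = quad(joint, 0, math.inf)[0]
ez_numeric = quad(lambda z: z * joint(z), 0, math.inf)[0] / norm
s_val, r_val = posterior_expectations(x, lam, mu, beta, d, g)
```

For the finite GIG mixture used here, the posterior is the corresponding mixture of such components with updated weights, so the same Bessel-ratio building blocks apply.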

Application
Let $P_t$ denote the price process of a security, in particular of a stock, at time t.
In order to allow comparison of investments in different securities, we investigate the rates of return defined by

$$R_{t}=\log P_{t}-\log P_{t-1}.$$

The data used in this research are the S&P 500 weekly returns for the period 3/01/2000 to 1/07/2013, with 702 observations. The histogram of the weekly log-returns shows that the data are negatively skewed and exhibit heavy tails.
The Q-Q plot shows that the normal distribution is not a good fit for the data, especially in the tails (Figure 1). Table 1 provides descriptive statistics for the return series. The excess kurtosis of 6.408709 indicates the leptokurtic behaviour of the returns: the log-returns have a distribution with relatively heavier tails than the normal distribution. The skewness of −0.7851156 indicates that the two tails of the returns behave slightly differently.
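The log-return transformation and the moment statistics above can be sketched as follows; the price series here is a short hypothetical example, not the S&P 500 data used in the paper:

```python
import math
from statistics import mean

# Log-returns R_t = log P_t - log P_{t-1}, on a hypothetical price series
prices = [100.0, 101.5, 99.8, 102.3, 101.1, 103.0]
returns = [math.log(p1) - math.log(p0) for p0, p1 in zip(prices, prices[1:])]

# sample moments used in Table-1-style descriptive statistics
m = mean(returns)
var = mean([(r - m) ** 2 for r in returns])
skew = mean([(r - m) ** 3 for r in returns]) / var ** 1.5
excess_kurt = mean([(r - m) ** 4 for r in returns]) / var ** 2 - 3.0
```

A convenient property of log-returns is that they telescope: their sum equals the log-return over the whole holding period, $\log(P_T/P_0)$.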
We now fit the proposed model to the S&P 500 weekly log-returns. Using the sample estimates and the NIG estimators, we obtain initial values for the EM algorithm.

Conclusions
In this work, we have considered a normal variance mean mixture whose mixing distribution is a weighted inverse Gaussian distribution. Specifically, we have shown that a finite mixture of two special cases of the generalised inverse Gaussian distribution, with indexes −1/2 and 3/2, is itself a weighted inverse Gaussian distribution.
Further, we have constructed a Normal Weighted Inverse Gaussian distribution, studied its properties, and estimated its parameters using the Expectation-Maximization (EM) algorithm. The initial values were based on the method-of-moments estimates of the NIG distribution in the formulation of Karlis [18]. The proposed iterative scheme converged monotonically. The model is a good alternative to the NIG distribution.