A Finite Mixture of Generalised Inverse Gaussian with Indexes -1/2 and -3/2 as Mixing Distribution for Normal Variance Mean Mixture with Application

Abstract

Mixture models have become increasingly popular in modelling compared with standard distributions. The mixing distribution captures the variability of the random variable in the conditional distribution. Recent studies have focused on finite mixture models as mixing distributions. In the present work, we consider a Normal Variance-Mean mixture model whose mixing distribution is a finite mixture of two special cases of the Generalised Inverse Gaussian distribution with indexes -1/2 and -3/2. The parameters of the mixed model are estimated via the Expectation-Maximization (EM) algorithm, with an iterative scheme based on a representation of the normal equations. An application to financial data is presented.


1. Introduction

Mixture models provide a general framework for constructing new distributions. Often a standard distribution is used as the mixing distribution for the random variable. One way of extending this work is to consider (finite) mixture models as mixing distributions. Jorgensen, Seshadri and Whitmore [1] introduced a finite mixture of the Inverse Gaussian and Reciprocal Inverse Gaussian distributions and studied some of its properties. Further characteristics were obtained by Akman and Gupta [2] and Gupta and Akman [3]. Gupta and Kundu [4] showed that the model is more versatile than the Inverse Gaussian.

The Lindley distribution introduced by Lindley [5] is a finite mixture of the exponential and gamma distributions. A detailed study of this one-parameter, two-component (finite) mixture was given by Ghitany et al. [6]. Sankaran [7] used it as a mixing distribution for the Poisson distribution. Shanker and Hogos [8] applied the Poisson-Lindley distribution to Biological Science data.

The Generalized Inverse Gaussian (GIG) distribution is a three-parameter distribution based on the modified Bessel function of the third kind. Let us denote it by GIG(λ, δ, γ). It has several special and limiting cases. We have, for example, the Inverse Gaussian distribution, GIG(-1/2, δ, γ); the Reciprocal Inverse Gaussian distribution, GIG(1/2, δ, γ); the Gamma distribution, GIG(λ, 0, γ); the Inverse Gamma distribution, GIG(λ, δ, 0); and the GIG(3/2, δ, γ) and GIG(-3/2, δ, γ) distributions, among others.

Fisher [9] introduced the notion of a "weighted distribution", which was later elaborated by Patil and Rao [10], Akman and Gupta [2], and Gupta and Akman [3]. Gupta and Kundu [4] showed that GIG(-1/2, δ, γ) and the finite mixture p GIG(-1/2, δ, γ) + (1 - p) GIG(1/2, δ, γ) are weighted Inverse Gaussian distributions.

In our present work, we show that a finite mixture of GIG(-1/2, δ, γ) and GIG(3/2, δ, γ) is also a weighted Inverse Gaussian distribution. We use this model as the mixing distribution in a Normal Variance-Mean mixture. The mean and variance of the mixed model are derived. Parameter estimation is carried out using the Expectation-Maximization (EM) algorithm introduced by Dempster et al. [11]. An application to a financial data set is performed, with satisfactory results.

2. Proposed Model

The Generalised Inverse Gaussian is a three-parameter model with density

g(z) = \frac{(\gamma/\delta)^{\lambda}}{2K_{\lambda}(\delta\gamma)}\,z^{\lambda-1}\exp\left\{-\frac{1}{2}\left(\frac{\delta^{2}}{z}+\gamma^{2}z\right)\right\} (1)

z>0;\quad -\infty<\lambda<\infty,\ \delta>0,\ \gamma>0

with raw moments

E(Z^{r}) = \left(\frac{\delta}{\gamma}\right)^{r}\frac{K_{\lambda+r}(\delta\gamma)}{K_{\lambda}(\delta\gamma)}

where r can be a positive or negative integer.

Here K_{\lambda}(\omega) is the modified Bessel function of the third kind of order \lambda evaluated at \omega, with the following properties:

K_{\lambda}(\omega) = K_{-\lambda}(\omega) (2)

\frac{\partial}{\partial\omega}K_{\lambda}(\omega) = -\frac{1}{2}\left[K_{\lambda+1}(\omega)+K_{\lambda-1}(\omega)\right] (3)

\frac{\partial}{\partial\omega}K_{\lambda}(\omega) = \frac{\lambda}{\omega}K_{\lambda}(\omega)-K_{\lambda+1}(\omega) (4)

\frac{2\lambda}{\omega}K_{\lambda}(\omega) = K_{\lambda+1}(\omega)-K_{\lambda-1}(\omega) (5)

K_{\lambda}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left(1+\sum_{n=1}^{\infty}\frac{\prod_{i=1}^{n}\left[4\lambda^{2}-(2i-1)^{2}\right]}{n!\,(8\omega)^{n}}\right) (6)

If \lambda = n+\frac{1}{2}, where n is a positive integer, then we have

Corollary 1

K_{n+\frac{1}{2}}(\omega) = \sqrt{\frac{\pi}{2\omega}}\,e^{-\omega}\left[1+\sum_{i=1}^{n}\frac{(n+i)!}{i!\,(n-i)!\,(2\omega)^{i}}\right] (7)

For more properties see Abramowitz and Stegun [12].
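As a quick illustration of Corollary 1, the closed form (7) can be checked numerically against a library implementation of K_λ. The following sketch assumes Python with SciPy, whose scipy.special.kv evaluates the modified Bessel function of the third kind; it is an illustrative check, not part of the paper's derivations.

```python
# Numerical check of formula (7) against scipy.special.kv for a few half-integer orders.
from math import factorial, sqrt, pi, exp
from scipy.special import kv

def k_half_integer(n, w):
    """Closed form (7) for K_{n+1/2}(w)."""
    series = sum(factorial(n + i) / (factorial(i) * factorial(n - i) * (2.0 * w) ** i)
                 for i in range(n + 1))  # the i = 0 term equals 1
    return sqrt(pi / (2.0 * w)) * exp(-w) * series

w = 1.7
for n in range(4):
    print(n, k_half_integer(n, w), kv(n + 0.5, w))  # the two columns should agree
```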

The GIG has a number of special cases when the parameter λ takes specific values. In particular, when \lambda = -\frac{1}{2} we obtain the Inverse Gaussian distribution,

f(z) = \frac{\delta}{\sqrt{2\pi}}\,e^{\delta\gamma}\,z^{-\frac{3}{2}}\exp\left\{-\frac{1}{2}\left(\frac{\delta^{2}}{z}+\gamma^{2}z\right)\right\} (8)

and when \lambda = \frac{3}{2} we obtain the special case

f(z) = \frac{\gamma^{3}e^{\delta\gamma}}{\sqrt{2\pi}(1+\delta\gamma)}\,z^{\frac{1}{2}}\exp\left\{-\frac{1}{2}\left(\frac{\delta^{2}}{z}+\gamma^{2}z\right)\right\} (9)

3. Weighted Inverse Gaussian Distribution

Let Z be a random variable with pdf f ( z ) . A function of Z, w ( Z ) is also a random variable with expectation

E[w(Z)] = \int w(z)\,f(z)\,dz

so that

1 = \int \frac{w(z)}{E[w(Z)]}\,f(z)\,dz

Thus

g(z) = \frac{w(z)}{E[w(Z)]}\,f(z), \quad -\infty<z<\infty (10)

is a weighted distribution. It was introduced by Fisher [9] and elaborated by Patil and Rao [10].

Now, suppose Z ∼ IG(γ, δ), the Inverse Gaussian distribution with parameters γ and δ, and let

w(Z) = 1 + \frac{Z^{2}}{1+\delta\gamma} (11)

Then

E[w(Z)] = \frac{\gamma^{3}+\delta}{\gamma^{3}} (12)

and

g(z) = \frac{\gamma^{3}}{\gamma^{3}+\delta}\left(1+\frac{z^{2}}{1+\delta\gamma}\right)f(z) (13)

which is also a finite mixture of GIG(-1/2, δ, γ) and GIG(3/2, δ, γ). That is,

g(z) = p\,GIG\left(-\tfrac{1}{2},\delta,\gamma\right) + (1-p)\,GIG\left(\tfrac{3}{2},\delta,\gamma\right)

with

p = \frac{\gamma^{3}}{\gamma^{3}+\delta}

The mean and variance for the weighted distribution are

E[Z] = \frac{\gamma^{3}}{\gamma^{3}+\delta}\left[\frac{\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta}{\gamma^{5}(1+\delta\gamma)}\right] (14)

= \frac{\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta}{\gamma^{2}(1+\delta\gamma)(\gamma^{3}+\delta)} (15)

\operatorname{var}(Z) = \frac{\left(\delta\gamma^{6}(1+\delta\gamma)^{2}+\delta^{5}\gamma^{4}+10\delta^{4}\gamma^{3}+45\delta^{3}\gamma^{2}+105\delta^{2}\gamma+105\delta\right)(1+\delta\gamma)(\gamma^{3}+\delta)}{\gamma^{6}(1+\delta\gamma)^{2}(\gamma^{3}+\delta)} (16)

\quad - \frac{\left(\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta\right)^{2}}{\gamma^{6}(1+\delta\gamma)^{2}(\gamma^{3}+\delta)} (17)
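As a numerical illustration of the identity between the weighted density (13) and the two-component mixture form above, the following sketch (Python with NumPy/SciPy; the values δ = 0.8 and γ = 1.3 are arbitrary choices for illustration) evaluates both expressions on a grid and confirms that they agree up to machine precision.

```python
# Check: weighted IG density (13) equals p*GIG(-1/2) + (1-p)*GIG(3/2), p = gamma^3/(gamma^3+delta).
import numpy as np
from scipy.special import kv

def gig_pdf(z, lam, delta, gamma):
    """Generalised Inverse Gaussian density, formula (1)."""
    const = (gamma / delta) ** lam / (2.0 * kv(lam, delta * gamma))
    return const * z ** (lam - 1.0) * np.exp(-0.5 * (delta ** 2 / z + gamma ** 2 * z))

def weighted_ig_pdf(z, delta, gamma):
    """Weighted Inverse Gaussian density, formula (13)."""
    ig = gig_pdf(z, -0.5, delta, gamma)
    return gamma ** 3 / (gamma ** 3 + delta) * (1.0 + z ** 2 / (1.0 + delta * gamma)) * ig

delta, gamma = 0.8, 1.3                         # illustrative parameter values
p = gamma ** 3 / (gamma ** 3 + delta)
z = np.linspace(0.01, 10.0, 200)
mixture = p * gig_pdf(z, -0.5, delta, gamma) + (1.0 - p) * gig_pdf(z, 1.5, delta, gamma)
print(np.max(np.abs(weighted_ig_pdf(z, delta, gamma) - mixture)))  # difference at machine-precision level
```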

4. Construction of the Mixed Model

Consider the conditional distribution X \mid Z = z \sim N(\mu+\beta z, z), and let the random variable Z follow a weighted Inverse Gaussian distribution as given in formula (10).

In general, the integral formulation for constructing Normal Weighted Inverse Gaussian (NWIG) distributions is

f(x) = \frac{\delta e^{\delta\gamma}e^{\beta(x-\mu)}}{2\pi}\int_{0}^{\infty}\frac{w(z)}{E[w(Z)]}\,z^{-2}\,e^{-\frac{1}{2}\left\{\frac{\delta^{2}\phi(x)}{z}+\alpha^{2}z\right\}}dz (18)

where \phi(x) = 1+\left(\frac{x-\mu}{\delta}\right)^{2} and \alpha^{2} = \beta^{2}+\gamma^{2}.

One of the attractive features of constructing distributions via the mixture approach is that properties of the mixed model can be expressed in terms of properties of the mixing distribution. In the Normal Variance-Mean mixing mechanism we obtain

M_{X}(t) = e^{\mu t}\,M_{Z}\!\left(\beta t+\frac{t^{2}}{2}\right)

E(X) = \mu + \beta E(Z)

\operatorname{Var}(X) = E(Z) + \beta^{2}\operatorname{Var}(Z)

\mu_{3}(X) = 3\beta\operatorname{Var}(Z) + \beta^{3}\mu_{3}(Z)

\mu_{4}(X) = \beta^{4}\mu_{4}(Z) + 6\beta^{2}\mu_{3}(Z) + 6\beta^{2}E[Z]\operatorname{Var}(Z) + 3E[Z^{2}]

The mixing mechanism has also been used by Barndorff-Nielsen [13] in constructing the Generalized Hyperbolic Distribution (GHD); Eberlein and Keller [14] worked on the hyperbolic distribution; Barndorff-Nielsen [15] introduced the Normal Inverse Gaussian (NIG) distribution; and Aas and Haff [16] considered the Generalized Hyperbolic skew Student's t distribution. It is our objective to construct a Normal Weighted Inverse Gaussian mixture. For our case

\frac{w(z)}{E[w(Z)]} = \frac{\gamma^{3}}{\gamma^{3}+\delta}\left(1+\frac{z^{2}}{1+\delta\gamma}\right) (19)

Therefore the mixed model becomes,

f(x) = \frac{\delta\gamma^{3}e^{\delta\gamma}}{2\pi(\gamma^{3}+\delta)}e^{\beta(x-\mu)}\int_{0}^{\infty}\left(1+\frac{z^{2}}{1+\delta\gamma}\right)z^{-2}e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz = \frac{\delta\gamma^{3}e^{\delta\gamma}}{2\pi(\gamma^{3}+\delta)}e^{\beta(x-\mu)}\int_{0}^{\infty}\left(z^{-1-1}+\frac{z^{1-1}}{1+\delta\gamma}\right)e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz

f(x) = \frac{\delta\gamma^{3}e^{\delta\gamma}e^{\beta(x-\mu)}}{\pi(\gamma^{3}+\delta)}\left\{\left[\frac{\delta\sqrt{\phi(x)}}{\alpha}\right]^{-1}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)+\frac{1}{1+\delta\gamma}\left[\frac{\delta\sqrt{\phi(x)}}{\alpha}\right]K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)\right\}
= \frac{\delta\gamma^{3}e^{\delta\gamma}e^{\beta(x-\mu)}}{\pi(\gamma^{3}+\delta)}\left\{\frac{\alpha}{\delta\sqrt{\phi(x)}}+\frac{\delta\sqrt{\phi(x)}}{\alpha(1+\delta\gamma)}\right\}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)
= \frac{\delta\gamma^{3}e^{\delta\gamma}e^{\beta(x-\mu)}}{\pi(\gamma^{3}+\delta)\,\alpha\delta(1+\delta\gamma)\sqrt{\phi(x)}}\left\{\alpha^{2}(1+\delta\gamma)+\delta^{2}\phi(x)\right\}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)
= \frac{\gamma^{3}e^{\delta\gamma}e^{\beta(x-\mu)}\left(\phi(x)\right)^{-\frac{1}{2}}}{\alpha\pi(\gamma^{3}+\delta)(1+\delta\gamma)}\left\{\alpha^{2}(1+\delta\gamma)+\delta^{2}\phi(x)\right\}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right) (20)

With the following properties

E(X) = \mu + \beta\,\frac{\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta}{\gamma^{2}(1+\delta\gamma)(\gamma^{3}+\delta)} (21)

\operatorname{var}(X) = \frac{\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta}{\gamma^{2}(1+\delta\gamma)(\gamma^{3}+\delta)} + \beta^{2}\left[\frac{\left(\delta\gamma^{6}(1+\delta\gamma)^{2}+\delta^{5}\gamma^{4}+10\delta^{4}\gamma^{3}+45\delta^{3}\gamma^{2}+105\delta^{2}\gamma+105\delta\right)(1+\delta\gamma)(\gamma^{3}+\delta)}{\gamma^{6}(1+\delta\gamma)^{2}(\gamma^{3}+\delta)} - \frac{\left(\delta\gamma^{4}(1+\delta\gamma)+\delta^{3}\gamma^{2}+3\delta^{2}\gamma+3\delta\right)^{2}}{\gamma^{6}(1+\delta\gamma)^{2}(\gamma^{3}+\delta)}\right] (22)

The log-likelihood function

l = \log L = \sum_{i=1}^{n}\log f(x_{i}) = \sum_{i=1}^{n}\left\{3\log\gamma+\delta\gamma+\beta(x_{i}-\mu)-\log\left(\alpha\pi(\gamma^{3}+\delta)(1+\delta\gamma)\right)-\frac{1}{2}\log\phi(x_{i})+\log\left[\alpha^{2}(1+\delta\gamma)+\delta^{2}\phi(x_{i})\right]+\log K_{1}\left(\alpha\delta\sqrt{\phi(x_{i})}\right)\right\}
= 3n\log\gamma+n\delta\gamma+\beta\sum_{i=1}^{n}x_{i}-n\beta\mu-n\log\left(\alpha\pi(\gamma^{3}+\delta)(1+\delta\gamma)\right)-\frac{1}{2}\sum_{i=1}^{n}\log\phi(x_{i})+\sum_{i=1}^{n}\log\left[\alpha^{2}(1+\delta\gamma)+\delta^{2}\phi(x_{i})\right]+\sum_{i=1}^{n}\log K_{1}\left(\alpha\delta\sqrt{\phi(x_{i})}\right) (23)
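For reference, a minimal sketch of how the density (20) and the log-likelihood (23) could be evaluated numerically is given below. Python with NumPy/SciPy is assumed; scipy.special.kv supplies K_1, and the function names nwig_pdf and nwig_loglik are our own labels, not the authors'. The parameter values used in the normalization check are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def nwig_pdf(x, mu, beta, delta, gamma):
    """Mixed-model density, formula (20)."""
    x = np.asarray(x, dtype=float)
    alpha = np.sqrt(beta ** 2 + gamma ** 2)
    phi = 1.0 + ((x - mu) / delta) ** 2
    arg = alpha * delta * np.sqrt(phi)
    const = gamma ** 3 * np.exp(delta * gamma) / (alpha * np.pi * (gamma ** 3 + delta) * (1.0 + delta * gamma))
    return (const * np.exp(beta * (x - mu)) * phi ** (-0.5)
            * (alpha ** 2 * (1.0 + delta * gamma) + delta ** 2 * phi) * kv(1, arg))

def nwig_loglik(x, mu, beta, delta, gamma):
    """Log-likelihood (23), summed over the observations."""
    return float(np.sum(np.log(nwig_pdf(x, mu, beta, delta, gamma))))

# Sanity check: the density should integrate to one (illustrative parameter values).
print(quad(lambda t: nwig_pdf(t, 0.0, -0.1, 0.8, 1.3), -np.inf, np.inf)[0])
```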

5. Maximum Likelihood Estimation via Expectation-Maximization (EM) Algorithm

The EM algorithm is a powerful technique for maximum likelihood estimation for data containing missing values, or data that can be treated as containing missing values. It was introduced by Dempster et al. [11].

Assume that the complete data consist of an observed part X and an unobserved part Z. The log-likelihood of the complete data (x_i, z_i), for i = 1, 2, ..., n, then factorizes into two parts [17], i.e.,

\log L = \log\prod_{i=1}^{n} f(x_{i}\mid z_{i}) + \log\prod_{i=1}^{n} g(z_{i}) = \sum_{i=1}^{n}\log f(x_{i}\mid z_{i}) + \sum_{i=1}^{n}\log g(z_{i})

where

l_{1} = \sum_{i=1}^{n}\log f(x_{i}\mid z_{i})

and

l_{2} = \sum_{i=1}^{n}\log g(z_{i})

Karlis [18] applied the EM algorithm to such mixtures, which he considered to consist of two parts: the conditional pdf, which involves the observed data, and the mixing distribution, which is based on the unobserved data, i.e., the missing values.

5.1. M-Step for Conditional pdf

Since the conditional distribution is the normal distribution X \mid Z = z \sim N(\mu+\beta z, z), we have

l_{1} = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{i=1}^{n}\log z_{i} - \sum_{i=1}^{n}\frac{(x_{i}-\mu-\beta z_{i})^{2}}{2z_{i}}

Therefore

\frac{\partial l_{1}}{\partial\beta} = 0 \;\Rightarrow\; \sum_{i=1}^{n}\left(x_{i}-\hat{\mu}-\hat{\beta}z_{i}\right)=0

i.e., \sum_{i=1}^{n}x_{i} - n\hat{\mu} - \hat{\beta}\sum_{i=1}^{n}z_{i} = 0

\hat{\mu} = \bar{x} - \hat{\beta}\bar{z}

where \bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i} and \bar{z} = \frac{1}{n}\sum_{i=1}^{n}z_{i}.

Similarly,

\frac{\partial l_{1}}{\partial\mu} = 0 \;\Rightarrow\; \sum_{i=1}^{n}\frac{x_{i}}{z_{i}} - \hat{\mu}\sum_{i=1}^{n}\frac{1}{z_{i}} - n\hat{\beta} = 0

\sum_{i=1}^{n}\frac{x_{i}}{z_{i}} - \bar{x}\sum_{i=1}^{n}\frac{1}{z_{i}} + \hat{\beta}\bar{z}\sum_{i=1}^{n}\frac{1}{z_{i}} - n\hat{\beta} = 0

\hat{\beta} = \frac{\sum_{i=1}^{n}\frac{x_{i}}{z_{i}} - \bar{x}\sum_{i=1}^{n}\frac{1}{z_{i}}}{n - \bar{z}\sum_{i=1}^{n}\frac{1}{z_{i}}}
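In the EM setting the unobserved z_i and 1/z_i are replaced by their posterior expectations, which are computed in the E-step below. A small sketch of the resulting closed-form updates for μ and β follows (Python/NumPy assumed; here s and w denote the pseudo-values E(Z_i | x_i) and E(1/Z_i | x_i)).

```python
import numpy as np

def m_step_mu_beta(x, s, w):
    """Closed-form updates for mu and beta from the normal equations above,
    with z_i replaced by s_i = E(Z_i|x_i) and 1/z_i by w_i = E(1/Z_i|x_i)."""
    x, s, w = (np.asarray(a, dtype=float) for a in (x, s, w))
    n = x.size
    beta_hat = (np.sum(x * w) - x.mean() * np.sum(w)) / (n - s.mean() * np.sum(w))
    mu_hat = x.mean() - beta_hat * s.mean()
    return mu_hat, beta_hat
```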

M-Step for the Mixing Distribution

l_{2} = n\log\delta + 3n\log\gamma + n\delta\gamma - \frac{n}{2}\log(2\pi) - n\log(1+\delta\gamma) - n\log(\gamma^{3}+\delta) + \sum_{i=1}^{n}\log\left(1+\delta\gamma+z_{i}^{2}\right) - \frac{3}{2}\sum_{i=1}^{n}\log z_{i} - \frac{\delta^{2}}{2}\sum_{i=1}^{n}\frac{1}{z_{i}} - \frac{\gamma^{2}}{2}\sum_{i=1}^{n}z_{i} (24)

Maximizing with respect to δ and γ we have the following representation

\left(\frac{n\delta^{2}}{1+\delta\gamma}-\sum_{i=1}^{n}z_{i}\right)\gamma^{2} + \sum_{i=1}^{n}\frac{\delta\gamma}{1+\delta\gamma+z_{i}^{2}} + \frac{3n\delta}{\gamma^{3}+\delta} = 0 (25)

\left(\frac{n\gamma^{2}}{1+\delta\gamma}-\sum_{i=1}^{n}\frac{1}{z_{i}}\right)\delta^{2} + \sum_{i=1}^{n}\frac{\delta\gamma}{1+\delta\gamma+z_{i}^{2}} + \frac{n\gamma^{3}}{\gamma^{3}+\delta} = 0 (26)

Both equations are quadratic in γ and δ respectively.

5.2. E-Step

Posterior Expectation

E(Z\mid X=x) = \frac{\int_{0}^{\infty} z\left(1+\frac{z^{2}}{1+\delta\gamma}\right)z^{-2}\,e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz}{\int_{0}^{\infty}\left(1+\frac{z^{2}}{1+\delta\gamma}\right)z^{-2}\,e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz} = \frac{\frac{1}{2}\int_{0}^{\infty}\left(z^{0-1}+\frac{z^{2-1}}{1+\delta\gamma}\right)e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz}{\frac{1}{2}\int_{0}^{\infty}\left(z^{-1-1}+\frac{z^{1-1}}{1+\delta\gamma}\right)e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz}

= \frac{K_{0}\left(\alpha\delta\sqrt{\phi(x)}\right)+\frac{1}{1+\delta\gamma}\left[\frac{\delta\sqrt{\phi(x)}}{\alpha}\right]^{2}K_{2}\left(\alpha\delta\sqrt{\phi(x)}\right)}{\left[\frac{\delta\sqrt{\phi(x)}}{\alpha}\right]^{-1}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)+\frac{\delta\sqrt{\phi(x)}}{\alpha(1+\delta\gamma)}K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)}
= \frac{\alpha\delta(1+\delta\gamma)\sqrt{\phi(x)}\,K_{0}\left(\alpha\delta\sqrt{\phi(x)}\right)+\frac{\delta^{3}\left(\phi(x)\right)^{\frac{3}{2}}}{\alpha}K_{2}\left(\alpha\delta\sqrt{\phi(x)}\right)}{\left[\alpha^{2}(1+\delta\gamma)+\delta^{2}\phi(x)\right]K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)} (27)

Similarly,

E\left(\frac{1}{Z}\,\Big|\,X=x\right) = \frac{\int_{0}^{\infty} z^{-1}\left(1+\frac{z^{2}}{1+\delta\gamma}\right)z^{-2}\,e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz}{\int_{0}^{\infty}\left(1+\frac{z^{2}}{1+\delta\gamma}\right)z^{-2}\,e^{-\frac{1}{2}\left(\alpha^{2}z+\frac{\delta^{2}\phi(x)}{z}\right)}dz} = \frac{\left[\frac{\delta\sqrt{\phi(x)}}{\alpha}\right]^{-2}K_{2}\left(\alpha\delta\sqrt{\phi(x)}\right)+\frac{K_{0}\left(\alpha\delta\sqrt{\phi(x)}\right)}{1+\delta\gamma}}{\left[\left(\frac{\delta\sqrt{\phi(x)}}{\alpha}\right)^{-1}+\frac{1}{1+\delta\gamma}\left(\frac{\delta\sqrt{\phi(x)}}{\alpha}\right)\right]K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)}

= \frac{\alpha^{3}(1+\delta\gamma)K_{2}\left(\alpha\delta\sqrt{\phi(x)}\right)+\alpha\delta^{2}\phi(x)K_{0}\left(\alpha\delta\sqrt{\phi(x)}\right)}{\left[\alpha^{2}\delta(1+\delta\gamma)\sqrt{\phi(x)}+\delta^{3}\left(\phi(x)\right)^{\frac{3}{2}}\right]K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)} (28)

Similarly

E(Z^{2}\mid X=x) = \frac{(1+\delta\gamma)\,\alpha^{2}\delta^{2}\phi(x)\,K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)+\delta^{4}\left(\phi(x)\right)^{2}K_{3}\left(\alpha\delta\sqrt{\phi(x)}\right)}{\left[(1+\delta\gamma)\,\alpha^{4}+\alpha^{2}\delta^{2}\phi(x)\right]K_{1}\left(\alpha\delta\sqrt{\phi(x)}\right)} (29)
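The closed form (27) can be checked directly against numerical integration of the posterior. The sketch below does this for one arbitrarily chosen set of parameter values (Python with SciPy assumed); it is a verification aid only.

```python
# Verify E(Z|X=x) in (27) against direct numerical integration of the posterior kernel.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

mu, beta, delta, gamma, x = 0.0, -0.2, 0.8, 1.3, 0.5     # illustrative values
alpha = np.sqrt(beta ** 2 + gamma ** 2)
phi = 1.0 + ((x - mu) / delta) ** 2
arg = alpha * delta * np.sqrt(phi)
c = 1.0 + delta * gamma

kernel = lambda z: (1.0 + z ** 2 / c) * z ** -2 * np.exp(-0.5 * (alpha ** 2 * z + delta ** 2 * phi / z))
numerator = quad(lambda z: z * kernel(z), 0, np.inf)[0]
denominator = quad(kernel, 0, np.inf)[0]

closed_form = (alpha * delta * c * np.sqrt(phi) * kv(0, arg)
               + delta ** 3 * phi ** 1.5 / alpha * kv(2, arg)) / ((alpha ** 2 * c + delta ** 2 * phi) * kv(1, arg))
print(numerator / denominator, closed_form)               # the two values agree
```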

The posterior expectations for the k-th iteration are:

Writing \omega_{i}^{(k)} = \alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_{i})} for brevity, where \phi^{(k)}(x_{i}) = 1+\left(\frac{x_{i}-\mu^{(k)}}{\delta^{(k)}}\right)^{2} and \left(\alpha^{(k)}\right)^{2} = \left(\beta^{(k)}\right)^{2}+\left(\gamma^{(k)}\right)^{2},

s_{i}^{(k)} = \frac{\alpha^{(k)}\delta^{(k)}\left(1+\delta^{(k)}\gamma^{(k)}\right)\sqrt{\phi^{(k)}(x_{i})}\,K_{0}\left(\omega_{i}^{(k)}\right)+\frac{\left(\delta^{(k)}\right)^{3}\left(\phi^{(k)}(x_{i})\right)^{\frac{3}{2}}}{\alpha^{(k)}}K_{2}\left(\omega_{i}^{(k)}\right)}{\left[\left(\alpha^{(k)}\right)^{2}\left(1+\delta^{(k)}\gamma^{(k)}\right)+\left(\delta^{(k)}\right)^{2}\phi^{(k)}(x_{i})\right]K_{1}\left(\omega_{i}^{(k)}\right)}

w_{i}^{(k)} = \frac{\left(\alpha^{(k)}\right)^{3}\left(1+\delta^{(k)}\gamma^{(k)}\right)K_{2}\left(\omega_{i}^{(k)}\right)+\alpha^{(k)}\left(\delta^{(k)}\right)^{2}\phi^{(k)}(x_{i})\,K_{0}\left(\omega_{i}^{(k)}\right)}{\left[\left(\alpha^{(k)}\right)^{2}\delta^{(k)}\left(1+\delta^{(k)}\gamma^{(k)}\right)\sqrt{\phi^{(k)}(x_{i})}+\left(\delta^{(k)}\right)^{3}\left(\phi^{(k)}(x_{i})\right)^{\frac{3}{2}}\right]K_{1}\left(\omega_{i}^{(k)}\right)}

v_{i}^{(k)} = \frac{\left(1+\delta^{(k)}\gamma^{(k)}\right)\left(\alpha^{(k)}\right)^{2}\left(\delta^{(k)}\right)^{2}\phi^{(k)}(x_{i})\,K_{1}\left(\omega_{i}^{(k)}\right)+\left(\delta^{(k)}\right)^{4}\left(\phi^{(k)}(x_{i})\right)^{2}K_{3}\left(\omega_{i}^{(k)}\right)}{\left[\left(1+\delta^{(k)}\gamma^{(k)}\right)\left(\alpha^{(k)}\right)^{4}+\left(\alpha^{(k)}\right)^{2}\left(\delta^{(k)}\right)^{2}\phi^{(k)}(x_{i})\right]K_{1}\left(\omega_{i}^{(k)}\right)}

Now, define the iterative scheme as follows:

let

a_{1}^{(k+1)} = \frac{n\left(\delta^{(k)}\right)^{2}}{1+\delta^{(k)}\gamma^{(k)}} - \sum_{i=1}^{n}s_{i}^{(k)}

b_{1}^{(k+1)} = \sum_{i=1}^{n}\frac{\delta^{(k)}}{1+\delta^{(k)}\gamma^{(k)}+v_{i}^{(k)}}

c_{1}^{(k+1)} = \frac{3n\delta^{(k)}}{\left(\gamma^{(k)}\right)^{3}+\delta^{(k)}}

let

t^{(k+1)} = \frac{-b_{1}^{(k+1)} - \sqrt{\left(b_{1}^{(k+1)}\right)^{2} - 4a_{1}^{(k+1)}c_{1}^{(k+1)}}}{2a_{1}^{(k+1)}} (30)

using the square root transformation, we have

\gamma^{(k+1)} = \sqrt{t^{(k+1)}} (31)

Similarly, define

a_{2}^{(k+1)} = \frac{n\left(\gamma^{(k+1)}\right)^{2}}{1+\delta^{(k)}\gamma^{(k+1)}} - \sum_{i=1}^{n}w_{i}^{(k)} (32)

b_{2}^{(k+1)} = \sum_{i=1}^{n}\frac{\delta^{(k)}\gamma^{(k+1)}}{1+\delta^{(k)}\gamma^{(k+1)}+v_{i}^{(k)}} (33)

c_{2}^{(k+1)} = \frac{n\left(\gamma^{(k+1)}\right)^{3}}{\left(\gamma^{(k+1)}\right)^{3}+\delta^{(k)}} (34)

let

s^{(k+1)} = \frac{-b_{2}^{(k+1)} - \sqrt{\left(b_{2}^{(k+1)}\right)^{2} - 4a_{2}^{(k+1)}c_{2}^{(k+1)}}}{2a_{2}^{(k+1)}} (35)

using the square root transformation, we have

\delta^{(k+1)} = \sqrt{s^{(k+1)}} (36)

and the k-th iterate of the log-likelihood is given by

l^{(k)} = 3n\log\gamma^{(k)} + n\delta^{(k)}\gamma^{(k)} + \beta^{(k)}\sum_{i=1}^{n}\left(x_{i}-\mu^{(k)}\right) - n\log\left(\alpha^{(k)}\pi\right) - n\log\left(1+\delta^{(k)}\gamma^{(k)}\right) - n\log\left(\left(\gamma^{(k)}\right)^{3}+\delta^{(k)}\right) - \frac{1}{2}\sum_{i=1}^{n}\log\phi^{(k)}(x_{i}) + \sum_{i=1}^{n}\log\left[\left(\alpha^{(k)}\right)^{2}\left(1+\delta^{(k)}\gamma^{(k)}\right)+\left(\delta^{(k)}\right)^{2}\phi^{(k)}(x_{i})\right] + \sum_{i=1}^{n}\log K_{1}\left(\alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_{i})}\right) (37)
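Putting the pieces together, one EM iteration can be sketched as follows. This is an illustrative reading of the scheme, not the authors' code: the posterior expectations implement (27)-(29), μ and β use the closed forms of Section 5.1, and γ and δ are obtained as positive roots of the quadratics (25)-(26) with the unobserved quantities replaced by their posterior expectations. The choice of root and the exact grouping of the coefficient terms are our assumptions.

```python
import numpy as np
from scipy.special import kv

def e_step(x, mu, beta, delta, gamma):
    """Posterior expectations (27)-(29) for each observation."""
    alpha = np.sqrt(beta ** 2 + gamma ** 2)
    phi = 1.0 + ((x - mu) / delta) ** 2
    arg = alpha * delta * np.sqrt(phi)
    k0, k1, k2, k3 = (kv(order, arg) for order in (0, 1, 2, 3))
    c = 1.0 + delta * gamma
    s = (alpha * delta * c * np.sqrt(phi) * k0
         + delta ** 3 * phi ** 1.5 / alpha * k2) / ((alpha ** 2 * c + delta ** 2 * phi) * k1)
    w = (alpha ** 3 * c * k2 + alpha * delta ** 2 * phi * k0) / (
        (alpha ** 2 * delta * c * np.sqrt(phi) + delta ** 3 * phi ** 1.5) * k1)
    v = (c * alpha ** 2 * delta ** 2 * phi * k1 + delta ** 4 * phi ** 2 * k3) / (
        (c * alpha ** 4 + alpha ** 2 * delta ** 2 * phi) * k1)
    return s, w, v

def positive_root(a, b, c):
    """The positive root of a*t**2 + b*t + c = 0 (assumed to exist)."""
    disc = np.sqrt(b ** 2 - 4.0 * a * c)
    roots = np.array([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return float(roots[roots > 0][0])

def em_iteration(x, mu, beta, delta, gamma):
    """One EM update of (mu, beta, delta, gamma)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s, w, v = e_step(x, mu, beta, delta, gamma)
    # M-step for the conditional part (Section 5.1)
    beta_new = (np.sum(x * w) - x.mean() * np.sum(w)) / (n - s.mean() * np.sum(w))
    mu_new = x.mean() - beta_new * s.mean()
    # M-step for the mixing part: quadratic (25) in gamma, then (26) in delta
    a1 = n * delta ** 2 / (1.0 + delta * gamma) - np.sum(s)
    b1 = np.sum(delta / (1.0 + delta * gamma + v))
    c1 = 3.0 * n * delta / (gamma ** 3 + delta)
    gamma_new = positive_root(a1, b1, c1)
    a2 = n * gamma_new ** 2 / (1.0 + delta * gamma_new) - np.sum(w)
    b2 = np.sum(gamma_new / (1.0 + delta * gamma_new + v))
    c2 = n * gamma_new ** 3 / (gamma_new ** 3 + delta)
    delta_new = positive_root(a2, b2, c2)
    return mu_new, beta_new, delta_new, gamma_new
```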

6. Application

Let (P_{t}) denote the price process of a security, in particular of a stock, at time t. In order to allow comparison of investments in different securities, we shall investigate the rates of return defined by

X_{t} = \log P_{t} - \log P_{t-1}
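In practice the log-returns are obtained by differencing the log prices; a one-line sketch (Python/NumPy, with a placeholder price vector in place of the actual weekly closing prices) is:

```python
import numpy as np

prices = np.array([1455.22, 1441.47, 1465.15, 1441.36])  # placeholder weekly closing prices
log_returns = np.diff(np.log(prices))                    # X_t = log P_t - log P_{t-1}
print(log_returns)
```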

The data used in this research are the S&P 500 weekly returns for the period 3/01/2000 to 1/07/2013, giving 702 observations. The histogram of the weekly log-returns shows that the data are negatively skewed and exhibit heavy tails. The Q-Q plot shows that the normal distribution is not a good fit for the data, especially at the tails (Figure 1).

Table 1 provides descriptive statistics for the return series under consideration. The excess kurtosis of 6.408709 indicates the leptokurtic behaviour of the returns; the log-returns have a distribution with relatively heavier tails than the normal distribution. The skewness of −0.7851156 indicates that the two tails of the returns behave slightly differently.

We now fit the proposed model to the S&P 500 weekly log-returns. Using the sample estimates and the NIG moment estimators, we obtain the following initial values for the EM algorithm (Table 2).

\hat{\alpha} = 0.6556607, \quad \hat{\beta} = 0.1257455, \quad \hat{\delta} = 0.8310044, \quad \hat{\mu} = 0.1690855.

Stopping Criterion

The stopping criterion is when

\frac{l^{(k)} - l^{(k-1)}}{l^{(k)}} < tol (38)

Table 1. Summary statistics for the S&P 500 weekly log-returns.

Figure 1. Histogram and Q-Q plot for the S&P 500 weekly log-returns.

Table 2. Maximum likelihood parameter estimates via the EM algorithm.

Figure 2. Fitting the proposed model to the S&P 500 weekly log-returns.

where tol is the chosen tolerance level, e.g., 10^{-6}.
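A sketch of the complete fitting loop with this stopping rule, reusing the em_iteration and nwig_loglik sketches given earlier, might look as follows; the relative-change form with absolute values is our reading of (38), and the maximum iteration count is an added safeguard.

```python
import numpy as np

def fit_em(x, mu, beta, delta, gamma, tol=1e-6, max_iter=500):
    """Iterate em_iteration until the relative change in the log-likelihood falls below tol."""
    x = np.asarray(x, dtype=float)
    ll_old = nwig_loglik(x, mu, beta, delta, gamma)
    for _ in range(max_iter):
        mu, beta, delta, gamma = em_iteration(x, mu, beta, delta, gamma)
        ll_new = nwig_loglik(x, mu, beta, delta, gamma)
        if abs(ll_new - ll_old) / abs(ll_new) < tol:
            break
        ll_old = ll_new
    return mu, beta, delta, gamma, ll_new
```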

The proposed model fits the data set well as illustrated in Figure 2. Expressing the proposed model in terms of its components we have

f(x) = \frac{\gamma^{3}}{\gamma^{3}+\delta}\times \mathrm{NIG} + \frac{\delta}{\gamma^{3}+\delta}\times \mathrm{GHD}\left(\tfrac{3}{2},\alpha,\delta,\beta,\mu\right) (39)

Using the parameter estimates, p = 0.242793. Therefore, the finite mixture for this data set gives more weight to the GHD component with λ = 3/2 than to the NIG component.

7. Conclusions

In this work, we have considered a normal variance-mean mixture in which the mixing distribution is a weighted inverse Gaussian distribution. Specifically, we have shown that a finite mixture of two special cases of the generalised inverse Gaussian distribution, with indexes -1/2 and 3/2, is itself a weighted inverse Gaussian distribution.

Further, we have constructed a Normal Weighted Inverse Gaussian distribution, studied its properties and estimated the parameters using the Expectation-Maximization (EM) algorithm. The initial values were based on Karlis's [18] method-of-moments estimates for the NIG distribution. We obtained monotonic convergence of the proposed iterative scheme. The model is a good alternative to the NIG distribution.

Acknowledgements

The authors gratefully acknowledge the financial support from Kisii University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Jorgensen, B., Seshadri, V. and Whitmore, G.A. (1991) On the Mixture of the Inverse Gaussian Distribution with Its Complementary Reciprocal. Scandinavian Journal of Statistics, 18, 77-89.
[2] Akman, H.O. and Gupta, R.C. (1992) A Comparison of Various Estimators of the Mean of an Inverse Gaussian Distribution. Journal of Statistical Computation and Simulation, 40, 71-81.
https://doi.org/10.1080/00949659208811366
[3] Gupta, R.C. and Akman, O. (1995) Estimation of Critical Points in the Mixture inverse Gaussian Model. Statistical Papers, 38, 445-452.
https://doi.org/10.1007/BF02925999
[4] Gupta, R.C. and Kundu, D. (2011) Weighted Inverse Gaussian—A Versatile Lifetime Model. Journal of Applied Statistics, 38, 2695-2708.
https://doi.org/10.1080/02664763.2011.567251
[5] Lindley, D.V. (1958) Fiducial Distributions and Bayes Theorem. Journal of the Royal Statistical Society: Series B, 20, 102-107.
https://doi.org/10.1111/j.2517-6161.1958.tb00278.x
[6] Ghitany, M.E., Atieh, B. and Nadarajah, S. (2008) Lindley Distribution and Its Application. Mathematics and Computers in Simulation, 78, 493-506.
https://doi.org/10.1016/j.matcom.2007.06.007
[7] Sankaran, M. (1970) 275. Note: The Discrete Poisson-Lindley Distribution. Biometrics, 26, 145-149.
https://doi.org/10.2307/2529053
[8] Shanker, R. and Hogos, F. (2015) On Poisson Lindley Distribution and Its Applications to Biological Sciences. Biometrics and Biostatistics International Journal, 2, 103-107.
https://doi.org/10.15406/bbij.2015.02.00036
[9] Fisher, R.A. (1934) The Effect of Methods of Ascertainment upon the Estimation of Frequencies. Annals of Eugenics, 6, 13-25.
https://doi.org/10.1111/j.1469-1809.1934.tb02105.x
[10] Patil, G.P. and Rao, C.R. (1978) Weighted Distributions and Size-Biased Sampling with Applications to Wildlife Populations and Human Families. Biometrics, 34, 179-189.
https://doi.org/10.2307/2530008
[11] Dempster, A.P., Laird, N.M. and Rubin, D. (1977) Maximum Likelihood from Incomplete Data Using the EM Algorithm. Journal of the Royal Statistical Society: Series B, 39, 1-22.
https://doi.org/10.1111/j.2517-6161.1977.tb01600.x
[12] Abramowitz, M. and Stegun, I.A. (1972) Handbook of Mathematical Functions. Dover, New York.
[13] Barndorff-Nielsen, O.E. (1977) Exponentially Decreasing Distributions for the Logarithm of Particle Size. Proceedings of the Royal Society A, 353, 409-419.
https://doi.org/10.1098/rspa.1977.0041
[14] Eberlein, E. and Keller, U. (1995) Hyberbolic Distributions in Finance. Bernoulli, 1, 281-299.
https://doi.org/10.2307/3318481
[15] Barndorff-Nielsen, O.E. (1997) Normal Inverse Gaussian Distribution and Stochastic Volatility Modelling. Scandinavian Journal of Statistics, 24, 1-13.
https://doi.org/10.1111/1467-9469.00045
[16] Aas, K. and Haff, H. (2006) The Generalised Hyperbolic Skew Student’s T-Distribution. Journal of Financial Econometrics, 4, 275-309.
https://doi.org/10.1093/jjfinec/nbj006
[17] Kostas, F. (2007) Tests of Fit for Symmetric Variance Gamma Distributions. UADPhilEcon, National and Kapodistrian University of Athens, Athens.
[18] Karlis, D. (2002) An EM Type Algorithm for Maximum Likelihood Estimation of the Normal-Inverse Gaussian Distribution. Statistics and Probability Letters, 57, 43-52.
https://doi.org/10.1016/S0167-7152(02)00040-8
