A Special Weight for Inverse Gaussian Mixing Distribution in Normal Variance Mean Mixture with Application

Abstract

Normal Variance-Mean Mixtures (NVMM) provide a general framework for deriving models with desirable properties for modelling financial market variables such as exchange rates, equity prices, and interest rates measured over short time intervals, i.e., daily or weekly. Such data sets are characterized by non-normality: they are usually skewed, fat-tailed and exhibit excess kurtosis. The Generalised Hyperbolic Distribution (GHD) introduced by Barndorff-Nielsen (1977), which arises as a normal variance-mean mixture with a Generalised Inverse Gaussian (GIG) mixing distribution, nests a number of special and limiting case distributions. The Normal Inverse Gaussian (NIG) distribution is obtained when the Inverse Gaussian is the mixing distribution, i.e., when the index parameter of the GIG is $-\frac{1}{2}$. The NIG is very popular because of its analytical tractability. In the mixing mechanism, the mixing distribution characterizes the prior information on the random variable of the conditional distribution. Therefore, considering finite mixture models is one way of extending the work. The GIG is a three-parameter distribution denoted by $GIG(\lambda, \delta, \gamma)$ and nests several special and limiting cases. When $\lambda = -\frac{1}{2}$, we have $GIG(-\frac{1}{2}, \delta, \gamma)$, which is the Inverse Gaussian (IG) distribution. When $\lambda = \frac{1}{2}$ and $\lambda = \frac{3}{2}$, we have the $GIG(\frac{1}{2}, \delta, \gamma)$ and $GIG(\frac{3}{2}, \delta, \gamma)$ distributions, respectively; these are related to the IG and are called weighted inverse Gaussian distributions. In this work, we consider a finite mixture of $GIG(\frac{1}{2}, \delta, \gamma)$ and $GIG(\frac{3}{2}, \delta, \gamma)$, show that the mixture is also a weighted Inverse Gaussian distribution, and use it to construct an NVMM. Due to the complexity of the likelihood, direct maximization is difficult, so an EM-type algorithm is provided for maximum likelihood estimation of the parameters of the proposed model. We adopt an iterative scheme which is not based on explicit solutions to the normal equations; this subtle approach reduces the computational difficulty from solving the complicated quantities involved directly to designing an iterative scheme based on a representation of the normal equations. The algorithm is easily programmable, and we obtained monotonic convergence for the data sets used.


1. Introduction

The Normal Inverse Gaussian (NIG) distribution introduced by Barndorff-Nielsen [1] is the most popular normal variance-mean mixture with an inverse Gaussian mixing distribution, and it has been widely used to handle non-normally distributed data.

Efforts have been made, and continue to be made, to identify alternatives to the NIG distribution: Aas and Haff [2] [3] considered the skew Student's t distribution as an alternative, and Corlu and Corlu [4] used the Generalized Lambda distribution, a generalization of Tukey's Lambda.

Finite mixtures are more flexible (robust) than single mixing distributions, a result stated widely in the statistical literature.

Very few studies have used finite mixtures as mixing distributions; hence the attempt made in this paper. The idea is motivated by the fact that finite mixtures are more flexible than single distributions. Nadarajah, Zhang and Chan [5] note that finite mixtures of normal distributions are more flexible than a single normal distribution, finite mixtures of stable distributions are more flexible than a single stable distribution, and finite mixtures of Student's t distributions are more flexible than a single Student's t distribution.

The second motivation is that very few studies on continuous mixtures have used finite mixtures as mixing distributions, with the exception of the Lindley [6] distribution and its generalizations in Poisson mixtures (e.g., Sankaran [7]; Mahmoudi and Zakerzadeh [8]). The Lindley distribution and its generalizations are basically finite gamma mixtures.

The GIG is a three-parameter distribution denoted by $GIG(\lambda, \delta, \gamma)$. When $\lambda = -\frac{1}{2}$, the mixing distribution is $GIG(-\frac{1}{2}, \delta, \gamma)$, which is Inverse Gaussian, and the mixture is the Normal Inverse Gaussian (NIG). In the present work, we consider two other special cases: $GIG(\frac{1}{2}, \delta, \gamma)$ and $GIG(\frac{3}{2}, \delta, \gamma)$. A finite mixture of these cases is shown to be a weighted Inverse Gaussian distribution. The concept of a weighted distribution was introduced by Fisher [9] and elaborated by Patil and Rao [10]. The reciprocal Inverse Gaussian, i.e., $GIG(\frac{1}{2}, \delta, \gamma)$, and the finite mixture of $GIG(-\frac{1}{2}, \delta, \gamma)$ and $GIG(\frac{1}{2}, \delta, \gamma)$ were shown to be weighted inverse Gaussian distributions by Gupta and Kundu [11].

The finite mixture model is used as a mixing distribution in Normal Variance Mean mixture to construct the proposed model. The maximum likelihood parameter estimates of the mixture are obtained via the Expectation-Maximization (EM) algorithm. For data analysis we have used three datasets: s&p500 index, Range Resource Corporation (RRC) and Shares of Chevron Corporation (CVX). We consider log returns for the period 3/01/2000 to 1/07/2013 with 702 observations for each dataset.

2. Weighted Inverse Gaussian Distribution

Consider a random variable Z with a probability distribution $f(z)$. Let $w(Z)$ be a function of Z. We can construct a new probability distribution

$$g(z) = \frac{w(z)}{E[w(Z)]}\, f(z), \qquad -\infty < z < \infty \tag{1}$$

which is a weighted distribution of f ( z ) . The concept of weighted distribution was introduced by Fisher [9] and elaborated by Patil and Rao [10].

In this work we consider weighted distribution for the Inverse Gaussian (IG) distribution.

Now, suppose $Z \sim IG(\gamma, \delta)$, the Inverse Gaussian distribution with parameters $\gamma$ and $\delta$ and probability density function given by

$$f(z) = \frac{\delta}{\sqrt{2\pi}}\, e^{\delta\gamma}\, z^{-3/2} \exp\!\left(-\frac{1}{2}\left(\frac{\delta^2}{z} + \gamma^2 z\right)\right), \qquad z > 0 \tag{2}$$

Let

$$w(Z) = Z + \frac{Z^2}{1+\delta\gamma}$$

so that

$$E[w(Z)] = \frac{\delta(\gamma^2+1)}{\gamma^3}$$

and

$$g(z) = \frac{\gamma^3}{\delta(\gamma^2+1)}\left[z + \frac{z^2}{1+\delta\gamma}\right] f(z) \tag{3}$$

which is also a finite mixture of $GIG(\frac{1}{2}, \delta, \gamma)$ and $GIG(\frac{3}{2}, \delta, \gamma)$. That is,

$$g(z) = p \cdot GIG\!\left(\tfrac{1}{2}, \delta, \gamma\right) + (1-p) \cdot GIG\!\left(\tfrac{3}{2}, \delta, \gamma\right)$$

with

$$p = \frac{\gamma^2}{\gamma^2+1}$$
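As a numerical sanity check, the identity between the weighted density (3) and this two-component mixture can be verified directly. The following is a minimal Python sketch, assuming the $GIG(\lambda, \delta, \gamma)$ density $\frac{(\gamma/\delta)^{\lambda}}{2K_{\lambda}(\delta\gamma)}\, z^{\lambda-1} e^{-\frac{1}{2}(\delta^{2}/z+\gamma^{2}z)}$ and arbitrary test values:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind

def gig_pdf(z, lam, delta, gamma):
    # GIG(lambda, delta, gamma) density in the (delta, gamma) parameterisation
    c = (gamma / delta) ** lam / (2.0 * kv(lam, delta * gamma))
    return c * z ** (lam - 1.0) * np.exp(-0.5 * (delta**2 / z + gamma**2 * z))

def ig_pdf(z, delta, gamma):
    # Inverse Gaussian density, formula (2)
    return (delta / np.sqrt(2.0 * np.pi)) * np.exp(delta * gamma) \
        * z ** -1.5 * np.exp(-0.5 * (delta**2 / z + gamma**2 * z))

delta, gamma = 1.3, 0.8            # arbitrary test values
p = gamma**2 / (gamma**2 + 1.0)    # mixing weight
Ew = delta * (gamma**2 + 1.0) / gamma**3
z = np.linspace(0.05, 20.0, 200)
lhs = (z + z**2 / (1.0 + delta * gamma)) / Ew * ig_pdf(z, delta, gamma)  # formula (3)
rhs = p * gig_pdf(z, 0.5, delta, gamma) + (1.0 - p) * gig_pdf(z, 1.5, delta, gamma)
print(np.allclose(lhs, rhs))       # True: (3) is the stated finite mixture
```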

The mean and variance for the weighted distribution are

$$E[Z] = \frac{\gamma^2(1+\delta\gamma)^2 + \delta^2\gamma^2 + 3\delta\gamma + 3}{\gamma^2(1+\delta\gamma)(\gamma^2+1)} \tag{4}$$

$$\operatorname{var}(Z) = \frac{\left[(\delta^2\gamma^2+3\delta\gamma+3)(1+\delta\gamma)\gamma^2 + \delta^3\gamma^3+6\delta^2\gamma^2+15\delta\gamma+15\right](1+\delta\gamma)(\gamma^2+1) - \left[\gamma^2(1+\delta\gamma)^2+\delta^2\gamma^2+3\delta\gamma+3\right]^2}{\gamma^4(1+\delta\gamma)^2(\gamma^2+1)^2} \tag{5}$$
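The moment expressions (4) and (5) can likewise be checked against numerical integration of $g(z)$; a minimal sketch with the same arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad

delta, gamma = 1.3, 0.8
dg = delta * gamma

def g(z):
    # weighted inverse Gaussian density, formula (3)
    f_ig = (delta / np.sqrt(2.0 * np.pi)) * np.exp(dg) \
        * z ** -1.5 * np.exp(-0.5 * (delta**2 / z + gamma**2 * z))
    Ew = delta * (gamma**2 + 1.0) / gamma**3
    return (z + z**2 / (1.0 + dg)) / Ew * f_ig

m1 = quad(lambda z: z * g(z), 0.0, np.inf)[0]       # numerical E[Z]
m2 = quad(lambda z: z**2 * g(z), 0.0, np.inf)[0]    # numerical E[Z^2]

mean_4 = (gamma**2 * (1 + dg)**2 + dg**2 + 3*dg + 3) \
    / (gamma**2 * (1 + dg) * (gamma**2 + 1))        # formula (4)
var_5 = (((dg**2 + 3*dg + 3) * (1 + dg) * gamma**2 + dg**3 + 6*dg**2 + 15*dg + 15)
         * (1 + dg) * (gamma**2 + 1)
         - (gamma**2 * (1 + dg)**2 + dg**2 + 3*dg + 3)**2) \
    / (gamma**4 * (1 + dg)**2 * (gamma**2 + 1)**2)  # formula (5)

print(np.isclose(m1, mean_4), np.isclose(m2 - m1**2, var_5))   # True True
```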

3. Normal Variance-Mean Mixture

A stochastic representation of a Normal Variance-Mean mixture is given by

$$X = \mu + \beta Z + \sqrt{Z}\, Y$$

where

$$Y \sim N(0,1)$$

which gives the hierarchical representation

$$X \mid Z = z \sim N(\mu + \beta z,\, z) \tag{6}$$

being the conditional distribution, with $g(z)$ the mixing distribution.

Construction of the Mixed Model

Suppose the mixing distribution is given by formula (3). Writing $\alpha^2 = \beta^2 + \gamma^2$ and $\phi(x) = 1 + \left(\frac{x-\mu}{\delta}\right)^2$, so that $\delta^2\phi(x) = \delta^2 + (x-\mu)^2$, and noting that

$$\frac{w(z)}{E[w(Z)]} = \frac{\gamma^3}{\delta(\gamma^2+1)}\left(z + \frac{z^2}{1+\delta\gamma}\right) \tag{7}$$

the mixed model becomes

$$\begin{aligned}
f(x) &= \frac{\delta e^{\delta\gamma}}{2\pi}\, e^{\beta(x-\mu)} \int_0^\infty \frac{w(z)}{E[w(Z)]}\, z^{-2}\, e^{-\frac{1}{2}\left(\alpha^2 z + \frac{\delta^2\phi(x)}{z}\right)}\, dz \\
&= \frac{\delta e^{\delta\gamma}}{2\pi}\, e^{\beta(x-\mu)}\, \frac{\gamma^3}{\delta(\gamma^2+1)} \int_0^\infty \left(z + \frac{z^2}{1+\delta\gamma}\right) z^{-2}\, e^{-\frac{1}{2}\left(\alpha^2 z + \frac{\delta^2\phi(x)}{z}\right)}\, dz \\
&= \frac{\gamma^3 e^{\delta\gamma}}{2\pi(\gamma^2+1)}\, e^{\beta(x-\mu)} \int_0^\infty \left(z^{0-1} + \frac{z^{1-1}}{1+\delta\gamma}\right) e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz
\end{aligned}$$

$$\begin{aligned}
f(x) &= \frac{\gamma^3 e^{\delta\gamma} e^{\beta(x-\mu)}}{\pi(\gamma^2+1)} \left\{ K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \frac{\delta\sqrt{\phi(x)}}{\alpha(1+\delta\gamma)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) \right\} \\
&= \frac{\gamma^3 e^{\delta\gamma} e^{\beta(x-\mu)}}{\alpha\pi(1+\delta\gamma)(\gamma^2+1)} \left\{ \alpha(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \delta\sqrt{\phi(x)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) \right\}
\end{aligned} \tag{8}$$
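Formula (8) is straightforward to implement. The sketch below (the name nwig_pdf is ours) also confirms numerically that (8) integrates to one for illustrative parameter values, using $\phi(x)$ as defined above:

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def nwig_pdf(x, mu, beta, delta, gamma):
    # proposed mixed density, formula (8)
    alpha = np.sqrt(beta**2 + gamma**2)
    sphi = np.sqrt(1.0 + ((x - mu) / delta)**2)    # sqrt(phi(x))
    u = alpha * delta * sphi                       # Bessel argument
    c = gamma**3 * np.exp(delta * gamma + beta * (x - mu)) \
        / (alpha * np.pi * (1.0 + delta * gamma) * (gamma**2 + 1.0))
    return c * (alpha * (1.0 + delta * gamma) * kv(0, u) + delta * sphi * kv(1, u))

# the density integrates to 1 for admissible parameters (illustrative values)
total = quad(nwig_pdf, -np.inf, np.inf, args=(0.0, 0.5, 1.3, 0.8))[0]
print(round(total, 6))   # ~1.0
```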

With the following properties

$$E(X) = \mu + \beta\, \frac{\gamma^2(1+\delta\gamma)^2 + \delta^2\gamma^2 + 3\delta\gamma + 3}{\gamma^2(1+\delta\gamma)(\gamma^2+1)} \tag{9}$$

$$\operatorname{var}(X) = \frac{\gamma^2(1+\delta\gamma)^2+\delta^2\gamma^2+3\delta\gamma+3}{\gamma^2(1+\delta\gamma)(\gamma^2+1)} + \beta^2\, \frac{\left[(\delta^2\gamma^2+3\delta\gamma+3)(1+\delta\gamma)\gamma^2 + \delta^3\gamma^3+6\delta^2\gamma^2+15\delta\gamma+15\right](1+\delta\gamma)(\gamma^2+1) - \left[\gamma^2(1+\delta\gamma)^2+\delta^2\gamma^2+3\delta\gamma+3\right]^2}{\gamma^4(1+\delta\gamma)^2(\gamma^2+1)^2} \tag{10}$$

Note

$K_\lambda(\omega)$ denotes the modified Bessel function of the third kind with index $\lambda$ evaluated at $\omega$:

$$K_\lambda(\omega) = \frac{1}{2}\int_0^\infty x^{\lambda-1}\, e^{-\frac{\omega}{2}\left(x + \frac{1}{x}\right)}\, dx \tag{11}$$

with the following properties

a) $K_{\frac{1}{2}}(\omega) = K_{-\frac{1}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}$ (12)

b) $K_{\frac{3}{2}}(\omega) = K_{-\frac{3}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}\left(1 + \dfrac{1}{\omega}\right)$ (13)

c) $K_{\frac{5}{2}}(\omega) = K_{-\frac{5}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}\left(1 + \dfrac{3}{\omega} + \dfrac{3}{\omega^2}\right)$ (14)

d) $K_{\frac{7}{2}}(\omega) = K_{-\frac{7}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}\left(1 + \dfrac{6}{\omega} + \dfrac{15}{\omega^2} + \dfrac{15}{\omega^3}\right)$ (15)

e) $K_{\frac{9}{2}}(\omega) = K_{-\frac{9}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}\left(1 + \dfrac{10}{\omega} + \dfrac{45}{\omega^2} + \dfrac{105}{\omega^3} + \dfrac{105}{\omega^4}\right)$ (16)

f) $K_{\frac{11}{2}}(\omega) = K_{-\frac{11}{2}}(\omega) = \sqrt{\dfrac{\pi}{2\omega}}\, e^{-\omega}\left(1 + \dfrac{15}{\omega} + \dfrac{105}{\omega^2} + \dfrac{420}{\omega^3} + \dfrac{945}{\omega^4} + \dfrac{945}{\omega^5}\right)$ (17)

which are necessary in deriving the properties and estimates of the proposed model. For further definitions and properties, see Abramowitz and Stegun [12].
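These half-integer identities are easy to verify against a library implementation; a minimal check of (12)-(14) and of the symmetry $K_{-\lambda} = K_{\lambda}$:

```python
import numpy as np
from scipy.special import kv

w = 2.7   # arbitrary test point
base = np.sqrt(np.pi / (2.0 * w)) * np.exp(-w)
assert np.isclose(kv(0.5, w), base)                        # (12)
assert np.isclose(kv(1.5, w), base * (1 + 1/w))            # (13)
assert np.isclose(kv(2.5, w), base * (1 + 3/w + 3/w**2))   # (14)
assert np.isclose(kv(-0.5, w), kv(0.5, w))                 # K_{-lambda} = K_lambda
```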

4. Parameter Estimation

Given a random sample of size n from our proposed model the log-likelihood function is given by

$$\begin{aligned}
\ell = \log L = \sum_{i=1}^n \log f(x_i) &= \sum_{i=1}^n \left\{ 3\log\gamma + \delta\gamma + \beta(x_i - \mu) - \log\!\left(\alpha\pi(\gamma^2+1)(1+\delta\gamma)\right) \right. \\
&\qquad \left. +\, \log\!\left[\alpha(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x_i)}\right) + \delta\sqrt{\phi(x_i)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x_i)}\right)\right]\right\} \\
&= 3n\log\gamma + n\delta\gamma + \beta\sum_{i=1}^n x_i - n\beta\mu - n\log\!\left(\alpha\pi(\gamma^2+1)(1+\delta\gamma)\right) \\
&\qquad +\, \sum_{i=1}^n \log\!\left[\alpha(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x_i)}\right) + \delta\sqrt{\phi(x_i)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x_i)}\right)\right]
\end{aligned} \tag{18}$$
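For later computations it is convenient to evaluate (18) directly. Below is a minimal sketch assuming scipy is available; the function name loglik is ours. (For very large Bessel arguments the scaled scipy.special.kve would be numerically safer.)

```python
import numpy as np
from scipy.special import kv

def loglik(x, mu, beta, delta, gamma):
    # log-likelihood (18) for a sample x (numpy array)
    alpha = np.sqrt(beta**2 + gamma**2)
    sphi = np.sqrt(1.0 + ((x - mu) / delta)**2)    # sqrt(phi(x_i))
    u = alpha * delta * sphi
    n = len(x)
    return (3*n*np.log(gamma) + n*delta*gamma + beta*np.sum(x - mu)
            - n*np.log(alpha*np.pi*(gamma**2 + 1)*(1 + delta*gamma))
            + np.sum(np.log(alpha*(1 + delta*gamma)*kv(0, u)
                            + delta*sphi*kv(1, u))))
```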

The derivatives of the log-likelihood function involve the Bessel function, and the parameters are difficult to separate, so direct maximization is not easy. In this work we adopt the Expectation-Maximization (EM) algorithm. The EM algorithm, introduced by Dempster et al. [13], is a powerful technique for maximum likelihood estimation when the data contain missing values or can be treated as containing missing values.

Karlis [14] treated the mixing operation as responsible for producing the missing values and applied the algorithm to estimate the parameters of the Normal Inverse Gaussian distribution. His iterative scheme was based on explicit solutions to the normal equations. In the present work the iterative scheme is instead based on a representation of the normal equations; this subtle approach overcomes the computational difficulty of solving the complicated quantities directly.

Assume that the complete data consist of an observed part X and an unobserved part Z. The log-likelihood of the complete data $(x_i, z_i)$, $i = 1, 2, \ldots, n$, then factorizes into two parts:

$$\log L = \log\prod_{i=1}^n f(x_i \mid z_i) + \log\prod_{i=1}^n g(z_i) = \sum_{i=1}^n \log f(x_i \mid z_i) + \sum_{i=1}^n \log g(z_i) = \ell_1 + \ell_2$$

where

$$\ell_1 = \sum_{i=1}^n \log f(x_i \mid z_i) \qquad \text{and} \qquad \ell_2 = \sum_{i=1}^n \log g(z_i)$$

4.1. M-Step for Conditional Distribution

Since the conditional distribution is normal, as presented in formula (6), we have

$$\ell_1 = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{i=1}^n \log z_i - \sum_{i=1}^n \frac{(x_i - \mu - \beta z_i)^2}{2 z_i}$$

Therefore

$$\frac{\partial \ell_1}{\partial \beta} = 0 \;\Rightarrow\; \sum_{i=1}^n \left(x_i - \hat{\mu} - \hat{\beta} z_i\right) = 0$$

i.e., $\sum_{i=1}^n x_i - n\hat{\mu} - \hat{\beta}\sum_{i=1}^n z_i = 0$, so that

$$\hat{\mu} = \bar{x} - \hat{\beta}\bar{z}$$

where $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$ and $\bar{z} = \frac{1}{n}\sum_{i=1}^n z_i$.

Similarly,

$$\frac{\partial \ell_1}{\partial \mu} = 0 \;\Rightarrow\; \sum_{i=1}^n \frac{x_i}{z_i} - \hat{\mu}\sum_{i=1}^n \frac{1}{z_i} - n\hat{\beta} = 0$$

Substituting $\hat{\mu} = \bar{x} - \hat{\beta}\bar{z}$,

$$\sum_{i=1}^n \frac{x_i}{z_i} - \bar{x}\sum_{i=1}^n \frac{1}{z_i} + \hat{\beta}\bar{z}\sum_{i=1}^n \frac{1}{z_i} - n\hat{\beta} = 0$$

$$\hat{\beta} = \frac{\sum_{i=1}^n \frac{x_i}{z_i} - \bar{x}\sum_{i=1}^n \frac{1}{z_i}}{n - \bar{z}\sum_{i=1}^n \frac{1}{z_i}}$$
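In code these two closed-form updates are immediate; a small sketch (the name m_step_mu_beta is ours), with the $z_i$ later replaced by their posterior expectations from Section 4.2:

```python
import numpy as np

def m_step_mu_beta(x, z):
    # closed-form M-step for (mu, beta), given the latent z_i
    xbar, zbar = x.mean(), z.mean()
    beta_hat = (np.sum(x / z) - xbar * np.sum(1.0 / z)) \
        / (len(x) - zbar * np.sum(1.0 / z))
    mu_hat = xbar - beta_hat * zbar
    return mu_hat, beta_hat
```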

4.2. M-Step for the Mixing Distribution

$$\ell_2 = 3n\log\gamma + n\delta\gamma - \frac{n}{2}\log(2\pi) - n\log(\gamma^2+1) - n\log(1+\delta\gamma) - \frac{1}{2}\sum_{i=1}^n \log z_i + \sum_{i=1}^n \log(1+\delta\gamma+z_i) - \frac{\gamma^2}{2}\sum_{i=1}^n z_i - \frac{\delta^2}{2}\sum_{i=1}^n \frac{1}{z_i} \tag{19}$$

Maximizing with respect to $\delta$ and $\gamma$, we obtain the following representations of the normal equations:

$$\frac{3n + 2n\delta\gamma}{\gamma(1+\delta\gamma)} - \gamma\left(\frac{2n}{\gamma^2+1} + \sum_{i=1}^n z_i\right) + \delta\left(n + \sum_{i=1}^n \frac{1}{1+\delta\gamma+z_i}\right) = 0 \tag{20}$$

$$\frac{n\delta\gamma^2}{1+\delta\gamma} + \gamma\sum_{i=1}^n \frac{1}{1+\delta\gamma+z_i} - \delta\sum_{i=1}^n \frac{1}{z_i} = 0 \tag{21}$$

We therefore need to estimate $Z_i$, $\frac{1}{Z_i}$ and $Z_i^2$ by their posterior expectations, as follows.

Posterior Expectation

$$E(Z \mid X = x) = \frac{\int_0^\infty z\left(z + \frac{z^2}{1+\delta\gamma}\right) z^{-2}\, e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz}{\int_0^\infty \left(z + \frac{z^2}{1+\delta\gamma}\right) z^{-2}\, e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz} = \frac{\int_0^\infty \left(z^{1-1} + \frac{z^{2-1}}{1+\delta\gamma}\right) e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz}{\int_0^\infty \left(z^{0-1} + \frac{z^{1-1}}{1+\delta\gamma}\right) e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz}$$

Evaluating each integral via the Bessel integral (11),

$$E(Z \mid X = x) = \frac{\frac{\delta\sqrt{\phi(x)}}{\alpha}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \frac{1}{1+\delta\gamma}\left(\frac{\delta\sqrt{\phi(x)}}{\alpha}\right)^{2} K_2\!\left(\alpha\delta\sqrt{\phi(x)}\right)}{K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \frac{\delta\sqrt{\phi(x)}}{\alpha(1+\delta\gamma)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right)} = \frac{\alpha\delta\sqrt{\phi(x)}\,(1+\delta\gamma)\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \delta^{2}\phi(x)\, K_2\!\left(\alpha\delta\sqrt{\phi(x)}\right)}{\alpha^{2}(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \alpha\delta\sqrt{\phi(x)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right)} \tag{22}$$

Similarly,

$$E\!\left(\frac{1}{Z} \,\Big|\, X = x\right) = \frac{\int_0^\infty \left(z^{-1-1} + \frac{z^{0-1}}{1+\delta\gamma}\right) e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz}{\int_0^\infty \left(z^{0-1} + \frac{z^{1-1}}{1+\delta\gamma}\right) e^{-\frac{\alpha^2}{2}\left(z + \frac{\delta^2\phi(x)}{\alpha^2 z}\right)}\, dz} = \frac{\left(\frac{\delta\sqrt{\phi(x)}}{\alpha}\right)^{-1} K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \frac{1}{1+\delta\gamma}\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right)}{K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \frac{\delta\sqrt{\phi(x)}}{\alpha(1+\delta\gamma)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right)}$$

$$E\!\left(\frac{1}{Z} \,\Big|\, X = x\right) = \frac{\alpha^{2}(1+\delta\gamma)\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \alpha\delta\sqrt{\phi(x)}\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right)}{\alpha\delta\sqrt{\phi(x)}\,(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \delta^{2}\phi(x)\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right)} \tag{23}$$

Similarly

$$E(Z^2 \mid X = x) = \frac{\alpha(1+\delta\gamma)\,\delta^{2}\phi(x)\, K_2\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \left(\delta\sqrt{\phi(x)}\right)^{3} K_3\!\left(\alpha\delta\sqrt{\phi(x)}\right)}{\alpha^{3}(1+\delta\gamma)\, K_0\!\left(\alpha\delta\sqrt{\phi(x)}\right) + \alpha^{2}\delta\sqrt{\phi(x)}\, K_1\!\left(\alpha\delta\sqrt{\phi(x)}\right)} \tag{24}$$

Now let $s_i = E(Z_i \mid X_i)$, $w_i = E\!\left(\frac{1}{Z_i} \mid X_i\right)$ and $v_i = E(Z_i^2 \mid X_i)$, and define

$$\bar{s} = \frac{1}{n}\sum_{i=1}^n s_i, \qquad \bar{w} = \frac{1}{n}\sum_{i=1}^n w_i$$

At the $(k+1)$-th iteration, the E-step therefore computes:

$$s_i^{(k+1)} = \frac{\alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\,\big(1+\delta^{(k)}\gamma^{(k)}\big)\, K_1\big(u_i^{(k)}\big) + \big(\delta^{(k)}\big)^{2}\phi^{(k)}(x_i)\, K_2\big(u_i^{(k)}\big)}{\big(\alpha^{(k)}\big)^{2}\big(1+\delta^{(k)}\gamma^{(k)}\big)\, K_0\big(u_i^{(k)}\big) + \alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\, K_1\big(u_i^{(k)}\big)}$$

$$w_i^{(k+1)} = \frac{\big(\alpha^{(k)}\big)^{2}\big(1+\delta^{(k)}\gamma^{(k)}\big)\, K_1\big(u_i^{(k)}\big) + \alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\, K_0\big(u_i^{(k)}\big)}{\alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\,\big(1+\delta^{(k)}\gamma^{(k)}\big)\, K_0\big(u_i^{(k)}\big) + \big(\delta^{(k)}\big)^{2}\phi^{(k)}(x_i)\, K_1\big(u_i^{(k)}\big)}$$

$$v_i^{(k+1)} = \frac{\alpha^{(k)}\big(1+\delta^{(k)}\gamma^{(k)}\big)\big(\delta^{(k)}\big)^{2}\phi^{(k)}(x_i)\, K_2\big(u_i^{(k)}\big) + \big(\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\big)^{3} K_3\big(u_i^{(k)}\big)}{\big(\alpha^{(k)}\big)^{3}\big(1+\delta^{(k)}\gamma^{(k)}\big)\, K_0\big(u_i^{(k)}\big) + \big(\alpha^{(k)}\big)^{2}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}\, K_1\big(u_i^{(k)}\big)}$$

where $u_i^{(k)} = \alpha^{(k)}\delta^{(k)}\sqrt{\phi^{(k)}(x_i)}$ and $\phi^{(k)}(x_i) = 1 + \left(\frac{x_i - \mu^{(k)}}{\delta^{(k)}}\right)^2$.

These are then used to obtain the $(k+1)$-th parameter values as follows:

$$\hat{\gamma}^{(k+1)} = \frac{\dfrac{3 + 2\delta^{(k)}\gamma^{(k)}}{\gamma^{(k)}\big(1+\delta^{(k)}\gamma^{(k)}\big)} + \delta^{(k)}\left[1 + \dfrac{1}{n}\displaystyle\sum_{i=1}^n \dfrac{1}{1+\delta^{(k)}\gamma^{(k)}+s_i^{(k+1)}}\right]}{\dfrac{2}{\big(\gamma^{(k)}\big)^2+1} + \bar{s}^{(k+1)}} \tag{25}$$

$$\hat{\delta}^{(k+1)} = \frac{\dfrac{n\delta^{(k)}\big(\gamma^{(k+1)}\big)^2}{1+\delta^{(k)}\gamma^{(k+1)}} + \gamma^{(k+1)}\displaystyle\sum_{i=1}^n \dfrac{1}{1+\delta^{(k)}\gamma^{(k+1)}+s_i^{(k+1)}}}{n\,\bar{w}^{(k+1)}} \tag{26}$$

$$\hat{\beta}^{(k+1)} = \frac{\displaystyle\sum_{i=1}^n x_i w_i^{(k+1)} - \bar{x}\sum_{i=1}^n w_i^{(k+1)}}{n - \bar{s}^{(k+1)}\displaystyle\sum_{i=1}^n w_i^{(k+1)}} \tag{27}$$

$$\hat{\mu}^{(k+1)} = \bar{x} - \hat{\beta}^{(k+1)}\,\bar{s}^{(k+1)} \tag{28}$$

$$\hat{\alpha}^{(k+1)} = \sqrt{\big(\hat{\beta}^{(k+1)}\big)^2 + \big(\hat{\gamma}^{(k+1)}\big)^2} \tag{29}$$

The ( k + 1 ) -th iteration of the log-likelihood function becomes

$$\begin{aligned}
\ell^{(k+1)} &= 3n\log\gamma^{(k+1)} + n\delta^{(k+1)}\gamma^{(k+1)} + \beta^{(k+1)}\sum_{i=1}^n x_i - n\beta^{(k+1)}\mu^{(k+1)} - n\log\!\left(\alpha^{(k+1)}\pi\big(\big(\gamma^{(k+1)}\big)^2+1\big)\big(1+\delta^{(k+1)}\gamma^{(k+1)}\big)\right) \\
&\qquad + \sum_{i=1}^n \log\!\left[\alpha^{(k+1)}\big(1+\delta^{(k+1)}\gamma^{(k+1)}\big)\, K_0\!\left(\alpha^{(k+1)}\delta^{(k+1)}\sqrt{\phi^{(k+1)}(x_i)}\right) + \delta^{(k+1)}\sqrt{\phi^{(k+1)}(x_i)}\, K_1\!\left(\alpha^{(k+1)}\delta^{(k+1)}\sqrt{\phi^{(k+1)}(x_i)}\right)\right]
\end{aligned} \tag{30}$$
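Combining the posterior expectations (22)-(24) with the updates (25)-(29), a single EM sweep can be sketched as follows. This is an illustrative implementation under our reading of the scheme, not the authors' code; em_step is our name, and v is computed for completeness even though the updates below use only s and w.

```python
import numpy as np
from scipy.special import kv

def em_step(x, mu, beta, delta, gamma):
    # One EM sweep: E-step (22)-(24), then M-step (25)-(29)
    n = len(x)
    alpha = np.sqrt(beta**2 + gamma**2)
    dg = delta * gamma
    sphi = np.sqrt(1.0 + ((x - mu) / delta)**2)     # sqrt(phi(x_i))
    u = alpha * delta * sphi                        # Bessel argument
    k0, k1, k2, k3 = kv(0, u), kv(1, u), kv(2, u), kv(3, u)

    # E-step: posterior expectations of Z_i, 1/Z_i, Z_i^2
    s = (alpha*delta*sphi*(1 + dg)*k1 + delta**2*sphi**2*k2) \
        / (alpha**2*(1 + dg)*k0 + alpha*delta*sphi*k1)                 # (22)
    w = (alpha**2*(1 + dg)*k1 + alpha*delta*sphi*k0) \
        / (alpha*delta*sphi*(1 + dg)*k0 + delta**2*sphi**2*k1)         # (23)
    v = (alpha*(1 + dg)*delta**2*sphi**2*k2 + (delta*sphi)**3*k3) \
        / (alpha**3*(1 + dg)*k0 + alpha**2*delta*sphi*k1)              # (24)
    sbar, wbar = s.mean(), w.mean()

    # M-step: fixed-point representations of the normal equations
    gamma_new = ((3 + 2*dg) / (gamma*(1 + dg))
                 + delta*(1 + np.mean(1.0 / (1 + dg + s)))) \
        / (2.0 / (gamma**2 + 1) + sbar)                                # (25)
    delta_new = (n*delta*gamma_new**2 / (1 + delta*gamma_new)
                 + gamma_new*np.sum(1.0 / (1 + delta*gamma_new + s))) \
        / (n*wbar)                                                     # (26)
    beta_new = (np.sum(x*w) - x.mean()*np.sum(w)) \
        / (n - sbar*np.sum(w))                                         # (27)
    mu_new = x.mean() - beta_new*sbar                                  # (28)
    return mu_new, beta_new, delta_new, gamma_new   # alpha follows via (29)
```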

5. Application

Let $(P_t)$ denote the price process of a security, in particular a stock, at time $t$. In order to allow comparison of investments in different securities, we investigate the rates of return defined by

$$X_t = \log P_t - \log P_{t-1}$$
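Computing such log-returns from a price series is a one-liner; a sketch where prices is a hypothetical array of weekly closing prices:

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3])   # hypothetical weekly closes
x = np.diff(np.log(prices))                      # X_t = log P_t - log P_{t-1}
```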

In this section we consider three data sets for data analysis: Range Resource Corporation (RRC), shares of Chevron Corporation (CVX) and the s&p500 index, over the period 3/01/2000 to 1/07/2013, with 702 observations for each data set. The histograms of the weekly log-returns in Figures 1-3 for s&p500, RRC and CVX show that the data are negatively skewed and exhibit heavy tails. The Q-Q plots show that the normal distribution is not a good fit for the data, especially at the tails; this is typical of the other data sets as well.

Table 1 provides descriptive statistics for the return series under consideration. We observe excess kurtosis, which indicates the leptokurtic behaviour of the returns. The log-returns have distributions with relatively heavier

Figure 1. Histogram and Q-Q plot for s&p500 weekly log-returns.

Table 1. Summary statistics for RRC weekly log-returns.

Figure 2. Histogram and Q-Q plot for RRC weekly log-returns.

tails than the normal distribution. We also observe skewness in the data sets, which indicates that the two tails of the returns behave slightly differently.

We use the formulation of Karlis [14] to obtain initial values for the EM algorithm; the values obtained for each data set are given in Table 2.

The stopping criterion is when

$$\frac{\ell^{(k)} - \ell^{(k-1)}}{\ell^{(k)}} < tol \tag{31}$$

where tol is the tolerance level. Table 3 below shows the maximum likelihood parameter estimates obtained by the EM algorithm at $tol = 10^{-3}$.
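A driver loop combining the em_step and loglik sketches given earlier with the stopping rule (31) might look as follows; fit_em and its defaults are ours, and in practice the initial values would be the method-of-moments estimates of Table 2.

```python
def fit_em(x, mu, beta, delta, gamma, tol=1e-3, max_iter=1000):
    # iterate EM sweeps until the relative change (31) falls below tol
    ll_old = loglik(x, mu, beta, delta, gamma)
    for _ in range(max_iter):
        mu, beta, delta, gamma = em_step(x, mu, beta, delta, gamma)
        ll = loglik(x, mu, beta, delta, gamma)
        if abs((ll - ll_old) / ll) < tol:
            break
        ll_old = ll
    return mu, beta, delta, gamma, ll
```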

We now obtain the maximum likelihood parameter estimates for the proposed model via the EM algorithm. Tables 3-5 illustrate monotonic convergence

Figure 3. Histogram and Q-Q plot for CVX weekly log-returns.

Table 2. NIG method of moment estimates for the data sets.

Table 3. Maximum likelihood estimates of the proposed model for RRC data set.

at different tolerance levels. The log-likelihood and AIC for each data set are also provided.

Figures 4-6 show how the proposed model fits the data sets. The proposed model is clearly a better fit than the normal distribution and hence a good alternative to the Normal Inverse Gaussian distribution.

Remark:

Expressing the proposed model in terms of its components we have

$$f(x) = \frac{\gamma^2}{\gamma^2+1} \times GHD\!\left(\tfrac{1}{2}, \alpha, \delta, \beta, \mu\right) + \frac{1}{\gamma^2+1} \times GHD\!\left(\tfrac{3}{2}, \alpha, \delta, \beta, \mu\right) \tag{32}$$

The finite mixture is therefore flexible, adapting between $GHD(\frac{1}{2}, \alpha, \delta, \beta, \mu)$ and $GHD(\frac{3}{2}, \alpha, \delta, \beta, \mu)$ depending on the nature of the data. Table 6 below gives the estimated weights for the three data sets, based on the maximum likelihood estimates.
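Given the fitted $\hat{\gamma}$, the estimated component weights follow directly from (32); for instance, with a hypothetical estimate $\hat{\gamma} = 0.8$:

```python
gamma_hat = 0.8                               # hypothetical MLE of gamma
p_hat = gamma_hat**2 / (gamma_hat**2 + 1.0)   # weight on GHD(1/2, ...)
print(p_hat, 1.0 - p_hat)                     # 0.3902..., 0.6097...
```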

Table 4. Maximum likelihood estimates of the proposed model for CVX data set.

Table 5. Maximum likelihood estimates of the proposed model for s&p500 index.

Table 6. Estimates of the proportion p for the data sets.

Figure 4. Fitting Proposed Model to RRC weekly returns.

Figure 5. Fitting proposed model to CVX weekly returns.

Figure 6. Fitting proposed model to s&p500 index weekly returns.

6. Conclusions

Two special cases of the Generalized Inverse Gaussian distribution, with indexes $\frac{1}{2}$ and $\frac{3}{2}$, have been used to construct a finite mixture model. The mixture has been used as the mixing distribution in a Normal Variance-Mean mixture to obtain a Normal Weighted Inverse Gaussian model. The mean and variance of the proposed model have been obtained.

Three data sets, Range Resource Corporation (RRC), shares of Chevron Corporation (CVX) and the s&p500 index, covering the period 3/01/2000 to 1/07/2013 with 702 observations each, have been used for data analysis. An iterative scheme has been presented for parameter estimation by the EM algorithm, and it demonstrates monotonic convergence. The method of moments estimates for the NIG provided good starting values for the three data sets. The model fits the data sets well and is hence a good alternative to the NIG.

Acknowledgements

The authors gratefully acknowledge the financial support from Kisii University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Barndorff-Nielsen, O. E. (1997) Normal Inverse Gaussian Distribution and Stochastic Volatility Modelling. Scandinavian Journal of Statistics, 24, 1-13.
https://doi.org/10.1111/1467-9469.00045
[2] Aas, K. and Haff, H. (2005) NIG and Skew Student’s t: Two Special Cases of the Generalised Hyperbolic. Norwegian Computing Center, Oslo, Norway.
[3] Aas, K. and Haff, H. (2006) The Generalised Hyperbolic Skew Student’s t-Distribution. Journal of Financial Econometrics, 4, 275-309.
https://doi.org/10.1093/jjfinec/nbj006
[4] Corlu, C.G. and Corlu, A. (2014) Modeling Exchange Rate Returns: Which Flexible Distributions to Use? Quantitative Finance, 15, 1851-1864.
https://doi.org/10.1080/14697688.2014.942231
[5] Nadarajah, S., Zhang, B. and Chan, S. (2014) Estimation Methods for Expected Shortfall. Quantitative Finance, 14, 271-291.
https://doi.org/10.1080/14697688.2013.816767
[6] Lindley, D.V. (1958) Fiducial Distributions and Bayes Theorem. Journal of the Royal Statistical Society: Series B, 20, 102-107.
https://doi.org/10.1111/j.2517-6161.1958.tb00278.x
[7] Sankaran, M. (1970) The Discrete Poisson-Lindley Distribution. Biometrics, 26, 145-149.
https://doi.org/10.2307/2529053
[8] Mahmoudi, E. and Zakerzadeh, H. (2010). Generalized Poisson-Lindley Distribution. Communications in Statistics: Theory and Methods, 39, 1785-1798.
https://doi.org/10.1080/03610920902898514
[9] Fisher, R.A. (1934) The Effect of Methods of Ascertainment upon the Estimation of Frequencies. The Annals of Human Genetics, 6, 13-25.
https://doi.org/10.1111/j.1469-1809.1934.tb02105.x
[10] Patil, G.P. and Rao, C.R. (1978) Weighted Distributions and Size-Biased Sampling with Applications to Wildlife Populations and Human Families. Biometrics, 34, 179-189.
https://doi.org/10.2307/2530008
[11] Gupta, R.C. and Kundu, D. (2011) Weighted Inverse Gaussian—A Versatile Lifetime Model. Journal of Applied Statistics, 38, 2695-2708.
https://doi.org/10.1080/02664763.2011.567251
[12] Abramowitz, M. and Stegun, I.A. (1972) Handbook of Mathematical Functions. Dover, New York.
[13] Dempster, A.P., Laird, N.M. and Rubin, D. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society: Series B, 39, 1-38.
https://doi.org/10.1111/j.2517-6161.1977.tb01600.x
[14] Karlis, D. (2002) An EM Type Algorithm for Maximum Likelihood Estimation of the Normal-Inverse Gaussian Distribution. Statistics and Probability Letters, 57, 43-52.
https://doi.org/10.1016/S0167-7152(02)00040-8
