Bayes Shrinkage Minimax Estimation in Inverse Gaussian Distribution

In the present paper, the properties of the Bayes Shrinkage estimator are studied for the measure of dispersion of an inverse Gaussian model under the Minimax estimation criterion.


Introduction
The inverse Gaussian distribution plays an important role in reliability theory and life-testing problems. It has useful applications in a wide variety of fields such as biology, economics, and medicine, and it is an important mathematical model for the analysis of positively skewed data. The review articles by Folks & Chhikara [1,2] and Seshadri [3] present many interesting properties and applications of this distribution.
Let x₁, x₂, …, xₙ be a random sample of size n drawn from the inverse Gaussian distribution with probability density function

f(x; μ, θ) = (θ / (2πx³))^{1/2} exp{−θ(x − μ)² / (2μ²x)};  x > 0, μ > 0, θ > 0.

Here, μ stands for the mean and θ for the inverse measure of dispersion. The maximum likelihood estimates of μ and θ⁻¹ are given as

μ̂ = x̄  and  θ̂⁻¹ = (1/n) Σᵢ (xᵢ⁻¹ − x̄⁻¹).

The unbiased estimates of μ and θ⁻¹ are respectively x̄ and V = (n − 1)⁻¹ Σᵢ (xᵢ⁻¹ − x̄⁻¹), x̄ and V being stochastically independent ([1,4,5]). Schuster [6] showed that, when μ is known,

U = (1/n) Σᵢ (xᵢ − μ)² / (μ² xᵢ)

is the uniformly minimum variance unbiased (UMVU) estimator for the measure of dispersion θ⁻¹, and nUθ follows a chi-square distribution with n degrees of freedom.
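Schuster's sampling result can be checked numerically. The sketch below, assuming NumPy's Wald generator as the inverse Gaussian sampler (its scale parameter plays the role of θ here; the function name is mine), draws samples with a known mean μ and verifies that nUθ behaves like a chi-square variate with n degrees of freedom, i.e. its mean is close to n and its variance close to 2n.

```python
import numpy as np

def umvu_dispersion(x, mu):
    """Schuster's UMVU estimator U of the measure of dispersion 1/theta
    when the mean mu is known: U = (1/n) * sum((x_i - mu)^2 / (mu^2 x_i))."""
    return np.mean((x - mu) ** 2 / (mu ** 2 * x), axis=-1)

rng = np.random.default_rng(seed=1)
mu, theta, n = 2.0, 1.5, 20

# NumPy's Wald distribution is the inverse Gaussian IG(mean, scale);
# the scale parameter corresponds to theta in the density above.
samples = rng.wald(mean=mu, scale=theta, size=(50_000, n))

U = umvu_dispersion(samples, mu)
W = n * U * theta          # should behave like a chi-square with n df
print(W.mean(), W.var())   # close to n = 20 and 2n = 40
```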
The choice of the loss function may be crucial. It has long been recognized that the most commonly used loss function, the squared error loss function (SELF), is inappropriate in many situations. If the SELF is taken as a measure of inaccuracy, the resulting risk is often too sensitive to assumptions about the behavior of the tail of the probability distribution. In addition, in some estimation problems overestimation is more serious than underestimation, or vice versa [7]. To deal with such cases, a useful and flexible class of asymmetric loss functions, the LINEX loss function (LLF), was introduced by Varian [8]. The reparameterized version of the LLF ([9]) for an estimate θ̂ of any parameter θ is given as

L(Δ) = e^{aΔ} − aΔ − 1;  a ≠ 0,  Δ = (θ̂ − θ)/θ.

The sign and magnitude of a represent the direction and degree of asymmetry, respectively. A positive (negative) value of a is used when overestimation is more (less) serious than underestimation. L(Δ) is approximately squared error and almost symmetric when a is near zero.
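The asymmetry of the reparameterized LLF is easy to see numerically; a minimal sketch (the function name is mine):

```python
import math

def linex_loss(estimate, true_value, a):
    """Reparameterized LINEX loss L(d) = exp(a*d) - a*d - 1 with
    relative error d = estimate / true_value - 1 (a != 0)."""
    d = estimate / true_value - 1.0
    return math.exp(a * d) - a * d - 1.0

# With a > 0, overestimating by 20% is penalized more heavily than
# underestimating by 20%:
over = linex_loss(1.2, 1.0, a=1.5)
under = linex_loss(0.8, 1.0, a=1.5)
print(over > under)  # True

# Near a = 0 the loss is almost symmetric (approximately squared error):
print(abs(linex_loss(1.1, 1.0, 0.01) - linex_loss(0.9, 1.0, 0.01)))
```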
Thompson [10] suggested a procedure which makes use of prior information about a parameter, available in the form of a guessed value, by shrinking the usual unbiased estimator towards the guess value with the help of a shrinkage factor k (0 ≤ k ≤ 1). The experimenter specifies the value of the shrinkage factor according to his belief in the guess value. The shrinkage estimator for the measure of dispersion θ⁻¹, when a guess value θ₀ of θ⁻¹ is available, is given by

θ̂_Sh = kU + (1 − k)θ₀.

Some shrinkage estimators for the measure of dispersion have been obtained by Pandey & Malik [11], who studied their properties under the SELF criterion. Prakash and Singh [12] studied the properties of different shrinkage testimators for θ⁻¹ under the LINEX loss function. Palmer [13] and Banerjee & Bhattacharya [14] discussed Bayesian inference about the parameters of the inverse Gaussian distribution.
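Thompson-type shrinkage is a one-line convex combination; a small sketch (function name is mine):

```python
def shrinkage_estimator(u, guess, k):
    """Thompson-type shrinkage: pull the unbiased estimate u towards
    the guessed value with shrinkage factor k, 0 <= k <= 1."""
    if not 0.0 <= k <= 1.0:
        raise ValueError("shrinkage factor k must lie in [0, 1]")
    return k * u + (1.0 - k) * guess

# k = 1 ignores the guess entirely; k = 0 trusts it completely.
print(shrinkage_estimator(0.9, 1.0, k=0.6))  # approximately 0.94
```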
The present article proposes a Bayes Shrinkage estimator based on the Minimax criterion for the measure of dispersion. A Bayes estimator for the measure of dispersion under a vague prior is obtained in Section 2. The Bayes Minimax estimator is obtained under the Minimax criterion in Section 3. A Shrinkage estimator is constructed by utilizing the Bayes Minimax estimator in Section 4. Further, a numerical study is presented in Section 5, and conclusions about the Bayes Shrinkage Minimax estimator are drawn in Section 6.

Bayes Estimator for Measure of Dispersion
We do not enter into the debate about, or attempt to justify, the proper choice of the prior distribution. We consider a vague prior for the parameter θ which is an increasing function of θ and is given as

g(θ) ∝ θ;  θ > 0.

Since nUθ follows a chi-square distribution with n degrees of freedom, the likelihood is proportional to θ^{n/2} exp(−nUθ/2). Therefore, the posterior density of the parameter θ is

π(θ | U) ∝ θ^{n/2+1} exp(−nUθ/2).

After simplifying, the posterior density of the parameter θ is a gamma density with shape parameter n/2 + 2 and rate parameter nU/2. The Bayes estimator θ̂_B for the measure of dispersion θ⁻¹ under the LLF is obtained by minimizing the posterior expected loss E_π[e^{aΔ} − aΔ − 1], with Δ = θ̂_Bθ − 1. Here, the suffix π indicates that the expectation is taken under the posterior density. After simplification, the Bayes estimator for θ⁻¹ is

θ̂_B = (nU / 2a)(1 − e^{−2a/(n+6)});  a ≠ 0.
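The closed form is a short computation; the sketch below implements it under the reconstruction used in this note (vague prior g(θ) ∝ θ, hence a gamma posterior with shape n/2 + 2 and rate nU/2), so the constants n + 6 etc. are tied to that assumed prior.

```python
import math

def bayes_estimator_dispersion(u, n, a):
    """Bayes estimate of the measure of dispersion 1/theta under the
    reparameterized LINEX loss, assuming the vague prior g(theta) ~ theta
    (posterior: gamma with shape n/2 + 2 and rate n*u/2)."""
    if a == 0:
        raise ValueError("LINEX shape parameter a must be non-zero")
    return (n * u / (2.0 * a)) * (1.0 - math.exp(-2.0 * a / (n + 6)))

# As a -> 0 the LINEX estimate tends to n*u/(n + 6), the minimizer of the
# posterior expected squared relative error:
u, n = 0.8, 20
print(bayes_estimator_dispersion(u, n, a=1e-8), n * u / (n + 6))
```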

The Minimax Bayes Estimator
The basic principle of this approach is to minimize the maximum risk. The Minimax estimator is obtained with the help of a theorem due to Lehmann, which can be stated as follows:

Theorem. Let τ = {F_θ; θ ∈ Θ} be a family of distribution functions and D a class of estimators of θ. Suppose that θ* ∈ D is a Bayes estimator against a prior density η(θ) on Θ, and that the risk R(θ*, θ) is constant on Θ; then θ* is a Minimax estimator of θ.

Here, the risk of the Bayes estimator θ̂_B given in (6) for the parameter θ⁻¹ with respect to the LLF is defined as

R(θ̂_B) = E_U[e^{aΔ} − aΔ − 1];  Δ = θ̂_Bθ − 1.

Since nUθ is distributed as a chi-square with n degrees of freedom, making the transformation W = nUθ gives θ̂_Bθ = cW, with c = (1 − e^{−2a/(n+6)})/2a. Using the chi-square moment generating function E[e^{tW}] = (1 − 2t)^{−n/2} (Equation (8)) in the above expression, we have

R(θ̂_B) = e^{−6a/(n+6)} − (n/2)(1 − e^{−2a/(n+6)}) + a − 1.   (9)

Equation (9) represents the risk of the Bayes estimator of the measure of dispersion, and it is independent of the parameter θ. Hence, by the theorem above, the Bayes estimator θ̂_B is a Minimax estimator for the measure of dispersion θ⁻¹.
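The constancy of the risk can be checked by simulation: under the closed form used in this note (my reconstruction with vague prior g(θ) ∝ θ), the empirical LINEX risk of the Bayes estimator should come out the same for very different values of θ. A sketch:

```python
import numpy as np

def bayes_estimator_dispersion(u, n, a):
    """Closed-form Bayes/Minimax estimate of 1/theta under the LLF
    (reconstruction assuming the vague prior g(theta) ~ theta)."""
    return (n * u / (2.0 * a)) * (1.0 - np.exp(-2.0 * a / (n + 6)))

def empirical_linex_risk(theta, n, a, reps=400_000, seed=0):
    """Monte Carlo LINEX risk of the Bayes estimator at a given theta,
    using W = n*U*theta ~ chi-square(n) to generate U directly."""
    rng = np.random.default_rng(seed)
    w = rng.chisquare(n, size=reps)
    u = w / (n * theta)
    est = bayes_estimator_dispersion(u, n, a)
    d = est * theta - 1.0          # relative error in estimating 1/theta
    return np.mean(np.exp(a * d) - a * d - 1.0)

n, a = 20, 1.0
r1 = empirical_linex_risk(theta=0.5, n=n, a=a, seed=0)
r2 = empirical_linex_risk(theta=4.0, n=n, a=a, seed=1)
print(r1, r2)  # nearly equal: the risk is free of theta
```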

The Shrinkage Bayes Minimax Estimator
Now, we construct a Shrinkage Bayes Minimax estimator as

θ̂_S = kθ̂_B + (1 − k)θ₀,

where θ₀ is the guess value of the measure of dispersion θ⁻¹ and k (0 ≤ k ≤ 1) is the shrinkage factor. The risk of the Shrinkage Bayes Minimax estimator θ̂_S under the LLF is obtained by using Equation (8) as

R(θ̂_S) = e^{a((1−k)/δ − 1)} [1 − k(1 − e^{−2a/(n+6)})]^{−n/2} − a[(kn/2)(1 − e^{−2a/(n+6)}) + (1 − k)/δ − 1] − 1,

where δ = θ⁻¹/θ₀ denotes the ratio of the true value of the measure of dispersion to its guess value.
Consider also the class of estimators T = {T : T = cU, c > 0}. The risk of the estimator T under the LINEX loss is given by

R(T) = e^{−a}(1 − 2ac/n)^{−n/2} − a(c − 1) − 1,

and the value of c which minimizes this risk is given by c* = (n/2a)(1 − e^{−2a/(n+2)}). Thus, the improved estimator among the class T is

T* = (n/2a)(1 − e^{−2a/(n+2)}) U.

The value of k that minimizes the risk of θ̂_S lies between zero and one for the selected parametric sets of values, which are considered later for the numerical findings. Therefore, this minimizing value is considered as the shrinkage factor.
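Putting the pieces together, the sketch below implements the Shrinkage Bayes Minimax estimator alongside the improved estimator T* in the class {cU}. The closed forms follow the reconstruction used in this note (vague prior g(θ) ∝ θ), and the function names are mine.

```python
import math

def bayes_minimax_estimate(u, n, a):
    """Bayes Minimax estimate of 1/theta under the LLF
    (reconstruction with vague prior g(theta) ~ theta)."""
    return (n * u / (2.0 * a)) * (1.0 - math.exp(-2.0 * a / (n + 6)))

def improved_estimate(u, n, a):
    """Risk-minimizing member T* = c*U of the class {cU} under the LLF."""
    return (n / (2.0 * a)) * (1.0 - math.exp(-2.0 * a / (n + 2))) * u

def shrinkage_bayes_minimax(u, guess, n, a, k):
    """Shrink the Bayes Minimax estimate towards the guessed value
    of 1/theta with shrinkage factor k in [0, 1]."""
    return k * bayes_minimax_estimate(u, n, a) + (1.0 - k) * guess

u, n, a = 0.8, 20, 1.0
bm = bayes_minimax_estimate(u, n, a)
imp = improved_estimate(u, n, a)
sh = shrinkage_bayes_minimax(u, guess=0.75, n=n, a=a, k=0.5)
print(bm, imp, sh)
```

For a > 0 the Bayes Minimax estimate is pulled towards zero slightly more than T* (factor n + 6 versus n + 2 in the exponent), reflecting the extra weight of the prior.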

A Numerical Study
The relative efficiency of the Shrinkage Bayes Minimax estimator θ̂_S relative to the improved estimator T* is defined as RE = R(T*)/R(θ̂_S). It is observed that the Shrinkage Bayes Minimax estimator θ̂_S performs better than the improved estimator T* for all the selected parametric sets of values with 0.25 ≤ δ ≤ 1.75, and the relative efficiency attains its maximum at the point δ = 1. Further, as the sample size increases, the relative efficiency decreases for all considered parametric sets of values. It is also observed that the relative efficiency increases as d increases when δ lies between 0.50 and 1.50. It is seen as well that, as a increases, the relative efficiency first increases for δ ≤ 0.75 and then decreases for the other values of δ.
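Since every estimator involved has the scaled form qW + b with W ~ chi-square(n), the LINEX risks, and hence the relative efficiency, can be evaluated exactly from the chi-square moment generating function. The sketch below assumes the reconstructed closed forms θ̂_B = (nU/2a)(1 − e^{−2a/(n+6)}) and T* = (n/2a)(1 − e^{−2a/(n+2)})U together with an illustrative k = 0.5 (not the paper's optimized shrinkage factor), and confirms one qualitative finding: the relative efficiency peaks near δ = 1, where the guess is correct.

```python
import numpy as np

def linex_risk_affine(q, b, n, a):
    """Exact LINEX risk of an estimator whose scaled error is
    Delta = q*W + b - 1 with W ~ chi-square(n); uses the chi-square
    mgf E[exp(t*W)] = (1 - 2t)^(-n/2), valid for 2*a*q < 1."""
    return (np.exp(a * (b - 1.0)) * (1.0 - 2.0 * a * q) ** (-n / 2.0)
            - a * (q * n + b - 1.0) - 1.0)

def relative_efficiency(delta, n, a, k):
    """R(T*) / R(shrinkage Bayes Minimax) at delta = (1/theta)/guess."""
    c_b = (1.0 - np.exp(-2.0 * a / (n + 6))) / (2.0 * a)  # Bayes Minimax
    c_t = (1.0 - np.exp(-2.0 * a / (n + 2))) / (2.0 * a)  # improved T*
    risk_t = linex_risk_affine(c_t, 0.0, n, a)
    risk_s = linex_risk_affine(k * c_b, (1.0 - k) / delta, n, a)
    return risk_t / risk_s

n, a, k = 20, 1.0, 0.5
re = {delta: relative_efficiency(delta, n, a, k)
      for delta in (0.25, 0.50, 1.00, 1.50, 1.75)}
for delta, value in re.items():
    print(delta, value)  # largest value on this grid occurs at delta = 1
```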

Conclusion
The Shrinkage Bayes Minimax estimator performs well under the LLF over a wide range of δ, which is defined here as the ratio between the true value and the guess (prior point) value of the unknown parameter. Thus, we suggest using the Minimax estimator under the LLF for estimating the measure of dispersion under the Shrinkage setup.

Remark: Minimax Estimation as a Two-Person Game
The Minimax estimator is obtained under the LLF loss criterion. The statistical problem of Minimax estimation is equivalent to a two-person zero-sum game between the Statistician (Player II) and Nature (Player I). Here, the pure strategies of Nature are the different values of θ in the interval (0, ∞), and the mixed strategies of Nature are the prior densities of θ on (0, ∞). The pure strategies of the Statistician are all possible decision functions on (0, ∞). The expected value of the loss function is the risk function, and it is the gain of Player I. Further,

r(η, θ̂) = E_η[R(θ̂, θ)],

where the expectation has been taken under the prior density η of the parameter θ. If the loss function is continuous, a Bayes estimator whose risk is constant in θ is a Minimax estimator.

Table 1. Relative efficiency for the estimator θ̂_S with respect to the improved estimator T*.