Estimation of the Weibull-Geometric Distribution under Progressively Type II Censored Samples

This paper deals with Bayesian inference for the unknown parameters of the Weibull-geometric (WG) distribution based on progressively Type II censored samples. Under a squared error loss function, the Bayes estimators cannot be obtained in explicit form, so approximate Bayes estimators are computed by using Markov chain Monte Carlo (MCMC) methods to generate from the posterior distributions. Point estimation and confidence intervals based on maximum likelihood and the bootstrap technique are also proposed. The approximate Bayes estimators obtained under informative and non-informative priors are compared with the maximum likelihood estimators. A numerical example is provided to illustrate the proposed estimation methods, and the maximum likelihood, bootstrap, and different Bayes estimates are compared via a Monte Carlo simulation study.


Introduction
The Weibull distribution is one of the most popular and widely used models for failure times in life testing and reliability theory. It has been shown to be useful for modeling and analyzing lifetime data in the medical, biological and engineering sciences. Some applications of the Weibull distribution in forestry are given in Green et al. [1]. Several distributions have been proposed in the literature to extend the Weibull distribution. Adamidis and Loukas [2] introduce the two-parameter exponential-geometric (EG) distribution with decreasing failure rate. Marshall and Olkin [3] present a method for adding a parameter to a family of distributions, with application to the exponential and Weibull families. Adamidis et al. [4] introduce the extended exponential-geometric (EEG) distribution, which generalizes the EG distribution, and discuss a variety of its statistical properties along with its reliability features. The hazard function of the EEG distribution can be monotone decreasing, increasing or constant. Kus [5] proposes the exponential-Poisson distribution (following the same idea as the EG distribution) with decreasing failure rate and discusses its various properties. Barreto-Souza et al. [6] introduce the Weibull-geometric (WG) distribution, which contains the EEG, EG and Weibull distributions as special sub-models, and discuss some of its properties. For more details about the Weibull-geometric (WG) distribution and its properties, see Barreto-Souza [7] and Hamedani and Ahsanullah [8].
Let X follow a WG distribution. The probability density function (pdf) is

f(x) = αβ^α (1 − p) x^{α−1} e^{−(βx)^α} (1 − p e^{−(βx)^α})^{−2},  x > 0, α > 0, β > 0, p < 1.  (1)

Some special sub-models of the WG distribution (1) are obtained as follows. If p = 0, we have the Weibull distribution. When p → 1, the WG distribution tends to a distribution degenerate at zero; hence, the parameter p can be interpreted as a concentration parameter. The EG distribution corresponds to α = 1 and 0 < p < 1, whereas the EEG distribution is obtained by taking α = 1 for any p < 1. Clearly, the EEG distribution extends the EG distribution. WG density functions for selected parameter values are displayed in Barreto-Souza et al. [6]. For −1 ≤ p < 1, the WG density is unimodal if α > 1 and strictly decreasing if α ≤ 1; the mode is obtained by solving a nonlinear equation. For p < −1, the WG density can also be unimodal. The survival and hazard functions of X are

S(x) = (1 − p) e^{−(βx)^α} / (1 − p e^{−(βx)^α})  (2)

and

h(x) = αβ^α x^{α−1} / (1 − p e^{−(βx)^α}).  (3)

Suppose that n independent items are put on a life test with continuous, identically distributed failure times X_1, X_2, …, X_n. Suppose further that a censoring scheme R = (R_1, R_2, …, R_m) is fixed in advance such that, immediately following the first failure X_{1:m:n}, R_1 of the surviving items are removed from the test at random; immediately following the second failure X_{2:m:n}, R_2 of the surviving items are removed; and so on, until at the m-th failure the remaining R_m = n − m − R_1 − ⋯ − R_{m−1} items are removed. Progressive Type II censored sampling is an important scheme for obtaining data in lifetime studies. For more details on progressively censored samples, see Aggarwala and Balakrishnan [10].
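Inverting the WG cdf F(x) = (1 − e^{−(βx)^α}) / (1 − p e^{−(βx)^α}) gives a closed-form quantile function, and a progressively Type II censored sample can then be generated by the standard uniform-transformation algorithm of Balakrishnan and Sandhu. A minimal sketch (the parameter values and censoring scheme below are illustrative, not taken from the paper):

```python
import math
import random

def wg_inv_cdf(u, alpha, beta, p):
    """Quantile function of WG: solve F(x) = u, i.e.
    e^{-(beta*x)^alpha} = (1 - u) / (1 - p*u)."""
    t = (1.0 - u) / (1.0 - p * u)
    return (-math.log(t)) ** (1.0 / alpha) / beta

def progressive_type2_sample(alpha, beta, p, R, seed=1):
    """Progressively Type II censored WG sample for scheme R, via the
    Balakrishnan-Sandhu transformation of independent uniforms."""
    rng = random.Random(seed)
    m = len(R)
    # V_i = W_i^{1/(i + R_m + ... + R_{m-i+1})}, with W_i ~ U(0, 1)
    V = [rng.random() ** (1.0 / (i + 1 + sum(R[m - i - 1:]))) for i in range(m)]
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1} are censored uniform order stats
    U, prod = [], 1.0
    for i in range(m):
        prod *= V[m - 1 - i]
        U.append(1.0 - prod)
    # transform the uniforms through the WG quantile function
    return [wg_inv_cdf(u, alpha, beta, p) for u in U]

# illustrative scheme: n = 10 items, m = 5 observed failures
x = progressive_type2_sample(alpha=2.0, beta=1.0, p=0.5, R=[2, 0, 1, 0, 2])
```

The returned failure times are increasing, as order statistics must be.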

Markov Chain Monte Carlo Techniques
MCMC methodology provides a useful tool for realistic statistical modeling (Gilks et al. [11]; Gamerman [12]) and has become very popular for Bayesian computation in complex statistical models. Bayesian analysis requires integration over possibly high-dimensional probability distributions to make inferences about model parameters or to make predictions. MCMC is essentially Monte Carlo integration using Markov chains: samples are drawn from the required distribution, and sample averages are then formed to approximate expectations (see Geman and Geman [13]; Metropolis et al. [14]; Hastings [15]).
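As a minimal illustration of the Monte Carlo integration idea, the second moment of a standard normal variable can be approximated by a sample average of independent draws (here the target admits direct sampling, so no Markov chain is needed; the sample size is an arbitrary choice):

```python
import random

# Monte Carlo integration: approximate E[h(X)] by the average of h over draws.
# Here X ~ N(0, 1) and h(x) = x^2, so the exact answer is Var(X) = 1.
rng = random.Random(0)
N = 200_000
draws = [rng.gauss(0.0, 1.0) for _ in range(N)]
estimate = sum(v * v for v in draws) / N   # sample average approximates E[X^2] = 1
```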

Gibbs Sampler
The Gibbs sampling algorithm is one of the simplest Markov chain Monte Carlo algorithms. It was introduced by Geman and Geman [13]. The paper by Gelfand and Smith [16] helped to demonstrate the value of the Gibbs algorithm for a range of problems in Bayesian analysis. Gibbs sampling is an MCMC scheme in which the transition kernel is formed by the full conditional distributions.
The Gibbs sampler is applicable to certain classes of problems, based on two main criteria. Given a target distribution π(θ), θ = (θ_1, …, θ_d), the first criterion is that we must have an analytic (mathematical) expression for the full conditional distribution of each variable given all the other variables in the joint distribution, π(θ_i | θ_j, j ≠ i), for i = 1, …, d. Each of these expressions defines the distribution of the i-th component given values for all other (j ≠ i) components. The second criterion is that we must be able to sample from each conditional distribution if we want an implementable algorithm. Having the conditional distribution for each variable means that we do not need a proposal distribution or an accept/reject step, as in the Metropolis-Hastings algorithm; we simply sample from each conditional while keeping all other variables held fixed.
To define the Gibbs sampling algorithm, let {π(θ_i | θ_j, j ≠ i), i = 1, …, d} be the set of full conditional distributions. One cycle of the Gibbs sampling algorithm is completed by simulating each θ_i in turn from its full conditional, recursively refreshing the conditioning variables.
Algorithm:
1) Choose an arbitrary starting point θ^(0) = (θ_1^(0), …, θ_d^(0)).
2) Obtain θ_1^(t+1) from π(θ_1 | θ_2^(t), …, θ_d^(t)).
3) Obtain θ_2^(t+1) from π(θ_2 | θ_1^(t+1), θ_3^(t), …, θ_d^(t)), and similarly θ_3^(t+1), …, θ_d^(t+1).
4) Set t = t + 1.
5) Repeat steps 2-4 thousands (or millions) of times to obtain the number of samples M.
The results of the first several thousand iterations should be discarded, as this is a "burn-in" period during which the algorithm settles down.
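A toy Gibbs sampler for a bivariate normal target, whose two full conditionals are univariate normal, illustrates both the cycle and the burn-in step (the correlation, chain length and starting point below are arbitrary choices):

```python
import random

# Gibbs sampler for a bivariate standard normal with correlation rho:
# each full conditional is univariate normal, x | y ~ N(rho*y, 1 - rho^2).
rng = random.Random(42)
rho, M, burn = 0.8, 20_000, 2_000
x, y = 5.0, 5.0             # deliberately bad starting point
xs = []
for t in range(M):
    x = rng.gauss(rho * y, (1 - rho**2) ** 0.5)   # draw x | y
    y = rng.gauss(rho * x, (1 - rho**2) ** 0.5)   # draw y | x
    if t >= burn:           # discard the burn-in period
        xs.append(x)
mean_x = sum(xs) / len(xs)  # should be near the true mean, 0
```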
In this paper, we obtain and compare several estimation techniques based on progressive Type II censoring for the three unknown parameters of the WG distribution. In the Bayesian technique, we use Markov chain Monte Carlo (MCMC) methods to generate from the posterior distributions. Finally, we give an example to illustrate the proposed methods.

Maximum Likelihood Estimation
Let x_{1:m:n} < x_{2:m:n} < ⋯ < x_{m:m:n} be the progressively Type II censored order statistics from a Weibull-geometric distribution with censoring scheme R, where n independent items are put on a life test with continuous, identically distributed failure times X_1, X_2, …, X_n, and the scheme R = (R_1, …, R_m) is fixed in advance. From (1), (2) and (3), the likelihood function is given by

L(α, β, p) = C ∏_{i=1}^{m} f(x_{i:m:n}) [S(x_{i:m:n})]^{R_i},  (8)

where C is given by (7). Writing w_i = (β x_{i:m:n})^α and using m + ∑_{i=1}^{m} R_i = n, the logarithm ℓ of the likelihood function may then be written as

ℓ = log C + m log α + m α log β + n log(1 − p) + (α − 1) ∑_{i=1}^{m} log x_{i:m:n} − ∑_{i=1}^{m} (1 + R_i) w_i − ∑_{i=1}^{m} (2 + R_i) log(1 − p e^{−w_i}).  (9)

Calculating the first partial derivatives of (9) with respect to α, β and p and equating each to zero, we get the likelihood equations (10)-(12). Since (10)-(12) cannot be solved analytically for α, β and p, a numerical method such as Newton's method must be employed.
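The censored log-likelihood can be maximized numerically. The sketch below codes the WG log-likelihood from (8) with the constant C dropped, and uses a crude derivative-free coordinate search in place of Newton's method; the observations and censoring scheme are hypothetical, not values from the paper:

```python
import math

def neg_log_lik(theta, x, R):
    """Negative progressively Type II censored WG log-likelihood
    (the constant C is dropped); theta = (alpha, beta, p)."""
    a, b, p = theta
    if a <= 0 or b <= 0 or not -1.0 < p < 1.0:
        return math.inf            # outside the parameter space
    ll = 0.0
    for xi, ri in zip(x, R):
        w = (b * xi) ** a
        e = math.exp(-w)
        # log f(x_i) + R_i * log S(x_i), from the pdf and survival function
        log_f = (math.log(a) + a * math.log(b) + math.log(1.0 - p)
                 + (a - 1.0) * math.log(xi) - w - 2.0 * math.log(1.0 - p * e))
        log_s = math.log(1.0 - p) - w - math.log(1.0 - p * e)
        ll += log_f + ri * log_s
    return -ll

def fit_wg(x, R, iters=60):
    """Crude coordinate-wise shrinking-step search standing in for Newton's method."""
    theta, step = [1.0, 1.0, 0.0], [0.5, 0.5, 0.3]
    best = neg_log_lik(theta, x, R)
    for _ in range(iters):
        for j in range(3):
            for d in (-step[j], step[j]):
                cand = theta[:]
                cand[j] += d
                val = neg_log_lik(cand, x, R)
                if val < best:     # keep only improvements
                    best, theta = val, cand
            step[j] *= 0.9
    return theta, best

# hypothetical censored data set (not from the paper)
x = [0.21, 0.35, 0.48, 0.62, 0.80, 0.95, 1.10, 1.34]
R = [1, 0, 1, 0, 0, 1, 0, 2]
theta_hat, best = fit_wg(x, R)
alpha_hat, beta_hat, p_hat = theta_hat
```

In practice a Newton-type or quasi-Newton routine converges much faster; the search above only illustrates that the criterion is a plain function to be minimized.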
Approximate confidence intervals for α, β and p can be found from the asymptotic normality of the maximum likelihood estimators: (α̂, β̂, p̂) is approximately trivariate normally distributed with mean (α, β, p) and variance-covariance matrix given by the inverse of the observed Fisher information matrix, so that the approximate 100(1 − γ)% confidence intervals are α̂ ± z_{γ/2}√v₁₁, β̂ ± z_{γ/2}√v₂₂ and p̂ ± z_{γ/2}√v₃₃, with v₁₁, v₂₂, v₃₃ the diagonal elements of that inverse.
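This Wald-type construction can be sketched on a one-parameter exponential example, where the observed information n/λ̂² is known in closed form and therefore checks the finite-difference computation; for the WG model the same recipe is applied to the three-parameter log-likelihood (the data below are illustrative):

```python
import math

# illustrative exponential sample; the MLE of the rate is n / sum(x)
data = [0.4, 1.2, 0.3, 0.8, 2.1, 0.6, 1.5, 0.9, 0.2, 1.1]
n, s = len(data), sum(data)
lam_hat = n / s

def log_lik(lam):
    return n * math.log(lam) - lam * s

# observed information via a central finite difference at the MLE
h = 1e-4
d2 = (log_lik(lam_hat + h) - 2.0 * log_lik(lam_hat) + log_lik(lam_hat - h)) / h**2
se = (-d2) ** -0.5          # standard error = I(lam_hat)^{-1/2}
z = 1.959963984540054       # z_{0.025} for a 95% interval
ci = (lam_hat - z * se, lam_hat + z * se)
```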

Bootstrap Confidence Intervals
The bootstrap is a resampling method for statistical inference. It is commonly used to estimate confidence intervals, but it can also be used to estimate the bias and variance of an estimator or to calibrate hypothesis tests. In this section, we use the parametric bootstrap percentile method suggested by Efron [17] [18] to construct confidence intervals for the parameters. The following steps are followed to obtain progressively Type II censored bootstrap samples from the Weibull-geometric distribution with parameters α, β and p, based on a simulated progressively Type II censored data set.
Algorithm:
• From the original data set x = (x_{1:m:n}, …, x_{m:m:n}), compute the maximum likelihood estimates α̂, β̂ and p̂.
• Use α̂, β̂ and p̂ to generate a bootstrap sample x* with the same censoring scheme R.
• As in step 1, based on x*, compute the bootstrap estimates α̂*, β̂* and p̂*.
• Repeat steps 2-3 N times.
• Arrange all α̂_j*, β̂_j* and p̂_j*, j = 1, …, N, in ascending order to obtain the ordered bootstrap estimates φ̂*_{(1)} ≤ φ̂*_{(2)} ≤ ⋯ ≤ φ̂*_{(N)} for each parameter φ. The approximate bootstrap 100(1 − γ)% confidence interval for φ is then (φ̂*_{(Nγ/2)}, φ̂*_{(N(1−γ/2))}).
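The percentile method can be sketched with an exponential model, whose MLE has a closed form; for the WG case the same loop is run with the WG censored sampler and the WG MLE plugged in (the sample size, true rate and bootstrap count below are illustrative):

```python
import random

rng = random.Random(7)
# illustrative "observed" data from an exponential with rate 2.0
data = [rng.expovariate(2.0) for _ in range(50)]
lam_hat = len(data) / sum(data)          # step 1: closed-form MLE of the rate

boot = []
for _ in range(1000):
    # step 2: simulate a bootstrap sample from the *fitted* model (parametric bootstrap)
    resample = [rng.expovariate(lam_hat) for _ in range(len(data))]
    # step 3: re-estimate the parameter on the bootstrap sample
    boot.append(len(resample) / sum(resample))
boot.sort()
# percentile 95% interval: the 2.5th and 97.5th sample percentiles
ci_lower, ci_upper = boot[24], boot[974]
```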

Bayesian Estimation Using MCMC
In this section, we consider the Bayes estimation of the unknown parameters. In many practical situations, information about the parameters is available in an independent manner. Thus, it is assumed here that the parameters are independent a priori and that α and β have the gamma prior distributions

π₁(α) ∝ α^{a−1} e^{−bα},  α > 0,  and  π₂(β) ∝ β^{c−1} e^{−dβ},  β > 0.
Here all the hyperparameters a, b, c, d are assumed to be known and non-negative. For the parameter p we use a non-informative prior (NIP) π₃(p), represented by the limiting form of the appropriate natural conjugate prior. Therefore, the joint prior of the three parameters can be expressed as π(α, β, p) ∝ π₁(α) π₂(β) π₃(p), and the Bayes estimate of any function φ(α, β, p) of α, β and p under the squared error loss (SEL) function is its posterior mean, φ̂ = E[φ(α, β, p) | data].
We use the MCMC method to generate samples from the posterior distributions and then compute the Bayes estimator of φ(α, β, p) under the SEL function. A wide variety of MCMC schemes is available, and it can be difficult to choose among them. An important subclass of MCMC methods consists of Gibbs sampling and the more general Metropolis-Hastings (M-H) algorithm. The advantage of the MCMC method over the MLE method is that we can always obtain a reasonable interval estimate of the parameters by constructing probability intervals based on the empirical posterior distribution; this is often unavailable in maximum likelihood estimation. Indeed, the MCMC samples may be used to completely summarize the posterior uncertainty about the parameters α, β and p through a kernel estimate of the posterior distribution, and this is also true of any function of the parameters. When practically possible, we give prior and posterior distributions in terms of known densities, such as the Gaussian, binomial, beta, gamma and others. The joint posterior density function of α, β and p is obtained by multiplying the likelihood function by the joint prior. With {(α_i, β_i, p_i), i = M + 1, …, N} denoting the values generated after the burn-in period M (that is, the number of iterations before the stationary distribution is achieved), the Bayes MCMC point estimate of φ is

φ̂ = (1/(N − M)) ∑_{i=M+1}^{N} φ(α_i, β_i, p_i),

and the posterior variance of φ becomes

V̂(φ) = (1/(N − M)) ∑_{i=M+1}^{N} (φ(α_i, β_i, p_i) − φ̂)².
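A minimal sketch of this approach: a random-walk Metropolis chain targets the posterior of an exponential rate λ with a Gamma(a, b) prior, where the conjugate answer Gamma(a + n, b + ∑x) is available to check the chain; the WG posterior is sampled the same way, with one M-H step per parameter inside each Gibbs cycle (all numerical settings below are illustrative):

```python
import math
import random

rng = random.Random(3)
data = [rng.expovariate(1.5) for _ in range(40)]   # illustrative data
a, b = 2.0, 1.0                                    # Gamma(a, b) prior for lambda
n, s = len(data), sum(data)

def log_post(lam):
    # log posterior up to a constant: Gamma(a + n, b + s) kernel
    return (a + n - 1.0) * math.log(lam) - (b + s) * lam

lam, chain = 1.0, []
for t in range(30_000):
    prop = lam * math.exp(rng.gauss(0.0, 0.3))     # log-scale random walk keeps lam > 0
    # M-H acceptance ratio; log(prop) - log(lam) is the Jacobian of the log transform
    if math.log(rng.random()) < log_post(prop) - log_post(lam) + math.log(prop) - math.log(lam):
        lam = prop
    if t >= 5_000:                                 # drop the burn-in period
        chain.append(lam)

post_mean = sum(chain) / len(chain)                # Bayes estimate under SEL
exact_mean = (a + n) / (b + s)                     # conjugate posterior mean, for checking
```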

where the v_{ii} are the elements on the main diagonal of the covariance matrix and z_{γ/2} is the percentile of the standard normal distribution with right-tail probability γ/2.

The progressively censored bootstrap samples x* are generated by the algorithm of Balakrishnan and Sandhu; as in step 1, based on x*, the bootstrap estimates α̂*, β̂* and p̂* are computed, and steps 2-3 are repeated N times, giving N bootstrap MLEs of α, β and p based on N different bootstrap samples. In the Gibbs sampling, for given values of the remaining parameters, the conditional Bayes estimate of α is approximately the mean of the gamma distribution (21), and similarly for β. For the numerical example, a progressively Type II censored sample is generated from the WG distribution.

Figure 1. Values of α generated by the MCMC method, plotted against simulation number, and their histogram.

Figure 2. Values of β generated by the MCMC method, plotted against simulation number, and their histogram.

Figure 3. Values of p generated by the MCMC method, plotted against simulation number, and their histogram.

Table 1. Different estimates of the parameters of the WG distribution.

Table 2. MLE, percentile bootstrap CIs and bootstrap-t CIs based on 500 replications.