Estimations of Weibull-Geometric Distribution under Progressive Type II Censoring Samples

Abstract

This paper deals with Bayesian inference for the unknown parameters of the Weibull-geometric (WG) distribution under progressive Type II censoring. The Bayes estimators of the unknown parameters cannot be obtained in explicit form under a squared error loss function; approximate Bayes estimators are therefore computed using the Markov Chain Monte Carlo (MCMC) method to generate samples from the posterior distributions. Point estimation and confidence intervals based on maximum likelihood and the bootstrap technique are also proposed. The approximate Bayes estimators obtained under informative and non-informative priors are compared with the maximum likelihood estimators. A numerical example is provided to illustrate the proposed estimation methods. Maximum likelihood, bootstrap and the different Bayes estimates are compared via a Monte Carlo simulation study.


Elhag, A., Ibrahim, O., El-Sayed, M. and Abd-Elmougod, G. (2015) Estimations of Weibull-Geometric Distribution under Progressive Type II Censoring Samples. Open Journal of Statistics, 5, 721-729. doi: 10.4236/ojs.2015.57072.

Received 23 October 2015; accepted 15 December 2015; published 18 December 2015

1. Introduction

The Weibull distribution is one of the most popular and widely used models of failure time in life testing and reliability theory. It has been shown to be useful for modeling and analyzing lifetime data in the medical, biological and engineering sciences. Some applications of the Weibull distribution in forestry are given in Green et al. [1]. Several distributions have been proposed in the literature to extend the Weibull distribution. Adamidis and Loukas [2] introduced the two-parameter exponential-geometric (EG) distribution with decreasing failure rate. Marshall and Olkin [3] presented a method for adding a parameter to a family of distributions, with application to the exponential and Weibull families. Adamidis et al. [4] introduced the extended exponential-geometric (EEG) distribution, which generalizes the EG distribution, and discussed a variety of its statistical properties along with its reliability features. The hazard function of the EEG distribution can be monotone decreasing, increasing or constant. Kus [5] proposed the exponential-Poisson distribution (following the same idea as the EG distribution) with decreasing failure rate and discussed its various properties. Souza et al. [6] introduced the Weibull-geometric (WG) distribution, which contains the EEG, EG and Weibull distributions as special sub-models, and discussed some of its properties. For more details about the WG distribution and its properties, see Barreto-Souza [7] and Hamedani and Ahsanullah [8].

Let X follow a WG distribution. The probability density function (pdf) and cumulative distribution function (cdf) of the WG distribution are given respectively by

$$f(x; \alpha, \beta, p) = \alpha\beta^{\alpha}(1-p)\,x^{\alpha-1}\,e^{-(\beta x)^{\alpha}}\left[1 - p\,e^{-(\beta x)^{\alpha}}\right]^{-2}, \qquad x > 0,$$ (1)

and

$$F(x; \alpha, \beta, p) = \frac{1 - e^{-(\beta x)^{\alpha}}}{1 - p\,e^{-(\beta x)^{\alpha}}}, \qquad x > 0,$$ (2)

where α, β > 0 and 0 < p < 1.

Some special sub-models of the WG distribution (1) are obtained as follows. If p = 0, we have the Weibull distribution. When p → 1⁻, the WG distribution tends to a distribution degenerate at zero. Hence, the parameter p can be interpreted as a concentration parameter. The EG distribution corresponds to α = 1 and 0 < p < 1, whereas the EEG distribution is obtained by taking α = 1 for any p < 1. Clearly, the EEG distribution extends the EG distribution. For 0 < p < 1, the WG density is unimodal if α > 1 and strictly decreasing if α ≤ 1. The mode is obtained by solving the nonlinear equation

$$\alpha - 1 = \alpha(\beta x)^{\alpha}\,\frac{1 + p\,e^{-(\beta x)^{\alpha}}}{1 - p\,e^{-(\beta x)^{\alpha}}}.$$ (3)

For p < 0, the WG density can also be unimodal; for example, the EEG density (α = 1) is unimodal if p < −1.

The survival and hazard functions of X are

$$S(x) = \frac{(1-p)\,e^{-(\beta x)^{\alpha}}}{1 - p\,e^{-(\beta x)^{\alpha}}}$$ (4)

and

$$h(x) = \frac{\alpha\beta^{\alpha}\,x^{\alpha-1}}{1 - p\,e^{-(\beta x)^{\alpha}}}.$$ (5)
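
For readers who want to experiment with these expressions, the following Python sketch implements the density (1), cdf (2), survival (4) and hazard (5) as reconstructed above, and locates the mode numerically rather than solving (3) in closed form. The function names and the parameter values at the end are our own illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wg_pdf(x, alpha, beta, p):
    """WG density (1): f(x) = a b^a (1-p) x^(a-1) e^{-(bx)^a} / (1 - p e^{-(bx)^a})^2."""
    u = np.exp(-(beta * x) ** alpha)
    return alpha * beta**alpha * (1 - p) * x ** (alpha - 1) * u / (1 - p * u) ** 2

def wg_cdf(x, alpha, beta, p):
    """WG distribution function (2)."""
    u = np.exp(-(beta * x) ** alpha)
    return (1 - u) / (1 - p * u)

def wg_sf(x, alpha, beta, p):
    """Survival function (4): S(x) = (1-p) e^{-(bx)^a} / (1 - p e^{-(bx)^a})."""
    u = np.exp(-(beta * x) ** alpha)
    return (1 - p) * u / (1 - p * u)

def wg_hazard(x, alpha, beta, p):
    """Hazard function (5): h(x) = a b^a x^{a-1} / (1 - p e^{-(bx)^a})."""
    u = np.exp(-(beta * x) ** alpha)
    return alpha * beta**alpha * x ** (alpha - 1) / (1 - p * u)

# Locate the mode numerically instead of solving the nonlinear equation (3).
alpha, beta, p = 2.0, 1.0, 0.5          # illustrative values (alpha > 1, 0 < p < 1)
res = minimize_scalar(lambda x: -wg_pdf(x, alpha, beta, p),
                      bounds=(1e-6, 10.0), method="bounded")
print("approximate mode:", res.x)
```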

Suppose that n independent items are put on a life test with continuous, identically distributed failure times. Suppose further that a censoring scheme R = (R_1, R_2, ..., R_m) is fixed in advance, such that immediately following the first failure, R_1 surviving items are removed at random from the test; immediately following the second failure, R_2 surviving items are removed at random; and this process continues until, at the time of the m-th observed failure, the remaining R_m = n − m − R_1 − ... − R_{m−1} items are removed from the test. The m ordered observed failure times, denoted by X_{1:m:n} ≤ X_{2:m:n} ≤ ... ≤ X_{m:m:n}, are called progressive Type II right censored order statistics of size m from a sample of size n with progressive censoring scheme R. If the failure times of the n items originally on test come from a continuous population with pdf f(x) and cdf F(x), the joint probability density function of X_{1:m:n}, ..., X_{m:m:n} is given (see Balakrishnan and Sandhu [9]) by

$$f(x_1, x_2, \ldots, x_m) = C \prod_{i=1}^{m} f(x_i)\left[1 - F(x_i)\right]^{R_i},$$ (6)

where

$$C = n(n - R_1 - 1)(n - R_1 - R_2 - 2)\cdots\left(n - \sum_{i=1}^{m-1} R_i - m + 1\right).$$ (7)

Progressive Type II censored sampling is an important scheme for obtaining data in lifetime studies. For more details on progressively censored samples, see Balakrishnan and Aggarwala [10].
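
Since later sections repeatedly draw progressively censored samples, a minimal Python sketch of the Balakrishnan-Sandhu [9] simulation algorithm may help; the function names, the seed and the censoring scheme in the example are our own illustrative choices.

```python
import numpy as np

def wg_quantile(q, alpha, beta, p):
    """Invert the WG cdf (2): solve q = (1-u)/(1-p*u) with u = exp(-(beta*x)^alpha)."""
    u = (1 - q) / (1 - p * q)
    return (-np.log(u)) ** (1 / alpha) / beta

def progressive_type2_sample(R, alpha, beta, p, rng=None):
    """Simulate progressive Type II censored WG order statistics with scheme
    R = (R_1, ..., R_m), following Balakrishnan and Sandhu [9]."""
    rng = np.random.default_rng(rng)
    m = len(R)
    W = rng.uniform(size=m)
    # V_i = W_i^{1 / (i + R_m + R_{m-1} + ... + R_{m-i+1})}, i = 1, ..., m
    V = W ** (1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1} are progressively censored uniform
    # order statistics; transform them through the WG quantile function.
    U = 1 - np.cumprod(V[::-1])
    return wg_quantile(U, alpha, beta, p)

# Example: n = 30 items, m = 25 observed failures (hypothetical scheme).
R = np.array([1, 1, 1, 1, 1] + [0] * 20)
x = progressive_type2_sample(R, alpha=2.0, beta=1.0, p=0.5, rng=42)
```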

2. Markov Chain Monte Carlo Techniques

MCMC methodology provides a useful tool for realistic statistical modeling (Gilks et al. [11]; Gamerman [12]) and has become very popular for Bayesian computation in complex statistical models. Bayesian analysis requires integration over possibly high-dimensional probability distributions to make inferences about model parameters or to make predictions. MCMC is essentially Monte Carlo integration using Markov chains: one draws samples from the required distribution and then forms sample averages to approximate expectations (see Geman and Geman [13]; Metropolis et al. [14]; Hastings [15]).
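
As a toy illustration of this idea (ours, not from the paper), the expectation E[X²] = 1 under a standard normal distribution can be approximated by a sample average; MCMC applies the same principle, except that the draws come from a Markov chain rather than being independent.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)   # i.i.d. draws stand in for chain output
print(np.mean(samples**2))               # sample average approximates E[X^2] = 1
```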

Gibbs Sampler

The Gibbs sampling algorithm is one of the simplest Markov chain Monte Carlo algorithms. It was introduced by Geman and Geman [13]. The paper by Gelfand and Smith [16] helped to demonstrate the value of the Gibbs algorithm for a range of problems in Bayesian analysis. Gibbs sampling is an MCMC scheme where the transition kernel is formed by the full conditional distributions.

The Gibbs sampler is applicable for certain classes of problems, based on two main criteria. Given a d-dimensional target distribution $\pi(\theta_1, \ldots, \theta_d)$, the first criterion is that we have an analytic (mathematical) expression for the conditional distribution of each variable in the joint distribution given all other variables; that is, we must have the d individual expressions

$$\pi(\theta_i \mid \theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_d), \qquad i = 1, \ldots, d.$$

Each of these expressions defines the probability of the i-th dimension given values for all other (d − 1) dimensions. The second criterion is that we must be able to sample from each conditional distribution; otherwise the algorithm is not implementable. Having the conditional distribution for each variable means that we do not need a proposal distribution or an accept/reject criterion, as in the Metropolis-Hastings algorithm: we simply sample from each conditional in turn while keeping all other variables fixed.

To define the Gibbs sampling algorithm, let the set of full conditional distributions be $\{\pi(\theta_i \mid \theta_{-i}),\ i = 1, \ldots, d\}$, where $\theta_{-i}$ denotes all components of θ except $\theta_i$. One cycle of the Gibbs sampling algorithm is then completed by simulating from these distributions recursively, refreshing the conditioning variables at each step.

Algorithm:

1) Choose an arbitrary starting point $\theta^{(0)} = (\theta_1^{(0)}, \ldots, \theta_d^{(0)})$ for which $\pi(\theta^{(0)}) > 0$;

2) Obtain $\theta_1^{(t+1)}$ from the conditional distribution $\pi(\theta_1 \mid \theta_2^{(t)}, \ldots, \theta_d^{(t)})$;

3) Obtain $\theta_2^{(t+1)}$ from the conditional distribution $\pi(\theta_2 \mid \theta_1^{(t+1)}, \theta_3^{(t)}, \ldots, \theta_d^{(t)})$, and so on;

4) Obtain $\theta_d^{(t+1)}$ from the conditional distribution $\pi(\theta_d \mid \theta_1^{(t+1)}, \ldots, \theta_{d-1}^{(t+1)})$;

5) Repeat steps 2-4 thousands (or millions) of times to obtain the desired number of samples N.

The results of the first M or so iterations should be discarded, as this is a "burn-in" period during which the algorithm settles toward its stationary distribution; a minimal worked sketch is given below.
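
The sketch below illustrates the algorithm for a target we can check analytically: a standard bivariate normal with correlation rho, whose full conditionals are the univariate normals shown in the comments. This example is ours and is not part of the paper.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20_000, burn_in=2_000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    using the full conditionals X | Y=y ~ N(rho*y, 1-rho^2) and symmetrically."""
    rng = np.random.default_rng(seed)
    x = y = 0.0                                        # step 1: arbitrary start
    draws = []
    for _ in range(n_iter):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # step 2: sample x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # step 3: sample y | x
        draws.append((x, y))
    return np.array(draws[burn_in:])                   # discard the burn-in

chain = gibbs_bivariate_normal(rho=0.8)
print(chain.mean(axis=0), np.corrcoef(chain.T)[0, 1])  # ~ (0, 0) and ~ 0.8
```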

In this paper, we obtain and compare several estimation techniques based on progressive Type II censoring for the three unknown parameters of the WG distribution. In the Bayesian approach, we use Markov chain Monte Carlo (MCMC) techniques to generate samples from the posterior distributions. Finally, we give an example to illustrate the proposed methods.

3. Maximum Likelihood Estimation

Let X_{1:m:n} ≤ X_{2:m:n} ≤ ... ≤ X_{m:m:n} be the progressive Type II censored order statistics from a Weibull-geometric distribution with censoring scheme R, where n independent items are put on a life test with continuous, identically distributed failure times and the censoring scheme is fixed in advance. From (1), (2) and (6), the likelihood function is given by

$$L(\alpha, \beta, p) = C \prod_{i=1}^{m} \alpha\beta^{\alpha}(1-p)\,x_i^{\alpha-1}\,e^{-(\beta x_i)^{\alpha}}\left[1 - p\,e^{-(\beta x_i)^{\alpha}}\right]^{-2}\left[\frac{(1-p)\,e^{-(\beta x_i)^{\alpha}}}{1 - p\,e^{-(\beta x_i)^{\alpha}}}\right]^{R_i},$$ (8)

where C is given by (7). The logarithm of the likelihood function, ℓ = log L, may then be written as

$$\ell = \log C + m\log\alpha + m\alpha\log\beta + n\log(1-p) + (\alpha-1)\sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(1+R_i)(\beta x_i)^{\alpha} - \sum_{i=1}^{m}(2+R_i)\log\left(1 - p\,e^{-(\beta x_i)^{\alpha}}\right),$$ (9)

where $n = m + \sum_{i=1}^{m} R_i$.

Calculating the first partial derivatives of (9) with respect to α, β and p and equating each to zero, we obtain the likelihood equations:

$$\frac{\partial \ell}{\partial \alpha} = \frac{m}{\alpha} + m\log\beta + \sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(1+R_i)(\beta x_i)^{\alpha}\log(\beta x_i) - p\sum_{i=1}^{m}(2+R_i)\frac{(\beta x_i)^{\alpha}\log(\beta x_i)\,e^{-(\beta x_i)^{\alpha}}}{1 - p\,e^{-(\beta x_i)^{\alpha}}} = 0,$$ (10)

$$\frac{\partial \ell}{\partial \beta} = \frac{m\alpha}{\beta} - \frac{\alpha}{\beta}\sum_{i=1}^{m}(1+R_i)(\beta x_i)^{\alpha} - \frac{p\,\alpha}{\beta}\sum_{i=1}^{m}(2+R_i)\frac{(\beta x_i)^{\alpha}\,e^{-(\beta x_i)^{\alpha}}}{1 - p\,e^{-(\beta x_i)^{\alpha}}} = 0$$ (11)

and

$$\frac{\partial \ell}{\partial p} = -\frac{n}{1-p} + \sum_{i=1}^{m}(2+R_i)\frac{e^{-(\beta x_i)^{\alpha}}}{1 - p\,e^{-(\beta x_i)^{\alpha}}} = 0.$$ (12)

Since (10)-(12) cannot be solved analytically for α, β and p, some numerical method such as Newton's method must be employed.
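
Equivalently, one can maximize the log-likelihood directly with a derivative-free optimizer. The sketch below assumes the log-likelihood as reconstructed in (9) (dropping the constant log C) and works on transformed parameters so that the constraints α, β > 0 and 0 < p < 1 hold automatically; all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def wg_loglik(theta, x, R):
    """Log-likelihood (9), up to the additive constant log C, evaluated at
    transformed parameters theta = (log alpha, log beta, logit p)."""
    log_a, log_b, logit_p = theta
    a, b = np.exp(log_a), np.exp(log_b)
    p = 1.0 / (1.0 + np.exp(-logit_p))          # keeps 0 < p < 1
    m, n = len(x), len(x) + R.sum()
    u = (b * x) ** a
    return (m * np.log(a) + m * a * np.log(b) + n * np.log(1 - p)
            + (a - 1) * np.sum(np.log(x))
            - np.sum((1 + R) * u)
            - np.sum((2 + R) * np.log(1 - p * np.exp(-u))))

def wg_mle(x, R):
    """Maximize (9) numerically; returns (alpha_hat, beta_hat, p_hat)."""
    res = minimize(lambda t: -wg_loglik(t, x, R), x0=np.zeros(3),
                   method="Nelder-Mead")
    log_a, log_b, logit_p = res.x
    return np.exp(log_a), np.exp(log_b), 1.0 / (1.0 + np.exp(-logit_p))
```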

Approximate confidence intervals for α, β and p can be obtained from the asymptotic normality of the maximum likelihood estimators: $(\hat\alpha, \hat\beta, \hat p)$ is approximately trivariate normally distributed with mean $(\alpha, \beta, p)$ and covariance matrix $I^{-1}(\hat\alpha, \hat\beta, \hat p)$, the inverse of the observed information matrix. Thus, the approximate $100(1-\gamma)\%$ confidence intervals for α, β and p are

$$\left(\hat\alpha \pm z_{\gamma/2}\sqrt{V_{11}}\right), \qquad \left(\hat\beta \pm z_{\gamma/2}\sqrt{V_{22}}\right), \qquad \left(\hat p \pm z_{\gamma/2}\sqrt{V_{33}}\right),$$ (13)

respectively, where $V_{11}$, $V_{22}$ and $V_{33}$ are the elements on the main diagonal of the covariance matrix $I^{-1}(\hat\alpha, \hat\beta, \hat p)$ and $z_{\gamma/2}$ is the percentile of the standard normal distribution with right-tail probability γ/2.
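
When the analytic second derivatives are tedious, the observed information matrix can be approximated by finite differences. The sketch below is an illustration of (13) rather than the authors' computation; it expects `loglik` to be the log-likelihood as a function of the vector (α, β, p), e.g. the reconstruction in the previous sketch evaluated in the natural parameterization.

```python
import numpy as np
from scipy.stats import norm

def num_hessian(f, theta, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at theta."""
    k = len(theta)
    H = np.zeros((k, k))
    I = np.eye(k)
    for i in range(k):
        for j in range(k):
            H[i, j] = (f(theta + h * (I[i] + I[j])) - f(theta + h * (I[i] - I[j]))
                       - f(theta - h * (I[i] - I[j])) + f(theta - h * (I[i] + I[j]))
                       ) / (4 * h**2)
    return H

def wald_intervals(loglik, mle, gamma=0.05):
    """Approximate 100(1-gamma)% intervals (13): mle +/- z_{gamma/2} sqrt(V_ii)."""
    mle = np.asarray(mle, dtype=float)
    V = np.linalg.inv(-num_hessian(loglik, mle))   # inverse observed information
    z = norm.ppf(1 - gamma / 2)                    # right-tail probability gamma/2
    se = np.sqrt(np.diag(V))
    return np.column_stack([mle - z * se, mle + z * se])
```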

4. Bootstrap Confidence Intervals

The bootstrap is a resampling method for statistical inference. It is commonly used to estimate confidence intervals, but it can also be used to estimate the bias and variance of an estimator or to calibrate hypothesis tests. In this section, we use the parametric percentile bootstrap method suggested by Efron [17] [18] to construct confidence intervals for the parameters. The following steps are followed to obtain a progressive Type II censored bootstrap sample from the WG distribution with parameters α, β and p, based on a simulated progressively Type II censored data set.

Algorithm:

・ From the original data set, compute the ML estimates $(\hat\alpha, \hat\beta, \hat p)$ of the parameters by maximizing (9), i.e., by solving Equations (10)-(12);

・ Use $(\hat\alpha, \hat\beta, \hat p)$ to generate a bootstrap sample $x^{*}$ with the same values of (n, m, R), using the algorithm of Balakrishnan and Sandhu [9];

・ As in step 1, based on $x^{*}$ compute the bootstrap sample estimates of α, β and p, say $\hat\alpha^{*}$, $\hat\beta^{*}$ and $\hat p^{*}$;

・ Repeat steps 2-3 N times, yielding N bootstrap MLEs of α, β and p based on N different bootstrap samples;

・ Arrange all $\hat\alpha^{*}$, $\hat\beta^{*}$ and $\hat p^{*}$ in ascending order to obtain the ordered bootstrap samples $(\hat\vartheta^{*[1]}, \hat\vartheta^{*[2]}, \ldots, \hat\vartheta^{*[N]})$, where $\hat\vartheta^{*}$ stands for $\hat\alpha^{*}$, $\hat\beta^{*}$ or $\hat p^{*}$;

・ Let $G(z) = P(\hat\vartheta^{*} \le z)$ be the cumulative distribution function of $\hat\vartheta^{*}$;

・ Define $\hat\vartheta^{*}_{\mathrm{boot}}(z) = G^{-1}(z)$ for given z. The approximate bootstrap $100(1-\gamma)\%$ confidence interval of $\hat\vartheta$ is given by

$$\left(\hat\vartheta^{*}_{\mathrm{boot}}\left(\tfrac{\gamma}{2}\right),\ \hat\vartheta^{*}_{\mathrm{boot}}\left(1 - \tfrac{\gamma}{2}\right)\right).$$ (14)
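
The algorithm above maps directly onto code. The sketch below assumes the `wg_mle` and `progressive_type2_sample` helpers from the earlier sketches; the percentile cut in the last lines implements (14).

```python
import numpy as np

def percentile_bootstrap_ci(x, R, n_boot=500, gamma=0.05, seed=0):
    """Parametric percentile bootstrap interval (14) for each of alpha, beta, p."""
    rng = np.random.default_rng(seed)
    theta_hat = wg_mle(x, R)                            # step 1: MLEs from the data
    boot = np.empty((n_boot, 3))
    for i in range(n_boot):                             # steps 2-4: resample, refit
        x_boot = progressive_type2_sample(R, *theta_hat, rng=rng)
        boot[i] = wg_mle(x_boot, R)
    # Steps 5-7: ordering and inverting the empirical cdf reduce to percentiles.
    lo = np.percentile(boot, 100 * gamma / 2, axis=0)
    hi = np.percentile(boot, 100 * (1 - gamma / 2), axis=0)
    return np.column_stack([lo, hi])                    # rows: alpha, beta, p
```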

5. Bayesian Estimation Using MCMC

In this section, we consider the Bayes estimation of the unknown parameters. In many practical situations, information about the parameters is available in an independent manner. Thus, here it is assumed that the parameters are independent a priori and that α and β have the following gamma prior distributions:

$$\pi_1(\alpha) \propto \alpha^{a-1} e^{-b\alpha}, \qquad \alpha > 0,$$ (15)

$$\pi_2(\beta) \propto \beta^{c-1} e^{-d\beta}, \qquad \beta > 0.$$ (16)

Here all the hyperparameters a, b, c and d are assumed to be known and non-negative. For the parameter p we take a non-informative prior (NIP), represented by the limiting form of the appropriate natural conjugate (beta) prior,

$$\pi_3(p) \propto \frac{1}{p(1-p)}, \qquad 0 < p < 1.$$ (17)

Therefore, the joint prior of the three parameters can be expressed as

$$\pi(\alpha, \beta, p) \propto \alpha^{a-1}\beta^{c-1} e^{-b\alpha - d\beta}\,\frac{1}{p(1-p)}.$$ (18)

Therefore, the Bayes estimate of any function of α, β and p, say g(α, β, p), under the squared error loss (SEL) function is

$$\hat g_{BS} = E_{\alpha,\beta,p \mid \underline{x}}\left[g(\alpha,\beta,p)\right] = \frac{\int_0^1\!\int_0^\infty\!\int_0^\infty g(\alpha,\beta,p)\,L(\alpha,\beta,p)\,\pi(\alpha,\beta,p)\,d\alpha\,d\beta\,dp}{\int_0^1\!\int_0^\infty\!\int_0^\infty L(\alpha,\beta,p)\,\pi(\alpha,\beta,p)\,d\alpha\,d\beta\,dp}.$$ (19)

We use the MCMC method to generate samples from the posterior distribution and then compute the Bayes estimator of g(α, β, p) under the SEL function.

A wide variety of MCMC schemes are available, and it can be difficult to choose among them. Important subclasses of MCMC methods are the Gibbs sampler and the more general Metropolis-Hastings (M-H) algorithm. The advantage of the MCMC method over the MLE method is that we can always obtain a reasonable interval estimate of the parameters by constructing probability intervals based on the empirical posterior distribution; this is often unavailable in maximum likelihood estimation. Indeed, the MCMC samples may be used to completely summarize the posterior uncertainty about the parameters α, β and p through a kernel estimate of the posterior distribution. This is also true of any function of the parameters.

When practically possible, we give prior and posterior distributions in terms of known densities, such as the Gaussian, binomial, beta, gamma and others. The joint posterior density function of α, β and p is obtained by multiplying the likelihood function (8) by the joint prior (18), which can be written as

$$\pi^{*}(\alpha, \beta, p \mid \underline{x}) \propto L(\alpha, \beta, p)\,\pi(\alpha, \beta, p).$$ (20)
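
For MCMC we only ever need the posterior up to proportionality, so in practice one codes the logarithm of (20). The sketch below combines the log-likelihood reconstructed in (9) with the priors (15)-(18); in particular, the non-informative prior on p is written in the limiting beta form assumed in (17) above, and all names are ours.

```python
import numpy as np

def log_posterior(alpha, beta, p, x, R, a, b, c, d):
    """Log of the joint posterior (20): log-likelihood (9) plus log prior (18),
    dropping additive constants that do not involve the parameters."""
    log_prior = ((a - 1) * np.log(alpha) - b * alpha        # gamma prior (15)
                 + (c - 1) * np.log(beta) - d * beta        # gamma prior (16)
                 - np.log(p) - np.log(1 - p))               # limiting-form NIP (17)
    m, n = len(x), len(x) + R.sum()
    u = (beta * x) ** alpha
    log_lik = (m * np.log(alpha) + m * alpha * np.log(beta) + n * np.log(1 - p)
               + (alpha - 1) * np.sum(np.log(x))
               - np.sum((1 + R) * u)
               - np.sum((2 + R) * np.log(1 - p * np.exp(-u))))
    return log_lik + log_prior
```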

We obtain the Bayes MCMC point estimate of g = g(α, β, p) as

$$\hat g = \frac{1}{N - M}\sum_{i=M+1}^{N} g(\alpha_i, \beta_i, p_i),$$ (21)

where N is the total number of MCMC iterations and M is the burn-in period (that is, the number of iterations before the stationary distribution is achieved), and the posterior variance of g becomes

$$\widehat{V}(g \mid \underline{x}) = \frac{1}{N - M}\sum_{i=M+1}^{N}\left(g(\alpha_i, \beta_i, p_i) - \hat g\right)^{2}.$$ (22)
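
A random-walk Metropolis-Hastings sampler is one concrete way to generate the draws (α_i, β_i, p_i) entering (21) and (22); the paper does not spell out its sampler here, so the proposal scale, iteration counts and reparameterization below are illustrative choices, and `log_posterior` is the helper from the previous sketch.

```python
import numpy as np

def mh_sampler(x, R, a, b, c, d, n_iter=20_000, burn_in=5_000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings on (log alpha, log beta, logit p);
    returns posterior draws of (alpha, beta, p) after discarding the burn-in M."""
    rng = np.random.default_rng(seed)

    def logpost(t):
        al, be, p = np.exp(t[0]), np.exp(t[1]), 1 / (1 + np.exp(-t[2]))
        # add the log-Jacobian of the transformation so the chain targets
        # the posterior of the original parameters
        return (log_posterior(al, be, p, x, R, a, b, c, d)
                + t[0] + t[1] + np.log(p) + np.log(1 - p))

    theta = np.zeros(3)                     # start at alpha = beta = 1, p = 0.5
    lp = logpost(theta)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # M-H accept/reject step
            theta, lp = prop, lp_prop
        draws.append(theta)
    t = np.array(draws[burn_in:])
    return np.column_stack([np.exp(t[:, 0]), np.exp(t[:, 1]),
                            1 / (1 + np.exp(-t[:, 2]))])

# The Bayes estimate (21) and posterior variance (22) are then sample moments:
#   samples = mh_sampler(x, R, a, b, c, d)
#   post_mean, post_var = samples.mean(axis=0), samples.var(axis=0)
#   cred_95 = np.percentile(samples, [2.5, 97.5], axis=0)  # credible intervals
```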

6. Illustrative Example

To illustrate the estimation techniques developed in this article, the actual population values of the parameters are generated from their priors as follows. For given hyperparameters (a, b), a random sample of size 10 is generated from the gamma prior distribution (15); the mean of this random sample is computed and taken as the actual population value of α. That is, the prior parameters are selected so that the population value of α is approximately the mean of its gamma prior. Similarly, for given values (c, d), β is generated in the same way from the gamma prior distribution (16), with the prior parameters again selected so that β is approximately the mean of its gamma prior. A progressive Type II sample (with the chosen values of α, β, p, n, m and censoring scheme R) is then generated from the WG distribution using the algorithm of Balakrishnan and Sandhu [9]; the data are: 0.0212, 0.0463, 0.0568, 0.0686, 0.0764, 0.0832, 0.0933, 0.1031, 0.1496, 0.1485, 0.1511, 0.1536, 0.1603, 0.1685, 0.1985, 0.2097, 0.2176, 0.2643, 0.2696, 0.2809, 0.3156, 0.3744, 0.3941, 0.4196, 0.5236.

Table 1. Different estimates of parameters of WG distribution.

Table 2. MLE, percentile bootstrap CIs and Bootstrap-t CIs based on 500 replications.

Figure 1. Values of α generated by the MCMC method, plotted against simulation number, with the corresponding histogram.

Figure 2. Values of β generated by the MCMC method, plotted against simulation number, with the corresponding histogram.

Figure 3. Values of p generated by the MCMC method, plotted against simulation number, with the corresponding histogram.

Using these data, we compute the approximate MLEs, bootstrap and Bayes estimates of α, β and p via the MCMC method; the results are given in Table 1 and Table 2. Note that Table 2 gives the 95% approximate MLE confidence intervals, two bootstrap confidence intervals and approximate credible intervals based on the MCMC samples. Figures 1-3 show, for each WG parameter, the values generated by the MCMC method and the corresponding histogram.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Green, E.J., Roesch Jr., F.A., Smith, A.F.M. and Strawderman, W.E. (1994) Bayes Estimation for the Three Parameter Weibull Distribution with Tree Diameter Data. Biometrics, 50, 254-269.
[2] Adamidis, K. and Loukas, S. (1998) A Lifetime Distribution with Decreasing Failure Rate. Statistics & Probability Letters, 39, 35-42.
[3] Marshall, A.W. and Olkin, I. (1997) A New Method for Adding a Parameter to a Family of Distributions with Application to the Exponential and Weibull Families. Biometrika, 84, 641-652.
http://dx.doi.org/10.1093/biomet/84.3.641
[4] Adamidis, K., Dimitrakopoulou, T. and Loukas, S. (2005) On a Generalization of the Exponential-Geometric Distribution. Statistics & Probability Letters, 73, 259-269.
[5] Kus, C. (2007) A New Lifetime Distribution. Computational Statistics & Data Analysis, 51, 4497-4509.
http://dx.doi.org/10.1016/j.csda.2006.07.017
[6] Souza, W.A., Morais, A.L. and Cordeiro, G.M. (2010) The Weibull-Geometric Distribution. Journal of Statistical Computation and Simulation, 81, 645-657.
http://dx.doi.org/10.1080/00949650903436554
[7] Barreto-Souza, W. (2011) The Weibull-Geometric Distribution. Journal of Statistical Computation and Simulation, 81, 645-657.
http://dx.doi.org/10.1080/00949650903436554
[8] Hamedani, G.G. and Ahsanullah, M. (2011) Characterizations of Weibull-Geometric Distribution. Journal of Statistical Theory and Applications, 10, 581-590.
[9] Balakrishnan, N. and Sandhu, R.A. (1995) A Simple Simulation Algorithm for Generating Progressively Type-II Censored Samples. The American Statistician, 49, 229-230.
[10] Balakrishnan, N. and Aggarwala, R. (2000) Progressive Censoring: Theory, Methods, and Applications. Birkhauser, Boston.
http://dx.doi.org/10.1007/978-1-4612-1334-5
[11] Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. (1996) Markov Chain Monte Carlo in Practice. Chapman and Hall, London.
[12] Gamerman, D. (1997) Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. Chapman and Hall, London.
[13] Geman, S. and Geman, D. (1984) Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
http://dx.doi.org/10.1109/TPAMI.1984.4767596
[14] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953) Equation of State Calculations by Fast Computing Machines. Journal of Chemical Physics, 21, 1087-1091.
http://dx.doi.org/10.1063/1.1699114
[15] Hastings, W.K. (1970) Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57, 97-109.
http://dx.doi.org/10.1093/biomet/57.1.97
[16] Gelfand, A.E. and Smith, A.F.M. (1990) Sampling-Based Approaches to Calculating Marginal Densities. Journal of the American Statistical Association, 85, 398-409.
http://dx.doi.org/10.1080/01621459.1990.10476213
[17] Efron, B. and Tibshirani, R.J. (1993) An Introduction to the Bootstrap. Chapman and Hall, New York.
http://dx.doi.org/10.1007/978-1-4899-4541-9
[18] Efron, B. (1982) The Jackknife, the Bootstrap and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics, 38, SIAM, Philadelphia.
http://dx.doi.org/10.1137/1.9781611970319
