Estimation Based on Progressive First-failure Censored Sampling with Binomial Removals

Copyright © 2013 Ahmed A. Soliman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT: In this paper, inference for the Burr-X model under the progressive first-failure censoring scheme is discussed. Under this censoring scheme, the number of units removed at each failure time follows a discrete binomial distribution. The maximum likelihood, bootstrap, and Bayes estimates for the Burr-X distribution are obtained. The Bayes estimators are derived under both symmetric and asymmetric loss functions. Approximate confidence intervals and the highest posterior density interval (HPDI) are discussed. A numerical example is provided to illustrate the proposed estimation methods, and the maximum likelihood and the different Bayes estimates are compared via a Monte Carlo simulation study.


Introduction
Censoring is common in life-distribution work because of time limits and other restrictions on data collection. Censoring occurs when exact lifetimes are known only for a portion of the individuals or units under study, while for the remainder only partial information on the lifetimes is available. However, when the lifetimes of products are very high, the experimental time of a type II censoring life test can still be too long. A generalization of type II censoring is progressive type II censoring, which is useful when the loss of live test units at points other than the termination point is unavoidable. Johnson [1] described a life test in which the experimenter might decide to group the test units into several sets, each an assembly of test units, and then run all the test units simultaneously until the occurrence of the first failure in each group. Such a censoring scheme is called first-failure censoring. Wu and Kuş [2] obtained maximum likelihood estimates, exact confidence intervals, and exact confidence regions for the parameters of the Weibull distribution under progressive first-failure censored sampling. Note that a first-failure censoring scheme is terminated when the first failure in each set is observed. If an experimenter wishes to remove some sets of test units before observing the first failures in these sets, the life test plan is called a progressive first-failure censoring scheme, which was recently introduced by Wu and Kuş [2]. Recently, the estimation of parameters from different lifetime distributions based on progressive type II censored samples has been studied by several authors, including Gupta et al. [3], Childs and Balakrishnan [4], Siu Keung Tse et al. [5], Mosa and Jaheen [6], Ng et al. [7], Wu and Chang [8], and Balakrishnan et al.
[9], Wu [10], Soliman [11], and Sarhan and Abuammoh [12]. However, in some reliability experiments the number of patients dropping out of the experiment cannot be fixed in advance; it is random. In such situations, progressive censoring schemes with random removals are needed. Therefore, the purpose of this paper is to develop Bayes estimation (under symmetric and asymmetric loss functions) for the parameters of the Burr-X distribution under the progressive first-failure censoring plan with random removals, and to construct bootstrap confidence intervals for the parameters.
If X follows a Burr-X distribution with shape parameter θ, then the probability density function (pdf) and cumulative distribution function (cdf) of X are given respectively by

f(x; θ) = 2θ x e^(-x^2) (1 - e^(-x^2))^(θ-1),  x > 0, θ > 0,    (1)

F(x; θ) = (1 - e^(-x^2))^θ,  x > 0, θ > 0.    (2)

The rest of this paper is organized as follows. In Section 2, we describe the formulation of the progressive first-failure censoring scheme as given by Wu and Kuş [2]. The point estimation of the parameters of the Burr-X distribution and the binomial distribution based on the progressive first-failure censoring scheme is investigated in Section 3. In Section 4, we discuss the approximate interval estimation and the highest posterior density interval (HPDI) for the Burr-X distribution under the progressive first-failure censored sampling plan. A numerical example is presented in Section 5 for illustration. In Section 6 we provide some simulation results in order to assess the performance of the estimation methods.
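Equations (1) and (2) invert in closed form, which makes simulation of the Burr-X model straightforward. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def burr_x_pdf(x, theta):
    """Burr-X pdf: f(x; theta) = 2*theta*x*exp(-x^2)*(1 - exp(-x^2))^(theta - 1), x > 0."""
    u = 1.0 - math.exp(-x * x)
    return 2.0 * theta * x * math.exp(-x * x) * u ** (theta - 1.0)

def burr_x_cdf(x, theta):
    """Burr-X cdf: F(x; theta) = (1 - exp(-x^2))^theta, x > 0."""
    return (1.0 - math.exp(-x * x)) ** theta

def burr_x_quantile(q, theta):
    """Inverse cdf F^{-1}(q) = sqrt(-log(1 - q^(1/theta))); handy for simulation."""
    return math.sqrt(-math.log(1.0 - q ** (1.0 / theta)))
```

As a quick self-check, `burr_x_cdf(burr_x_quantile(q, theta), theta)` returns `q`.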

A Progressive First-Failure Censoring Scheme
In this section, first-failure censoring is combined with progressive censoring as in Wu and Kuş [2]. Suppose that n independent groups with k items within each group are put on a life test. R_1 groups and the group in which the first failure is observed are randomly removed from the test as soon as the first failure (say X_{1:m:n:k}) has occurred; R_2 groups and the group in which the second failure is observed are randomly removed as soon as the second failure (say X_{2:m:n:k}) has occurred; and finally, R_m groups and the group in which the m-th failure (say X_{m:m:n:k}) is observed are removed as soon as the m-th failure has occurred. The X_{1:m:n:k} < X_{2:m:n:k} < ... < X_{m:m:n:k} are called progressively first-failure censored order statistics with the progressive censoring scheme R = (R_1, R_2, ..., R_m). It is clear that n = m + R_1 + R_2 + ... + R_m. If the failure times of the n × k items originally in the test come from a continuous population with distribution function F(x) and probability density function f(x), the joint pdf of X_{1:m:n:k}, X_{2:m:n:k}, ..., X_{m:m:n:k} is

f(x_1, x_2, ..., x_m) = c k^m ∏_{i=1}^{m} f(x_i) [1 - F(x_i)]^{k(R_i + 1) - 1},  x_1 < x_2 < ... < x_m,    (3)

where c = n(n - R_1 - 1)(n - R_1 - R_2 - 2) ... (n - R_1 - ... - R_{m-1} - m + 1). There are four special cases. The first is k = 1, which reduces to the progressive type II censored order statistics; the second is R = (0, ..., 0), which gives the first-failure censored order statistics; the third is k = 1 and R = (0, ..., 0, n - m), which corresponds to type II censoring; and the fourth is k = 1 and R = (0, ..., 0) with n = m, which is the complete sample.

Point Estimation
In many cases there will be an obvious or natural candidate for a point estimator of a particular parameter; for example, the sample mean is a natural candidate for a point estimator of the population mean. In this section, we estimate θ and p by considering maximum likelihood, bootstrap, and Bayes estimates. In the Bayesian approach, we consider both the symmetric (squared error, SE) loss function and asymmetric (linear exponential, LINEX, and general entropy, GE) loss functions.

Maximum Likelihood Estimation (MLE)
Let X_{1:m:n:k} < X_{2:m:n:k} < ... < X_{m:m:n:k} be the progressively first-failure censored order statistics from a Burr-X distribution with censoring scheme R. From (3), the likelihood function of θ given the observed data x = (x_1, ..., x_m) is

L_1(θ; x) ∝ θ^m ∏_{i=1}^{m} x_i e^(-x_i^2) (1 - e^(-x_i^2))^(θ-1) [1 - (1 - e^(-x_i^2))^θ]^{k(R_i + 1) - 1}.    (16)

Suppose that any individual group being removed from the test is independent of the others but occurs with the same probability p. Then the number of groups removed at each failure time follows a binomial distribution: R_1 ~ binomial(n - m, p), and for i = 2, 3, ..., m - 1,

R_i | R_1, ..., R_{i-1} ~ binomial(n - m - Σ_{j=1}^{i-1} R_j, p),

where m is predetermined before the testing. Suppose further that R_i is independent of X_i. Then the likelihood function takes the form L(θ, p; x, R) = A L_1(θ; x) L_2(p; R), where A is a constant not depending on θ or p, and

L_2(p; R) = p^{Σ_{i=1}^{m-1} R_i} (1 - p)^{(m-1)(n-m) - Σ_{i=1}^{m-1} (m-i) R_i}.    (17)
It is obvious that L_1 in Equation (16) does not involve p; thus the maximum likelihood estimate (MLE) of θ can be derived by maximizing Equation (16) directly. On the other hand, L_2 in Equation (17) does not depend on the parameter θ, so the MLE of p can be obtained directly by maximizing Equation (17). Taking logarithms, the MLEs θ̂ and p̂ are found by solving ∂ ln L_1/∂θ = 0 and ∂ ln L_2/∂p = 0. The first equation is nonlinear in θ and is solved numerically, while the second yields the closed form

p̂ = Σ_{i=1}^{m-1} R_i / [(m-1)(n-m) - Σ_{i=1}^{m-1} (m-i-1) R_i].
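As a hedged numerical sketch of these two maximizations, assuming the likelihood forms (16) and (17) above (helper names are ours): the log of (16) is maximized in θ by a one-dimensional golden-section search, while p has a closed form from (17).

```python
import math

def loglik_theta(theta, xs, removals, k):
    """Log of the likelihood (16): sum of log f(x_i; theta) plus
    (k*(R_i + 1) - 1) * log(1 - F(x_i; theta)) for the Burr-X model."""
    ll = 0.0
    for x, r in zip(xs, removals):
        u = 1.0 - math.exp(-x * x)
        ll += math.log(2.0 * theta * x) - x * x + (theta - 1.0) * math.log(u)
        ll += (k * (r + 1) - 1) * math.log(1.0 - u ** theta)
    return ll

def mle_theta(xs, removals, k, lo=1e-6, hi=50.0, iters=200):
    """Golden-section search for the maximizer of loglik_theta on [lo, hi]."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if loglik_theta(c, xs, removals, k) < loglik_theta(d, xs, removals, k):
            a = c
        else:
            b = d
    return 0.5 * (a + b)

def mle_p(removals, n, m):
    """Closed-form MLE of p: total removals over total binomial trials
    (R_m is excluded because it is determined by the earlier removals)."""
    succ = sum(removals[:m - 1])
    trials, drawn = 0, 0
    for i in range(m - 1):
        trials += n - m - drawn
        drawn += removals[i]
    return succ / trials
```

For a complete sample (k = 1 and all R_i = 0), the search reproduces the textbook closed form θ̂ = -m / Σ log(1 - exp(-x_i^2)).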

Bootstrap Confidence Intervals
In this subsection, we use the parametric bootstrap percentile method suggested by Efron [13] to construct confidence intervals for the parameters. The following steps are followed to obtain a progressive first-failure censored bootstrap sample from the Burr-X distribution with parameter θ and the binomial distribution with parameter p, based on a simulated progressively first-failure censored data set with random removals.
p  From the original data 1: : : 2: : : : : : , using algorithm presented in Balakrishnan and Sandhu [14] with distribution func- in an ascending order to obtain the bootstrap sample

Bayes Estimation
Bayesian inference procedures are often developed under the usual SE loss function (quadratic loss), which is symmetric and attaches equal importance to the losses due to overestimation and underestimation of equal magnitude. However, such a restriction may be impractical. For example, in the estimation of reliability and failure-rate functions, an overestimation is usually much more serious than an underestimation; in this case the use of a symmetric loss function might be inappropriate, as has been recognized by Basu and Ebrahimi [15], and Canfield [16].
A useful asymmetric loss, known as the LINEX loss function, was introduced by Zimmer et al. [17], and has been widely used in several papers, for example by Balasooriya and Balakrishnan [18], Soliman [19], and Soliman [20]. This function rises approximately exponentially on one side of zero and approximately linearly on the other side. Under the assumption that the minimal loss occurs at û = u, the LINEX loss function for u can be expressed as

L(Δ) ∝ e^(aΔ) - aΔ - 1,  a ≠ 0,    (23)

where Δ = û - u and û is an estimate of u.


The sign and magnitude of a represent the direction and degree of asymmetry, respectively (a > 0 means overestimation is more serious than underestimation, and a < 0 means the opposite). For a close to zero, the LINEX loss function is approximately the squared error (SE) loss and is therefore almost symmetric. The posterior expectation of the LINEX loss function (23) is

E_u[L(û - u)] ∝ e^(aû) E_u[e^(-au)] - a(û - E_u[u]) - 1,    (24)

where E_u[·] denotes the posterior expectation with respect to the posterior density of u. The Bayes estimator of u under the LINEX loss function is

û_BL = -(1/a) ln(E_u[e^(-au)]),    (25)

provided that E_u[e^(-au)] exists and is finite.
Another useful asymmetric loss function is the general entropy (GE) loss,

L(û, u) ∝ (û/u)^b - b ln(û/u) - 1,

whose minimum occurs at û = u. This loss function is a generalization of the entropy loss used in several papers, for example by Dey et al. [21] and Dey and Liu [22], where b = 1. When b > 0, a positive error causes more serious consequences than a negative error. The Bayes estimate of u under the GE loss is

û_BG = (E_u[u^(-b)])^(-1/b),

provided that E_u[u^(-b)] exists and is finite.
Now, we assume that the parameters θ and p behave as independent random variables. For θ we use a gamma prior with known parameters,

π_1(θ) ∝ θ^(α-1) e^(-βθ),  θ > 0,    (28)

while p has a beta prior distribution with known parameters ν and λ,

π_2(p) ∝ p^(ν-1) (1 - p)^(λ-1),  0 < p < 1.    (29)

The marginal posterior pdf of θ is π_1*(θ | x) ∝ L_1(θ; x) π_1(θ), where L_1 is the likelihood function (16) and π_1 the prior density (28). Expanding the factors [1 - (1 - e^(-x_i^2))^θ]^{k(R_i+1)-1} binomially, the posterior of θ can be written as a mixture of gamma densities with shape parameter m + α. Similarly, the marginal posterior pdf of p is π_2*(p | R) ∝ L_2(p; R) π_2(p), where L_2 is the likelihood function (17) and π_2 the prior density (29). We note that the posterior distribution of p is beta with parameters Σ_{i=1}^{m-1} R_i + ν and (m-1)(n-m) - Σ_{i=1}^{m-1} (m-i) R_i + λ.

Symmetric Bayes Estimation
Under the SE loss function, the Bayes estimates of θ and p are the corresponding posterior means. The estimate of θ follows from the posterior (31), where the quantities q_j are defined in (8) and (32). Similarly, the estimate of p is the mean of the beta posterior in (34), whose parameters are defined in (35).

Asymmetric Bayes Estimation
LINEX loss function: If in (25) u ≡ θ, then the Bayes estimator θ̂_BL of the parameter θ relative to the LINEX loss function is

θ̂_BL = -(1/a) ln(E[e^(-aθ) | x]),

and from (31) we get the required posterior expectation. Similarly, if in (25) u ≡ p, then the Bayes estimator p̂_BL of the parameter p relative to the LINEX loss function is

p̂_BL = -(1/a) ln(E[e^(-ap) | R]),

and from (34) we obtain the corresponding expectation. One can use a numerical integration technique to evaluate the integral in (43).

General Entropy loss function: The Bayes estimator of θ relative to the GE loss function is θ̂_BG = (E[θ^(-b) | x])^(-1/b), and from (31) we obtain the required posterior expectation. The Bayes estimator of p follows in the same way from (34).
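When the posterior expectations involved here are awkward to evaluate analytically, they can be approximated by Monte Carlo from posterior draws. A sketch of the three Bayes rules (function names are ours; `samples` stands for draws from the relevant posterior):

```python
import math

def bayes_se(samples):
    """Squared-error (SE) loss: the Bayes estimate is the posterior mean."""
    return sum(samples) / len(samples)

def bayes_linex(samples, a):
    """LINEX loss: u_BL = -(1/a) * log E[exp(-a*u)], a != 0."""
    return -math.log(sum(math.exp(-a * t) for t in samples) / len(samples)) / a

def bayes_ge(samples, b):
    """General entropy (GE) loss: u_BG = (E[u^(-b)])^(-1/b), b != 0."""
    return (sum(t ** (-b) for t in samples) / len(samples)) ** (-1.0 / b)
```

For a > 0 the LINEX rule lies below the posterior mean (penalizing overestimation), and as a tends to 0 it approaches the SE estimate, matching the remarks above.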

Approximate Interval Estimation
The asymptotic variances and covariances of the MLEs of the parameters θ and p are given by the elements of the inverse of the Fisher information matrix

I(θ, p) = -E[ ∂²ln L/∂θ²  ∂²ln L/∂θ∂p ; ∂²ln L/∂p∂θ  ∂²ln L/∂p² ].

Unfortunately, the exact mathematical expressions for the above expectations are very difficult to obtain. Therefore, we use the approximate (observed) asymptotic variance-covariance matrix for the MLEs, obtained by dropping the expectation operator E and evaluating the second derivatives at (θ̂, p̂). The asymptotic normality of the MLEs can then be used to compute approximate confidence intervals for the parameters θ and p:

θ̂ ± z_(α/2) √var(θ̂)  and  p̂ ± z_(α/2) √var(p̂),

where z_(α/2) is the percentile of the standard normal distribution with right-tail probability α/2.
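As a one-parameter illustration of this construction (a sketch only: the paper inverts the full 2 × 2 observed information matrix, while here we treat a single parameter and differentiate its log-likelihood numerically; function names are ours):

```python
import math

def wald_ci(theta_hat, loglik, z=1.96, h=1e-4):
    """Approximate CI from the observed information: estimate the second
    derivative of the log-likelihood at the MLE by central differences,
    take var = -1 / d2, and form theta_hat +/- z * sqrt(var)."""
    d2 = (loglik(theta_hat + h) - 2.0 * loglik(theta_hat) + loglik(theta_hat - h)) / (h * h)
    var = -1.0 / d2
    half = z * math.sqrt(var)
    return theta_hat - half, theta_hat + half
```

With z = 1.96 this gives an approximate 95% interval; for a unit-variance normal-mean log-likelihood, the observed information equals the sample size, recovering the familiar mean ± 1.96/√n interval.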

Highest Posterior Density Interval (HPDI)
In general, Bayesian interval estimation is much more direct than the frequentist classical method. Now, having obtained the posterior distribution π*(θ | data), we ask: "How likely is it that the parameter θ lies within the specified interval (θ_L, θ_U)?" Bayesians call such an interval based on the posterior distribution a "credible interval".
For the shortest credible interval, we have to minimize the length θ_U - θ_L subject to ∫ from θ_L to θ_U of π*(θ | x) dθ = 1 - α; the interval which simultaneously satisfies both conditions (53) and (54) is called the "shortest" 100(1 - α)% credible interval. A highest posterior density interval (HPDI) is such that the posterior density for every point inside the interval is greater than that for every point outside of it. For a unimodal, but not necessarily symmetrical, posterior density, the shortest credible interval and the HPDI are identical. We now proceed to obtain the HPDI (θ_L, θ_U) for θ, given by the simultaneous solution of the equations

∫ from θ_L to θ_U of π_1*(θ | x) dθ = 1 - α  and  π_1*(θ_L | x) = π_1*(θ_U | x).    (55)

Similarly, using the posterior pdf of p in (34), the HPDI (p_L, p_U) for the parameter p is given by the simultaneous solution of the analogous pair of equations.    (56)
To obtain the HPDI from (55) and (56), one may employ a mathematical package, such as Mathematica, to solve for the interval endpoints.
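Alternatively, when the posterior is summarized by Monte Carlo draws, the HPDI of a unimodal posterior can be approximated as the shortest interval containing the required posterior mass; a sketch (function name is ours):

```python
import math

def hpdi_from_samples(samples, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted draws;
    for a unimodal posterior this approximates the HPD interval."""
    s = sorted(samples)
    n = len(s)
    w = max(1, int(math.ceil(cred * n)))
    best = min(range(n - w + 1), key=lambda i: s[i + w - 1] - s[i])
    return s[best], s[best + w - 1]
```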

Numerical Example
Based on the above data, a progressive first-failure censored sample with binomial removals was generated using the algorithm described in Balakrishnan and Sandhu [14], applied with the distribution function 1 - (1 - F(x; θ))^k of the group minimum; see Wu and Kuş [2].
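A hedged sketch of that generation step (function names are ours; we draw the binomial removals sequentially and then apply the Balakrishnan-Sandhu uniform transformation to the group-minimum cdf 1 - (1 - F)^k):

```python
import math
import random

def binomial_removals(n, m, p, rng):
    """Draw R_1, ..., R_m: R_i ~ binomial(n - m - earlier removals, p),
    with R_m set to however many groups remain at the last failure."""
    rs, left = [], n - m
    for _ in range(m - 1):
        r = sum(rng.random() < p for _ in range(left))
        rs.append(r)
        left -= r
    rs.append(left)
    return rs

def pff_sample(theta, k, rs, rng):
    """Progressively first-failure censored sample from Burr-X(theta) with
    group size k and scheme rs, via Balakrishnan-Sandhu: a PFF sample is a
    progressive type-II sample from the cdf 1 - (1 - F)^k."""
    m = len(rs)
    vs = []
    for i in range(1, m + 1):
        gamma_i = i + sum(rs[m - i:])        # i + R_m + ... + R_{m-i+1}
        vs.append(rng.random() ** (1.0 / gamma_i))
    xs, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= vs[m - i]                    # V_m * V_{m-1} * ... * V_{m-i+1}
        u = 1.0 - prod                       # uniform progressive order statistic
        q = 1.0 - (1.0 - u) ** (1.0 / k)     # back out F(x) from the group minimum
        xs.append(math.sqrt(-math.log(1.0 - q ** (1.0 / theta))))  # Burr-X inverse cdf
    return xs
```

Calling `binomial_removals(n, m, p, random.Random(seed))` fixes the scheme first; `pff_sample` then returns m ordered failure times.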

Simulation Study
In order to compare the different estimators of the parameters, we simulated 1000 progressively first-failure censored samples from a Burr type X distribution for the chosen parameter values and different combinations of n, m, k, and random censoring schemes R. The samples were simulated by using the algorithm described in Balakrishnan and Sandhu [14]. The simulation was conducted in order to study the properties of, and compare the performance of, the Bayes estimators and the maximum likelihood estimators.

The mean square errors (MSE) of the Bayes estimates and the maximum likelihood estimates are computed over different combinations of the random censoring schemes, as shown in Tables 3 and 4. To assess the effect of the shape parameters a and b, note from Tables 3 and 4 that the Bayes estimates relative to both the LINEX loss and the GE loss (for a and b close to 0) coincide with the SE-loss Bayes estimates; this is one of the useful properties of working with the LINEX loss function. We found that, for different choices of n, m, k, and random censoring scheme, the MSEs of the Bayes estimates based on both symmetric and asymmetric loss functions are smaller than the MSEs of the MLEs, and all MSEs decrease as the effective sample proportion increases.

Conclusion
The purpose of this paper was to develop a Bayesian analysis for the Burr-X distribution based on progressively first-failure censored data with binomial removals. Maximum likelihood, bootstrap, and Bayes estimates, under symmetric and asymmetric loss functions, were obtained, together with approximate confidence intervals and highest posterior density intervals. The simulation results indicate that the Bayes estimates compare favorably with the maximum likelihood estimates in terms of mean squared error.

Example 1 (simulated data). To illustrate the use of the estimation methods proposed in this paper, a set of data consisting of 75 observations was generated from a Burr-X distribution with parameter θ and randomly grouped into 15 sets. The generated data are listed below:

0.3194 0.4661 0.8348 0.1150 0.1230 0.2136 0.1373 0.2053 0.7253 0.7738 0.9407 0.1516 0.5111 0.4148 0.1599 0.3227 0.9233 0.8316 0.9615 0.2006 0.9758 0.6618 0.4116 0.9088 0.9787 0.8461 0.9795 0.7353 1.2692 0.8632 0.4503 1.1337 0.9956 0.9732 1.2067 1.0935 1.8144 0.9698 1.0344 2.0543 0.1775 0.3165 0.2732 0.2832 0.2752 0.2814 0.2761 0.3363 0.7871 0.4714 0.6613 0.5764 0.7273 0.5616 0.5353 0.8052 0.5134 0.6790 0.7441 0.7602 0.6529 0.8312 0.8695 0.9356 0.9049 0.9939 1.2178 0.9328 1.9019 1.0115 1.1291 1.5136 1.2023 1.7956 1.2521

Now, we consider the following case.

Case I: Progressive first-failure censored data with binomial removals.

Algorithm:
1) Specify the value of n.
2) Specify the value of m.
3) Generate values of the parameters θ and p using the prior densities (28) and (29), for some given values of the prior parameters.
4) Generate a random number u_i from the uniform (0, 1) distribution for each i.
5) Set R_i according to the binomial removal relation of Section 3.
6) Use the ML estimates of the parameters, θ̂ and p̂, to generate a bootstrap sample x* and R* with the same values of n, m, and k, as in Wu and Kuş [2].

Different point estimates of θ and p were computed for all cases. Interval estimates were obtained using the approximate confidence interval (ACI), the confidence interval based on the bootstrap re-sampling method (Boot CI), and the highest posterior density interval (HPDI). All the results are listed in Table 2. One can see that the asymmetric Bayes estimates (BL, BG) of the parameters θ and p overestimate for some choices of the loss-function shape parameters (a, b) and underestimate for others. Also, the MSEs of the Bayes estimates are smaller than the MSEs of the MLEs, and for b close to 0 the Bayes estimates relative to the GE loss coincide with the SE-loss Bayes estimates. As anticipated, the MSEs of both the Bayes and maximum likelihood estimates decrease as the effective sample proportion increases, and the censoring scheme R = (n - m, 0, ..., 0) appears to be the most efficient, usually providing the smallest MSE for each estimate of θ and p.

Table 1. Different point estimates of θ and p for all cases.