Algorithms of Confidence Intervals of WG Distribution Based on Progressive Type-II Censoring Samples
1. Introduction
Statistical distributions occupy an important place in many branches of computer science because of the large number of their particular applications. Applied to images, structured masks based on the Weibull distribution yield good results for the diagnosis of early Alzheimer's disease [1] . The paper [2] explores the relationship between visible content and real image statistics sampled by the integrated Weibull distribution, and it shows a strong relationship between brain responses and the parameter values using brain images. Moreover, a simulated model with parameters estimated from EEG responses is discussed in [3] .
Weibull distributions are of great interest to statisticians, because of their large number of particular features, and to practitioners, because of their ability to fit data from several fields, ranging from real lifetime data to observations made in economics, weather data, acceptance sampling, hydrology, biology, etc. [4] . This article deals with the Weibull-geometric (WG) distribution.
The Weibull-Geometric (WG), Exponential-Poisson (EP), Weibull-Power-Series (WPS), Complementary-Exponential-Geometric (CEG), Exponential-Geometric (EG), Generalized-Exponential-Power-Series (GEPS), Exponential-Weibull-Poisson (EWP), and Generalized-Inverse-Weibull-Poisson (GIWP) distributions are introduced and presented by Adamidis and Loukas [5] , Kus [6] , Chahkandi and Ganjali [7] , Tahmasbi and Rezaei [8] , Barreto [9] , Morais and Barreto [10] , Barreto and Cribari [11] , Louzada et al. [12] , and Cancho et al. [13] . Hamedani and Ahsanullah [14] studied and discussed many properties of the WG distribution, such as moments, hazard functions, and functions of order statistics.
Barreto-Souza [9] proposed and studied the WG distribution. The modified Weibull-geometric distribution was introduced by compounding the modified Weibull and geometric distributions, and it was studied as a class of lifetime distributions [15] . MohieEl-Din et al. [16] [17] and Elhag et al. [18] studied confidence intervals for the parameters of the inverse Weibull distribution based on the MLE and the bootstrap.
The paper is organized as follows: the probability density function and cumulative distribution function of the WG distribution are presented in Section 2. Section 3 reviews Markov chain Monte Carlo algorithms. The maximum likelihood estimates of the parameters of the WG distribution, the point and interval estimates of the parameters, and the approximate joint confidence region are studied in Section 4. The parametric bootstrap confidence intervals of the parameters are discussed in Section 5. Bayes estimation of the model parameters and the Gibbs sampling algorithm are provided in Section 6. Data analysis and Monte Carlo simulation results are presented in Section 7. Section 8 concludes the paper.
2. WG Distributions
It is assumed that there are n independent groups, each containing k items, placed on a life test. Consider the progressive censoring scheme $R = (R_1, R_2, \ldots, R_m)$ such that: $R_1$ groups, together with the group in which the first failure is observed, are randomly removed from the test when the first failure $X^{R}_{1:m:n:k}$ occurs; similarly, $R_2$ groups, together with the group in which the second failure is observed, are randomly removed from the test as soon as the second failure $X^{R}_{2:m:n:k}$ occurs; and finally, when the m-th failure $X^{R}_{m:m:n:k}$ occurs, the remaining $R_m$ groups are randomly removed from the test. Therefore, $X^{R}_{1:m:n:k} < X^{R}_{2:m:n:k} < \cdots < X^{R}_{m:m:n:k}$ are known as the progressively first-failure censored order statistics, where m is the number of first failures and $n = m + R_1 + R_2 + \cdots + R_m$. If the failure times of the $n \times k$ items come from a continuous population with distribution function $F(x)$ and probability density function $f(x)$, the joint probability density function of $X^{R}_{1:m:n:k}, X^{R}_{2:m:n:k}, \ldots, X^{R}_{m:m:n:k}$ is given by (see Balakrishnan and Sandhu [19] )

$$f_{1,2,\ldots,m}\left(x_1, x_2, \ldots, x_m\right) = A\, k^{m} \prod_{i=1}^{m} f(x_i)\left[1 - F(x_i)\right]^{k(R_i+1)-1}, \qquad 0 < x_1 < x_2 < \cdots < x_m < \infty, \qquad (1)$$

and

$$A = n\,(n - R_1 - 1)(n - R_1 - R_2 - 2)\cdots\left(n - R_1 - R_2 - \cdots - R_{m-1} - m + 1\right). \qquad (2)$$
There are special cases of the progressive first-failure censoring scheme in Equation (1), as follows:
1) When $R = (0, 0, \ldots, 0)$, the first-failure censoring scheme is obtained.
2) When $k = 1$, the progressive Type-II censored order statistics are obtained.
3) When $k = 1$ and $R = (0, 0, \ldots, 0)$, the complete sample case is obtained.
Generally, the progressively first-failure censored order statistics $X^{R}_{1:m:n:k}, \ldots, X^{R}_{m:m:n:k}$ can be viewed as progressive Type-II censored order statistics from a population whose distribution function is $1 - \left(1 - F(x)\right)^{k}$. Hence, results for progressive Type-II censoring can easily be extended to progressive first-failure censored order statistics. The progressive first-failure censoring plan reduces the testing time when $n \times k$ items are placed on test but only m failures are observed; a simulation sketch for such samples is given below.
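The following is a minimal sketch of how a progressively first-failure censored sample can be simulated, in the spirit of the uniform transformation algorithm attributed above to Balakrishnan and Sandhu [19]; the lifetime quantile function, censoring scheme and group size in the usage lines are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def progressive_first_failure_sample(quantile, R, k=1, rng=None):
    """Generate a progressively first-failure censored sample: uniform
    progressive Type-II order statistics are generated first and then passed
    through the quantile function of the population with distribution
    1 - (1 - F(x))^k.  `quantile` is F^{-1} of the item lifetime distribution
    and R = (R_1, ..., R_m) is the censoring scheme."""
    rng = np.random.default_rng() if rng is None else rng
    R = np.asarray(R)
    m = len(R)
    W = rng.uniform(size=m)
    # V_i = W_i^{1 / (i + R_m + ... + R_{m-i+1})}
    V = W ** (1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}, i = 1, ..., m (increasing)
    U = 1.0 - np.cumprod(V[::-1])
    # Group structure with k items per group: use the cdf 1 - (1 - F)^k.
    return quantile(1.0 - (1.0 - U) ** (1.0 / k))

# Toy usage with an exponential(1) lifetime (quantile -log(1 - u)), k = 2
# items per group and a hypothetical scheme R = (1, 0, 2, 0, 0).
x = progressive_first_failure_sample(lambda u: -np.log1p(-u),
                                     R=[1, 0, 2, 0, 0], k=2,
                                     rng=np.random.default_rng(0))
print(np.round(x, 4))
```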
The probability density function (pdf) of the WG distribution is given by

$$f(x;\alpha,\beta,p) = \alpha\beta^{\alpha}(1-p)\, x^{\alpha-1} e^{-(\beta x)^{\alpha}} \left(1 - p\, e^{-(\beta x)^{\alpha}}\right)^{-2}, \qquad x > 0, \qquad (3)$$

and the cumulative distribution function (cdf) of the WG distribution is given by

$$F(x;\alpha,\beta,p) = \frac{1 - e^{-(\beta x)^{\alpha}}}{1 - p\, e^{-(\beta x)^{\alpha}}}, \qquad x > 0, \qquad (4)$$

where $\alpha > 0$, $\beta > 0$ and $p \in (0,1)$ are parameters. The parameters $\alpha$ and $\beta$ stand for the shape and scale parameters, while p stands for the mixing parameter.
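As a quick numerical check (my illustration, not part of the paper), the pdf (3) and cdf (4) can be coded directly; the parameter values alpha = 2, beta = 1.5 and p = 0.4 below are assumptions chosen only for the demonstration.

```python
import numpy as np
from scipy.integrate import quad

def wg_pdf(x, alpha, beta, p):
    """WG density (3): alpha, beta > 0 are shape/scale, 0 < p < 1 is mixing."""
    t = (beta * x) ** alpha
    return (alpha * beta**alpha * (1 - p) * x**(alpha - 1)
            * np.exp(-t) / (1 - p * np.exp(-t))**2)

def wg_cdf(x, alpha, beta, p):
    """WG distribution function (4)."""
    z = np.exp(-(beta * x) ** alpha)
    return (1 - z) / (1 - p * z)

# Sanity checks under assumed parameter values.
alpha, beta, p = 2.0, 1.5, 0.4
total, _ = quad(wg_pdf, 0, np.inf, args=(alpha, beta, p))
prob, _ = quad(wg_pdf, 0, 0.8, args=(alpha, beta, p))
print(round(total, 6), round(prob - wg_cdf(0.8, alpha, beta, p), 6))  # ~1.0 and ~0.0
```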
The WG distribution in Equation (3) produces some special models, as follows:
1) The Weibull distribution, when $p \to 0$.
2) A distribution that is degenerate at zero, when $p \to 1$.
Hence, the parameter p can be interpreted as a concentration parameter. Figure 1 and Figure 2 show the density and cumulative distribution plots, respectively, for fixed values of $\alpha$ and $\beta$ and various rates of p.
Figure 1. Weibull-geometric density functions.

Figure 2. Weibull-geometric cumulative distribution functions.
The two-parameter EG distribution with decreasing failure rate was introduced by Adamidis and Loukas [5] . When $\alpha = 1$ and $0 < p < 1$, the exponential-geometric (EG) distribution is obtained, and for $\alpha = 1$ with $p < 0$ the extended exponential-geometric (EEG) distribution is obtained; therefore, the EEG distribution extends the EG distribution. The Weibull distribution is obtained when p goes to zero. Figure 1 plots the WG density for several parameter values. For all values of the parameters, the density tends to zero as $x \to \infty$. It is noted that the WG density is strictly decreasing when $0 < \alpha \le 1$ and $0 < p < 1$, and it can be unimodal or multimodal when $\alpha > 1$. The modes are obtained by solving the following nonlinear equation:

$$\frac{\alpha-1}{x} = \alpha\beta^{\alpha} x^{\alpha-1}\,\frac{1 + p\, e^{-(\beta x)^{\alpha}}}{1 - p\, e^{-(\beta x)^{\alpha}}}. \qquad (5)$$

For $p < 0$, the WG density can be unimodal even when $\alpha \le 1$; for instance, the EEG density ($\alpha = 1$) is unimodal when $p < -1$. The hazard and survival functions of X are

$$h(x) = \frac{\alpha\beta^{\alpha} x^{\alpha-1}}{1 - p\, e^{-(\beta x)^{\alpha}}} \qquad (6)$$

and

$$S(x) = \frac{(1-p)\, e^{-(\beta x)^{\alpha}}}{1 - p\, e^{-(\beta x)^{\alpha}}}. \qquad (7)$$
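A companion sketch, under the same assumed parametrization, for the survival and hazard functions (7) and (6), together with inverse-cdf sampling obtained by solving Equation (4) for x; the parameter values are again illustrative assumptions.

```python
import numpy as np

def wg_survival(x, alpha, beta, p):
    """Survival function (7)."""
    z = np.exp(-(beta * x) ** alpha)
    return (1 - p) * z / (1 - p * z)

def wg_hazard(x, alpha, beta, p):
    """Hazard function (6)."""
    z = np.exp(-(beta * x) ** alpha)
    return alpha * beta**alpha * x**(alpha - 1) / (1 - p * z)

def wg_rvs(size, alpha, beta, p, rng=None):
    """Inverse-cdf sampling: solving (4) for x gives
    x = (1/beta) * (log((1 - p*u) / (1 - u)))**(1/alpha) with u ~ U(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return (np.log((1 - p * u) / (1 - u))) ** (1 / alpha) / beta

# Quick check: the empirical survival at x = 0.5 should be close to (7).
x0, (a, b, pp) = 0.5, (2.0, 1.5, 0.4)
sample = wg_rvs(200_000, a, b, pp, rng=np.random.default_rng(2))
print(np.mean(sample > x0), wg_survival(x0, a, b, pp))
```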
3. Markov Chain Monte Carlo Algorithms
The Markov chain Monte Carlo (MCMC) technique has become widespread for Bayesian computation in complex statistical modelling. In general, it provides a useful tool for realistic statistical modelling (Gilks et al. [20] ; Gamerman [21] ).
A Markov chain is a stochastic process with a random probability structure in which, given the current state, future states are independent of past states.
Monte Carlo simulation is used to approximate integrals rather than to evaluate them analytically, a procedure called Monte Carlo integration. In this way, quantities of interest of a distribution can be computed from simulated draws from that distribution. Bayesian analysis requires integration over possibly high-dimensional probability distributions to make predictions or to draw inference about model parameters. In MCMC techniques, Monte Carlo integration is combined with Markov chains: samples are drawn from the target distribution, and sample averages are then used to approximate expectations (see Geman [22] ; Metropolis et al. [23] ; and Hastings [24] ).
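As a minimal illustration of Monte Carlo integration (not taken from the paper), a posterior expectation E[g(theta)] can be approximated by averaging g over simulated draws; the Gamma(2, 1) "posterior" and the function g below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate E[g(theta)] for theta ~ Gamma(2, 1) by averaging over draws.
draws = rng.gamma(shape=2.0, scale=1.0, size=100_000)
g = np.log1p(draws)                          # any function of interest
estimate = g.mean()                          # Monte Carlo estimate of E[g(theta)]
mc_error = g.std(ddof=1) / np.sqrt(len(g))   # Monte Carlo standard error
print(f"E[g(theta)] ~ {estimate:.4f} +/- {mc_error:.4f}")
```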
3.1. MH Procedure
The Metropolis-Hastings (MH) procedure was introduced by Metropolis et al. [23] . It is assumed that the main goal is to draw samples from a target distribution $\pi(\theta) = g(\theta)/K$, where K is the normalizing constant, which may be hard to compute. The MH procedure gives a method of sampling from $\pi(\theta)$ without the need to know K. Suppose that $q(\theta^{*}\mid\theta)$ is an arbitrary transition kernel, that is, the probability of moving from the current state $\theta$ to $\theta^{*}$, known as the proposal distribution. The MH algorithm generates a sequence of values $\theta^{(1)}, \theta^{(2)}, \ldots$ that form a Markov chain with stationary distribution given by $\pi(\theta)$. A candidate $\theta^{*}$ drawn from $q(\cdot\mid\theta)$ is accepted with probability

$$a(\theta^{*},\theta) = \min\left\{1,\ \frac{\pi(\theta^{*})\, q(\theta\mid\theta^{*})}{\pi(\theta)\, q(\theta^{*}\mid\theta)}\right\}. \qquad (8)$$

When the proposal distribution is symmetric, so that $q(\theta^{*}\mid\theta) = q(\theta\mid\theta^{*})$ for all possible $\theta$ and $\theta^{*}$, the proposal densities cancel and the acceptance probability (8) reduces to

$$a(\theta^{*},\theta) = \min\left\{1,\ \frac{\pi(\theta^{*})}{\pi(\theta)}\right\}. \qquad (9)$$
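A minimal random-walk Metropolis sketch with a symmetric normal proposal, so that the acceptance ratio reduces to (9); the standard normal target in the usage line is an arbitrary stand-in, and the function names are mine.

```python
import numpy as np

def metropolis_hastings(log_target, theta0, n_iter=10_000, step=0.5, seed=0):
    """Random-walk Metropolis sampler with a symmetric normal proposal,
    so the acceptance ratio reduces to pi(theta*) / pi(theta) as in (9)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    log_post = log_target(theta)
    for t in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        log_post_prop = log_target(proposal)
        # accept with probability min(1, pi(proposal)/pi(current))
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        chain[t] = theta
    return chain

# Toy usage: sample from a standard normal target (log density up to a constant).
chain = metropolis_hastings(lambda th: -0.5 * np.sum(th**2), theta0=[0.0])
print(chain[1000:].mean(), chain[1000:].std())
```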
3.2. GS Procedure
The Gibbs sampler (GS) procedure is a special case of the MCMC algorithms. This procedure was introduced by Geman [22] . The relevance of the Gibbs procedure to a wide range of problems in Bayesian analysis was explained by Gelfand and Smith [25] . Because its transition kernel is formed from the full conditional distributions, the Gibbs sampler is an MCMC scheme.
The three unknown parameters of the WG distribution will be estimated through various estimation algorithms based on progressive Type-II censoring. The MCMC procedures are used within the Bayesian approach to generate samples from the posterior distributions.
4. MLE of WG Distribution
This section determines the maximum likelihood estimates (MLEs) of the WG distribution parameters. Assume that $x_1 < x_2 < \cdots < x_m$ are the progressive first-failure censored order statistics from a WG distribution with censoring scheme R. Using Equations (1)-(4), the likelihood function is given by

$$L(\alpha,\beta,p) = A\, k^{m} \prod_{i=1}^{m} \alpha\beta^{\alpha}(1-p)\, x_i^{\alpha-1} e^{-(\beta x_i)^{\alpha}} \left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right)^{-2} \left[\frac{(1-p)\, e^{-(\beta x_i)^{\alpha}}}{1 - p\, e^{-(\beta x_i)^{\alpha}}}\right]^{k(R_i+1)-1}, \qquad (10)$$

where A is given in (2). Writing $d_i = k(R_i+1) - 1$, the logarithm of the likelihood function may be obtained as

$$\ell(\alpha,\beta,p) = \ln\!\left(A k^{m}\right) + m\ln\alpha + m\alpha\ln\beta + (\alpha-1)\sum_{i=1}^{m}\ln x_i + nk\ln(1-p) - \sum_{i=1}^{m}(d_i+1)(\beta x_i)^{\alpha} - \sum_{i=1}^{m}(d_i+2)\ln\!\left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right). \qquad (11)$$

Computing the derivatives $\partial\ell/\partial\alpha$, $\partial\ell/\partial\beta$ and $\partial\ell/\partial p$ and setting each equal to zero, the likelihood equations are obtained as

$$\frac{\partial\ell}{\partial\alpha} = \frac{m}{\alpha} + m\ln\beta + \sum_{i=1}^{m}\ln x_i - \sum_{i=1}^{m}(d_i+1)(\beta x_i)^{\alpha}\ln(\beta x_i) - p\sum_{i=1}^{m}(d_i+2)\,\frac{(\beta x_i)^{\alpha}\ln(\beta x_i)\, e^{-(\beta x_i)^{\alpha}}}{1 - p\, e^{-(\beta x_i)^{\alpha}}} = 0, \qquad (12)$$

$$\frac{\partial\ell}{\partial\beta} = \frac{m\alpha}{\beta} - \frac{\alpha}{\beta}\sum_{i=1}^{m}(d_i+1)(\beta x_i)^{\alpha} - \frac{\alpha p}{\beta}\sum_{i=1}^{m}(d_i+2)\,\frac{(\beta x_i)^{\alpha}\, e^{-(\beta x_i)^{\alpha}}}{1 - p\, e^{-(\beta x_i)^{\alpha}}} = 0, \qquad (13)$$

and

$$\frac{\partial\ell}{\partial p} = -\frac{nk}{1-p} + \sum_{i=1}^{m}(d_i+2)\,\frac{e^{-(\beta x_i)^{\alpha}}}{1 - p\, e^{-(\beta x_i)^{\alpha}}} = 0. \qquad (14)$$

An analytical solution for $\alpha$, $\beta$ and p in Equations (12)-(14) is very difficult to obtain. Hence, numerical techniques such as Newton's method may be used.
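Rather than solving (12)-(14) directly, the log-likelihood (11) can be maximized numerically; the sketch below does this with a derivative-free optimizer, using hypothetical data, a hypothetical censoring scheme and group size (it is an illustration, not the authors' Mathematica code).

```python
import numpy as np
from scipy.optimize import minimize

def wg_neg_log_lik(params, x, R, k):
    """Negative log-likelihood (11) for the WG distribution under
    progressive first-failure censoring, up to the constant ln(A k^m)."""
    alpha, beta, p = params
    if alpha <= 0 or beta <= 0 or not (0 < p < 1):
        return np.inf
    x = np.asarray(x, dtype=float)
    d = k * (np.asarray(R) + 1) - 1           # d_i = k(R_i + 1) - 1
    t = (beta * x) ** alpha                   # (beta x_i)^alpha
    z = np.exp(-t)
    n = len(x) + np.sum(R)                    # n = m + sum(R_i)
    ll = (len(x) * np.log(alpha) + len(x) * alpha * np.log(beta)
          + (alpha - 1) * np.sum(np.log(x)) + n * k * np.log(1 - p)
          - np.sum((d + 1) * t) - np.sum((d + 2) * np.log(1 - p * z)))
    return -ll

# Hypothetical censored sample, scheme and group size (for illustration only).
x = [0.05, 0.08, 0.12, 0.19, 0.27, 0.41]
R = [2, 0, 1, 0, 0, 1]
k = 1                                         # k = 1: ordinary progressive Type-II

fit = minimize(wg_neg_log_lik, x0=[1.0, 1.0, 0.5], args=(x, R, k),
               method="Nelder-Mead")
print("MLEs (alpha, beta, p):", fit.x)
```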
From the log-likelihood function in (11), the Fisher information matrix $I(\alpha,\beta,p)$ is obtained by taking the expectation of the negatives of the derivatives of Equations (12)-(14). Under some mild regularity conditions, $(\hat{\alpha}, \hat{\beta}, \hat{p})$ is approximately multivariate normal with mean $(\alpha,\beta,p)$ and covariance matrix $I^{-1}(\alpha,\beta,p)$. In practice, $I^{-1}(\alpha,\beta,p)$ is commonly estimated by $I^{-1}(\hat{\alpha},\hat{\beta},\hat{p})$; this procedure is simpler and the approximation remains valid:

$$I(\hat{\alpha},\hat{\beta},\hat{p}) = -\begin{pmatrix} \dfrac{\partial^{2}\ell}{\partial\alpha^{2}} & \dfrac{\partial^{2}\ell}{\partial\alpha\,\partial\beta} & \dfrac{\partial^{2}\ell}{\partial\alpha\,\partial p} \\[2mm] \dfrac{\partial^{2}\ell}{\partial\beta\,\partial\alpha} & \dfrac{\partial^{2}\ell}{\partial\beta^{2}} & \dfrac{\partial^{2}\ell}{\partial\beta\,\partial p} \\[2mm] \dfrac{\partial^{2}\ell}{\partial p\,\partial\alpha} & \dfrac{\partial^{2}\ell}{\partial p\,\partial\beta} & \dfrac{\partial^{2}\ell}{\partial p^{2}} \end{pmatrix}_{(\alpha,\beta,p)=(\hat{\alpha},\hat{\beta},\hat{p})}, \qquad (15)$$

where $I(\hat{\alpha},\hat{\beta},\hat{p})$ is the observed information matrix, and its inverse

$$I^{-1}(\hat{\alpha},\hat{\beta},\hat{p}) = \begin{pmatrix} \widehat{\operatorname{var}}(\hat{\alpha}) & \operatorname{cov}(\hat{\alpha},\hat{\beta}) & \operatorname{cov}(\hat{\alpha},\hat{p}) \\ \operatorname{cov}(\hat{\beta},\hat{\alpha}) & \widehat{\operatorname{var}}(\hat{\beta}) & \operatorname{cov}(\hat{\beta},\hat{p}) \\ \operatorname{cov}(\hat{p},\hat{\alpha}) & \operatorname{cov}(\hat{p},\hat{\beta}) & \widehat{\operatorname{var}}(\hat{p}) \end{pmatrix} \qquad (16)$$

estimates the covariance matrix. Approximate confidence intervals for $\alpha$, $\beta$ and p can be calculated by treating $(\hat{\alpha},\hat{\beta},\hat{p})$ as multivariate normal with mean $(\alpha,\beta,p)$ and covariance matrix $I^{-1}(\hat{\alpha},\hat{\beta},\hat{p})$. Hence, the approximate $100(1-\gamma)\%$ confidence intervals for $\alpha$, $\beta$ and p are

$$\hat{\alpha} \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{var}}(\hat{\alpha})}, \qquad \hat{\beta} \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{var}}(\hat{\beta})}, \qquad \hat{p} \pm z_{\gamma/2}\sqrt{\widehat{\operatorname{var}}(\hat{p})}, \qquad (17)$$

respectively, where $\widehat{\operatorname{var}}(\hat{\alpha})$, $\widehat{\operatorname{var}}(\hat{\beta})$ and $\widehat{\operatorname{var}}(\hat{p})$ are the elements on the main diagonal of the covariance matrix $I^{-1}(\hat{\alpha},\hat{\beta},\hat{p})$, and $z_{\gamma/2}$ is the percentile of the standard normal distribution with right-tail probability $\gamma/2$.
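A sketch of the Wald-type intervals (17): the observed information (15) is approximated by finite differences of a negative log-likelihood at the MLE and inverted to obtain (16). The toy demonstration uses a complete Weibull sample (the p -> 0 special case) so that the block is self-contained; all names and parameter values are mine.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def numerical_hessian(f, theta, eps=1e-5):
    """Observed information by central finite differences of f (a negative
    log-likelihood) evaluated at the MLE theta, approximating (15)."""
    theta = np.asarray(theta, dtype=float)
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * eps**2)
    return H

def wald_intervals(f, mle, level=0.95):
    """Approximate intervals (17): mle_j +/- z * sqrt(V_jj), V as in (16)."""
    V = np.linalg.inv(numerical_hessian(f, mle))
    z = norm.ppf(0.5 + level / 2)
    se = np.sqrt(np.diag(V))
    return np.column_stack([mle - z * se, mle + z * se])

# Toy demonstration on a complete Weibull sample (shape 1.5, scale beta = 2).
rng = np.random.default_rng(1)
data = rng.weibull(1.5, size=50) / 2.0

def negll(theta):
    a, b = theta
    if a <= 0 or b <= 0:
        return np.inf
    t = (b * data) ** a
    return -(len(data) * np.log(a) + len(data) * a * np.log(b)
             + (a - 1) * np.sum(np.log(data)) - np.sum(t))

fit = minimize(negll, x0=[1.0, 1.0], method="Nelder-Mead")
print(wald_intervals(negll, fit.x))
```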
5. Bootstrap Confidence Intervals
The bootstrap is a resampling technique used for statistical inference. It is usually used to construct confidence regions, and it can also be applied to estimate the bias and variance of an estimator or to calibrate hypothesis tests. For further coverage of the parametric and nonparametric bootstrap techniques, see Davison and Hinkley [26] and Elhag et al. [27] . Two parametric bootstrap confidence intervals are considered here: the percentile bootstrap procedure of Efron and Tibshirani [28] and the bootstrap-t procedure of Hall [20] . The bootstrap sampling algorithm for estimating the confidence intervals of the parameters is illustrated below.
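A generic parametric-bootstrap sketch consistent with the description above; `fit_mle` and `simulate_sample` are hypothetical placeholders for the WG fitting and progressive-censoring simulation steps, and the exponential usage at the end is only a toy check.

```python
import numpy as np

def parametric_bootstrap(data, fit_mle, simulate_sample, n_boot=1000, seed=0):
    """Generic parametric bootstrap: fit the model, simulate n_boot samples
    from the fitted model, and refit each one.  `fit_mle(sample)` returns a
    parameter vector and `simulate_sample(theta, rng)` returns a new sample."""
    rng = np.random.default_rng(seed)
    theta_hat = np.asarray(fit_mle(data))
    boot = np.empty((n_boot, theta_hat.size))
    for b in range(n_boot):
        boot[b] = fit_mle(simulate_sample(theta_hat, rng))
    return theta_hat, boot

def percentile_ci(boot, level=0.95):
    """Percentile bootstrap interval (18): the gamma/2 and 1 - gamma/2
    quantiles of the bootstrap replicates, for each parameter."""
    lo, hi = (1 - level) / 2, (1 + level) / 2
    return np.quantile(boot, [lo, hi], axis=0).T

# Toy usage: bootstrap CI for the rate of an exponential sample.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=40)
theta_hat, boot = parametric_bootstrap(
    sample,
    fit_mle=lambda s: [1.0 / np.mean(s)],
    simulate_sample=lambda th, g: g.exponential(scale=1.0 / th[0], size=40),
)
print(percentile_ci(boot))
```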
Percentile bootstrap confidence interval (Boot-p): Assume that $G(y) = P(\hat{\theta}^{*} \le y)$ is the cumulative distribution function of the bootstrap replicate $\hat{\theta}^{*}$. Define $\hat{\theta}_{\text{Boot-}p}(y) = G^{-1}(y)$ for the given y. The approximate $100(1-\gamma)\%$ percentile bootstrap confidence interval for $\theta$ may be obtained as

$$\left(\hat{\theta}_{\text{Boot-}p}\!\left(\tfrac{\gamma}{2}\right),\ \hat{\theta}_{\text{Boot-}p}\!\left(1-\tfrac{\gamma}{2}\right)\right). \qquad (18)$$

Bootstrap-t confidence interval (Boot-t): First, order the studentized statistics $T^{*}_{(1)} \le T^{*}_{(2)} \le \cdots \le T^{*}_{(N)}$, where

$$T^{*}_{j} = \frac{\hat{\theta}^{*}_{j} - \hat{\theta}}{\sqrt{\widehat{\operatorname{var}}\big(\hat{\theta}^{*}_{j}\big)}}, \qquad j = 1, 2, \ldots, N, \qquad (19)$$

and consider that $H(y) = P(T^{*} \le y)$ is the cumulative distribution function of $T^{*}$. For a given y, define

$$\hat{\theta}_{\text{Boot-}t}(y) = \hat{\theta} + \sqrt{\widehat{\operatorname{var}}\big(\hat{\theta}\big)}\; H^{-1}(y). \qquad (20)$$

The approximate $100(1-\gamma)\%$ bootstrap-t confidence interval for $\theta$ is then $\left(\hat{\theta}_{\text{Boot-}t}\!\left(\tfrac{\gamma}{2}\right),\ \hat{\theta}_{\text{Boot-}t}\!\left(1-\tfrac{\gamma}{2}\right)\right)$.
6. Bayes Estimation of the Model Parameters
Assuming that each of the parameters $\alpha$, $\beta$ and p is unknown, the joint prior density may be taken as the product of gamma priors for $\alpha$ and $\beta$ and a uniform prior for p, where

$$\pi_1(\alpha) \propto \alpha^{a_1 - 1} e^{-b_1\alpha}, \qquad \alpha > 0, \qquad (21)$$

$$\pi_2(\beta) \propto \beta^{a_2 - 1} e^{-b_2\beta}, \qquad \beta > 0, \qquad (22)$$

and

$$\pi_3(p) = 1, \qquad 0 < p < 1. \qquad (23)$$

Multiplying $\pi_1(\alpha)$ by $\pi_2(\beta)$ and $\pi_3(p)$, the joint prior density of $\alpha$, $\beta$ and p is

$$\pi(\alpha,\beta,p) \propto \alpha^{a_1 - 1}\beta^{a_2 - 1} e^{-b_1\alpha - b_2\beta}, \qquad \alpha, \beta > 0,\ 0 < p < 1. \qquad (24)$$

Based on the joint prior distribution of $\alpha$, $\beta$ and p, the joint posterior density of $\alpha$, $\beta$ and p given the data, denoted by $\pi^{*}(\alpha,\beta,p\mid\underline{x})$, can be expressed as

$$\pi^{*}(\alpha,\beta,p\mid\underline{x}) = \frac{L(\alpha,\beta,p)\,\pi(\alpha,\beta,p)}{\displaystyle\int_{0}^{1}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty} L(\alpha,\beta,p)\,\pi(\alpha,\beta,p)\, d\alpha\, d\beta\, dp}. \qquad (25)$$
Hence, under the squared error loss (SEL) function, the Bayes estimate of any function $g(\alpha,\beta,p)$ of $\alpha$, $\beta$ and p can be expressed as

$$\hat{g}_{B}(\alpha,\beta,p) = E_{\alpha,\beta,p\mid\underline{x}}\left[g(\alpha,\beta,p)\right] = \int_{0}^{1}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty} g(\alpha,\beta,p)\,\pi^{*}(\alpha,\beta,p\mid\underline{x})\, d\alpha\, d\beta\, dp. \qquad (26)$$

In general, the ratio of integrals defining (26) cannot be obtained in closed form. In this situation, the MCMC procedure is used to generate samples from the posterior distribution, and the Bayes estimator of $g(\alpha,\beta,p)$ is then computed under the SEL function. A wide variety of MCMC techniques is available, and it can be troublesome to select among them. An important class of MCMC techniques consists of Gibbs samplers and the more general Metropolis-within-Gibbs samplers.
The MCMC procedure has an advantage over the MLE procedure in that appropriate interval estimates of the parameters can always be obtained by constructing probability intervals from the empirical posterior distribution; this is sometimes not available for the MLE. The MCMC samples can also be used to summarize fully the posterior uncertainty about the parameters $\alpha$, $\beta$ and p through a kernel estimate of the posterior distribution.
The joint posterior density function of $\alpha$, $\beta$ and p may be written, up to proportionality, as

$$\pi^{*}(\alpha,\beta,p\mid\underline{x}) \propto \alpha^{m+a_1-1}\,\beta^{m\alpha+a_2-1}\, e^{-b_1\alpha - b_2\beta}\,(1-p)^{nk} \prod_{i=1}^{m} x_i^{\alpha-1}\, e^{-(d_i+1)(\beta x_i)^{\alpha}} \left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right)^{-(d_i+2)}, \qquad (27)$$

where $d_i = k(R_i+1)-1$ as before. The conditional posterior densities of $\alpha$, $\beta$ and p are

$$\pi_1^{*}(\alpha\mid\beta,p,\underline{x}) \propto \alpha^{m+a_1-1}\,\beta^{m\alpha}\, e^{-b_1\alpha} \prod_{i=1}^{m} x_i^{\alpha-1}\, e^{-(d_i+1)(\beta x_i)^{\alpha}} \left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right)^{-(d_i+2)}, \qquad (28)$$

$$\pi_2^{*}(\beta\mid\alpha,p,\underline{x}) \propto \beta^{m\alpha+a_2-1}\, e^{-b_2\beta} \prod_{i=1}^{m} e^{-(d_i+1)(\beta x_i)^{\alpha}} \left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right)^{-(d_i+2)}, \qquad (29)$$

and

$$\pi_3^{*}(p\mid\alpha,\beta,\underline{x}) \propto (1-p)^{nk} \prod_{i=1}^{m} \left(1 - p\, e^{-(\beta x_i)^{\alpha}}\right)^{-(d_i+2)}. \qquad (30)$$

The Metropolis-Hastings procedure [23] with a normal proposal distribution is used within the Gibbs sampler algorithm; the sampler is sketched below.
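A minimal Metropolis-within-Gibbs sketch consistent with (27)-(30), assuming normal random-walk proposals, the non-informative hyperparameter choice used later, and the same hypothetical data and scheme as in the earlier sketches; it is an illustration, not the authors' implementation.

```python
import numpy as np

def wg_log_posterior(theta, x, R, k, a1=0.0, b1=0.0, a2=0.0, b2=0.0):
    """Log of the joint posterior (27), up to an additive constant, with
    gamma priors (a1, b1) and (a2, b2) and a uniform prior on p."""
    alpha, beta, p = theta
    if alpha <= 0 or beta <= 0 or not (0 < p < 1):
        return -np.inf
    x = np.asarray(x, dtype=float)
    d = k * (np.asarray(R) + 1) - 1
    n = len(x) + np.sum(R)
    t = (beta * x) ** alpha
    return ((len(x) + a1 - 1) * np.log(alpha)
            + (len(x) * alpha + a2 - 1) * np.log(beta)
            - b1 * alpha - b2 * beta + n * k * np.log(1 - p)
            + (alpha - 1) * np.sum(np.log(x))
            - np.sum((d + 1) * t) - np.sum((d + 2) * np.log(1 - p * np.exp(-t))))

def mh_within_gibbs(x, R, k, n_iter=11_000, steps=(0.2, 0.2, 0.1), seed=0):
    """Update alpha, beta, p one at a time with normal random-walk proposals;
    each coordinate update targets its full conditional (28)-(30), since the
    terms not involving that coordinate cancel in the acceptance ratio."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 1.0, 0.5])
    chain = np.empty((n_iter, 3))
    lp = wg_log_posterior(theta, x, R, k)
    for it in range(n_iter):
        for j in range(3):
            cand = theta.copy()
            cand[j] += steps[j] * rng.standard_normal()
            lp_cand = wg_log_posterior(cand, x, R, k)
            if np.log(rng.uniform()) < lp_cand - lp:
                theta, lp = cand, lp_cand
        chain[it] = theta
    return chain

# Hypothetical data and scheme (illustration only); discard a burn-in period.
chain = mh_within_gibbs(x=[0.05, 0.08, 0.12, 0.19, 0.27, 0.41],
                        R=[2, 0, 1, 0, 0, 1], k=1)
posterior = chain[1000:]
print("Bayes estimates under SEL (posterior means):", posterior.mean(axis=0))
print("95% credible intervals:", np.quantile(posterior, [0.025, 0.975], axis=0).T)
```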
7. Illustrative Example and Simulation Studies
To illustrate the estimation procedures developed in this paper, a random sample of size 10 is generated from the gamma distribution (21) with given hyperparameters, and the sample average is computed and taken as the true population value of $\alpha$; the hyperparameters are chosen so that this value of $\alpha$ is close to the mean of the gamma distribution. Similarly, given its hyperparameter values and the previously chosen $\alpha$, $\beta$ is generated from the gamma distribution (22) in the same way, with the hyperparameters again selected so that $\beta$ is close to the mean of its gamma distribution. A progressive Type-II censored sample is then generated from the WG distribution by employing the procedure of Balakrishnan and Sandhu [19] , giving the following data: 0.0409, 0.0552, 0.0561, 0.0726, 0.0776, 0.0840, 0.0906, 0.1108, 0.1291, 0.1502, 0.1513, 0.1540, 0.1624, 0.1691, 0.1930, 0.2175, 0.2188, 0.2700, 0.2709, 0.2994, 0.3219, 0.3342, 0.4065, 0.4396, 0.5385, under the chosen values of the parameters and the censoring scheme.
The approximate bootstrap estimates, the Bayes estimates and the MLEs of $\alpha$, $\beta$ and p are calculated for these data; the MCMC-based results are summarized in Table 1 and Table 2. Table 2 gives the 95% approximate confidence intervals from the two bootstrap methods, the approximate credible intervals based on the MCMC samples, and the MLE-based intervals. Simulation studies have been carried out using Mathematica ver. 9.0 to illustrate the theoretical results of the estimation problem. The performance of the resulting estimators of the parameters has been assessed in terms of their mean squared error (MSE) and average estimate (AVG), where

$$\mathrm{AVG}(\hat{\theta}) = \frac{1}{M}\sum_{j=1}^{M}\hat{\theta}^{(j)} \qquad (34)$$

and

$$\mathrm{MSE}(\hat{\theta}) = \frac{1}{M}\sum_{j=1}^{M}\left(\hat{\theta}^{(j)} - \theta\right)^{2}, \qquad (35)$$

in which $\hat{\theta}^{(j)}$ denotes the estimate of a parameter $\theta$ in the j-th replication and M is the number of replications.
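A small sketch of how AVG (34) and MSE (35) can be accumulated over Monte Carlo replications; `generate_sample` and `estimate` are hypothetical placeholders for the progressive-censoring generator and any of the estimators above, and the exponential usage is only a toy check.

```python
import numpy as np

def simulation_summary(true_theta, generate_sample, estimate, n_rep=500, seed=0):
    """Accumulate AVG (34) and MSE (35) of an estimator over n_rep replications."""
    rng = np.random.default_rng(seed)
    true_theta = np.asarray(true_theta, dtype=float)
    estimates = np.array([estimate(generate_sample(rng)) for _ in range(n_rep)])
    avg = estimates.mean(axis=0)                          # (34)
    mse = ((estimates - true_theta) ** 2).mean(axis=0)    # (35)
    return avg, mse

# Toy usage: AVG and MSE of the sample mean of an exponential sample.
avg, mse = simulation_summary(
    true_theta=[2.0],
    generate_sample=lambda rng: rng.exponential(scale=2.0, size=30),
    estimate=lambda s: [np.mean(s)],
)
print(avg, mse)
```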
In the simulation studies, the researchers assume fixed population parameter values, various sample sizes n, different effective sample sizes m and different censoring schemes R. For computing the Bayes estimators, non-informative priors ($a_1 = b_1 = a_2 = b_2 = 0$) are used without loss of generality. The Bayes estimates are calculated under the squared error loss function, and the Bayes estimates and 95% credible intervals are computed from 11,000 MCMC samples. The mean Bayes estimates, MSEs, coverage percentages, and average confidence interval lengths based on 500 replications are reported.

Table 1. Parameter estimates of the WG distribution.

Table 2. Confidence intervals using the bootstrap, bootstrap-t and MLE based on 500 replications.
For comparison, the MLEs with 95% confidence intervals are calculated based on the observed Fisher information matrix and on the two bootstrap confidence intervals. Table 3 and Table 4 report the results for the MLEs and for the Bayes estimates obtained using both the Gibbs sampling algorithm and the MH algorithm:
1) From Table 3 and Table 4, in terms of MSEs and credible interval lengths, the Bayes estimators based on non-informative priors perform more effectively than the MLEs and the bootstrap estimators.
2) From Table 3 and Table 4, comparing the schemes, the MSEs and average confidence interval lengths of the MLEs and the Bayes estimators are smaller for some censoring schemes than for others.
3) The MSEs and average confidence interval lengths decrease in almost all cases as the effective sample proportion m/n increases.
Table 3. Average values of the various estimators and the corresponding MSEs for the considered parameter settings.

Table 4. Coverage percentages and average confidence interval lengths for the considered parameter settings.

8. Conclusion

Several estimation algorithms for the WG distribution, based on the progressive Type-II censored sampling plan, are discussed. The joint confidence intervals for the parameters are also studied. The approximate confidence regions, the percentile bootstrap confidence intervals, and the approximate joint confidence region for the parameters are extended and developed. Numerical examples with a real data set and simulated data are used to compare the proposed joint confidence regions. In terms of MSEs and credible interval lengths, the Bayes estimators based on non-informative priors perform more effectively than the MLEs and the bootstrap estimators. Comparing the censoring schemes, the MSEs and the average confidence interval lengths of the MLEs and the Bayes estimators are smaller for some censoring schemes than for others.