Inference on Constant-Partially Accelerated Life Tests for Mixture of Pareto Distributions under Progressive Type-II Censoring

The main purpose of this paper is to obtain inference on the parameters of a heterogeneous population represented by a finite mixture of two Pareto (MTP) distributions of the second kind. Constant-partially accelerated life tests are applied based on progressively type-II censored samples. The maximum likelihood estimates (MLEs) of the considered parameters are obtained by solving the likelihood equations of the model parameters numerically. The Bayes estimators are obtained by using a Markov chain Monte Carlo algorithm under the balanced squared error loss function. Based on a Monte Carlo simulation, Bayes estimators are compared with their corresponding maximum likelihood estimators. The two-sample prediction technique is considered to derive Bayesian prediction bounds for future order statistics based on progressively type-II censored informative samples obtained from constant-partially accelerated life testing models. The informative and future samples are assumed to be obtained from the same population. The coverage probabilities and the average interval lengths of the confidence intervals are computed via a Monte Carlo simulation to investigate the performance of the prediction intervals. Analysis of a simulated data set is also presented for illustrative purposes. Finally, comparisons are made between Bayesian and maximum likelihood estimators via a Monte Carlo simulation study.


Abushal, T. and AL-Zaydi, A. (2017) Inference on Constant-Partially Accelerated Life Tests for Mixture of Pareto Distributions under Progressive Type-II Censoring. Open Journal of Statistics, 7, 323-346. doi: 10.4236/ojs.2017.72024.

Type-II Censoring, Bayesian Estimation, Maximum Likelihood Estimation, Bayesian Prediction, Two-Sample Prediction, MCMC

1. Introduction

Accelerated life tests (ALTs) are used to obtain information quickly on the lifetime distribution of materials or products. The test units are run at higher-than-usual levels of stress to induce early failures. A model relating life length to stress is fitted to the accelerated failure times and then extrapolated to estimate the failure time distribution under the normal use condition. ALTs are preferred in manufacturing industries because they yield, in a short period of time, enough failure data to make inferences about the relationship between failure time and external stress variables.

According to  , there are mainly three ALT methods. The first method is called the constant stress ALT; the stress is kept at a constant level throughout the life of test products, (see for example     ). The second one is referred to as progressive stress ALT; the stress applied to a test product is continuously increasing in time (see for example,    ).

The third is the step-stress ALT, in which the test condition changes at a given time or upon the occurrence of a specified number of failures; it has been studied by several authors.  obtained the optimal simple step-stress ALT plans for the case where test products had exponentially distributed lives and were observed continuously until all test products failed;  extended their results to the case of censoring. The optimal step-stress test under progressive type-I censoring, assuming an exponential lifetime distribution, was considered by  . For more recent research on step-stress ALTs, see     .

When the acceleration factor cannot be assumed as a known value, the partially accelerated life test (PALT) will be a good choice to perform the life test. In ALTs, the units are tested only at accelerated conditions (see  ) whereas in partially ALTs (PALTs), the units are tested at both accelerated and normal conditions. PALTs include two types; one is called step PALTs (see  ) and the other is called constant PALTs (see  ).

From the Bayesian viewpoint, few studies have considered PALT. For example,  used the Bayesian approach for estimating the acceleration factor and the parameters in the case of step-stress PALT with complete sampling for items having exponential and uniform distributions.  investigated the optimal Bayesian design of a PALT in the case of the exponential distribution under complete sampling.  discussed the Bayesian approach to estimating the parameters of the Weibull distribution in step-stress PALT with censoring.  considered the Bayesian estimates of the Pareto distribution parameters under step-stress PALT with censored data.  considered the Bayesian estimates of the parameters, reliability and hazard rate functions of a mixture of two Weibull components under ALT, using an approximate form due to Tierney and Kadane. Finally,  obtained the Bayesian estimation of the Gompertz distribution parameters in the case of step-stress PALT with two stress levels and Type-I censoring, where the approximate Bayes estimates are computed using the method of Lindley.

The Pareto distribution of the second kind (also known as the Lomax distribution) has been widely used in economic studies and to analyze business failure data. It has been studied by several authors. According to  , the Pareto distribution is well adapted for modeling reliability problems, since many of its properties are interpretable in that context, and it could be an alternative to the well-known distributions used in reliability. This distribution was used for modeling size spectra data in aquatic ecology by  .  considered order statistics from non-identical right-truncated Lomax distributions and provided applications for this situation.  used the Pareto distribution as a mixing distribution for the Poisson parameter and obtained the discrete Poisson-Pareto distribution.

 investigated the Bayesian estimation of the Pareto survival function. More recently,  discussed some Bayesian inferences based on censored samples from the Pareto distribution.  determined the optimal times of changing stress level for simple stress plans under a cumulative exposure model using the Pareto distribution. Finite mixtures of distributions have proved to be of considerable interest in recent years, in terms of both methodological development and multiple applications. Mixture distribution modeling was studied as early as the 1890s by  ; see also    .   used a finite mixture model to study the effect of a constant stress on the parameters, reliability and hazard rate functions.  considered the progressive stress ALT applied to a product whose lifetime under design condition is assumed to follow a mixture of k components, each of which represents a different cause of failure.

A random variable T is said to have a Mixture of two Pareto distributions (MTPD) if its probability density function (PDF) is given by

${f}_{1\Theta }\left(t\right)={p}_{1}{f}_{11}\left(t,{\theta }_{1}\right)+{p}_{2}{f}_{12}\left(t,{\theta }_{2}\right),$ (1)

where $\Theta =\left({\theta }_{1},{\theta }_{2},{p}_{1},{p}_{2}\right)$ and for $j=1,2$ ,

${\theta }_{j}=\left({\alpha }_{j},{\beta }_{j}\right)$ ,

$\begin{array}{l}{f}_{1j}\left(t;{\theta }_{j}\right)={\alpha }_{j}{\beta }_{j}^{{\alpha }_{j}}{\left({\beta }_{j}+t\right)}^{-\left({\alpha }_{j}+1\right)},\\ t>0,\left({\alpha }_{j},{\beta }_{j}>0\right),\text{}0\le {p}_{j}\le 1,\text{}{p}_{1}+{p}_{2}=1.\end{array}$ (2)

Also, the cumulative distribution function (CDF), the reliability function (RF) and the hazard rate function (HRF) take the forms:

${F}_{1j}\left(t;{\theta }_{j}\right)=1-{\beta }_{j}^{{\alpha }_{j}}{\left({\beta }_{j}+t\right)}^{-{\alpha }_{j}},$ (3)

${R}_{1j}\left(t;{\theta }_{j}\right)={\beta }_{j}^{{\alpha }_{j}}{\left({\beta }_{j}+t\right)}^{-{\alpha }_{j}},$ (4)

${H}_{1j}\left(t;{\theta }_{j}\right)={\alpha }_{j}{\left({\beta }_{j}+t\right)}^{-1},$ (5)

where ${H}_{1j}\left(\cdot \right)=\frac{{f}_{1j}\left(\cdot \right)}{{R}_{1j}\left(\cdot \right)}.$ The density (2) is a special form of the Pearson type VI distribution.

In life-testing and reliability studies, the experimenter may not always obtain complete information on failure times for all experimental units. Data obtained from such experiments are called censored data. Saving the total time on test and the associated cost are some of the major reasons for censoring. A censoring scheme that can balance the total time spent on the experiment, the number of units used in the experiment and the efficiency of the statistical inference based on the results of the experiment is desirable. The most common censoring schemes are Type-I (time) censoring and Type-II (item) censoring. The conventional Type-I and Type-II censoring schemes do not have the flexibility of allowing removal of units at points other than the terminal point of the experiment. Because of that, a more general censoring scheme called progressive Type-II right censoring is used in this article. Censored data are of the progressively Type-II right type when they are censored by the removal of a prespecified number of survivors whenever an individual fails; this continues until a fixed number of failures has occurred, at which stage the remainder of the surviving individuals are also removed or censored. This scheme includes ordinary Type-II censoring and the complete sample as special cases. A general account of theoretical developments and applications concerning progressive censoring is given in the book by   .
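For concreteness, the mixture quantities in Equations (1)-(5) can be evaluated with a short Python sketch. The function names and parameter values below are illustrative assumptions, not from the paper:

```python
def mtpd_pdf(t, a1, b1, a2, b2, p):
    """Mixture PDF, Equations (1)-(2), with p1 = p and p2 = 1 - p."""
    f1 = a1 * b1 ** a1 * (b1 + t) ** -(a1 + 1)
    f2 = a2 * b2 ** a2 * (b2 + t) ** -(a2 + 1)
    return p * f1 + (1 - p) * f2

def mtpd_rf(t, a1, b1, a2, b2, p):
    """Mixture reliability function, from Equation (4)."""
    r1 = (b1 / (b1 + t)) ** a1
    r2 = (b2 / (b2 + t)) ** a2
    return p * r1 + (1 - p) * r2

def mtpd_hrf(t, a1, b1, a2, b2, p):
    """Mixture hazard rate: f_{1,Theta}(t) / R_{1,Theta}(t)."""
    return mtpd_pdf(t, a1, b1, a2, b2, p) / mtpd_rf(t, a1, b1, a2, b2, p)
```

At $t=0$ the reliability equals 1, so the hazard there reduces to ${p}_{1}{\alpha }_{1}/{\beta }_{1}+{p}_{2}{\alpha }_{2}/{\beta }_{2}$, which gives a quick numerical check of (1)-(5).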

An important problem that may face the experimenter in life-testing experiments is the prediction of unknown observations that belong to a future sample, based on the currently available sample, known in the literature as the informative sample. For example, experimenters or manufacturers would like to have bounds for the life of their products so that warranty limits can be plausibly set, and customers purchasing manufactured products would like to know bounds for the life of the product to be purchased. For different application areas, the reader may see   . The prediction of progressively Type-II censored data from the Gompertz and Rayleigh distributions has been considered, respectively, by   .  presented methods for constructing prediction limits for a step-stress model in ALT. Bayesian inference and prediction for the inverse Weibull distribution and the Weibull distribution under Type-II censored data are described by  and  , respectively.

The novelty of this paper is to consider the constant PALT applied to items whose lifetimes under design condition are assumed to follow the MTPD under progressive Type-II censoring; the main aim is to obtain the Bayes estimators (BEs) and predictions of the acceleration factor and the parameters under consideration using the MCMC method. The rest of this paper is organized as follows. In Section 2, a description of the model is presented and the MLEs of the parameters are derived. In Section 3, Bayes estimates are obtained using the balanced squared error loss (BSEL) function. Bayesian two-sample prediction is presented in Section 4. Monte Carlo simulation results are presented in Section 5. Finally, some concluding remarks are introduced in Section 6.

2. Model Description and Basic Assumptions

2.1. Model Description

In a constant-PALT, ${n}_{1}$ items randomly chosen among the $n$ test items sampled are allocated to the use condition, and the remaining ${n}_{2}=n-{n}_{1}$ items are subjected to the accelerated condition. Progressive type-II censoring is then performed as follows.

At the time of the first failure ${t}_{s1:{m}_{s}:{n}_{s}}^{{R}_{s}}$ , ${R}_{s1}$ items are randomly withdrawn from the remaining ${n}_{s}-1$ surviving items. At the second failure ${t}_{s2:{m}_{s}:{n}_{s}}^{{R}_{s}}$ , ${R}_{s2}$ items are randomly withdrawn from the remaining ${n}_{s}-2-{R}_{s1}$ items. The test continues until the ${m}_{s}\text{-th}$ failure ${t}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{{R}_{s}}$ , at which time all remaining ${R}_{s{m}_{s}}={n}_{s}-{m}_{s}-{\sum }_{\upsilon =1}^{{m}_{s}-1}{R}_{s\upsilon }$ items are withdrawn, for $s=1,2$ . In our study, the ${R}_{si}$ are fixed in advance and ${m}_{s}<{n}_{s}$ .

If the failure times of the ${n}_{s}$ items originally in the test are from a continuous population with distribution function ${F}_{j}\left(x\right)$ and probability density function ${f}_{j}\left(x\right)$ , the joint probability density function for ${t}_{s1:{m}_{s}:{n}_{s}}^{{R}_{s}}<{t}_{s2:{m}_{s}:{n}_{s}}^{{R}_{s}}<\cdots <{t}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{{R}_{s}}$ and $s=1,2$ is given by

$L\left(\theta ;t\right)=\underset{s=1}{\overset{2}{\prod }}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{\prod }}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (6)

where $t=\left({t}_{1},{t}_{2}\right)$ and, for $s=1,2$ , ${t}_{s}=\left({t}_{s1},\cdots ,{t}_{s{m}_{s}}\right)$ and

${A}_{s}={n}_{s}\left({n}_{s}-1-{R}_{s1}\right)\left({n}_{s}-2-{R}_{s1}-{R}_{s2}\right)\cdots \left({n}_{s}-{m}_{s}+1-{R}_{s1}-{R}_{s2}\cdots -{R}_{s\left({m}_{s}-1\right)}\right).$

It is clear from (6) that the constant-PALT progressively Type-II censored scheme contains the following censoring schemes as special cases:

1) Type-II censored scheme when $R=\left\{0,0,\cdots ,0,{n}_{s}-{m}_{s}\right\}.$

2) The complete sample case when $R=\left\{0,0,\cdots ,0\right\}$ and ${n}_{s}={m}_{s}$ .
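The withdrawal scheme described above can be simulated directly. The following Python sketch draws one progressively Type-II censored sample at the use condition, using the inverse of the CDF (3) for each mixture component; all function names and parameter values are illustrative assumptions:

```python
import random

def lomax_sample(alpha, beta, rng):
    """Inverse-CDF draw from Equation (3): F(t) = 1 - (beta / (beta + t))**alpha."""
    u = rng.random()
    return beta * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

def progressive_type2_sample(n, R, params, p, rng=None):
    """Simulate one progressively Type-II censored sample from the MTPD.

    n:      number of items put on test
    R:      removal scheme (R_1, ..., R_m) with m + sum(R) = n
    params: ((alpha1, beta1), (alpha2, beta2)); p: mixing proportion p_1
    """
    rng = rng or random.Random(0)
    (a1, b1), (a2, b2) = params
    # latent lifetimes: pick a mixture component, then draw from it
    alive = sorted(
        lomax_sample(a1, b1, rng) if rng.random() < p else lomax_sample(a2, b2, rng)
        for _ in range(n)
    )
    observed = []
    for r in R:
        observed.append(alive.pop(0))          # next observed failure
        for _ in range(r):                     # withdraw r survivors at random
            alive.pop(rng.randrange(len(alive)))
    return observed
```

Taking `R = [0, ..., 0, n - m]` or `R = [0, ..., 0]` with `n = m` reproduces the two special cases listed above.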

2.2. Assumptions

1) The lifetimes ${T}_{1i}\equiv T,\text{}i=1,\cdots ,{n}_{1}$ , of items allocated to the use condition are independent and identically distributed random variables (i.i.d. r.v.’s) following the MTP distribution with PDF given in (1).

2) The lifetimes ${T}_{2i}\equiv X,\text{}i=1,\cdots ,{n}_{2}$ , of items allocated to the accelerated condition are i.i.d. r.v.’s.

3) The PDF, RF, CDF and HRF of an item tested at accelerated condition are given, respectively, by

$\begin{array}{l}{f}_{2\Theta }\left(x\right)={p}_{1}{f}_{21}\left(x;{\theta }_{1}\right)+{p}_{2}{f}_{22}\left(x;{\theta }_{2}\right),\hfill \\ {R}_{2\Theta }\left(x\right)={p}_{1}{R}_{21}\left(x;{\theta }_{1}\right)+{p}_{2}{R}_{22}\left(x;{\theta }_{2}\right),\hfill \\ {F}_{2\Theta }\left(x\right)={p}_{1}{F}_{21}\left(x;{\theta }_{1}\right)+{p}_{2}{F}_{22}\left(x;{\theta }_{2}\right),\hfill \\ {H}_{2\Theta }\left(x\right)=\frac{{f}_{2\Theta }\left(x\right)}{{R}_{2\Theta }\left(x\right)},\hfill \end{array}\right\}$ , (7)

where for $j=1,2,{\theta }_{j}=\left({\alpha }_{j},{\beta }_{j},{\lambda }_{j}\right),$ and

${H}_{2j}\left(x;{\theta }_{j}\right)={\lambda }_{j}{H}_{1j}\left(x,{\theta }_{j}\right)={\lambda }_{j}{\alpha }_{j}{\left({\beta }_{j}+x\right)}^{-1},$ (8)

${R}_{2j}\left(x;{\theta }_{j}\right)={\beta }_{j}^{{\lambda }_{j}{\alpha }_{j}}{\left({\beta }_{j}+x\right)}^{-{\lambda }_{j}{\alpha }_{j}},$ (9)

${F}_{2j}\left(x;{\theta }_{j}\right)=1-{\beta }_{j}^{{\lambda }_{j}{\alpha }_{j}}{\left({\beta }_{j}+x\right)}^{-{\lambda }_{j}{\alpha }_{j}},$ (10)

${f}_{2j}\left(x;{\theta }_{j}\right)={\alpha }_{j}{\lambda }_{j}{\beta }_{j}^{{\lambda }_{j}{\alpha }_{j}}{\left({\beta }_{j}+x\right)}^{-\left({\lambda }_{j}{\alpha }_{j}+1\right)},$ (11)

where ${\lambda }_{j}$ is an acceleration factor satisfying ${\lambda }_{j}>1$ .

4) The i.i.d. lifetimes ${T}_{1i}$ and ${T}_{2i}$ , $i=1,2,\cdots ,{n}_{j}$ , are mutually statistically independent.
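The mutual consistency of (8)-(11) can be checked numerically with a short sketch; since the hazard in (8) is ${\lambda }_{j}{H}_{1j}$, the ratio of (11) to (9) must reproduce it. Parameter values below are illustrative:

```python
def acc_pdf(x, a, b, lam):
    """Accelerated-condition component PDF, Equation (11)."""
    return a * lam * b ** (lam * a) * (b + x) ** -(lam * a + 1)

def acc_rf(x, a, b, lam):
    """Accelerated-condition component RF, Equation (9)."""
    return (b / (b + x)) ** (lam * a)

def acc_hrf(x, a, b, lam):
    """Accelerated-condition component HRF, Equation (8): lambda * alpha / (beta + x)."""
    return lam * a / (b + x)
```

Evaluating `acc_pdf(x, ...) / acc_rf(x, ...)` at any `x > 0` agrees with `acc_hrf(x, ...)`, and setting `lam = 1` recovers the use-condition functions (2)-(5).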

2.3. ML Estimation

Let, for $s=1,2$ , ${T}_{s1:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}<{T}_{s2:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}<\cdots <{T}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}$ denote two progressively type-II censored samples from the two populations whose PDFs are given by (1) and (7), respectively, with ${R}_{s}=\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)$ being the two progressive censoring schemes. We also denote the observed values by ${t}_{s1:{m}_{s}:{n}_{s}}<{t}_{s2:{m}_{s}:{n}_{s}}<\cdots <{t}_{s{m}_{s}:{m}_{s}:{n}_{s}}$ . The log-likelihood function $l\left(\theta ;t\right)=\mathrm{ln}L\left(\theta ;t\right)$ is then given by

$\begin{array}{c}l\equiv \mathrm{ln}L\left(\theta ;t\right)={\sum }_{s=1}^{2}\mathrm{ln}{A}_{s}+{\sum }_{s=1}^{2}{\sum }_{i=1}^{{m}_{s}}\mathrm{ln}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\\ \text{}+{\sum }_{s=1}^{2}{\sum }_{i=1}^{{m}_{s}}{R}_{si}\mathrm{ln}{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right).\end{array}$ (12)

Assuming that the parameters $p,\text{}{\lambda }_{j}$ and ${\beta }_{j}$ are unknown and ${\alpha }_{j}$ is known, the likelihood equations are given, for $j=1,2$ , by

$\begin{array}{c}\frac{\partial \mathcal{l}}{\partial {p}_{j}}={\sum }_{s=1}^{2}{\sum }_{i=1}^{{m}_{s}}{\psi }_{s}\left({t}_{si}\right)+{\sum }_{s=1}^{2}{\sum }_{i=1}^{{m}_{s}}{R}_{si}{\psi }_{s}^{*}\left({t}_{si}\right)=0,\\ \frac{\partial l}{\partial {\lambda }_{j}}={\sum }_{i=1}^{{m}_{2}}\frac{{p}_{j}{\xi }_{j}\left({t}_{2i}\right)}{{f}_{2\Theta }\left({t}_{2i:{m}_{2}:{n}_{2}}\right)}+{\sum }_{i=1}^{{m}_{2}}\frac{{R}_{2i}{p}_{j}{\xi }_{j}^{*}\left({t}_{2i}\right)}{{R}_{2\Theta }\left({t}_{2i:{m}_{2}:{n}_{2}}\right)}=0,\text{}j=1,2\\ \frac{\partial l}{\partial {\beta }_{j}}=\underset{s=1}{\overset{2}{\sum }}\underset{i=1}{\overset{{m}_{s}}{\sum }}\frac{{p}_{j}{\vartheta }_{sj}\left({t}_{si}\right)}{{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)}+\underset{s=1}{\overset{2}{\sum }}\underset{i=1}{\overset{{m}_{s}}{\sum }}\frac{{R}_{si}{p}_{j}{\vartheta }_{sj}^{*}\left({t}_{si}\right)}{{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)}=0,\text{}j=1,2\end{array}\right\}$ (13)

where, for $j=1,2$

$\begin{array}{c}{\psi }_{s}\left({t}_{si}\right)=\frac{{f}_{s1}\left({t}_{si};{\theta }_{1}\right)-{f}_{s2}\left({t}_{si};{\theta }_{2}\right)}{{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)},\\ {\psi }_{s}^{*}\left({t}_{si}\right)=\frac{{R}_{s1}\left({t}_{si};{\theta }_{1}\right)-{R}_{s2}\left({t}_{si};{\theta }_{2}\right)}{{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)},\\ {\xi }_{j}\left({t}_{2i}\right)=\frac{\partial {f}_{2j}\left({t}_{2i};{\theta }_{j}\right)}{\partial {\lambda }_{j}}={\alpha }_{j}^{2}{\lambda }_{j}{\beta }_{j}^{{\lambda }_{j}{\alpha }_{j}}{\left({\beta }_{j}+{t}_{2i}\right)}^{-\left({\lambda }_{j}{\alpha }_{j}+1\right)}\left[\mathrm{ln}\frac{{\beta }_{j}}{{\beta }_{j}+{t}_{2i}}+\frac{1}{{\lambda }_{j}{\alpha }_{j}}\right],\\ {\xi }_{j}^{*}\left({t}_{2i}\right)=\frac{\partial {R}_{2j}\left({t}_{2i};{\theta }_{j}\right)}{\partial {\lambda }_{j}}={\alpha }_{j}{\beta }_{j}^{{\lambda }_{j}{\alpha }_{j}}{\left({\beta }_{j}+{t}_{2i}\right)}^{-{\lambda }_{j}{\alpha }_{j}}\left[\mathrm{ln}\frac{{\beta }_{j}}{{\beta }_{j}+{t}_{2i}}\right],\\ {\vartheta }_{1j}\left({t}_{1i}\right)=\frac{\partial {f}_{1j}\left({t}_{1i};{\theta }_{j}\right)}{\partial {\beta }_{j}}={\alpha }_{j}{\beta }_{j}^{{\alpha }_{j}-1}{\left({\beta }_{j}+{t}_{1i}\right)}^{-\left({\alpha }_{j}+2\right)}\left({\alpha }_{j}{t}_{1i}-{\beta }_{j}\right),\\ {\vartheta }_{2j}\left({t}_{2i}\right)=\frac{\partial {f}_{2j}\left({t}_{2i};{\theta }_{j}\right)}{\partial {\beta }_{j}}={\alpha }_{j}{\lambda }_{j}{\beta }_{j}^{{\alpha }_{j}{\lambda }_{j}-1}{\left({\beta }_{j}+{t}_{2i}\right)}^{-\left({\lambda }_{j}{\alpha }_{j}+2\right)}\left({\alpha }_{j}{\lambda }_{j}{t}_{2i}-{\beta }_{j}\right),\\ {\vartheta }_{1j}^{*}\left({t}_{1i}\right)=\frac{\partial {R}_{1j}\left({t}_{1i};{\theta }_{j}\right)}{\partial {\beta }_{j}}={\alpha }_{j}{\beta }_{j}^{{\alpha }_{j}-1}{t}_{1i}{\left({\beta }_{j}+{t}_{1i}\right)}^{-\left({\alpha }_{j}+1\right)},\\ {\vartheta }_{2j}^{*}\left({t}_{2i}\right)=\frac{\partial {R}_{2j}\left({t}_{2i};{\theta }_{j}\right)}{\partial {\beta }_{j}}={\alpha }_{j}{\lambda }_{j}{\beta }_{j}^{{\alpha }_{j}{\lambda }_{j}-1}{t}_{2i}{\left({\beta }_{j}+{t}_{2i}\right)}^{-\left({\lambda }_{j}{\alpha }_{j}+1\right)},\end{array}\right\}.$ (14)

Equations (13) do not yield explicit solutions for $p,\text{}{\lambda }_{j}$ and ${\beta }_{j},\text{}j=1,2,$ so they must be solved numerically to obtain the ML estimates of the five parameters; Newton-Raphson iteration is employed to solve (13).
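Equivalently, one may maximize the log-likelihood (12) directly with a general-purpose optimizer rather than solving (13). The following Python sketch evaluates (12) up to the additive constants $\mathrm{ln}{A}_{s}$; the function name, data layout and parameter values are illustrative assumptions, not from the paper:

```python
import math

def log_likelihood(theta, data, schemes, alphas):
    """Log-likelihood of Equation (12), up to the constants ln A_s,
    with the shape parameters alpha_j treated as known.

    theta   = (p, beta1, beta2, lam1, lam2)
    data    = (t_use, t_acc): the two progressively censored samples
    schemes = (R_use, R_acc): the corresponding removal schemes
    """
    p, b1, b2, lam1, lam2 = theta
    a1, a2 = alphas
    ll = 0.0
    for s, (ts, Rs) in enumerate(zip(data, schemes)):
        # at the use condition (s = 0) the acceleration factors play no role,
        # which is equivalent to setting lambda_j = 1 in (8)-(11)
        l1 = lam1 if s == 1 else 1.0
        l2 = lam2 if s == 1 else 1.0
        for t, r in zip(ts, Rs):
            f = (p * a1 * l1 * b1 ** (a1 * l1) * (b1 + t) ** -(a1 * l1 + 1)
                 + (1 - p) * a2 * l2 * b2 ** (a2 * l2) * (b2 + t) ** -(a2 * l2 + 1))
            R = (p * (b1 / (b1 + t)) ** (a1 * l1)
                 + (1 - p) * (b2 / (b2 + t)) ** (a2 * l2))
            ll += math.log(f) + r * math.log(R)
    return ll
```

Passing the negative of this function to any numerical optimizer, or applying Newton-Raphson to its gradient, yields the MLEs of (13).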

3. Bayes Estimation of the Model Parameters

For the Bayesian approach, in order to select a single value representing our “best” estimate of the unknown parameter, a loss function must be specified. A wide variety of loss functions has been developed in the literature to describe various types of loss structures. The balanced loss function was introduced by  .  introduced an extended class of balanced loss functions of the form

${L}_{\Phi ,\Omega ,{\delta }_{o}}\left(\Psi \left(\theta \right),\delta \right)=\Omega \Upsilon \left(\theta \right)\Phi \left({\delta }_{o},\delta \right)+\left(1-\Omega \right)\Upsilon \left(\theta \right)\Phi \left(\Psi \left(\theta \right),\delta \right),$ (15)

where $\Upsilon \left(\cdot \right)$ is a suitable positive weight function and $\Phi \left(\Psi \left(\theta \right),\delta \right)$ is an arbitrary loss function when estimating $\Psi \left(\theta \right)$ by $\delta$ . The parameter ${\delta }_{o}$ is a chosen prior estimator of $\Psi \left(\theta \right)$ , obtained for instance from the criterion of ML, least squares or unbiasedness, among others. They established a general Bayesian connection between the cases $\Omega >0$ and $\Omega =0$ , where $0\le \Omega <1$ .

This section studies the Bayes estimates of the parameters under consideration, under the balanced squared error loss (BSEL) function with non-informative prior (NIP) distributions. A NIP for the acceleration factor ${\lambda }_{j}$ is given by

${\pi }_{1}\left({\lambda }_{j}\right)\propto \frac{1}{{\lambda }_{j}},\left({\lambda }_{j}>1\right).$ (16)

Also, the NIP’s for the scale parameter ${\beta }_{j}$ and the parameter ${p}_{j}$ are, respectively, as

${\pi }_{2}\left({\beta }_{j}\right)\propto \frac{1}{{\beta }_{j}}\text{,}\left({\beta }_{j}>0\right),$ (17)

${\pi }_{3}\left({p}_{j}\right)\propto \frac{1}{{p}_{j}},\text{}\left({p}_{j}>0\right).$ (18)

Therefore, the joint NIP of the three parameters can be expressed by

$\pi \left(\text{Θ}\right)={\pi }_{1}\left({\lambda }_{j}\right){\pi }_{2}\left({\beta }_{j}\right){\pi }_{3}\left({p}_{j}\right)\propto \frac{1}{{p}_{j}{\lambda }_{j}{\beta }_{j}},\text{}\left({\lambda }_{j}>1,{\beta }_{j},{p}_{j}>0\right),$ (19)

where $\text{Θ}=\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right).$

It is to be noted that our objective is to consider vague priors, so that the priors do not play any significant role in the analyses that follow. However, if one uses prior beliefs different from (19) and resorts to sample-based approaches for analyzing the posterior, one may use the concept of sampling-importance-resampling without working afresh with the new prior-likelihood setup (see  ).

3.1. Bayes Estimation Based on BSEL Function

The symmetric squared error (SE) loss is one of the most popular loss functions. By choosing $\Phi \left(\Psi \left(\theta \right),\delta \right)={\left(\delta -\Psi \left(\theta \right)\right)}^{2}$ and $\Upsilon \left(\theta \right)=1$ in (15), the balanced loss function reduces to the BSEL function, used by   , in the form

${L}_{\Omega ,{\delta }_{o}}\left(\Psi \left(\theta \right),\delta \right)=\Omega {\left(\delta -{\delta }_{o}\right)}^{2}+\left(1-\Omega \right){\left(\delta -\Psi \left(\theta \right)\right)}^{2},$ (20)

and the corresponding Bayes estimate of the function $\Psi \left(\theta \right)$ is given by

${\delta }_{\Omega ,\Psi ,{\delta }_{o}}\left(t\right)=\Omega {\delta }_{o}+\left(1-\Omega \right)E\left(\Psi \left(\theta \right)|t\right).$ (21)

Under the BSEL function, the estimator of a parameter (or of a given function of the parameters) is a weighted combination of a prior estimate and the posterior mean, as in (21). Thus, the Bayes estimators of the parameters are obtained by using the loss function (20). The Bayes estimator of a function $u\equiv u\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right)={p}_{j},{\lambda }_{j}\text{ or }{\beta }_{j}$ is given by

${\stackrel{^}{u}}_{BS}=\Omega {\stackrel{^}{u}}_{ML}+\left(1-\Omega \right){\int }_{0}^{\infty }u{\pi }^{*}\left({p}_{j},{\lambda }_{j},{\beta }_{j}|t\right)\text{d}\Theta ,$ (22)

where ${\stackrel{^}{u}}_{ML}$ is the ML estimate of $u$ . It is not possible to compute (22) analytically; therefore, we propose to approximate it by using the MCMC technique to generate samples from the posterior distributions and then compute the Bayes estimators of the individual parameters.

3.2. MCMC Method

The MCMC method is a useful technique for computing Bayes estimates of the function $u\equiv u\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right)$ . A wide variety of MCMC schemes are available, and it can be difficult to choose among them. An important sub-class of MCMC methods consists of Gibbs sampling and the more general Metropolis-within-Gibbs samplers. The advantage of the MCMC method over the MLE method is that we can always obtain a reasonable interval estimate of the parameters by constructing probability intervals based on the empirical posterior distribution; this is often unavailable in maximum likelihood estimation. Indeed, the MCMC samples may be used to completely summarize the posterior uncertainty about the parameters ${p}_{j},\text{}{\lambda }_{j}$ and ${\beta }_{j}$ through a kernel estimate of the posterior distribution. This is also true of any function of the parameters. For more details about MCMC methods see, for example,    .

The Metropolis-Hastings algorithm generates samples from an (essentially) arbitrary proposal distribution (i.e., a Markov transition kernel). From the product of Equations (19) and (6), the joint posterior density function of ${p}_{j},\text{}{\lambda }_{j}$ and ${\beta }_{j}$ given the data can be written as

${\pi }^{*}\left({p}_{j},{\lambda }_{j},{\beta }_{j}|t\right)={B}_{1}{\left({p}_{j}{\lambda }_{j}{\beta }_{j}\right)}^{-1}\underset{s=1}{\overset{2}{\prod }}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{\prod }}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (23)

where

${B}_{1}^{-1}={\int }_{\text{Θ}}\pi \left(\text{Θ}\right)L\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right)\text{dΘ}.$

and $t=\left({t}_{s1},{t}_{s2},\cdots ,{t}_{s{m}_{s}}\right).$ The conditional posterior distributions of the parameters ${p}_{j},\text{}{\lambda }_{j}$ and ${\beta }_{j}$ can be written, respectively, as

${\pi }^{*}\left({p}_{j}|{\lambda }_{j},{\beta }_{j},t\right)\propto {p}_{j}^{-1}\underset{s=1}{\overset{2}{\prod }}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{\prod }}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (24)

${\pi }^{*}\left({\lambda }_{j}|{p}_{j},{\beta }_{j},t\right)\propto {\lambda }_{j}^{-1}\underset{s=1}{\overset{2}{\prod }}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{\prod }}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (25)

${\pi }^{*}\left({\beta }_{j}|{p}_{j},{\lambda }_{j},t\right)\propto {\beta }_{j}^{-1}\underset{s=1}{\overset{2}{\prod }}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{\prod }}{f}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta }\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\}.$ (26)

The posteriors of ${p}_{j},\text{}{\lambda }_{j}$ and ${\beta }_{j}$ in (24), (25) and (26) are not of standard form, but their plots show that they are similar to the normal distribution. Therefore, to generate from these distributions, we use the Metropolis-Hastings method with a normal proposal distribution (see  ). For details regarding the implementation of the Metropolis-Hastings algorithm, the reader may refer to  . To run the Gibbs sampler algorithm, we started with the ML estimates. We then drew samples from the various full conditionals, in turn, using the most recent values of all other conditioning variables, until a systematic pattern of convergence was achieved. The following Gibbs sampling algorithm is proposed to compute Bayes estimators of $u\equiv u\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right)$ based on the BSEL function.

1) Start with an initial guess of $\left({p}_{j},{\lambda }_{j},{\beta }_{j}\right)$ , say $\left({p}_{j}^{0},{\lambda }_{j}^{0},{\beta }_{j}^{0}\right)$ .

2) Set $i=1$ .

3) Generate ${p}^{i}$ from (24) and ${\lambda }^{i}$ from (25).

4) Generate ${\beta }^{i}$ from (26).

5) Set $i=i+1.$

6) Repeat steps 3 - 5 N times.

7) An approximate Bayes estimator of $u$ under BSEL function is given by

$E\left(u|t\right)=\left(1/\left(N-\nu \right)\right)\underset{i=\nu +1}{\overset{N}{\sum }}u\left({p}^{i},{\lambda }^{i},{\beta }^{i}\right),$ (27)

where $\nu$ is the burn-in period. Thus, the Bayes estimator of $u$ based on the BSEL function is given by

${\stackrel{^}{u}}_{BS}=\Omega {\stackrel{^}{u}}_{ML}+\left(1-\Omega \right)E\left(u|t\right).$ (28)
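The algorithm above can be sketched generically as follows; here `logpost` stands for the relevant log conditional posterior from (24)-(26), and all names and tuning values are illustrative assumptions rather than the paper's implementation:

```python
import math
import random

def metropolis_within_gibbs(logpost, init, steps, n_iter=3000, burn=500, seed=1):
    """Generic Metropolis-within-Gibbs sampler with normal proposals.

    Mirrors steps 1)-6): each coordinate of theta is updated in turn, and the
    retained draws are averaged as in Equation (27).
    """
    rng = random.Random(seed)
    theta = list(init)
    kept = []
    for it in range(n_iter):
        for k in range(len(theta)):
            prop = theta[:]
            prop[k] = theta[k] + rng.gauss(0.0, steps[k])  # normal proposal
            # Metropolis acceptance: min(1, pi*(prop) / pi*(theta))
            if rng.random() < math.exp(min(0.0, logpost(prop) - logpost(theta))):
                theta = prop
        if it >= burn:
            kept.append(theta[:])
    # step 7): Monte Carlo approximation of the posterior mean E(u | t)
    return [sum(d[k] for d in kept) / len(kept) for k in range(len(init))]

def bsel_estimate(omega, u_ml, u_post_mean):
    """BSEL estimator of Equation (28)."""
    return omega * u_ml + (1 - omega) * u_post_mean
```

Initializing at the ML estimates, as the text recommends, typically shortens the burn-in period $\nu$.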

4. Bayesian Two-Sample Prediction

The two-sample prediction technique is considered to derive Bayesian prediction bounds for future order statistics based on progressively Type-II censored informative samples obtained from constant-PALT models. Suppose that, for $s=1,2,$ the two-sample scheme is used, in which the informative sample $\left({T}_{s1:{m}_{s}:{n}_{s}}<{T}_{s2:{m}_{s}:{n}_{s}}<\cdots <{T}_{s{m}_{s}:{m}_{s}:{n}_{s}}\right)$ represents an observed informative progressively type-II right censored sample of size ${m}_{s}$ obtained from a sample of size ${n}_{s}$ with progressive CS ${R}_{s}=\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)$ , drawn from a population whose PDFs are given by (1) and (7). Suppose also that ${Y}_{1:M:N},{Y}_{2:M:N},\cdots ,{Y}_{M:M:N}$ represents a future (unobserved) independent progressively type-II right censored sample of size $M$ obtained from a sample of size $N$ with progressive CS ${R}^{*}=\left({R}_{1}^{*},\cdots ,{R}_{M}^{*}\right),$ drawn from the population whose CDF is given by (7). We want to predict any future (unobserved) ${Y}_{b},\text{}b=1,2,\cdots ,M,$ in the future sample of size $M$ . The PDF of ${Y}_{b},\text{}b=1,2,\cdots ,M,$ given the vector of parameters $\theta$ , is obtained as (see  ):

${g}^{*}\left({y}_{b}|\theta \right)={C}_{b-1}{f}_{2\Theta }\left({y}_{b}\right)\underset{i=1}{\overset{b}{\sum }}{\kappa }_{i}{\left[1-{F}_{2\Theta }\left({y}_{b}\right)\right]}^{{\gamma }_{i}-1},$ (29)

where

$\begin{array}{c}{\gamma }_{i}=\underset{j=i}{\overset{M}{\sum }}\left({R}_{j}^{*}+1\right)=N-\underset{j=1}{\overset{i-1}{\sum }}\left({R}_{j}^{*}+1\right),\text{}{C}_{b-1}=\underset{i=1}{\overset{b}{\prod }}{\gamma }_{i},\\ {\kappa }_{i}=\underset{j=1,j\ne i}{\overset{b}{\prod }}\frac{1}{{\gamma }_{j}-{\gamma }_{i}},\text{}b>1,\text{and}{\kappa }_{1}=1\text{for}b=1.\end{array}$

Substituting from (7) and (9) in (29), we have:

$\begin{array}{l}{g}^{*}\left({y}_{b}|\theta \right)\\ ={C}_{b-1}\left({p}_{1}{f}_{21}\left(y;{\theta }_{1}\right)+{p}_{2}{f}_{22}\left(y;{\theta }_{2}\right)\right){\sum }_{i=1}^{b}{\kappa }_{i}{\left[1-\left({p}_{1}{F}_{21}\left(y;{\theta }_{1}\right)+{p}_{2}{F}_{22}\left(y;{\theta }_{2}\right)\right)\right]}^{{\gamma }_{i}-1}.\end{array}$ (30)
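The coefficients ${\gamma }_{i}$ , ${\kappa }_{i}$ and ${C}_{b-1}$ entering (29)-(30) are straightforward to compute. A minimal Python sketch (the function name and indexing are our own):

```python
from math import prod  # Python 3.8+

def prediction_coeffs(R_star, b):
    """gamma_i, kappa_i and C_{b-1} for the b-th future order statistic
    under a future censoring scheme R* = (R*_1, ..., R*_M)."""
    M = len(R_star)
    # gamma_i = sum_{j=i}^{M} (R*_j + 1), written here with 0-based indices
    gamma = [sum(R_star[j] + 1 for j in range(i, M)) for i in range(M)]
    C = prod(gamma[:b])
    if b == 1:
        kappa = [1.0]
    else:
        kappa = [prod(1.0 / (gamma[j] - gamma[i]) for j in range(b) if j != i)
                 for i in range(b)]
    return gamma[:b], kappa, C
```

As a check, for the complete-sample scheme ${R}^{*}=\left(0,0,0\right)$ and $b=2$ this gives $\gamma =\left(3,2\right)$ , $\kappa =\left(-1,1\right)$ and ${C}_{1}=6$ , so (29) reduces to the familiar density $6F\left(y\right)\left(1-F\left(y\right)\right)f\left(y\right)$ of the second order statistic of three.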

4.1. Maximum Likelihood Prediction When ${\alpha }_{j}$ Is Known

Maximum likelihood prediction (MLP) can be obtained from (30) by replacing the parameters $\theta =\left(p,{\beta }_{1},{\beta }_{2},{\lambda }_{1},{\lambda }_{2}\right)$ with their ML estimates ${\stackrel{^}{\theta }}_{\left(ML\right)}=\left({\stackrel{^}{p}}_{\left(ML\right)},{\stackrel{^}{{\beta }_{1}}}_{\left(ML\right)},{\stackrel{^}{{\beta }_{2}}}_{\left(ML\right)},{\stackrel{^}{{\lambda }_{1}}}_{\left(ML\right)},{\stackrel{^}{{\lambda }_{2}}}_{\left(ML\right)}\right).$

1) Interval prediction:

The maximum likelihood prediction interval (MLPI) for any future observation ${y}_{b},\text{ }1\le b\le M,$ is based on the predictive survival probability

$\mathrm{Pr}\left[{y}_{b}\ge \upsilon |t\right]={\int }_{\upsilon }^{\infty }{g}^{*}\left({y}_{b}|{\stackrel{^}{\theta }}_{\left(ML\right)}\right)\text{d}{y}_{b}.$ (31)

A $\left(1-\tau \right)×100%$ MLPI $\left(L,U\right)$ of the future observation ${y}_{b}$ is given by solving the following two nonlinear equations

$\mathrm{Pr}\left[{y}_{b}\ge L\left(t\right)|t\right]=1-\frac{\tau }{2},\text{ }\mathrm{Pr}\left[{y}_{b}\ge U\left(t\right)|t\right]=\frac{\tau }{2}.$ (32)
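In the special case $b=1$ with ${R}_{i}^{*}=0$, the survival probability simplifies to $\mathrm{Pr}\left[{Y}_{1}\ge \upsilon \right]={\left[1-F\left(\upsilon \right)\right]}^{N}$, so (32) reduces to two quantile evaluations of the mixture CDF. The following sketch (illustrative names; the mixture quantile is inverted by bisection since it has no closed form) shows this under the accelerated-condition parameterization:

```python
def mix_cdf(y, p, alphas, betas, lams):
    # Accelerated-condition MTP mixture CDF
    ps = (p, 1.0 - p)
    return sum(pj * (1.0 - bj**(lj * aj) * (bj + y) ** (-(lj * aj)))
               for pj, aj, bj, lj in zip(ps, alphas, betas, lams))

def mix_quantile(u, p, alphas, betas, lams, hi=1e6, tol=1e-10):
    # Invert the mixture CDF by bisection (no closed-form inverse for a mixture)
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mix_cdf(mid, p, alphas, betas, lams) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mlpi_first_failure(tau, N, theta):
    # For b = 1 with no future censoring, Pr[Y_1 >= v] = [1 - F(v)]^N,
    # so the bounds in (32) are quantiles of F.
    p, alphas, betas, lams = theta
    L = mix_quantile(1.0 - (1.0 - tau / 2) ** (1.0 / N), p, alphas, betas, lams)
    U = mix_quantile(1.0 - (tau / 2) ** (1.0 / N), p, alphas, betas, lams)
    return L, U
```

For general $b$ the same root-bracketing approach applies, with the survival probability computed by numerically integrating (30).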

2) Point prediction:

The maximum likelihood prediction point (MLPP) for any future observation ${y}_{b}$ is the mean of the predictive density evaluated at ${\stackrel{^}{\theta }}_{\left(ML\right)}$:

${\stackrel{^}{y}}_{b\left(ML\right)}=E\left[{y}_{b}|t\right]={\int }_{0}^{\infty }{y}_{b}{g}^{*}\left({y}_{b}|{\stackrel{^}{\theta }}_{\left(ML\right)}\right)\text{d}{y}_{b}.$ (33)

4.2. Bayesian Prediction When ${\alpha }_{j}$ Is Known

The predictive density function of ${Y}_{b},\text{}1\le b\le M$ is given by:

${\Psi }^{*}\left({y}_{b}|t\right)={\int }_{0}^{\infty }{g}^{*}\left({y}_{b}|\theta \right){\pi }^{*}\left(\theta |t\right)\text{d}\theta ,\text{ }{y}_{b}>0.$ (34)

1) Interval prediction:

The Bayesian prediction interval (BPI) for the future observation ${Y}_{b},\text{ }1\le b\le M,$ can be computed from (34), which can be approximated via the MCMC algorithm as

${\Psi }^{\star }\left({y}_{b}|t\right)=\frac{{\sum }_{i=1}^{\mu }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)}{{\sum }_{i=1}^{\mu }{\int }_{0}^{\infty }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)\text{d}{y}_{b}},$ (35)

where ${\theta }^{i},\text{ }i=1,2,\cdots ,\mu ,$ are generated from the posterior density function (23) using the Gibbs sampler and Metropolis-Hastings techniques.

A $\left(1-\tau \right)×100%$ BPI $\left(L,U\right)$ of the future observation ${y}_{b}$ is obtained by solving the following two nonlinear equations

$\frac{{\sum }_{i=1}^{\mu }{\int }_{L}^{\infty }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)\text{d}{y}_{b}}{{\sum }_{i=1}^{\mu }{\int }_{0}^{\infty }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)\text{d}{y}_{b}}=1-\frac{\tau }{2},$ (36)

$\frac{{\sum }_{i=1}^{\mu }{\int }_{U}^{\infty }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)\text{d}{y}_{b}}{{\sum }_{i=1}^{\mu }{\int }_{0}^{\infty }{g}^{*}\left({y}_{b}|{\theta }^{i}\right)\text{d}{y}_{b}}=\frac{\tau }{2}.$ (37)
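Equations (36) and (37) average the predictive survival probability over the MCMC draws and then invert it. A minimal sketch for the case $b=1$ with ${R}_{i}^{*}=0$ (where the per-draw survival is ${\left[1-F\left(\upsilon |{\theta }^{i}\right)\right]}^{N}$, so no inner numerical integration is needed) is given below; the function names and the bisection solver are illustrative assumptions, not the paper's code:

```python
def mix_cdf(y, p, alphas, betas, lams):
    # Accelerated-condition MTP mixture CDF
    ps = (p, 1.0 - p)
    return sum(pj * (1.0 - bj**(lj * aj) * (bj + y) ** (-(lj * aj)))
               for pj, aj, bj, lj in zip(ps, alphas, betas, lams))

def bpi_first_failure(tau, N, draws, upper=1e6, tol=1e-8):
    # draws: list of theta = (p, alphas, betas, lams) from the MCMC chain.
    # Averaged predictive survival for b = 1, as in (36)-(37) with R* = 0:
    def surv(v):
        return sum((1.0 - mix_cdf(v, *th)) ** N for th in draws) / len(draws)
    def solve(target):
        lo, hi = 0.0, upper
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if surv(mid) > target:   # surv is decreasing in v
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return solve(1.0 - tau / 2), solve(tau / 2)
```

With a single posterior draw the BPI collapses to the plug-in interval, which provides a convenient check.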

Numerical methods such as Newton-Raphson are needed to solve the two nonlinear Equations (36) and (37) for $L$ and $U$ for a given $\tau .$

2) Point prediction:

a) Bayesian prediction point (BPP) for the future observation ${y}_{b}$ based on BSEL function can be obtained using

${\stackrel{˜}{y}}_{b\left(BS\right)}=\Omega {\stackrel{^}{y}}_{b\left(ML\right)}+\left(1-\Omega \right)E\left({y}_{b}|t\right),$ (38)

where ${\stackrel{^}{y}}_{b\left(ML\right)}$ is the ML prediction for the future observation ${y}_{b}$, obtained from (33), and $E\left({y}_{b}|t\right)$ is given by

$E\left({y}_{b}|t\right)={\int }_{0}^{\infty }{y}_{b}{\Psi }^{*}\left({y}_{b}|t\right)\text{d}{y}_{b}.$ (39)

b) BPP for the future observation ${y}_{b}$ based on BLINX loss function can be obtained using

${\stackrel{^}{y}}_{b\left(BL\right)}=-\frac{1}{a}\mathrm{ln}\left[\Omega \mathrm{exp}\left[-a{\stackrel{^}{y}}_{b\left(ML\right)}\right]+\left(1-\Omega \right)E\left({\text{e}}^{-a{y}_{b}}|t\right)\right],$ (40)

where ${\stackrel{^}{y}}_{b\left(ML\right)}$ is the ML prediction for the future observation ${y}_{b}$, obtained from (33), and $E\left({\text{e}}^{-a{y}_{b}}|t\right)$ is given by

$E\left({\text{e}}^{-a{y}_{b}}|t\right)={\int }_{0}^{\infty }{\text{e}}^{-a{y}_{b}}{\Psi }^{*}\left({y}_{b}|t\right)\text{d}{y}_{b}.$ (41)
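Given draws of ${y}_{b}$ from the predictive distribution (obtained, for example, by sampling ${\theta }^{i}$ from the chain and then ${Y}_{b}|{\theta }^{i}$), the two point predictors (38) and (40) are simple Monte Carlo averages. A sketch with illustrative names follows:

```python
import math

def bpp_from_draws(y_samples, y_ml, omega, a):
    # y_samples: predictive draws of Y_b; y_ml: the MLPP from (33);
    # omega: the balance weight; a: the LINEX shape parameter.
    e_y = sum(y_samples) / len(y_samples)            # E(Y_b | t), Eq. (39)
    e_exp = sum(math.exp(-a * y) for y in y_samples) / len(y_samples)  # Eq. (41)
    y_bs = omega * y_ml + (1.0 - omega) * e_y        # BSEL predictor (38)
    y_bl = -(1.0 / a) * math.log(                    # BLINX predictor (40)
        omega * math.exp(-a * y_ml) + (1.0 - omega) * e_exp)
    return y_bs, y_bl
```

Note that at $\Omega =1$ both predictors reduce to ${\stackrel{^}{y}}_{b\left(ML\right)}$, and for $a>0$ the BLINX predictor never exceeds the BSEL one (by Jensen's inequality).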

5. Simulation Studies

In this section, numerical examples are provided to demonstrate the theoretical results of this paper. All computations were performed in Mathematica (ver. 8.0).

To generate progressively Type-II censored Pareto samples, we used the algorithm proposed by  . The MLEs and Bayes estimates of the parameters are computed and compared in a Monte Carlo simulation study according to the following steps:

1) For given values of the parameters, ${n}_{s}$ and ${m}_{s}\left(1\le {m}_{s}\le {n}_{s}\right),\text{ }s=1,2,$ we generate progressively Type-II censored samples from the MTP distribution as follows:

a) For given values of ${m}_{s}$, generate two independent random samples of sizes ${m}_{1}$ and ${m}_{2}$ from the Uniform(0,1) distribution: $\left({U}_{s1},{U}_{s2},\cdots ,{U}_{s{m}_{s}}\right),\text{ }s=1,2.$

b) For the given progressive censoring scheme ${R}_{si},\text{ }s=1,2,\text{ }i=1,\cdots ,{m}_{s},$ set ${E}_{si}=1/\left(i+{\sum }_{\kappa ={m}_{s}-i+1}^{{m}_{s}}{R}_{s\kappa }\right).$

c) Set ${V}_{si}={U}_{si}^{{E}_{si}}.$

d) Set ${U}_{si}^{*}=1-{\prod }_{\kappa ={m}_{s}-i+1}^{{m}_{s}}{V}_{s\kappa },\text{}s=1,2,i=1,\cdots ,{m}_{s}.$

e) For given values of $p,\text{ }{\alpha }_{j},\text{ }{\beta }_{j},\text{ }{\lambda }_{j}$ and ${n}_{s},\text{ }{m}_{s}$, obtain ${t}_{si}$ by solving numerically:

${U}_{si}^{*}=p\left[1-{\beta }_{1}^{{\lambda }_{1}^{\left(s-1\right)}{\alpha }_{1}}{\left({\beta }_{1}+{t}_{si}\right)}^{-{\lambda }_{1}^{\left(s-1\right)}{\alpha }_{1}}\right]+\left(1-p\right)\left[1-{\beta }_{2}^{{\lambda }_{2}^{\left(s-1\right)}{\alpha }_{2}}{\left({\beta }_{2}+{t}_{si}\right)}^{-{\lambda }_{2}^{\left(s-1\right)}{\alpha }_{2}}\right],$

This yields the required progressively Type-II censored samples of sizes ${m}_{s}$ from the MTP distribution under constant PALT.
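Steps a)-d) are the standard uniform-sample transformation for progressive Type-II censoring, and step e) inverts the MTP constant-PALT CDF. A Python sketch of the whole generator is given below; the bisection inversion and all function names are illustrative assumptions (the paper's own computations used Mathematica):

```python
import random

def progressive_uniform(R):
    # Steps a)-d): transform m iid U(0,1) variates into an ordered
    # progressively Type-II censored uniform sample.
    m = len(R)
    U = [random.random() for _ in range(m)]
    E = [1.0 / (i + sum(R[m - i:])) for i in range(1, m + 1)]   # E_i
    V = [u ** e for u, e in zip(U, E)]                          # V_i = U_i^{E_i}
    Ustar = []
    for i in range(1, m + 1):
        prod = 1.0
        for k in range(m - i, m):       # product of V_{m-i+1}, ..., V_m
            prod *= V[k]
        Ustar.append(1.0 - prod)
    return Ustar                         # increasing in i

def mtp_progressive_sample(R, s, p, alphas, betas, lams):
    # Step e): invert the MTP constant-PALT CDF at each U*_i by bisection;
    # s = 1 (use condition) or s = 2 (accelerated condition).
    def cdf(t):
        ps = (p, 1.0 - p)
        out = 0.0
        for pj, aj, bj, lj in zip(ps, alphas, betas, lams):
            k = (lj ** (s - 1)) * aj
            out += pj * (1.0 - bj**k * (bj + t) ** (-k))
        return out
    sample = []
    for u in progressive_uniform(R):
        lo, hi = 0.0, 1e8
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if cdf(mid) < u:
                lo = mid
            else:
                hi = mid
        sample.append(0.5 * (lo + hi))
    return sample
```

Because the ${U}_{si}^{*}$ are increasing and the CDF is monotone, the generated failure times come out already ordered.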

2) The MLEs of the parameters are obtained by solving the nonlinear equations (13) numerically.

3) The Bayes estimates of the parameters under the BSEL function are computed from (28) using the MCMC method described above.

The performance of the resulting estimators of the acceleration, shape and scale parameters is assessed in terms of their average (AVG), relative absolute bias (RAB) and mean square error (MSE), where

$\begin{array}{c}{\stackrel{¯}{\stackrel{^}{\Phi }}}_{k}=\left(1/M\right)\underset{i=1}{\overset{M}{\sum }}{\stackrel{^}{\Phi }}_{k}^{\left(i\right)}\\ k=1,2,\cdots ,5,\left({\Phi }_{1}=p,{\Phi }_{2}={\lambda }_{1},{\Phi }_{3}={\lambda }_{2},{\Phi }_{4}={\beta }_{1},{\Phi }_{5}={\beta }_{2}\right),\end{array}$

$RAB=\frac{|{\stackrel{¯}{\stackrel{^}{\Phi }}}_{k}-{\Phi }_{k}|}{{\Phi }_{k}},$

$MSE=\left(1/M\right){\sum }_{i=1}^{M}{\left({\stackrel{^}{\Phi }}_{k}^{\left(i\right)}-{\Phi }_{k}\right)}^{2}$ .
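The three performance measures above are straightforward to compute from the Monte Carlo replicates of an estimator; a minimal helper (illustrative name, assuming $M$ replicates of a single parameter's estimate) is:

```python
def avg_rab_mse(estimates, true_value):
    # estimates: the M Monte Carlo replicates of one parameter's estimator
    M = len(estimates)
    avg = sum(estimates) / M                                   # AVG
    rab = abs(avg - true_value) / true_value                   # RAB
    mse = sum((e - true_value) ** 2 for e in estimates) / M    # MSE
    return avg, rab, mse
```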

In our study, we have used three different censoring schemes (C.S), namely:

Scheme I: ${R}_{m}={n}_{s}-{m}_{s},{R}_{i}=0$ for $i\ne {m}_{s}$ .

Scheme II: ${R}_{1}={n}_{s}-{m}_{s},{R}_{i}=0$ for $i\ne 1$ .

Scheme III: ${R}_{\left(\left({m}_{s}+1\right)/2\right)}={n}_{s}-{m}_{s},\text{ }{R}_{i}=0$ for $i\ne \left({m}_{s}+1\right)/2$ if ${m}_{s}$ is odd, and ${R}_{\left({m}_{s}/2\right)}={n}_{s}-{m}_{s},\text{ }{R}_{i}=0$ for $i\ne {m}_{s}/2$ if ${m}_{s}$ is even.
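Each of the three schemes places all ${n}_{s}-{m}_{s}$ removals at a single failure stage (last, first, or middle). As a sketch (function name is illustrative), the scheme vectors can be built as:

```python
def censoring_scheme(scheme, n, m):
    # Schemes I-III place all n - m removals at a single stage.
    R = [0] * m
    if scheme == "I":
        R[m - 1] = n - m                 # removals at the last observed failure
    elif scheme == "II":
        R[0] = n - m                     # removals at the first observed failure
    elif scheme == "III":                # removals at the middle failure
        mid = (m + 1) // 2 if m % 2 else m // 2   # 1-based middle index
        R[mid - 1] = n - m
    return R
```

In every case ${\sum }_{i}{R}_{i}+m=n$, so the full sample of size $n$ is accounted for.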

In the simulation studies, we consider two cases separately:

a) The population parameter values are $\left({\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,p=0.5\right)$, with sample sizes ${n}_{1}={n}_{2}=n$ and numbers of observed failures ${m}_{1}={m}_{2}=m$; the results are shown in Table 1. The progressive censoring schemes used in this case are displayed in Table 2.

b) The population parameter values are $\left({\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,p=0.5\right)$, with sample sizes ${n}_{1}\ne {n}_{2}$ and numbers of observed failures ${m}_{1}\ne {m}_{2}$; the results are shown in Table 3, and Table 4 gives the progressive censoring schemes used in this case. Figure 1 and Figure 2 present the MSE and RAB of the estimates of $\theta =\left(p,{\alpha }_{1},{\alpha }_{2},{\beta }_{1},{\beta }_{2}\right)$ when ${n}_{1}={n}_{2}=n$.

Table 1. MLEs and Bayes estimates of the parameters and their MSEs and RABs at $\left({\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,\Omega =0.5,p=0.5\right)$.

Table 2. Progressive censoring schemes used in the simulation study at ${n}_{1}={n}_{2}=n$ and ${m}_{1}={m}_{2}=m$.

The ML prediction (point and interval) and Bayesian prediction (point and interval) are computed according to the following steps:

Generate ${\theta }^{i}=\left({p}^{i},{\beta }_{1}^{i},{\beta }_{2}^{i},{\lambda }_{1}^{i},{\lambda }_{2}^{i}\right)$ from the posterior PDF using the MCMC algorithm.

Solving Equation (32) gives the 95% MLPI for the ${b}^{\text{th}}$ order statistic in a future progressively Type-II censored sample; the MLPP for the future observation ${y}_{b}$ is computed using (33).

Table 3. MLEs and Bayes estimates of the parameters and their MSEs and RABs at $\left({\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,\Omega =0.5,p=0.5\right)$ .

Table 4. Progressive censoring schemes used in the simulation study at ${n}_{1}\ne {n}_{2}$ and ${m}_{1}\ne {m}_{2}.$

Figure 1. Mean square error (MSE) of the estimates of $\theta =\left(p,{\alpha }_{1},{\alpha }_{2},{\beta }_{1},{\beta }_{2}\right)$ when the sample sizes $\left({n}_{1}={n}_{2}=n\right).$

Figure 2. Relative absolute bias (RAB) of the estimates of $\theta =\left(p,{\alpha }_{1},{\alpha }_{2},{\beta }_{1},{\beta }_{2}\right)$ when the sample sizes $\left({n}_{1}={n}_{2}=n\right).$

Table 5. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,\text{ }i=1,2,\cdots ,M,$ C.S I and ( ${\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,\Omega =0.5,p=0.5$ ).

Table 6. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,\text{ }i=1,2,\cdots ,M,$ C.S II and ( ${\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,\Omega =0.5,p=0.5$ ).

The 95% BPI for the future observation ${y}_{b}$ are obtained by solving Equations (36) and (37).

Table 7. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,\text{ }i=1,2,\cdots ,M,$ C.S III and ( ${\alpha }_{1}=1.1,{\alpha }_{2}=2.3,{\beta }_{1}=0.3,{\beta }_{2}=0.7,{\lambda }_{1}=1.5,{\lambda }_{2}=2,\Omega =0.5,p=0.5$ ).

The BPP for the future observation ${y}_{b}$ is computed based on the BSEL function using (38) and based on the BLINX loss function using (40).

Generate $10,000$ progressively Type-II censored samples, each of size $M$, from a population whose CDF is given by (9) with ${R}_{i}^{*},\text{ }i=1,2,\cdots ,M,$ and calculate the coverage percentage (CP) of ${Y}_{b}$. For simplicity, we take ${R}_{i}^{*}=0,\text{ }i=1,2,\cdots ,M,$ which corresponds to the ordinary order statistics, and $M=N=10.$

6. Conclusions

Progressive Type-II censoring is of great importance in planning duration experiments in reliability studies. It has been shown by  that inference is possible and practical when the sample data are gathered according to a progressive Type-II censoring scheme. This paper dealt with constant PALT under progressive Type-II censoring, assuming that the lifetimes of test units follow the MTP distribution. The MLEs and BEs of the acceleration factors and the parameters under consideration were derived. The BEs were obtained under the BSEL function and non-informative priors (NIPs). Since the BEs cannot be obtained in explicit form, the MCMC method was used to compute them. This illustrates the scope of MCMC-based Bayesian solutions, which make such inferential developments routinely available.

From the result, we observe the following:

It is noticed from the numerical calculations that the Bayes estimates under the BSEL function have the smallest MSEs as compared with their corresponding MLEs.

In general, as the effective sample proportion $m/n$ increases, the MSEs and RABs of the estimates of the considered parameters decrease.

For fixed sample and failure-time sizes, Scheme II, in which the censoring occurs after the first observed failure, gives more accurate results in terms of MSEs and RABs than the other schemes; this coincides with Theorem [2.2] of  .

The MLEs of ${\beta }_{1}$ are better than the BEs in general.

In most cases, we observed that when the sample size increased, the MSEs and RABs decreased for all censoring schemes.

The results in Tables 5-7 show that the prediction intervals obtained by the ML procedure are shorter than those obtained by the Bayes procedure.

The simulation results show that the proposed prediction levels are satisfactory compared with the nominal prediction level of 95%.

Conflicts of Interest

The authors declare no conflicts of interest. 