Inferences for the Generalized Logistic Distribution Based on Record Statistics

Estimation of the parameters of the generalized logistic distribution (GLD) is considered based on record statistics, from both Bayesian and non-Bayesian approaches. The Bayes estimators cannot be obtained in explicit form, so Markov chain Monte Carlo (MCMC) algorithms are used to compute the Bayes estimates. Point estimates and confidence intervals based on maximum likelihood and on parametric bootstrap methods are proposed for the unknown parameters. A numerical example is analyzed for illustrative purposes, and the Bayesian and maximum likelihood estimators are compared via Monte Carlo simulation.

Let X_1, X_2, X_3, \ldots be a sequence of independent and identically distributed (iid) random variables with cumulative distribution function (cdf) F(x) and probability density function (pdf) f(x). We say that X_j is a lower record value if it is smaller than all preceding observations, that is, if X_j < X_i for every i < j. The standard logistic distribution has important uses in describing growth and as a substitute for the normal distribution. It has also attracted interesting applications in modeling the dependence of chronic obstructive respiratory disease prevalence on smoking and age, degrees of pneumoconiosis in coal miners, geological issues, hemolytic uremic syndrome data for children, physicochemical phenomena, psychological issues, survival times of diagnosed leukemia patients, and weight gain data. A generalized logistic distribution can be constructed from the fact that the difference of two independent Gumbel distributed random variables has the standard logistic distribution. The generalized logistic distribution (GLD) has received additional attention in estimating its parameters for practical usage; see, for example, Asgharzadeh [15]. The probability density function (pdf) and cumulative distribution function (cdf) of the two-parameter generalized logistic distribution, denoted by GLD(λ, θ), are given, respectively, by

f(x; \lambda, \theta) = \lambda \theta e^{-\theta x} \left(1 + e^{-\theta x}\right)^{-(\lambda+1)}, \quad -\infty < x < \infty,   (1)

and

F(x; \lambda, \theta) = \left(1 + e^{-\theta x}\right)^{-\lambda}, \quad -\infty < x < \infty,   (2)

where λ > 0 and θ > 0 are the shape and scale parameters, respectively. The above GLD was originally proposed as a generalization of the logistic distribution by Johnson et al. [16]. For λ = 1, the GLD reduces to the standard logistic distribution and is symmetric. The pdf in (1) can be obtained by compounding an extreme value distribution with a gamma distribution; different estimation procedures can be found in Chen and Balakrishnan [17].
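Because the GLD cdf has a closed-form inverse, both ordinary GLD variates and lower record chains are easy to simulate. The sketch below is an illustrative fragment (not code from the paper): it draws GLD(λ, θ) samples by inversion and generates the first n lower records using the Markov property of records, namely that given the current record r, the next lower record has cdf F(x)/F(r).

```python
import math
import random

def gld_cdf(x, lam, theta):
    # cdf (2): F(x) = (1 + exp(-theta*x))**(-lam)
    return (1.0 + math.exp(-theta * x)) ** (-lam)

def gld_inv(u, lam, theta):
    # inverse cdf: x = -log(u**(-1/lam) - 1) / theta, for 0 < u < 1
    return -math.log(u ** (-1.0 / lam) - 1.0) / theta

def lower_records(lam, theta, n, rng):
    # First n lower records: given the previous record r, the next record
    # is F^{-1}(u) with u uniform on (0, F(r)).
    recs, bound = [], 1.0
    for _ in range(n):
        u = max(rng.uniform(0.0, bound), 1e-300)  # guard against u == 0
        recs.append(gld_inv(u, lam, theta))
        bound = u
    return recs
```

For λ = 1 this reduces to the standard logistic distribution, so F(0) = 0.5; the record chain is strictly decreasing by construction.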
The rest of the paper is organized as follows. In Section 2, we derive point estimates and approximate confidence intervals based on maximum likelihood estimation. The parametric bootstrap confidence intervals are discussed in Section 3. Section 4 describes the Bayes estimates and the construction of credible intervals using MCMC algorithms. Section 5 contains the analysis of a numerical example to illustrate the proposed methods. Simulation studies assessing the performance of the different estimation methods are reported in Section 6. Finally, we conclude with some comments in Section 7.

Maximum Likelihood Estimation
Suppose that x = (x_1, x_2, \ldots, x_n) are the first n lower record values from the generalized logistic distribution GLD(λ, θ). The likelihood function for the observed record x is (see Arnold et al. [7])

L(\lambda, \theta \mid x) = f(x_n) \prod_{i=1}^{n-1} \frac{f(x_i)}{F(x_i)},   (3)

where f(\cdot) and F(\cdot) are given by (1) and (2), respectively. Substituting (1) and (2) into (3), the likelihood function can be written as

L(\lambda, \theta \mid x) = \lambda^n \theta^n \exp\!\Big(-\theta \sum_{i=1}^{n} x_i\Big) \left(1 + e^{-\theta x_n}\right)^{-\lambda} \prod_{i=1}^{n} \left(1 + e^{-\theta x_i}\right)^{-1},   (4)

so that the log-likelihood is

\ln L(\lambda, \theta \mid x) = n \ln \lambda + n \ln \theta - \theta \sum_{i=1}^{n} x_i - \lambda \ln\!\left(1 + e^{-\theta x_n}\right) - \sum_{i=1}^{n} \ln\!\left(1 + e^{-\theta x_i}\right).   (5)

Setting \partial \ln L / \partial \lambda = 0, the maximum likelihood estimate (MLE) of λ, say λ̂, is obtained as

\hat{\lambda} = \frac{n}{\ln\!\left(1 + e^{-\hat{\theta} x_n}\right)}.   (6)

Setting \partial \ln L / \partial \theta = 0 and substituting (6) yields a non-linear equation for θ of the fixed-point form θ = h(θ). Since θ̂ is a fixed point of this non-linear equation, it can be obtained by a simple iterative scheme,

\theta_{j+1} = h(\theta_j),

where θ_j is the j-th iterate of θ. The iteration procedure is stopped when |θ_{j+1} − θ_j| is sufficiently small. The asymptotic normality of the MLE can be used to compute approximate confidence intervals for the parameters λ and θ. The 100(1 − α)% confidence intervals for λ and θ become, respectively,

\hat{\lambda} \pm z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\lambda})}, \qquad \hat{\theta} \pm z_{\alpha/2} \sqrt{\operatorname{var}(\hat{\theta})},

where the variances are the diagonal elements of the inverse of the observed Fisher information matrix and z_{\alpha/2} is the percentile of the standard normal distribution with right-tail probability α/2.
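The computations above can be sketched as follows. This is an illustrative fragment (not the paper's code): it evaluates the log-likelihood of the lower record sample and the explicit profile MLE of λ, but replaces the fixed-point iteration for θ with a simple grid search over the profile log-likelihood, which is cruder but easy to verify; the grid range is an assumption.

```python
import math

def loglik(lam, theta, x):
    # Log-likelihood of lower records x[0] > x[1] > ... > x[-1] from GLD(lam, theta)
    n = len(x)
    s = n * math.log(lam) + n * math.log(theta) - theta * sum(x)
    s -= sum(math.log1p(math.exp(-theta * xi)) for xi in x)
    return s - lam * math.log1p(math.exp(-theta * x[-1]))

def lam_hat(theta, x):
    # Profile MLE of lambda: n / log(1 + exp(-theta * x_n))
    return len(x) / math.log1p(math.exp(-theta * x[-1]))

def fit_mle(x, grid=None):
    # Grid search on the profile log-likelihood; a stand-in for the
    # fixed-point iteration theta_{j+1} = h(theta_j) described above.
    grid = grid or [0.05 + 0.01 * k for k in range(2000)]
    th = max(grid, key=lambda t: loglik(lam_hat(t, x), t, x))
    return lam_hat(th, x), th
```

Any maximizer of the profile log-likelihood also maximizes the full log-likelihood jointly in (λ, θ), since λ enters (5) only through the profiled term.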

Bootstrap Confidence Intervals
In this section, we propose confidence intervals based on two parametric bootstrap methods: 1) the percentile bootstrap method (Boot-p), based on the idea of Efron [18]; and 2) the bootstrap-t method (Boot-t), based on the idea of Hall [19]. The algorithms for estimating the confidence intervals using both methods are as follows.

Percentile Bootstrap Method
Algorithm 1
Step 1. From the original data x = (x_1, x_2, \ldots, x_n), compute the ML estimates λ̂ and θ̂.
Step 2. Use λ̂ and θ̂ to generate a bootstrap lower record sample x* of size n from GLD(λ̂, θ̂).
Step 3. As in Step 1, based on x* compute the bootstrap sample estimates of λ and θ, say λ̂* and θ̂*.
Step 4. Repeat Steps 2-3 N times, representing N bootstrap MLEs of λ and θ based on N different bootstrap samples.
Step 5. Arrange the λ̂*'s and θ̂*'s in ascending order to obtain the bootstrap samples \varphi_l^{(1)} \le \varphi_l^{(2)} \le \cdots \le \varphi_l^{(N)}, l = 1, 2, where \varphi_1 = \hat{\lambda}^* and \varphi_2 = \hat{\theta}^*. Let G(z) = P(\varphi_l \le z) be the cumulative distribution function of \varphi_l, and define \varphi_{l,\text{Boot-p}}(z) = G^{-1}(z) for given z.
The approximate bootstrap 100(1 − α)% confidence interval for \varphi_l is then

\left( \varphi_{l,\text{Boot-p}}(\alpha/2), \; \varphi_{l,\text{Boot-p}}(1 - \alpha/2) \right).
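Algorithm 1 can be sketched as follows. This is a hypothetical, self-contained illustration: bootstrap lower record samples are regenerated from GLD(λ̂, θ̂) via the inverse cdf, a coarse grid-search MLE stands in for the paper's likelihood equations, and N, the data, and the seed are illustrative choices kept small for speed.

```python
import math
import random

def gld_inv(u, lam, theta):
    # inverse cdf of GLD(lam, theta)
    return -math.log(u ** (-1.0 / lam) - 1.0) / theta

def lower_records(lam, theta, n, rng):
    # First n lower records via the Markov property of records
    recs, bound = [], 1.0
    for _ in range(n):
        u = max(rng.uniform(0.0, bound), 1e-300)
        recs.append(gld_inv(u, lam, theta))
        bound = u
    return recs

def lam_hat(theta, x):
    return len(x) / math.log1p(math.exp(-theta * x[-1]))

def loglik(lam, theta, x):
    n = len(x)
    s = n * math.log(lam) + n * math.log(theta) - theta * sum(x)
    s -= sum(math.log1p(math.exp(-theta * xi)) for xi in x)
    return s - lam * math.log1p(math.exp(-theta * x[-1]))

def fit(x):
    grid = [0.05 + 0.02 * k for k in range(400)]  # coarse grid for speed
    th = max(grid, key=lambda t: loglik(lam_hat(t, x), t, x))
    return lam_hat(th, x), th

def boot_p(x, N=100, alpha=0.05, seed=2):
    # Percentile bootstrap (Boot-p), Steps 1-5 of Algorithm 1
    rng = random.Random(seed)
    lam0, th0 = fit(x)                         # Step 1
    lams, ths = [], []
    for _ in range(N):                         # Steps 2-4
        xb = lower_records(lam0, th0, len(x), rng)
        lb, tb = fit(xb)
        lams.append(lb)
        ths.append(tb)
    lams.sort()                                # Step 5
    ths.sort()
    lo, hi = int(N * alpha / 2), int(N * (1 - alpha / 2)) - 1  # crude indices
    return (lams[lo], lams[hi]), (ths[lo], ths[hi])
```

The empirical quantiles of the sorted bootstrap estimates play the role of G^{-1}(α/2) and G^{-1}(1 − α/2).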

Bootstrap-t Method
Algorithm 2
Step 1. From the original data x = (x_1, x_2, \ldots, x_n), compute the ML estimates λ̂ and θ̂.
Step 2. Use λ̂ and θ̂ to generate a bootstrap lower record sample x* of size n; based on these data compute the bootstrap estimates λ̂* and θ̂* together with the studentized statistics

T_1^* = \frac{\hat{\lambda}^* - \hat{\lambda}}{\sqrt{\operatorname{var}(\hat{\lambda}^*)}}, \qquad T_2^* = \frac{\hat{\theta}^* - \hat{\theta}}{\sqrt{\operatorname{var}(\hat{\theta}^*)}},

where the variances are obtained from the Fisher information matrix.
Step 3. Repeat Step 2 N times.
Step 4. For the T_1^* and T_2^* values obtained in Step 2, determine the upper and lower bounds of the confidence intervals for λ and θ.
Here also, if G(z) = P(T_l^* \le z) denotes the cumulative distribution function of T_l^*, the approximate 100(1 − α)% bootstrap-t confidence interval for λ is

\left( \hat{\lambda} - G^{-1}(1 - \alpha/2)\sqrt{\operatorname{var}(\hat{\lambda})}, \; \hat{\lambda} - G^{-1}(\alpha/2)\sqrt{\operatorname{var}(\hat{\lambda})} \right),

and similarly for θ.

Bayes Estimation Using MCMC
In the Bayesian approach, performance depends on the prior information about the unknown parameters and on the loss function. The prior information expresses the experimenter's beliefs about the unknown parameters and their statistical distributions. This section describes the Bayesian MCMC methods used to estimate the parameters of the generalized logistic distribution (GLD): the Gibbs sampling procedure [20] [21] and the Metropolis-Hastings (MH) algorithm [22] [23] are used to generate samples from the posterior density function, and these samples are then used to compute the Bayes point estimates and to construct the corresponding credible intervals. Considering model (1), assume independent gamma prior densities for λ and θ,

\pi_1(\lambda) \propto \lambda^{a-1} e^{-b\lambda}, \; \lambda > 0, \qquad \pi_2(\theta) \propto \theta^{c-1} e^{-d\theta}, \; \theta > 0,

so that the joint prior density of λ and θ can be written as

\pi(\lambda, \theta) \propto \lambda^{a-1} \theta^{c-1} e^{-b\lambda - d\theta}.   (25)

Based on the likelihood function of the observed sample in (4) and the joint prior in (25), the joint posterior density of λ and θ given the data, denoted by \pi^*(\lambda, \theta \mid x), can be written as

\pi^*(\lambda, \theta \mid x) \propto L(\lambda, \theta \mid x) \, \pi(\lambda, \theta).   (26)

Therefore, the Bayes estimate of any function g(λ, θ) under the squared error loss function is

\hat{g}(\lambda, \theta) = E[g(\lambda, \theta) \mid x] = \frac{\int_0^\infty \int_0^\infty g(\lambda, \theta) L(\lambda, \theta \mid x) \pi(\lambda, \theta) \, d\lambda \, d\theta}{\int_0^\infty \int_0^\infty L(\lambda, \theta \mid x) \pi(\lambda, \theta) \, d\lambda \, d\theta}.   (27)

The ratio of the two integrals in (27) cannot be obtained in closed form. In this case, we use an MCMC algorithm to generate samples from the posterior distribution and then compute the Bayes estimator of g(λ, θ) under the squared error loss (SEL) function. For more details about MCMC methods see, for example, Rezaei et al. [24] and Upadhyaya and Gupta [25].

MCMC Algorithm
The Markov chain Monte Carlo (MCMC) algorithm is used to compute the Bayes estimates of the parameters λ and θ under the squared error loss (SEL) function. We use the Metropolis-Hastings algorithm, which generates samples from an arbitrary proposal distribution (i.e., a Markov transition kernel), combined with Gibbs steps where the conditional posterior has a known form. The joint posterior can be obtained up to proportionality by multiplying the likelihood (4) by the joint prior (25):

\pi^*(\lambda, \theta \mid x) \propto \lambda^{n+a-1} \theta^{n+c-1} \exp\!\Big( -\lambda\big[b + \ln(1 + e^{-\theta x_n})\big] - \theta\Big[d + \sum_{i=1}^{n} x_i\Big] \Big) \prod_{i=1}^{n} \left(1 + e^{-\theta x_i}\right)^{-1}.   (28)

From (28), the conditional posterior distribution of λ can be written as

\pi_1^*(\lambda \mid \theta, x) \propto \lambda^{n+a-1} \exp\!\big( -\lambda\big[b + \ln(1 + e^{-\theta x_n})\big] \big).   (29)

Therefore, the conditional posterior distribution of λ is gamma with shape parameter n + a and rate parameter b + \ln(1 + e^{-\theta x_n}), and samples of λ can easily be generated using any gamma generating routine.

The conditional posterior distribution of θ can be written as

\pi_2^*(\theta \mid \lambda, x) \propto \theta^{n+c-1} \exp\!\Big( -\theta\Big[d + \sum_{i=1}^{n} x_i\Big] - \lambda \ln(1 + e^{-\theta x_n}) - \sum_{i=1}^{n} \ln(1 + e^{-\theta x_i}) \Big).   (30)

The conditional posterior of θ in Equation (30) cannot be reduced analytically to a well-known distribution, so it is not possible to sample from it directly by standard methods; however, its plot (see Figure 1) shows that it is similar to a normal distribution. To generate random numbers from this distribution we therefore use the Metropolis-Hastings method with a normal proposal distribution. The hyperparameters a, b, c and d should be chosen so that (30) is close to the proposal distribution, which improves the convergence of the MCMC iterations. We propose the following MCMC algorithm to draw samples from the posterior density functions, and in turn compute the Bayes estimates and construct the corresponding credible intervals.
Step 1. Start with initial values (λ^{(0)}, θ^{(0)}).
Step 2. Set t = 1.
Step 3. Generate λ^{(t)} from the gamma conditional posterior (29), and generate θ^{(t)} from (30) using the MH algorithm [22] [23] with a normal proposal distribution.
Step 4. Compute λ^{(t)} and θ^{(t)} and store them.
Step 5. Set t = t + 1 and repeat Steps 3-4 T times.
Step 6. Obtain the Bayes estimates of λ and θ with respect to the SEL function as the sample means of the generated values after discarding the burn-in period.
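The steps above can be sketched in a few lines. This is an illustrative fragment under gamma priors: λ is drawn exactly from its gamma conditional (a Gibbs step), while θ is updated with a normal random-walk Metropolis-Hastings step; the record data, hyperparameter values, proposal step size, chain length and seed used below are all illustrative assumptions.

```python
import math
import random

def theta_logpost(th, lam, x, c, d):
    # Log of the conditional posterior of theta, up to an additive constant
    n = len(x)
    s = (n + c - 1.0) * math.log(th) - th * (d + sum(x))
    s -= sum(math.log1p(math.exp(-th * xi)) for xi in x)
    return s - lam * math.log1p(math.exp(-th * x[-1]))

def mcmc(x, a=1.0, b=2.0, c=2.0, d=1.0, iters=3000, burn=500, step=0.3, seed=3):
    rng = random.Random(seed)
    n = len(x)
    lam, th = 1.0, 1.0                      # illustrative starting values
    lams, ths = [], []
    for t in range(iters):
        # Gibbs step: lambda | theta ~ Gamma(n + a, rate = b + log(1 + e^{-theta x_n}))
        rate = b + math.log1p(math.exp(-th * x[-1]))
        lam = rng.gammavariate(n + a, 1.0 / rate)   # gammavariate takes (shape, scale)
        # MH step for theta with a normal random-walk proposal
        prop = th + rng.gauss(0.0, step)
        if prop > 0.0:
            logr = theta_logpost(prop, lam, x, c, d) - theta_logpost(th, lam, x, c, d)
            if math.log(max(rng.random(), 1e-300)) < logr:
                th = prop
        if t >= burn:
            lams.append(lam)
            ths.append(th)
    # Bayes estimates under SEL are the posterior means
    return sum(lams) / len(lams), sum(ths) / len(ths)
```

Credible intervals would be read off as empirical quantiles of the stored draws in the same way.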

Numerical Computations
To illustrate the estimation results obtained in the preceding sections, we consider the first seven lower record values simulated from the two-parameter generalized logistic distribution (1) with specified shape and scale parameters. Figure 2 and Figure 3 plot the MCMC output for λ and θ, using 10,000 MCMC samples (dashed lines represent the means and red lines represent the lower and upper bounds of the 95% probability intervals). Histograms of the λ and θ values generated by the MCMC method are given in Figure 4 and Figure 5. These computations used 1000 bootstrap samples and 10,000 MCMC samples, with the first 1000 values discarded as "burn-in".

Simulation Study and Comparisons
In this section, we conduct numerical computations to compare the performance of the different estimators proposed in the previous sections. Monte Carlo simulations were performed using 1000 lower record samples from the two-parameter generalized logistic distribution (GLD) for each setting, and the mean squared error (MSE) is used to compare the estimators. The samples were generated using (λ, θ) = (2, 1.2) and (λ, θ) = (3.22, 1.5), with different sample sizes n. For computing the Bayes estimators, we first used non-informative gamma priors for both parameters, obtained by setting all hyperparameters to 0; we call this prior 0: a = b = c = d = 0. Note that as the hyperparameters go to 0, the prior density becomes inversely proportional to its argument and also becomes improper. This density is commonly used as an improper prior for parameters supported on (0, ∞), and in this limit it is no longer specifically tied to the gamma density. In addition to prior 0, we also used an informative prior, prior 1: a = 1, b = 2, c = 2, d = 1. The squared error loss (SEL) function was used to compute the Bayes estimates. The Bayes estimates and 95% credible intervals are based on 10,000 MCMC samples, with the first 1000 values discarded as "burn-in". We report the average Bayes estimates, mean squared errors (MSEs) and coverage percentages. For comparison purposes, we also computed the MLEs and 95% confidence intervals based on the observed Fisher information matrix. Finally, we used the same 1000 replicates to compute the different estimates. Tables 2-5 report the results based on the MLEs and the Bayes estimators (using the MCMC algorithm) for both λ and θ.
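A scaled-down version of such a study can be sketched as follows. This is a hypothetical illustration with far fewer replicates than the 1000 used in the paper, a grid-search MLE standing in for the fixed-point scheme, and only the MLE branch computed (the Bayes branch would reuse the MCMC sampler analogously); the replicate count and seed are arbitrary choices.

```python
import math
import random

def gld_inv(u, lam, theta):
    # inverse cdf of GLD(lam, theta)
    return -math.log(u ** (-1.0 / lam) - 1.0) / theta

def lower_records(lam, theta, n, rng):
    # First n lower records via the Markov property of records
    recs, bound = [], 1.0
    for _ in range(n):
        u = max(rng.uniform(0.0, bound), 1e-300)
        recs.append(gld_inv(u, lam, theta))
        bound = u
    return recs

def lam_hat(theta, x):
    return len(x) / math.log1p(math.exp(-theta * x[-1]))

def loglik(lam, theta, x):
    n = len(x)
    s = n * math.log(lam) + n * math.log(theta) - theta * sum(x)
    s -= sum(math.log1p(math.exp(-theta * xi)) for xi in x)
    return s - lam * math.log1p(math.exp(-theta * x[-1]))

def fit(x):
    grid = [0.05 + 0.02 * k for k in range(400)]
    th = max(grid, key=lambda t: loglik(lam_hat(t, x), t, x))
    return lam_hat(th, x), th

def mse_study(lam, theta, n=7, reps=60, seed=4):
    # Monte Carlo MSE of the MLEs over `reps` simulated record samples
    rng = random.Random(seed)
    se_l = se_t = 0.0
    for _ in range(reps):
        x = lower_records(lam, theta, n, rng)
        lh, th = fit(x)
        se_l += (lh - lam) ** 2
        se_t += (th - theta) ** 2
    return se_l / reps, se_t / reps
```

Running `mse_study` at the two parameter settings of the paper would reproduce the structure, though not the precision, of Tables 2-5.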

Conclusions
The main aim of this paper is to estimate the parameters of the generalized logistic distribution using bootstrap and MCMC algorithms, and to compare the methods through a numerical example and a simulation study. Several authors have studied classical and Bayesian methods for this model: for example, Amin [26] discussed Bayesian and non-Bayesian estimation for the Type I generalized logistic distribution based on lower record values, and Aly and Bleed [27] presented Bayesian estimation for the generalized logistic distribution under Type-II censored accelerated life testing. In this paper, Bayesian estimates of the parameters of the generalized logistic distribution (GLD) are computed based on lower record values using the MCMC method. We assume gamma priors on the unknown parameters and provide the Bayes estimators under the squared error loss (SEL) function. The Metropolis-Hastings (MH) algorithm is used for computing the Bayes estimates. It has been noticed that:
1) From the results obtained in Tables 2-5, the performance of the Bayes estimators with respect to the non-informative prior (prior 0) is quite close to that of the MLEs, as expected. Thus, if we have no prior information on the unknown parameters, it is preferable to use the MLEs rather than the Bayes estimators, because the Bayes estimators are computationally more expensive.
2) Tables 2-5 report results based on both the non-informative prior (prior 0) and the informative prior (prior 1). The results obtained with the MH algorithm are quite similar in nature across priors, but comparing the Bayes estimators based on the informative prior clearly shows that those based on prior 1 perform better than the MLEs in terms of MSE.
3) From Tables 2-5, it is clear that the Bayes estimators based on the informative prior perform much better than those based on the non-informative prior and the MLEs in terms of MSE.


Table 1 .
Results obtained by the MLE, bootstrap and MCMC methods for λ and θ.

Table 2.
Average values of the different estimators and the corresponding MSEs. Note: the first figure represents the average estimates, with the corresponding MSEs reported below it in parentheses.

Table 3.
Average confidence lengths of the interval estimates of the parameters and the corresponding coverage percentages. Note: the first figure represents the average confidence lengths, with the corresponding coverage percentages reported below it in parentheses.

Table 4.
Average values of the different estimators and the corresponding MSEs. Note: the first figure represents the average estimates, with the corresponding MSEs reported below it in parentheses.

Table 5.
Average confidence lengths of the interval estimates of the parameters and the corresponding coverage percentages. Note: the first figure represents the average confidence lengths, with the corresponding coverage percentages reported below it in parentheses.