
**RETRACTED: Statistical Analysis of a Competing Risks Model with Weibull Sub-Distributions**

**Short Retraction Notice**

The following article has been retracted at the authors’ request, for strong personal reasons concerning the indexing databases of Applied Mathematics.

This article has been retracted to set the academic record straight. In making this decision, the Editorial Board followed COPE's Retraction Guidelines. The journal's aim is to promote the circulation of scientific research by offering an ideal research publication platform, with due consideration of internationally accepted standards on publication ethics. The Editorial Board extends its sincere apologies for any inconvenience this retraction may have caused.

Editor guiding this retraction: Prof. Chris Cannings (Editor-in-Chief of Applied Mathematics).

The full retraction notice in PDF precedes the original paper, which is marked "RETRACTED".


1. Introduction

In many applications in reliability analysis, medical research and biology, there may be more than one cause of failure competing for the event. The event can be either death or recovery from a certain disease (risk); see [1] [2] [3] [4] . In the literature, the model applied to deal with such situations is called the competing risks model.

As examples of situations that require a competing risks model: 1) a patient may die from one of several causes, such as cancer, heart disease, or complications of diabetes; and 2) in an AIDS study, if the endpoint is the time to opportunistic infection (relapse), death and infection may be viewed as competing risks, with death and loss to follow-up treated as censoring; see [2] [5] .

The available data in competing risks models consist of the times to event and an indicator of either the cause of failure or censoring. One of the interesting points in competing risks models is to report the distribution of the time to failure from one risk in the presence of all other risks, which is called the relative risk.

Competing risks models have been studied by several authors in both parametric and non-parametric setups, [6] [7] . In the parametric setup, it is assumed that the risks follow a variety of lifetime distributions; examples include the gamma, Weibull, exponential and generalized exponential distributions, [4] [8] [9] [10] [11] [12] . The non-parametric setup does not assume a specific lifetime distribution. The analysis of the non-parametric version of competing risks models has been investigated by several authors, such as [13] [14] [15] .

The Weibull distribution has been used frequently in reliability analysis; see [16] and [17] . The Weibull distribution is flexible (its hazard function can be constant, decreasing or increasing) and can therefore be used quite effectively to analyze a variety of real data sets. Various properties of the Weibull distribution are discussed in [1] [12] . [18] discussed statistical inference for a competing risks model assuming exponential causes of failure and Weibull causes of failure with a common shape parameter, using the maximum likelihood method when the cause of failure is either known or unknown.

In this paper, we discuss statistical inference for a competing risks model in which the risks follow Weibull distributions with unknown scale and shape parameters and the data may be censored. We use the maximum likelihood and Bayes methods. In the Bayesian approach, we assume that the unknown parameters are independent and follow gamma prior distributions. The rest of the paper is organized as follows. Section 2 describes the competing risks model assumptions. In Section 3, we derive the likelihood function, obtain the maximum likelihood estimates and discuss two-sided confidence intervals for the model parameters. The Bayesian procedure is discussed in Section 4. The risk due to a specific cause in the presence of all other causes, for the underlying model, is discussed in Section 5. A complete analysis of a real data set is provided in Section 6. Finally, Section 7 concludes the paper.

2. Model Assumptions

Assume that there is a set of N identical and independent objects (or systems) on life test. Each object is subject to one of $k,k\ge 2$ , independent causes of failure. Every object is tested until it fails or reaches a censoring time. In the failure case, the object fails due to only one cause. The whole test terminates after all objects fail, all reach their censoring times, or a combination of the two. When an object fails, there are two observable quantities: the object’s lifetime, say T, and the cause of failure, say $\delta $ , where $\delta \in \left\{1,2,\cdots ,k\right\}$ . In the censored case, we observe only the censoring time. To simplify the notation, we set $\delta =0$ for the censoring case. Furthermore, we need the following assumptions throughout the paper:

1) ${T}_{i}$ is the lifetime of object $i$ , which has cumulative distribution function $F(.)$ , survival function $\stackrel{\xaf}{F}(.)$ , and probability density function $f(.)$ , $i=1,2,\cdots ,N$ .

2) ${T}_{ji}$ is the random time at which cause $j\text{\hspace{0.17em}}\left(j=1,2,\cdots ,k\right)$ may hit object $i$ , $i=1,2,\cdots ,N$ . Since the N objects on the life test are identical, ${T}_{ji}$ , for $i=1,2,\cdots ,N$ , are identically distributed random variables. Furthermore, we assume that ${T}_{ji}$ , for $i=1,2,\cdots ,N$ , has cumulative distribution function ${F}_{j}(.)$ (also called the sub-distribution function of cause j), survival function (or sub-survival function) ${\stackrel{\xaf}{F}}_{j}(.)$ , probability density function ${f}_{j}(.)$ , and hazard rate function ${h}_{j}(.)$ , for $j=1,2,\cdots ,k$ .

Obviously, ${T}_{i}=\underset{1\le j\le k}{\mathrm{min}}\left\{{T}_{ji}\right\}$ . That is, when ${\delta}_{i}=j$ , where $j\in \left\{1,2,\cdots ,k\right\}$ , ${T}_{i}={T}_{ji}$ , and therefore $F\left(t\right)=P\left({T}_{i}\le t\right)=P\left({T}_{ji}\le t\right)={F}_{j}\left(t\right)$ . By the independence of the causes, $\stackrel{\xaf}{F}\left(t\right)=P\left({T}_{i}>t\right)=P\left(\underset{1\le j\le k}{\mathrm{min}}\left\{{T}_{ji}\right\}>t\right)={\displaystyle \underset{j=1}{\overset{k}{\prod}}P}\left({T}_{ji}>t\right)={\displaystyle \underset{j=1}{\overset{k}{\prod}}{\stackrel{\xaf}{F}}_{j}}\left(t\right).$

3) More specifically, we assume that ${T}_{ji}$ follow Weibull distributions with unknown parameters ${\alpha}_{j}$ and ${\beta}_{j}$ , say $W\left({\alpha}_{j},{\beta}_{j}\right)$ , for $i=1,2,\cdots ,N$ and $j=1,2,\cdots ,k$ . That is, ${T}_{ji}$ has the cumulative distribution function

${F}_{j}\left(t\right)=1-\mathrm{exp}\left(-{\alpha}_{j}{t}^{{\beta}_{j}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{\alpha}_{j},{\beta}_{j}>0,t\ge 0,$ (1)

the probability density function is

${f}_{j}\left(t\right)={\alpha}_{j}{\beta}_{j}{t}^{{\beta}_{j}-1}\mathrm{exp}\left(-{\alpha}_{j}{t}^{{\beta}_{j}}\right),$ (2)

the survival function is

${\stackrel{\xaf}{F}}_{j}\left(t\right)=\text{Pr}\left({T}_{ji}>t\right)=\mathrm{exp}\left(-{\alpha}_{j}{t}^{{\beta}_{j}}\right),$ (3)

and the hazard rate function is

${h}_{j}\left(t\right)={\alpha}_{j}{\beta}_{j}{t}^{{\beta}_{j}-1},$ (4)

where ${\beta}_{j}$ is the shape parameter and ${\alpha}_{j}$ is the scale parameter.
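The model above can be simulated directly, which is useful for checking estimators later. The sketch below is not from the paper; the parameter values and the fixed censoring time are illustrative assumptions. It draws latent Weibull times for each cause and records the minimum together with the cause indicator:

```python
import numpy as np

def simulate_competing_risks(N, alphas, betas, censor_time, rng=None):
    """Simulate (T_i, delta_i) pairs under independent Weibull risks.

    Each risk j has CDF F_j(t) = 1 - exp(-alpha_j * t**beta_j); the
    observed time is the minimum latent time, truncated at censor_time
    (delta = 0 marks censoring, delta = j marks failure from cause j).
    """
    rng = np.random.default_rng(rng)
    alphas, betas = np.asarray(alphas, float), np.asarray(betas, float)
    k = len(alphas)
    # Inverse-CDF sampling: if U ~ Uniform(0,1), then
    # T = (-ln(U)/alpha)**(1/beta) follows W(alpha, beta).
    u = rng.uniform(size=(N, k))
    latent = (-np.log(u) / alphas) ** (1.0 / betas)
    t = latent.min(axis=1)
    delta = latent.argmin(axis=1) + 1          # cause in {1, ..., k}
    censored = t > censor_time
    t[censored] = censor_time
    delta[censored] = 0                        # delta = 0 for censoring
    return t, delta

# Illustration with k = 2 risks (parameter values are arbitrary).
times, causes = simulate_competing_risks(
    N=1000, alphas=[0.5, 0.2], betas=[1.5, 1.0], censor_time=3.0, rng=42)
```

Here a single fixed censoring time is assumed for simplicity; the model in the text allows each object its own censoring time.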

3. The Likelihood Function and Maximum Likelihood Estimates

In this section, we derive the likelihood function of the model parameters using the data described above. The data can be expressed as $\left({T}_{1},{\delta}_{1}\right),\left({T}_{2},{\delta}_{2}\right),\cdots ,\left({T}_{N},{\delta}_{N}\right)$ , where

${T}_{i}=\{\begin{array}{ll}{T}_{ji}\hfill & \text{if}\text{\hspace{0.17em}}{\delta}_{i}=j\in \left\{1,2,\cdots ,k\right\},\hfill \\ {T}_{i}\hfill & \text{if}\text{\hspace{0.17em}}{\delta}_{i}=0\text{\hspace{0.17em}}\left(\text{here}\text{\hspace{0.17em}}{T}_{i}\text{\hspace{0.17em}}\text{is}\text{\hspace{0.17em}}\text{a}\text{\hspace{0.17em}}\text{censoring}\text{\hspace{0.17em}}\text{time}\right).\hfill \end{array}$

3.1. The Likelihood Function

Using data described above, the likelihood function is

$L\left(\theta |.\right)={\displaystyle \underset{i=1}{\overset{N}{\prod}}\left[{\displaystyle \underset{j=1}{\overset{k}{\prod}}{\left[{f}_{j}\left({t}_{i}\right){\displaystyle \underset{l\ne j}{\prod}}{\stackrel{\xaf}{F}}_{l}\left({t}_{i}\right)\right]}^{I\left({\delta}_{i}=j\right)}}{\left[\stackrel{\xaf}{F}\left({t}_{i}\right)\right]}^{I\left({\delta}_{i}=0\right)}\right]},$ (5)

where $\theta $ is the vector of 2k unknown parameters, $\theta =\left({\alpha}_{1},{\alpha}_{2},\cdots ,{\alpha}_{k},{\beta}_{1},{\beta}_{2},\cdots ,{\beta}_{k}\right)$ . Based on the model assumptions and using the well-known relations between the reliability measures “the survival, hazard rate and the probability density functions”, we can write the likelihood function as

$L\left(\theta |.\right)=\left[{\displaystyle \underset{i=1}{\overset{N}{\prod}}{\displaystyle \underset{j=1}{\overset{k}{\prod}}{\left[{h}_{j}\left({t}_{i}\right)\right]}^{I\left({\delta}_{i}=j\right)}}}\right]\left[{\displaystyle \underset{i=1}{\overset{N}{\prod}}{\displaystyle \underset{j=1}{\overset{k}{\prod}}{\stackrel{\xaf}{F}}_{j}\left({t}_{i}\right)}}\right],$ (6)

by taking the natural logarithm of both sides of Equation (6), we get the log-likelihood function in the general case as

$\mathcal{L}={\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}\left[I\left({\delta}_{i}=j\right)\left(\mathrm{ln}{h}_{j}\left({t}_{i}\right)\right)+\mathrm{ln}{\stackrel{\xaf}{F}}_{j}\left({t}_{i}\right)\right]}},$ (7)

where I(A) is an indicator function, defined as I(A) = 1 if A is true and I(A) = 0 otherwise.

Substituting from (3) and (4) into (6) and (7), we get the likelihood function and the log-likelihood function for the competing risks model, when the causes follow Weibull distributions with unknown parameters, as

$L\left(\theta |.\right)=\left\{{\displaystyle \underset{i=1}{\overset{N}{\prod}}{\displaystyle \underset{j=1}{\overset{k}{\prod}}{\left[{\alpha}_{j}{\beta}_{j}{t}_{i}^{{\beta}_{j}-1}\right]}^{I\left({\delta}_{i}=j\right)}}}\right\}{\text{e}}^{-{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\alpha}_{j}}\left({\displaystyle \underset{i=1}{\overset{N}{\sum}}{t}_{i}^{{\beta}_{j}}}\right)},$ (8)

and

$\mathcal{L}={\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}\left[I\left({\delta}_{i}=j\right)\left[\mathrm{ln}{\alpha}_{j}+\mathrm{ln}{\beta}_{j}+\left({\beta}_{j}-1\right)\mathrm{ln}{t}_{i}\right]-{\alpha}_{j}{t}_{i}^{{\beta}_{j}}\right]}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}.$ (9)

As is well known, the maximum likelihood point estimate of $\theta $ is the set of values of its elements that maximizes the likelihood (or log-likelihood) function.

The first partial derivatives of $\mathcal{L}$ with respect to ${\alpha}_{l}$ and ${\beta}_{l},l=1,2,\cdots ,k$ are

$\frac{\partial \mathcal{L}}{\partial {\alpha}_{l}}={\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\delta}_{jl}}}\left\{I\left({\delta}_{i}=j\right)\frac{1}{{\alpha}_{j}}-{t}_{i}^{{\beta}_{j}}\right\},$ (10)

$\frac{\partial \mathcal{L}}{\partial {\beta}_{l}}={\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\delta}_{jl}}}\left\{I\left({\delta}_{i}=j\right)\left[\frac{1}{{\beta}_{j}}+\mathrm{ln}{t}_{i}\right]-{\alpha}_{j}{t}_{i}^{{\beta}_{j}}\mathrm{ln}{t}_{i}\right\},$ (11)

where ${\delta}_{jl},\text{\hspace{0.17em}}l,j=1,2,\cdots ,k$ is the Kronecker delta. To get the maximum likelihood point estimates of the parameters, we set $\frac{\partial \mathcal{L}}{\partial {\alpha}_{l}}=0$ and $\frac{\partial \mathcal{L}}{\partial {\beta}_{l}}=0$ , $l=1,2,\cdots ,k$ , then solve the resulting system of 2k equations with respect to ${\alpha}_{j}$ and ${\beta}_{j}$ , $j=1,2,\cdots ,k$ . The system has no analytical solution, so a numerical technique must be used to obtain the maximum likelihood estimates of the parameters.
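As a concrete illustration of this numerical step, the sketch below evaluates the log-likelihood in Equation (9) and maximizes it with a general-purpose optimizer. This is an assumption-laden example, not the authors' code (the paper used R); the data are simulated under arbitrary parameter values chosen for the illustration:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta, k):
    """Negative of the log-likelihood in Equation (9).

    params = (alpha_1..alpha_k, beta_1..beta_k); delta_i = 0 means censored.
    """
    if np.any(params <= 0):
        return np.inf                       # parameters must be positive
    alphas, betas = params[:k], params[k:]
    ll = 0.0
    for j in range(1, k + 1):
        a, b = alphas[j - 1], betas[j - 1]
        fail = delta == j
        # Terms with the indicator I(delta_i = j):
        ll += fail.sum() * (np.log(a) + np.log(b))
        ll += (b - 1.0) * np.log(t[fail]).sum()
        # The -alpha_j * t_i^beta_j term involves every observation:
        ll -= a * np.sum(t ** b)
    return -ll

def fit_mle(t, delta, k, start=None):
    """Maximize (9) numerically (no closed form exists for this system)."""
    x0 = np.ones(2 * k) if start is None else np.asarray(start, float)
    res = minimize(neg_loglik, x0, args=(t, delta, k), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    return res.x[:k], res.x[k:]

# Hypothetical data: two Weibull risks, alpha = (1.0, 0.5), beta = (2.0, 1.0).
rng = np.random.default_rng(1)
lat = np.stack([(-np.log(rng.uniform(size=2000)) / 1.0) ** (1 / 2.0),
                (-np.log(rng.uniform(size=2000)) / 0.5) ** (1 / 1.0)], axis=1)
t = lat.min(axis=1)
delta = lat.argmin(axis=1) + 1              # no censoring in this illustration
alpha_hat, beta_hat = fit_mle(t, delta, k=2)
```

A derivative-free method is used here only to keep the sketch short; gradient-based optimizers fed with Equations (10)-(11) would typically converge faster.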

In order to obtain the information matrix, we need the second partial derivatives of $\mathcal{L}$ with respect to ${\alpha}_{l}$ and ${\beta}_{l}$ , $l=1,2,\cdots ,k$ , which are given below

$\frac{{\partial}^{2}\mathcal{L}}{\partial {\alpha}_{l}\partial {\alpha}_{m}}=-{\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}\left[I\left({\delta}_{i}=j\right)\frac{{\delta}_{jl}{\delta}_{jm}}{{\alpha}_{j}^{2}}\right]}}\text{\hspace{0.05em}}\text{\hspace{0.05em}},$ (12)

$\frac{{\partial}^{2}\mathcal{L}}{\partial {\alpha}_{l}\partial {\beta}_{m}}=-{\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\delta}_{jl}}}{\delta}_{jm}\left[{t}_{i}^{{\beta}_{j}}\mathrm{ln}{t}_{i}\right],$ (13)

$\frac{{\partial}^{2}\mathcal{L}}{\partial {\beta}_{l}\partial {\beta}_{m}}=-{\displaystyle \underset{i=1}{\overset{N}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\delta}_{jl}}}{\delta}_{jm}\left[\frac{I\left({\delta}_{i}=j\right)}{{\beta}_{j}^{2}}+{\alpha}_{j}{t}_{i}^{{\beta}_{j}}{\left(\mathrm{ln}{t}_{i}\right)}^{2}\right],$ (14)

where $l,m=1,2,\cdots ,k$ .

3.2. Confidence Intervals

The maximum likelihood estimators for the parameters cannot be obtained in analytic forms. Therefore, their actual distributions cannot be derived. However, we can use the asymptotic distribution of the maximum likelihood estimator to derive confidence intervals for $\theta =\left({\alpha}_{1},\cdots ,{\alpha}_{k},{\beta}_{1},\cdots ,{\beta}_{k}\right)$ . It is well known that

$\stackrel{^}{\theta}\to {N}_{2k}\left(\theta ,{I}^{-1}\left(\stackrel{^}{\theta}\right)\right).$ (15)

where ${N}_{2k}$ denotes the 2k-dimensional multivariate normal distribution and ${I}^{-1}\left(\stackrel{^}{\theta}\right)$ is the variance-covariance matrix, obtained as the inverse of the information matrix of $\stackrel{^}{\theta}$ . The elements of the information matrix are the negatives of the second partial derivatives of $\mathcal{L}$ evaluated at the maximum likelihood point estimate of the unknown parameters. That is, $I\left(\theta \right)=\left({I}_{ij}\left(\theta \right)\right),\text{\hspace{0.17em}}i,j=1,2,\cdots ,2k$ , where

${I}_{ij}\left(\theta \right)=-E\left[\frac{{\partial}^{2}\mathcal{L}\left(\theta \right)}{\partial {\theta}_{i}\partial {\theta}_{j}}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}i,j=1,2,\cdots ,2k,$ (16)

and $\theta =\left({\alpha}_{1},\cdots ,{\alpha}_{k},{\beta}_{1},\cdots ,{\beta}_{k}\right)$ .
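A minimal sketch of the resulting Wald-type intervals follows, assuming the block-diagonal structure implied by (12)-(14) (each cause contributes its own 2×2 block, so the blocks can be inverted separately) and truncating negative lower limits at zero as is done in Section 6. The helper names are ours, not the paper's:

```python
import numpy as np
from scipy.stats import norm

def wald_intervals(t, delta, alpha_hat, beta_hat, level=0.95):
    """Asymptotic CIs from the observed information, per Equations (12)-(14).

    Returns, for each cause j, a pair of (lower, upper) intervals for
    (alpha_j, beta_j).  Lower limits are truncated at zero because the
    parameters are positive.
    """
    z = norm.ppf(0.5 + level / 2.0)
    logt = np.log(t)
    out = []
    for j, (a, b) in enumerate(zip(alpha_hat, beta_hat), start=1):
        n_j = np.sum(delta == j)                   # failures from cause j
        tb = t ** b
        # Observed information block: negatives of (12)-(14) for l = m = j.
        info = np.array([
            [n_j / a**2,         np.sum(tb * logt)],
            [np.sum(tb * logt),  n_j / b**2 + a * np.sum(tb * logt**2)],
        ])
        cov = np.linalg.inv(info)                  # 2x2 variance-covariance
        se = np.sqrt(np.diag(cov))
        out.append(((max(a - z * se[0], 0.0), a + z * se[0]),
                    (max(b - z * se[1], 0.0), b + z * se[1])))
    return out
```

The function expects the maximum likelihood estimates as inputs; it plugs them into the second-derivative formulas rather than computing the expectation in (16), i.e. it uses the observed rather than the expected information.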

4. Bayesian Procedure

In this section, we use the Bayes approach to estimate the unknown model parameters ${\alpha}_{j}$ and ${\beta}_{j},j=1,2,\cdots ,k$ . We assume that all parameters are independent and follow gamma prior distributions with different but known hyperparameters. That is, we assume that ${\alpha}_{j}$ follows a gamma prior distribution with shape parameter ${a}_{j1}$ and scale parameter ${a}_{j2}$ , and ${\beta}_{j}$ follows a gamma prior distribution with shape parameter ${b}_{j1}$ and scale parameter ${b}_{j2}$ , for $j=1,2,\cdots ,k$ . Therefore, the joint prior density of $\theta =\left({\alpha}_{1},\cdots ,{\alpha}_{k},{\beta}_{1},\cdots ,{\beta}_{k}\right)$ , up to a constant, is

$g\left(\theta \right)\propto {\displaystyle \underset{j=1}{\overset{k}{\prod}}{\alpha}_{j}^{{a}_{j1}-1}}\mathrm{exp}\left(-{a}_{j2}{\alpha}_{j}\right){\beta}_{j}^{{b}_{j1}-1}\mathrm{exp}\left(-{b}_{j2}{\beta}_{j}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\alpha}_{j},{\beta}_{j}>0.$ (17)

Combining the likelihood function (8) and the joint prior density (17), and using Bayes’ theorem, we get the joint posterior density function of $\theta $ , up to a constant, as

$g\left(\theta |.\right)\propto {\displaystyle \underset{j=1}{\overset{k}{\prod}}{\alpha}_{j}^{{n}_{j}+{a}_{j1}-1}}{\beta}_{j}^{{n}_{j}+{b}_{j1}-1}\mathrm{exp}\left(-{b}_{j2}{\beta}_{j}\right)\left[{\displaystyle \underset{i=1}{\overset{N}{\prod}}{t}_{i}^{\left({\beta}_{j}-1\right)I\left({\delta}_{i}=j\right)}}\right]\mathrm{exp}\left(-{\alpha}_{j}\left({a}_{j2}+{\displaystyle \underset{i=1}{\overset{N}{\sum}}{t}_{i}^{{\beta}_{j}}}\right)\right),$ (18)

where ${n}_{j}={\displaystyle \underset{i=1}{\overset{N}{\sum}}I}\left({\delta}_{i}=j\right),\text{\hspace{0.17em}}j=1,2,\cdots ,k$ .

Under the quadratic loss function, the Bayes estimate of any function of the vector of unknown parameters $\theta $ , say $v\left(\theta \right),$ is the posterior mean of that function. That is,

$\stackrel{^}{v}={E}_{\theta |.}\left(v\left(\theta \right)\right)={\displaystyle {\int}_{0}^{\infty}v}\left(\theta \right)g\left(\theta |.\right)\text{d}\theta .$ (19)

The integral in Equation (19), as well as the normalizing constant in (18), has no analytical solution. Therefore, numerical approaches must be used for the Bayesian analysis of the underlying model. Among several approaches, we use the Markov chain Monte Carlo (MCMC) simulation technique. The MCMC algorithm can be used to obtain random draws from the posterior distribution with density given in (18) without calculating the normalizing constant. We can then use these random draws for any analysis we wish about the model parameters and model characteristics.

Markov Chain Monte Carlo Method

One of the most successful methods in modern Bayesian statistics is the Markov chain Monte Carlo (MCMC) technique. MCMC is an algorithmic approach to summarizing the posterior distribution without calculating the normalizing constant, and MCMC techniques have become among the main computational tools in Bayesian statistical inference [19] . The Metropolis-Hastings sampler is one of the main MCMC algorithms. One of the key ideas in MCMC is to find a suitable distribution, called the “proposal”, that satisfies two conditions: 1) it is easy to simulate from, and 2) it mimics the posterior distribution of interest. Once such a proposal is determined, we obtain random draws from it and apply an acceptance-rejection rule to get random draws from the target posterior distribution.

The following steps describe the Metropolis-Hastings algorithm for simulating random draws from the posterior distribution $g\left(\theta |.\right)$ :

1) Set starting point of the chain, say ${\theta}^{\left(0\right)}$ .

2) Set the number of draws, say M.

3) For $i=1,\cdots ,M$ repeat the following steps:

a) Set $\theta ={\theta}^{\left(i-1\right)}$ .

b) Generate a candidate ${\theta}^{*}$ from a proposal distribution $p\left({\theta}^{*}|\theta \right)$ .

c) Calculate the acceptance probability $\hslash $ as $\hslash =\mathrm{min}\left\{1,R\right\}$ , where

$R=\frac{g\left({\theta}^{*}|.\right)p\left(\theta |{\theta}^{*}\right)}{g\left(\theta |.\right)p\left({\theta}^{*}|\theta \right)}$ .

d) Set ${\theta}^{\left(i\right)}={\theta}^{*}$ with probability $\hslash $ or otherwise set ${\theta}^{\left(i\right)}=\theta $ .

Under some regularity conditions on the proposal density $p\left({\theta}^{*}|\theta \right)$ , the sequence of the simulated draws ${\left\{{\theta}^{\left(i\right)}\right\}}_{i=1}^{M}$ will converge to random draws that follow the posterior density $g\left(\theta |.\right)$ .
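The steps above can be sketched as follows. This is a generic random-walk implementation with a symmetric Gaussian proposal (the paper itself uses a multivariate t proposal, Section 6), so the ratio R reduces to $g({\theta}^{*}|.)/g(\theta |.)$; the toy target density is an assumption used only to make the example self-contained:

```python
import numpy as np

def metropolis_hastings(log_target, theta0, n_draws, step, rng=None):
    """Random-walk Metropolis-Hastings (steps 1-3 above).

    With a symmetric Gaussian proposal, p(theta*|theta) = p(theta|theta*),
    so R = g(theta*|.)/g(theta|.); we work on the log scale for stability.
    """
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, float))
    lp = log_target(theta)
    draws = np.empty((n_draws, theta.size))
    for i in range(n_draws):
        cand = theta + step * rng.standard_normal(theta.size)  # step (b)
        lp_cand = log_target(cand)
        # Step (c)-(d): accept with probability min(1, R);
        # on the log scale, U < R  <=>  log U < log R.
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        draws[i] = theta
    return draws

# Toy target: an unnormalized Gamma(shape=3, rate=2) log-density, standing
# in for the posterior g(theta|.), which is also known only up to its
# normalizing constant.
def log_gamma_density(x):
    return -np.inf if x[0] <= 0 else 2.0 * np.log(x[0]) - 2.0 * x[0]

draws = metropolis_hastings(log_gamma_density, theta0=1.0,
                            n_draws=20000, step=1.0, rng=7)
posterior_mean = draws[5000:].mean()   # discard early draws as burn-in
```

For the Gamma(3, 2) target the true mean is 1.5, which the post-burn-in average of the chain should approximate.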

5. The Risks

One of the important characteristics in competing risks models is the relative risk of one cause in the presence of all other risks. In this section, we discuss how to estimate the failure probability distribution for each cause of failure in the presence of all others, and the risk due to a specific cause of failure. The probability of failure due to cause j at time t in the presence of all other causes is, see [17] [20] ,

${F}_{{C}_{j}}\left(t\right)={\displaystyle {\int}_{0}^{t}{h}_{j}\left(u\right)}{\displaystyle \underset{l=1}{\overset{k}{\prod}}{\stackrel{\xaf}{F}}_{l}\left(u\right)\text{d}u},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,k.$ (20)

The risk due to cause $j=1,2,\cdots ,k$ , say ${\pi}_{j}$ , is

${\pi}_{j}=\underset{t\to \infty}{\text{lim}}{F}_{{C}_{j}}\left(t\right)={\displaystyle {\int}_{0}^{\infty}{h}_{j}\left(t\right)}{\displaystyle \underset{l=1}{\overset{k}{\prod}}{\stackrel{\xaf}{F}}_{l}\left(t\right)\text{d}t}.$ (21)

For the Weibull competing risks model discussed here, ${\pi}_{j}$ can be obtained by solving the following integral

${\pi}_{j}={\displaystyle {\int}_{0}^{\infty}{\alpha}_{j}{\beta}_{j}{t}^{{\beta}_{j}-1}{\text{e}}^{-{\displaystyle \underset{l=1}{\overset{k}{\sum}}{\alpha}_{l}{t}^{{\beta}_{l}}}}}\text{d}t,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,k.$ (22)

The above integral has no analytic solution in general. As a special case, when all the shape parameters are equal, ${\beta}_{1}={\beta}_{2}=\cdots ={\beta}_{k}$ , we get

${\pi}_{j}=\frac{{\alpha}_{j}}{{\displaystyle \underset{l=1}{\overset{k}{\sum}}{\alpha}_{l}}},\text{\hspace{0.17em}}j=1,2,\cdots ,k.$ (23)

Using the invariance property, the maximum likelihood estimate of ${\pi}_{j}$ can be obtained by numerical integration, with the integrand in (22) evaluated at the maximum likelihood estimates of the parameters. For the Bayesian analysis, we use the random draws obtained from the joint posterior distribution, together with the integral above, to get random draws from the posterior distribution of ${\pi}_{j}$ , and then use them for any Bayesian analysis we wish on ${\pi}_{j},$ $j=1,2,\cdots ,k$ .
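A sketch of this numerical evaluation follows (parameter values are illustrative assumptions): the integral in (22) is computed by quadrature, checked against the closed form (23) in the equal-shape case, and the risks are verified to sum to one:

```python
import numpy as np
from scipy.integrate import quad

def risk(j, alphas, betas):
    """Numerically evaluate pi_j in Equation (22); j is 1-based."""
    alphas, betas = np.asarray(alphas, float), np.asarray(betas, float)
    a, b = alphas[j - 1], betas[j - 1]
    # Integrand: h_j(t) * prod_l Fbar_l(t) = a*b*t^(b-1) * exp(-sum_l a_l t^b_l)
    integrand = lambda t: a * b * t ** (b - 1.0) * np.exp(
        -np.sum(alphas * t ** betas))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Equal shape parameters: closed form (23) gives pi_1 = 0.5/(0.5+1.5) = 0.25.
pi1 = risk(1, alphas=[0.5, 1.5], betas=[2.0, 2.0])

# Unequal shapes: no closed form, but the risks must still sum to one,
# because the integrands add up to the density of the overall lifetime.
alphas, betas = [0.8, 0.3], [1.5, 1.2]
total = risk(1, alphas, betas) + risk(2, alphas, betas)
```

Evaluating `risk` at posterior draws of the parameters yields draws from the posterior of ${\pi}_{j}$, as described above.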

6. Application

In this section, we study an application to a real-life data set. The data set gives the times (in years) from HIV infection to AIDS, SI switch and death in 329 men who have sex with men (MSM). The data are from the period until combination anti-retroviral therapy became available (1996). For more background information on these data, see [21] [22] ; they were used as an example of competing risks analysis in [23] [24] . These data can be considered competing risks data with two risks. Some individuals left the study without an SI switch or death; these are treated as censored observations.

Here, we use the model discussed in this paper to analyze this data set. Table 1 shows the maximum likelihood and Bayes point estimates of the four model parameters. We used the R language for all calculations.

As shown in Table 1, the results from the two methods are completely different. Since we do not know the actual values of the unknown parameters, these point estimates alone are not sufficient to judge which method provides better estimates. One comparison tool is interval estimation, or the variance corresponding to each point estimate. This is why we calculated them, as given below.

Table 1. Estimates of the parameters ${\alpha}_{1},{\beta}_{1},{\alpha}_{2}$ and ${\beta}_{2}$ by the maximum likelihood and Bayesian methods.

Below is the inverse of the Fisher information matrix that approximates the variance-covariance matrix for the maximum likelihood estimates of the vector of unknown parameters $\theta $ :

${I}^{-1}=\left(\begin{array}{cccc}96.0940576& -0.44433333& 0.00000000& 0.00000000\\ -0.4443333& 0.002359471& 0.00000000& 0.00000000\\ 0.00000000& 0.00000000& 13.3737256& -0.183498758\\ 0.00000000& 0.00000000& -0.1834988& 0.003166101\end{array}\right)$

As we can see, the variances of ${\stackrel{^}{\alpha}}_{1}$ and ${\stackrel{^}{\alpha}}_{2}$ are very large compared to their values. This leads to negative lower bounds for the asymptotic confidence intervals of ${\alpha}_{1}$ and ${\alpha}_{2}$ . Since ${\alpha}_{1}$ and ${\alpha}_{2}$ cannot be negative, we truncate the lower limits at zero. This is one of the disadvantages of the maximum likelihood method.

Table 2 shows the asymptotic 95% confidence intervals of the four unknown parameters. Table 3 shows the estimated risks using both methods. The 95% credible intervals of the four unknown parameters are shown in Table 4, while Table 5 gives measures of central tendency of the four parameters based on their marginal posterior distributions. As Table 4 and Table 5 show, not only are the lower limits of the credible intervals of ${\alpha}_{1}$ and ${\alpha}_{2}$ larger than zero, but the minimum sampled values of ${\alpha}_{1}$ and ${\alpha}_{2}$ are larger than zero as well. This is one of the advantages of the Bayes method.

In the Bayesian analysis, we set all hyperparameters equal to 0.001, which reflects a non-informative prior. To apply MCMC, we used as proposal a multivariate t distribution with mode equal to the vector of maximum likelihood estimates, variance-covariance matrix equal to the inverse Fisher matrix, and four degrees of freedom. The number of draws, M, was 10,000. As a diagnostic test for the MCMC, we plotted the autocorrelations and the traces for each parameter, as shown in Figure 1 and Figure 2. The trace plots show a good mix of the sampled draws, and the autocorrelation plots show that the autocorrelation decreases rapidly with lag, which indicates that the draws become approximately independent over time and come from the actual posterior distribution. We used the random draws from the joint posterior distribution of $\theta $ to get random draws for the risks ${\pi}_{j},j=1,2$ from their marginal posterior distributions without obtaining those marginal distributions explicitly. We used those draws to calculate Bayes estimates of ${\pi}_{j}$ and to sketch their posterior density functions. Figure 3 shows the trace plots of the draws of ${\pi}_{j}$ and the corresponding marginal posterior density functions. Furthermore, we used the random draws of $\theta $ to get random draws for the sub-survivor and overall survivor functions, and used them to calculate Bayes estimates along with 95% credible intervals for those functions at different times, as shown in Figure 4.

Table 2. The asymptotic confidence intervals using the maximum likelihood method.

Table 3. Estimates of the two risks.

Table 4. 95% credible intervals for the four parameters.

Table 5. Measures of central tendency of the parameters ${\alpha}_{1},{\beta}_{1},{\alpha}_{2}$ and ${\beta}_{2}$ .
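For readers without access to the original draws, the autocorrelation diagnostic described above can be sketched as follows; the AR(1) chain is a stand-in for one parameter's MCMC trace (its autocorrelation decays like ${\rho}^{\text{lag}}$), since the paper's actual draws are not reproducible here:

```python
import numpy as np

def sample_autocorrelation(chain, max_lag):
    """Sample autocorrelation of a 1-D chain at lags 0..max_lag.

    A rapid decay toward zero, as described for Figure 1, indicates that
    the retained draws are approximately independent.
    """
    x = np.asarray(chain, float) - np.mean(chain)
    n = len(x)
    var = np.dot(x, x) / n
    return np.array([np.dot(x[:n - lag], x[lag:]) / (n * var)
                     for lag in range(max_lag + 1)])

# Synthetic AR(1) chain with rho = 0.6, standing in for an MCMC trace.
rng = np.random.default_rng(3)
chain = np.zeros(50000)
for i in range(1, chain.size):
    chain[i] = 0.6 * chain[i - 1] + rng.standard_normal()
acf = sample_autocorrelation(chain, max_lag=20)
```

In practice one would apply `sample_autocorrelation` to each parameter's post-burn-in draws and plot `acf` against lag, as in Figure 1.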

7. Conclusions

In this paper, we examined competing risks models with $k,k\ge 2$ , independent causes of failure in the presence of censored data. We derived the likelihood function of the model in general and used it to derive the likelihood function when the risks follow Weibull distributions with unknown shape and scale parameters.

Figure 1. The autocorrelation plots of the simulated draws using MCMC after discarding the first 50% of the draws.

Figure 2. The trace plots of the simulated draws after discarding the first 50% of the draws.

We discussed how to obtain the maximum likelihood and Bayes estimates of the model parameters, as well as of some important reliability measures such as the individual risks, the sub-survivors and the overall survivor. In the Bayesian analysis, we used gamma prior distributions with known hyperparameters for all unknown parameters and used MCMC to get random draws from the joint posterior distribution. We also discussed the asymptotic confidence intervals and credible intervals of all unknown parameters. The model was applied to a real data set and a detailed discussion was provided.

Figure 3. Trace plots and posterior density functions of ${\pi}_{j},\left(j=1,2\right)$ .

Figure 4. Bayes point and interval estimates of the sub-survivor functions and the overall survivor function.

Acknowledgements

The authors are grateful to the anonymous referees for a careful checking of the details and for helpful comments that improved this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] De Wreede, L.C., Fiocco, M. and Putter, H. (2010) The mstate Package for Estimation and Prediction in Non- and Semi-Parametric Multi-State and Competing Risks Models. Computer Methods and Programs in Biomedicine, 99, 261-274. https://doi.org/10.1016/j.cmpb.2010.01.001

[2] Brooks, S., Gelman, A., Jones, G.L. and Meng, X.L. (2011) Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC Handbooks of Modern Statistical Methods, Chapman & Hall/CRC, Boca Raton.

[3] Schmoor, C., Schumacher, M., Finke, J. and Beyersmann, J. (2013) Competing Risks and Multistate Models. Clinical Cancer Research, 19, 12-21. https://doi.org/10.1158/1078-0432.CCR-12-1619

[4] Geskus, R.B. (2015) Data Analysis with Competing Risks and Intermediate States. CRC Press, Boca Raton.

[5] Hinchliffe, S.R. (2013) Advancing and Appraising Competing Risks Methodology for Better Communication of Survival Statistics. PhD Thesis, University of Leicester, Leicester.

[6] Lawless, J.F. (2011) Statistical Models and Methods for Lifetime Data. John Wiley & Sons, Hoboken.

[7] Haller, B. (2014) The Analysis of Competing Risks Data with a Focus on Estimation of Cause-Specific and Subdistribution Hazard Ratios from a Mixture Model. PhD Thesis, LMU München, Munich.

[8] Cox, D.R. (1959) The Analysis of Exponentially Distributed Lifetimes with Two Types of Failure. Journal of the Royal Statistical Society, Series B (Methodological), 21, 411-421.

[9] Berkson, J. and Elveback, L. (1960) Competing Exponential Risks, with Particular Reference to the Study of Smoking and Lung Cancer. Journal of the American Statistical Association, 55, 415-428. https://doi.org/10.1080/01621459.1960.10482072

[10] David, H.A. and Moeschberger, M.L. (1978) The Theory of Competing Risks. Griffin, London.

[11] Kundu, D. and Sarhan, A.M. (2006) Analysis of Incomplete Data in Presence of Competing Risks among Several Groups. IEEE Transactions on Reliability, 55, 262-269. https://doi.org/10.1109/TR.2006.874919

[12] Ahmad, S.P. and Ahmad, K. (2013) Bayesian Analysis of Weibull Distribution Using R Software. Australian Journal of Basic and Applied Sciences, 7, 156-164.

[13] Kaplan, E.L. and Meier, P. (1958) Nonparametric Estimation from Incomplete Observations. Journal of the American Statistical Association, 53, 457-481.

[14] Park, C., et al. (2005) Parameter Estimation of Incomplete Data in Competing Risks Using the EM Algorithm. IEEE Transactions on Reliability, 54, 282-290.

[15] Sarhan, A.M. (2007) Analysis of Incomplete, Censored Data in Competing Risks Models with Generalized Exponential Distributions. IEEE Transactions on Reliability, 56, 132-138. https://doi.org/10.1109/TR.2006.890899

[16] Hossain, A. and Zimmer, W. (2003) Comparison of Estimation Methods for Weibull Parameters: Complete and Censored Samples. Journal of Statistical Computation and Simulation, 73, 145-153. https://doi.org/10.1080/00949650215730

[17] Sarhan, A.M., Hamilton, D.C. and Smith, B. (2010) Statistical Analysis of Competing Risks Models. Reliability Engineering & System Safety, 95, 953-962.

[18] Kundu, D. and Basu, S. (2000) Analysis of Incomplete Data in Presence of Competing Risks. Journal of Statistical Planning and Inference, 87, 221-239.

[19] Albert, J. (2009) Bayesian Computation with R. Springer Science & Business Media, New York. https://doi.org/10.1007/978-0-387-92298-0

[20] Bocchetti, D., Giorgio, M., Guida, M. and Pulcini, G. (2009) A Competing Risk Model for the Reliability of Cylinder Liners in Marine Diesel Engines. Reliability Engineering & System Safety, 94, 1299-1307.

[21] Dukers, N.H.T.M., Renwick, N., Prins, M., Geskus, R.B., Schulz, T.F., Weverling, G.-J., Coutinho, R.A. and Goudsmit, J. (2000) Risk Factors for Human Herpesvirus 8 Seropositivity and Seroconversion in a Cohort of Homosexual Men. American Journal of Epidemiology, 151, 213-224.

[22] Xiridou, M., Geskus, R., de Wit, J., Coutinho, R. and Kretzschmar, M. (2003) The Contribution of Steady and Casual Partnerships to the Incidence of HIV Infection among Homosexual Men in Amsterdam. AIDS, 17, 1029-1038. https://doi.org/10.1097/00002030-200305020-00012

[23] Putter, H., Fiocco, M. and Geskus, R.B. (2007) Tutorial in Biostatistics: Competing Risks and Multi-State Models. Statistics in Medicine, 26, 2389-2430. https://doi.org/10.1002/sim.2712

[24] Geskus, R.B., Gonzalez, C., Torres, M., Del Romero, J., Viciana, P., Masia, M., Blanco, J.R., Iribarren, M., De Sanjose, S., Hernandez-Novoa, B., et al. (2016) Incidence and Clearance of Anal High-Risk Human Papillomavirus in HIV-Positive Men Who Have Sex with Men: Estimates and Risk Factors. AIDS, 30, 37.

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.