
Risk modeling for recurrent cervical cancer requires the development of new concepts and methodologies. Unlike most daily decisions, many medical decisions have substantial consequences and involve important uncertainties and trade-offs. The uncertainties may concern the accuracy of available diagnostic tests, the natural history of the cervical cancer, the effects of treatment in a patient, or the effects of an intervention in a group or population as a whole. With such complex decisions, it can be difficult to comprehend all options “in our heads”. This study applied Bayesian decision analysis to an inferential problem of recurrent cervical cancer in survival analysis. A formulation is considered in which an individual is expected to experience repeated events, along with concomitant variables. In addition, the sampling distribution of the observations is modeled through a proportional-intensity nonhomogeneous Poisson process. The proposed decision models can provide decision support techniques not only for taking action in the light of all available relevant information, but also for minimizing expected loss. The decision process is useful in selecting the best alternative when treating a patient with recurrent cervical cancer; in particular, the proposed decision process can provide more realistic solutions.

Cervical cancer remains one of the leading causes of cancer-related death among women globally.

In order to model recurrent cervical cancer, the nonhomogeneous Poisson process (NHPP) was introduced to make the time-dependent behavior of cervical cancer recurrence more tractable. The intensity function takes the form λ(x) = λ_{0}h(x; β), where λ_{0} is the scale factor, β is the aging rate, x is the elapsed time, and h(·) can be any function that reflects the recurrent cervical cancer.

Suppose we have a patient with cervical cancer whose recurrence process is given by an NHPP. We observe the patient for x^{*} units of time, during which we observe N recurrent events. In this (time-truncated) case, x^{*} is a constant and N is a random variable. It is known for such an NHPP that the joint density function of the first N recurrent times is

f(x_{1}, …, x_{N}) = [∏_{i=1}^{N} λ(x_{i})] exp(−∫_{0}^{x*} λ(u) du),

If instead we observe the patient until the n^{*}th recurrent event X_{n*} (i.e., the recurrent event-truncated case, where n^{*} is a constant and X_{n*} is a random variable), then the joint density function of the first n^{*} recurrent times is

f(x_{1}, …, x_{n*}) = [∏_{i=1}^{n*} λ(x_{i})] exp(−∫_{0}^{x_{n*}} λ(u) du).
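The time-truncated sampling scheme above is straightforward to simulate. The following sketch (an illustration added here, not part of the original analysis) uses the standard inversion method: if S_{1} < S_{2} < … are the arrival times of a unit-rate Poisson process, then Λ^{−1}(S_{k}) are the arrival times of an NHPP with cumulative intensity Λ(x). The cumulative intensity used in the example is hypothetical.

```python
import random

# Illustrative sketch: simulating the time-truncated sampling scheme for an
# NHPP by inversion. Arrival times of a unit-rate Poisson process are mapped
# through the inverse cumulative intensity Lambda_inv.
def simulate_nhpp(Lambda_inv, x_star, seed=None):
    """Return all NHPP event times in [0, x_star] (x_star fixed, N random)."""
    rng = random.Random(seed)
    times, s = [], 0.0
    while True:
        s += rng.expovariate(1.0)   # next arrival of the unit-rate process
        x = Lambda_inv(s)           # map through Lambda^{-1}
        if x > x_star:
            return times
        times.append(x)

# Example with a hypothetical cumulative intensity Lambda(x) = 0.5 * x**2,
# so Lambda_inv(s) = (2*s)**0.5; the expected count on [0, 4] is Lambda(4) = 8.
events = simulate_nhpp(lambda s: (2 * s) ** 0.5, x_star=4.0, seed=1)
print(len(events))
```

In the time-truncated case N is random, which the simulation makes explicit: each replication returns a different number of events.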

The recurrent cervical cancer is modeled by a power law failure model if it is given by an NHPP with an intensity function of the form

λ(x) = λ_{0}βx^{β−1}, λ_{0} > 0, β > 0,

where λ_{0} is the scale factor and β is the aging rate. When β is equal to one, the NHPP degenerates to an HPP with a constant intensity λ_{0}. For β < 1 the failure intensity is decreasing (corresponding to survival growth), and for β > 1 the failure intensity is increasing. Note that for 1 < β < 2 the failure intensity is concave downward, and for β > 2 the failure intensity is concave upward.
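These shape claims can be verified numerically. The short sketch below (purely illustrative, with an arbitrary λ_{0}) evaluates λ(x) = λ_{0}βx^{β−1} on a grid and inspects the signs of first and second differences, which indicate monotonicity and concavity respectively.

```python
# Numerically check the shape claims for the power law intensity
# lambda(x) = lam0 * beta * x**(beta - 1) (illustrative check, lam0 arbitrary).
lam0 = 1.0

def intensity(x, beta):
    return lam0 * beta * x ** (beta - 1)

xs = [0.5 + 0.01 * i for i in range(200)]
for beta in (0.5, 1.5, 3.0):
    ys = [intensity(x, beta) for x in xs]
    d1 = [b - a for a, b in zip(ys, ys[1:])]   # first differences: monotonicity
    d2 = [b - a for a, b in zip(d1, d1[1:])]   # second differences: concavity
    print(beta,
          "increasing" if d1[0] > 0 else "decreasing",
          "concave up" if d2[0] > 0 else "concave down")
```

For β = 0.5 the intensity is decreasing, for β = 1.5 it is increasing and concave downward, and for β = 3 it is increasing and concave upward, matching the statements above.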

Other failure models have also been proposed to model recurrent processes, but in more complicated ways. Unlike the power law failure model given above, these models often have more than two parameters, which makes the analysis more difficult; some of them are generated by combining two of the three commonly used models. For example, the exponential polynomial rate model proposed by Cox [

the Weibull and log-linear rate model proposed by Lee [

the nonlinear failure rate model proposed by Salem [

the bounded intensity model proposed by Hartler [

the gamma type intensity model proposed by Yamada et al. [

and the bathtub type failure rate models

In this paper, we develop a Bayesian decision process for the power law failure model, since the power law intensity function allows for a wide variety of shapes (including both concave upward and concave downward, as well as decreasing) and tends not to increase very steeply, which may make it more realistic for clinical practice.

Suppose a patient with recurrent cervical cancer whose recurrences behave according to the nonhomogeneous Poisson process with intensity function λ(x) = λ_{0}βx^{β−1}.

Parameter space Θ: Θ = {(λ_{0}, β)}, where λ_{0} is the scale factor and β is the aging rate. Both parameters are uncertain and can be estimated through physicians’ opinions.

Action space A:{a_{1},a_{2}}, where a_{1} is the status quo, and a_{2} is the risk reduction action. (We eventually expand this to consider a third possible action, the collection of additional information).

Loss function L: a real function defined on Θ × A. If we decide to keep continuing the status quo, then the loss we face is L(θ,a_{1}); if we decide to take the risk reduction action, then the loss we face is L(θ,a_{2}).

Sample space S: the additional information available to be collected. With recurrent event-time endpoints, it is common to schedule analyses at the times of occurrence of specified landmark events, such as the 5th event, the 10th event, and so on.

The cost of collecting this additional information should also be reflected in the decision process. The detailed analysis descriptions of each phase are as follows:

The available prior clinical knowledge (e.g., a physician’s opinion, past experience, or a similar clinical status) about the parameter space Θ can be summarized as a prior distribution. The loss functions for the status quo a_{1} and the risk reduction action a_{2} can be derived by taking all cost-related data into account. Once the prior distribution and loss function have been specified, it is easy to perform a prior analysis by simply comparing the expected losses for the options a_{1} and a_{2}. Therefore, if E{L(θ,a_{1})} > E{L(θ,a_{2})}, then a_{2} is optimal, and if E{L(θ,a_{1})} ≤ E{L(θ,a_{2})}, then a_{1} is optimal.
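The prior analysis above can be sketched in a few lines of code. In the following illustration, the linear loss forms L(θ,a_{1}) = C_{A}M and L(θ,a_{2}) = C_{R} + (1 − ρ)C_{A}M, the prior distributions, and all cost figures are assumptions introduced for illustration only, not values taken from this study.

```python
import random

# Hedged sketch of a prior analysis: Monte Carlo estimates of the expected
# losses for a1 (status quo) and a2 (risk reduction). Loss forms, priors,
# and all numbers are illustrative assumptions.
C_A, C_R, rho = 10_000, 30_000, 0.5   # hypothetical costs and rate reduction
t, T = 1.0, 5.0                       # hypothetical decision time and lifetime

def prior_expected_losses(n_draws=20_000, seed=0):
    rng = random.Random(seed)
    l1 = l2 = 0.0
    for _ in range(n_draws):
        lam0 = rng.gammavariate(4.0, 0.05)     # assumed Gamma prior (mean 0.2)
        beta = rng.uniform(1.0, 2.8)           # assumed Uniform prior
        M = lam0 * (T ** beta - t ** beta)     # expected recurrences in [t, T]
        l1 += C_A * M                          # loss under the status quo
        l2 += C_R + (1 - rho) * C_A * M        # assumed risk-reduction loss
    return l1 / n_draws, l2 / n_draws

L1, L2 = prior_expected_losses()
print("a2 optimal" if L1 > L2 else "a1 optimal")
```

The decision rule is exactly the comparison of expected losses described in the text; only the inputs are hypothetical.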

When the expected losses associated with options a_{1} and a_{2} are fairly close, we might not feel very confident about a decision based solely on a prior analysis, and gathering additional information might be desirable. However, before collecting additional information, we have to investigate the possible outcomes and costs of each candidate sampling plan, and determine the first-stage decision of whether collecting additional information is worthwhile and, if so, which sampling plan is the best in terms of cost-effectiveness. The Expected Value of Sample Information (EVSI) of the ith sampling plan, EVSI^{(i)}, is the expected reduction in loss from observing that sample, and the Expected Net Gain of Sampling (ENGS) is defined as

ENGS^{(i)} = EVSI^{(i)} − C_{I}^{(i)},

where C_{I}^{(i)} is the cost of collecting the additional information under the ith plan. Therefore, if ENGS ≤ 0, then it is not worthwhile collecting additional information; conversely, if ENGS > 0, then we can start collecting data and prepare for a posterior analysis, and the i^{*}th sampling plan should be adopted in order to satisfy the condition ENGS^{(i*)} = max_{i} ENGS^{(i)}.
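To make the EVSI/ENGS calculation concrete, here is a self-contained toy example. It deliberately simplifies the setting: a two-point prior on a constant event rate with Poisson count data, rather than the continuous priors and NHPP of this study, and all numbers are hypothetical. EVSI is computed as the prior Bayes risk minus the expected posterior Bayes risk, averaged over the predictive distribution of the data.

```python
import math

# Toy preposterior analysis (illustrative numbers): two-point prior on the
# event rate, data = Poisson count over an observation window tau.
# EVSI = (prior Bayes risk) - (expected posterior Bayes risk); ENGS = EVSI - C_I.
C_A, C_R, rho, C_I = 10_000, 30_000, 0.5, 2_000
tau = 2.0
prior = {1.0: 0.5, 4.0: 0.5}           # rate -> prior probability

def loss(rate, action):
    M = rate * tau                      # expected events under the status quo
    return C_A * M if action == "a1" else C_R + (1 - rho) * C_A * M

def bayes_risk(dist):
    """Expected loss of the best action under the distribution `dist`."""
    return min(sum(p * loss(r, a) for r, p in dist.items())
               for a in ("a1", "a2"))

def pois(n, mean):
    return math.exp(-mean) * mean ** n / math.factorial(n)

def posterior(n):
    w = {r: p * pois(n, r * tau) for r, p in prior.items()}
    z = sum(w.values())
    return {r: v / z for r, v in w.items()}

prior_risk = bayes_risk(prior)
# Average the posterior Bayes risk over the predictive distribution of n.
post_risk = sum(sum(p * pois(n, r * tau) for r, p in prior.items())
                * bayes_risk(posterior(n))
                for n in range(60))     # tail beyond 60 is negligible here
EVSI = prior_risk - post_risk
ENGS = EVSI - C_I
print(f"EVSI = {EVSI:.0f}, ENGS = {ENGS:.0f}, sample: {ENGS > 0}")
```

Note that EVSI is bounded above by the expected value of perfect information (here 5,000), so a sampling plan whose cost exceeds that bound can be rejected without any computation.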

Once the optimal sampling plan, say S^{(k)}, has been selected based on the preposterior analysis and the data collection is complete, the observed data S^{(k)} = s^{(k)} can then be used to perform a posterior analysis. The decision should then be made in accordance with the strategy that if

E{L(θ,a_{1}) | s^{(k)}} > E{L(θ,a_{2}) | s^{(k)}},

then option a_{2} is optimal, and if

E{L(θ,a_{1}) | s^{(k)}} ≤ E{L(θ,a_{2}) | s^{(k)}},

then option a_{1} is optimal.

By exploring the relationships between the optimal decision and the extent of uncertainty about recurrent trends, the conditions under which gathering additional information is worthwhile can be determined; more generally, this helps in developing guidelines for isolating trends in data for risk management. The following terminology will be used throughout this paper:

C_{A}: the cost of a recurrent event if it occurs.

C_{R}: the cost of the proposed risk reduction action.

C_{I}: the cost of collecting additional information.

ρ: the reduction in failure rate that would result from the proposed risk reduction action (0 < ρ < 1).

M: the expected number of failures during the time period [t,T] under the status quo.

Suppose that the patient has a planned lifetime T, and the decision of whether to keep the status quo or perform some intervention treatment must be made at time t. The decision variable we are dealing with is then the expected number of recurrent events during the time period [t,T]. Since recurrent times are assumed to be drawn from a nonhomogeneous Poisson process with intensity function λ(x) = λ_{0}βx^{β−1}, this expectation is

M = ∫_{t}^{T} λ(x) dx = λ_{0}(T^{β} − t^{β}).
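Under the power law intensity λ(x) = λ_{0}βx^{β−1}, the expected number of events on [t,T] has the closed form λ_{0}(T^{β} − t^{β}). The following quick check (with illustrative parameter values) confirms the closed form against a midpoint Riemann sum of the intensity:

```python
def expected_recurrences(lam0, beta, t, T):
    """M = integral_t^T lam0*beta*x**(beta-1) dx = lam0*(T**beta - t**beta)."""
    return lam0 * (T ** beta - t ** beta)

# Sanity check against a midpoint Riemann sum (illustrative parameters).
lam0, beta, t, T = 0.05, 1.5, 2.0, 10.0
n = 10_000
h = (T - t) / n
riemann = sum(lam0 * beta * (t + (i + 0.5) * h) ** (beta - 1)
              for i in range(n)) * h
print(expected_recurrences(lam0, beta, t, T), riemann)
```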

On the basis of the assumptions given above, we therefore have a two-action problem with a linear loss function, where the loss for taking action a_{1} (i.e., continuing with the status quo) is C_{A}M and the loss for taking action a_{2} (i.e., undertaking the intervention treatment) is C_{R} + (1 − ρ)C_{A}M.

As a simplistic assumption, one can assume that λ_{0} and β are independent of each other. For example, if the prior distribution for λ_{0} is Gamma and that for β is Uniform, then the joint prior distribution for λ_{0} and β is just the product of the individual distributions of λ_{0} and β. The joint posterior distribution for λ_{0} and β obtained by Bayesian updating is simply proportional to the product of the joint prior distribution for λ_{0} and β and the likelihood function; in the time-truncated case it is given by

p(λ_{0},β | x_{1},…,x_{N}) = K · π(λ_{0})π(β) · (λ_{0}β)^{N} [∏_{i=1}^{N} x_{i}^{β−1}] exp(−λ_{0}x^{*β}),

where K is the normalizing constant.

Since the prior and posterior density functions for M are functions of λ_{0} and β, some prior and posterior mean values of M can be derived by the bivariate transformation technique. However, closed forms for the prior and posterior means of M are not always available, which is typically the case in Bayesian analysis. Nevertheless, Bayesian prior and posterior analyses can still be performed by computing the prior and posterior mean values of M using numerical integration and comparing them with the cutoff value M_{C}. If the mean value of M is smaller than M_{C}, then we should keep the status quo; if not, then we should perform the risk reduction action.
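The numerical integration can be carried out on a simple two-dimensional grid. The sketch below is a hedged illustration of this approach, not the paper's actual computation: it uses the time-truncated power law likelihood with a Gamma prior on λ_{0} and a Uniform prior on β, and returns the posterior mean of M on [t,T]. The event times, truncation time, and horizon are hypothetical values chosen for the example.

```python
import math

# Hedged sketch: grid-based posterior mean of M for the power law NHPP
# (time-truncated likelihood), Gamma prior on lam0, Uniform prior on beta.
def gamma_pdf(x, shape, scale):
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def posterior_mean_M(times, x_star, t, T, shape, scale, b_lo, b_hi, n_grid=200):
    n = len(times)
    log_sum = sum(math.log(x) for x in times)
    num = den = 0.0
    for i in range(n_grid):
        lam0 = (i + 0.5) * 0.5 / n_grid                       # grid on (0, 0.5)
        for j in range(n_grid):
            beta = b_lo + (j + 0.5) * (b_hi - b_lo) / n_grid  # Uniform prior: constant, absorbed into K
            # log-likelihood: n*log(lam0*beta) + (beta-1)*sum(log x_i) - lam0*x_star**beta
            loglik = (n * math.log(lam0 * beta)
                      + (beta - 1) * log_sum
                      - lam0 * x_star ** beta)
            w = gamma_pdf(lam0, shape, scale) * math.exp(loglik)
            M = lam0 * (T ** beta - t ** beta)                # expected recurrences in [t, T]
            num += w * M
            den += w
    return num / den

# Hypothetical event times (in years) with priors of the shapes used in the text.
times = [0.33, 0.58, 0.66, 0.83, 0.91]
post_M = posterior_mean_M(times, x_star=0.92, t=1.0, T=5.0,
                          shape=44.4, scale=0.0045, b_lo=1.0, b_hi=2.8)
print(round(post_M, 3))
```

The same double loop, with the likelihood factor removed, yields the prior mean of M; comparing each with M_{C} gives the prior and posterior decisions.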

We have used the recurrent cervical cancer case study to illustrate the use of the models developed in the previous sections. The medical records and pathology were made accessible by the Chung Shan Medical University Hospital Tumor Registry. The birth date of the studied subject was 1-May-1943, and the observation period was from 24-Sep-2010 to 23-Aug-2011. The recurrent dates for the subject during the observation period were: {24-Jan-2011, 24-Apr-2011, 24-May-2011, 23-Jul-2011, 22-Aug-2011}. An application is performed with the assumptions that E{λ_{0}} = 0.2 and SD{λ_{0}} = 0.03, and that β is Uniform[1, 2.8]. In addition, we have C_{A} = 238,000, C_{R} = 196,000, C_{I} = 102,000, ρ = 0.08, T = 75, and t = 62. We use the entire set of failure data for the posterior analysis. Prior and posterior analyses are performed by comparing the prior and posterior mean values of M with the cutoff value M_{C}.

Since the recurrent data are already available, we assume that the cost of analyzing the recurrent data is associated with tasks such as reviewing records and interviewing physicians. The results of the Bayesian inference are summarized in the following table.

Summary of the results of Bayesian inference:

| Quantity | Value |
|---|---|
| Prior E{M} | 4.9500 |
| Optimal sampling number of failures | 7 |
| Actual sampling number of failures | 4 |
| Prior E{λ_{0}} | 0.2 |
| Posterior E′{λ_{0}} | 0.1532 |
| Prior E{β} | 1.49 |
| Posterior E′{β} | 1.7314 |
| Cutoff value of E{M} for risk reduction | 6.9400 |
| Prior decision | Status quo |
| Posterior E′{M} | 9.6983 |
| Posterior decision | Risk reduction |

Prior and posterior analyses are performed by comparing the prior and posterior mean values of M with the cutoff value M_{C}. The observed data support the adoption of the risk reduction action, whereas the priors support the status quo. This can be explained by the fact that the observed data indicate greater deterioration than was assumed by the prior distributions.

In this study, Bayesian inference of a nonhomogeneous Poisson process with a power law failure intensity function was used to describe the behavior of recurrent cervical cancer. In particular, the proposed priors allow us to explicitly account for the relationship between λ_{0} and β, and improve on previous approaches, which have generally been based on the assumption either that β is known or that λ_{0} and β are independent. Furthermore, the prior distribution for the power law failure model has advantages over the corresponding distributions for other failure models, since the power law admits a wide range of intensity function shapes. The proposed decision models can provide decision support techniques not only for taking action in the light of all available relevant information, but also for minimizing expected loss. One area in which further work might be desirable is the study of other failure models (e.g., the proportional hazard model, the exponential polynomial rate model, the Weibull and log-linear rate model, the nonlinear failure rate model, or the bathtub-type failure rate model) using the same procedure developed in this study. However, since these models usually have more than two parameters, the analysis will be more complicated.