
Relative risk is a popular measure for comparing the risk of an outcome in an exposed group with that in an unexposed group. By applying the delta method and the Central Limit Theorem, [1] derives two approximate confidence intervals for the relative risk, and [2] approximates the confidence interval for the relative risk via the likelihood ratio statistic. Both approximations require the sample size to be large. In this paper, by adjusting the likelihood ratio statistic obtained by [2], a new method is proposed for obtaining a confidence interval for the relative risk. Simulation results show that the proposed method is extremely accurate even when the sample size is small.

Consider two groups of subjects: an exposure group ($i = 1$) and a control group ($i = 2$). Let $n_i$ be the number of subjects in group $i$, and let $p_i$ be the risk of a specific outcome in group $i$. Then the random variable $X_i$, the number of subjects in group $i$ who exhibit the outcome, is distributed as Binomial$(n_i, p_i)$. In practice, $p_i$ is unknown, but we observe $x_i$; hence $p_i$ can be estimated by $\hat{p}_i = x_i/n_i$, and an estimate of the relative risk based on the observed sample is $\hat{\theta} = \hat{p}_1/\hat{p}_2$.

Relative risk is a popular measure in biomedical studies because it is easy to compute and interpret, and it is included in standard statistical software output (e.g., in R and SAS).

To illustrate the concept of relative risk, let us consider the following example.

Out of 11,037 physicians taking aspirin over the course of the study, 104 had heart attacks. Similarly, 189 of the 11,034 physicians in the placebo group had heart attacks. Based on this dataset, the estimated relative risk of having a heart attack is

$$\hat{\theta} = \frac{(5 + 99)/(5 + 99 + 10933)}{(18 + 171)/(18 + 171 + 10845)} = \frac{104/11037}{189/11034} = 0.55.$$

Thus, physicians who took aspirin over the course of the study had 0.55 times the risk of having a heart attack compared with physicians in the placebo group. This suggests that taking aspirin is associated with a reduction in the risk of heart attack: physicians who took aspirin were about half as likely to have a heart attack as those who did not.
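For concreteness, the arithmetic above can be checked with a few lines of Python (a minimal sketch; the counts are those of the study's 2×2 table):

```python
# Relative risk of heart attack: aspirin vs. placebo group.
# Events = fatal + non-fatal heart attacks; totals include the no-attack counts.
aspirin_events, aspirin_n = 5 + 99, 5 + 99 + 10933      # x1 = 104, n1 = 11037
placebo_events, placebo_n = 18 + 171, 18 + 171 + 10845  # x2 = 189, n2 = 11034

p1_hat = aspirin_events / aspirin_n   # estimated risk in the exposed group
p2_hat = placebo_events / placebo_n   # estimated risk in the control group
theta_hat = p1_hat / p2_hat           # estimated relative risk

print(round(theta_hat, 2))  # 0.55
```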

Although reporting a point estimate of the relative risk is important, it does not convey the variation arising from the observed data. Hence, in practice, a $(1-\alpha)100\%$ confidence interval for $\theta$ is usually reported and recommended.

Let $X_i$, $i = 1, 2$, be independent random variables distributed as Binomial$(n_i, p_i)$. The relative risk is defined as $\theta = p_1/p_2$. A standard estimator of $\theta$ is $\hat{\Theta} = (X_1/n_1)/(X_2/n_2)$; with realizations $x_1$ and $x_2$, the corresponding estimate is $\hat{\theta} = (x_1/n_1)/(x_2/n_2)$.

| Group | Fatal heart attack | Non-fatal heart attack | No attack |
|---|---|---|---|
| Placebo | 18 | 171 | 10,845 |
| Aspirin | 5 | 99 | 10,933 |

Let $\hat{\Psi} = \ln\hat{\Theta}$. By the delta method,

$$E(\hat{\Psi}) = E(\ln\hat{\Theta}) \approx \ln\theta = \psi \quad\text{and}\quad \operatorname{var}(\hat{\Psi}) = \operatorname{var}(\ln\hat{\Theta}) \approx \sum_{i=1}^{2}\left(\frac{1}{n_i p_i} - \frac{1}{n_i}\right).$$

Therefore, an estimate of $\psi$ is $\hat{\psi} = \ln\hat{\theta}$, and the estimated variance of $\hat{\Psi}$ is

$$\widehat{\operatorname{var}}(\hat{\Psi}) \approx \sum_{i=1}^{2}\left(\frac{1}{n_i \hat{p}_i} - \frac{1}{n_i}\right) = \sum_{i=1}^{2}\left(\frac{1}{x_i} - \frac{1}{n_i}\right).$$

Hence, when $n_1$ and $n_2$ are large, by the Central Limit Theorem, an approximate $(1-\alpha)100\%$ confidence interval for $\psi = \ln\theta$ is

$$\left(\ln\hat{\theta} - z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\hat{\Psi})},\ \ \ln\hat{\theta} + z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\hat{\Psi})}\right),$$

where $z_{\alpha/2}$ is the $(1-\alpha/2)100$th percentile of the standard normal distribution. Since $\psi$ and $\theta$ are in one-to-one correspondence, an approximate $(1-\alpha)100\%$ confidence interval for $\theta$ is

$$\left(e^{\ln\hat{\theta} - z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\hat{\Psi})}},\ \ e^{\ln\hat{\theta} + z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\hat{\Psi})}}\right).$$

The above interval is directly available in R via the riskratio() function (e.g., from the epitools package).
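As an illustration, the interval above can also be computed directly. A minimal sketch using only the Python standard library (1.959964 approximates $z_{0.025}$, the 97.5th normal percentile):

```python
import math

def rr_wald_ci(x1, n1, x2, n2, z=1.959964):
    """Approximate 95% CI for the relative risk via the delta method."""
    log_theta = math.log((x1 / n1) / (x2 / n2))
    # square root of the estimated variance of ln(Theta-hat)
    se = math.sqrt(1/x1 - 1/n1 + 1/x2 - 1/n2)
    return math.exp(log_theta - z * se), math.exp(log_theta + z * se)

lo, hi = rr_wald_ci(104, 11037, 189, 11034)  # aspirin example
print(round(lo, 4), round(hi, 4))  # 0.4337 0.6978
```

For the aspirin data this reproduces the unadjusted interval $(0.4337, 0.6978)$.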

Since $\hat{\Theta}$ is a biased estimator of $\theta$, an adjusted estimator, obtained by adding 0.5 to each count, has been proposed:

$$\tilde{\Theta} = \frac{(X_1 + 0.5)/(n_1 + 0.5)}{(X_2 + 0.5)/(n_2 + 0.5)} \quad\text{and}\quad \tilde{\Psi} = \ln\tilde{\Theta}.$$

The estimated variance of $\tilde{\Psi} = \ln\tilde{\Theta}$ is

$$\widehat{\operatorname{var}}(\tilde{\Psi}) \approx \sum_{i=1}^{2}\left(\frac{1}{x_i + 0.5} - \frac{1}{n_i + 0.5}\right).$$

Thus, the corresponding approximate $(1-\alpha)100\%$ confidence interval for $\theta$ is

$$\left(e^{\ln\tilde{\theta} - z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\tilde{\Psi})}},\ \ e^{\ln\tilde{\theta} + z_{\alpha/2}\sqrt{\widehat{\operatorname{var}}(\tilde{\Psi})}}\right).$$
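The adjusted interval differs from the unadjusted one only in the 0.5 offsets; a minimal sketch (again with 1.959964 as $z_{0.025}$):

```python
import math

def rr_adjusted_ci(x1, n1, x2, n2, z=1.959964):
    # Adjusted point estimate: add 0.5 to every count before taking logs
    log_theta = math.log(((x1 + 0.5) / (n1 + 0.5)) / ((x2 + 0.5) / (n2 + 0.5)))
    # square root of the adjusted variance estimate
    se = math.sqrt(1/(x1 + 0.5) - 1/(n1 + 0.5) + 1/(x2 + 0.5) - 1/(n2 + 0.5))
    return math.exp(log_theta - z * se), math.exp(log_theta + z * se)

lo, hi = rr_adjusted_ci(104, 11037, 189, 11034)  # aspirin example
print(round(lo, 4), round(hi, 4))  # 0.4348 0.6990
```

For the aspirin data this reproduces the adjusted interval $(0.4348, 0.6990)$.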

[2] obtains a confidence interval for $\theta$ from the likelihood ratio statistic. The log-likelihood function of $(p_1, p_2)$ is

$$l(p_1, p_2) = \sum_{i=1}^{2}\left[x_i \ln p_i + (n_i - x_i)\ln(1 - p_i)\right].$$

The point $(\hat{p}_1, \hat{p}_2)$ that maximizes the log-likelihood function is known as the maximum likelihood estimate (MLE) of $(p_1, p_2)$, which can be obtained by solving

$$\frac{\partial l(p_1, p_2)}{\partial p_1} = 0 \quad\text{and}\quad \frac{\partial l(p_1, p_2)}{\partial p_2} = 0.$$

In this case, the MLE of $(p_1, p_2)$ is $(\hat{p}_1, \hat{p}_2) = (x_1/n_1, x_2/n_2)$. Moreover, for a given value of $\theta$, the point $(\tilde{p}_1, \tilde{p}_2)$ that maximizes the log-likelihood function subject to the constraint $\tilde{p}_1/\tilde{p}_2 = \theta$ is known as the constrained MLE of $(p_1, p_2)$. A numerical algorithm for obtaining $(\tilde{p}_1, \tilde{p}_2)$ is available in the literature. However, by applying the Lagrange multiplier technique, we have an explicit closed form for the constrained MLE:

$$\tilde{p}_1 = \theta\,\tilde{p}_2$$

and

$$\tilde{p}_2 = \frac{\left[(n_1 + x_2)\theta + (n_2 + x_1)\right] - \sqrt{\left[(n_1 + x_2)\theta + (n_2 + x_1)\right]^2 - 4(x_1 + x_2)(n_1 + n_2)\theta}}{2(n_1 + n_2)\theta}.$$

The observed likelihood ratio statistic is

$$w(\theta) = 2\left[l(\hat{p}_1, \hat{p}_2) - l(\tilde{p}_1, \tilde{p}_2)\right].$$

Under the standard regularity conditions, $W(\theta)$ is asymptotically distributed as $\chi^2_1$, so an approximate $(1-\alpha)100\%$ confidence interval for $\theta$ is

$$\left\{\theta : w(\theta) < \chi^2_{1,\alpha}\right\},$$

where $\chi^2_{1,\alpha}$ is the $(1-\alpha)100$th percentile of the $\chi^2_1$ distribution.
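Putting the pieces together, the region $\{\theta : w(\theta) < \chi^2_{1,\alpha}\}$ can be computed by solving $w(\theta) = \chi^2_{1,\alpha}$ on each side of $\hat{\theta}$, since $w$ decreases toward $\hat{\theta}$ from either direction. A self-contained sketch (standard library only; 3.841459 is the 95th percentile of $\chi^2_1$, and the bracketing factors of 100 are illustrative assumptions):

```python
import math

def loglik(p1, p2, x1, n1, x2, n2):
    # Binomial log-likelihood l(p1, p2)
    return (x1 * math.log(p1) + (n1 - x1) * math.log(1 - p1)
            + x2 * math.log(p2) + (n2 - x2) * math.log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    # Closed-form constrained MLE: p1~ = theta * p2~, with p2~ the smaller
    # root of the quadratic arising from the Lagrange-multiplier argument
    b = (n1 + x2) * theta + (n2 + x1)
    disc = b * b - 4 * (x1 + x2) * (n1 + n2) * theta
    p2t = (b - math.sqrt(disc)) / (2 * (n1 + n2) * theta)
    return theta * p2t, p2t

def w_stat(theta, x1, n1, x2, n2):
    # Observed likelihood ratio statistic w(theta)
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(x1 / n1, x2 / n2, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def lr_interval(x1, n1, x2, n2, cutoff=3.841459):
    # Invert {theta : w(theta) < cutoff} by bisection on each side of theta-hat
    theta_hat = (x1 / n1) / (x2 / n2)
    lo, hi = theta_hat / 100, theta_hat          # bracket the lower endpoint
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if w_stat(mid, x1, n1, x2, n2) > cutoff else (lo, mid)
    lower = (lo + hi) / 2
    lo, hi = theta_hat, theta_hat * 100          # bracket the upper endpoint
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if w_stat(mid, x1, n1, x2, n2) > cutoff else (mid, hi)
    upper = (lo + hi) / 2
    return lower, upper

low, high = lr_interval(104, 11037, 189, 11034)  # aspirin example
print(round(low, 4), round(high, 4))
```

For the aspirin data this likelihood-ratio interval is close to the delta-method intervals above.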

It is well known that the above methods are not very accurate when the sample size is small. Although $W(\theta)$ is asymptotically distributed as $\chi^2_1$, except in special cases $E[W(\theta)] \neq 1$, which is the mean of the $\chi^2_1$ distribution. The Bartlett corrected likelihood ratio statistic is defined as

$$W^*(\theta) = \frac{W(\theta)}{E[W(\theta)]}.$$

Then $W^*(\theta)$ is asymptotically distributed as $\chi^2_1$ with $E[W^*(\theta)] = 1$. However, an explicit form of $E[W(\theta)]$ is available only in a few well-studied problems.

In this paper, I propose the following algorithm to approximate $E[W(\theta)]$ and hence the observed Bartlett corrected likelihood ratio statistic $w^*(\theta)$.

Note that the key step of the algorithm is Step 4, where we simulate new data from the Binomial distribution with the parameter set to the constrained MLE obtained in Step 2. The reason is that we are approximating the sampling distribution of the likelihood ratio statistic $W(\theta)$ at the $\theta$ value fixed in Step 2; hence, the constrained MLE is used in Step 4.
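As I read Steps 2 and 4, $E[W(\theta)]$ is approximated by a parametric bootstrap from Binomial$(n_i, \tilde{p}_i)$. A self-contained sketch (the helper functions restate the formulas above for completeness; the dataset $x_1 = 8$, $n_1 = 20$, $x_2 = 6$, $n_2 = 25$ and the number of replicates $B$ are illustrative assumptions, not from the paper):

```python
import math
import random

def loglik(p1, p2, x1, n1, x2, n2):
    # Binomial log-likelihood l(p1, p2)
    return (x1 * math.log(p1) + (n1 - x1) * math.log(1 - p1)
            + x2 * math.log(p2) + (n2 - x2) * math.log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    # Closed-form constrained MLE with p1~ = theta * p2~ (Step 2)
    b = (n1 + x2) * theta + (n2 + x1)
    disc = b * b - 4 * (x1 + x2) * (n1 + n2) * theta
    p2t = (b - math.sqrt(disc)) / (2 * (n1 + n2) * theta)
    return theta * p2t, p2t

def w_stat(theta, x1, n1, x2, n2):
    # Observed likelihood ratio statistic w(theta)
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(x1 / n1, x2 / n2, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def bartlett_corrected_w(theta, x1, n1, x2, n2, B=1000, seed=2019):
    # Approximate E[W(theta)] by simulating B datasets from the constrained
    # MLE (Step 4), then rescale: w*(theta) = w(theta) / E-hat[W(theta)]
    rng = random.Random(seed)
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    sims = []
    for _ in range(B):
        y1 = sum(rng.random() < p1t for _ in range(n1))
        y2 = sum(rng.random() < p2t for _ in range(n2))
        if 0 < y1 < n1 and 0 < y2 < n2:      # skip degenerate samples
            sims.append(w_stat(theta, y1, n1, y2, n2))
    e_w = sum(sims) / len(sims)
    return w_stat(theta, x1, n1, x2, n2) / e_w

# Illustrative small dataset (hypothetical): x1 = 8, n1 = 20, x2 = 6, n2 = 25
w1 = w_stat(1.0, 8, 20, 6, 25)
w1_star = bartlett_corrected_w(1.0, 8, 20, 6, 25, B=500)
print(w1, w1_star)
```

The corrected statistic $w^*(\theta)$ is then compared with the $\chi^2_1$ cutoff exactly as before to form the confidence interval.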

As a final note in this section, the method by [

Our first example revisits the dataset discussed in the previous section.

As our second example, we consider the number of divorces during 2006 in a random sample of Army Reserve and Army Guard couples.

The estimated relative risk is $\hat{\theta} = \frac{12/324}{7/286} = 1.51$, which indicates that the divorce rate for Army Reserve personnel is higher than that of the Army Guard.

| Method | 95% confidence interval for θ |
|---|---|
| Agresti’s method without adjustment | (0.4337, 0.6978) |
| Agresti’s method with adjustment | (0.4348, 0.6990) |
| Zhou’s method | (0.4323, 0.6961) |
| Proposed method | (0.4333, 0.6955) |

| Personnel | Number of couples | Number of divorces |
|---|---|---|
| Reserve | 324 | 12 |
| Guard | 286 | 7 |

For this example, we also calculated the probability that the true relative risk is as extreme as or more extreme than the estimated relative risk under each of the four methods discussed in this paper. The results are plotted in the accompanying figure.

Hence, it is important to investigate which method is more accurate when sample size is small. The following simulation studies were performed.

Note that the proportion of samples with $\theta_0$ less than the lower confidence limit is known as the lower error proportion, the proportion of samples with $\theta_0$ larger than the upper confidence limit is known as the upper error proportion, and the proportion of samples with $\theta_0$ falling within the confidence interval is known as the central coverage proportion. Moreover, the average absolute bias is defined as

$$\frac{\left|\text{lower error proportion} - 0.025\right| + \left|\text{upper error proportion} - 0.025\right|}{2},$$

which is a measure of bias of the 95% confidence interval. The nominal values for the lower error proportion, central coverage proportion, upper error proportion, and average absolute bias are 0.025, 0.95, 0.025, and 0, respectively.
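As a quick check of this definition, a one-line helper (the example values are the first Method 1 row of the simulation table below):

```python
def average_absolute_bias(lower_err, upper_err, nominal=0.025):
    # Average absolute bias of a nominal 95% interval's tail error proportions
    return (abs(lower_err - nominal) + abs(upper_err - nominal)) / 2

# Method 1 with n1 = n2 = 10, p1 = 0.4, p2 = 0.6: le = 0.0324, ue = 0
print(round(average_absolute_bias(0.0324, 0.0), 4))  # 0.0162
```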

| Method | 95% confidence interval for θ |
|---|---|
| Agresti’s method without adjustment | (0.6040, 3.7908) |
| Agresti’s method with adjustment | (0.6035, 3.5881) |
| Zhou’s method | (0.6174, 4.0177) |
| Proposed method | (0.6244, 3.8329) |

| n_1 | p_1 | n_2 | p_2 | Method | le | cc | ue | aab |
|---|---|---|---|---|---|---|---|---|
| 10 | 0.4 | 10 | 0.6 | 1 | 0.0324 | 0.9676 | 0.0000 | 0.0162 |
| | | | | 2 | 0.0324 | 0.9676 | 0.0000 | 0.0162 |
| | | | | 3 | 0.0246 | 0.9555 | 0.0199 | 0.0028 |
| | | | | 4 | 0.0280 | 0.9488 | 0.0232 | 0.0024 |
| 10 | 0.5 | 10 | 0.5 | 1 | 0.0070 | 0.9868 | 0.0062 | 0.0184 |
| | | | | 2 | 0.0070 | 0.9868 | 0.0062 | 0.0184 |
| | | | | 3 | 0.0237 | 0.9527 | 0.0236 | 0.0013 |
| | | | | 4 | 0.0244 | 0.9508 | 0.0248 | 0.0004 |
| 20 | 0.3 | 25 | 0.6 | 1 | 0.0367 | 0.9605 | 0.0028 | 0.0170 |
| | | | | 2 | 0.0447 | 0.9535 | 0.0018 | 0.0214 |
| | | | | 3 | 0.0287 | 0.9433 | 0.0280 | 0.0034 |
| | | | | 4 | 0.0247 | 0.9485 | 0.0268 | 0.0011 |
| 20 | 0.8 | 25 | 0.8 | 1 | 0.0196 | 0.9678 | 0.0126 | 0.0089 |
| | | | | 2 | 0.0196 | 0.9678 | 0.0126 | 0.0089 |
| | | | | 3 | 0.0200 | 0.9582 | 0.0218 | 0.0041 |
| | | | | 4 | 0.0242 | 0.9466 | 0.0292 | 0.0025 |
| 20 | 0.7 | 25 | 0.3 | 1 | 0.0054 | 0.9612 | 0.0334 | 0.0140 |
| | | | | 2 | 0.0030 | 0.9605 | 0.0365 | 0.0168 |
| | | | | 3 | 0.0234 | 0.9535 | 0.0231 | 0.0018 |
| | | | | 4 | 0.0270 | 0.9486 | 0.0244 | 0.0013 |
| 50 | 0.2 | 50 | 0.6 | 1 | 0.0321 | 0.9556 | 0.0123 | 0.0099 |
| | | | | 2 | 0.0409 | 0.9501 | 0.0090 | 0.0160 |
| | | | | 3 | 0.0222 | 0.9498 | 0.0280 | 0.0029 |
| | | | | 4 | 0.0230 | 0.9495 | 0.0275 | 0.0023 |
| 50 | 0.7 | 100 | 0.4 | 1 | 0.0220 | 0.9492 | 0.0288 | 0.0034 |
| | | | | 2 | 0.0207 | 0.9505 | 0.0288 | 0.0041 |
| | | | | 3 | 0.0253 | 0.9454 | 0.0293 | 0.0023 |
| | | | | 4 | 0.0260 | 0.9465 | 0.0275 | 0.0018 |

Note: Method 1 = Agresti’s method without adjustment, Method 2 = Agresti’s method with adjustment, Method 3 = Zhou’s method, and Method 4 = Proposed method; le = lower error proportion, cc = central coverage proportion, ue = upper error proportion, and aab = average absolute bias.

From the table, the proposed method (Method 4) attains the smallest average absolute bias in every parameter setting considered, with error proportions closest to the nominal 0.025 levels.

In this paper, we demonstrated via simulations that the two approximate methods based on the Central Limit Theorem are not accurate when the sample size is small, whereas the proposed adjustment to the likelihood ratio statistic gives extremely accurate confidence intervals for the relative risk even when the sample size is small.

The author declares no conflicts of interest regarding the publication of this paper.

Wong, O.C.Y. (2019) A Note on Improving Inference of Relative Risk. Open Journal of Statistics, 9, 100-108. https://doi.org/10.4236/ojs.2019.91009