Properties of the Maximum Likelihood Estimates and Bias Reduction for Logistic Regression Model

Nuri H. Salem Badi

Mathematics Department, Faculty of Art and Science—Alabyar, University of Benghazi, Benghazi, Libya.

DOI: 10.4236/oalib.1103625

A frequent problem in estimating logistic regression models is failure of the likelihood maximization algorithm to converge. Although bias correction for maximum likelihood estimates of the logistic regression parameters is popular and extremely well established, the behaviour and properties of the maximum likelihood method itself are less investigated. The main aim of this paper is to examine the behaviour and properties of the parameter estimation methods together with a bias reduction technique. We focus on a method that uses a modified score function to reduce the bias of the maximum likelihood estimates. We also present new and illustrative examples based on simulated data with different sample sizes and different proportions of the outcome variable.

Keywords

Logistic Regression Model, Maximum Likelihood Method, Convergence Problems, Bias Reduction Technique

Cite as:

Badi, N. (2017) Properties of the Maximum Likelihood Estimates and Bias Reduction for Logistic Regression Model. *Open Access Library Journal*, **4**, 1-12. doi: 10.4236/oalib.1103625.

1. Introduction

Logistic regression methods are often used in the statistical analysis of dichotomous outcome variables; the model is a commonly applied procedure for describing the relationship between a binary outcome variable and a set of covariates. The standard method of estimating the logistic regression parameters is maximum likelihood (ML). In a very general sense, the ML method yields values for the unknown parameters that maximize the probability of the observed set of data. A common problem with the ML method is failure to converge, which occurs when the maximum likelihood estimates (MLE) do not exist. Assessing the behaviour of the MLE for the logistic regression model is important, as the logistic model is widely used in medical statistics. Much work on the logistic regression model addresses the convergence problem, e.g. [1] , or bias reduction, e.g. [2] [3] . Assumptions and further details concerning the distribution of the coefficients estimated by the ML approach and the bias reduction technique, as well as applications and the effects of sample size, are considered in [4] [5] . However, the behaviour and properties of bias correction methods are less investigated. Recent work takes the bias correction technique proposed by [2] to ensure that the MLE exists. The present paper evaluates the behaviour and properties of the bias reduction method using data simulated with different sample sizes and parameter values. The next section describes and fits the logistic regression model. Section 3 discusses the ML convergence problem. Section 4 applies the modified score function to the logistic regression model and illustrates a special case of the modified function that yields two equations used to estimate the parameters. Section 5 investigates the asymptotic properties of the logistic regression model, comparing the parameters estimated by the ML method and by the reduction technique on simulated data.
The discussion, conclusion and some general remarks about the results are given in Section 6.

2. The Logistic Regression Model

The goal of a logistic regression analysis is to find the best-fitting model to describe the relationship between a dichotomous outcome and a set of covariates. [6] showed that the logistic regression model is a member of the class of generalized linear models. For more details of the logistic model see [7] [8] [9] and [10] [11] [12] .

The Model

Suppose that the response variables ${y}_{i}$ , $i=1,\cdots ,n$ , satisfy ${y}_{i}\sim \text{binomial}\left({m}_{i},{\pi}_{i}\right)$ , so that ${y}_{i}\in \left\{0,1,\cdots ,{m}_{i}\right\}$ , and that the ${\pi}_{i}$ are related to a collection of covariates $\left({x}_{i1},{x}_{i2},\cdots ,{x}_{ip}\right)$ according to the equation

$\mathrm{log}\left(\frac{{\pi}_{i}}{1-{\pi}_{i}}\right)={\displaystyle \underset{j=1}{\overset{p}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\beta}_{j}{x}_{ij}={\beta}^{\text{T}}{x}_{i}$ (1)

We consider the special case ${m}_{i}=1$ so ${y}_{i}~\text{binomial}\left(\mathrm{1,}{\pi}_{i}\right)$ where ${\pi}_{i}$ is the probability of success for each $i=1,\cdots ,n$ . We also define ${\eta}_{i}={\beta}^{\text{T}}{x}_{i}$ so that

$g\left({\pi}_{i}\right)=\mathrm{log}\left(\frac{{\pi}_{i}}{1-{\pi}_{i}}\right)={\eta}_{i}$ (2)

and

${\pi}_{i}=\frac{\mathrm{exp}\left({\eta}_{i}\right)}{1+\mathrm{exp}\left({\eta}_{i}\right)}$ (3)

Here $g\left(\mathrm{.}\right)$ is called the logit link function and ${\eta}_{i}={\displaystyle {\sum}_{j=1}^{p}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\beta}_{j}{x}_{ij}={\beta}^{\text{T}}{x}_{i}$ is the linear predictor.

There are some other link functions which can also be used, instead of the logit link function such as the probit link function

$\eta ={\Phi}^{-1}\left(\pi \right)\leftrightarrow \pi =\Phi (\; \eta \; )$

and the complementary log-log link function

$\eta =\mathrm{log}\left(-\mathrm{log}\left(1-\pi \right)\right)\leftrightarrow \pi =1-\mathrm{exp}\left(-{\text{e}}^{\eta}\right)$
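As a quick numerical illustration of these three inverse links, they can be coded directly; the function names below are our own, not part of any standard API.

```python
import math

# Inverse link functions mapping the linear predictor eta to a probability pi.
# These helper names (inv_logit, inv_probit, inv_cloglog) are illustrative only.

def inv_logit(eta: float) -> float:
    """Logit inverse link: pi = exp(eta) / (1 + exp(eta))."""
    return math.exp(eta) / (1.0 + math.exp(eta))

def inv_probit(eta: float) -> float:
    """Probit inverse link: pi = Phi(eta), the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def inv_cloglog(eta: float) -> float:
    """Complementary log-log inverse link: pi = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))
```

All three map $\eta =0$ to a fixed probability (0.5 for logit and probit, $1-{\text{e}}^{-1}$ for the complementary log-log), which is one way to see that the links differ mainly in their tail behaviour.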

Fitting The Model

The logistic model with ${y}_{i}\sim \text{binomial}\left({m}_{i},{\pi}_{i}\right)$ and ${m}_{i}=1$ can be fitted using the method of maximum likelihood to estimate the parameters. The first step is to construct the likelihood function, which is a function of the unknown parameters; we then choose those values of the parameters that maximize this function. The probability function of the model is

$f\left({y}_{i}\right)={\pi}_{i}^{{y}_{i}}{\left(1-{\pi}_{i}\right)}^{1-{y}_{i}}$ (4)

where the likelihood function is

$L\left({\pi}_{i}|{y}_{i}\right)={\displaystyle \underset{i=1}{\overset{n}{\prod}}}f\left({y}_{i}|{\pi}_{i}\right)$ (5)

Since the observations are independent, the likelihood function is as follows:

$L\left({\pi}_{i}|{y}_{i}\right)={\displaystyle \underset{i=1}{\overset{n}{\prod}}}{\pi}_{i}^{{y}_{i}}{\left(1-{\pi}_{i}\right)}^{1-{y}_{i}}$ (6)

The maximum likelihood estimate of $\beta $ is the value which maximizes the likelihood function. In general the log likelihood function is easier to work with mathematically and is:

$l={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left[{y}_{i}\text{\hspace{0.05em}}\mathrm{log}\text{\hspace{0.05em}}\left({\pi}_{i}\right)+\left(1-{y}_{i}\right)\mathrm{log}\left(1-{\pi}_{i}\right)\right]$ (7)
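Equation (7) can be evaluated directly from the fitted probabilities; the small helper below is a sketch with a name of our own choosing.

```python
import math

def log_likelihood(pi, y):
    """Binary log-likelihood from Equation (7):
    sum of y*log(pi) + (1 - y)*log(1 - pi) over the observations."""
    return sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
               for yi, p in zip(y, pi))
```

For example, two observations with fitted probability 0.5 each contribute $\mathrm{log}\left(0.5\right)$ regardless of the observed response, giving $l=2\mathrm{log}\left(0.5\right)$ .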

2.1. Special Case of the Logistic Model with Two Covariates

In this case we consider the logistic regression model with two parameters, so $p=2$ , one of them being the intercept (general mean). Thus we have ${\beta}_{0}$ and ${\beta}_{1}$ , such that

$g\left({\pi}_{i}\right)={\eta}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i}$ (8)

where ${x}_{i}$ is now a scalar covariate and

${\pi}_{i}=\frac{\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)}{1+\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)}$ (9)

Therefore we can write the log-likelihood function as:

$l\left({\beta}_{0},{\beta}_{1}\right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{y}_{i}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)-\mathrm{log}\left[1+\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)\right]$ (10)

To estimate the values of ${\beta}_{0}$ and ${\beta}_{1}$ we differentiate $l\left({\beta}_{0}\mathrm{,}{\beta}_{1}\right)$ in terms of ${\beta}_{0}$ and ${\beta}_{1}$ respectively as:

$\frac{\partial l}{\partial {\beta}_{0}}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{y}_{i}-\frac{\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)}{1+\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({y}_{i}-{\pi}_{i}\right)$ (11)

$\frac{\partial l}{\partial {\beta}_{1}}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{y}_{i}{x}_{i}-\frac{{x}_{i}\left(\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)\right)}{1+\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({y}_{i}-{\pi}_{i}\right){x}_{i}$ (12)

Now we set $\frac{\partial l}{\partial {\beta}_{0}}=0$ and $\frac{\partial l}{\partial {\beta}_{1}}=0$ and so the maximum likelihood estimates

of ${\beta}_{0}$ and ${\beta}_{1}$ are the solution of the following equations

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{y}_{i}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}$ (13)

and

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{y}_{i}{x}_{i}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}{x}_{i}$ (14)

and will be denoted ${\stackrel{^}{\beta}}_{0}$ and ${\stackrel{^}{\beta}}_{1}$ . For logistic regression these two equations are nonlinear in ${\beta}_{0}$ and ${\beta}_{1}$ , and a numerical method, such as the Newton-Raphson method, is needed for their solution.
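A minimal Newton-Raphson fitter for the two-parameter model can be sketched as follows. This is our own illustrative code, not the author's; the function name and defaults are hypothetical. Each iteration updates $\beta $ by $I{\left(\beta \right)}^{-1}U\left(\beta \right)$ , with the score from Equations (11)-(12) and the expected information evaluated at the current estimates.

```python
import math

def fit_logistic_newton(x, y, tol=1e-10, max_iter=100):
    """Newton-Raphson (Fisher scoring) for the two-parameter logistic model."""
    b0, b1 = 0.0, 0.0
    for _ in range(max_iter):
        pi = [math.exp(b0 + b1 * xi) / (1 + math.exp(b0 + b1 * xi)) for xi in x]
        # Score vector (Equations 11-12): sum(y - pi), sum((y - pi) * x)
        u0 = sum(yi - p for yi, p in zip(y, pi))
        u1 = sum((yi - p) * xi for yi, p, xi in zip(y, pi, x))
        # Expected information: sums of w, w*x, w*x^2 with w = pi*(1 - pi)
        w = [p * (1 - p) for p in pi]
        i00 = sum(w)
        i01 = sum(wi * xi for wi, xi in zip(w, x))
        i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = i00 * i11 - i01 * i01
        # Newton step: beta <- beta + I^{-1} U, using the 2x2 inverse in closed form
        d0 = (i11 * u0 - i01 * u1) / det
        d1 = (-i01 * u0 + i00 * u1) / det
        b0 += d0
        b1 += d1
        if abs(d0) + abs(d1) < tol:
            break
    return b0, b1
```

With a binary covariate the fitted probabilities reproduce the group proportions, which gives a convenient check of the solver.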

2.2. The Asymptotic Distribution of the (MLE)

The estimated parameters $\stackrel{^}{\beta}={\left({\stackrel{^}{\beta}}_{0},{\stackrel{^}{\beta}}_{1}\right)}^{\prime}$ have an asymptotic distribution given by $\stackrel{^}{\beta}\sim N\left(\beta ,I{\left(\beta \right)}^{-1}\right)$ where $I\left(\beta \right)$ is Fisher’s information matrix defined as

$I\left(\beta \right)=-E\left(\begin{array}{cc}\frac{\partial {l}^{2}}{\partial {\beta}_{0}^{2}}& \frac{\partial {l}^{2}}{\partial {\beta}_{0}\partial {\beta}_{1}}\\ \frac{\partial {l}^{2}}{\partial {\beta}_{1}\partial {\beta}_{0}}& \frac{\partial {l}^{2}}{\partial {\beta}_{1}^{2}}\end{array}\right)$ (15)

where the matrix is evaluated at the MLE. For logistic regression the estimated Fisher information matrix can be written as

$I\left(\stackrel{^}{\beta}\right)=\left(\begin{array}{cc}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)& {\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{i}{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)\\ {\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{i}{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)& {\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{i}^{2}{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)\end{array}\right)$ (16)

where ${\stackrel{^}{\pi}}_{i}=\frac{\mathrm{exp}\left({\stackrel{^}{\eta}}_{i}\right)}{1+\mathrm{exp}\left({\stackrel{^}{\eta}}_{i}\right)}$ and ${\stackrel{^}{\eta}}_{i}={\stackrel{^}{\beta}}_{0}+{\stackrel{^}{\beta}}_{1}{x}_{i}$ . The variance of $\stackrel{^}{\beta}$ is approximately given by $\text{Var}\left(\stackrel{^}{\beta}\right)\simeq I{\left(\stackrel{^}{\beta}\right)}^{-1}$ .
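This variance approximation can be evaluated directly by inverting the $2\times 2$ matrix of Equation (16) in closed form. The helper below is our own sketch; the name `fisher_variances` is not from the paper.

```python
import math

def fisher_variances(x, b0, b1):
    """Approximate Var(b0_hat), Var(b1_hat) as the diagonal of I(beta_hat)^{-1},
    with I(beta_hat) from Equation (16)."""
    pi = [math.exp(b0 + b1 * xi) / (1 + math.exp(b0 + b1 * xi)) for xi in x]
    w = [p * (1 - p) for p in pi]
    i00 = sum(w)
    i01 = sum(wi * xi for wi, xi in zip(w, x))
    i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
    det = i00 * i11 - i01 * i01
    # 2x2 symmetric inverse: swap the diagonal, divide by the determinant
    return i11 / det, i00 / det
```

For instance, with $x=\left(-1,1\right)$ and $\stackrel{^}{\beta}=\left(0,0\right)$ each weight is 0.25, so the information matrix is $\mathrm{diag}\left(0.5,0.5\right)$ and both approximate variances equal 2.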

3. Maximum Likelihood Convergence Problems

A problem occurs in estimating logistic regression models when the maximum likelihood estimates do not exist and one or more components of $\stackrel{^}{\beta}$ are infinite. One case in which this problem occurs is when all of the observations have the same response. For example, suppose that ${m}_{i}=1$ and that all of the response variables equal zero, i.e. ${\sum}_{i=1}^{n}{y}_{i}=0$ . In this case the log-likelihood function is

$l\left({\beta}_{0},{\beta}_{1}\right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}}-\mathrm{log}\left[1+\mathrm{exp}\left({\beta}_{0}+{\beta}_{1}{x}_{i}\right)\right]$ (17)

Now differentiating $l\left({\beta}_{0}\mathrm{,}{\beta}_{1}\right)$ in terms of ${\beta}_{0}$ and ${\beta}_{1}$ respectively and setting equal to zero gives

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\frac{\mathrm{exp}\left({\stackrel{^}{\beta}}_{0}+{\stackrel{^}{\beta}}_{1}{x}_{i}\right)}{1+\mathrm{exp}\left({\stackrel{^}{\beta}}_{0}+{\stackrel{^}{\beta}}_{1}{x}_{i}\right)}=0$ (18)

and

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}{x}_{i}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}{x}_{i}\frac{\mathrm{exp}\left({\stackrel{^}{\beta}}_{0}+{\stackrel{^}{\beta}}_{1}{x}_{i}\right)}{1+\mathrm{exp}\left({\stackrel{^}{\beta}}_{0}+{\stackrel{^}{\beta}}_{1}{x}_{i}\right)}=0$ (19)

The first equation has no solution: the left-hand side is a sum of strictly positive quantities and so can never equal zero. Driving the sum towards zero requires ${\beta}_{0}$ to become large and negative, i.e. to tend to $-\infty $ . However, if precisely one of the response variables equals 1, the resulting maximum likelihood equations become

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}=1$ (20)

${\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\pi}_{i}{x}_{i}={x}_{1}$ (21)

where we have labelled the observations so that ${y}_{1}=1$ . Here the maximum likelihood estimates exist and convergence of the MLE is achieved, because the two previous equations equate sums of positive quantities to positive values. In the first equation, if the intercept is large and positive the sum exceeds one, while if it is large and negative the sum falls below one; hence finite parameter estimates satisfying the equations can be found.
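The divergence in the all-zeros case can be seen numerically. The sketch below is our own illustration (intercept-only model for simplicity, all names hypothetical): with every ${y}_{i}=0$ , each Newton-Raphson step pushes ${\beta}_{0}$ further towards $-\infty $ and the iterates never settle.

```python
import math

def newton_steps_all_zero(n=10, steps=8):
    """Newton-Raphson iterates for an intercept-only logistic model in which
    all n responses are zero. Returns the path of b0 values, which decreases
    without bound because the score -n*pi is never zero for finite b0."""
    b0 = 0.0
    path = []
    for _ in range(steps):
        pi = math.exp(b0) / (1 + math.exp(b0))
        score = -n * pi               # sum(y - pi) with every y = 0
        info = n * pi * (1 - pi)      # Fisher information for the intercept
        b0 += score / info            # each step adds -1 / (1 - pi) < -1
        path.append(b0)
    return path
```

Each update subtracts $1/\left(1-\pi \right)>1$ , so the sequence is strictly decreasing and the algorithm never converges; in practice software stops after a fixed number of iterations with very large negative coefficients.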

4. Modified Score Function

Firth [2] proposed a method to reduce the bias of the MLE; with the resulting modified score function the maximum likelihood convergence problem does not arise. The idea extends two standard approaches that have been studied extensively in the literature: the computationally intensive jackknife method proposed by [13] [14] , and the approach that simply substitutes $\stackrel{^}{\beta}$ for the unknown $\beta $ in the first-order bias term $\frac{b\left(\beta \right)}{n}$ . A difficulty with the latter, in the case of small data sets, is that it is not uncommon for $\stackrel{^}{\beta}$ to be infinite in some samples of logistic regression models [15] [16] . Recall that the maximum likelihood estimate is obtained as a solution of the score equation

$\frac{\partial l\left(\beta \right)}{\partial \beta}=U\left(\beta \right)=0$ (22)

[2] proposed that instead, we solve ${U}^{*}\left(\beta \right)=0$ , where the appropriate modification to $U\left(\beta \right)$ is:

${U}^{*}\left(\beta \right)=U\left(\beta \right)-I\left(\beta \right)b\left(\beta \right)$ (23)

and the expected value of $\stackrel{^}{\beta}$ , as given by [3] , is:

$E\left(\stackrel{^}{\beta}\right)=\beta +b\left(\stackrel{^}{\beta}\right)+O\left({n}^{-1}\right)$ (24)

where

$b\left(\beta \right)=\frac{2E\left(\frac{\partial l}{\partial \beta}\frac{\partial {l}^{2}}{\partial {\beta}^{2}}\right)+E\left(\frac{\partial {l}^{3}}{\partial {\beta}^{3}}\right)}{2{\left\{E\left(\frac{\partial {l}^{2}}{\partial {\beta}^{2}}\right)\right\}}^{2}}$

The variance of $\stackrel{^}{\beta}$ is approximately given by $\text{Var}\left(\stackrel{^}{\beta}\right)\simeq I{\left(\stackrel{^}{\beta}\right)}^{-1}$ .

4.1. Modified Function with Logistic Regression Model

In this part we apply the modified score function to the simple logistic regression model. The $O\left({n}^{-1}\right)$ bias vector has the form $b={\left({X}^{\text{T}}WX\right)}^{-1}{X}^{\text{T}}W\xi $ , as proposed by [17] . Here $W\xi $ has $i$th element ${h}_{i}\left({\pi}_{i}-\frac{1}{2}\right)$ and ${h}_{i}$ is the $i$th diagonal element of the hat matrix

$H={W}^{1/2}X{\left({X}^{\text{T}}WX\right)}^{-1}{X}^{\text{T}}{W}^{1/2}\mathrm{.}$

where $W=\mathrm{diag}\left({\pi}_{i}\left(1-{\pi}_{i}\right)\right)$ and $X$ is the design matrix. Then, the modified score function is written as

${U}^{*}=U-{X}^{\text{T}}W\xi $ (25)

In this case, the modified score function ${U}^{\mathrm{*}}=\left({U}_{0}^{\mathrm{*}}\mathrm{,}{U}_{1}^{\mathrm{*}}\right)$ gives two equations

${U}_{0}^{*}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left[\left({y}_{i}+\frac{{h}_{i}}{2}\right)-\left(1+{h}_{i}\right){\pi}_{i}\right]=0$ (26)

and

${U}_{1}^{*}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left[\left({y}_{i}+\frac{{h}_{i}}{2}\right)-\left(1+{h}_{i}\right){\pi}_{i}\right]{x}_{i}=0$ (27)

These are used to estimate the parameters.
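A minimal sketch of solving Equations (26)-(27) by Fisher scoring follows; this is our own illustrative code, not the author's implementation, and the function name is hypothetical. The hat-matrix diagonal is computed in closed form from the $2\times 2$ information matrix, and each step is $\beta \leftarrow \beta +I{\left(\beta \right)}^{-1}{U}^{*}\left(\beta \right)$ .

```python
import math

def fit_logistic_firth(x, y, tol=1e-10, max_iter=200):
    """Fisher scoring on the modified score equations (26)-(27) for the
    two-parameter logistic model."""
    b0, b1 = 0.0, 0.0
    for _ in range(max_iter):
        pi = [math.exp(b0 + b1 * xi) / (1 + math.exp(b0 + b1 * xi)) for xi in x]
        w = [p * (1 - p) for p in pi]
        X0 = sum(w)
        X1 = sum(wi * xi for wi, xi in zip(w, x))
        X2 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = X0 * X2 - X1 * X1
        # Diagonal of H = W^(1/2) X (X^T W X)^{-1} X^T W^(1/2):
        # h_i = w_i * (X2 - 2 x_i X1 + x_i^2 X0) / det
        h = [wi * (X2 - 2 * xi * X1 + xi * xi * X0) / det
             for wi, xi in zip(w, x)]
        # Modified scores U0*, U1* from Equations (26)-(27)
        u0 = sum((yi + hi / 2) - (1 + hi) * p
                 for yi, hi, p in zip(y, h, pi))
        u1 = sum(((yi + hi / 2) - (1 + hi) * p) * xi
                 for yi, hi, p, xi in zip(y, h, pi, x))
        # Fisher scoring step using the information matrix of Equation (16)
        d0 = (X2 * u0 - X1 * u1) / det
        d1 = (-X1 * u0 + X0 * u1) / det
        b0 += d0
        b1 += d1
        if abs(d0) + abs(d1) < tol:
            break
    return b0, b1
```

Unlike plain Newton-Raphson, this iteration has a finite fixed point even when every response is zero, which is the special case worked out in the next subsection.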

4.2. Special Case of Modified Function

For further evaluation, we discuss the behaviour of the adjusted score function when all the observations have the same response, i.e. ${\sum}_{i=1}^{n}{y}_{i}=0$ . As a special case, suppose we have one explanatory variable ${x}_{i}$ taking values 0 or 1. Before calculating the adjusted score function, we first derive the form of ${h}_{i}$ , the $i$th diagonal element of the hat matrix $H$ :

${h}_{i}=\frac{{\pi}_{i}\left(1-{\pi}_{i}\right)\left({X}_{2}-2{x}_{i}{X}_{1}+{x}_{i}^{2}{X}_{0}\right)}{\Delta}$ (28)

where $\Delta ={X}_{0}{X}_{2}-{X}_{1}^{2}$ , ${X}_{0}={n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)+{n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)$ , ${X}_{1}={n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)$ and ${X}_{2}={n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)$ , where ${n}_{0}$ and ${n}_{1}$ are the number of observations of x equal to 0 and 1 respectively. Hence

${h}_{0}=\frac{{\pi}_{0}\left(1-{\pi}_{0}\right)\left[{n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)\right]}{\Delta}$ (29)

and

${h}_{1}=\frac{{\pi}_{1}\left(1-{\pi}_{1}\right)\left[{n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)\right]}{\Delta}$ (30)

Therefore, when we set the adjusted score function $\left({U}_{0}^{*},{U}_{1}^{*}\right)=0$ with ${\sum}_{i=1}^{n}{y}_{i}=0$ we have

${U}_{1}^{*}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left[\frac{{h}_{i}}{2}-\left(1+{h}_{i}\right){\pi}_{i}\right]{x}_{i}=0$ (31)

This gives

$\left[\frac{{h}_{1}}{2}-\left(1+{h}_{1}\right){\pi}_{1}\right]=0$ (32)

and

${\pi}_{1}=\frac{{h}_{1}}{2\left(1+{h}_{1}\right)}$ (33)

Now,

${U}_{0}^{*}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left[\frac{{h}_{i}}{2}-\left(1+{h}_{i}\right){\pi}_{i}\right]=0$ (34)

and so

$\left[{h}_{1}\frac{{n}_{1}}{2}-{n}_{1}\left(1+{h}_{1}\right){\pi}_{1}\right]+\left[{h}_{0}\frac{{n}_{0}}{2}-{n}_{0}\left(1+{h}_{0}\right){\pi}_{0}\right]=0$ (35)

we get

${\pi}_{0}=\frac{{h}_{0}}{2\left(1+{h}_{0}\right)}$ (36)

Before calculating ${\pi}_{0}$ and ${\pi}_{1}$ , consider the following way to simplify ${h}_{0}$ and ${h}_{1}$ . Let $A={n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)$ and $B={n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)$ . Then ${X}_{0}=A+B$ , ${X}_{1}=A$ and ${X}_{2}=A$ , so we can write $\Delta $ as

$\Delta ={X}_{0}{X}_{2}-{X}_{1}^{2}={A}^{2}+AB-{A}^{2}=AB={n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right){n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)$ (37)

Therefore, ${h}_{0}$ and ${h}_{1}$ can be written as

${h}_{0}=\frac{{\pi}_{0}\left(1-{\pi}_{0}\right)\left[{n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)\right]}{\left[{n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)\right]{n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)}=\frac{1}{{n}_{0}}$ (38)

and

${h}_{1}=\frac{{\pi}_{1}\left(1-{\pi}_{1}\right)\left[{n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)\right]}{\left[{n}_{1}{\pi}_{1}\left(1-{\pi}_{1}\right)\right]{n}_{0}{\pi}_{0}\left(1-{\pi}_{0}\right)}=\frac{1}{{n}_{1}}$ (39)

Then, we obtain

${\pi}_{0}=\frac{{h}_{0}}{2\left(1+{h}_{0}\right)}=\frac{1/{n}_{0}}{2\left(1+1/{n}_{0}\right)}=\frac{1}{2\left({n}_{0}+1\right)}$ (40)

and

${\pi}_{1}=\frac{{h}_{1}}{2\left(1+{h}_{1}\right)}=\frac{1/{n}_{1}}{2\left(1+1/{n}_{1}\right)}=\frac{1}{2\left({n}_{1}+1\right)}$ (41)

As this example with $x=0,1$ and ${\sum}_{i=1}^{n}{y}_{i}=0$ shows, the parameter estimates are finite: the modified function works well and the convergence problem does not arise.

5. Simulation Study

The following discussion presents the simulation plan and the designs used to generate the data, in order to identify the effect of sample size and the proportion of events (the percentage of $y=1$ or $y=0$ ) on the estimation of the parameters. We examine the precision of the estimation by calculating the variance of the parameters obtained by simulation for the two approaches, MLE and Firth, and comparing them with $I{\left(\beta \right)}^{-1}$ evaluated at the known values of $\beta $ . The simulation study is designed as follows:

1) Three sample sizes were used: $n=40$ , $n=120$ and $n=500$ .

2) For each sample size we choose ${x}_{i}$ as a draw from $N\left(\mathrm{0,1}\right)$ . The x variables are fixed at these values throughout the simulation.

3) We choose ${\beta}_{0}$ and ${\beta}_{1}$ to give three cases: ${\beta}_{1}=0.2$ , with ${\beta}_{0}$ adjusted so that, averaged over the covariates, $\text{pr}\left(y=1\right)$ is approximately (a) 0.5, (b) 0.1, (c) 0.05.

4) For each sample size and set of parameter values we perform 100,000 simulations.

5) Two approaches are used to estimate the parameters: MLE and Firth’s bias-reduced estimator.
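One arm of this design can be sketched as follows. The code below is our own, not the author's; the function name, the crude intercept choice, and the number of replicates are illustrative. It counts how often a generated sample has ${\sum}_{i=1}^{n}{y}_{i}=0$ , the case in which Section 3 shows the MLE does not exist.

```python
import math
import random

def count_degenerate_samples(n=40, beta1=0.2, target=0.05, reps=10000, seed=42):
    """Count replicates in which every response is zero, so the MLE is infinite.
    The intercept is set roughly so that the average pr(y=1) is near `target`."""
    random.seed(seed)
    x = [random.gauss(0.0, 1.0) for _ in range(n)]     # covariates fixed across reps
    beta0 = math.log(target / (1.0 - target))          # crude intercept choice
    pi = [math.exp(beta0 + beta1 * xi) / (1.0 + math.exp(beta0 + beta1 * xi))
          for xi in x]
    degenerate = 0
    for _ in range(reps):
        y = [1 if random.random() < p else 0 for p in pi]
        if sum(y) == 0:
            degenerate += 1
    return degenerate
```

With $n=40$ and average probability near 0.05, a non-negligible fraction of replicates (roughly ${0.95}^{40}\approx 13\%$ ) is all-zero, while at a probability of 0.5 this essentially never happens; this is one reason the smallest sample size and smallest $\text{pr}\left(y=1\right)$ combination shows the most convergence failures in the tables below.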

5.1. Results and Discussion of Sample Size n = 500

The simulation assesses the accuracy of the estimation of $\text{Var}\left(\stackrel{^}{\beta}\right)$ using the information matrix. We calculate $\text{Var}\left({\stackrel{^}{\beta}}_{0}\right)$ and $\text{Var}\left({\stackrel{^}{\beta}}_{1}\right)$ from the simulated values of ${\stackrel{^}{\beta}}_{0}$ and ${\stackrel{^}{\beta}}_{1}$ , and also by evaluating $I\left(\beta \right)$ at the known values of $\beta $ . The results are shown in Table 1; in all three cases of the proportion of $y=1$ , the likelihood maximization algorithm converged.

As can be seen in Table 1, $\text{Var}{\stackrel{^}{\beta}}_{L}$ Sim and $\text{Var}{\stackrel{^}{\beta}}_{F}$ Sim are the variances of the parameters estimated by the MLE and Firth methods, respectively; Ratio L and Ratio F denote the corresponding ratios of the simulated variance to the variance obtained from the information matrix. The results show that the variance of the parameters calculated from the simulation and the variance calculated by evaluating the information matrix at the known values of $\beta $ are almost the same. The ratio in the first case, when $\text{pr}\left(y=1\right)$ is 0.5, is close to one, while in the second and third cases it is slightly larger.

The variances of the parameters calculated by Firth’s method were smaller than those calculated by MLE, and the ratio was in general close to 1. Moreover, the bias ( ${\stackrel{^}{\beta}}_{F}-\beta $ ) was smaller.

5.2. Results and Discussion of Sample Size n = 120

In this part we follow the same procedure as in the previous case of $n=500$ . The

(a) The variance of the parameters estimated by MLE and Firth with (0.5, 0.1, 0.05) proportion of y = 1.

(b) The bias value with (0.5, 0.1, 0.05) proportion of y = 1.

Table 1. Results of 100,000 simulations with sample size n = 500 and (0.5, 0.1, 0.05) proportion of y = 1.

results of the simulation are shown in Table 2. Maximum likelihood convergence problems occurred when $\text{pr}\left(y=1\right)=0.05$ . Note that there are many situations in which the likelihood function has no maximum, in which case we say that the maximum likelihood estimate does not exist. In this simulation, which generated the data set 100,000 times, the coefficients in some replicates tended to infinity in the final iterations, so no estimates of ${\stackrel{^}{\beta}}_{0}$ and ${\stackrel{^}{\beta}}_{1}$ were obtained and the algorithm did not converge. In our simulation we record the cases in which the algorithm failed to converge.

Here, for only 99,806 (99%) of the data sets was it possible to obtain finite, converged estimates of ${\beta}_{0}$ and ${\beta}_{1}$ . Moreover, the variance of the parameters ${\stackrel{^}{\beta}}_{0}$ and ${\stackrel{^}{\beta}}_{1}$ is large. This is because, even though convergence is achieved when ${\sum}_{i=1}^{n}{y}_{i}=1$ , there are some very large negative values of $\stackrel{^}{\beta}$ . In the other two cases, $\text{pr}\left(y=1\right)=0.5$ and $0.1$ , ML convergence was achieved in every simulation. The ratio is near one but somewhat higher than in the case of $n=500$ . Firth’s approach showed reasonable results: all cases achieved maximum likelihood convergence, and both the ratio and the bias ${\stackrel{^}{\beta}}_{F}-\beta $ were better than for the MLE approach.

5.3. Results and Discussion of Sample Size n = 40

We used the same analysis as in the previous cases with $n=40$ . As can be seen in Table 3, the results show that the MLE approach had convergence problems:

(a) The variance of the parameters estimated by MLE and Firth with (0.5, 0.1, 0.05) proportion of y = 1.

(b) The bias value with (0.5, 0.1, 0.05) proportion of y = 1.

Table 2. Results of 100,000 simulations with sample size n = 120 and (0.5, 0.1, 0.05) proportion of y = 1.

(a) The variance of the parameters estimated by MLE and Firth with (0.5, 0.1, 0.05) proportion of y = 1.

(b) The bias value with (0.5, 0.1, 0.05) proportion of y = 1.

Table 3. Results of 100,000 simulations with sample size n = 40 and (0.5, 0.1, 0.05) proportion of y = 1.

98,273 (98%) and 85,967 (86%) of data sets achieved ML convergence when $\text{pr}\left(y=1\right)$ was 0.1 and 0.05, respectively. Convergence was achieved in every simulation only in the case of $\text{pr}\left(y=1\right)=0.5$ , where the ratio was close to one, though higher than in the previous cases. Moreover, we found the same problem as in the case of $n=120$ : the variance of the parameters ${\stackrel{^}{\beta}}_{0}$ and ${\stackrel{^}{\beta}}_{1}$ is large. However, with Firth’s approach all data sets achieved ML convergence; the ratio was better than with the MLE approach, and the bias ${\stackrel{^}{\beta}}_{F}-\beta $ was smaller.

6. Conclusion

Attention has been directed in this work to the asymptotic behaviour of parameter estimation by two methods, MLE and the bias reduction technique, compared with the result obtained from the information matrix. Where the likelihood maximization converged, the modified score function behaved appropriately, indicating that the bias term can be removed from the MLE by the reduction technique. The asymptotic variance of the MLE can behave strangely: in some cases the variance of the parameters was large even though convergence was achieved, reflecting some very large negative values of $\stackrel{^}{\beta}$ , as shown in the results section. We conclude that small sample size and the value of $\text{pr}\left(y=1\right)$ affect the behaviour of parameter estimation when using the MLE; indeed, we found convergence problems for some combinations of sample size and $\text{pr}\left(y=1\right)$ . Firth’s approach gave reasonable results, with the data sets in all cases of sample size and $\text{pr}\left(y=1\right)$ achieving convergence. Overall, the bias reduction technique worked well and behaved reasonably in almost all cases investigated. Moreover, the convergence problem is not the only issue affecting the behaviour of the MLE: even when convergence is achieved, the variance of the parameter estimates can be large.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Cox, D.R. and Hinkley, D.V. (1974) Theoretical Statistics. Chapman and Hall, London.

[2] Firth, D. (1993) Bias Reduction of Maximum Likelihood Estimates. Biometrika, 80, 27-38. https://doi.org/10.1093/biomet/80.1.27

[3] Anderson, J.A. and Richardson, C. (1979) Logistic Discrimination and Bias Correction in Maximum Likelihood Estimation. Technometrics, 21, 71-78.

[4] McCullagh, P. (1986) The Conditional Distribution of Goodness-of-Fit Statistics for Discrete Data. Journal of the American Statistical Association, 81, 104-107. https://doi.org/10.1080/01621459.1986.10478244

[5] Shenton, L.R. and Bowman, K.O. (1977) Maximum Likelihood Estimation in Small Samples. Griffin’s Statistical Monograph No. 38, London.

[6] Nelder, J.A. and Wedderburn, R.W.M. (1972) Generalized Linear Models. Journal of the Royal Statistical Society, Series A, 135, 370-384. https://doi.org/10.2307/2344614

[7] Dobson, A. (1990) An Introduction to Generalized Linear Models. Chapman and Hall, London.

[8] Dobson, A.J. and Barnett, A.G. (2008) An Introduction to Generalized Linear Models. 3rd Edition, Chapman and Hall, New York.

[9] Kleinbaum, D.G. (1994) Logistic Regression: A Self-Learning Text. Springer-Verlag, New York. https://doi.org/10.1007/978-1-4757-4108-7

[10] Hilbe, J.M. (2009) Logistic Regression Models. Chapman and Hall, New York.

[11] Hosmer, D.W. and Lemeshow, S. (2000) Applied Logistic Regression. Wiley, Chichester. https://doi.org/10.1002/0471722146

[12] Hosmer, D., Lemeshow, S. and Sturdivant, R.X. (2013) Applied Logistic Regression. 3rd Edition, Wiley, Chichester. https://doi.org/10.1002/9781118548387

[13] Quenouille, M.H. (1949) Approximate Tests of Correlation in Time-Series. Journal of the Royal Statistical Society: Series B, 11, 68-84.

[14] Quenouille, M.H. (1956) Notes on Bias in Estimation. Biometrika, 43, 353-360. https://doi.org/10.1093/biomet/43.3-4.353

[15] Albert, A. and Anderson, J.A. (1984) On the Existence of Maximum Likelihood Estimates in Logistic Regression Models. Biometrika, 71, 1-10. https://doi.org/10.1093/biomet/71.1.1

[16] Clogg, C.C., Rubin, D.B., Schenker, N., Schultz, B. and Weidman, L. (1991) Multiple Imputation of Industry and Occupation Codes in Census Public-Use Samples Using Bayesian Logistic Regression. Journal of the American Statistical Association, 86, 68-78. https://doi.org/10.1080/01621459.1991.10475005

[17] McCullagh, P. and Nelder, J.A. (1989) Generalized Linear Models. 2nd Edition, Chapman and Hall, London.


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.