An Approximation Method for a Maximum Likelihood Equation System and Application to the Analysis of Accidents Data

There exist many iterative methods for computing the maximum likelihood estimator, but most of them suffer from one or several drawbacks, such as the need to invert a Hessian matrix or the need for good initial approximations of the parameters, which are unknown in practice. In this paper, we present an estimation method without matrix inversion, based on a linear approximation of the likelihood equations in a neighborhood of the constrained maximum likelihood estimator. We obtain closed-form approximations of the solutions and standard errors. We then propose an iterative algorithm which cycles through the components of the vector parameter and updates one component at a time. The initial solution, which is necessary to start the iterative procedure, is automated. The proposed algorithm is compared to some of the best iterative optimization algorithms available in R and MATLAB through a simulation study and applied to the statistical analysis of a road safety measure.

Share and Cite:

N’Guessan, A., Geraldo, I. and Hafidi, B. (2017) An Approximation Method for a Maximum Likelihood Equation System and Application to the Analysis of Accidents Data. Open Journal of Statistics, 7, 132-152. doi: 10.4236/ojs.2017.71011.

1. Introduction

Approximation methods for Maximum Likelihood (ML) systems of equations are of interest and are motivated in this paper by the need for estimation methods that are simple and easy to implement in the specific field of the statistical evaluation of the impact of a road safety measure. In practice, the estimation methods dedicated to this evaluation depend both on the nature of the measure and on the available data. Methods based on the combination of frequencies have received considerable attention, and most of them face the estimation of unknown parameters that are often functionally dependent.

Many approximation methods for maximum likelihood estimation need to solve systems of linear or non-linear equations, with or without constraints. Newton-Raphson’s method and Fisher scoring are certainly the most commonly used approximation methods. Both update the whole parameter vector using the iterative scheme:

${\varphi }^{\left(k+1\right)}={\varphi }^{\left(k\right)}+{\left[M\left({\varphi }^{\left(k\right)}\right)\right]}^{-1}\nabla \mathcal{l}\left({\varphi }^{\left(k\right)}\right)$ (1)

where ${\varphi }^{\left(k\right)}$ is the estimate of the vector parameter at step $k$ , $\mathcal{l}$ is the log-likelihood function, $\nabla \mathcal{l}\left({\varphi }^{\left(k\right)}\right)$ is the gradient of $\mathcal{l}$ and $M\left({\varphi }^{\left(k\right)}\right)$ is the observed or the expected information matrix. Both methods require the computation of second-order partial derivatives and a matrix inversion at each iteration, which can be very costly. Some authors, such as Wang, have proposed quadratic approximations and extended Fisher scoring. The main point is to use quadratic approximations to the log-likelihood function and to optimize these approximations within the constrained parameter space.
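For a single scalar parameter, scheme (1) reduces to a one-dimensional Newton update. As a minimal illustration (a hypothetical Poisson-mean example of our own, not taken from the paper), the update converges to the sample mean:

```python
def newton_mle_poisson(y, lam0=1.0, tol=1e-10, max_iter=100):
    """Scalar instance of scheme (1): lam <- lam + score / information.

    For an i.i.d. Poisson sample, the MLE is the sample mean, so the
    iteration should converge to sum(y) / len(y)."""
    s, n = sum(y), len(y)
    lam = lam0
    for _ in range(max_iter):
        score = s / lam - n        # gradient of the log-likelihood
        info = s / lam ** 2        # observed information (minus second derivative)
        step = score / info
        lam += step
        if abs(step) < tol:
            break
    return lam
```

Here the "matrix" $M$ is a positive scalar, so no inversion issue arises; in higher dimensions each step requires solving a linear system with $M$, which is precisely the cost the method proposed in this paper avoids.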

Within the framework of crash data analysis, different iterative estimation methods have been proposed. For example, Mkhadri et al. propose a Minorization-Maximization (MM) algorithm for the maximum likelihood estimation of the parameter vector of a multinomial distribution modelling crash data. Their MM algorithm cycles through the components of the parameter vector and updates one component at a time, which leads to closed-form expressions of the parameters. They claim that their MM algorithm is simple to implement without any matrix inversion and that constraints are integrated easily.

Despite the above advantages, the choice of the starting value ${\varphi }^{\left(0\right)}$ remains a major issue since a value of ${\varphi }^{\left(0\right)}$ relatively far from the true unknown value of the vector parameter can lead to erroneous solutions or to non-convergence. In addition to this, it must be noted that obtaining explicit expressions of standard errors is not generally easy.

In this paper, we present an estimation method without matrix inversion, based on a linear approximation of the likelihood equations in a neighborhood of the constrained maximum likelihood estimator. We obtain closed-form approximations of the solutions and standard errors. We then propose a partial linear approximation (PLA) algorithm which cycles through the components of the vector parameter and updates one component at a time. The initial solution is automated and standard errors are obtained in closed form. The PLA is compared to some of the best available algorithms in R and MATLAB through a simulation study and applied to real crash data.

The remainder of the paper is organized as follows. Section 2 is devoted to the statistical model and the main assumptions used to get closed-form approximations and standard errors. The proposed estimation method and the method for computing standard errors are presented in Section 3. The general framework of the proposed algorithm is also described. In Section 4, we give an illustration of our results using a crash data model. The numerical performance of the proposed algorithm is examined in Section 5 through a simulation study, while a real-data application is given in Section 6. The appendix is devoted to the technical details of the main results.

2. Statistical Model and Main Assumptions

2.1. Statistical Model

Let $Y=\left({Y}_{11},\cdots ,{Y}_{1r},{Y}_{21},\cdots ,{Y}_{2r}\right)$ be a random vector with $2r\left(r>1\right)$ components and $\pi \left(\varphi \right)=\left({\pi }_{11}\left(\varphi \right),\cdots ,{\pi }_{1r}\left(\varphi \right),{\pi }_{21}\left(\varphi \right),\cdots ,{\pi }_{2r}\left(\varphi \right)\right)$ be a vector of probabilities such that

$\underset{i=1}{\overset{2}{\sum }}\underset{j=1}{\overset{r}{\sum }}{\pi }_{ij}\left(\varphi \right)=1$

where $\varphi$ is a parameter vector. It is assumed that the vector $Y$ has the multinomial distribution $\mathcal{M}\left(n;\pi \left(\varphi \right)\right)$ where $n>0$ is a known integer. The basic principle of the multinomial distribution $\mathcal{M}\left(n;\pi \left(\varphi \right)\right)$ consists in distributing $n$ items among $2r$ categories or classes ( $2r$ being the number of components of vector $\pi \left(\varphi \right)$ ). The probability for an item to fall in a given class is called the class probability, and the sum of all class probabilities equals 1. Here, the class probabilities ${\pi }_{ij}\left(\varphi \right)$ depend on the unknown vector parameter $\varphi$ .

Given a vector of integers $y=\left({y}_{11},\cdots ,{y}_{1r},{y}_{21},\cdots ,{y}_{2r}\right)$ such that

$\underset{i=1}{\overset{2}{\sum }}\underset{j=1}{\overset{r}{\sum }}{y}_{ij}=n,$

the probability function related to $\mathcal{M}\left(n;\pi \left(\varphi \right)\right)$ is defined by

$f\left(y\right)=\frac{n!}{{\prod }_{i=1}^{2}{\prod }_{j=1}^{r}{y}_{ij}!}\underset{i=1}{\overset{2}{\prod }}\underset{j=1}{\overset{r}{\prod }}{\left({\pi }_{ij}\left(\varphi \right)\right)}^{{y}_{ij}}.$ (2)
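For numerical work it is safer to evaluate (2) on the log scale, so the factorials never overflow. A minimal Python sketch (the function name is ours, not from the paper) over the $2r$ classes flattened into one sequence:

```python
import math

def multinomial_logpmf(y, pi):
    """Log of the probability function (2):
    log n! - sum_ij log y_ij! + sum_ij y_ij * log pi_ij,
    with y and pi flattened over the 2r classes."""
    n = sum(y)
    out = math.lgamma(n + 1)  # log n!
    for yij, pij in zip(y, pi):
        out += yij * math.log(pij) - math.lgamma(yij + 1)
    return out
```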

2.2. ML Estimation and Main Assumptions

Assumption 1. The vector parameter $\varphi$ is partitioned as $\varphi =\left(\theta ,\beta \right)$ where $\theta >0$ is a real parameter, $\beta ={\left({\beta }_{1},\cdots ,{\beta }_{r}\right)}^{\text{T}}$ is a vector and ${\beta }_{j}>0$ for all $j=1,\cdots ,r$ .

Assumption 2. The unknown vector $\varphi$ is subject to a linear constraint $C\left(\varphi \right)=0$ where $C$ is a continuously differentiable function from ${ℝ}^{r+1}$ to $ℝ$ .

Let $y=\left({y}_{11},\cdots ,{y}_{1r},{y}_{21},\cdots ,{y}_{2r}\right)\in {ℝ}^{2r}$ be a vector of observed data and $\mathcal{l}\left(\varphi \right)$ be the logarithm of the probability density function $f\left(y\right)$ defined by Equation (2). The constrained maximum likelihood estimator $\stackrel{^}{\varphi }=\left(\stackrel{^}{\theta },\stackrel{^}{\beta }\right)$ is the solution to the optimization problem

$\text{Maximize }\mathcal{l}\left(\varphi \right)\text{ subject to }C\left(\varphi \right)=0.$ (3)

Problem (3) is equivalent to the maximization of function

$\mathcal{L}\left(\varphi ,\lambda \right)=\mathcal{l}\left(\varphi \right)-\lambda C\left(\varphi \right)$ (4)

where $\lambda$ is the Lagrange multiplier. The information matrix linked to the constrained maximum likelihood estimator $\stackrel{^}{\varphi }$ is

${\Gamma }_{\varphi }=\left[\begin{array}{cc}{J}_{\varphi }& {C}_{\varphi }^{\text{T}}\\ {C}_{\varphi }& 0\end{array}\right],\text{ }\text{ }{J}_{\varphi }=\left[\begin{array}{cc}{\tau }_{\varphi }& {U}_{\varphi }^{\text{T}}\\ {U}_{\varphi }& {B}_{\varphi }\end{array}\right]$ (5)

where ${C}_{\varphi }={\left(\partial C/\partial \theta ,\partial C/\partial {\beta }_{1},\cdots ,\partial C/\partial {\beta }_{r}\right)}^{\text{T}}\in {ℝ}^{1+r}$ , ${J}_{\varphi }\in {ℝ}^{1+r}×{ℝ}^{1+r}$ ,

${\tau }_{\varphi }=\text{ }E\text{ }\left(-\frac{{\partial }^{2}\mathcal{l}}{\partial {\theta }^{2}}\right)\in ℝ,\text{}{U}_{\varphi }=\text{ }E\text{ }\left(-\frac{{\partial }^{2}\mathcal{l}}{\partial \theta \partial {\beta }_{1}},\cdots ,-\frac{{\partial }^{2}\mathcal{l}}{\partial \theta \partial {\beta }_{r}}\right)$

and ${B}_{\varphi }$ is an $r×r$ matrix whose entries are $E\left(-{\partial }^{2}\mathcal{l}/\partial {\beta }_{m}\partial {\beta }_{j}\right)$ , $m,j=1,\cdots ,r$ . We also assume that the following conditions hold:

Assumption 3. $\partial C/\partial \theta =0$ and $〈{C}_{\beta },\beta 〉=\kappa \ne 0$ where $〈.,.〉$ is the inner product, $\kappa$ is a constant and ${C}_{\beta }={\left(\partial C/\partial {\beta }_{1},\cdots ,\partial C/\partial {\beta }_{r}\right)}^{\text{T}}\in {ℝ}^{r}$ ;

Assumption 4. For any $\theta >0$ , there exists a non-singular $r×r$ matrix ${\Omega }_{\stackrel{^}{\theta },y}$ such that ${C}_{\stackrel{^}{\beta }}^{\text{T}}{\Omega }_{\stackrel{^}{\theta },y}^{-1}{C}_{\stackrel{^}{\beta }}>0$ and the non-linear system

${\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}-\frac{〈\stackrel{^}{\beta },{\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}〉}{〈\stackrel{^}{\beta },{C}_{\stackrel{^}{\beta }}〉}{C}_{\stackrel{^}{\beta }}={0}_{r}$

is approximated by the linear system

$\left[\begin{array}{cc}{\Omega }_{\stackrel{^}{\theta },y}& {C}_{\stackrel{^}{\beta }}\\ {C}_{\stackrel{^}{\beta }}^{\text{T}}& 0\end{array}\right]\left[\begin{array}{c}\stackrel{^}{\beta }\\ 0\end{array}\right]=\left[\begin{array}{c}{D}_{y}\\ \kappa \end{array}\right]$

where ${0}_{r}={\left(0,\cdots ,0\right)}^{\text{T}}\in {ℝ}^{r}$ and ${D}_{y}$ is a $r×1$ vector whose components are obtained from $y$ .

Assumption 5. There exists a function $g:{ℝ}^{r}\to ℝ$ such that the equation

${\left(\frac{\partial \mathcal{l}}{\partial \theta }\right)}_{\stackrel{^}{\varphi }}=0$ is equivalent to $\stackrel{^}{\theta }=g\left(\stackrel{^}{\beta }\right)$ .

Assumption 6. There exist two strictly positive real numbers ${a}_{n,\varphi }$ and ${b}_{n,\varphi }$ , a non-singular $r×r$ diagonal matrix ${\Sigma }_{\varphi }$ and a vector ${V}_{\varphi }\in {ℝ}^{r}$ such that ${\tau }_{\varphi }={a}_{n,\varphi }^{-1}{b}_{n,\varphi }^{2}$ , ${B}_{\varphi }={a}_{n,\varphi }\left({\Sigma }_{\varphi }+{V}_{\varphi }{V}_{\varphi }^{\text{T}}\right)$ and ${U}_{\varphi }={b}_{n,\varphi }{V}_{\varphi }$ .

Assumption 3 specifies the form of the function $C$ . In particular, $C$ is a function of the sub-vector $\beta$ only. Assumptions 4 and 5 enable us to get $\stackrel{^}{\beta }$ from $\stackrel{^}{\theta }$ and conversely. Assumption 6 enables us to transform the Fisher information matrix in order to use classical results on matrix inversion via the Schur complement.

3. The Estimation Method

3.1. Partial Linear Approximation Principle

The general problem of finding the constrained maximum likelihood estimator has been discussed by many authors. The classical approach is based on a Newton-type algorithm and computes all the components of $\stackrel{^}{\varphi }$ at once. Except in a few simple cases, it is generally not possible to obtain explicit expressions for the components of $\stackrel{^}{\varphi }$ . We have the following lemma (we refer the reader to the appendix for a proof).

Lemma 1. The constrained maximum likelihood estimator $\stackrel{^}{\varphi }$ , provided it exists, is a solution to the non-linear system

${\left(\frac{\partial \mathcal{l}}{\partial \theta }\right)}_{\stackrel{^}{\varphi }}=0\text{ }\text{and}\text{ }{\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}-\frac{〈\stackrel{^}{\beta },{\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}〉}{〈\stackrel{^}{\beta },{C}_{\stackrel{^}{\beta }}〉}{C}_{\stackrel{^}{\beta }}={0}_{r}$ (6)

where ${0}_{r}={\left(0,\cdots ,0\right)}^{\text{T}}\in {ℝ}^{r}$ .

Our approach consists in dividing Equation (6) into two parts: one concerning the first component of $\stackrel{^}{\varphi }$ and the other concerning the sub-vector $\stackrel{^}{\beta }$ .

Theorem 1. Under assumptions 3 - 5, the constrained MLE $\stackrel{^}{\varphi }=\left(\stackrel{^}{\theta },\stackrel{^}{\beta }\right)$ is given by:

$\stackrel{^}{\theta }=g\left(\stackrel{^}{\beta }\right)$

$\stackrel{^}{\beta }={\Omega }_{\stackrel{^}{\theta },y}^{-1}{D}_{y}.$

The non-obvious part of the proof consists in determining $\stackrel{^}{\beta }$ by inverting the $\left(r+1\right)×\left(r+1\right)$ matrix of the linear system in Assumption 4. This result, based on the inversion of partitioned matrices, will not be demonstrated in this paper; we refer the reader to classical papers on the Schur complement.

From Theorem 1, it is seen that the MLE $\stackrel{^}{\varphi }=\left(\stackrel{^}{\theta },\stackrel{^}{\beta }\right)$ is a fixed point of the function from ${ℝ}^{r+1}$ to itself defined by $\left(\theta ,\beta \right)↦\left(g\left(\beta \right),{\Omega }_{\theta ,y}^{-1}{D}_{y}\right)$ . We can then build an iterative algorithm to estimate $\varphi$ . The classical fixed point method, which consists in simultaneously updating $\stackrel{^}{\theta }$ and $\stackrel{^}{\beta }$ , may be hard to implement because of the link between $\stackrel{^}{\theta }$ and $\stackrel{^}{\beta }$ . We propose instead to alternate between updating $\stackrel{^}{\theta }$ with $\stackrel{^}{\beta }$ held fixed and vice-versa. Starting from a given ${\theta }^{\left(0\right)}$ , we compute ${\beta }^{\left(0\right)}$ , then ${\theta }^{\left(1\right)}$ and ${\beta }^{\left(1\right)}$ , and so on. The process is repeated until a stopping criterion is satisfied. For example, we can stop the iterations when successive values of the log-likelihood satisfy the condition $|\mathcal{l}\left({\varphi }^{\left(k+1\right)}\right)-\mathcal{l}\left({\varphi }^{\left(k\right)}\right)|<ϵ$ where $ϵ>0$ .
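The alternation just described can be sketched generically. The two maps below are hypothetical contractions chosen only so that the scheme provably converges to its fixed point $\left(4/3,2/3\right)$; they stand in for the paper's $g$ and the linear solve with ${\Omega }_{\theta ,y}$:

```python
def alternate(g, h, theta0, tol=1e-12, max_iter=1000):
    """Alternate beta = h(theta) and theta = g(beta) until theta stabilizes."""
    theta = theta0
    beta = h(theta)
    for _ in range(max_iter):
        theta_new = g(beta)
        beta = h(theta_new)
        if abs(theta_new - theta) < tol:
            return theta_new, beta
        theta = theta_new
    return theta, beta

# Toy contraction with fixed point (4/3, 2/3); not the paper's functions.
theta_hat, beta_hat = alternate(lambda b: 1 + 0.5 * b, lambda t: 0.5 * t, theta0=0.0)
```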

The estimation process may be completed by the computation of standard errors with Theorem 2 below.

Theorem 2. Under assumptions 3 - 6, the asymptotic variances of the components of $\stackrel{^}{\varphi }$ are

${\stackrel{^}{\sigma }}^{2}\left(\stackrel{^}{\theta }\right)={a}_{n,\stackrel{^}{\varphi }}\text{ }{b}_{n,\stackrel{^}{\varphi }}^{-2}\left(1+{‖{V}_{\stackrel{^}{\varphi }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{2}-{‖{C}_{\stackrel{^}{\beta }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{-2}{\xi }_{\stackrel{^}{\varphi }}^{2}\right)$ (7)

and

${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)={a}_{n,\stackrel{^}{\varphi }}^{-1}\left({\Sigma }_{\stackrel{^}{\varphi },j}^{-1}-{‖{C}_{\stackrel{^}{\beta }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{-2}×{\left({\Sigma }_{\stackrel{^}{\varphi },j}^{-1}×{C}_{\stackrel{^}{\beta },j}\right)}^{2}\right),\text{}j=1,\cdots ,r$ (8)

where ${\xi }_{\varphi }={V}_{\varphi }^{\text{T}}{\Sigma }_{\varphi }^{-1}{C}_{\beta }>0$ and the real values ${\Sigma }_{\varphi ,j}^{-1}$ (resp. ${C}_{\beta ,j}$ ), $j=1,\cdots ,r$ , are the diagonal elements (resp. components) of matrix ${\Sigma }_{\varphi }^{-1}$ (resp. of vector ${C}_{\beta }$ ).

The proof is given in the appendix. It stems from the results of N’Guessan and Langrand.

3.2. General Framework of the Partial Linear Approximation Algorithm

Algorithm 1 (The partial linear approximation algorithm).

Step 0 (Initialization) Given ${\theta }^{\left(0\right)}$ , $ϵ>0$ , ${D}_{y}$ , compute ${\beta }^{\left(0\right)}={\Omega }_{{\theta }^{\left(0\right)},y}^{-1}{D}_{y}$ .

Step 1 (Loop for computing $\stackrel{^}{\varphi }$ ) For a given $k\ge 0$ ,

a) Compute ${\theta }^{\left(k+1\right)}=g\left({\beta }^{\left(k\right)}\right)$ and ${\beta }^{\left(k+1\right)}={\Omega }_{{\theta }^{\left(k+1\right)},y}^{-1}{D}_{y}$ .

b) If $|\mathcal{l}\left({\varphi }^{\left(k+1\right)}\right)-\mathcal{l}\left({\varphi }^{\left(k\right)}\right)|\ge ϵ$ , then replace $k$ by $k+1$ and return to Step 1.

Else, set $\stackrel{^}{\theta }={\theta }^{\left(k+1\right)}$ , $\stackrel{^}{\beta }={\beta }^{\left(k+1\right)}$ and go to Step 2.

Step 2 (Computation of standard errors)

a) Compute ${\stackrel{^}{\sigma }}^{2}\left(\stackrel{^}{\theta }\right)={a}_{n,\stackrel{^}{\varphi }}{b}_{n,\stackrel{^}{\varphi }}^{-2}\left(1+{‖{V}_{\stackrel{^}{\varphi }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{2}-{‖{C}_{\stackrel{^}{\beta }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{-2}\text{ }{\xi }_{\stackrel{^}{\varphi }}^{2}\right)$ .

b) For $j=1,\cdots ,r$ , compute ${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)={a}_{n,\stackrel{^}{\varphi }}^{-1}\left({\Sigma }_{\stackrel{^}{\varphi },j}^{-1}-{‖{C}_{\stackrel{^}{\beta }}‖}_{{\Sigma }_{\stackrel{^}{\varphi }}^{-1}}^{-2}×{\left({\Sigma }_{\stackrel{^}{\varphi },j}^{-1}×{C}_{\stackrel{^}{\beta },j}\right)}^{2}\right)$ .

The aim of this paper is not to conduct a theoretical study of the convergence of the proposed algorithm. We rather focus on the numerical aspect of this convergence through an application model. Nevertheless, we note that the estimation of $\stackrel{^}{\varphi }$ using our algorithm does not require any matrix inversion. We thus expect the computation of the constrained maximum likelihood estimator $\stackrel{^}{\varphi }$ to be faster.

4. Application to the Combination of Crash Data

4.1. Statistical Model

We apply the above algorithm to estimate the parameters of a statistical model used to assess the effect of a road safety measure applied to an experimental site presenting $r>0$ mutually exclusive accident types (fatal accidents, seriously injured people, slightly injured people, material damage, etc.) over a fixed period. Let us consider the random vector $Y=\left({Y}_{11},\cdots ,{Y}_{1r},{Y}_{21},\cdots ,{Y}_{2r}\right)$ where ${Y}_{1j}$ and ${Y}_{2j}\left(j=1,\cdots ,r\right)$ respectively represent the number of accidents of type $j$ registered on the experimental site before and after the application of the road safety measure. In order to take into account some external factors (such as traffic flow, speed limit variation, weather conditions, regression to the mean effect, etc.), the experimental site is linked to a control area where the safety measure was not directly applied. The accident data for the control area over the same period are given by the non-random vector $Z={\left({z}_{1},\cdots ,{z}_{r}\right)}^{\text{T}}$ where ${z}_{j}$ denotes the ratio of the number of accidents of type $j$ registered in the period after to the number registered in the period before. Following N’Guessan et al., we assume that the vector $Y$ has the multinomial distribution

$Y\sim M\left(n,\pi \left(\varphi \right)\right)$

where $n$ is the total number of crashes recorded at the experimental site and $\pi =\left({\pi }_{11},\cdots ,{\pi }_{1r},{\pi }_{21},\cdots ,{\pi }_{2r}\right)$ is a vector of class probabilities whose components are

${\pi }_{ij}\left(\varphi \right)=\{\begin{array}{ll}\frac{{\beta }_{j}}{1+\theta {\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}}& \text{if }i=1;\text{ }j=1,\cdots ,r\\ \frac{\theta {\beta }_{j}{\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}}{1+\theta {\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}}& \text{if }i=2;\text{ }j=1,\cdots ,r.\end{array}$ (9)

By construction, the parameter vector $\varphi =\left(\theta ,\beta \right)$ of this model satisfies the conditions $\theta >0$ , ${\beta }_{j}>0$ and the linear constraint

$C\left(\varphi \right)=0,\text{ }\text{with}\text{ }C\left(\varphi \right)=〈{1}_{r},\beta 〉-1$ (10)

where ${1}_{r}=\left(1,\cdots ,1\right)\in {ℝ}^{r}$ . The scalar $\theta$ denotes the unknown average effect of the road safety measure, while each ${\beta }_{j}\left(j=1,\cdots ,r\right)$ denotes the risk of having an accident of type $j$ on a site having the same characteristics as the experimental site. This model is a special case of the multinomial model proposed by N’Guessan et al., which was applied simultaneously to several sites.

4.2. Cyclic Estimation of the Average Effect and the Different Accident Risks

The log-likelihood is specified up to an additive constant by

$\mathcal{l}\left(\varphi \right)=\underset{m=1}{\overset{r}{\sum }}\left[{y}_{.m}\mathrm{log}\left({\beta }_{m}\right)+{y}_{2m}\mathrm{log}\left(\theta \right)-{y}_{.m}\mathrm{log}\left(1+\theta \underset{j=1}{\overset{r}{\sum }}{z}_{j}{\beta }_{j}\right)+{y}_{2m}\mathrm{log}\left(\underset{j=1}{\overset{r}{\sum }}{z}_{j}{\beta }_{j}\right)\right]$ (11)

where ${y}_{.m}={y}_{1m}+{y}_{2m}$ . Different iterative methods can be used to compute the constrained MLE $\stackrel{^}{\varphi }$ . Most of them look for stationary points of the Lagrangian

$\mathcal{L}\left(\varphi ,\lambda \right)=\mathcal{l}\left(\varphi \right)-\lambda \left(\underset{m=1}{\overset{r}{\sum }}{\beta }_{m}-1\right).$

N’Guessan et al. showed that a stationary point of $\mathcal{L}\left(\varphi ,\lambda \right)$ must be a solution to the following system of non-linear equations:

$\left\{\begin{array}{l}\underset{m=1}{\overset{r}{\sum }}\left({y}_{2m}-{y}_{.m}\frac{\stackrel{^}{\theta }\stackrel{^}{E}\left(Z\right)}{1+\stackrel{^}{\theta }\stackrel{^}{E}\left(Z\right)}\right)=0\hfill \\ {y}_{.j}-\frac{n{\stackrel{^}{\beta }}_{j}\left(1+\stackrel{^}{\theta }{z}_{j}\right)}{1+\stackrel{^}{\theta }\stackrel{^}{E}\left(Z\right)}-\frac{{y}_{2.}{\stackrel{^}{\beta }}_{j}\left(\stackrel{^}{E}\left(Z\right)-{z}_{j}\right)}{\stackrel{^}{E}\left(Z\right)}=0,\text{}j=1,\cdots ,r\hfill \\ \stackrel{^}{\theta }>0,\text{}{\stackrel{^}{\beta }}_{j}>0,\text{}j=1,\cdots ,r\hfill \end{array}$ (12)

where ${y}_{2.}={\sum }_{m=1}^{r}{y}_{2m}$ and $\stackrel{^}{E}\left(Z\right)={\sum }_{m=1}^{r}{\stackrel{^}{\beta }}_{m}{z}_{m}$ .

The main idea proposed in this paper consists in neglecting the term $\left[{y}_{2.}{\stackrel{^}{\beta }}_{j}\left(\stackrel{^}{E}\left(Z\right)-{z}_{j}\right)\right]/\stackrel{^}{E}\left(Z\right)$ so that we can write the remaining equations

${y}_{.j}-\frac{n{\stackrel{^}{\beta }}_{j}\left(1+\stackrel{^}{\theta }{z}_{j}\right)}{1+\stackrel{^}{\theta }\stackrel{^}{E}\left(Z\right)}=0,\text{ }j=1,\cdots ,r$

as the linear system of equations

${\Omega }_{\stackrel{^}{\theta },y}\stackrel{^}{\beta }={D}_{y}$ (13)

where ${\Omega }_{\stackrel{^}{\theta },y}={M}_{\stackrel{^}{\theta }}-\stackrel{^}{\theta }{D}_{y}{Z}^{\text{T}}$ , ${M}_{\stackrel{^}{\theta }}=\text{diag}\left(1+\stackrel{^}{\theta }{z}_{1},\cdots ,1+\stackrel{^}{\theta }{z}_{r}\right)$ is a diagonal $r×r$ matrix and

${D}_{y}={\left(\frac{{y}_{11}+{y}_{21}}{n},\cdots ,\frac{{y}_{1r}+{y}_{2r}}{n}\right)}^{\text{T}}\in {ℝ}^{r}.$

Drawing our inspiration from the work of N’Guessan, we show that the linear system (13) has the vector $\stackrel{^}{\beta }={\left(1-\stackrel{^}{\theta }{Z}^{\text{T}}{M}_{\stackrel{^}{\theta }}^{-1}{D}_{y}\right)}^{-1}{M}_{\stackrel{^}{\theta }}^{-1}{D}_{y}$ as its unique solution. One shows that

$\left(1-\stackrel{^}{\theta }{Z}^{\text{T}}{M}_{\stackrel{^}{\theta }}^{-1}{D}_{y}\right)=1-\frac{1}{n}\underset{m=1}{\overset{r}{\sum }}\frac{\stackrel{^}{\theta }{z}_{m}{y}_{\cdot m}}{1+\stackrel{^}{\theta }{z}_{m}}.$

The components of the constrained MLE $\stackrel{^}{\varphi }$ can then be computed as follows.

Corollary 1. The components of $\stackrel{^}{\varphi }=\left(\stackrel{^}{\theta },\stackrel{^}{\beta }\right)$ are given by

$\stackrel{^}{\theta }=\frac{{\sum }_{m=1}^{r}{y}_{2m}}{\left({\sum }_{m=1}^{r}{\stackrel{^}{\beta }}_{m}{z}_{m}\right)×\left({\sum }_{m=1}^{r}{y}_{1m}\right)}$ (14)

${\stackrel{^}{\beta }}_{j}=\frac{1}{1-{\Delta }_{n}\left(\stackrel{^}{\theta }\right)}×\frac{1}{1+\stackrel{^}{\theta }{z}_{j}}×\frac{{y}_{.j}}{n},\text{}j=1,\cdots ,r$ (15)

where ${\Delta }_{n}\left(\stackrel{^}{\theta }\right)=\frac{1}{n}\underset{m=1}{\overset{r}{\sum }}\frac{\stackrel{^}{\theta }{z}_{m}{y}_{\cdot m}}{1+\stackrel{^}{\theta }{z}_{m}}$ .

Applying Theorem 2, we can give the asymptotic variances of the constrained MLE $\stackrel{^}{\varphi }$ .

Corollary 2. The asymptotic variances of the components of the constrained maximum likelihood estimator $\stackrel{^}{\varphi }$ are given by

${\stackrel{^}{\sigma }}^{2}\left(\stackrel{^}{\theta }\right)=\frac{1}{n\stackrel{^}{E}\left(Z\right)}\stackrel{^}{\theta }+\frac{1}{n}\left(1+\frac{\stackrel{^}{E}\left({Z}^{2}\right)}{{\left(\stackrel{^}{E}\left(Z\right)\right)}^{2}}\right){\stackrel{^}{\theta }}^{2}+\frac{\stackrel{^}{E}\left(Z\right)}{n}{\stackrel{^}{\theta }}^{3}$ (16)

${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)=\frac{1}{n}{\stackrel{^}{\beta }}_{j}\left(1-{\stackrel{^}{\beta }}_{j}\right),\text{}j=1,\cdots ,r$ (17)

where $\stackrel{^}{E}\left(Z\right)={\sum }_{m=1}^{r}{\stackrel{^}{\beta }}_{m}{z}_{m}$ and $\stackrel{^}{E}\left({Z}^{2}\right)={\sum }_{m=1}^{r}{\stackrel{^}{\beta }}_{m}{z}_{m}^{2}$ .

The technical proof uses the Schur complement approach. One shows (see the appendix) that the elements of the asymptotic information matrix ${\Gamma }_{\varphi }$ linked to $\stackrel{^}{\varphi }$ are

${C}_{\varphi }={\left(0,1,\cdots ,1\right)}^{\text{T}}\in {ℝ}^{1+r},\text{}{\tau }_{\varphi }=\frac{{\gamma }^{2}E\left(Z\right)}{n\theta },\text{}{U}_{\varphi }=\frac{{\gamma }^{2}}{n}Z,\text{}{B}_{\varphi }=\gamma \left({\Sigma }_{\varphi }+{V}_{\varphi }{V}_{\varphi }^{\text{T}}\right)$ (18)

where

${V}_{\varphi }={\left(\frac{\theta \gamma }{nE\left(Z\right)}\right)}^{1/2}Z,\text{}{\Sigma }_{\varphi }=\frac{n}{\gamma }×\text{diag}\left(\frac{1}{{\beta }_{1}},\cdots ,\frac{1}{{\beta }_{r}}\right),\text{}\gamma =\frac{n}{1+\theta E\left(Z\right)}.$ (19)

Setting ${a}_{n,\varphi }=\gamma$ and ${b}_{n,\varphi }^{2}=\left({\gamma }^{3}E\left(Z\right)\right)/\left(n\theta \right)$ , we show that Assumption 6 is satisfied. We then apply Theorem 2 to get the results of Corollary 2.

4.3. Practical Aspect of the Partial Linear Approximation Algorithm

Algorithm 2.

Step 0 (Initialization) Given $ϵ>0$ and ${D}_{y}$ , set ${\theta }^{\left(0\right)}=0$ and compute ${\beta }^{\left(0\right)}={D}_{y}$ .

Step 1 (Loop for computing $\stackrel{^}{\varphi }$ ) For a given $k\ge 0$ ,

a) Compute ${\theta }^{\left(k+1\right)}=\frac{{\sum }_{m=1}^{r}{y}_{2m}}{\left({\sum }_{m=1}^{r}{\beta }_{m}^{\left(k\right)}{z}_{m}\right)×\left({\sum }_{m=1}^{r}{y}_{1m}\right)}$

b) For $j=1,\cdots ,r$ , compute

${\beta }_{j}^{\left(k+1\right)}=\frac{1}{1-\frac{1}{n}\underset{m=1}{\overset{r}{\sum }}\frac{{\theta }^{\left(k+1\right)}{z}_{m}{y}_{.m}}{1+{\theta }^{\left(k+1\right)}{z}_{m}}}×\frac{1}{1+{\theta }^{\left(k+1\right)}{z}_{j}}×\frac{{y}_{.j}}{n}.$

c) If $|\mathcal{l}\left({\varphi }^{\left(k+1\right)}\right)-\mathcal{l}\left({\varphi }^{\left(k\right)}\right)|\ge ϵ$ , then replace $k$ by $k+1$ and return to Step 1.

Else, set $\stackrel{^}{\theta }={\theta }^{\left(k+1\right)}$ , $\stackrel{^}{\beta }={\beta }^{\left(k+1\right)}$ and go to Step 2.

Step 2 (Computation of standard errors)

a) Compute ${\stackrel{^}{\sigma }}^{2}\left(\stackrel{^}{\theta }\right)=\frac{1}{n\stackrel{^}{E}\left(Z\right)}\stackrel{^}{\theta }+\frac{1}{n}\left(1+\frac{\stackrel{^}{E}\left({Z}^{2}\right)}{{\left(\stackrel{^}{E}\left(Z\right)\right)}^{2}}\right){\stackrel{^}{\theta }}^{2}+\frac{\stackrel{^}{E}\left(Z\right)}{n}{\stackrel{^}{\theta }}^{3}$ .

b) For $j=1,\cdots ,r$ , compute ${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)=\frac{1}{n}{\stackrel{^}{\beta }}_{j}\left(1-{\stackrel{^}{\beta }}_{j}\right)$ .
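Combining Equations (11), (14) and (15) with Step 2, Algorithm 2 can be written in a few lines. The following Python sketch is our illustrative implementation, not the authors' code; it assumes `y1`, `y2` and `z` are sequences of length $r$:

```python
import math

def pla_crash(y1, y2, z, eps=1e-5, max_iter=500):
    """Sketch of Algorithm 2: cyclic updates (14)-(15) with the
    log-likelihood (11) as stopping criterion, then the variances
    of Corollary 2."""
    r, n = len(z), sum(y1) + sum(y2)
    ydot = [a + b for a, b in zip(y1, y2)]           # y_.j = y_1j + y_2j
    beta = [yj / n for yj in ydot]                   # Step 0: theta^(0)=0, beta^(0)=D_y
    prev_ll = float("-inf")
    for _ in range(max_iter):
        ez = sum(b * zj for b, zj in zip(beta, z))   # E_hat(Z) from current beta
        theta = sum(y2) / (ez * sum(y1))             # Equation (14)
        delta = sum(theta * z[j] * ydot[j] / (1 + theta * z[j])
                    for j in range(r)) / n           # Delta_n(theta)
        beta = [ydot[j] / (n * (1 - delta) * (1 + theta * z[j]))
                for j in range(r)]                   # Equation (15)
        ez = sum(b * zj for b, zj in zip(beta, z))
        ll = sum(ydot[m] * math.log(beta[m]) + y2[m] * math.log(theta)
                 - ydot[m] * math.log(1 + theta * ez) + y2[m] * math.log(ez)
                 for m in range(r))                  # Equation (11)
        if abs(ll - prev_ll) < eps:
            break
        prev_ll = ll
    ez2 = sum(b * zj ** 2 for b, zj in zip(beta, z))
    var_theta = (theta / (n * ez)
                 + (1 + ez2 / ez ** 2) * theta ** 2 / n
                 + ez * theta ** 3 / n)              # Equation (16)
    var_beta = [b * (1 - b) / n for b in beta]       # Equation (17)
    return theta, beta, var_theta, var_beta
```

By construction the returned vector sums exactly to 1, since $\frac{1}{n}{\sum }_{j}{y}_{.j}/\left(1+\stackrel{^}{\theta }{z}_{j}\right)=1-{\Delta }_{n}\left(\stackrel{^}{\theta }\right)$, so the constraint (10) is satisfied at every iteration.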

The partial linear approximation algorithm for computing the constrained maximum likelihood estimator $\stackrel{^}{\varphi }$ of the model presented in subsection 4.1 stems from the cyclic algorithm of N’Guessan and Geraldo. The PLA proceeds as follows: Step 1 estimates $\stackrel{^}{\varphi }$ by alternating between its two components $\stackrel{^}{\theta }$ and $\stackrel{^}{\beta }$ . To start the procedure, we initialize ${\theta }^{\left(0\right)}$ , then compute ${\beta }_{j}^{\left(0\right)}\left(j=1,\cdots ,r\right)$ and define ${\varphi }^{\left(0\right)}=\left({\theta }^{\left(0\right)},{\beta }^{\left(0\right)}\right)$ . Alternatively, we could initialize ${\beta }^{\left(0\right)}$ using the problem’s data and deduce ${\theta }^{\left(0\right)}$ . The process is repeated until the stopping criterion is satisfied. We note that our algorithm is automated and can be started as soon as the problem’s data ${D}_{y}$ are entered.

We can also note that the second partial derivatives of the log-likelihood function are no longer used in our algorithm. The aim of this paper is not to carry out a theoretical study of the convergence of the proposed algorithm. We rather focus on the numerical aspect of this convergence using simulated datasets.

5. Numerical Results with Simulated Datasets

5.1. Data Simulation Principle

For a given value of $r$ (the number of crash types), we generate the components of vector $Z={\left({z}_{1},\cdots ,{z}_{r}\right)}^{\text{T}}$ from a uniform random variable $U\left(\frac{1}{2},\frac{5}{2}\right)$ .

The true value of $\theta$ , denoted ${\theta }^{0}$ , is fixed, and the true value of $\beta$ , denoted ${\beta }^{0}={\left({\beta }_{1}^{0},\cdots ,{\beta }_{r}^{0}\right)}^{\text{T}}$ and such that ${\sum }_{j=1}^{r}{\beta }_{j}^{0}=1$ , comes from a uniform random variable $U\left(ϵ,1-ϵ\right)$ where $ϵ={10}^{-5}$ . Using those values, we compute the true probabilities

${\pi }_{1j}\left({\varphi }^{0}\right)=\frac{{\beta }_{j}^{0}}{1+{\theta }^{0}{\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}^{0}},\text{}{\pi }_{2j}\left({\varphi }^{0}\right)=\frac{{\theta }^{0}{\beta }_{j}^{0}{\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}^{0}}{1+{\theta }^{0}{\sum }_{m=1}^{r}{z}_{m}{\beta }_{m}^{0}},\text{}j=1,\cdots ,r$

linked to the multinomial distribution presented in subsection 4.1. Finally, one generates the total number $n$ of crashes from a Poisson distribution and then randomly allocates it between the before and after periods using the probabilities ${\pi }_{1j}\left({\varphi }^{0}\right)$ and ${\pi }_{2j}\left({\varphi }^{0}\right)$ . This yields the observed values ${y}_{ij}$ , which satisfy ${\sum }_{i=1}^{2}{\sum }_{j=1}^{r}{y}_{ij}=n$ .
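The steps above can be sketched with the Python standard library only. The Poisson generator (Knuth's multiplicative method, suitable for moderate means) and the normalisation of ${\beta }^{0}$ to sum to 1 are our own simplifying choices, not details from the paper:

```python
import math
import random

def simulate_crash_data(r, theta0, lam, seed=0):
    """Simulate one dataset following the scheme of subsection 5.1."""
    rng = random.Random(seed)
    z = [rng.uniform(0.5, 2.5) for _ in range(r)]
    u = [rng.uniform(1e-5, 1 - 1e-5) for _ in range(r)]
    beta0 = [uj / sum(u) for uj in u]        # true beta, normalised to sum to 1
    ez = sum(b * zj for b, zj in zip(beta0, z))
    denom = 1 + theta0 * ez
    pi = ([b / denom for b in beta0]
          + [theta0 * b * ez / denom for b in beta0])   # class probabilities (9)
    # n ~ Poisson(lam), Knuth's multiplicative method (moderate lam only)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            n = k
            break
        k += 1
    # allocate the n crashes among the 2r classes with probabilities pi
    cum = [sum(pi[:i + 1]) for i in range(2 * r)]
    cum[-1] = 1.0                            # guard against float rounding
    y = [0] * (2 * r)
    for _ in range(n):
        v = rng.random()
        for i, c in enumerate(cum):
            if v <= c:
                y[i] += 1
                break
    return z, beta0, pi, n, y
```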

5.2. Numerical Results

This subsection deals with the numerical convergence of the partial linear approximation algorithm. As usual in the study of iterative algorithms, we analyse the influence of the initial solution ${\varphi }^{\left(0\right)}=\left({\theta }^{\left(0\right)},{\beta }^{\left(0\right)}\right)$ , the number of iterations, the computation time (CPU time) and the mean squared error. The performances of the partial linear approximation algorithm are compared to those of some classical optimization methods available in R and MATLAB software. The computations presented in this section were executed on a PC with an AMD E-350 Processor 1.6 GHz CPU.

The methods selected for comparison are Newton-Raphson’s method, Nelder-Mead’s (NM) simplex algorithm  , the quasi-Newton BFGS algorithm (named after its authors Broyden, Fletcher, Goldfarb and Shanno     ), the Interior Point (IP) algorithm  , the Levenberg-Marquardt (LM) algorithm   and Trust Region (TR) algorithms  . In our work, the BFGS and NM algorithms are implemented using the package developed by Varadhan  .

The simulation process was performed on many simulated crash datasets. For each one, small and large values of $n$ were considered. The results presented here correspond to the case $r=3$ , $n\in \left\{50\text{ };\text{ }5000\right\}$ , ${\varphi }^{0}=\left({\theta }^{0},{\beta }^{0}\right)$ with ${\theta }^{0}=0.6$ and ${\beta }^{0}=\left(0.025,0.232,0.743\right)$ .

Three different ways of setting ${\beta }^{\left(0\right)}$ were considered:

1) Uniform initialisation: ${\beta }_{1}^{\left(0\right)}={\beta }_{2}^{\left(0\right)}=\cdots ={\beta }_{r}^{\left(0\right)}=1/r$ .

2) Proportional initialisation: ${\beta }_{j}^{\left(0\right)}=\frac{{y}_{1j}+{y}_{2j}}{n}$ , $j=1,\cdots ,r$ .

3) Random initialisation: for $j=1,\cdots ,r$ , ${\beta }_{j}^{\left(0\right)}={u}_{j}/\left({\sum }_{i=1}^{r}{u}_{i}\right)$ where each ${u}_{i}\left(i=1,\cdots ,r\right)$ is randomly generated from a uniform distribution $U\left(0.05,0.95\right)$ .
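The three initialisation schemes translate directly into code; the Python sketch below (NumPy assumed; the function names are illustrative, not from the paper) makes them explicit, with `y1` and `y2` denoting the before- and after-period count vectors:

```python
import numpy as np

def init_uniform(r):
    # 1) uniform initialisation: beta_j^(0) = 1/r
    return np.full(r, 1.0 / r)

def init_proportional(y1, y2):
    # 2) proportional initialisation: beta_j^(0) = (y_1j + y_2j) / n
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    return (y1 + y2) / (y1.sum() + y2.sum())

def init_random(r, rng):
    # 3) random initialisation: normalised draws from U(0.05, 0.95)
    u = rng.uniform(0.05, 0.95, size=r)
    return u / u.sum()
```

All three return a point of the simplex, i.e. a nonnegative vector whose components sum to 1, as required by the constraint on $\beta$.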

By combining the two values of $n$ and the three ways to initialize ${\beta }^{\left(0\right)}$ , we get six scenarios and for each one, we performed 1000 replications. The stopping criterion is the condition $|\mathcal{l}\left({\varphi }^{\left(k+1\right)}\right)-\mathcal{l}\left({\varphi }^{\left(k\right)}\right)|<{10}^{-5}$ .
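The outer loop built around this stopping rule can be sketched as follows. This is an illustrative Python skeleton, not the authors' implementation: `update` stands in for one full cycle of component updates of the partial linear approximation step (described earlier in the paper) and `loglik` for the log-likelihood $\mathcal{l}$.

```python
def iterate_until_converged(phi0, update, loglik, tol=1e-5, max_iter=500):
    """Generic outer loop using the paper's stopping rule
    |l(phi^(k+1)) - l(phi^(k))| < tol; `update` is a placeholder for
    one cycle of component updates."""
    phi, ll = phi0, loglik(phi0)
    for k in range(1, max_iter + 1):
        phi = update(phi)
        ll_new = loglik(phi)
        if abs(ll_new - ll) < tol:
            return phi, k
        ll = ll_new
    return phi, max_iter

# toy usage: a contraction toward 2 with loglik(x) = -(x - 2)^2
phi, iters = iterate_until_converged(0.0, lambda x: (x + 2.0) / 2.0,
                                     lambda x: -(x - 2.0) ** 2)
```

The stopping rule monitors the change in the objective rather than in the parameters, so it is insensitive to the scale of the individual components of $\varphi$.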

Tables 1-6 present a selection of the results obtained over the 6000 simulations. All computation times are given in seconds and the duration ratio of an

Table 1. Results for uniform initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=50$ .

Table 2. Results for uniform initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=5000$ .

Table 3. Results for proportional initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=50$ .

Table 4. Results for proportional initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=5000$ .

algorithm is defined as the ratio between the mean computation time of that algorithm and the mean computation time of the PLA (so the duration ratio of the partial linear approximation algorithm always equals 1). The computation time depends on the computer used to perform the simulations, whereas the duration ratio is machine-independent and therefore more informative.

To analyse the convergence, we used the mean squared error (MSE) defined as:

Table 5. Results for random initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=50$ .

Table 6. Results for random initialisation of ${\beta }^{\left(0\right)}$ , $r=3$ , $n=5000$ .

$\text{MSE}\left(\stackrel{^}{\varphi },{\varphi }^{0}\right)=\frac{1}{1+r}\left({\left(\stackrel{^}{\theta }-{\theta }^{0}\right)}^{2}+\underset{j=1}{\overset{r}{\sum }}{\left({\stackrel{^}{\beta }}_{j}-{\beta }_{j}^{0}\right)}^{2}\right)$ (20)
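Equation (20) translates directly into code; the sketch below (NumPy assumed, names illustrative) is a straightforward transcription:

```python
import numpy as np

def mse(theta_hat, beta_hat, theta0, beta0):
    """Mean squared error of Equation (20): average squared error over
    the 1 + r components of phi = (theta, beta_1, ..., beta_r)."""
    beta_hat, beta0 = np.asarray(beta_hat, float), np.asarray(beta0, float)
    r = beta0.size
    return ((theta_hat - theta0) ** 2 + ((beta_hat - beta0) ** 2).sum()) / (1 + r)

# example with the true values used in the simulation study
err = mse(0.61, [0.03, 0.23, 0.74], 0.6, [0.025, 0.232, 0.743])
```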

One can see that the MSE decreases when $n$ (the total number of road crashes) increases. The PLA is at least as efficient as the other algorithms. It always converges to reasonable parameter vector estimates, and the estimate gets closer to the true value as $n$ increases.

For a small value of $n$ (see Table 1, Table 3 and Table 5), the estimate $\stackrel{^}{\varphi }$ produced by the PLA is relatively close to the true parameter vector and quite close to those of the other methods. More generally, all the compared methods have an MSE of order ${10}^{-2}$ . However, the estimates produced by the BFGS and Nelder-Mead methods are very far from the true values (see Table 5). In the case of random initialisation, the MSE of the Nelder-Mead and BFGS algorithms is 20 times greater than that of the other algorithms.

When $n$ increases $\left(n=5000\right)$ , the MSE of the PLA decreases from ${10}^{-2}$ to ${10}^{-4}$ , so the PLA is also efficient. Unsurprisingly, when $n$ is very large the estimates produced by the other algorithms are closer to the true values than those of the PLA. This is expected because the other algorithms work with the exact gradient. However, the Nelder-Mead and BFGS algorithms produce estimates very far from the truth: their MSE is of order ${10}^{-2}$ (Table 4 and Table 6) while that of the PLA and the other methods is of order ${10}^{-4}$ .

To analyse the influence of the initial guess, we considered the mean number of iterations and the amplitude of the iterations (i.e. the difference between the maximum and the minimum number of iterations). An increase in the amplitude of iterations suggests a greater influence of the initial solution ${\varphi }^{\left(0\right)}$ . The results given in Tables 1-6 suggest that the PLA is stable and robust to initial guesses of the parameter being estimated. Over the 6000 replications, the number of iterations needed by the PLA to converge lies between 2 and 4. In other words, setting the initial guess to 6000 different values chosen in the parameter space does not disturb the PLA. This performance is as good as those of Newton-Raphson and Levenberg-Marquardt and by far better than those of BFGS, Nelder-Mead and Interior Point, whose numbers of iterations vary respectively from 1 to 15, 1 to 21 and 3 to 31.

As far as computation time is concerned, all the CPU time ratios are greater than 1, which means that none of the compared algorithms is faster than the PLA. On average, the PLA is 4 to 5 times quicker than Newton-Raphson, 239 to 345 times quicker than the BFGS algorithm, 354 to 395 times quicker than Nelder-Mead, 27 to 31 times quicker than Levenberg-Marquardt, 33 to 39 times quicker than Trust Region algorithms and 311 to 383 times quicker than Interior Point algorithms. This can be an important factor when larger values of $r$ are considered.

6. Real-Data Analysis

We apply the partial linear approximation algorithm to data concerning changes applied to road markings on a rural Nord-Pas-de-Calais road (France)  . This road lay-out consists of what is called “Italian marking”. The road markings were modified in order to make overtaking impossible in both directions at the same time: over a short distance, there are two lanes in the first direction (overtaking is then allowed in that direction) and only one in the other direction, the two directions being separated by a white line. A bit farther on, the system is inverted so that overtaking becomes possible in the opposite direction, and so on. The data recorded over eight years (four years before and four years after) on the marked area are given in Table 7.

Note that the number of fatal crashes (Fatal) decreased from four (in the before period) to one after the lay-out change: there were four accidents with at least one person killed in the period before the road works and one such crash in the period after. The number of slight crashes (no serious bodily injuries involved) recorded over the same period was more than halved. For the same lengths of time, on a portion of National Road 17 used as a control area, the accident reports are as follows.

The values given in Table 8 are obtained by dividing, for each accident type, the number of crashes after the changes by the number of crashes before. On the whole, a decrease can be noted in comparison with the control-area accident numbers between the 4-year period after the changes and the 4-year period before. All the algorithms can be applied to Table 7 and Table 8. Since the simulation results suggested that the partial linear approximation algorithm converges after a few iterations and remains steady, we only present the estimates produced by the partial linear approximation algorithm (Table 9).

Parameters ${\beta }_{1}$ , ${\beta }_{2}$ and ${\beta }_{3}$ enable us to assess the risks for each type of accident on the test area in the eight years when the road markings’ effects are studied. Estimated values ${\stackrel{^}{\beta }}_{1}$ , ${\stackrel{^}{\beta }}_{2}$ and ${\stackrel{^}{\beta }}_{3}$ are respectively $0.1525$ , $0.1605$ and $0.6870$ . These values suggest that in the eight years of road marking analysis, $15.25%$ of the crashes recorded in the test area were fatal, $16.05%$ were serious and $68.70%$ were slight as compared to the crashes recorded in the control area.

The estimated mean efficiency index $\left(\stackrel{^}{\theta }\right)$ is 0.7054. It corresponds to an average decrease in proportion of $29.46%=\left(1-0.7054\right)×100%$ of the whole

Table 7. Crash data for experimental zone.

Table 8. Crash data for control zone.

Table 9. Pas-De-Calais’s crash data using partial linear approximation algorithm.

set of accidents in the test area as compared to the average trend in the control area. The significance of the mean effect of this type of lay-out may also be tested using the $95%$ confidence interval associated with the parameter $\theta$ . This interval has a lower bound strictly greater than 0 and an upper bound strictly greater than 1. Since $1\in \left[0.1646,1.2463\right]$ , we cannot reject the hypothesis ${H}_{0}:\theta =1$ against ${H}_{1}:\theta \ne 1$ at the $5%$ type-1 error level. So even though we observe a decrease of $29.46%=\left(1-0.7054\right)×100%$ in the average accident number in the test area, the test shows that the mean efficiency index is not significantly different from 1, and we cannot conclude that this type of road marking is effective. In practice, an analysis by period, recorded data and control area should be carried out in order to reach more reliable conclusions.
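This significance check amounts to a one-line computation; the sketch below (illustrative, with the point estimate and interval endpoints taken from the text) makes it explicit:

```python
theta_hat = 0.7054
ci_low, ci_high = 0.1646, 1.2463          # 95% confidence interval for theta

# average decrease implied by the estimated mean efficiency index
decrease_pct = (1.0 - theta_hat) * 100.0  # about 29.46%

# H0: theta = 1 is rejected at the 5% level only if 1 lies outside the CI
reject_h0 = not (ci_low <= 1.0 <= ci_high)
```

Here `reject_h0` is `False`: the interval contains 1, so the observed decrease is not statistically significant at the 5% level.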

7. Concluding Remarks

We propose in this paper, under suitable assumptions, a principle of 2-block splitting of a constrained Maximum Likelihood (ML) system of equations linked to a parameter vector. We obtain analytical approximations of the components of the constrained ML estimator. The asymptotic standard errors are also obtained in closed form.

We then build an iterative algorithm by initializing the first component of the parameter vector, without inverting (at any iteration) either the information matrix or the Hessian matrix. Our partial linear approximation algorithm cycles through the components of the parameter vector and updates one component at a time. It is very simple to program and the constraints are integrated easily. To implement our algorithm, we use a particular version of the multinomial model of N’Guessan et al.  used to estimate the average effect of a road safety measure and the different accident risks when road conditions are improved. We prove that the assumptions are all satisfied and we obtain simple expressions for the estimators and their asymptotic variances. Afterwards, we give a practical version of our algorithm.

The numerical performance of the proposed algorithm is examined through a simulation study and compared to those of classical methods available in R and MATLAB software. The choice of these other methods is dictated not only by the fact that they are relevant and integrated in most statistical software but also by the fact that some need second-order derivatives and others do not. The comparisons suggest that the partial linear approximation algorithm is not only competitive in statistical accuracy and computational speed with the best currently available algorithms, but is also undisturbed by the initial guess.

The link between the numerical performance of our algorithm and the particular model used in this paper may seem to limit the competitiveness of our algorithm with regard to the other methods considered here. However, this particular choice of model not only demonstrates the feasibility of our algorithm but also represents a good basis for approaching, under additional conditions, more general models.

The simulation results obtained on the particular model considered in this paper suggest that our algorithm may be extended to other families of multinomial models (such as the model of N’Guessan et al.  ). Drawing our inspiration from  , we may also prove the asymptotic strong consistency of the estimator obtained from our algorithm. This perspective would give a wider interest to our algorithm in the context of large-scale use.

Acknowledgements

We thank the Editor and the anonymous referee for their remarks and suggestions which enabled a substantial improvement of this paper. This article was partially written while the second author was visiting the Lille 1 University (France).

8. Appendix

8.1. Proof of Lemma 1

Proof. Problem (3) is equivalent to maximizing function

$\mathcal{L}\left(\varphi ,\lambda \right)=\mathcal{l}\left(\varphi \right)-\lambda C\left(\varphi \right)$ (21)

where $\lambda$ is the Lagrange multiplier. Any solution $\stackrel{^}{\varphi }$ must satisfy

${\left(\frac{\partial \mathcal{L}}{\partial \theta }\right)}_{\stackrel{^}{\varphi }}=0\text{and}{\left(\frac{\partial \mathcal{L}}{\partial {\beta }_{j}}\right)}_{\stackrel{^}{\varphi }}=0,\text{}j=1,\cdots ,r.$ (22)

From assumption 3, system (22) is equivalent to

${\left(\frac{\partial \mathcal{l}}{\partial \theta }\right)}_{\stackrel{^}{\varphi }}=0\text{and}{\left(\frac{\partial \mathcal{l}}{\partial {\beta }_{j}}\right)}_{\stackrel{^}{\varphi }}-\stackrel{^}{\lambda }{\left(\frac{\partial C}{\partial {\beta }_{j}}\right)}_{\stackrel{^}{\varphi }}=0,\text{}j=1,\cdots ,r$ (23)

where $\stackrel{^}{\lambda }=\lambda \left(\stackrel{^}{\varphi }\right)$ . After multiplication by ${\stackrel{^}{\beta }}_{j}$ and summation on the index $j$ , we get:

$\underset{j=1}{\overset{r}{\sum }}{\stackrel{^}{\beta }}_{j}{\left(\frac{\partial \mathcal{l}}{\partial {\beta }_{j}}\right)}_{\stackrel{^}{\varphi }}-\stackrel{^}{\lambda }\underset{j=1}{\overset{r}{\sum }}{\stackrel{^}{\beta }}_{j}{\left(\frac{\partial C}{\partial {\beta }_{j}}\right)}_{\stackrel{^}{\varphi }}=0.$

that is equivalent to

$〈\stackrel{^}{\beta },{\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}〉-\stackrel{^}{\lambda }〈\stackrel{^}{\beta },{C}_{\stackrel{^}{\beta }}〉=0.$

We obtain (6) by substitution of $\stackrel{^}{\lambda }$ in (23). □
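For completeness, solving the last display for the multiplier (assuming $〈\stackrel{^}{\beta },{C}_{\stackrel{^}{\beta }}〉\ne 0$ ) gives

$\stackrel{^}{\lambda }=\frac{〈\stackrel{^}{\beta },{\left(\frac{\partial \mathcal{l}}{\partial \beta }\right)}_{\stackrel{^}{\varphi }}〉}{〈\stackrel{^}{\beta },{C}_{\stackrel{^}{\beta }}〉},$

which is the value substituted into (23).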

8.2. Proof of Theorem 2

Proof. Under conditions 6, ${\Gamma }_{\varphi }$ is non-singular and, after some matrix manipulations, we get

$\begin{array}{l}{\Gamma }_{\varphi }^{-1}=\left[\begin{array}{cc}{W}_{\varphi }& {J}_{\varphi }^{-1}{C}_{\varphi }^{\text{T}}{R}_{\varphi }^{-1}\\ {R}_{\varphi }^{-1}{C}_{\varphi }{J}_{\varphi }^{-1}& -{R}_{\varphi }^{-1}\end{array}\right],\\ {J}_{\varphi }^{-1}=\left[\begin{array}{cc}{\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-1}& -{\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-1}{U}_{\varphi }^{\text{T}}{B}_{\varphi }^{-1}\\ -{B}_{\varphi }^{-1}{U}_{\varphi }{\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-1}& {\left({J}_{\varphi }/{\tau }_{\varphi }\right)}^{-1}\end{array}\right]\end{array}$

where ${R}_{\varphi }^{-1}={a}_{n,\varphi }{‖{C}_{\beta }‖}_{{\Sigma }_{\varphi }^{-1}}^{-2}$ is a scalar, ${B}_{\varphi }^{-1}={a}_{n,\varphi }^{-1}\left({\Sigma }_{\varphi }^{-1}-{\left(1+{‖{V}_{\varphi }‖}_{{\Sigma }_{\varphi }^{-1}}^{2}\right)}^{-1}{\Sigma }_{\varphi }^{-1}{V}_{\varphi }{V}_{\varphi }^{\text{T}}{\Sigma }_{\varphi }^{-1}\right)$ is the inverse of ${B}_{\varphi }$ , ${\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-1}={b}_{n,\varphi }^{-2}{a}_{n,\varphi }\left(1+{‖{V}_{\varphi }‖}_{{\Sigma }_{\varphi }^{-1}}^{2}\right)$ is a scalar, ${\left({J}_{\varphi }/{\tau }_{\varphi }\right)}^{-1}={a}_{n,\varphi }^{-1}{\Sigma }_{\varphi }^{-1}$ is an $r×r$ matrix and ${W}_{\varphi }={J}_{\varphi }^{-1}-{J}_{\varphi }^{-1}{C}_{\varphi }^{\text{T}}{R}_{\varphi }^{-1}{C}_{\varphi }{J}_{\varphi }^{-1}$ is the $\left(1+r\right)×\left(1+r\right)$ asymptotic variance-covariance matrix of $\varphi$ . We show that

${W}_{\varphi }=\left[\begin{array}{cc}{W}_{\varphi }\left(1,1\right)& {W}_{\varphi }^{\text{T}}\left(2,1\right)\\ {W}_{\varphi }\left(2,1\right)& {W}_{\varphi }\left(2,2\right)\end{array}\right]$

where ${W}_{\varphi }\left(1,1\right)$ is a scalar and ${W}_{\varphi }\left(2,2\right)$ is an $r×r$ matrix. The asymptotic variance of $\stackrel{^}{\theta }$ is given by ${W}_{\varphi }\left(1,1\right)$ and ${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)$ is obtained from the diagonal elements of ${W}_{\varphi }\left(2,2\right)$ . After a straightforward calculation, we get

${W}_{\varphi }\left(1,1\right)={\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-1}-{R}_{\varphi }^{-1}{\xi }_{\varphi }^{2}{\left({J}_{\varphi }/{B}_{\varphi }\right)}^{-2}\frac{{b}_{n,\varphi }^{2}{a}_{n,\varphi }^{-2}}{{\left(1+{‖{V}_{\varphi }‖}_{{\Sigma }_{\varphi }^{-1}}^{2}\right)}^{2}}$

and

${W}_{\varphi }\left(2,2\right)={a}_{n,\varphi }^{-1}\left({\Sigma }_{\varphi }^{-1}-{‖{C}_{\beta }‖}_{{\Sigma }_{\varphi }^{-1}}^{-2}{\Sigma }_{\varphi }^{-1}{C}_{\beta }{C}_{\beta }^{\text{T}}{\Sigma }_{\varphi }^{-1}\right)$

where ${\xi }_{\varphi }={V}_{\varphi }^{\text{T}}{\Sigma }_{\varphi }^{-1}{C}_{\beta }$ . The results of Theorem 2 follow from a direct calculation.

8.3. Proof of Corollary 2

Proof. Taking the second derivatives of the negative of $\mathcal{L}$ with respect to the components of $\varphi$ and $\lambda$ , we show that: $\frac{{\partial }^{2}\mathcal{L}}{\partial {\lambda }^{2}}=\frac{{\partial }^{2}\mathcal{L}}{\partial \lambda \partial \theta }=0$ , $\frac{{\partial }^{2}\mathcal{L}}{\partial \lambda \partial {\beta }_{j}}=-1$ , $\frac{{\partial }^{2}\mathcal{L}}{\partial {\theta }^{2}}=-\frac{{y}_{2.}}{{\theta }^{2}}+\frac{n{\left(E\left(Z\right)\right)}^{2}}{{\left(1+\theta E\left(Z\right)\right)}^{2}}$ , $\frac{{\partial }^{2}\mathcal{L}}{\partial \theta \partial {\beta }_{j}}=-\frac{n{z}_{j}}{{\left(1+\theta E\left(Z\right)\right)}^{2}}$ and

$\frac{{\partial }^{2}\mathcal{L}}{\partial {\beta }_{j}\partial {\beta }_{k}}=\left\{\begin{array}{ll}-\frac{{y}_{.j}}{{\beta }_{j}^{2}}+\frac{n{\theta }^{2}{z}_{j}^{2}}{{\left(1+\theta E\left(Z\right)\right)}^{2}}-\frac{{y}_{2.}{z}_{j}^{2}}{{\left(E\left(Z\right)\right)}^{2}}\hfill & \text{if}j=k\hfill \\ \frac{n{\theta }^{2}{z}_{j}{z}_{k}}{{\left(1+\theta E\left(Z\right)\right)}^{2}}-\frac{{y}_{2.}{z}_{j}{z}_{k}}{{\left(E\left(Z\right)\right)}^{2}}\hfill & \text{if}j\ne k\hfill \end{array}$

for $j$ , $k=1,\cdots ,r$ . Now, setting $E\left({y}_{.j}\right)=n{\beta }_{j}$ , $E\left({y}_{2.}\right)=n\theta E\left(Z\right)/\left(1+\theta E\left(Z\right)\right)$ and $\gamma =n/\left(1+\theta E\left(Z\right)\right)$ , we obtain the components of matrix ${J}_{\varphi }$ as follows

${\tau }_{\varphi }=\frac{{\gamma }^{2}E\left(Z\right)}{n\theta },\text{}{U}_{\varphi ,j}=\frac{{\gamma }^{2}{z}_{j}}{n},\text{}{B}_{\varphi ,jj}=\gamma \left(\frac{n}{\gamma {\beta }_{j}}+\frac{\gamma {\theta }^{2}{z}_{j}^{2}}{nE\left(Z\right)}\right)\text{and}{B}_{\varphi ,jk}=\frac{\theta {\gamma }^{2}{z}_{j}{z}_{k}}{nE\left(Z\right)},\left(j\ne k\right)$

where ${U}_{\varphi ,j}$ (resp. ${B}_{\varphi ,jj}$ and ${B}_{\varphi ,jk}$ ) are the components of vector ${U}_{\varphi }$ (resp. the elements of matrix ${B}_{\varphi }$ ). Using the last expressions of the elements of ${U}_{\varphi }$ and ${B}_{\varphi }$ and after straightforward calculation, we show that

${W}_{\varphi }\left(2,2\right)={\gamma }^{-1}{\Sigma }_{\varphi }^{-1}-\frac{1}{n}\beta {\beta }^{\text{T}}$ and then we deduce the expression of ${\stackrel{^}{\sigma }}^{2}\left({\stackrel{^}{\beta }}_{j}\right)$ given by Corollary 2. In the same way, we obtain

${W}_{\varphi }\left(1,1\right)=\frac{n\theta }{{\gamma }^{2}{t}_{\varphi }E\left(Z\right)}-\frac{{\theta }^{2}}{n}$

where ${t}_{\varphi }=\left[{n}^{2}E\left(Z\right)\right]/\left[{n}^{2}E\left(Z\right)+\theta {\gamma }^{2}\stackrel{^}{E}\left({Z}^{2}\right)\right]$ . Using the last expression of ${t}_{\varphi }$ , we get the expression of ${\stackrel{^}{\sigma }}^{2}\left(\stackrel{^}{\theta }\right)$ . □

Conflicts of Interest

The authors declare no conflicts of interest.

References

Hauer, E. (1997) Observational Before-After Studies in Road Safety: Estimating the Effect of Highway and Traffic Engineering Measures on Road Safety. Pergamon, Oxford.

N’Guessan, A., Essai, A. and Langrand, C. (2001) Estimation multidimensionnelle des contrôles et de l’effet moyen d’une mesure de sécurité routière. Revue de Statistique Appliquée, 49, 85-102.

Lord, D. and Mannering, F. (2010) The Statistical Analysis of Crash-Frequency Data: A Review and Assessment of Methodological Alternatives. Transportation Research Part A, 44, 291-305.

Aitchison, J. and Silvey, S.D. (1958) Maximum Likelihood Estimation of Parameters Subject to Restraints. The Annals of Mathematical Statistics, 29, 813-828. https://doi.org/10.1214/aoms/1177706538

Crowder, M. (1984) On Constrained Maximum Likelihood Estimation with Non-I.I.D. Observations. Annals of the Institute of Statistical Mathematics, 36, 239-249. https://doi.org/10.1007/BF02481968

Haber, M. and Brown, M.B. (1986) Maximum Likelihood Methods for Log-Linear Models When Expected Frequencies Are Subject to Linear Constraints. Journal of the American Statistical Association, 81, 477-482. https://doi.org/10.1080/01621459.1986.10478293

Lange, K. (2010) Numerical Analysis for Statisticians. 2nd Edition, Springer, Berlin. https://doi.org/10.1007/978-1-4419-5945-4

Li, Z.F., Osborne, M.R. and Prvan, T. (2003) Numerical Algorithms for Constrained Maximum Likelihood Estimation. ANZIAM Journal, 45, 91-114. https://doi.org/10.1017/S1446181100013171

Silvapulle, M.J. and Sen, P.K. (2005) Constrained Statistical Inference. Wiley, Hoboken.

Wang, Y. (2007) Maximum Likelihood Computation Based on the Fisher Scoring and Gauss-Newton Quadratic Approximations. Computational Statistics & Data Analysis, 51, 3776-3787. https://doi.org/10.1016/j.csda.2006.12.037

Mkhadri, A., N’Guessan, A. and Hafidi, B. (2010) An MM Algorithm for Constrained Estimation in a Road Safety Measure Modeling. Communications in Statistics - Simulation and Computation, 39, 1057-1071. https://doi.org/10.1080/03610911003778119

El Barmi, H. and Dykstra, R.L. (1998) Maximum Likelihood Estimates via Duality for Log-Convex Models When Cell Probabilities Are Subject to Convex Constraints. The Annals of Statistics, 26, 1878-1893. https://doi.org/10.1214/aos/1024691361

Matthews, G.B. and Crowther, N.A.S. (1995) A Maximum Likelihood Estimation Procedure When Modelling in Terms of Constraints. South African Statistical Journal, 29, 29-51.

Liu, C. (2000) Estimation of Discrete Distributions with a Class of Simplex Constraints. Journal of the American Statistical Association, 95, 109-120. https://doi.org/10.1080/01621459.2000.10473907

Ouellette, D.V. (1981) Schur Complements and Statistics. Linear Algebra and Its Applications, 36, 187-295. https://doi.org/10.1016/0024-3795(81)90232-9

Zhang, F. (2005) The Schur Complement and Its Applications. Springer, Berlin. https://doi.org/10.1007/b105056

N’Guessan, A. and Langrand, C. (2005) A Covariance Components Estimation Procedure When Modelling a Road Safety Measure in Terms of Linear Constraints. Statistics, 39, 303-314. https://doi.org/10.1080/02331880500108544

N’Guessan, A., Essai, A. and N’zi, M. (2006) An Estimation Method of the Average Effect and the Different Accident Risks When Modelling a Road Safety Measure: A Simulation Study. Computational Statistics & Data Analysis, 51, 1260-1277. https://doi.org/10.1016/j.csda.2005.09.002

N’Guessan, A. (2010) Analytical Existence of Solutions to a System of Non-Linear Equations with Application. Journal of Computational and Applied Mathematics, 234, 297-304. https://doi.org/10.1016/j.cam.2009.12.026

N’Guessan, A. and Geraldo, I.C. (2015) A Cyclic Algorithm for Maximum Likelihood Estimation Using Schur Complement. Numerical Linear Algebra with Applications, 22, 1161-1179. https://doi.org/10.1002/nla.1999

Nelder, J.A. and Mead, R. (1965) A Simplex Method for Function Minimization. The Computer Journal, 7, 308-313. https://doi.org/10.1093/comjnl/7.4.308

Broyden, C.G. (1970) The Convergence of a Class of Double-Rank Minimization Algorithms. Journal of the Institute of Mathematics and Its Applications, 6, 76-90. https://doi.org/10.1093/imamat/6.1.76

Fletcher, R. (1970) A New Approach to Variable Metric Algorithms. The Computer Journal, 13, 317-322. https://doi.org/10.1093/comjnl/13.3.317

Goldfarb, D. (1970) A Family of Variable-Metric Methods Derived by Variational Means. Mathematics of Computation, 24, 23-26. https://doi.org/10.1090/S0025-5718-1970-0258249-6

Shanno, D.F. (1970) Conditioning of Quasi-Newton Methods for Function Minimization. Mathematics of Computation, 24, 647-656. https://doi.org/10.1090/S0025-5718-1970-0274029-X

Waltz, R.A., Morales, J.L., Nocedal, J. and Orban, D. (2006) An Interior Algorithm for Nonlinear Optimization That Combines Line Search and Trust Region Steps. Mathematical Programming, 107, 391-408. https://doi.org/10.1007/s10107-004-0560-5

Levenberg, K. (1944) A Method for the Solution of Certain Non-Linear Problems in Least Squares. Quarterly of Applied Mathematics, 2, 164-168. https://doi.org/10.1090/qam/10666

Marquardt, D. (1963) An Algorithm for Least-Squares Estimation of Nonlinear Parameters. SIAM Journal on Applied Mathematics, 11, 431-441. https://doi.org/10.1137/0111030

Byrd, R.H., Schnabel, R.B. and Shultz, G.A. (1988) Approximate Solution of the Trust Region Problem by Minimization over Two-Dimensional Subspaces. Mathematical Programming, 40, 247-263. https://doi.org/10.1007/BF01580735

Varadhan, R. (2011) alabama: Constrained Nonlinear Optimization. R Package Version 9-1.

N’Guessan, A. and Truffier, M. (2008) Impact d’un aménagement de sécurité routière sur la gravité des accidents de la route. Journal de la Société Française de Statistique, 149, 23-41.

Geraldo, I.C., N’Guessan, A. and Gneyou, K.E. (2015) A Note on the Strong Consistency of a Constrained Maximum Likelihood Estimator Used in Crash Data Modelling. Comptes Rendus Mathématique, 353, 1147-1152. https://doi.org/10.1016/j.crma.2015.09.025