For regression models with longitudinal data, we combine a robust estimating equation with the element-wise empirical likelihood method and propose an efficient robust estimator, where the robust estimating equation is built from a bounded score function and a covariate-dependent weight function. The method reduces the influence of outliers in both the response and the covariates on parameter estimation, accounts for the within-subject correlation, and improves estimation efficiency. Simulation results show that the proposed method is both robust and efficient.

Longitudinal data are obtained by measuring each individual repeatedly over a period of time. Such data combine cross-sectional and time-series features and consist of many short time series: at a fixed time point, the observations across individuals resemble cross-sectional data, while for a fixed individual, the observations across time points resemble a time series. Longitudinal data can therefore exploit the information within each individual while still distinguishing between individuals. Because longitudinal data appear more and more frequently in fields such as medicine and finance, their study is of great significance.

Longitudinal data have been a hot topic in statistical research in recent years, and significant theoretical progress has been made. Liang et al. (1986) [

Empirical likelihood methods are also frequently used in longitudinal data research. The empirical likelihood (EL) method was originally proposed by Owen (1988) [

Building on existing results for longitudinal data, this paper combines a robust estimating equation, based on a bounded Huber-type score function and a covariate-dependent weight function, with the element-wise empirical likelihood method, and proposes an efficient robust estimation method. The efficiency comes from the empirical likelihood method, while the robustness comes from the robust estimating equation. Our simulations show that, whether or not the working correlation matrix matches the true correlation matrix, the proposed method reduces the impact of outliers on the estimate and improves estimation efficiency.

The rest of the paper is organized as follows. Section 1 presents the linear regression model for longitudinal data and the proposed estimation method. Section 2 introduces the iterative algorithm. Section 3 reports the simulation study, and Section 4 concludes with a summary and outlook.

Linear models are widely used in longitudinal data analysis: their structure is simple to analyze and they form the basis of many other models. We consider the following regression model for a continuous longitudinal response

$y_{ij} = x_{ij}^T \beta + \varepsilon_{ij}, \quad i = 1, \cdots, n; \ j = 1, \cdots, m_i$ (2.1)

where $y_{ij}$ is the $j$th observation on the $i$th subject, $x_{ij}$ is a $p$-vector of covariate values and $\beta$ is a $p$-vector of unknown regression coefficients, with $x_{ij}^T = (x_{ij}^{(1)}, x_{ij}^{(2)}, \cdots, x_{ij}^{(p)})$ and $\beta^T = (\beta^{(1)}, \beta^{(2)}, \cdots, \beta^{(p)})$. Here $n$ is the number of subjects, $m_i$ is the number of repeated measurements on the $i$th subject, and $m_1, \cdots, m_n$ are bounded positive integers. Denote the total sample size by $N = m_1 + \cdots + m_n$. The random error $\varepsilon_{ij}$ satisfies $E(\varepsilon_{ij}) = 0$ and $\mathrm{Var}(\varepsilon_{ij}) = \sigma^2$. Setting $Y_i^T = (y_{i1}, y_{i2}, \cdots, y_{im_i})$, $X_i = (x_{i1}, x_{i2}, \cdots, x_{im_i})$ and $\varepsilon_i^T = (\varepsilon_{i1}, \varepsilon_{i2}, \cdots, \varepsilon_{im_i})$, model (2.1) can be rewritten as

$Y_i = X_i^T \beta + \varepsilon_i$ (2.2)

For longitudinal data models it is usually assumed that observations from different individuals are independent, while repeated measurements on the same individual are correlated. The covariance matrix of the random vector $\varepsilon_i$ is $V_i = \sigma^2 R(\rho) = A_i^{1/2} R(\rho) A_i^{1/2}$, where $V_i$ is an $m_i \times m_i$ invertible matrix, $R(\rho)$ is a correlation matrix, $\rho$ is a vector of correlation parameters, and $A_i^{1/2} = \sigma I_{m_i}$. The exchangeable structure (Exch), the working-independence structure (Ind) and the first-order autoregressive structure (AR(1)) are common correlation structures in practice:

$R_{\mathrm{Exch}} = \begin{pmatrix} 1 & \rho & \cdots & \rho \\ \rho & 1 & \cdots & \rho \\ \vdots & \vdots & \ddots & \vdots \\ \rho & \rho & \cdots & 1 \end{pmatrix}, \quad R_{\mathrm{Ind}} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}, \quad R_{\mathrm{AR(1)}} = \begin{pmatrix} 1 & \rho & \cdots & \rho^{m_i - 1} \\ \rho & 1 & \cdots & \rho^{m_i - 2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{m_i - 1} & \rho^{m_i - 2} & \cdots & 1 \end{pmatrix}.$
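The three working correlation structures above can be generated directly; the following sketch (the function and argument names are ours, not from the paper) builds each $R(\rho)$ for a given cluster size $m$:

```python
import numpy as np

def corr_matrix(structure, m, rho=0.0):
    """Working correlation matrix R(rho) of size m x m.
    structure: 'exch', 'ind', or 'ar1' (labels are our own)."""
    if structure == "ind":
        return np.eye(m)
    if structure == "exch":
        R = np.full((m, m), rho)
        np.fill_diagonal(R, 1.0)
        return R
    if structure == "ar1":
        idx = np.arange(m)
        # entry (j, k) is rho^{|j - k|}
        return rho ** np.abs(idx[:, None] - idx[None, :])
    raise ValueError(f"unknown structure: {structure}")
```

For example, `corr_matrix("ar1", 3, 0.5)` has entry 0.25 in position (1, 3), matching $\rho^{m_i-1}$ with $m_i = 3$.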

Let

$Z_i(\beta) = \{z_{i1}(\beta), z_{i2}(\beta), \cdots, z_{im_i}(\beta)\}^T = V_i^{-1}(Y_i - X_i^T \beta)$ (2.3)

More generally, we can define an estimating equation

$\sum_{i=1}^{n} X_i Z_i(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{m_i} x_{ij} z_{ij}(\beta) = 0$ (2.4)

Such an estimating equation is susceptible to outliers. We therefore introduce into (2.3) a bounded Huber-type score function and a covariate-dependent weight function:

$Z_i^R(\beta) = \{z_{i1}^R(\beta), z_{i2}^R(\beta), \cdots, z_{im_i}^R(\beta)\}^T = (A_i^{1/2} R_i)^{-1} W_i \psi_c\big(A_i^{-1/2}(Y_i - X_i^T \beta)\big)$ (2.5)

Consequently, we obtain the robust estimating equation

$\sum_{i=1}^{n} X_i Z_i^R(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{m_i} x_{ij} z_{ij}^R(\beta) = 0$ (2.6)

where $\psi_c(x) = \min(c, \max(-c, x))$ is the bounded (Huber) score function, which limits the influence of outliers in the response. Because it is applied to standardized residuals, the value of $c$ is generally taken between 1 and 2. $W_i = \mathrm{diag}(w_{i1}, w_{i2}, \cdots, w_{im_i})$ is the weight matrix. There are many ways to choose the weight function; similar to the reference [

$w_{ij} = \min\left\{ 1, \left( \dfrac{b_0}{(x_{ij} - m_x)^T S_x^{-1} (x_{ij} - m_x)} \right)^{r/2} \right\}$ (2.7)

where $r \ge 1$, $b_0$ is the 0.95 quantile of the chi-square distribution with degrees of freedom equal to the dimension of $x_{ij}$, and $m_x$ and $S_x$ are robust estimates of the location and scatter of the $x_{ij}$. If $x_{ij}$ deviates from the bulk of the data, the Mahalanobis distance $(x_{ij} - m_x)^T S_x^{-1}(x_{ij} - m_x)$ is large, $\left( b_0 / (x_{ij} - m_x)^T S_x^{-1}(x_{ij} - m_x) \right)^{r/2}$ is small, and the corresponding weight $w_{ij}$ falls below 1; otherwise $w_{ij} = 1$. The influence of covariate outliers on the estimate is thus controlled through the Mahalanobis distance. For data without outliers we can set $\psi_c(x) = x$ and $w_{ij} = 1$ in the robust estimating Equation (2.6), which recovers the classical non-robust estimating equation.
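The score and weight functions of (2.5)-(2.7) can be sketched as follows. This is an illustrative implementation rather than the authors' code: for simplicity it uses the sample mean and covariance in place of the robust location and scatter estimates $m_x$ and $S_x$ the paper calls for, and takes $b_0$ as the 0.95 chi-square quantile.

```python
import numpy as np
from scipy import stats

def psi_c(x, c=1.345):
    # Huber score min(c, max(-c, x)), applied elementwise
    return np.clip(x, -c, c)

def mahalanobis_weights(X, r=1.0):
    """w_ij = min{1, (b0 / d_ij^2)^(r/2)}, d_ij^2 the Mahalanobis
    distance of each covariate row.  Sample mean/covariance stand in
    for the robust (m_x, S_x) here; a sketch only."""
    X = np.asarray(X, float)
    p = X.shape[1]
    b0 = stats.chi2.ppf(0.95, df=p)        # 0.95 chi-square quantile
    mx = X.mean(axis=0)
    Sx = np.cov(X, rowvar=False)
    diff = X - mx
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sx), diff)
    return np.minimum(1.0, (b0 / d2) ** (r / 2))
```

A covariate row far from the bulk of the data gets a large $d^2$ and hence a weight below 1, exactly as described above.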

The empirical likelihood method is a non-parametric statistical method with many desirable properties. Empirical likelihood was first combined with estimating equations by Qin and Lawless (1994) [

$L_1(\beta) = \sup\left\{ \prod_{i=1}^{n} \prod_{j=1}^{m_i} N p_{ij} \;\middle|\; p_{ij} \ge 0, \ \sum_{i=1}^{n} \sum_{j=1}^{m_i} p_{ij} = 1, \ \sum_{i=1}^{n} \sum_{j=1}^{m_i} p_{ij} x_{ij} z_{ij}^R(\beta) = 0 \right\}$ (2.8)

Similar to Owen (1988) [

$p_{ij} = \frac{1}{N}\left(1 + \lambda' x_{ij} z_{ij}^R(\beta)\right)^{-1}, \quad i = 1, \cdots, n; \ j = 1, \cdots, m_i$ (2.9)

where the vector $\lambda = (\lambda_1, \lambda_2, \cdots, \lambda_p)'$ satisfies $\frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \frac{x_{ij} z_{ij}^R(\beta)}{1 + \lambda' x_{ij} z_{ij}^R(\beta)} = 0$. From this we obtain

$-2 \log L_1(\beta) = 2 \sum_{i=1}^{n} \sum_{j=1}^{m_i} \log\left(1 + \lambda' x_{ij} z_{ij}^R(\beta)\right)$ (2.10)
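For a fixed $\beta$, the multiplier $\lambda$ in (2.9)-(2.10) solves the equation above; a small Newton solver for $\lambda$ and the resulting $-2\log L_1(\beta)$ might look like this. It is a sketch without step-size safeguards, with the rows $g_{ij} = x_{ij} z_{ij}^R(\beta)$ stacked into a matrix `G` (names are ours).

```python
import numpy as np

def el_lambda(G, iters=50, tol=1e-10):
    """Solve (1/N) sum_i g_i / (1 + lam' g_i) = 0 for lam by Newton's
    method; G is N x p with rows g_i.  No line search: fine near the
    solution, not a production solver."""
    N, p = G.shape
    lam = np.zeros(p)
    for _ in range(iters):
        denom = 1.0 + G @ lam
        f = (G / denom[:, None]).sum(axis=0) / N
        # Jacobian of f with respect to lam
        J = -(G.T * (1.0 / denom**2)) @ G / N
        step = np.linalg.solve(J, f)
        lam -= step
        if np.linalg.norm(step) < tol:
            break
    return lam

def neg2_log_el(G):
    """-2 log L_1(beta) of (2.10) at the solved multiplier."""
    lam = el_lambda(G)
    return 2.0 * np.sum(np.log(1.0 + G @ lam))
```

When the rows of `G` already average to zero, $\lambda = 0$ solves the equation and the statistic is 0; otherwise it is positive.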

The estimator proposed in this paper is the maximizer of Equation (2.8), or equivalently the minimizer of Equation (2.10). We denote it by

$\hat{\beta}_{\mathrm{RELGEE}} = \arg\max L_1(\beta) = \arg\min\left(-2 \log L_1(\beta)\right)$ (2.11)

$\hat{\beta}_{\mathrm{RELGEE}}$ is the RELGEE estimate of $\beta$. Under some regularity conditions, the estimator can be shown to be consistent and asymptotically normal, and $-2 \log L_1(\beta)$ converges to a linear combination of independent chi-square distributions. The proof is similar to Wang et al. (2010) [

The parameter of interest is $\beta$, but the parameters $\sigma$ and $\rho$ are unknown. Following He et al. (2005) [

$\hat{\sigma} = 1.483 \, \mathrm{median}\left\{ \left| (y_{ij} - x_{ij}^T \beta^*) - \mathrm{median}(y_{ij} - x_{ij}^T \beta^*) \right| \right\}$ (2.12)

where $\beta^*$ is the current estimate of $\beta$.
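The scale estimate (2.12) is a MAD-type estimator and is essentially one line of code (a sketch; the function name is ours):

```python
import numpy as np

def robust_sigma(residuals):
    """sigma-hat = 1.483 * median |r - median(r)|, the MAD scale
    estimate of (2.12), computed from residuals y_ij - x_ij' beta*."""
    r = np.asarray(residuals, float)
    return 1.483 * np.median(np.abs(r - np.median(r)))
```

For residuals (1, 2, 3, 4, 5) the median absolute deviation is 1, so the estimate is 1.483.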

As described above, the robust estimate of the parameter $\rho$ depends on the chosen correlation structure. For simplicity, suppose $m_i = m$. For the exchangeable structure (Exch),

$\hat{\rho} = \dfrac{\sum_{i=1}^{n} \sum_{j > j'} \hat{r}_{ij}^* \hat{r}_{ij'}^*}{0.5\, n m (m-1) - p}$ (2.13)

where $\hat{r}_{ij}^* = \psi_c\left(\dfrac{y_{ij} - x_{ij}^T \beta^*}{\hat{\sigma}}\right)$. For the first-order autoregressive structure (AR(1)) we use the robust moment estimator

$\hat{\rho} = \dfrac{a}{b} - \sqrt{\dfrac{a}{b} - 1}$ (2.14)

where $a = \sum_{i=1}^{n} r_i^{*\prime} T_1 r_i^*$, $b = \sum_{i=1}^{n} r_i^{*\prime} T_2 r_i^*$, $r_i^* = (r_{i1}^*, r_{i2}^*, \cdots, r_{im}^*)^T$, $T_1$ is the $m$-order diagonal matrix $\mathrm{diag}(1, 1, \cdots, 1)$ and $T_2$ is the $m$-order diagonal matrix $\mathrm{diag}(0, 1, \cdots, 1, 0)$.
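The exchangeable-structure estimate (2.13) can be sketched as follows (the AR(1) moment estimator is omitted here). The argument `resid_std` holds the $\psi_c$-transformed standardized residuals $\hat{r}_{ij}^*$; names are ours.

```python
import numpy as np

def rho_exch(resid_std, p):
    """Exchangeable-structure estimate (2.13).
    resid_std: n x m array of r*_ij = psi_c((y_ij - x_ij'beta*)/sigma);
    p: dimension of beta (appears in the denominator correction)."""
    n, m = resid_std.shape
    num = 0.0
    for j in range(m):
        for jp in range(j):          # all pairs j > j'
            num += np.sum(resid_std[:, j] * resid_std[:, jp])
    return num / (0.5 * n * m * (m - 1) - p)
```

With perfectly correlated residuals (all columns equal) and $p = 0$ the estimate is exactly 1; with independent residuals it is near 0.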

Since directly maximizing the empirical likelihood raises numerical difficulties, to solve for $\hat{\beta}_{\mathrm{RELGEE}}$ we follow the Newton-type Lagrange multiplier algorithm for constrained optimization proposed by Özdemir (2018) [

$L_2(\beta) = \sup\left\{ \prod_{i=1}^{N} p_i \;\middle|\; \sum_{i=1}^{N} p_i = 1, \ p_i > 0, \ \sum_{i=1}^{N} p_i x_i z_i^R(\beta) = 0 \right\}$ (3.1)

This problem can also be defined as follows

$J(p, \beta) = \min_{p_i \in (0,1)} \left[ -\sum_{i=1}^{N} \ln p_i \right]$ (3.2)

under the constraints $\sum_{i=1}^{N} p_i = 1$ and $\sum_{i=1}^{N} p_i x_i z_i^R(\beta) = 0$.

The Lagrange multiplier method is commonly used to find the extrema of constrained functions. We obtain

$J(p, \beta, \lambda_0, \lambda_1) = -\sum_{i=1}^{N} \ln p_i + \lambda_0\left(\sum_{i=1}^{N} p_i - 1\right) - \lambda_1^T \sum_{i=1}^{N} p_i x_i z_i^R(\beta)$ (3.3)

where $\lambda_0 \in \mathbb{R}$ and $\lambda_1 \in \mathbb{R}^p$ are Lagrange multipliers. Write

$A = \begin{bmatrix} p \\ \beta \end{bmatrix}, \quad \Lambda = \begin{bmatrix} \lambda_0 \\ \lambda_1 \end{bmatrix}$ (3.4)

The first-order gradient of Equation (3.3) is

$\nabla J(A, \Lambda) = \begin{bmatrix} \nabla J(A) + (DG(A))^T \Lambda \\ G(A)^T \end{bmatrix}$ (3.5)

where

$\nabla J(A) = \begin{bmatrix} -\frac{1}{p_1} & -\frac{1}{p_2} & \cdots & -\frac{1}{p_N} & 0_{1 \times p} \end{bmatrix}^T, \quad G(A) = \begin{bmatrix} \sum_{i=1}^{N} p_i - 1, & \sum_{i=1}^{N} p_i x_i z_i^R(\beta) \end{bmatrix}$

The Jacobian matrix of $G(A)$ with respect to $A$ can be written as

$DG(A) = \begin{bmatrix} 1 & 1 & \cdots & 1 & 0_{1 \times p} \\ x_1 z_1^R(\beta) & x_2 z_2^R(\beta) & \cdots & x_N z_N^R(\beta) & -\left[\sum_{i=1}^{N} p_i x_i \phi_i\right]^T \end{bmatrix}$ (3.6)

where $\phi_i = \dfrac{\partial z_i^R(\beta)}{\partial \beta}$.

So the Hessian matrix of Equation (3.3) is

$H_L(A, \Lambda) = \begin{bmatrix} B(A, \Lambda) & DG(A)^T \\ DG(A) & 0_{(p+1) \times (p+1)} \end{bmatrix}$ (3.7)

where $B(A, \Lambda) = \begin{bmatrix} \frac{1}{p_1^2} & 0 & \cdots & 0 & -\lambda_1^T x_1 \phi_1 \\ 0 & \frac{1}{p_2^2} & \cdots & 0 & -\lambda_1^T x_2 \phi_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \frac{1}{p_N^2} & -\lambda_1^T x_N \phi_N \\ -[\lambda_1^T x_1 \phi_1]^T & -[\lambda_1^T x_2 \phi_2]^T & \cdots & -[\lambda_1^T x_N \phi_N]^T & 0_{p \times p} \end{bmatrix}$ is an $(N + p) \times (N + p)$ matrix.

By Newton iteration, we can get

$H_L(A_t, \Lambda_t)\begin{bmatrix} A_{t+1} - A_t \\ \Lambda_{t+1} - \Lambda_t \end{bmatrix} + \nabla J(A_t, \Lambda_t) = 0$ (3.8)

where $A_t, \Lambda_t$ denote the values of $A, \Lambda$ at the $t$th iteration, and $A_{t+1}, \Lambda_{t+1}$ are computed from $A_t, \Lambda_t$.

This yields the iterative formula

$\begin{bmatrix} A_{t+1} \\ \Lambda_{t+1} \end{bmatrix} = \begin{bmatrix} A_t \\ \Lambda_t \end{bmatrix} - H_L(A_t, \Lambda_t)^{-1} \nabla J(A_t, \Lambda_t)$ (3.9)

Summarize the algorithm for estimating the parameter β ^ RELGEE as follows:

Step 1. Set initial values of $p, \beta, \lambda_0, \lambda_1$ and a threshold $\xi$; the initial value of $\beta$ can be set to the least squares estimate to speed up convergence.

Step 2. Calculate $H_L(A_t, \Lambda_t)$ and $\nabla J(A_t, \Lambda_t)$ at the $t$th iteration.

Step 3. Calculate $A_{t+1}, \Lambda_{t+1}$ according to iterative Formula (3.9).

Step 4. Let $t \leftarrow t + 1$ and repeat Steps 2 and 3 until $\|\beta_{t+1} - \beta_t\| < \xi$, giving a convergent $\hat{\beta}_{\mathrm{RELGEE}}$. This completes the solution of the robust empirical likelihood estimate.
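To illustrate the Lagrange-Newton scheme, the sketch below applies it to the inner problem only, holding $\beta$ fixed: it minimizes $-\sum_i \ln p_i$ subject to $\sum_i p_i = 1$ and $\sum_i p_i g_i = 0$ by Newton iteration on the KKT system, with $g_i = x_i z_i^R(\beta)$ as rows of `G`. It omits the $\beta$ update and any step-size control, so it is a simplified sketch of the full algorithm, not the authors' implementation.

```python
import numpy as np

def el_weights_newton(G, iters=100, tol=1e-10):
    """Newton iteration on the KKT system of the Lagrangian (3.3)
    with beta held fixed.  Unknowns x = [p_1..p_N, lam0, lam1]."""
    N, p = G.shape
    x = np.concatenate([np.full(N, 1.0 / N), np.zeros(1 + p)])
    for _ in range(iters):
        pi, lam0, lam1 = x[:N], x[N], x[N + 1:]
        # gradient of the Lagrangian (signs as in (3.3))
        grad = np.concatenate([
            -1.0 / pi + lam0 - G @ lam1,   # d/dp_i
            [pi.sum() - 1.0],              # d/dlam0
            -(pi @ G),                     # d/dlam1
        ])
        # KKT (Hessian) matrix, the analogue of H_L in (3.7)
        H = np.zeros((N + 1 + p, N + 1 + p))
        H[np.arange(N), np.arange(N)] = 1.0 / pi**2
        H[:N, N] = 1.0
        H[N, :N] = 1.0
        H[:N, N + 1:] = -G
        H[N + 1:, :N] = -G.T
        step = np.linalg.solve(H, grad)
        x = x - step                       # Newton update as in (3.9)
        if np.linalg.norm(step) < tol:
            break
    return x[:N]                           # the EL weights p_i
```

Without a line search the weights are not forced to stay positive, which is why practical implementations add safeguards; near the solution the plain Newton step of (3.9) converges quickly.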

In this section we present a simulation study. The estimators obtained by the proposed RELGEE method are compared with those obtained by the ordinary element-wise empirical likelihood method (ELGEE), and the finite-sample properties of the estimators are explored. The main questions are:

1) The relative efficiency of the estimators when the data are not contaminated;

2) The robustness of the estimators when the data are contaminated;

3) The effect on estimation efficiency when the working correlation matrix is correctly or incorrectly specified.

The model is set to

$y_{ij} = x_{ij}^T \beta + \varepsilon_{ij}, \quad i = 1, 2, \cdots, n; \ j = 1, 2, 3$ (4.1)

where the number of repeated observations is 3, $x_{ij}^T = (x_{ij}^{(1)}, x_{ij}^{(2)}, x_{ij}^{(3)})$, $x_{ij} \sim N\left((0,0,0)^T, I_3\right)$, $\beta^T = (7, 1.7, -1.5)$ and $(\varepsilon_{i1}, \varepsilon_{i2}, \varepsilon_{i3})^T \sim N\left((0,0,0)^T, R(\rho)\right)$.

We consider sample sizes $n = 50$ and $n = 100$, and let $R(\rho)$ take the exchangeable structure (Exch) and the first-order autoregressive structure (AR(1)), with the parameter $\rho$ equal to 0.3 and 0.7 respectively. For each combination of $\rho$ and of true versus working correlation matrix, we repeat the simulation 1000 times and report MSE (×100), i.e. 100 times the sample mean squared error. Since there are three parameters, we average the mean squared errors of the three parameters.
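The simulation design can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def simulate_longitudinal(n, m=3, beta=(7.0, 1.7, -1.5), rho=0.3,
                          structure="exch", seed=0):
    """Generate data from model (4.1): x_ij ~ N(0, I_3) and
    (eps_i1, ..., eps_im) ~ N(0, R(rho)) with the chosen structure."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta)
    if structure == "exch":
        R = np.full((m, m), rho)
        np.fill_diagonal(R, 1.0)
    else:  # AR(1)
        idx = np.arange(m)
        R = rho ** np.abs(idx[:, None] - idx[None, :])
    X = rng.standard_normal((n, m, beta.size))   # covariates x_ij
    eps = rng.multivariate_normal(np.zeros(m), R, size=n)
    Y = X @ beta + eps                           # y_ij = x_ij' beta + eps_ij
    return X, Y
```

Ordinary least squares on the pooled data already recovers $\beta$ consistently here, which gives a convenient starting value for the algorithm of Section 2.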

To study question (1), we compare the mean squared error of the proposed method (RELGEE) with that of the ordinary element-wise empirical likelihood method (ELGEE) when the data are not contaminated. The simulation results are shown in Table 1.

When the data are not contaminated, the robust treatment discards part of the information in the longitudinal data, so the efficiency of the robust estimator is usually slightly lower than that of the non-robust estimator.

To explore questions (2) and (3), we design three contamination schemes:

Table 1. MSE (×100) of ELGEE and RELGEE without contamination.

| n | ρ | WR | TR = Exch: ELGEE | TR = Exch: RELGEE | TR = AR(1): ELGEE | TR = AR(1): RELGEE |
|---|---|----|------------------|-------------------|-------------------|--------------------|
| 50 | 0.3 | Exch | 0.57386 | 0.58100 | 0.57386 | 0.58100 |
| 50 | 0.3 | AR(1) | 0.58885 | 0.60091 | 0.54477 | 0.56010 |
| 50 | 0.3 | Ind | 0.72633 | 0.73693 | 0.72633 | 0.73693 |
| 50 | 0.7 | Exch | 0.28785 | 0.29751 | 0.36407 | 0.37805 |
| 50 | 0.7 | AR(1) | 0.36407 | 0.37805 | 0.28785 | 0.29751 |
| 50 | 0.7 | Ind | 0.66757 | 0.67960 | 0.66757 | 0.67960 |
| 100 | 0.3 | Exch | 0.27770 | 0.27829 | 0.39119 | 0.41201 |
| 100 | 0.3 | AR(1) | 0.34903 | 0.35454 | 0.29119 | 0.29492 |
| 100 | 0.3 | Ind | 0.44306 | 0.43724 | 0.40696 | 0.41015 |
| 100 | 0.7 | Exch | 0.17467 | 0.17678 | 0.16708 | 0.16904 |
| 100 | 0.7 | AR(1) | 0.19500 | 0.19504 | 0.14110 | 0.14566 |
| 100 | 0.7 | Ind | 0.42543 | 0.42879 | 0.40011 | 0.42267 |

WR: working correlation matrix; TR: true correlation matrix.

(C1) Contamination of the response $Y$ only: randomly change S% of the $y_{ij}$ to $y_{ij} + b$, $b \sim N(2, 1)$.

(C2) Contamination of the covariates $X$ only: randomly change S% of the $x_{ij}$ to $x_{ij} + a$, $a \sim N\left((1,1,1)^T, I_3\right)$.

(C3) Simultaneous contamination of $X$ and $Y$: randomly change S%/3 of the $y_{ij}$ to $y_{ij} + b$, $b \sim N(2, 1)$, and S%/3 of the $x_{ij}$ to $x_{ij} + a$, $a \sim N\left((1,1,1)^T, I_3\right)$.

where S% is the contamination rate; in this paper we use 0.06 and 0.1. Some simulation results are shown in Tables 2-4.
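The contamination schemes C1-C3 can be sketched like this (an illustrative implementation: cells are contaminated independently with probability S%, rather than by selecting an exact fraction; names are ours):

```python
import numpy as np

def contaminate(X, Y, scheme="C1", s=0.06, seed=1):
    """Apply contamination schemes C1-C3 to copies of (X, Y).
    X: (n, m, 3) covariates, Y: (n, m) responses.
    C1: add b ~ N(2, 1) to a fraction s of the y_ij;
    C2: add a ~ N((1,1,1)', I_3) to a fraction s of the x_ij;
    C3: both, each at rate s/3 (as in the text)."""
    rng = np.random.default_rng(seed)
    X, Y = X.copy(), Y.copy()
    n, m = Y.shape
    if scheme in ("C1", "C3"):
        rate = s if scheme == "C1" else s / 3
        mask = rng.random((n, m)) < rate
        Y[mask] += rng.normal(2.0, 1.0, size=mask.sum())
    if scheme in ("C2", "C3"):
        rate = s if scheme == "C2" else s / 3
        mask = rng.random((n, m)) < rate
        X[mask] += rng.multivariate_normal(np.ones(3), np.eye(3),
                                           size=mask.sum())
    return X, Y
```

Under C1 only the responses change, under C2 only the covariates, and under C3 both change at the reduced rate S%/3.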

Table 2. MSE (×100) under contamination scheme C1.

| n | ρ | YP | WR | TR = Exch: ELGEE | TR = Exch: RELGEE | TR = AR(1): ELGEE | TR = AR(1): RELGEE |
|---|---|----|----|------------------|-------------------|-------------------|--------------------|
| 50 | 0.3 | 0.06 | Exch | 1.04790 | 0.79504 | 0.99345 | 0.74981 |
| 50 | 0.3 | 0.06 | AR(1) | 1.07018 | 0.81152 | 0.97866 | 0.73729 |
| 50 | 0.3 | 0.06 | Ind | 1.14262 | 0.87243 | 0.98603 | 0.76988 |
| 50 | 0.3 | 0.1 | Exch | 1.23468 | 0.88941 | 1.50948 | 1.05796 |
| 50 | 0.3 | 0.1 | AR(1) | 1.29277 | 0.92924 | 1.43798 | 1.02313 |
| 50 | 0.3 | 0.1 | Ind | 1.33828 | 0.98621 | 1.46806 | 1.03712 |
| 50 | 0.7 | 0.06 | Exch | 0.80301 | 0.56517 | 0.91464 | 0.62653 |
| 50 | 0.7 | 0.06 | AR(1) | 0.89749 | 0.65623 | 0.86469 | 0.62672 |
| 50 | 0.7 | 0.06 | Ind | 1.18518 | 0.99555 | 1.02135 | 0.84933 |
| 50 | 0.7 | 0.1 | Exch | 1.15356 | 0.78475 | 1.30655 | 0.79852 |
| 50 | 0.7 | 0.1 | AR(1) | 1.28685 | 0.90425 | 1.24750 | 0.79105 |
| 50 | 0.7 | 0.1 | Ind | 1.46320 | 1.12052 | 1.39281 | 1.01290 |
| 100 | 0.3 | 0.06 | Exch | 0.42925 | 0.36534 | 0.45132 | 0.41615 |
| 100 | 0.3 | 0.06 | AR(1) | 0.44041 | 0.37885 | 0.44808 | 0.40805 |
| 100 | 0.3 | 0.06 | Ind | 0.47229 | 0.41564 | 0.45741 | 0.42950 |
| 100 | 0.3 | 0.1 | Exch | 0.49700 | 0.44274 | 0.42543 | 0.38569 |
| 100 | 0.3 | 0.1 | AR(1) | 0.50975 | 0.45279 | 0.41175 | 0.37517 |
| 100 | 0.3 | 0.1 | Ind | 0.53253 | 0.48246 | 0.43528 | 0.38958 |
| 100 | 0.7 | 0.06 | Exch | 0.26753 | 0.23767 | 0.28060 | 0.24274 |
| 100 | 0.7 | 0.06 | AR(1) | 0.30773 | 0.27131 | 0.28723 | 0.23802 |
| 100 | 0.7 | 0.06 | Ind | 0.42582 | 0.39690 | 0.47786 | 0.43469 |
| 100 | 0.7 | 0.1 | Exch | 0.33180 | 0.25774 | 0.36312 | 0.30250 |
| 100 | 0.7 | 0.1 | AR(1) | 0.39815 | 0.31492 | 0.38295 | 0.30005 |
| 100 | 0.7 | 0.1 | Ind | 0.51711 | 0.46137 | 0.50765 | 0.46074 |

YP: contamination rate of the response variable $Y$.

Table 3. MSE (×100) under contamination scheme C2 ($n = 50$).

| n | ρ | XP | WR | TR = Exch: ELGEE | TR = Exch: RELGEE | TR = AR(1): ELGEE | TR = AR(1): RELGEE |
|---|---|----|----|------------------|-------------------|-------------------|--------------------|
| 50 | 0.3 | 0.06 | Exch | 12.91886 | 0.99187 | 13.97928 | 1.02373 |
| 50 | 0.3 | 0.06 | AR(1) | 12.82331 | 0.99716 | 14.12307 | 1.02851 |
| 50 | 0.3 | 0.06 | Ind | 13.06875 | 1.03024 | 13.80362 | 1.00403 |
| 50 | 0.3 | 0.1 | Exch | 42.76916 | 3.00972 | 39.56463 | 2.88551 |
| 50 | 0.3 | 0.1 | AR(1) | 43.36231 | 3.05236 | 39.30508 | 2.88421 |
| 50 | 0.3 | 0.1 | Ind | 43.31341 | 3.05817 | 40.03126 | 2.89432 |
| 50 | 0.7 | 0.06 | Exch | 14.44630 | 0.88005 | 13.3182 | 0.89685 |
| 50 | 0.7 | 0.06 | AR(1) | 14.57675 | 0.93714 | 13.72404 | 0.87365 |
| 50 | 0.7 | 0.06 | Ind | 14.76241 | 1.09283 | 13.10561 | 0.93932 |
| 50 | 0.7 | 0.1 | Exch | 39.52561 | 2.89129 | 37.91395 | 2.73270 |
| 50 | 0.7 | 0.1 | AR(1) | 40.71029 | 3.08226 | 37.78436 | 2.74435 |
| 50 | 0.7 | 0.1 | Ind | 40.76240 | 3.18213 | 39.59294 | 2.85362 |

XP: contamination rate of the covariates $X$.

Table 4. MSE (×100) under contamination scheme C3 ($n = 50$).

| n | ρ | YXP | WR | TR = Exch: ELGEE | TR = Exch: RELGEE | TR = AR(1): ELGEE | TR = AR(1): RELGEE |
|---|---|-----|----|------------------|-------------------|-------------------|--------------------|
| 50 | 0.3 | 0.06 | Exch | 17.02696 | 1.67541 | 13.75782 | 1.83632 |
| 50 | 0.3 | 0.06 | AR(1) | 17.02611 | 1.68535 | 13.80805 | 1.77938 |
| 50 | 0.3 | 0.06 | Ind | 16.81146 | 1.69671 | 13.69043 | 1.70931 |
| 50 | 0.3 | 0.1 | Exch | 17.74346 | 1.62392 | 15.18327 | 1.75757 |
| 50 | 0.3 | 0.1 | AR(1) | 18.27163 | 1.59380 | 15.07867 | 1.73827 |
| 50 | 0.3 | 0.1 | Ind | 19.71323 | 1.63598 | 15.07310 | 1.73339 |
| 50 | 0.7 | 0.06 | Exch | 19.71323 | 1.66511 | 16.65348 | 1.49746 |
| 50 | 0.7 | 0.06 | AR(1) | 20.38681 | 1.82246 | 16.67835 | 1.47826 |
| 50 | 0.7 | 0.06 | Ind | 19.18665 | 1.92724 | 15.74532 | 1.71987 |
| 50 | 0.7 | 0.1 | Exch | 15.62539 | 1.63831 | 15.67169 | 1.71610 |
| 50 | 0.7 | 0.1 | AR(1) | 17.59703 | 1.83022 | 16.35533 | 1.61183 |
| 50 | 0.7 | 0.1 | Ind | 14.57003 | 2.01667 | 16.47950 | 1.73774 |

YXP: contamination rate of the response variable $Y$ when the contamination rate of the covariates $X$ is 0.06.

From Tables 2-4 we can see that in every contaminated setting the mean squared error of the proposed estimator is smaller than that of the ELGEE estimator. Comparing different contamination intensities, the heavier the contamination, the more pronounced the efficiency advantage of the robust estimator, which shows that the proposed method is strongly robust. We can also see that when the working matrix is misspecified, the difference between estimators is relatively small; the estimation efficiency is highest when the working correlation matrix equals the true matrix, and lowest when the working matrix is the independence structure (Ind), which ignores the correlation in the data.

It is worth adding that, because the proposed method is non-parametric, the random errors need not be normally distributed. We therefore repeat the simulation with random errors following a t(3) distribution. The results are shown in Table 5.

In Table 5, the RELGEE estimator again attains a smaller MSE than ELGEE in every setting, which confirms that the method remains efficient under heavy-tailed errors.

Table 5. Bias (×100) and MSE (×100) of ELGEE and RELGEE with t(3) errors ($n = 50$).

| ρ | WR | Bias: ELGEE | Bias: RELGEE | MSE: ELGEE | MSE: RELGEE |
|---|----|-------------|--------------|------------|-------------|
| 0.3 | Exch | −0.03467 | −0.05221 | 1.81182 | 1.01774 |
| 0.3 | AR(1) | 0.17968 | 0.30765 | 1.75744 | 1.15462 |
| 0.3 | Ind | 0.26076 | 0.11889 | 2.14819 | 1.30291 |
| 0.7 | Exch | 0.55148 | 0.64514 | 0.84950 | 0.58737 |
| 0.7 | AR(1) | 0.18788 | 0.18082 | 1.25479 | 0.67416 |
| 0.7 | Ind | −0.19433 | −0.07454 | 2.04687 | 1.18954 |

We introduced the generalized estimating equations commonly used for longitudinal data and derived a robust estimating function. We then combined the robust estimating equations with the element-wise empirical likelihood method to obtain the empirical likelihood ratio function of the parameters, and presented a relatively optimized algorithm that improves the efficiency and computation time of the procedure. We carried out a systematic simulation study. The results show that our method maintains high estimation efficiency when the data are not contaminated; under contamination, the proposed estimator is clearly better than the non-robust estimator, and the heavier the contamination, the more pronounced its robustness and resistance to outliers. When the working matrix is misspecified, the difference between estimators is relatively small; the estimation efficiency is highest when the working correlation matrix equals the true matrix, and lowest when the working matrix is the independence structure (Ind), which ignores the correlation in the data. Moreover, since our estimator is based on the empirical likelihood method, it is suitable for longitudinal data with heavy-tailed distributions. Several problems remain for further study, such as applying the estimation method to partially linear models and variable selection based on robust estimation.

The authors declare no conflicts of interest regarding the publication of this paper.

Huang, T.Y., Fan, Y.L. and Sun, Z.R. (2019) Robust Element-Wise Empirical Likelihood Estimation Method for Longitudinal Data. Journal of Applied Mathematics and Physics, 7, 1408-1420. https://doi.org/10.4236/jamp.2019.76094