
ESL-Based Robust Estimation for Mean-Covariance Regression with Longitudinal Data

PP. 10-30. DOI: 10.4236/ojs.2020.101002

ABSTRACT

When longitudinal data contain outliers, the classical least squares approach is known to lack robustness. To address this issue, the exponential squared loss (ESL) function with a tuning parameter has been investigated for longitudinal data. However, to our knowledge, no existing work investigates robust estimation against outliers within the framework of mean-covariance regression analysis for longitudinal data using the ESL function. In this paper, we propose a robust approach, based on the ESL function, for estimating the mean parameters and the generalized autoregressive parameters with longitudinal data. The proposed estimators are shown to be asymptotically normal under certain conditions. Moreover, we develop an iteratively reweighted least squares (IRLS) algorithm to compute the parameter estimates, and a balance between robustness and efficiency can be achieved by choosing appropriate data-adaptive tuning parameters. Simulation studies and a real data analysis illustrate the finite-sample performance of the proposed approach.

1. Introduction

Longitudinal data arise frequently in many fields, such as biological research and the social sciences. Observations on the same subject are measured repeatedly over time and are thus intrinsically correlated [1]. The covariance matrix of such data is important, since ignoring the correlation structure may lead to inefficient estimators of the mean parameters. Furthermore, the covariance matrix itself may be of scientific interest [2]. However, modeling the covariance matrix is challenging: it is subject to a positive-definiteness constraint and involves many unknown parameters. A common strategy to avoid this challenge is to specify a working correlation structure [3], which does not permit more general structures and cannot flexibly incorporate covariates that may help explain the covariation. To overcome this limitation, joint modeling of the mean and covariance of longitudinal data has received increasing interest recently; see, for example, [4] - [12]. Among these joint modeling approaches, the modified Cholesky decomposition (MCD) of the covariance matrix proposed in [4] [5] is attractive because it automatically yields a positive definite covariance matrix, and its parameters can be interpreted through familiar statistical concepts. As a result, regression techniques and model-based inference can be applied to the parameters of this decomposition; see [7] for more details.

However, the aforementioned approaches are very sensitive to outliers and heavy-tailed distributions. [13] proposed a robust procedure for modeling the correlation matrix of longitudinal data based on an alternative Cholesky decomposition and heavy-tailed multivariate t-distributions with unknown degrees of freedom. It should be pointed out that the use of the multivariate t-distribution alone does not necessarily guarantee robustness. In addition, [14] [15] developed robust generalized estimating equations (GEE) for the regression parameters in joint mean-covariance models by employing the Huber function and leverage-based weights. [16] developed efficient parameter estimation via the MCD for quantile regression with longitudinal data. [17] further proposed a moving average Cholesky factor model, transformed from the MCD, for covariance modeling in composite quantile regression with longitudinal data. Then, [18] carried out smoothed empirical likelihood inference via the MCD for quantile varying coefficient models with longitudinal data. Later, [19] developed quantile estimation via the MCD for longitudinal single-index models.

Although M-type regression and quantile regression procedures can handle outliers and heavy-tailed errors, they may lose efficiency under the normal distribution. To overcome this difficulty, [20] recently proposed a robust variable selection approach by adopting the exponential squared loss (ESL) function with a tuning parameter. They showed that, with a properly selected tuning parameter, the proposed approach not only achieves good robustness with respect to outliers in the dataset, but is also asymptotically as efficient as least squares estimation under normal errors without outliers. Later, several authors employed the ESL function for longitudinal data. For example, [21] proposed an efficient and robust variable selection method for longitudinal generalized linear models based on GEE. [22] proposed a robust and efficient estimation procedure for simultaneous model structure identification and variable selection in generalized partial linear varying coefficient models for longitudinal data. [23] developed a GEE-based robust estimation and empirical likelihood inference approach with the ESL function for panel data models. With a similar loss function, [24] proposed a robust variable selection method in modal varying-coefficient models with longitudinal data. [25] proposed modal regression statistical inference for longitudinal semivarying coefficient models, including GEE, empirical likelihood and variable selection. However, to our knowledge, no existing work investigates robust estimation against outliers within the framework of mean-covariance regression analysis for longitudinal data employing the ESL function.

In this paper, we propose a robust estimation approach, based on the ESL function, for the mean parameters and the generalized autoregressive parameters in the within-subject covariance matrices of longitudinal data. We begin with ESL-based estimation of the mean parameters, pretending that the repeated measurements within a subject are independent. Then, based on these rough initial estimates of the mean parameters, the mean and generalized autoregressive parameters are estimated simultaneously using the ESL function. The proposed estimators can be shown to be asymptotically normal under certain conditions. Moreover, we develop an iteratively reweighted least squares (IRLS) algorithm [26] to calculate the parameter estimates. Numerical studies illustrate the finite-sample performance of our approach.

The remainder of this paper is organized as follows. Section 2 develops the robust estimation procedure. Section 3 establishes the asymptotic properties of the proposed ESL estimators. The IRLS algorithm and a data-driven method for selecting the tuning parameters are presented in Section 4. Simulations are carried out in Section 5. Section 6 analyses a real data set. A discussion is given in Section 7. The technical proofs are provided in the Appendix.

2. Robust Estimation Procedure

2.1. Initial Estimate for the Mean Parameters

Consider $n$ subjects, each measured repeatedly over time. For the $i$th subject, suppose that $Y_{ij}$ is the observed scalar response at time $t_{ij}$ and $X_{ij}$ is the corresponding $p \times 1$ covariate vector, $i = 1, \dots, n$, $j = 1, \dots, m_i$. Denote $N = \sum_{i=1}^{n} m_i$. Furthermore, let $Y_i = (Y_{i1}, \dots, Y_{im_i})^{\mathrm{T}}$, $X_i = (X_{i1}, \dots, X_{im_i})^{\mathrm{T}}$ and $t_i = (t_{i1}, \dots, t_{im_i})^{\mathrm{T}}$. A longitudinal linear regression model has the form

$$Y_{ij} = X_{ij}^{\mathrm{T}}\beta + \varepsilon_{ij}, \quad i = 1, \dots, n, \; j = 1, \dots, m_i, \qquad (1)$$

where $\beta$ is the $p \times 1$ vector of associated parameters, and $\varepsilon_i = (\varepsilon_{i1}, \dots, \varepsilon_{im_i})^{\mathrm{T}}$ is the random error vector satisfying $E(\varepsilon_{ij} | X_{ij}) = 0$ and $\operatorname{var}(\varepsilon_i | X_i) = \Sigma_i$.

Define the ESL function $\phi_\tau(t) = \exp(-t^2/\tau)$. We first estimate $\beta$ pretending that the random errors $\varepsilon_{ij}$ are independent. More specifically, we estimate $\beta$ by maximizing the objective function

$$L_1(\beta) = \sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\beta), \qquad (2)$$

where $\tau_1$ is a tuning parameter. The resulting initial estimate of $\beta$ is denoted by $\tilde{\beta}$.
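To see how this objective downweights outliers, note that each observation's contribution $\phi_{\tau_1}(r) = \exp(-r^2/\tau_1)$ decays rapidly in the residual $r$. A minimal numerical sketch (the residual values and the choice of $\tau$ are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def esl(t, tau):
    # ESL contribution exp(-t^2 / tau): maximizing the sum of these
    # is equivalent to minimizing the loss 1 - exp(-t^2 / tau).
    return np.exp(-t ** 2 / tau)

tau = 2.0
residuals = np.array([0.1, 0.5, 10.0])   # small, moderate, gross outlier
w = esl(residuals, tau)
# w[0] stays close to 1, while the gross outlier's contribution w[2]
# is essentially 0, so it barely influences the fitted parameters.
```

A larger $\tau$ brings the criterion closer to (a rescaled) least squares, while a smaller $\tau$ discards outlying residuals more aggressively; Section 4.2 chooses $\tau$ from the data.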

2.2. Simultaneous Estimate for the Mean and Generalized Autoregressive Parameters

Based on the Cholesky decomposition, there exists a lower triangular matrix $\Phi_i$ with ones on its diagonal such that

$$\operatorname{var}(\Phi_i \varepsilon_i | X_i) = \Phi_i \Sigma_i \Phi_i^{\mathrm{T}} = D_i,$$

where $D_i = \operatorname{diag}(d_{i1}^2, \dots, d_{im_i}^2)$. In other words, letting $\epsilon_i = (\epsilon_{i1}, \dots, \epsilon_{im_i})^{\mathrm{T}} = \Phi_i \varepsilon_i$, we obtain

$$\varepsilon_{i1} = \epsilon_{i1}, \qquad \varepsilon_{ij} = \phi_{j,1}^{(i)}\varepsilon_{i1} + \dots + \phi_{j,j-1}^{(i)}\varepsilon_{i,j-1} + \epsilon_{ij}, \quad i = 1, \dots, n, \; j = 2, \dots, m_i, \qquad (3)$$

where $\phi_{j,k}^{(i)}$ is the negative of the $(j,k)$th element of $\Phi_i$. Obviously, the $\epsilon_{ij}$'s are uncorrelated, with $E(\epsilon_{ij}) = 0$ and $\operatorname{var}(\epsilon_{ij}) = d_{ij}^2$, $j = 1, \dots, m_i$. If the $\varepsilon_{ij}$'s were available, then (1) could be transformed into the following linear model with uncorrelated errors $\epsilon_{ij}$:

$$Y_{i1} = X_{i1}^{\mathrm{T}}\beta + \epsilon_{i1}, \qquad Y_{ij} = X_{ij}^{\mathrm{T}}\beta + \phi_{j,1}^{(i)}\varepsilon_{i1} + \dots + \phi_{j,j-1}^{(i)}\varepsilon_{i,j-1} + \epsilon_{ij}, \quad i = 1, \dots, n, \; j = 2, \dots, m_i. \qquad (4)$$

[4] pointed out that the MCD has a well-founded statistical interpretation and has the advantage that the generalized autoregressive parameters $\phi_{j,k}^{(i)}$ and the log-innovation variances $\log(d_{ij}^2)$ are unconstrained. For simplicity, we assume that $d_{ij}^2 = d^2$ for all $i = 1, \dots, n$, $j = 2, \dots, m_i$. Since $\Sigma_i = \operatorname{var}(\varepsilon_i | X_i)$ may depend on $X_i$, we adopt the more parsimonious structure

$$\phi_{j,k}^{(i)} = Z_{j,k}^{(i)\mathrm{T}}\gamma, \qquad (5)$$

where $\gamma = (\gamma_1, \dots, \gamma_q)^{\mathrm{T}}$ is the regression coefficient, and the covariates $Z_{j,k}^{(i)} = (Z_{j,k,1}^{(i)}, \dots, Z_{j,k,q}^{(i)})^{\mathrm{T}}$ may contain the time, the baseline covariates $X_i$, their interactions and so on.
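The decomposition underlying (3) can be illustrated numerically. The following sketch recovers the unit lower-triangular factor $\Phi_i$ and the innovation variances $d_{ij}^2$ from a given covariance matrix via the standard Cholesky factor; the AR(1)-type matrix used here is an assumption for illustration only:

```python
import numpy as np

def mcd(sigma):
    """Modified Cholesky decomposition: return (phi, d2) such that
    phi @ sigma @ phi.T = diag(d2), with phi unit lower-triangular.
    The generalized autoregressive parameters are -phi[j, k], k < j."""
    C = np.linalg.cholesky(sigma)            # sigma = C C^T, C lower
    d = np.diag(C)
    phi = np.diag(d) @ np.linalg.inv(C)      # unit lower-triangular
    return phi, d ** 2

# AR(1)-type covariance as a toy example (illustrative assumption)
rho = 0.5
sigma = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
phi, d2 = mcd(sigma)
# For this AR(1) structure, regressing eps_j on its predecessors puts
# coefficient rho on eps_{j-1}, i.e. -phi[j, j-1] = rho.
```

This mirrors the interpretation in the text: row $j$ of $\Phi_i$ encodes the regression of $\varepsilon_{ij}$ on $\varepsilon_{i1},\dots,\varepsilon_{i,j-1}$, and the diagonal of $D_i$ holds the innovation variances.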

Based on $\tilde{\beta}$, we can obtain the estimated residuals

$$\tilde{\varepsilon}_{ij} = Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}, \quad i = 1, \dots, n, \; j = 1, \dots, m_i - 1. \qquad (6)$$

From (4), (5) and (6), we can obtain the simultaneous estimate $\hat{\theta} = (\hat{\beta}^{\mathrm{T}}, \hat{\gamma}^{\mathrm{T}})^{\mathrm{T}}$ of $\theta = (\beta^{\mathrm{T}}, \gamma^{\mathrm{T}})^{\mathrm{T}}$ by maximizing the following objective function:

$$\begin{aligned} L_2(\theta) &= \sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}\big(Y_{ij} - X_{ij}^{\mathrm{T}}\beta - \tilde{\varepsilon}_{i1} Z_{j,1}^{(i)\mathrm{T}}\gamma - \dots - \tilde{\varepsilon}_{i,j-1} Z_{j,j-1}^{(i)\mathrm{T}}\gamma\big) \\ &= \sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}\Big(Y_{ij} - X_{ij}^{\mathrm{T}}\beta - \sum_{k=1}^{j-1}\tilde{\varepsilon}_{i,k} Z_{j,k,1}^{(i)}\gamma_1 - \dots - \sum_{k=1}^{j-1}\tilde{\varepsilon}_{i,k} Z_{j,k,q}^{(i)}\gamma_q\Big) \\ &= \sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}\big(Y_{ij} - X_{ij}^{\mathrm{T}}\beta - \tilde{\zeta}_{ij}^{\mathrm{T}}\gamma\big) = \sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}\big(Y_{ij} - \tilde{\delta}_{ij}^{\mathrm{T}}\theta\big), \end{aligned} \qquad (7)$$

where $\tilde{\delta}_{ij} = (X_{ij}^{\mathrm{T}}, \tilde{\zeta}_{ij}^{\mathrm{T}})^{\mathrm{T}}$ with $\tilde{\zeta}_{ij} = \big(\sum_{k=1}^{j-1}\tilde{\varepsilon}_{i,k} Z_{j,k,1}^{(i)}, \dots, \sum_{k=1}^{j-1}\tilde{\varepsilon}_{i,k} Z_{j,k,q}^{(i)}\big)^{\mathrm{T}}$. Then we can obtain the estimates of the $\phi_{j,k}^{(i)}$'s in model (3) by combining (5) and $\hat{\gamma}$.

3. Asymptotic Properties

Define $F_1(x, \tau_1) = E(\phi_{\tau_1}''(\varepsilon_{ij}) | X_{ij} = x)$, $G_1(x, \tau_1) = E(\phi_{\tau_1}'(\varepsilon_{ij})^2 | X_{ij} = x)$, $F_2(x, \tau_2) = E(\phi_{\tau_2}''(\epsilon_{ij}) | X_{ij} = x)$, $G_2(x, \tau_2) = E(\phi_{\tau_2}'(\epsilon_{ij})^2 | X_{ij} = x)$ and $H(\bar{x}_{ij}) = E(\zeta_{ij}\zeta_{ij}^{\mathrm{T}} | \bar{X}_{ij} = \bar{x}_{ij})$, where $\bar{X}_{ij} = (X_{i1}, \dots, X_{ij})^{\mathrm{T}}$, $\bar{x}_{ij} = (x_{i1}, \dots, x_{ij})^{\mathrm{T}}$, $\delta_{ij} = (X_{ij}^{\mathrm{T}}, \zeta_{ij}^{\mathrm{T}})^{\mathrm{T}}$ and $\zeta_{ij} = \big(\sum_{k=1}^{j-1}\varepsilon_{i,k} Z_{j,k,1}^{(i)}, \dots, \sum_{k=1}^{j-1}\varepsilon_{i,k} Z_{j,k,q}^{(i)}\big)^{\mathrm{T}}$. To establish the asymptotic properties of the proposed ESL estimators, assume that the following regularity conditions hold:

(C1) There exists a positive integer $M$ such that $\max_{1 \le i \le n} m_i \le M < \infty$. This implies that $n$ and $N$ have the same order.

(C2) There exists a positive constant $C$ such that $|X_{ij}^{(k)}| \le C$ for $1 \le i \le n$, $1 \le j \le m_i$ and $0 \le k \le p$. In addition, $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij} X_{ij}^{\mathrm{T}}$ converges to a finite positive definite matrix in probability.

(C3) $F_1(x, \tau_1)$ and $G_1(x, \tau_1)$ are continuous with respect to $x$. Moreover, for any $\tau_1 > 0$, $F_1(x, \tau_1) < 0$.

(C4) $E(\phi_{\tau_1}'(\varepsilon_{ij}) | X_{ij} = x) = 0$, and $E(|\phi_{\tau_1}'(\varepsilon_{ij})|^3 | X_{ij} = x)$, $E(\phi_{\tau_1}''(\varepsilon_{ij})^2 | X_{ij} = x)$, $E(\phi_{\tau_1}'''(\varepsilon_{ij}) | X_{ij} = x)$ and $E(\phi_{\tau_1}'''(\varepsilon_{ij})^2 | X_{ij} = x)$ are continuous with respect to $x$.

(C5) $\frac{1}{n}\sum_{i=1}^{n} E\big(X_i^{\mathrm{T}}\phi_{\tau_1}'(\varepsilon_i)\phi_{\tau_1}'(\varepsilon_i)^{\mathrm{T}} X_i\big)$ converges to a finite positive definite matrix, where $\phi_{\tau_1}'(\varepsilon_i) = (\phi_{\tau_1}'(\varepsilon_{i1}), \dots, \phi_{\tau_1}'(\varepsilon_{im_i}))^{\mathrm{T}}$.

(C6) For $1 \le i \le n$ and $1 \le j_1, \dots, j_6 \le m_i - 1$, $E\{|\varepsilon_{ij_1}\varepsilon_{ij_2}\varepsilon_{ij_3}| \mid (X_{ij_1}, X_{ij_2}, X_{ij_3}) = x\}$ and $E\{\varepsilon_{ij_1}\cdots\varepsilon_{ij_k} \mid (X_{ij_1}, \dots, X_{ij_k}) = x\}$ $(k = 2, 3, 4, 6)$ are continuous with respect to $x$.

(C7) $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \zeta_{ij}\zeta_{ij}^{\mathrm{T}}$ converges to a finite positive definite matrix in probability.

(C8) $F_2(x, \tau_2)$ and $G_2(x, \tau_2)$ are continuous with respect to $x$. Moreover, for any $\tau_2 > 0$, $F_2(x, \tau_2) < 0$.

(C9) $E(\phi_{\tau_2}'(\epsilon_{ij}) | X_{ij} = x) = 0$, and $E(|\phi_{\tau_2}'(\epsilon_{ij})|^3 | X_{ij} = x)$, $E(\phi_{\tau_2}''(\epsilon_{ij})^2 | X_{ij} = x)$, $E(\phi_{\tau_2}'''(\epsilon_{ij}) | X_{ij} = x)$ and $E(\phi_{\tau_2}'''(\epsilon_{ij})^2 | X_{ij} = x)$ are continuous with respect to $x$.

Theorem 1 If regularity conditions (C1)-(C5) hold, then

$$\sqrt{n}(\tilde{\beta} - \beta_0) \to N(0, \Sigma_1^{-1}\Sigma_2\Sigma_1^{-1})$$

in distribution as $n \to \infty$, where

$$\Sigma_1 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\{F_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\}, \qquad \Sigma_2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} E\big(X_i^{\mathrm{T}}\phi_{\tau_1}'(\varepsilon_i)\phi_{\tau_1}'(\varepsilon_i)^{\mathrm{T}} X_i\big).$$

If the random errors $\varepsilon_{ij}$ are additionally independent, we have the following corollary, which parallels Corollary 2 and is useful for the choice of tuning parameters in Section 4.2.

Corollary 1 If regularity conditions (C1)-(C4) hold and the $\varepsilon_{ij}$'s are independent, then

$$\sqrt{n}(\tilde{\beta} - \beta_0) \to N(0, \Sigma_1^{-1}\Sigma_2\Sigma_1^{-1})$$

in distribution as $n \to \infty$, where

$$\Sigma_1 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\{F_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\}, \qquad \Sigma_2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\{G_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\}.$$

Theorem 2 If regularity conditions (C1)-(C9) hold, then

$$\sqrt{n}(\hat{\theta} - \theta_0) \to N(0, \Sigma_3^{-1}\Sigma_4\Sigma_3^{-1})$$

in distribution as $n \to \infty$, where

$$\Sigma_3 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\big\{F_2(X_{ij}, \tau_2)\, E(\delta_{ij}\delta_{ij}^{\mathrm{T}} | \bar{X}_{ij})\big\}, \qquad \Sigma_4 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\big\{G_2(X_{ij}, \tau_2)\, E(\delta_{ij}\delta_{ij}^{\mathrm{T}} | \bar{X}_{ij})\big\}.$$

In fact, we have

$$E(\delta_{ij}\delta_{ij}^{\mathrm{T}} | \bar{X}_{ij}) = \begin{pmatrix} X_{ij} X_{ij}^{\mathrm{T}} & 0 \\ 0 & H(\bar{X}_{ij}) \end{pmatrix}. \qquad (8)$$

Then it can be deduced that $\hat{\beta}$ and $\hat{\gamma}$ are asymptotically independent; that is, the following corollaries hold.

Corollary 2 If regularity conditions (C1)-(C9) hold, then

$$\sqrt{n}(\hat{\beta} - \beta_0) \to N(0, \Sigma_5^{-1}\Sigma_6\Sigma_5^{-1})$$

in distribution as $n \to \infty$, where

$$\Sigma_5 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\{F_2(X_{ij}, \tau_2) X_{ij} X_{ij}^{\mathrm{T}}\}, \qquad \Sigma_6 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\{G_2(X_{ij}, \tau_2) X_{ij} X_{ij}^{\mathrm{T}}\}.$$

Corollary 3 If regularity conditions (C1)-(C9) hold, then

$$\sqrt{n}(\hat{\gamma} - \gamma_0) \to N(0, \Sigma_7^{-1}\Sigma_8\Sigma_7^{-1})$$

in distribution as $n \to \infty$, where

$$\Sigma_7 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\{F_2(X_{ij}, \tau_2) H(\bar{X}_{ij})\}, \qquad \Sigma_8 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} E\{G_2(X_{ij}, \tau_2) H(\bar{X}_{ij})\}.$$

4. Implementation of the ESL Estimator

4.1. IRLS Algorithm

In this subsection, we develop an IRLS algorithm to calculate the parameter estimates. The IRLS algorithm has been commonly adopted for general M-estimators. Since the maximizers of (2) and (7) can be regarded as special M-estimators, the IRLS algorithm can be carried out to find β ˜ and θ ^ . In the following, we first develop the IRLS algorithm to find the maximizer of (2), and then we can calculate the maximizer of (7) in a similar way. Later, we summarize the algorithm in detail.

Because $\tilde{\beta}$ maximizes (2), it satisfies the normal equation

$$\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij}\, \phi_{\tau_1}'(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}) = 0,$$

or equivalently, since $\phi_{\tau_1}'(t) = -(2t/\tau_1)\phi_{\tau_1}(t)$,

$$\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij}\, \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta})\,(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}) = 0. \qquad (9)$$

Let $W_{ij} = \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta})$; then (9) can be rewritten as

$$\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij} W_{ij} (Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}) = 0,$$

or $X^{\mathrm{T}} W X \tilde{\beta} = X^{\mathrm{T}} W Y$, where $Y = (Y_1^{\mathrm{T}}, \dots, Y_n^{\mathrm{T}})^{\mathrm{T}}$, $X = (X_1^{\mathrm{T}}, \dots, X_n^{\mathrm{T}})^{\mathrm{T}}$ and $W = \operatorname{diag}(W_{11}, \dots, W_{n,m_n})$. Given the $k$th approximation $\tilde{\beta}^{(k)}$, we can compute the corresponding weight matrix $W^{(k)}$ with $W_{ij}^{(k)} = \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}^{(k)})$. Then we have

$$\tilde{\beta}^{(k+1)} = (X^{\mathrm{T}} W^{(k)} X)^{-1} X^{\mathrm{T}} W^{(k)} Y. \qquad (10)$$

Iterating (10) never decreases the objective function (2); that is, $L_1(\tilde{\beta}^{(k+1)}) - L_1(\tilde{\beta}^{(k)}) \ge 0$. In fact,

$$\log\{L_1(\tilde{\beta}^{(k+1)})\} - \log\{L_1(\tilde{\beta}^{(k)})\} = \log\Big(\sum_{i=1}^{n}\sum_{j=1}^{m_i} W_{ij}^{(k+1)}\Big) - \log\Big(\sum_{i=1}^{n}\sum_{j=1}^{m_i} W_{ij}^{(k)}\Big) = \log\Bigg(\frac{\sum_{i=1}^{n}\sum_{j=1}^{m_i} W_{ij}^{(k+1)}}{\sum_{i=1}^{n}\sum_{j=1}^{m_i} W_{ij}^{(k)}}\Bigg) = \log\Bigg(\sum_{i=1}^{n}\sum_{j=1}^{m_i} \pi_{ij}^{(k)}\,\frac{W_{ij}^{(k+1)}}{W_{ij}^{(k)}}\Bigg),$$

where $\pi_{ij}^{(k)} = W_{ij}^{(k)} \big/ \sum_{i'=1}^{n}\sum_{j'=1}^{m_{i'}} W_{i'j'}^{(k)}$. By Jensen's inequality, we have

$$\log\{L_1(\tilde{\beta}^{(k+1)})\} - \log\{L_1(\tilde{\beta}^{(k)})\} \ge \sum_{i=1}^{n}\sum_{j=1}^{m_i} \pi_{ij}^{(k)} \log\Big(\frac{W_{ij}^{(k+1)}}{W_{ij}^{(k)}}\Big) = \frac{1}{\tau_1}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \pi_{ij}^{(k)}\big\{(Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}^{(k)})^2 - (Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}^{(k+1)})^2\big\}.$$

From expression (10), we find that $\tilde{\beta}^{(k+1)}$ minimizes $(Y - X\beta)^{\mathrm{T}} W^{(k)}(Y - X\beta) = \sum_{i=1}^{n}\sum_{j=1}^{m_i} W_{ij}^{(k)}(Y_{ij} - X_{ij}^{\mathrm{T}}\beta)^2$. Then we have $\log\{L_1(\tilde{\beta}^{(k+1)})\} - \log\{L_1(\tilde{\beta}^{(k)})\} \ge 0$.

The IRLS algorithm is summarized as follows:

Step 1. Computation of $\tilde{\beta}$ by maximizing (2). Take the initial value $\tilde{\beta}^{(0)}$ to be the ordinary least squares (OLS) estimator. Given the $k$th approximation $\tilde{\beta}^{(k)}$, the IRLS iteration updates $\tilde{\beta}^{(k+1)}$ through (10). Repeat this iteration until convergence. The resulting estimator is denoted by $\tilde{\beta}$.

Step 2. Computation of $\hat{\theta} = (\hat{\beta}^{\mathrm{T}}, \hat{\gamma}^{\mathrm{T}})^{\mathrm{T}}$ by maximizing (7). Take $\hat{\theta}^{(0)}$ to be the OLS estimator minimizing $\sum_{i=1}^{n}\sum_{j=2}^{m_i}(Y_{ij} - \tilde{\delta}_{ij}^{\mathrm{T}}\theta)^2$. Similarly to (10), given the $k$th approximation $\hat{\theta}^{(k)}$, the IRLS iteration updates

$$\hat{\theta}^{(k+1)} = (\tilde{\Delta}^{\mathrm{T}} V^{(k)}\tilde{\Delta})^{-1}\tilde{\Delta}^{\mathrm{T}} V^{(k)} Y,$$

where $\tilde{\Delta} = (\tilde{\delta}_1^{\mathrm{T}}, \dots, \tilde{\delta}_n^{\mathrm{T}})^{\mathrm{T}}$ with $\tilde{\delta}_i = (\tilde{\delta}_{i2}, \dots, \tilde{\delta}_{im_i})^{\mathrm{T}}$, $V^{(k)} = \operatorname{diag}(V_{12}^{(k)}, \dots, V_{n,m_n}^{(k)})$, and $V_{ij}^{(k)} = \phi_{\tau_2}(Y_{ij} - \tilde{\delta}_{ij}^{\mathrm{T}}\hat{\theta}^{(k)})$. Repeat this iteration until convergence. The resulting estimator is denoted by $\hat{\theta}$.
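As an illustration, Step 1 might be implemented as follows. This is a sketch under simplifying assumptions (a single stacked design matrix, a hard-coded tolerance, and synthetic contaminated data); the weights and the update follow (10), and Step 2 is analogous with $(\tilde{\Delta}, \tau_2)$ in place of $(X, \tau_1)$:

```python
import numpy as np

def irls_esl(X, y, tau, n_iter=100, tol=1e-8):
    """IRLS sketch for the ESL estimate of beta in y ~ X beta.
    Each pass recomputes the weights W_ij = exp(-r_ij^2 / tau) and
    applies the weighted least squares update (10)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS initial value
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.exp(-r ** 2 / tau)                 # ESL weights
        XtW = X.T * w                             # X^T W via broadcasting
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Synthetic check (assumed data, not from the paper): 5% gross outliers.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(size=200)
y[:10] += 15.0
beta_hat = irls_esl(X, y, tau=2.0)
```

Because the weights enter multiplicatively, each update is an ordinary weighted least squares solve, which is why, as shown above, the objective never decreases along the iterations.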

4.2. The Choice of Tuning Parameters

In this subsection, we give a data-driven method to determine the tuning parameters $\{\tau_1, \tau_2\}$. To simplify the calculation with respect to $\tau_1$, we assume that the random errors $\varepsilon_{ij}$ are independent of each other and of the $X_{ij}$'s, so that $F_1$ and $G_1$ do not depend on $x$. Then, from Corollary 1, the ratio between the asymptotic variance of the initial ESL estimator of $\beta$ and that of the OLS estimator is

$$r(\tau_1) \equiv \frac{G_1(\tau_1)}{F_1^2(\tau_1)\,\sigma^2},$$

where $\sigma^2 = \operatorname{var}(\varepsilon_{ij})$. Therefore, the ideal choice of $\tau_1$ is

$$\tau_{1,\mathrm{opt}} = \arg\min_{\tau_1} r(\tau_1) = \arg\min_{\tau_1} G_1(\tau_1) F_1^{-2}(\tau_1)/\sigma^2.$$

Then $r(\tau_1)$ can be estimated by $\tilde{G}_1(\tau_1)\tilde{F}_1^{-2}(\tau_1)/\tilde{\sigma}^2$, where

$$\tilde{F}_1(\tau_1) = \frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}''(\tilde{\varepsilon}_{ij}) \quad \text{and} \quad \tilde{G}_1(\tau_1) = \frac{1}{N}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}'(\tilde{\varepsilon}_{ij})^2,$$

with $\tilde{\varepsilon}_{ij} = Y_{ij} - X_{ij}^{\mathrm{T}}\tilde{\beta}$, and $\tilde{\sigma}$ is the standard deviation of the $\tilde{\varepsilon}_{ij}$'s. Then $\tau_{1,\mathrm{opt}}$ can easily be obtained by a grid search, and $\tau_2$ can be chosen in a similar way.
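A grid search for $\tau_1$ might look as follows. The derivative formulas assume $\phi_\tau(t) = \exp(-t^2/\tau)$, so that $\phi_\tau'(t) = -(2t/\tau)\phi_\tau(t)$ and $\phi_\tau''(t) = (4t^2/\tau^2 - 2/\tau)\phi_\tau(t)$; the grid and the sample residuals are illustrative assumptions:

```python
import numpy as np

def select_tau(res, grid):
    """Grid search minimizing the estimated asymptotic variance ratio
    r(tau) = G1(tau) / (F1(tau)^2 * sigma^2), estimated from the
    initial residuals as in Section 4.2 (a sketch)."""
    sigma2 = np.var(res)
    best_tau, best_r = None, np.inf
    for tau in grid:
        e = np.exp(-res ** 2 / tau)
        F1 = np.mean((4 * res ** 2 / tau ** 2 - 2 / tau) * e)  # mean phi''
        G1 = np.mean(((2 * res / tau) * e) ** 2)               # mean phi'^2
        if F1 < 0:                       # keep only admissible tau, cf. (C3)
            r = G1 / (F1 ** 2 * sigma2)
            if r < best_r:
                best_tau, best_r = tau, r
    return best_tau

# Under normal errors the ratio decreases toward 1 as tau grows, so the
# search should settle on a large tau (ESL then behaves like OLS).
rng = np.random.default_rng(1)
res = rng.normal(size=2000)
tau_opt = select_tau(res, np.linspace(0.5, 30, 60))
```

With contaminated residuals the estimated ratio instead favors smaller $\tau$, which is how the procedure trades efficiency for robustness adaptively.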

5. Simulation Studies

In this section, we conduct some simulation studies to investigate the finite sample performance of the proposed approach. We generate 200 datasets and consider sample sizes n = 50 , 100 and 200. In particular, the datasets are generated from the following model:

$$Y_{ij} = X_{ij}^{\mathrm{T}}\beta + \varepsilon_{ij}, \quad i = 1, \dots, n, \; j = 1, \dots, m_i,$$

where $m_i - 1 \sim \mathrm{Binomial}(6, 0.8)$, $X_{ij1}, X_{ij2} \sim N(0,1)$, $\beta = (1, 2)^{\mathrm{T}}$, $t_{ij} \sim U(0,1)$, $Z_{j,k}^{(i)} = (1, t_{ij} - t_{ik})^{\mathrm{T}}$, $\gamma = (0.5, 0.2)^{\mathrm{T}}$, $D_i = I_{m_i}$, and $\varepsilon_i \sim N(0, \Sigma_i)$ with $\Phi_i\Sigma_i\Phi_i^{\mathrm{T}} = D_i$.

To investigate robustness, we denote the above datasets as no contamination (NC) situation and consider the following four contaminations:

1) $\varepsilon_i$ follows the multivariate $t$-distribution with 2 degrees of freedom and covariance matrix $\Sigma_i$.

2) $\varepsilon_i$ follows the multivariate $t$-distribution with 2 degrees of freedom and covariance matrix $\Sigma_i$, and 2% of the $X_{ij}$'s, chosen at random, are replaced by $X_{ij} + (6, 6)^{\mathrm{T}}$.

3) $\varepsilon_i \sim N(0, \Sigma_i)$, and 2% of the $\varepsilon_{ij}$'s, chosen at random, are replaced by $\varepsilon_{ij} + 15$.

4) $\varepsilon_i \sim N(0, \Sigma_i)$, 2% of the $\varepsilon_{ij}$'s, chosen at random, are replaced by $\varepsilon_{ij} + 15$, and 2% of the $X_{ij}$'s by $X_{ij} + (6, 6)^{\mathrm{T}}$.
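For reference, one subject from the uncontaminated design above can be generated by drawing the errors through the autoregressive recursion (3) with $d^2 = 1$; the function name and random seed are illustrative:

```python
import numpy as np

def gen_subject(rng, beta=(1.0, 2.0), gamma=(0.5, 0.2)):
    """Generate one subject from the Section 5 design:
    m_i - 1 ~ Binomial(6, 0.8), X_ij ~ N(0,1), t_ij ~ U(0,1),
    phi_{j,k}^{(i)} = gamma_1 + gamma_2 (t_ij - t_ik), D_i = I."""
    m = rng.binomial(6, 0.8) + 1
    X = rng.normal(size=(m, 2))
    t = rng.uniform(size=m)
    innov = rng.normal(size=m)            # innovations with d^2 = 1
    eps = np.empty(m)
    eps[0] = innov[0]
    for j in range(1, m):                 # antedependence recursion (3)
        phi = gamma[0] + gamma[1] * (t[j] - t[:j])
        eps[j] = phi @ eps[:j] + innov[j]
    y = X @ np.asarray(beta) + eps
    return y, X, t

rng = np.random.default_rng(2)
y, X, t = gen_subject(rng)
```

Generating the errors through the recursion avoids forming $\Sigma_i = \Phi_i^{-1} D_i \Phi_i^{-\mathrm{T}}$ explicitly, and the contaminations above can then be applied to the generated $\varepsilon_{ij}$'s or $X_{ij}$'s directly.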

We compare the proposed ESL method with the OLS method, the M-estimation method (M) in [26] and the quantile regression (QR) method in [27]. Note that the OLS, M and QR methods follow estimation procedures similar to that of ESL; the main difference is that the objective function is replaced by the respective counterpart. To assess the finite-sample performance, we calculate the mean and standard deviation (SD) of the estimators of $\beta$ and $\gamma$. The corresponding simulation results are displayed in Tables 1-5.

From Table 1, it can be observed that the performance of the M, QR and ESL methods is comparable to that of the OLS method when the error follows a normal distribution and there are no outliers in the data. From Tables 2-5, it can be seen that the M, QR and ESL methods significantly outperform the OLS method, particularly in terms of SD, in the contamination cases; moreover, the ESL method always performs best in these cases. More specifically, Table 2 and Table 3 indicate that the M, QR and ESL methods significantly outperform the OLS method with respect to both $\beta$ and $\gamma$ when the error follows a heavy-tailed distribution. Table 4 and Table 5 indicate that the M, QR and ESL methods significantly outperform the OLS method with respect to $\beta$ when there are outliers in the responses; the OLS, M and QR methods perform rather poorly with respect to $\gamma$ in this case, whereas the ESL method still performs well.

6. Real Data Analysis

In this section, we analyse the CD4 cell study, which was previously analysed in [7] [28]. This dataset comprises CD4 cell counts of 369 HIV-infected men, with 2376 measurements in total collected at different times over a period of approximately eight and a half years. The number of measurements per individual varies from 1 to 12, and the time points are not equally spaced. We use the square roots of the CD4 counts [28] to make the response variable closer to normally distributed. The six covariates are time since seroconversion $t_{ij}$, age relative to an arbitrary origin $X_{ij1}$, packs of cigarettes smoked per day $X_{ij2}$, recreational drug use $X_{ij3}$, number of sexual partners $X_{ij4}$ and mental illness score $X_{ij5}$. Figure 1 displays the sample regressogram and local linear fitted curve for the square root of the CD4 count over time, which reflects a polynomial trend in time. In the following, we use the mean model [1].

Table 1. Parameter estimates (with SD in parentheses) for the NC situation.

Table 2. Parameter estimates (with SD in parentheses) for the first contamination situation.

Table 3. Parameter estimates (with SD in parentheses) for the second contamination situation.

Table 4. Parameter estimates (with SD in parentheses) for the third contamination situation.

Table 5. Parameter estimates (with SD in parentheses) for the fourth contamination situation.

Figure 1. Sample regressogram and local linear fitted curve for the CD4 data: square root of CD4+ number versus time.

$$Y_{ij} = \beta_1 X_{ij1} + \beta_2 X_{ij2} + \beta_3 X_{ij3} + \beta_4 X_{ij4} + \beta_5 X_{ij5} + f(t_{ij}) + \varepsilon_{ij},$$

where $f(t) = \beta_0 + \beta_6 t_{+} + \beta_7 t_{+}^2$ with $t_{+} = t\, I(t > 0)$. We use a cubic polynomial to model the generalized autoregressive parameters, that is,

$$Z_{j,k}^{(i)} = \big(1,\; t_{ij} - t_{ik},\; (t_{ij} - t_{ik})^2,\; (t_{ij} - t_{ik})^3\big)^{\mathrm{T}}.$$

We apply the OLS, M and QR methods and the proposed ESL method to the CD4 cell study. To assess prediction performance, we randomly split the data into three parts, each with 369/3 = 123 subjects. We use the first two parts as the training dataset to fit the model, and then assess the out-of-sample performance on the remaining testing dataset (denoted TD). This process is repeated 200 times. We define the median absolute prediction error (MAPE) as the median of $\{|Y_{ij} - \hat{Y}_{ij}| : i \in \mathrm{TD}, j = 1, \dots, m_i\}$. To illustrate the estimation robustness of the proposed ESL method compared with the other methods, we re-analyse the dataset after including 5% outliers, randomly generated by replacing $Y_{ij}$ with $Y_{ij} + 300$. Moreover, we also re-analyse the dataset with 5% outliers only in the training dataset, in order to assess robustness in terms of prediction performance. The results are displayed in Table 6. The estimates of $\gamma$ are very similar across methods, but in the no-outlier case the estimates of $\beta$ based on the ESL method differ somewhat from those of the OLS, M and QR methods. Moreover, the MAPE of the proposed ESL method is only slightly larger than those of the others, indicating that the methods possess comparable prediction performance when there are no outliers. In the 5% outliers case, the MAPEs indicate that the ESL, M and QR methods perform similarly and much better than the OLS method. Comparing the no-outlier and 5% outliers cases, the estimates based on the ESL method vary less than those of the other methods, especially the estimates of $\gamma$; the ESL, M and QR methods are robust to the outliers, while the OLS method is adversely affected by them.

7. Conclusions

In this paper, based on the ESL function, we proposed a robust estimation approach for the mean and generalized autoregressive parameters with longitudinal

Table 6. Parameter estimates and prediction results for the CD4 data.

data. The generalized autoregressive parameters resulting from the MCD of the covariance matrix are unconstrained and can be interpreted through statistical concepts from time series. The mean and generalized autoregressive parameters can then be estimated via linear regression models using the ESL function. Moreover, a balance between robustness and efficiency can be achieved by choosing appropriate data-adaptive tuning parameters. Under certain conditions, we established the theoretical properties. Simulation studies and a real data analysis were also carried out to illustrate the finite-sample performance of our approach.

Several further problems remain to be investigated. First, the dimension of the covariates in the regression models is assumed to be fixed, so it would be interesting to extend our approach to high-dimensional settings. Second, the models can be extended to nonparametric and semiparametric models; for more discussion along this line, references including [10] [29] may be helpful. Moreover, this paper targets the conditional mean of the response given the covariates, which runs into difficulties when the conditional distribution of the response is asymmetric. In that case, the conditional mode may be a more useful summary than the conditional mean, and modal linear regression [30] may thus be an interesting direction.

Acknowledgements

We thank the editor and the referee for their comments. This work was supported by the National Natural Science Foundation of China (No. 11971001) and the Beijing Natural Science Foundation (No. 1182002).

Appendix

Lemma 1 If regularity conditions (C1)-(C5) hold, then with probability approaching 1, there exists a local maximizer of (2), denoted by $\tilde{\beta}$, such that

$$\|\tilde{\beta} - \beta_0\| = O_P(1/\sqrt{n}).$$

Proof of Lemma 1. Let $R_1(\beta) = \frac{1}{n}L_1(\beta) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\beta)$. It suffices to show that, for any given $\rho > 0$, there exists a large constant $C > 0$ such that

$$P\Big\{\sup_{\|v\| = C} R_1(\beta_0 + v/\sqrt{n}) < R_1(\beta_0)\Big\} \ge 1 - \rho, \qquad (1)$$

for any $p$-dimensional vector $v$ satisfying $\|v\| = C$. Based on the Taylor expansion, we have

$$\begin{aligned} R_1(\beta_0 + v/\sqrt{n}) - R_1(\beta_0) &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \big\{\phi_{\tau_1}\big(Y_{ij} - X_{ij}^{\mathrm{T}}(\beta_0 + v/\sqrt{n})\big) - \phi_{\tau_1}(Y_{ij} - X_{ij}^{\mathrm{T}}\beta_0)\big\} \\ &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \big\{\phi_{\tau_1}(\varepsilon_{ij} - v^{\mathrm{T}} X_{ij}/\sqrt{n}) - \phi_{\tau_1}(\varepsilon_{ij})\big\} \\ &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \Big\{-\phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}/\sqrt{n} + \frac{1}{2n}\phi_{\tau_1}''(\varepsilon_{ij})(v^{\mathrm{T}} X_{ij})^2 - \frac{1}{6 n^{3/2}}\phi_{\tau_1}'''(\varepsilon_{ij}^{*})(v^{\mathrm{T}} X_{ij})^3\Big\} \\ &\equiv I_1 + I_2 + I_3, \end{aligned}$$

where $\varepsilon_{ij}^{*}$ lies between $\varepsilon_{ij} - v^{\mathrm{T}} X_{ij}/\sqrt{n}$ and $\varepsilon_{ij}$. Then, for $I_1$, we have

$$E(I_1) = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\big\{\phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}/\sqrt{n}\big\} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\big\{E(\phi_{\tau_1}'(\varepsilon_{ij}) \,|\, X_{ij})\, v^{\mathrm{T}} X_{ij}/\sqrt{n}\big\} = 0,$$

and

$$\operatorname{var}(I_1) = E\Big\{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}/\sqrt{n}\Big\}^2 = \frac{1}{n^3}\sum_{i=1}^{n} E\Big\{\sum_{j=1}^{m_i} \phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}\Big\}^2.$$

Using the $C_r$ inequality and condition (C1), we obtain

$$E\Big\{\sum_{j=1}^{m_i} \phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}\Big\}^2 \le m_i \sum_{j=1}^{m_i} E\big\{\phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}\big\}^2 \le M \sum_{j=1}^{m_i} E\big\{\phi_{\tau_1}'(\varepsilon_{ij})\, v^{\mathrm{T}} X_{ij}\big\}^2$$

for $i = 1, \dots, n$; then we have

$$\operatorname{var}(I_1) \le \frac{M}{n^3}\sum_{i=1}^{n}\sum_{j=1}^{m_i} v^{\mathrm{T}} E\big\{\phi_{\tau_1}'(\varepsilon_{ij})^2 X_{ij} X_{ij}^{\mathrm{T}}\big\} v = \frac{M}{n^3}\sum_{i=1}^{n}\sum_{j=1}^{m_i} v^{\mathrm{T}} E\big\{G_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\big\} v = O(\|v\|^2 n^{-2}) = O(C^2 n^{-2}).$$

Therefore, $I_1 = E(I_1) + O_P\big(\sqrt{\operatorname{var}(I_1)}\big) = O_P(C n^{-1})$. Similarly, we get $I_3 = O_P(n^{-3/2})$. For $I_2$, we have

$$I_2 = \frac{1}{2n^2}\sum_{i=1}^{n}\sum_{j=1}^{m_i} v^{\mathrm{T}} E\big(F_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\big) v\,(1 + o_P(1)) = O(\|v\|^2 n^{-1}) = O(C^2 n^{-1}).$$

Since $\|v\| = C$, we can choose $C$ sufficiently large so that $I_2$ dominates both $I_1$ and $I_3$ with probability at least $1 - \rho$. From condition (C3), $F_1(x, \tau_1) < 0$, so (1) holds. The proof of Lemma 1 is finished.

Proof of Theorem 1. Let $\tilde{\varphi}_{ij} = X_{ij}^{\mathrm{T}}(\tilde{\beta} - \beta_0)$; then $\tilde{\beta}$ satisfies the following equation:

$$0 = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij}\, \phi_{\tau_1}'(\varepsilon_{ij} - \tilde{\varphi}_{ij}) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij}\Big\{\phi_{\tau_1}'(\varepsilon_{ij}) - \phi_{\tau_1}''(\varepsilon_{ij})\tilde{\varphi}_{ij} + \frac{1}{2}\phi_{\tau_1}'''(\varepsilon_{ij}^{*})\tilde{\varphi}_{ij}^2\Big\} \equiv I_4 + I_5 + I_6,$$

where $\varepsilon_{ij}^{*}$ lies between $\varepsilon_{ij} - \tilde{\varphi}_{ij}$ and $\varepsilon_{ij}$. It can easily be shown that

$$I_5 = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} \phi_{\tau_1}''(\varepsilon_{ij}) X_{ij} X_{ij}^{\mathrm{T}}\, \sqrt{n}(\tilde{\beta} - \beta_0) = -\Big[\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\big(F_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\big) + o_P(1)\Big]\sqrt{n}(\tilde{\beta} - \beta_0).$$

Since $|\tilde{\varphi}_{ij}| = |X_{ij}^{\mathrm{T}}(\tilde{\beta} - \beta_0)| = O_P(\|\tilde{\beta} - \beta_0\|)$, we have $\tilde{\varphi}_{ij}^2 = O_P(\|\tilde{\beta} - \beta_0\|^2)$, and thus $I_6 = o_P(I_5)$. Then,

$$\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\big(F_1(X_{ij}, \tau_1) X_{ij} X_{ij}^{\mathrm{T}}\big)\, \sqrt{n}(\tilde{\beta} - \beta_0) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sum_{j=1}^{m_i} X_{ij}\, \phi_{\tau_1}'(\varepsilon_{ij}) + o_P(1).$$

Notice that $E(X_{ij}\phi_{\tau_1}'(\varepsilon_{ij})) = 0$. For any $p \times 1$ vector $\kappa$, let $e_i = \kappa^{\mathrm{T}} U_i$, where $U_i = \sum_{j=1}^{m_i} X_{ij}\phi_{\tau_1}'(\varepsilon_{ij}) = X_i^{\mathrm{T}}\phi_{\tau_1}'(\varepsilon_i)$. By the $C_r$ inequality and condition (C1), we have

$$E|e_i|^3 \le M^2 \sum_{j=1}^{m_i} E\big|(\kappa^{\mathrm{T}} X_{ij})\phi_{\tau_1}'(\varepsilon_{ij})\big|^3 = M^2 \sum_{j=1}^{m_i} E\big\{E(|\phi_{\tau_1}'(\varepsilon_{ij})|^3 \,|\, X_{ij})\, |\kappa^{\mathrm{T}} X_{ij}|^3\big\} \le K_1,$$

where $K_1$ is a constant independent of $i$. Then, by condition (C5), $\frac{1}{n}\sum_{i=1}^{n} E(U_i U_i^{\mathrm{T}})$ converges to a finite positive definite matrix, which is denoted by $\Sigma_2$ here. Thus, we obtain

$$\frac{1}{n}\sum_{i=1}^{n}\operatorname{var}(e_i) = \frac{1}{n}\sum_{i=1}^{n} E(e_i^2) = \frac{1}{n}\sum_{i=1}^{n}\kappa^{\mathrm{T}} E(U_i U_i^{\mathrm{T}})\kappa \to \kappa^{\mathrm{T}}\Sigma_2\kappa > 0.$$

The proof of Theorem 1 is then completed by the Lyapunov central limit theorem and Slutsky's theorem.

Lemma 2 If regularity conditions (C1)-(C9) hold, then with probability approaching 1, there exists a local maximizer of (7), denoted by $\hat{\theta}$, such that

$$\|\hat{\theta} - \theta_0\| = O_P(1/\sqrt{n}).$$

Proof of Lemma 2. Since $\|\tilde{\beta} - \beta_0\| = O_P(1/\sqrt{n})$, we may replace $\tilde{\beta}$ by $\beta_0$ throughout the proof. Then $\tilde{\delta}_{ij} = (X_{ij}^{\mathrm{T}}, \tilde{\zeta}_{ij}^{\mathrm{T}})^{\mathrm{T}}$ can be substituted by $\delta_{ij} = (X_{ij}^{\mathrm{T}}, \zeta_{ij}^{\mathrm{T}})^{\mathrm{T}}$, where $\zeta_{ij} = \big(\sum_{k=1}^{j-1}\varepsilon_{i,k} Z_{j,k,1}^{(i)}, \dots, \sum_{k=1}^{j-1}\varepsilon_{i,k} Z_{j,k,q}^{(i)}\big)^{\mathrm{T}}$. Let

$$R_2(\theta) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}(Y_{ij} - X_{ij}^{\mathrm{T}}\beta - \zeta_{ij}^{\mathrm{T}}\gamma) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \phi_{\tau_2}(Y_{ij} - \delta_{ij}^{\mathrm{T}}\theta).$$

It suffices to show that, for any given $\rho > 0$, there exists a large constant $C > 0$ such that

$$P\Big\{\sup_{\|\nu\| = C} R_2(\theta_0 + \nu/\sqrt{n}) < R_2(\theta_0)\Big\} \ge 1 - \rho, \qquad (2)$$

for any $(p+q)$-dimensional vector $\nu$ satisfying $\|\nu\| = C$. Based on the Taylor expansion, we have

$$\begin{aligned} R_2(\theta_0 + \nu/\sqrt{n}) - R_2(\theta_0) &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \big\{\phi_{\tau_2}\big(Y_{ij} - \delta_{ij}^{\mathrm{T}}(\theta_0 + \nu/\sqrt{n})\big) - \phi_{\tau_2}(Y_{ij} - \delta_{ij}^{\mathrm{T}}\theta_0)\big\} \\ &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \big\{\phi_{\tau_2}(\epsilon_{ij} - \nu^{\mathrm{T}}\delta_{ij}/\sqrt{n}) - \phi_{\tau_2}(\epsilon_{ij})\big\} \\ &= \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i} \Big\{-\phi_{\tau_2}'(\epsilon_{ij})\, \nu^{\mathrm{T}}\delta_{ij}/\sqrt{n} + \frac{1}{2n}\phi_{\tau_2}''(\epsilon_{ij})(\nu^{\mathrm{T}}\delta_{ij})^2 - \frac{1}{6 n^{3/2}}\phi_{\tau_2}'''(\epsilon_{ij}^{*})(\nu^{\mathrm{T}}\delta_{ij})^3\Big\} \\ &\equiv I_7 + I_8 + I_9, \end{aligned}$$

where ϵ i j lies between ϵ i j ν Τ δ i j / n and ϵ i j . Recall that X ¯ i j = ( X i 1 , , X i j ) Τ . It should be emphasized that δ i j consists of both covariates and ε i 1 , , ε i , j 1 , and thus ϵ i j is independent of δ i j conditional on X ¯ i j . In addition, from (8), we have

$$E\left(\delta_{ij}\delta_{ij}^{T}\right) = \begin{pmatrix} E\left(X_{ij}X_{ij}^{T}\right) & 0 \\ 0 & E\left(\zeta_{ij}\zeta_{ij}^{T}\right) \end{pmatrix}.$$

Then, for $I_7$, we have

$$E(I_7) = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\left\{\phi'_{\tau_2}(\epsilon_{ij})\,\nu^{T}\delta_{ij}/\sqrt{n}\right\} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\left\{E\left(\phi'_{\tau_2}(\epsilon_{ij})\mid X_{ij}\right)E\left(\nu^{T}\delta_{ij}\mid \bar X_{ij}\right)\right\}/\sqrt{n} = 0,$$

and

$$\operatorname{var}(I_7) = E\left\{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\phi'_{\tau_2}(\epsilon_{ij})\,\nu^{T}\delta_{ij}/\sqrt{n}\right\}^{2} = \frac{1}{n^{3}}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\nu^{T}E\left\{\phi'_{\tau_2}(\epsilon_{ij})^{2}\,\delta_{ij}\delta_{ij}^{T}\right\}\nu = \frac{1}{n^{3}}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\nu^{T}E\left\{G_2(X_{ij},\tau_2)\,E\left(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij}\right)\right\}\nu = O\left(\frac{\|\nu\|^{2}}{n^{2}}\right) = O\left(\frac{C^{2}}{n^{2}}\right).$$

Therefore, we can deduce that $I_7 = E(I_7) + O_P\left(\sqrt{\operatorname{var}(I_7)}\right) = O_P(Cn^{-1})$. Similarly, we get $I_9 = O_P(n^{-3/2})$. For $I_8$, we have

$$I_8 = \frac{1}{2n^{2}}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\nu^{T}E\left\{F_2(X_{ij},\tau_2)\,E\left(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij}\right)\right\}\nu\,(1 + o_P(1)) = O\left(\frac{\|\nu\|^{2}}{n}\right) = O\left(\frac{C^{2}}{n}\right).$$

Since $\|\nu\| = C$, we can choose a sufficiently large $C$ such that $I_8$ dominates both $I_7$ and $I_9$ with probability at least $1 - \rho$. From condition (C8) we have $F_2(x, \tau_2) < 0$, so $I_8 < 0$ and hence (2) holds. The proof of Lemma 2 is finished.
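Condition (C8), $F_2(x, \tau_2) < 0$, amounts to negative expected curvature of the ESL kernel at the true residuals, which is what makes the dominant term $I_8$ strictly negative. As a quick numerical illustration (a sketch assuming the common ESL kernel $\phi_\tau(t) = \exp(-t^{2}/\tau)$; the names `phi_dd`, `tau`, and `ts` are introduced here, not from the paper):

```python
import numpy as np

def phi_dd(t, tau):
    # Second derivative of the (assumed) ESL kernel phi_tau(t) = exp(-t^2/tau):
    # phi''_tau(t) = (4*t**2/tau**2 - 2/tau) * exp(-t**2/tau)
    return (4.0 * t**2 / tau**2 - 2.0 / tau) * np.exp(-t**2 / tau)

tau = 1.0
# Curvature is negative exactly for |t| < sqrt(tau/2) (about 0.707 here), so
# residuals concentrated near zero yield negative expected curvature, as (C8)
# demands; gross outliers contribute exponentially damped positive curvature.
ts = np.linspace(-0.7, 0.7, 141)
print(bool(np.all(phi_dd(ts, tau) < 0)))  # True
print(phi_dd(0.0, tau))                   # -2/tau = -2.0
```

This also conveys the robustness-efficiency trade-off of the tuning parameter: a smaller $\tau_2$ narrows the interval $|t| < \sqrt{\tau_2/2}$ of negative curvature, downweighting outlying residuals more aggressively.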

Proof of Theorem 2. Let $\hat\alpha_{ij} = \delta_{ij}^{T}(\hat\theta - \theta_0)$; then $\hat\theta$ satisfies the following equation:

$$0 = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\delta_{ij}\,\phi'_{\tau_2}(\epsilon_{ij} - \hat\alpha_{ij}) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\delta_{ij}\left\{\phi'_{\tau_2}(\epsilon_{ij}) - \phi''_{\tau_2}(\epsilon_{ij})\,\hat\alpha_{ij} + \frac{1}{2}\phi'''_{\tau_2}(\epsilon^{*}_{ij})\,\hat\alpha_{ij}^{2}\right\} \equiv I_{10} + I_{11} + I_{12},$$

where $\epsilon^{*}_{ij}$ lies between $\epsilon_{ij} - \hat\alpha_{ij}$ and $\epsilon_{ij}$. It can be easily obtained that

$$I_{11} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\phi''_{\tau_2}(\epsilon_{ij})\,\delta_{ij}\delta_{ij}^{T}(\hat\theta - \theta_0) = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\left\{F_2(X_{ij},\tau_2)\,E\left(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij}\right)\right\}(\hat\theta - \theta_0)\,(1 + o_P(1)),$$

and

$$I_{12} = \frac{1}{2n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\phi'''_{\tau_2}(\epsilon^{*}_{ij})\,\delta_{ij}\left\{\delta_{ij}^{T}(\hat\theta - \theta_0)\right\}^{2} = O_P\left(\|\hat\theta - \theta_0\|^{2}\right) = o_P(I_{11}).$$

Then

$$\frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\left\{F_2(X_{ij},\tau_2)\,E\left(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij}\right)\right\}\sqrt{n}\,(\hat\theta - \theta_0) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\delta_{ij}\,\phi'_{\tau_2}(\epsilon_{ij}) + o_P(1).$$

Notice that $E\left(\delta_{ij}\,\phi'_{\tau_2}(\epsilon_{ij})\right) = 0$. For any $(p+q)\times 1$ vector $\eta$, let $\psi_i = \sum_{j=2}^{m_i}(\eta^{T}\delta_{ij})\,\phi'_{\tau_2}(\epsilon_{ij})$. By the $C_r$ inequality and condition (C1), we have

$$E|\psi_i|^{3} \le M\sum_{j=2}^{m_i}E\left|(\eta^{T}\delta_{ij})\,\phi'_{\tau_2}(\epsilon_{ij})\right|^{3} = M\sum_{j=2}^{m_i}E\left\{E\left(\left|\phi'_{\tau_2}(\epsilon_{ij})\right|^{3}\mid X_{ij}\right)E\left(\left|\eta^{T}\delta_{ij}\right|^{3}\mid \bar X_{ij}\right)\right\} \le K_2,$$

where $K_2$ is a constant independent of $i$. Moreover,

$$\frac{1}{n}\sum_{i=1}^{n}\operatorname{var}(\psi_i) = \frac{1}{n}\sum_{i=1}^{n}E(\psi_i^{2}) = \frac{1}{n}\sum_{i=1}^{n}E\left\{\sum_{j=2}^{m_i}(\eta^{T}\delta_{ij})\,\phi'_{\tau_2}(\epsilon_{ij})\right\}^{2} = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\left\{(\eta^{T}\delta_{ij})^{2}\,\phi'_{\tau_2}(\epsilon_{ij})^{2}\right\} = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}\eta^{T}E\left\{G_2(X_{ij},\tau_2)\,E\left(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij}\right)\right\}\eta \ge K_3 > 0.$$

So the proof of Theorem 2 is completed by the Lyapunov central limit theorem and Slutsky’s theorem.
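Putting the pieces together, the limit law implied by the last two displays can be sketched as follows (a reconstruction of the standard sandwich form, not a verbatim restatement of Theorem 2): let $A_n = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=2}^{m_i}E\{F_2(X_{ij},\tau_2)E(\delta_{ij}\delta_{ij}^{T}\mid \bar X_{ij})\}$ and let $B_n$ denote the same expression with $F_2$ replaced by $G_2$. Then, by the Cramér-Wold device,

```latex
\sqrt{n}\,\bigl(\hat{\theta} - \theta_0\bigr)
  \;\xrightarrow{\;d\;}\;
  N\!\bigl(0,\; A^{-1} B A^{-1}\bigr),
\qquad
A = \lim_{n\to\infty} A_n,
\quad
B = \lim_{n\to\infty} B_n,
```

where the limits exist under the regularity conditions.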


Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

Lu, F. , Xue, L. and Cai, X. (2020) ESL-Based Robust Estimation for Mean-Covariance Regression with Longitudinal Data. Open Journal of Statistics, 10, 10-30. doi: 10.4236/ojs.2020.101002.

References

[1] Diggle, P.J., Heagerty, P.J., Liang, K.-Y. and Zeger, S. (2002) Analysis of Longitudinal Data. Oxford University Press, Oxford.
[2] Diggle, P.J. and Verbyla, A.P. (1998) Nonparametric Estimation of Covariance Structure in Longitudinal Data. Biometrics, 54, 401-415.
https://doi.org/10.2307/3109751
[3] Liang, K.-Y. and Zeger, S.L. (1986) Longitudinal Data Analysis Using Generalized Linear Models. Biometrika, 73, 13-22.
https://doi.org/10.1093/biomet/73.1.13
[4] Pourahmadi, M. (1999) Joint Mean-Covariance Models with Applications to Longitudinal Data: Unconstrained Parameterisation. Biometrika, 86, 677-690.
https://doi.org/10.1093/biomet/86.3.677
[5] Pourahmadi, M. (2000) Maximum Likelihood Estimation of Generalised Linear Models for Multivariate Normal Covariance Matrix. Biometrika, 87, 425-435.
https://doi.org/10.1093/biomet/87.2.425
[6] Pan, J. and Mackenzie, G. (2003) On Modelling Mean-Covariance Structures in Longitudinal Studies. Biometrika, 90, 239-244.
https://doi.org/10.1093/biomet/90.1.239
[7] Ye, H. and Pan, J. (2006) Modelling of Covariance Structures in Generalised Estimating Equations for Longitudinal Data. Biometrika, 93, 927-941.
https://doi.org/10.1093/biomet/93.4.927
[8] Pourahmadi, M. (2007) Cholesky Decompositions and Estimation of a Covariance Matrix: Orthogonality of Variance-Correlation Parameters. Biometrika, 94, 1006-1013.
https://doi.org/10.1093/biomet/asm073
[9] Daniels, M.J. and Pourahmadi, M. (2009) Modeling Covariance Matrices via Partial Autocorrelations. Journal of Multivariate Analysis, 100, 2352-2363.
https://doi.org/10.1016/j.jmva.2009.04.015
[10] Leng, C., Zhang, W. and Pan, J. (2010) Semiparametric Mean-Covariance Regression Analysis for Longitudinal Data. Journal of the American Statistical Association, 105, 181-193.
https://doi.org/10.1198/jasa.2009.tm08485
[11] Zhang, W. and Leng, C. (2012) A Moving Average Cholesky Factor Model in Covariance Modelling for Longitudinal Data. Biometrika, 99, 141-150.
https://doi.org/10.1093/biomet/asr068
[12] Zhang, W., Leng, C. and Tang, C.Y. (2015) A Joint Modelling Approach for Longitudinal Studies. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77, 219-238.
https://doi.org/10.1111/rssb.12065
[13] Maadooliat, M., Pourahmadi, M. and Huang, J.Z. (2013) Robust Estimation of the Correlation Matrix of Longitudinal Data. Statistics and Computing, 23, 17-28.
https://doi.org/10.1007/s11222-011-9284-6
[14] Zheng, X., Fung, W.K. and Zhu, Z. (2013) Robust Estimation in Joint Mean-Covariance Regression Model for Longitudinal Data. Annals of the Institute of Statistical Mathematics, 65, 617-638.
https://doi.org/10.1007/s10463-012-0383-8
[15] Zheng, X., Fung, W.K. and Zhu, Z. (2014) Variable Selection in Robust Joint Mean and Covariance Model for Longitudinal Data Analysis. Statistica Sinica, 24, 515-531.
[16] Lv, J. and Guo, C. (2017) Efficient Parameter Estimation via Modified Cholesky Decomposition for Quantile Regression with Longitudinal Data. Computational Statistics, 32, 947-975.
https://doi.org/10.1007/s00180-017-0714-6
[17] Lv, J., Guo, C., Yang, H. and Li, Y. (2017) A Moving Average Cholesky Factor Model in Covariance Modeling for Composite Quantile Regression with Longitudinal Data. Computational Statistics & Data Analysis, 112, 129-144.
https://doi.org/10.1016/j.csda.2017.02.015
[18] Lv, J., Guo, C. and Wu, J. (2019) Smoothed Empirical Likelihood Inference via the Modified Cholesky Decomposition for Quantile Varying Coefficient Models with Longitudinal Data. TEST, 28, 999-1032.
https://doi.org/10.1007/s11749-018-0616-0
[19] Lv, J. and Guo, C. (2019) Quantile Estimations via Modified Cholesky Decomposition for Longitudinal Single-Index Models. Annals of the Institute of Statistical Mathematics, 71, 1163-1199.
https://doi.org/10.1007/s10463-018-0673-x
[20] Wang, X., Jiang, Y., Huang, M. and Zhang, H. (2013) Robust Variable Selection with Exponential Squared Loss. Journal of the American Statistical Association, 108, 632-643.
https://doi.org/10.1080/01621459.2013.766613
[21] Lv, J., Yang, H. and Guo, C. (2015) An Efficient and Robust Variable Selection Method for Longitudinal Generalized Linear Models. Computational Statistics & Data Analysis, 82, 74-88.
https://doi.org/10.1016/j.csda.2014.08.006
[22] Wang, K. and Lin, L. (2016) Robust and Efficient Estimator for Simultaneous Model Structure Identification and Variable Selection in Generalized Partial Linear Varying Coefficient Models with Longitudinal Data. Statistical Papers, 1-28.
[23] Li, S., Wang, K. and Ren, Y. (2018) Robust Estimation and Empirical Likelihood Inference with Exponential Squared Loss for Panel Data Models. Economics Letters, 164, 19-23.
https://doi.org/10.1016/j.econlet.2017.12.029
[24] Yang, H., Lv, J. and Guo, C. (2015) Robust Variable Selection in Modal Varying-Coefficient Models with Longitudinal Data. Journal of Statistical Computation and Simulation, 85, 3064-3079.
https://doi.org/10.1080/00949655.2014.949716
[25] Wang, K., Li, S., Sun, X. and Lin, L. (2019) Modal Regression Statistical Inference for Longitudinal Data Semivarying Coefficient Models: Generalized Estimating Equations, Empirical Likelihood and Variable Selection. Computational Statistics & Data Analysis, 133, 257-276.
https://doi.org/10.1016/j.csda.2018.10.010
[26] Huber, P.J. (2011) Robust Statistics. Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-642-04898-2_594
[27] Koenker, R. (2005) Quantile Regression (Econometric Society Monographs; No. 38). Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511754098
[28] Zeger, S.L. and Diggle, P.J. (1994) Semiparametric Models for Longitudinal Data with Application to CD4 Cell Numbers in HIV Seroconverters. Biometrics, 50, 689-699.
https://doi.org/10.2307/2532783
[29] Yao, W. and Li, R. (2013) New Local Estimation Procedure for a Non-Parametric Regression Function for Longitudinal Data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75, 123-138.
https://doi.org/10.1111/j.1467-9868.2012.01038.x
[30] Yao, W. and Li, L. (2014) A New Regression Model: Modal Linear Regression. Scandinavian Journal of Statistics, 41, 656-671.
https://doi.org/10.1111/sjos.12054

  

Copyright © 2020 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.