Estimation of Nonparametric Regression Models with Measurement Error Using Validation Data

Abstract

We consider the problem of estimating a function $g$ in a nonparametric regression model when some of the covariates are measured with error, with the assistance of validation data. Without specifying any error model structure between the surrogate and true covariates, we propose an estimator that integrates orthogonal series estimation with a truncated series approximation method. Under general regularity conditions, we derive the convergence rate of this estimator. Simulations demonstrate the finite-sample properties of the new estimator.

Cite as: Liu, F. and Yin, Z. (2017) Estimation of Nonparametric Regression Models with Measurement Error Using Validation Data. Applied Mathematics, 8, 1454-1463. doi: 10.4236/am.2017.810106.

1. Introduction

Consider the following nonparametric regression model of a scalar response $Y$ on covariates $(X, Z)$:

$$ Y = g(X, Z) + \varepsilon, \qquad (1) $$

where $g(\cdot)$ is an unknown function and $\varepsilon$ is a noise variable with $E(\varepsilon \mid X, Z) = 0$ and $E(\varepsilon^2) < \infty$. It is not uncommon that $Z$ is measured exactly while $X$ is measured with error, so that only its surrogate variable $W$ can be observed. Throughout we assume

$$ E[\varepsilon \mid W, Z] = 0 \quad \text{with probability } 1, \qquad (2) $$

which is always satisfied if, for example, $W$ is a function of $X$ and some independent noise (see [1]).

The relationship between the true variable and the surrogate variable can be rather complicated, and misspecifying this relationship may lead to a serious misinterpretation of the data. A common solution is to use validation data to recover the missing information. To be specific, one observes independent replicates $(W_i, Z_i, Y_i)$, $1 \le i \le N$, of $(W, Z, Y)$ rather than of $(X, Z, Y)$, where the relationship between $W_i$ and $X_i$ may or may not be specified. If it is not, the missing information for the statistical inference is taken from a sample $(X_j, W_j, Z_j)$, $N+1 \le j \le N+n$, of so-called validation data, independent of the primary (surrogate) sample. We aim to estimate the unknown function $g(\cdot)$ using the surrogate data $\{(Y_i, W_i, Z_i)\}_{i=1}^{N}$ and the validation data $\{(X_j, W_j, Z_j)\}_{j=N+1}^{N+n}$.

Recently, statistical inference based on surrogate data and a validation sample has attracted considerable attention (see [2] - [13]), and the authors referenced above developed suitable methods for various models. However, most of these works are concerned with parametric or semi-parametric relationships between covariates and responses, and their approaches are difficult to generalize to the nonparametric regression model. [14] and [15] proposed two nonparametric estimators for the nonparametric regression model with measurement error using validation data, but their methods are not applicable to our problem: [14] assumes that the response, rather than the covariate, is measured with error, and the method proposed by [15] applies to a one-dimensional explanatory variable only.

This article is organized as follows. In Section 2 we propose a regularization-based method. Under general regularity conditions, we give the convergence rate of our estimator in Section 3. Section 4 provides numerical results from simulation studies, and proofs of the theorems are presented in the Appendix.

2. Description of the Estimator

Recall model (1) and the assumptions below it. We assume that $X$, $W$ and $Z$ are all real-valued random variables; the extension to random vectors complicates the notation but does not affect the main ideas and results. Without loss of generality, let the supports of $X$, $W$ and $Z$ all be contained in $[0,1]$ (otherwise, one can apply monotone transformations to $X$, $W$ and $Z$).

Let $f_{XWZ}$ and $f_{WZ}$ denote the joint density of $(X, W, Z)$ and the marginal density of $(W, Z)$, respectively. Then, according to (2), we have

$$ E(Y \mid W = w, Z = z) = E[g(X, Z) \mid W = w, Z = z] = \int g(x, z) \frac{f_{XWZ}(x, w, z)}{f_{WZ}(w, z)} \, dx. \qquad (3) $$

Let $m(w, z) = E(Y \mid W = w, Z = z) f_{WZ}(w, z)$ and

$$ L^2([0,1]) = \Big\{ \varphi : [0,1] \to \mathbb{R} \ \text{s.t.} \ \|\varphi\| = \Big( \int |\varphi(x)|^2 \, dx \Big)^{1/2} < \infty \Big\}. $$

Define the operator $T_z : L^2([0,1]) \to L^2([0,1])$ by

$$ (T_z \varphi_z)(w, z) = \int \varphi_z(x) f_{XWZ}(x, w, z) \, dx, $$

where $\varphi_z(\cdot) = \varphi(\cdot, z)$ is any function in $L^2([0,1])$, so that Equation (3) is equivalent to the operator equation

$$ m(w, z) = (T_z g)(w, z). \qquad (4) $$

According to Equation (4), the function $g$ is the solution of a Fredholm integral equation of the first kind. This inverse problem is known to be ill-posed and requires a regularization method. A variety of regularization schemes are available in the literature (see e.g. [16]), but we focus in this paper on the Tikhonov regularized solution:

$$ g^\alpha = \arg\min_{g} \big[ \|T_z g - m\|^2 + \alpha \|g\|^2 \big], \qquad (5) $$

where $\alpha > 0$ is the regularization parameter of the penalization term.

We define the adjoint operator $T_z^*$ of $T_z$ by

$$ (T_z^* \psi_z)(x) = \int \psi_z(w) f_{XWZ}(x, w, z) \, dw, $$

where $\psi_z(w) \in L^2([0,1])$. Then the regularized solution (5) can be written equivalently as

$$ g^\alpha = (\alpha I + T_z^* T_z)^{-1} T_z^* m. \qquad (6) $$
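In fact, (6) is simply the normal equation of the quadratic problem (5): setting the Fréchet derivative of the criterion in (5) to zero gives

$$ T_z^*(T_z g - m) + \alpha g = 0 \quad \Longleftrightarrow \quad (\alpha I + T_z^* T_z)\, g = T_z^* m, $$

whose unique solution is (6), the operator $\alpha I + T_z^* T_z$ being invertible for any $\alpha > 0$.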

To obtain an estimator of $g(x, z)$, we combine the orthogonal series method with the kernel method. Under the regularity conditions of Section 3, for each $z \in [0,1]$, $f_{XWZ}(\cdot, \cdot, z)$ and $m(\cdot, z)$ may be approximated by truncated orthogonal series,

$$ f_{XWZ}^{K}(x, w, z) = \sum_{k=1}^{K} \sum_{l=1}^{K} d_{zkl} \, \phi_k(x) \phi_l(w) \quad \text{and} \quad m^{K}(w, z) = \sum_{k=1}^{K} m_{zk} \, \phi_k(w), $$

where

$$ d_{zkl} = \iint \phi_k(x) \phi_l(w) f_{XWZ}(x, w, z) \, dx \, dw = E[\phi_k(X) \phi_l(W) \mid Z = z] f_Z(z), $$

and

$$ m_{zk} = \int \phi_k(w) E(Y \mid W = w, Z = z) f_{WZ}(w, z) \, dw = E[Y \phi_k(W) \mid Z = z] f_Z(z). $$

Here, $\{\phi_k\}$ is an orthonormal basis of $L^2([0,1])$, which may be trigonometric, polynomial, spline, wavelet, and so on. A discussion of different bases and their properties can be found in the literature (see e.g. [17], [18]). To be specific, here and in what follows we consider the normalized Legendre polynomials on $[0,1]$, which can be obtained through Rodrigues' formula

$$ \phi_k(x) = \frac{\sqrt{2k+1}}{k!} \frac{d^k}{dx^k} \big[ (x^2 - x)^k \big]. \qquad (7) $$
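As a concrete illustration, $\phi_k(x) = \sqrt{2k+1}\, P_k(2x - 1)$, where $P_k$ is the standard Legendre polynomial on $[-1, 1]$, so the basis is easy to evaluate numerically. A minimal sketch in Python (assuming NumPy and SciPy are available; the helper name phi is ours, not from the paper):

```python
import numpy as np
from scipy.special import eval_legendre

def phi(k, x):
    """Normalized Legendre basis on [0, 1], cf. Rodrigues' formula (7).

    phi_k(x) = sqrt(2k + 1) * P_k(2x - 1), with P_k the standard
    Legendre polynomial on [-1, 1]; here k = 0, 1, 2, ... indexes degree.
    """
    return np.sqrt(2.0 * k + 1.0) * eval_legendre(k, 2.0 * np.asarray(x) - 1.0)

# Quick orthonormality check on a fine grid of [0, 1].
x = np.linspace(0.0, 1.0, 100001)
for k, l in [(0, 0), (2, 2), (1, 3)]:
    print(k, l, round(np.trapz(phi(k, x) * phi(l, x), x), 4))  # ~1, ~1, ~0
```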

The integer $K$ is a truncation point, which is the main smoothing parameter of the approximating series.

Let $K_h(u) = K(u/h)$, where $K(\cdot)$ is a kernel function and $h > 0$ is a bandwidth. We consider the following estimators:

$$ \hat{d}_{zkl} = \frac{1}{n h_n} \sum_{j=N+1}^{N+n} \phi_k(X_j) \phi_l(W_j) K_{h_n}(z - Z_j), $$

and

$$ \hat{m}_{zk} = \frac{1}{N h_N} \sum_{i=1}^{N} Y_i \phi_k(W_i) K_{h_N}(z - Z_i). $$
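In code, each coefficient estimate is a kernel-weighted sample average. A minimal sketch, using the standard normal kernel adopted in the simulations of Section 4 and the hypothetical phi helper above (the function names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def d_hat(k, l, z, Xv, Wv, Zv, h_n):
    """Estimate d_{zkl} from the validation sample {(X_j, W_j, Z_j)}."""
    kern = norm.pdf((z - Zv) / h_n)                 # K_{h_n}(z - Z_j), normal kernel
    return np.mean(phi(k, Xv) * phi(l, Wv) * kern) / h_n

def m_hat(k, z, Yp, Wp, Zp, h_N):
    """Estimate m_{zk} from the primary surrogate sample {(Y_i, W_i, Z_i)}."""
    kern = norm.pdf((z - Zp) / h_N)                 # K_{h_N}(z - Z_i)
    return np.mean(Yp * phi(k, Wp) * kern) / h_N
```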

Then, for each $z \in [0,1]$, we have

$$ \hat{f}_{XWZ}(x, w, z) = \sum_{k=1}^{K} \sum_{l=1}^{K} \hat{d}_{zkl} \, \phi_k(x) \phi_l(w) \quad \text{and} \quad \hat{m}(w, z) = \sum_{k=1}^{K} \hat{m}_{zk} \, \phi_k(w). $$

The operators $T_z$ and $T_z^*$ can then be estimated by

$$ (\hat{T}_z \varphi_z)(w, z) = \int \varphi_z(x) \hat{f}_{XWZ}(x, w, z) \, dx \quad \text{and} \quad (\hat{T}_z^* \psi_z)(x, z) = \int \psi_z(w) \hat{f}_{XWZ}(x, w, z) \, dw. $$

Hence, for each $z \in [0,1]$, the estimator of $g(x, z)$ is

$$ \tilde{g}^\alpha = (\alpha I + \hat{T}_z^* \hat{T}_z)^{-1} \hat{T}_z^* \hat{m}_z. \qquad (8) $$

Remark 2.1. Let $\tilde{D}_n$ be the $K \times K$ matrix whose $(k, l)$ element is $\hat{d}_{zkl}$, and let $\tilde{b}_N = (\hat{m}_{z1}, \ldots, \hat{m}_{zK})^T$. Then estimator (8) has the following form:

$$ \tilde{g}^\alpha(x, z) = \sum_{k=1}^{K} \tilde{g}_{zk} \, \phi_k(x), \qquad (9) $$

where $\{\tilde{g}_{zk}, k = 1, \ldots, K\}$ is given by $(\tilde{g}_{z1}, \ldots, \tilde{g}_{zK})^T = (\alpha I_K + \tilde{D}_n \tilde{D}_n^T)^{-1} \tilde{D}_n \tilde{b}_N$.
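The remark reduces the whole procedure to a $K$-dimensional ridge-type linear system. A minimal sketch, reusing the hypothetical phi, d_hat and m_hat helpers from the previous snippets and indexing the $K$ basis functions by polynomial degree $0, \ldots, K-1$:

```python
import numpy as np

def g_tilde(x, z, K, alpha, valid, primary, h_n, h_N):
    """Evaluate the regularized series estimator (9) at a point (x, z)."""
    Xv, Wv, Zv = valid        # validation sample (X_j, W_j, Z_j)
    Yp, Wp, Zp = primary      # primary sample (Y_i, W_i, Z_i)
    ks = np.arange(K)         # degrees 0, ..., K-1 label the K basis functions
    # K x K matrix D~_n with (k, l) element d^_{zkl}
    D = np.array([[d_hat(k, l, z, Xv, Wv, Zv, h_n) for l in ks] for k in ks])
    # b~_N = (m^_{z1}, ..., m^_{zK})^T
    b = np.array([m_hat(k, z, Yp, Wp, Zp, h_N) for k in ks])
    # (g~_{z1}, ..., g~_{zK})^T = (alpha I_K + D~_n D~_n^T)^{-1} D~_n b~_N
    coef = np.linalg.solve(alpha * np.eye(K) + D @ D.T, D @ b)
    # g~^alpha(x, z) = sum_k g~_{zk} phi_k(x)
    return sum(c * phi(k, x) for k, c in zip(ks, coef))
```

The term $\alpha I_K$ keeps the system well-conditioned even when $\tilde{D}_n \tilde{D}_n^T$ is nearly singular, which is precisely the ill-posedness discussed below Equation (4).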

3. Theoretical Properties

In this section, we study the statistical properties of the estimator under the following assumptions:

(A1) (i) The support of $(X, W, Z)$ is contained in $[0,1]^3$; (ii) conditional on $Z = z$, the joint density $f_{XWZ}$ of $(X, W, Z)$ is square integrable w.r.t. the Lebesgue measure on $[0,1]^2$.

(A2) (i) The $r$-th order partial and mixed partial derivatives of $f_{XWZ}$ with respect to $(x, w)$, and the $r$-th order partial derivative of $f_{XWZ}$ with respect to $z$, are continuous in $(x, w) \in [0,1]^2$ for each $z \in [0,1]$; (ii) the $s$-th order partial derivative of $m(w, z)$ with respect to $z$ is continuous in $w \in [0,1]$ for each $z \in [0,1]$.

(A3) $E(Y^2 \mid W = w, Z = z)$ is uniformly bounded in $(w, z) \in [0,1]^2$.

(A4) The kernel function $K(\cdot)$ is a symmetric, twice continuously differentiable function on $[-1, 1]$, with $\int_{-1}^{1} u^j K(u) \, du = 0$ for $j = 1, \ldots, r-1$ and $\int_{-1}^{1} u^r K(u) \, du = c$, where $c \neq 0$ is some finite constant.

Assumption (A1) is a sufficient condition for $T_z$ to be a Hilbert-Schmidt operator and therefore compact (see [19], Theorem 2.34). As a consequence of compactness, a singular value decomposition exists. For each $z \in [0,1]$, let $\{\lambda_{zk}\}_{k \ge 0}$ be the sequence of singular values of $T_z$; then there exist two orthonormal sequences $\{\varphi_{zk}\}_{k \ge 0}$ and $\{\psi_{zl}\}_{l \ge 0}$ such that

$$ T_z \varphi_{zk} = \lambda_{zk} \psi_{zk}, \quad T_z^* \psi_{zk} = \lambda_{zk} \varphi_{zk}; \qquad T_z^* T_z \varphi_{zk} = \lambda_{zk}^2 \varphi_{zk}, \quad T_z T_z^* \psi_{zk} = \lambda_{zk}^2 \psi_{zk}. $$

Note that the regularization bias is

$$ g - g^\alpha = \big[ I - (\alpha I + T_z^* T_z)^{-1} T_z^* T_z \big] g = \sum_{k=1}^{\infty} \frac{\alpha}{\alpha + \lambda_{zk}^2} \langle g, \varphi_{zk} \rangle \varphi_{zk}. $$
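This identity follows by expanding $g$ in the orthonormal system $\{\varphi_{zk}\}$ and using the eigen-relation $T_z^* T_z \varphi_{zk} = \lambda_{zk}^2 \varphi_{zk}$, which gives

$$ (\alpha I + T_z^* T_z)^{-1} T_z^* T_z \, g = \sum_{k=1}^{\infty} \frac{\lambda_{zk}^2}{\alpha + \lambda_{zk}^2} \langle g, \varphi_{zk} \rangle \varphi_{zk}, \quad \text{and hence} \quad g - g^\alpha = \sum_{k=1}^{\infty} \Big( 1 - \frac{\lambda_{zk}^2}{\alpha + \lambda_{zk}^2} \Big) \langle g, \varphi_{zk} \rangle \varphi_{zk}. $$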

In order to control the speed of convergence to zero of the regularization bias $g - g^\alpha$, we introduce the following regularity space $\Psi_\beta$ for $\beta > 0$:

$$ \Psi_\beta = \Big\{ \varphi_z \in L^2([0,1]) \ \text{such that} \ \sum_{k \ge 0} \frac{\langle \varphi_z, \varphi_{zk} \rangle^2}{\lambda_{zk}^{2\beta}} < +\infty \Big\}. $$

We then obtain the following result by applying Proposition 3.11 of Carrasco et al. (2007) [19].

Proposition 3.1. Suppose Assumption (A1) holds. For each $z \in [0,1]$, if $g(\cdot, z) \in \Psi_\beta$, then we have $\|g(\cdot, z) - g^\alpha(\cdot, z)\|^2 = O(\alpha^{\beta \wedge 2})$, where $\beta \wedge 2 = \min\{\beta, 2\}$.

Therefore, as the regularization parameter $\alpha$ is pushed towards zero, the smoother the function $g$ of interest is (i.e. $g \in \Psi_\beta$ for larger $\beta$), the faster the regularization bias converges to zero.

Let $\gamma = \min\{r, s\}$ and $\tau = 2r\gamma / [(\gamma + 1)(2r + 1)]$. We then obtain the following convergence rate for $\tilde{g}^\alpha(x, z)$.

Theorem 3.1. Suppose Assumptions (A1)-(A4) are satisfied. Then, for each $z \in [0,1]$, if $g(\cdot, z) \in \Psi_\beta$, we have

$$ \|\tilde{g}^\alpha(\cdot, z) - g(\cdot, z)\|^2 = O_P\Big\{ \Big[ K \Big( h_N^{2\gamma} + \frac{1}{N h_N} \Big) + \frac{1}{K^{2\gamma}} + K^2 \Big( h_n^{2r} + \frac{1}{n h_n} \Big) \Big] \alpha^{-1} + \Big[ \frac{1}{K^{2r}} + K^2 \Big( h_n^{2r} + \frac{1}{n h_n} \Big) \Big] \alpha^{(\beta - 1) \wedge 0} + \alpha^{\beta \wedge 2} \Big\}. $$

In particular, let $h_n = O(n^{-1/(2r+1)})$, $h_N = O(N^{-1/(2\gamma+1)})$, $K = O(n^{r/[(\gamma+1)(2r+1)]})$, $\alpha = O(N^{-\tau/(\beta \wedge 2 + 1)})$ and $\lim N/n \in (0, \infty)$. If $r \le s$ or $s < r \le 2s(s+1)$, then, for each $z \in [0,1]$, we have

$$ \|\tilde{g}^\alpha(\cdot, z) - g(\cdot, z)\|^2 = O_P\big( N^{-\tau (\beta \wedge 2)/(\beta \wedge 2 + 1)} \big). $$

The proofs of all the results are reported in the Appendix.

4. Simulation Studies

In this section, we briefly illustrate the finite-sample performance of the estimator discussed above. We compare our estimator with the standard Nadaraya-Watson estimator (denoted $\tilde{g}_N(x, z)$) based on the primary dataset $\{(Y_i, W_i, Z_i)\}_{i=1}^{N}$, which serves as the benchmark in the simulation study even though it ignores the measurement error in $W$. The performance of an estimator $g_{est}$ is assessed by the square root of the average squared errors (RASE),

$$ \mathrm{RASE} = \Big\{ \frac{1}{M} \sum_{s=1}^{M} \big[ g_{est}(u_s) - g(u_s) \big]^2 \Big\}^{1/2}, $$

where $u_s$, $s = 1, \ldots, M$, are the grid points at which $g_{est}(u_s)$ is evaluated.

We considered model (1) with the regression function being

$$ g(x, z) = \frac{1}{2\pi} \exp(-0.5 x^2 - 0.5 z^2), $$

and $\varepsilon$ distributed as $N(0, 0.1)$. The covariates are generated according to $(X, Z)^T \sim N(0, \Sigma)$ with $\mathrm{var}(X) = \mathrm{var}(Z) = 1$ and correlation coefficient 0.6 between $X$ and $Z$, and $W = X + \eta v$ with $v \sim N(0, 1)$. Results for $\eta = 0.2$, $\eta = 0.4$ and $\eta = 0.6$ are reported. Simulations were run with validation and primary data sizes $(n, N)$ ranging from $(10, 30)$ to $(50, 250)$, according to the ratios $\rho = N/n = 3$ and $\rho = N/n = 5$, respectively. We generated 500 datasets for each sample size $(n, N)$.
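For concreteness, the data-generating step of this design can be sketched as follows (a sketch only; reading $N(0, 0.1)$ as variance 0.1 and all helper names are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(2017)

def g_true(x, z):
    """True regression surface of the simulation design."""
    return np.exp(-0.5 * x**2 - 0.5 * z**2) / (2.0 * np.pi)

def generate(n, N, eta):
    """One dataset: primary sample of size N plus validation sample of size n."""
    cov = [[1.0, 0.6], [0.6, 1.0]]                    # var 1, correlation 0.6
    X, Z = rng.multivariate_normal([0.0, 0.0], cov, size=N + n).T
    W = X + eta * rng.standard_normal(N + n)          # surrogate W = X + eta * v
    Y = g_true(X, Z) + rng.normal(0.0, np.sqrt(0.1), N + n)  # eps ~ N(0, 0.1)
    primary = (Y[:N], W[:N], Z[:N])                   # X is unobserved here
    valid = (X[N:], W[N:], Z[N:])                     # Y is not used here
    return primary, valid

def rase(est, grid_x, grid_z):
    """Square root of the average squared errors over an evaluation grid."""
    err2 = [(est(x, z) - g_true(x, z))**2 for x in grid_x for z in grid_z]
    return float(np.sqrt(np.mean(err2)))
```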

To calculate $\tilde{g}^\alpha(x, z)$, we used the normalized Legendre polynomials as the basis and the standard normal kernel (denoted $K_0(\cdot)$). For $\tilde{g}_N(x, z)$, we used the product kernel $K(x_1, x_2) = K_0(x_1) K_0(x_2)$, with the bandwidth selected by the generalized cross-validation (GCV) approach. For our estimator $\tilde{g}^\alpha(x, z)$, we used cross-validation to choose the four parameters $h_N$, $h_n$, $K$ and $\alpha$. For this purpose, $h_N$, $h_n$ and $(K, \alpha)$ are selected separately as follows.

Define

$$ \hat{f}_Z(z; h_n) = \frac{1}{n h_n} \sum_{j=N+1}^{N+n} K_{h_n}(z - Z_j) $$

and

$$ \tilde{f}_Z(z; h_N) = \frac{1}{N h_N} \sum_{i=1}^{N} K_{h_N}(z - Z_i). $$

Here, we adopt the cross-validation (CV) approach to select $h_n$ by

$$ \hat{h}_n = \arg\min_{h_n} \frac{1}{n} \sum_{j=N+1}^{N+n} \big\{ Z_j - \hat{f}_Z^{(-j)}(Z_j; h_n) \big\}^2, $$

where the superscript $(-j)$ denotes that the estimator is constructed without using the $j$-th observation. Similarly, we obtain $\hat{h}_N$. After obtaining $\hat{h}_N$ and $\hat{h}_n$, we select $(K, \alpha)$ by

$$ (\hat{\alpha}, \hat{K}) = \arg\min_{(\alpha, K)} \frac{1}{N} \sum_{i=1}^{N} \Big\{ Y_i - \sum_{k=1}^{K} \tilde{g}_{zk}^{(-i)} \phi_k(W_i) \Big\}^2, $$

where the superscript $(-i)$ denotes that the estimator is constructed without using the $i$-th observation $(Y_i, W_i, Z_i)$.
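A sketch of this leave-one-out search over a small illustrative grid; g_coef is a hypothetical helper returning the leave-one-out coefficient vector $(\tilde{g}_{z1}^{(-i)}, \ldots, \tilde{g}_{zK}^{(-i)})^T$ of (9) evaluated at $z = Z_i$, and phi is the basis helper from Section 2:

```python
import numpy as np
from itertools import product

def select_K_alpha(primary, valid, h_N, h_n,
                   K_grid=(2, 3, 4, 5), alpha_grid=(1e-3, 1e-2, 1e-1)):
    """Choose (K, alpha) by the leave-one-out prediction criterion."""
    Yp, Wp, Zp = primary
    N = len(Yp)
    best, best_score = None, np.inf
    for K, alpha in product(K_grid, alpha_grid):
        sse = 0.0
        for i in range(N):
            keep = np.arange(N) != i                 # drop (Y_i, W_i, Z_i)
            coef = g_coef(Zp[i], K, alpha,
                          (Yp[keep], Wp[keep], Zp[keep]), valid, h_n, h_N)
            pred = sum(c * phi(k, Wp[i]) for k, c in enumerate(coef))
            sse += (Yp[i] - pred) ** 2
        if sse / N < best_score:
            best, best_score = (K, alpha), sse / N
    return best
```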

We compute the RASE at $200 \times 200$ grid points of $(x, z)$.

Table 1. The RASE ($\times 10^{-1}$) comparison for the estimators $\tilde{g}^\alpha(x, z)$ and $\tilde{g}_N(x, z)$.

Table 1 presents the RASE for estimating the curve $g(x, z)$ when $\eta = 0.2$, $\eta = 0.4$ and $\eta = 0.6$ for various sample sizes. Our proposed estimator $\tilde{g}^\alpha$ has a much smaller RASE than $\tilde{g}_N$: as expected, the proposed method produces more accurate estimates than the Nadaraya-Watson estimator, and the improvement increases with $\rho$.

Acknowledgments

This work was supported by grant GJJ160927 and by the Natural Science Foundation of Jiangxi Province of China under grant number 20142BAB211018.

Appendix


Lemma 6.1. Suppose Assumptions (A1), (A2)(i) and (A4) hold. For each $z \in [0,1]$, we have

$$ \|\hat{f}_{XWZ} - f_{XWZ}\|^2 = O_P\big\{ K^2 [(n h_n)^{-1} + h_n^{2r}] + K^{-2r} \big\}. $$

Proof of Lemma 6.1. For each $z \in [0,1]$, by Assumptions (A2)(i) and (A4), we have

$$ \begin{aligned} E(\hat{d}_{zkl}) &= \frac{1}{h_n} \iiint \phi_k(x) \phi_l(w) K_{h_n}(z - u) f_{XWZ}(x, w, u) \, dx \, dw \, du \\ &= \iiint \phi_k(x) \phi_l(w) K(u) f_{XWZ}(x, w, z + u h_n) \, dx \, dw \, du \\ &= \iint \phi_k(x) \phi_l(w) f_{XWZ}(x, w, z) \, dx \, dw + \Big\{ h_n^r \int u^r K(u) \, du \iint \frac{\partial^r f_{XWZ}(x, w, z)}{\partial z^r} \phi_k(x) \phi_l(w) \, dx \, dw \Big\} (1 + o(1)) \\ &= d_{zkl} + h_n^r \, d_{zkl}^{(r)} \int u^r K(u) \, du \, (1 + o(1)), \end{aligned} $$

where

$$ d_{zkl}^{(r)} = \iint \frac{\partial^r f_{XWZ}(x, w, z)}{\partial z^r} \phi_k(x) \phi_l(w) \, dx \, dw. $$

Note that the $\phi_k$ are orthonormal and complete basis functions of $L^2([0,1])$. Under Assumption (A2)(i), for each $z \in [0,1]$, we have $\partial^r f_{XWZ}(x, w, z)/\partial z^r \in L^2([0,1]^2)$. Then, by the Cauchy-Schwarz inequality, $d_{zkl}^{(r)}$ is bounded in absolute value for each $z \in [0,1]$. Hence, we obtain

$$ E(\hat{d}_{zkl}) = d_{zkl} + O(h_n^r). $$

Moreover, for each $z \in [0,1]$, we have

$$ \begin{aligned} \mathrm{Var}(\hat{d}_{zkl}) &\le \frac{1}{n h_n^2} E\big\{ [\phi_k(X) \phi_l(W)]^2 K_{h_n}^2(z - Z) \big\} \\ &= \frac{1}{n h_n^2} \iiint [\phi_k(x) \phi_l(w)]^2 K_{h_n}^2(z - u) f_{XWZ}(x, w, u) \, dx \, dw \, du \\ &= \frac{\int K(u)^2 \, du}{n h_n} \iint [\phi_k(x) \phi_l(w)]^2 f_{XWZ}(x, w, z) \, dx \, dw \, (1 + o(1)) \\ &= O[1/(n h_n)], \end{aligned} $$

where we have used the fact that $f_{XWZ}(\cdot, \cdot, z)$ is uniformly bounded on $[0,1]^2$.

Combining the bias and variance bounds and applying Chebyshev's inequality, we conclude that

$$ \hat{d}_{zkl} = d_{zkl} + O(h_n^r) + O_P\big( 1/\sqrt{n h_n} \big). \qquad (10) $$

By the triangle and Jensen inequalities, we have

$$ \|\hat{f}_{XWZ} - f_{XWZ}\|^2 \le 2 \big[ \|\hat{f}_{XWZ} - f_{XWZ}^K\|^2 + \|f_{XWZ}^K - f_{XWZ}\|^2 \big]. $$

Under Assumption (A2)(i), we can show that $\|f_{XWZ}^K - f_{XWZ}\|^2 = O(K^{-2r})$ (see Lemma A1 of [20]).

By construction of the estimator, we have

$$ \|\hat{f}_{XWZ} - f_{XWZ}^K\|^2 = \sum_{k=1}^{K} \sum_{l=1}^{K} [\hat{d}_{zkl} - d_{zkl}]^2 = O_P\big\{ K^2 [(n h_n)^{-1} + h_n^{2r}] \big\}, $$

where the last equality is due to (10). The desired result follows immediately. □

Proof of Theorem 3.1. Define $\hat{A}_{\alpha z} = (\alpha I + \hat{T}_z^* \hat{T}_z)^{-1}$ and $A_{\alpha z} = (\alpha I + T_z^* T_z)^{-1}$. Notice that $g_z^\alpha = A_{\alpha z} T_z^* T_z g_z$, where $g_z = g(\cdot, z)$. Then we have

$$ \|\tilde{g}^\alpha - g_z\|^2 \le 4 \big[ \|\hat{A}_{\alpha z} \hat{T}_z^*\|^2 \|\hat{m}_z - \hat{T}_z g_z\|^2 + \|\hat{A}_{\alpha z} \hat{T}_z^*\|^2 \|\hat{T}_z - T_z\|^2 \|g_z - g_z^\alpha\|^2 + \|\hat{A}_{\alpha z}\|^2 \|\hat{T}_z^* - T_z^*\|^2 \|T_z(g_z - g_z^\alpha)\|^2 + \|g_z^\alpha - g_z\|^2 \big]. $$

It follows from Lemma 6.1 that $\|\hat{T}_z - T_z\|^2$ and $\|\hat{T}_z^* - T_z^*\|^2$ are both $O_P\{K^2[(n h_n)^{-1} + h_n^{2r}] + K^{-2r}\}$. Under Assumption (A1), we have $\|g_z^\alpha - g_z\|^2 = O(\alpha^{\beta \wedge 2})$ and $\|T_z(g_z - g_z^\alpha)\|^2 = O(\alpha^{(\beta+1) \wedge 2})$. Moreover, $\|\hat{A}_{\alpha z} \hat{T}_z^*\|^2 = O_P(1/\alpha)$ and $\|\hat{A}_{\alpha z}\|^2 = O_P(1/\alpha^2)$. The main remaining task is to establish the order of the term $\|\hat{m}_z - \hat{T}_z g_z\|^2$. By the triangle and Jensen inequalities, we have

$$ \|\hat{m}_z - \hat{T}_z g_z\|^2 \le 2 \big[ \|\hat{m}_z - m_z\|^2 + \|(\hat{T}_z - T_z) g_z\|^2 \big]. $$

Similar to the proof of Lemma 6.1, under Assumptions (A2)(ii), (A3) and (A4), it is easy to show that

$$ \|\hat{m}_z - m_z\|^2 = O_P\big\{ K^{-2s} + K [(N h_N)^{-1} + h_N^{2\gamma}] \big\}. $$

Then, according to Lemma 6.1, we have

$$ \|\hat{m}_z - \hat{T}_z g_z\|^2 = O_P\big\{ K^2 [h_n^{2r} + (n h_n)^{-1}] + K^{-2\gamma} + K [(N h_N)^{-1} + h_N^{2\gamma}] \big\}. $$

Let $h_N = O(N^{-1/(2\gamma+1)})$, $h_n = O(n^{-1/(2r+1)})$, $K = O(n^{r/[(2r+1)(\gamma+1)]})$ and $\alpha = O(N^{-\tau/(\beta \wedge 2 + 1)})$. If $r \le s$ or $s < r \le 2s(s+1)$, combining all these results completes the proof. □

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Stute, W., Xue, L. and Zhu, L. (2007) Empirical Likelihood Inference in Nonlinear Errors-in-Covariables Models with Validation Data. Journal of the American Statistical Association, 102, 332-346. https://doi.org/10.1198/016214506000000816
[2] Carroll, R.J. and Stefanski, L.A. (1990) Approximate Quasi-Likelihood Estimation in Models with Surrogate Predictors. Journal of the American Statistical Association, 85, 652-663.
https://doi.org/10.1080/01621459.1990.10474925
[3] Carroll, R.J. and Wand, M.P. (1991) Semiparametric Estimation in Logistic Measurement Error Models. Journal of the Royal Statistical Society: Series B, 53, 573-585.
[4] Carroll, R.J., Gail, M.H. and Lubin, J.H. (1993) Case-Control Studies with Errors in Covariates. Journal of the American Statistical Association, 88, 185-199.
[5] Cook, J.R. and Stefanski, L.A. (1994) Simulation-Extrapolation Estimation in Parametric Measurement Error Models. Journal of the American Statistical Association, 89, 1314-1328.
https://doi.org/10.1080/01621459.1994.10476871
[6] Sepanski, J. and Lee, L.F. (1995) Estimation of Linear and Nonlinear Errors-in-Variables Models Using Validation Data. Journal of the American Statistical Association, 90, 130-140.
https://doi.org/10.1080/01621459.1995.10476495
[7] Stefanski, L.A. and Buzas, J.S. (1995) Instrumental Variable Estimation in Binary Regression Measurement Error Models. Journal of the American Statistical Association, 90, 541-550.
https://doi.org/10.1080/01621459.1995.10476546
[8] Wang, Q. and Rao, J.N.K. (2002) Empirical Likelihood-Based Inference in Linear Errors-in-Covariables Models with Validation Data. Biometrika, 89, 345-358.
https://doi.org/10.1093/biomet/89.2.345
[9] Wang, Q. and Yu, K. (2007) Likelihood-Based Kernel Estimation in Semiparametric Errors-In-Covariables Models with Validation Data. Journal of Multivariate Analysis, 98, 455-480.
[10] Lü, Y.-Z., Zhang, R.-Q. and Huang, Z.-S. (2013) Estimation of Semi-Varying Coefficient Model with Surrogate Data and Validation Sampling. Acta Mathematicae Applicatae Sinica, English Series, 29, 645-660. https://doi.org/10.1007/s10255-013-0241-3
[11] Xiao, Y. and Tian, Z. (2014) Dimension Reduction Estimation in Nonlinear Semiparametric Error-in-Response Models with Validation Data. Mathematica Applicata, 27, 730-737.
[12] Xu, W. and Zhu, L. (2015) Nonparametric Check for Partial Linear Errors-in-Cova- riables Models with Validation Data. Annals of the Institute of Statistical Mathematics, 67, 793-815.
https://doi.org/10.1007/s10463-014-0476-7
[13] Zhang, Y. (2015) Estimation of Partially Linear Regression for Errors-in-Variables Models with Validation Data. Springer International Publishing, 322, 733-742. https://doi.org/10.1007/978-3-319-08991-1_76
[14] Wang, Q. (2006) Nonparametric Regression Function Estimation with Surrogate Data and Validation Sampling. Journal of Multivariate Analysis, 97, 1142-1161.
https://doi.org/10.1016/j.jmva.2005.05.008
[15] Du, L., Zou, C. and Wang, Z. (2011) Nonparametric Regression Function Estimation for Error-in-Variable Models with Validation Data. Statistica Sinica, 21, 1093-1113.
https://doi.org/10.5705/ss.2009.047
[16] Kress, R. (1999) Linear Integral Equations. Springer, New York. https://doi.org/10.1007/978-1-4612-0559-3
[17] Devroye, L. and Gyorfi, L. (1985) Nonparametric Density Estimation: The L1 View. John Wiley & Sons, New York.
[18] Efromovich, S. (1999) Nonparametric Curve Estimation: Methods, Theory and Applications. Springer, New York.
[19] Carrasco, M., Florens, J.P. and Renault, E. (2007) Linear Inverse Problems in Structural Econometrics: Estimation Based on Spectral Decomposition and Regularization. In: Handbook of Econometrics, Vol. 6B, Elsevier, North Holland, 5633-5751.
[20] Wu, X. (2010) Exponential Series Estimator of Multivariate Densities. Journal of Econometrics, 156, 354-366. https://doi.org/10.1016/j.jeconom.2009.11.005
