Orthogonal Series Estimation of Nonparametric Regression Measurement Error Models with Validation Data

Abstract

In this article we study an estimation method for the nonparametric regression measurement error model when validation data are available. The estimation procedure is based on orthogonal series estimation and truncated series approximation, and requires neither a structural equation for the measurement error nor any distributional assumption. Convergence rates of the proposed estimator are derived. Simulation examples show that the method is robust against misspecification of the measurement error model.


1. Introduction

Let Y be a scalar response variable and X be an explanatory variable in regression. We consider the nonparametric regression model

$Y = g(X) + \varepsilon$ (1)

where $g(\cdot)$ is an unknown nonparametric regression function, $\varepsilon$ is a noise variable, and, given X, the errors $\varepsilon = Y - g(X)$ are assumed to be independent and identically distributed. We consider model (1) with the explanatory variable X measured with error and Y measured exactly. That is, instead of the true X, the surrogate variable W is observed. Throughout we assume

$E[\varepsilon \mid W] = 0$ with probability 1 (2)

which is always satisfied if, for example, W is a function of X and some independent noise (see e.g. [1] ).

Nonparametric regression model (1) in the presence of errors in the covariables has attracted considerable attention in the literature and is by now well understood. See Carroll et al. [2] for an excellent source of references for various approaches. However, most of these works focus on specifying an error model structure between the true variables X and the surrogate variables W (e.g., the classical error structure or the Berkson error structure). In practice, the relationship between the surrogate variables and the true variables can be far more complicated than the classical or Berkson structural equations usually assumed. This situation presents serious difficulties for valid statistical inference. A common solution is to use validation data to recover the missing information about the relationship between W and X.

We consider settings where some validation data are available for relating X and W. To be specific, we assume that independent validation data $(W_j, X_j)$, $N+1 \le j \le N+n$, are available in addition to the independent primary data $\{(Y_i, W_i)\}_{i=1}^{N}$. Recently, several approaches to statistical inference based on surrogate data and a validation sample have become available (see, for example, [1] , [3] - [12] , among others). However, these approaches are not applicable to the nonparametric regression measurement error model with a validation data set: the models considered by the above authors are parametric or semiparametric, whereas model (1) is nonparametric. With the help of validation data, [13] , [14] and [15] developed estimation methods for the nonparametric regression model (1) with measurement error. However, [13] assumes that the response Y, not the covariable X, is measured with error; the method proposed by [14] cannot be extended to the case where the explanatory variable X is a vector; and the approach proposed by [15] is computationally demanding.

In this paper, without specifying any structural equation, an orthogonal series method is proposed to estimate g with the help of validation data. As explained in Section 2, we estimate g by solving the following Fredholm equation of the first kind,

$Tg = m$ (3)

Here, we propose an orthogonal series estimator of T based on the validation data. Using a similar approach, we estimate m from the primary data set. An estimator of g is then obtained by the Tikhonov regularization method.

This paper is arranged as follows. In Section 2, we define an orthogonal series estimation method. In Section 3, we state the convergence rates of the proposed estimator. Simulation results are reported in Section 4 and a brief discussion is given in Section 5. Proofs of the theorems are presented in Appendix.

2. Model and Series Estimation

2.1. Model

Recall model (1) and the assumptions below it. Assume that, in addition to the primary data set consisting of N independent and identically distributed observations $\{(Y_i, W_i)\}_{i=1}^{N}$ from model (1), validation data consisting of n independent and identically distributed observations $\{(X_j, W_j)\}_{j=N+1}^{N+n}$ are available. Furthermore, we suppose that X and W are both real-valued random variables. The extension to random vectors complicates the notation but does not affect the main ideas and results. Without loss of generality, let the supports of X and W both be contained in $[0,1]$ (otherwise, one can carry out monotone transformations of X and W).

Let $f_{XW}$ and $f_W$ denote the joint density of $(X, W)$ and the marginal density of W, respectively. Then, according to (2), we have

$E(Y \mid W = w) = E[g(X) \mid W = w] = \int g(x)\frac{f_{XW}(x, w)}{f_W(w)}dx$ (4)

Let $m(w) = E(Y \mid W = w)f_W(w)$ and

$L^2([0,1]) = \{\varphi : [0,1] \to \mathbb{R} \ \text{s.t.}\ \|\varphi\| = (\int |\varphi(x)|^2 dx)^{1/2} < \infty\}$

Define the operator $T : L^2([0,1]) \to L^2([0,1])$ by

$(T\varphi)(w) = \int \varphi(x) f_{XW}(x, w)dx$

so that Equation (4) is equivalent to the operator equation

$m(w) = (Tg)(w)$ (5)

According to Equation (5), the function g is the solution of a Fredholm integral equation of the first kind. This inverse problem is known to be ill-posed and requires a regularization method. A variety of regularization schemes are available in the literature (see e.g. [16] ), but in this paper we focus on the Tikhonov regularized solution:

$g_\alpha = \arg\min_{g}\left[\|Tg - m\|^2 + \alpha\|g\|^2\right]$ (6)

where the penalization term α > 0 is the regularization parameter.

We define the adjoint operator $T^*$ of T by

$(T^*\psi)(x) = \int \psi(w) f_{XW}(x, w)dw$

where $\psi \in L^2([0,1])$. Since the first-order condition of (6) is $T^*(Tg - m) + \alpha g = 0$, the regularized solution (6) can equivalently be written as

$g_\alpha = (\alpha I + T^*T)^{-1}T^*m$ (7)

2.2. Orthogonal Series Estimation

In order to estimate the solution (7), we need to estimate $T$, $T^*$ and $m$. In this paper, we consider the orthogonal series method. Under the regularity conditions given in Section 3, the density function $f_{XW}(x, w)$ and the function $m(w)$ may be approximated to any desired accuracy by truncated orthogonal series,

$f_{XW}^{K}(x, w) = \sum_{k=1}^{K}\sum_{l=1}^{K}d_{kl}\phi_k(x)\phi_l(w)$ and $m^{K}(w) = \sum_{k=1}^{K}m_k\phi_k(w)$

where

$d_{kl} = \iint f_{XW}(x, w)\phi_k(x)\phi_l(w)\,dx\,dw$ and $m_k = \int m(w)\phi_k(w)\,dw$

Here, $\{\phi_k\}$ is an orthonormal basis of $L^2([0,1])$, which may be trigonometric, polynomial, spline, wavelet, and so on. A discussion of different bases and their properties can be found in the literature (see e.g. [17] ). To be specific, here and in what follows we consider the normalized Legendre polynomials on $[0,1]$, which can be obtained through Rodrigues' formula

$\phi_k(x) = \frac{\sqrt{2k+1}}{k!}\frac{d^k}{dx^k}\left[(x^2 - x)^k\right]$ (8)
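For computation, (8) coincides with the normalized shifted Legendre polynomials, $\phi_k(x) = \sqrt{2k+1}\,P_k(2x-1)$, where $P_k$ denotes the standard Legendre polynomial on $[-1,1]$. The following minimal Python sketch (assuming NumPy and SciPy are available; the helper name legendre_basis is ours) evaluates this basis and checks its orthonormality numerically:

import numpy as np
from scipy.special import eval_legendre

def legendre_basis(x, K):
    # normalized shifted Legendre basis on [0, 1]: phi_k(x) = sqrt(2k + 1) * P_k(2x - 1), k = 0, ..., K - 1
    x = np.asarray(x, dtype=float)
    return np.stack([np.sqrt(2 * k + 1) * eval_legendre(k, 2 * x - 1) for k in range(K)], axis=-1)

# crude orthonormality check by averaging over a fine grid on [0, 1]
grid = np.linspace(0.0, 1.0, 20001)
B = legendre_basis(grid, 5)
print(np.round(B.T @ B / len(grid), 3))   # approximately the 5 x 5 identity matrix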

The integer K is a truncation point, which is the main smoothing parameter in the approximating series, and $d_{kl}$ and $m_k$ are the generalized Fourier coefficients of $f_{XW}$ and m, respectively.

Note that $d_{kl} = E[\phi_k(X)\phi_l(W)]$ and $m_k = E[Y\phi_k(W)]$. Intuitively, we can estimate $d_{kl}$, $f_{XW}(x, w)$, $m_k$ and $m(w)$ by

$\hat d_{kl} = \frac{1}{n}\sum_{j=N+1}^{N+n}\phi_k(X_j)\phi_l(W_j)$, $\hat f_{XW}(x, w) = \sum_{k=1}^{K}\sum_{l=1}^{K}\hat d_{kl}\phi_k(x)\phi_l(w)$

$\hat m_k = \frac{1}{N}\sum_{i=1}^{N}Y_i\phi_k(W_i)$ and $\hat m(w) = \sum_{k=1}^{K}\hat m_k\phi_k(w)$

respectively. The operators $T$ and $T^*$ can then be consistently estimated by

$(\hat T\varphi)(w) = \int \varphi(x)\hat f_{XW}(x, w)dx$ and $(\hat T^*\psi)(x) = \int \psi(w)\hat f_{XW}(x, w)dw$

Finally, the estimator of $g(x)$ is given by

$\hat g_\alpha = (\alpha I + \hat T^*\hat T)^{-1}\hat T^*\hat m$ (9)
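Because $\hat f_{XW}$ and $\hat m$ involve only the first K basis functions, the operator expression (9) reduces to a $K \times K$ linear system in coefficient space: writing $\hat D = (\hat d_{kl})$ and $\hat m = (\hat m_1, \ldots, \hat m_K)^\top$, the coefficients of $\hat g_\alpha$ in the basis $\{\phi_k\}$ are $\hat c = (\alpha I_K + \hat D\hat D^\top)^{-1}\hat D\hat m$, so that $\hat g_\alpha(x) = \sum_{k=1}^{K}\hat c_k\phi_k(x)$. The following Python sketch implements this reduction under our reading of the estimator; the function names and the assumption that X and W have already been mapped into $[0,1]$ are ours, not the paper's:

import numpy as np
from scipy.special import eval_legendre

def legendre_basis(x, K):
    # normalized shifted Legendre basis on [0, 1], as in (8)
    x = np.asarray(x, dtype=float)
    return np.stack([np.sqrt(2 * k + 1) * eval_legendre(k, 2 * x - 1) for k in range(K)], axis=-1)

def series_tikhonov(Y, W, X_val, W_val, K, alpha):
    # m_hat[k] = (1/N) * sum_i Y_i * phi_k(W_i), from the primary data (Y_i, W_i)
    m_hat = legendre_basis(W, K).T @ np.asarray(Y, dtype=float) / len(Y)
    # D_hat[k, l] = (1/n) * sum_j phi_k(X_j) * phi_l(W_j), from the validation data (X_j, W_j)
    D_hat = legendre_basis(X_val, K).T @ legendre_basis(W_val, K) / len(X_val)
    # coefficient-space Tikhonov solution of (9): c_hat = (alpha I + D_hat D_hat^T)^{-1} D_hat m_hat
    c_hat = np.linalg.solve(alpha * np.eye(K) + D_hat @ D_hat.T, D_hat @ m_hat)
    # return g_hat_alpha as a callable function of x
    return lambda x: legendre_basis(x, K) @ c_hat

For example, g_hat = series_tikhonov(Y, W, X_val, W_val, K=8, alpha=1e-3) returns a function that can then be evaluated on a grid such as np.linspace(0, 1, 27).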

3. Theoretical Properties

The main objective of this section is to derive the statistical properties of the estimator proposed in Section 2.2. For this purpose, we assume:

Assumption 1. 1) The support of $(X, W)$ is contained in $[0,1]^2$; 2) The joint density of $(X, W)$ is square integrable w.r.t. the Lebesgue measure on $[0,1]^2$.

This is a sufficient condition for T to be a Hilbert-Schmidt operator and therefore to be compact (see [18] ). As a consequence of compactness, T admits a singular value decomposition. Let $\lambda_k$, $k \ge 0$, be the sequence of nonzero singular values of T, and let $\varphi_k$, $k \ge 0$, and $\psi_k$, $k \ge 0$, be the two corresponding orthonormal sequences such that (see [16] ):

$T\varphi_k = \lambda_k\psi_k$, $T^*\psi_k = \lambda_k\varphi_k$; $T^*T\varphi_k = \lambda_k^2\varphi_k$, $TT^*\psi_k = \lambda_k^2\psi_k$, for $k \ge 0$

We define $\Phi_\beta$ as a $\beta$-regularity space for $\beta > 0$:

$\Phi_\beta = \Big\{\varphi \in L^2([0,1]) \ \text{such that}\ \sum_{k \ge 0}\frac{\langle\varphi, \varphi_k\rangle^2}{\lambda_k^{2\beta}} < +\infty\Big\}$

Here and below, we denote by $\langle\cdot,\cdot\rangle$ the scalar product in $L^2([0,1])$.

Assumption 2. We have $g \in \Phi_\beta$ for some $\beta > 0$.

We then obtain the following result (see [18] ).

Proposition 3.1. Suppose Assumptions 1 and 2 hold. Then $\|g - g_\alpha\|^2 = O(\alpha^{\beta\wedge 2})$, where $\beta\wedge 2 = \min\{\beta, 2\}$.

In order to obtain the rate of convergence of $\|\hat g_\alpha - g\|^2$, we impose the following additional conditions:

Assumption 3. 1) The joint density $f_{XW}$ is r-times continuously differentiable on $[0,1]^2$; 2) The function $m(\cdot)$ is s-times continuously differentiable on $[0,1]$.

Assumption 4. The function $E(Y^2 \mid W = w)$ is bounded uniformly on $[0,1]$.

Assumption 5. 1) $\lim n/N = \mu \in [0, \infty)$; 2) $\alpha \to 0$, $K = K(N, n) \to \infty$, $K/N \to 0$, $K^2/n \to 0$ as $n, N \to \infty$.

Theorem 3.1. Suppose Assumptions 1 - 5 hold. Let $\gamma = \min\{r, s\}$. Then we have

$\|\hat g_\alpha - g\|^2 = O_P\left[\frac{1}{\alpha}\left(\frac{K}{N} + \frac{1}{K^{2\gamma}} + \frac{K^2}{n}\right) + \alpha^{\beta\wedge 2}\right]$ (10)

In (10), the term $K^{-2\gamma}$ arises from the bias of $\hat g_\alpha$ caused by truncating the series approximations of $f_{XW}$ and $m$. This truncation bias decreases as $\gamma$ increases. The terms $N^{-1}K$ and $n^{-1}K^2$ are induced by the random sampling errors of the primary (surrogate) and validation samples, respectively, in the estimates of the generalized Fourier coefficients $m_k$ and $d_{kl}$. From Theorem 3.1, it is easy to obtain the following corollary.

Corollary 3.1. Suppose the assumptions of Theorem 3.1 are satisfied. Let $K = O(n^{1/(2\gamma+2)})$ and $\alpha = O(n^{-\gamma/[(\gamma+1)(\beta\wedge 2+1)]})$. Then we have

$\|\hat g_\alpha - g\|^2 = O_P\left(n^{-\frac{\kappa(\beta\wedge 2)}{\beta\wedge 2 + 1}}\right)$

where $\kappa = \gamma/(\gamma+1)$. This choice of K equates the terms $K^{-2\gamma}$ and $K^2/n$ in (10), and the choice of $\alpha$ then balances the resulting variance term $\alpha^{-1}n^{-\kappa}$ against the regularization bias $\alpha^{\beta\wedge 2}$.

The proofs of all the results are reported in the Appendix.

4. Simulation Studies

In this section, we conduct simulation studies of the finite-sample performance of the proposed estimator. First, for comparison, we consider the standard Nadaraya-Watson estimator based on the primary data set $\{(Y_i, W_i)\}_{i=1}^{N}$ (denoted by $\hat g_N$), which ignores the measurement error. It should be pointed out that a Nadaraya-Watson estimator based on the true pairs $(Y_i, X_i)$ would serve as a gold standard in such a study, but it is practically unachievable because of the measurement errors. Second, the performance of an estimator $g_{est}$ is assessed by the square root of the average squared errors (RASE),

$\mathrm{RASE} = \Big\{\frac{1}{M}\sum_{s=1}^{M}\big[g_{est}(u_s) - g(u_s)\big]^2\Big\}^{1/2}$

where $u_s$, $s = 1, \ldots, M$, are the grid points at which $g_{est}(\cdot)$ is evaluated.
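As a small illustration, the RASE criterion can be computed as follows (a sketch assuming NumPy; g_est and g_true are callables and grid holds the points $u_s$):

import numpy as np

def rase(g_est, g_true, grid):
    # square root of the average squared error over the evaluation grid u_1, ..., u_M
    grid = np.asarray(grid, dtype=float)
    return np.sqrt(np.mean((g_est(grid) - g_true(grid)) ** 2))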

We considered model (1) with the regression function being

1) $g(x) = \phi_{0,1.5}(4x) + \phi_{1,2}(4x) + \phi_{2,5}(4x)$, $\varepsilon \sim N(0, 0.2)$,

2) $g(x) = 5\sin(2x)\exp(-16x^2/50)$, $\varepsilon \sim N(0, 0.2)$,

where $\phi_{\mu,\sigma}$ is the density of a Normal$(\mu, \sigma^2)$ variable. To perform the simulation, we generate W from $f_W$ and $\delta$ from $f_\delta$, and set $X = W + \delta$. The densities $f_W$ and $f_\delta$, chosen in the beta family, are

$f_W(w) = \frac{1 - w^2/4}{2B(1/2, 2)}I(w \in [-2, 2])$

$f_\delta(u) = \frac{(1 - u^2)^{\rho_\delta}}{B(1/2, \rho_\delta + 1)}I(u \in [-1, 1])$

where we chose $\rho_\delta = 1$, $\rho_\delta = 3$ and $\rho_\delta = 5$ (the greater the value of $\rho_\delta$, the smaller the variance of $\delta$). Simulations were run with validation and primary sample sizes $(n, N)$ ranging from $(20, 60)$ to $(50, 250)$, according to the ratios $\kappa = N/n = 3$ and $\kappa = N/n = 5$, respectively. For each case, 500 simulated data sets were generated for each sample size $(n, N)$. A data-generation sketch is given below.
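The following Python sketch shows one way to generate such data. It uses the standard fact that if $V \sim \mathrm{Beta}(\rho+1, \rho+1)$ then $2V - 1$ has density proportional to $(1 - u^2)^\rho$ on $[-1, 1]$. The exponent $\rho_W = 1$ for $f_W$, the reading of $N(0, 0.2)$ as a variance of 0.2, and the final linear rescaling of X and W into $[0,1]$ are our assumptions, since these details are not spelled out above:

import numpy as np

rng = np.random.default_rng(0)

def simulate(N, n, rho_delta, g, var_eps=0.2, rho_W=1):
    # W has density proportional to (1 - w^2/4)^rho_W on [-2, 2]
    W = 2.0 * (2.0 * rng.beta(rho_W + 1, rho_W + 1, size=N + n) - 1.0)
    # delta has density proportional to (1 - u^2)^rho_delta on [-1, 1]
    delta = 2.0 * rng.beta(rho_delta + 1, rho_delta + 1, size=N + n) - 1.0
    X = W + delta
    Y = g(X) + rng.normal(0.0, np.sqrt(var_eps), size=N + n)
    # monotone (linear) rescaling of X and W into [0, 1] before applying the Legendre basis
    X01, W01 = (X + 3.0) / 6.0, (W + 3.0) / 6.0
    # primary sample (Y_i, W_i), i = 1, ..., N; validation sample (X_j, W_j), j = N + 1, ..., N + n
    return (Y[:N], W01[:N]), (X01[N:], W01[N:])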

To implement our method (9), the regularization parameter $\alpha$ and the truncation parameter K must be chosen. Here, we select $\alpha$ and K by minimizing the following two-dimensional cross-validation criterion

$\mathrm{CV}(\alpha, K) = \sum_{i=1}^{N}\Big\{Y_i - \sum_{k=1}^{K}\hat g_k^{(-i)}\phi_k(W_i)\Big\}^2$

where $\hat g_k^{(-i)}$, $k = 1, \ldots, K$, are the coefficients of the solution (9) computed after deleting the ith primary observation $(Y_i, W_i)$; a sketch of this selection procedure follows below. In addition, for the naive estimator $\hat g_N$, we used the standard normal kernel, and the bandwidth was selected by the leave-one-out CV approach. In all graphs, to illustrate the performance of an estimator, we show the estimated curves corresponding to the first (Q1), second (Q2) and third (Q3) quartiles of the ordered RASEs. The target curve is always represented by a solid curve.
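A direct (unoptimized) sketch of this selection step, reusing the series_tikhonov helper from the sketch in Section 2.2 and following the CV criterion exactly as written, might look as follows; the candidate grids alphas and Ks are user-supplied:

import numpy as np
from itertools import product

def cv_select(Y, W, X_val, W_val, alphas, Ks):
    # two-dimensional leave-one-out cross-validation over the primary sample
    Y, W = np.asarray(Y, dtype=float), np.asarray(W, dtype=float)
    best, best_score = None, np.inf
    for alpha, K in product(alphas, Ks):
        score = 0.0
        for i in range(len(Y)):
            keep = np.arange(len(Y)) != i
            g_i = series_tikhonov(Y[keep], W[keep], X_val, W_val, K, alpha)  # fit (9) without observation i
            score += (Y[i] - g_i(W[i])) ** 2  # compare Y_i with the leave-one-out fit at W_i
        if score < best_score:
            best, best_score = (alpha, K), score
    return best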

Figure 1 shows the regression function and the quartile curves of the 500 estimates $\hat g_\alpha(x)$ under different values of $\rho_\delta$ for sample size $(N, n) = (90, 30)$ in example (a). From this figure, we see that the proposed estimator $\hat g_\alpha(x)$ performs very well in this setting. Taking the measurement error levels into account, as the variance of $\delta$ decreases, $\hat g_\alpha(x)$ tends to have smaller bias at the peaks of the regression curve.

Figure 2 illustrates the way in which the estimator improves as the sample size increases. We compare the results obtained when estimating curve (b) under different settings of sample size $(N, n)$ for $\rho_\delta = 3$. We see clearly that, as the sample size increases, the quality of the estimators improves significantly.

Table 1 compares, for various sample sizes, the results obtained for estimating curves (a) and (b) when $\rho_\delta = 1$, $\rho_\delta = 3$ and $\rho_\delta = 5$. The RASEs, evaluated at 27 grid points of x, are reported.

Figure 1. Estimation of regression function (a) for samples of size $(N, n) = (90, 30)$, when $\rho_\delta = 1$ (left panel), $\rho_\delta = 3$ (middle panel) and $\rho_\delta = 5$ (right panel). The solid curve is the target curve.


Table 1. The RASE comparison for the estimators $\hat g_\alpha(x)$ and $\hat g_N(x)$. Here $\kappa = N/n$.

Figure 2. Estimation of regression function (b) for $\rho_\delta = 3$, when $(N, n) = (60, 20)$ (left panel), $(N, n) = (90, 30)$ (middle panel) and $(N, n) = (150, 50)$ (right panel). The solid curve is the target curve.

Our results show that the estimator $\hat g_\alpha$ outperforms $\hat g_N$. Also, the performance of $\hat g_\alpha$ improves considerably (i.e., the corresponding RASEs decrease) as the sample sizes increase. For any nonparametric method in a measurement error regression problem, the quality of the estimator also depends on the contamination of the observed sample; that is, the performance of the estimator depends on the variance of the measurement error. Here, we compare the results for different values of $\rho_\delta$. As expected, Table 1 shows that the effect of this variance on the estimator's performance is evident.

5. Discussion

In this paper, we have proposed a new method for estimating nonparametric regression models when the explanatory variable is measured with error, under the assumption that a proper validation data set is available. The validation data set allows us to estimate the joint density $f_{XW}$ of the true variable and the surrogate variable via an orthogonal series method. In principle, the proposed method can be extended to multidimensional settings in which X is a p-variate explanatory variable. When the dimension of X, and hence of W, is large, the curse of dimensionality may arise in the multivariate density estimation of $f_{XW}$. In this case, the exponential series estimator proposed by [19] ensures the positiveness of the estimated density. After obtaining the exponential series estimator of $f_{XW}$, results similar to those in the previous sections can be obtained. The asymptotic theory in this setting remains to be pursued in future research.

Acknowledgements

This work was supported by grant GJJ160927 and by the Natural Science Foundation of Jiangxi Province of China under grant number 20142BAB211018.

Appendix

Proofs of Theorem 3.1 and Corollary 3.1:

We first present some lemmas that are needed to prove the main theorem.

Lemma 7.1. Suppose Assumptions 1 and 3(1) hold. Then:

1) $\|\hat T - T\|_{HS}^2 = O_P(K^{-2r} + n^{-1}K^2)$;

2) $\|\hat T^* - T^*\|_{HS}^2 = O_P(K^{-2r} + n^{-1}K^2)$,

where $\|\cdot\|_{HS}$ denotes the Hilbert-Schmidt norm, i.e.,

$\|\hat T - T\|_{HS}^2 = \iint\big[\hat f_{XW}(x, w) - f_{XW}(x, w)\big]^2 dx\,dw$

Proof of Lemma 7.1. According to Lemma A1 of Wu [19] , we have

$\|f_{XW} - f_{XW}^K\|^2 = O(K^{-2r})$

Note that the Legendre polynomials $\phi_k$ in (8) are orthonormal and complete in $L^2([0,1])$. Then

$\|\hat f_{XW} - f_{XW}^K\|^2 = \sum_{k=1}^{K}\sum_{l=1}^{K}(\hat d_{kl} - d_{kl})^2$

By $E[\hat d_{kl}] = d_{kl}$, we have

$E\Big\{\sum_{k=1}^{K}\sum_{l=1}^{K}(\hat d_{kl} - d_{kl})^2\Big\} = \sum_{k=1}^{K}\sum_{l=1}^{K}\mathrm{Var}(\hat d_{kl}) \le \frac{1}{n}\sum_{k=1}^{K}\sum_{l=1}^{K}E\big[\phi_k^2(X)\phi_l^2(W)\big] = O\Big[\frac{1}{n}\sum_{k=1}^{K}\sum_{l=1}^{K}\|\phi_k\|^2\|\phi_l\|^2\Big] = O(K^2/n)$

where we have used the fact that $f_{XW}$ is uniformly bounded on $[0,1]^2$.

By Chebyshev's inequality, we then have $\|\hat f_{XW} - f_{XW}^K\|^2 = O_P(K^2/n)$. The desired result follows immediately.

Lemma 7.2. Suppose Assumptions 1, 3 and 4 hold. Let $\gamma = \min\{r, s\}$. Then

$\|\hat m - \hat T g\|^2 = O_P(N^{-1}K + K^{-2\gamma} + n^{-1}K^2)$

Proof of Lemma 7.2. Note that $Tg = m$. By the triangle inequality and Jensen's inequality, we have

$\|\hat m - \hat T g\|^2 \le 2\big[\|\hat m - m\|^2 + \|(T - \hat T)g\|^2\big]$

Since $g \in L^2([0,1])$, Lemma 7.1 gives $\|(T - \hat T)g\|^2 = O_P(K^{-2r} + n^{-1}K^2)$. Following the proof of Lemma 7.1, under Assumptions 3(2) and 4, we can show that $\|\hat m - m\|^2 = O_P(K^{-2s} + N^{-1}K)$. Combining these bounds yields the result of Lemma 7.2.

Proof of Theorem 3.1. Define $\hat A_\alpha = (\alpha I + \hat T^*\hat T)^{-1}$ and $A_\alpha = (\alpha I + T^*T)^{-1}$. Noticing that $g_\alpha = A_\alpha T^*Tg$, we have

$\hat g_\alpha - g = \hat A_\alpha(\hat T^*\hat m - \hat T^*\hat T g) + (\hat A_\alpha\hat T^*\hat T g - A_\alpha T^*Tg) + (g_\alpha - g)$

The second right-hand side term can itself be decomposed into two components:

$\hat A_\alpha\hat T^*\hat T g - A_\alpha T^*Tg = \hat A_\alpha(\hat T^*\hat T - T^*T)g + (\hat A_\alpha - A_\alpha)T^*Tg$

Indeed, since $\hat A_\alpha = (\alpha I + \hat T^*\hat T)^{-1}$ and $A_\alpha = (\alpha I + T^*T)^{-1}$, the identity $B^{-1} - C^{-1} = B^{-1}(C - B)C^{-1}$ gives:

$\hat A_\alpha - A_\alpha = \hat A_\alpha(T^*T - \hat T^*\hat T)A_\alpha$

Thus,

$\hat A_\alpha\hat T^*\hat T g - A_\alpha T^*Tg = \hat A_\alpha(\hat T^*\hat T - T^*T)(g - g_\alpha)$

From the properties of the norm, we have

$\|\hat g_\alpha - g\|^2 \le 3\big[\|\hat A_\alpha\hat T^*(\hat m - \hat T g)\|^2 + \|\hat A_\alpha(\hat T^*\hat T - T^*T)(g - g_\alpha)\|^2 + \|g_\alpha - g\|^2\big]$

Consider the first term. We have

$\|\hat A_\alpha\hat T^*(\hat m - \hat T g)\|^2 \le \|\hat A_\alpha\hat T^*\|^2\,\|\hat m - \hat T g\|^2$

The first factor, $\|\hat A_\alpha\hat T^*\|^2 = \|(\alpha I + \hat T^*\hat T)^{-1}\hat T^*\|^2$, equals the largest squared singular value of the operator $(\alpha I + \hat T^*\hat T)^{-1}\hat T^*$. These singular values converge to $\lambda_k/(\alpha + \lambda_k^2)$, so the squared norm is smaller than $1/\alpha$. It then follows from Lemma 7.2 that

$\|\hat A_\alpha\hat T^*(\hat m - \hat T g)\|^2 = O_P\big[\alpha^{-1}(N^{-1}K + K^{-2\gamma} + n^{-1}K^2)\big]$ (11)

Next, we consider the term $\|\hat A_\alpha(\hat T^*\hat T - T^*T)(g - g_\alpha)\|^2$. Note that

$\hat T^*\hat T - T^*T = \hat T^*(\hat T - T) + (\hat T^* - T^*)T$

Then

$\|\hat A_\alpha(\hat T^*\hat T - T^*T)(g - g_\alpha)\|^2 \le 2\big[\|\hat A_\alpha\hat T^*\|^2\|\hat T - T\|^2\|g - g_\alpha\|^2 + \|\hat A_\alpha\|^2\|\hat T^* - T^*\|^2\|T(g - g_\alpha)\|^2\big]$

We have $\|\hat A_\alpha\hat T^*\|^2 = O_P(1/\alpha)$ and $\|\hat A_\alpha\|^2 = O_P(1/\alpha^2)$ (see [20] ). According to Lemma 7.1, both $\|\hat T - T\|^2$ and $\|\hat T^* - T^*\|^2$ are $O_P(K^{-2r} + n^{-1}K^2)$.

By Proposition 3.1:

$\|g_\alpha - g\|^2 = O(\alpha^{\beta\wedge 2})$ (12)

The term $T(g - g_\alpha)$, which is identical to $\alpha T A_\alpha g$, is a regularization bias term of order $O(\alpha^{(\beta+1)\wedge 2})$.

Therefore, we have

$\|\hat A_\alpha(\hat T^*\hat T - T^*T)(g - g_\alpha)\|^2 = O_P\big[\alpha^{(\beta-1)\wedge 0}(K^{-2r} + n^{-1}K^2)\big]$ (13)

Combining (11), (12) and (13) gives the desired result of Theorem 3.1.

Proof of Corollary 3.1. By Theorem 3.1, the proof of Corollary 3.1 is straightforward and is omitted.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Stute, W., Xue, L. and Zhu, L. (2007) Empirical Likelihood Inference in Nonlinear Errors-in-Covariables Models with Validation Data. Journal of the American Statistical Association, 102, 332-346.
https://doi.org/10.1198/016214506000000816
[2] Carroll, R.J., Ruppert, D., Stefanski, L.A. and Crainiceanu, C.M. (2006) Measurement Error in Nonlinear Models. 2nd Edition, Chapman and Hall CRC Press, Boca Raton.
https://doi.org/10.1201/9781420010138
[3] Carroll, R.J. and Stefanski, L.A. (1990) Approximate Quasi-Likelihood Estimation in Models with Surrogate Predictors. Journal of the American Statistical Association, 85, 652-663.
https://doi.org/10.1080/01621459.1990.10474925
[4] Carroll, R.J. and Wand, M.P. (1991) Semiparametric Estimation in Logistic Measurement Error Models. Journal of the Royal Statistical Society B, 53, 573-585.
[5] Sepanski, J.H. and Lee, L.F. (1995) Semiparametric Estimation of Nonlinear Errors-in-Variables Models with Validation Study. Journal of Nonparametric Statistics, 4, 365-394.
https://doi.org/10.1080/10485259508832627
[6] Cook, J.R. and Stefanski, L.A. (1994) Simulation-Extrapolation Estimation in Parametric Measurement Error Models. Journal of the American Statistical Association, 89, 1314-1328.
https://doi.org/10.1080/01621459.1994.10476871
[7] Carroll, R.J., Gail, M.H. and Lubin, J.H. (1993) Case-Control Studies with Errors in Covariables. Journal of the American Statistical Association, 88, 185-199.
[8] Stefanski, L.A. and Buzas, J.S. (1995) Instrumental Variable Estimation in Binary Regression Measurement Error Models. Journal of the American Statistical Association, 90, 541-550. https://doi.org/10.1080/01621459.1995.10476546
[9] Lv, Y.-Z., Zhang, R.-Q. and Huang, Z.-S. (2013) Estimation of Semi-Varying Coefficient Model with Surrogate Data and Validation Sampling. Acta Mathematicae Applicatae Sinica, English Series, 29, 645-660. https://doi.org/10.1007/s10255-013-0241-3
[10] Xiao, Y. and Tian, Z. (2014) Dimension Reduction Estimation in Nonlinear Semiparametric Error-in-Response Models with Validation Data. Mathematica Applicata, 27, 730-737.
[11] Yu, S.H. and Wang, D.H. (2014) Empirical Likelihood for First-Order Autoregressive Error-in-Variable Models with Validation Data. Communications in Statistics - Theory and Methods, 43, 1800-1823. https://doi.org/10.1080/03610926.2012.679763
[12] Xu, W. and Zhu, L. (2015) Nonparametric Check for Partial Linear Errors-in-Covariables Models with Validation Data. Annals of the Institute of Statistical Mathematics, 67, 793-815.
https://doi.org/10.1007/s10463-014-0476-7
[13] Wang, Q. (2006) Nonparametric Regression Function Estimation with Surrogate Data and Validation Sampling. Journal of Multivariate Analysis, 97, 1142-1161.
https://doi.org/10.1016/j.jmva.2005.05.008
[14] Du, L., Zou, C. and Wang, Z. (2015) Nonparametric Regression Function Estimation for Error-in-Variable Models with Validation Data. Open Journal of Statistics, 5, 808-819.
https://doi.org/10.4236/ojs.2015.57080
[15] Yin, Z. and Liu, F. (2011) Orthogonal Series Estimation of Nonparametric Regression Measurement Error Models with Validation Data. Statistica Sinica, 21, 1093-1113.
https://doi.org/10.5705/ss.2009.047
[16] Darolles, S., Florens, J.P., Renault, E. and Kress, R. (1999) Linear Integral Equations. Springer, New York.
[17] Efromovich, S. (1999) Nonparametric Curve Estimation: Methods, Theory and Applications. Springer, New York.
[18] Carrasco, M., Florens, J.P. and Renault, E. (2007) Linear Inverse Problems in Structural Econometrics: Estimation Based on Spectral Decomposition and Regularization. In: Handbook of Econometrics, Vol. 6B, Elsevier, North Holland, 5633-5751.
[19] Wu, X. (2010) Exponential Series Estimator of Multivariate Densities. Journal of Econometrics, 156, 354-366. https://doi.org/10.1016/j.jeconom.2009.11.005
[20] Groetsch, C. (1984) The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman, London.
