A Multiplicative Bias Correction for Nonparametric Approach and the Two Sample Problem in Sample Survey

Suppose two separate surveys collect related information on a single population U, and consider the situation where we want to combine the data from the two surveys to yield a single set of estimates of a population quantity (population parameter) of interest. This article applies a multiplicative bias reduction estimator for nonparametric regression to the two sample problem in sample survey. The approach consists of applying a multiplicative bias correction to a pilot estimator. The multiplicative bias correction method, proposed by Linton and Nielsen (1994), guarantees a positive estimate and reduces the bias of the estimate with a negligible increase in variance. Applying this method to the two sample problem, we show through a study of its asymptotic properties that the resulting estimator is asymptotically unbiased and statistically consistent. Furthermore, an empirical study is carried out to compare the performance of the proposed estimator with existing ones.


Stephane, K. , Otieno, R. and Mageto, T. (2017) A Multiplicative Bias Correction for Nonparametric Approach and the Two Sample Problem in Sample Survey. Open Journal of Statistics, 7, 1053-1066. doi: 10.4236/ojs.2017.76073.

1. Introduction

Sometimes, two separate surveys gather related information on a variable of interest for a population U, perhaps with distinct designs and modes of sampling. The question of how best to combine the data from the two surveys then becomes very important.

Take as an example the students of the Sub-regional Institute of Statistics and Applied Economics (ISSEA) and those of the Polytechnic Institute, both collecting data on unemployment in Cameroon in different ways and with different weightings. Researchers at the National Institute of Statistics (Cameroon) are faced with the following problem: how can the data from these two distinct surveys be joined to produce a single data set and a better representation of the population?

Scientists have been looking into these problems for several years, approaching them in different ways. One approach obtains estimates from the two surveys separately and combines them using the inverse of the estimated variances as weights, as seen in  .  went further by using the empirical likelihood method to combine information from multiple surveys. Another option consists of pooling the two data sets into a single data set, taking into account the weight on individual sample units. Developed in   are several such methods, including the pseudo-likelihood, missing information principle and iterated post-stratified estimators. After simulations on two different populations, it was concluded that in neither population did the design-based ways of combining data yield the best results. The iterated post-stratified estimator appears to be a very promising nonparametric way to combine data from two sources.

More recently,   used nonparametric regression, which is the model-based sampler's method of choice when there is serious doubt about the suitability of a linear or other simple parametric model for the survey data at hand. Nonparametric regression supersedes the need for design weights and standard design-based weighting. Recognition of this is especially helpful in sampling situations where design weights are missing or questionable.

This study makes use of kernel smoothers, in particular the Nadaraya-Watson smoother. However, estimators based on Nadaraya-Watson smoothing weights are typically biased in small samples and at boundary points.

Alternative techniques for reducing the bias exist; for a detailed review see  -  . These methods improve the performance of nonparametric regression at points of large curvature. In this framework, however, we consider a multiplicative bias correction approach to nonparametric regression in order to obtain an estimate with a smaller bias than the existing ones.

Outline of the Paper

The remaining part of this paper is organized as follows: In Section 2, a multiplicative bias corrected estimator ${\stackrel{^}{T}}_{MBC}$ for the finite population totals is proposed. In Section 3, the asymptotic properties of the proposed estimator are derived. In Section 4, an empirical study of the derived properties is presented. In Section 5 we give a conclusion to the paper.

2. Proposed Estimator

Consider a finite population $U=\left\{1,2,\cdots ,N\right\}$ and let ${y}_{1},{y}_{2},\cdots ,{y}_{n}$ represent the combined random sample drawn from the population using different sampling techniques. Suppose that to each of these ${y}_{i}$ 's there corresponds auxiliary information ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ .

Consider the following model:

$E\left({Y}_{i}/{X}_{i}={x}_{i}\right)=h\left({x}_{i}\right)$ (1)

$\mathrm{cov}\left({Y}_{i},{Y}_{j}/{X}_{i}={x}_{i},{X}_{j}={x}_{j}\right)=\left\{\begin{array}{l}{\sigma }^{2}\left({x}_{i}\right),i=j\\ 0,i\ne j\end{array}$ (2)

where $h\left({x}_{i}\right)$ and ${\sigma }^{2}\left({x}_{i}\right)$ are twice continuously differentiable functions (and hence locally Lipschitz continuous). With these assumptions, one can estimate $h\left({x}_{i}\right)$ and ${\sigma }^{2}\left({x}_{i}\right)$ non-parametrically.

Let ${ϵ}_{i}={Y}_{i}-h\left({X}_{i}\right)$ be i.i.d. with zero mean and variance ${\sigma }^{2}$ . We refer to this set-up as the weak model. Under this scheme, we can ignore which of the original samples each ${Y}_{i}$ comes from.

In computing the finite population total, we usually have the formula

$T=\underset{i\in U}{\sum }\text{ }\text{ }{y}_{i}=\underset{i\in s}{\sum }\text{ }\text{ }{y}_{i}+\underset{j\in r}{\sum }\text{ }\text{ }{y}_{j}$ (3)

where s refers to the sample and r to the nonsampled part of the population. Since the values in the sample part are known, estimating the finite population total is equivalent to predicting the nonsampled part of the population.

To do this, the multiplicative bias correction technique is employed, and the proposed estimator of the population total is defined as

${\stackrel{^}{T}}_{MBC}=\underset{i\in s}{\sum }\frac{{y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)}{{\pi }_{i}}+\underset{j\in r}{\sum }\text{ }\text{ }\stackrel{^}{h}\left({x}_{j}\right)$ (4)

where

${\pi }_{i}$ is the inclusion probability

$\stackrel{^}{h}\left({x}_{i}\right)$ is the multiplicative bias corrected estimator of the regression function.
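As a minimal illustration, Equation (4) can be evaluated as follows, assuming fitted values at the sampled and nonsampled points are already available; the function and argument names here are illustrative, not part of the original paper.

```python
import numpy as np

def t_mbc(y_s, h_hat_s, pi_s, h_hat_r):
    """Equation (4): design-weighted sample residuals plus the
    predicted nonsample part.  y_s, h_hat_s, pi_s refer to the
    sampled units; h_hat_r to predictions for nonsampled units."""
    y_s, h_hat_s, pi_s, h_hat_r = map(np.asarray, (y_s, h_hat_s, pi_s, h_hat_r))
    return np.sum((y_s - h_hat_s) / pi_s) + np.sum(h_hat_r)
```

Note that when the fit is exact on the sample, the estimator reduces to the sum of the predictions over the nonsampled part.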

The principal objective of the multiplicative bias correction technique is to correct the shortcomings of the kernel smoother, namely the bias problem at the boundaries. Consider a pilot smoother of the regression function

$\stackrel{˜}{h}\left(x\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}{Y}_{j}$ (5)

The inverse relative estimation error of the smoother at each observation is given by $\frac{h\left(x\right)}{\stackrel{˜}{h}\left(x\right)}$ .

A noisy estimate of this ratio at the observation ${X}_{j}$ is given by

$\beta \left({X}_{j}\right)=\frac{{Y}_{j}}{\stackrel{˜}{h}\left({X}_{j}\right)}$ (6)

Smoothing these noisy estimates leads to

$\stackrel{˜}{\beta }\left(x\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\beta \left({X}_{j}\right)$ (7)

This gives a better estimate of the inverse relative estimation error at each observation and can therefore be used as a multiplicative correction of the pilot smoother:

$\stackrel{^}{h}\left(x\right)=\stackrel{˜}{\beta }\left(x\right)\stackrel{˜}{h}\left(x\right)$ (8)

For both $\stackrel{˜}{h}\left(x\right)$ and $\stackrel{˜}{\beta }\left(x\right)$ , we use the same Nadaraya-Watson weighting scheme, normalized so that the weights sum to one:

${w}_{xj}=\frac{K\left(\frac{x-{X}_{j}}{h}\right)}{\underset{k=1}{\overset{n}{\sum }}K\left(\frac{x-{X}_{k}}{h}\right)}$ (9)

where

h is the bandwidth

K is a probability density function, symmetric about zero.

n is the sample size
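The pilot fit, ratio smoothing and multiplicative correction in Equations (5)-(8) can be sketched as follows; this is a minimal illustration assuming a Gaussian kernel and strictly positive responses, and the function names are our own, not the paper's.

```python
import numpy as np

def nw_weights(x, X, h):
    # Normalized Nadaraya-Watson weights with a Gaussian kernel, Eq. (9)
    K = np.exp(-0.5 * ((x - X) / h) ** 2)
    return K / K.sum()

def mbc_smoother(x_grid, X, Y, h):
    """Multiplicative bias corrected smoother, Equations (5)-(8):
    pilot fit h~, noisy ratios beta_j = Y_j / h~(X_j), smoothed
    ratio beta~, and corrected fit h^(x) = beta~(x) * h~(x)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    pilot_at_X = np.array([nw_weights(xj, X, h) @ Y for xj in X])  # h~(X_j)
    beta = Y / pilot_at_X                                          # Eq. (6)
    h_hat = np.empty(len(x_grid))
    for i, x in enumerate(np.asarray(x_grid, float)):
        w = nw_weights(x, X, h)
        h_hat[i] = (w @ beta) * (w @ Y)                            # Eqs. (7)-(8)
    return h_hat
```

Because both factors are positive whenever the responses are, the corrected estimate stays positive, which is one of the advantages of the multiplicative form.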

Bandwidth Selection Techniques

● Biased cross-validation (bcv).

● Unbiased cross-validation (ucv).

● The rule of thumb for choosing the bandwidth of a Gaussian kernel density estimator (nrd0).

● A more common variation given by Scott (1992) (nrd).
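The two rule-of-thumb selectors follow the standard closed-form expressions implemented in R as bw.nrd0 and bw.nrd; a Python sketch is given below. The cross-validation selectors (ucv, bcv) require numerical minimisation of a score function and are omitted here.

```python
import numpy as np

def bw_nrd0(x):
    # Silverman's rule of thumb (R's bw.nrd0)
    x = np.asarray(x, float)
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # q75 - q25
    sigma = min(sd, iqr / 1.34) if iqr > 0 else sd
    return 0.9 * sigma * len(x) ** (-0.2)

def bw_nrd(x):
    # Scott (1992) variation (R's bw.nrd)
    x = np.asarray(x, float)
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 1.06 * min(sd, iqr / 1.34) * len(x) ** (-0.2)
```

Both shrink at the rate $n^{-1/5}$; nrd uses the larger constant 1.06, so it always returns a wider bandwidth than nrd0 on the same data.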

3. Properties of Proposed Estimator

3.1. Assumptions

The following assumptions are made in the estimation of $\stackrel{^}{h}\left({x}_{i}\right)$ .

● The regression function is bounded and strictly positive, that is, $b\ge h\left(x\right)\ge a>0$ for all x

● The regression function is twice continuously differentiable everywhere.

● $ϵ$ has finite fourth moments and has a symmetric distribution around zero.

● The bandwidth h is such that, $h\to 0$ , $nh\to \infty$ and ${\left(nh\right)}^{2}\to \infty$ as $n\to \infty$

3.2. Asymptotic Unbiasedness of the Proposed Estimator

We want to show that $E\left({\stackrel{^}{T}}_{MBC}-T\right)\to 0$ as $n\to \infty$ . Under the model-based approach, the bias of the estimator ${\stackrel{^}{T}}_{MBC}$ is defined as follows:

$E\left[{\stackrel{^}{T}}_{MBC}-T\right]=E\left[{\stackrel{^}{T}}_{MBC}\right]-E\left[T\right]$ (10)

Now, the expected value of the proposed estimator of the finite population total is given by

$E\left[{\stackrel{^}{T}}_{MBC}\right]=E\left[\underset{i\in s}{\sum }\frac{{y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)}{{\pi }_{i}}+\underset{j\in r}{\sum }\text{ }\text{ }\stackrel{^}{h}\left({x}_{j}\right)\right]$ (11)

$=E\left[\underset{i\in s}{\sum }\frac{{y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)}{{\pi }_{i}}\right]+E\left[\underset{j\in r}{\sum }\text{ }\text{ }\stackrel{^}{h}\left({x}_{j}\right)\right]$ (12)

$=\underset{i\in s}{\sum }\frac{1}{{\pi }_{i}}E\left({y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)\right)+\underset{U|s}{\sum }\text{ }\text{ }E\left(\stackrel{^}{h}\left({x}_{j}\right)\right)$ (13)

$E\left(\stackrel{^}{h}\left({x}_{j}\right)\right)$ is obtained by analysing the individual terms of the stochastic approximation of $\stackrel{^}{h}\left(x\right)$ . Let us then establish the stochastic approximation of $\stackrel{^}{h}\left(x\right)$ , as shown by Hengartner et al. (2009).

From (8),

$\stackrel{^}{h}\left(x\right)=\stackrel{˜}{\beta }\left(x\right)\stackrel{˜}{h}\left(x\right)$ (14)

$=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{{Y}_{j}}{\stackrel{˜}{h}\left({X}_{j}\right)}\stackrel{˜}{h}\left(x\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{˜}{h}\left(x\right)}{\stackrel{˜}{h}\left({X}_{j}\right)}{Y}_{j}$ (15)

$=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}{R}_{j}\left(x\right){Y}_{j}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{where}\text{\hspace{0.17em}}{R}_{j}\left(x\right)=\frac{\stackrel{˜}{h}\left(x\right)}{\stackrel{˜}{h}\left({X}_{j}\right)}$ (16)

Defining $\stackrel{¯}{h}\left(x\right)=E\left(\stackrel{˜}{h}\left(x\right)|{X}_{1},{X}_{2},\cdots ,{X}_{n}\right)$ , we can express ${R}_{j}\left(x\right)$ as

$\begin{array}{c}{R}_{j}\left(x\right)=\frac{\stackrel{˜}{h}\left(x\right)}{\stackrel{˜}{h}\left({X}_{j}\right)}=\left(\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)\ast \left(\frac{\stackrel{˜}{h}\left(x\right)}{\stackrel{¯}{h}\left(x\right)}\right)\ast {\left(\frac{\stackrel{˜}{h}\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)}^{-1}\\ =\left(\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)\ast \left(\frac{\stackrel{˜}{h}\left(x\right)-\stackrel{¯}{h}\left(x\right)+\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left(x\right)}\right)\ast {\left(\frac{\stackrel{˜}{h}\left({X}_{j}\right)-\stackrel{¯}{h}\left({X}_{j}\right)+\stackrel{¯}{h}\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)}^{-1}\\ =\left(\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)\ast \left(\frac{\stackrel{˜}{h}\left(x\right)-\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left(x\right)}+1\right)\ast {\left(\frac{\stackrel{˜}{h}\left({X}_{j}\right)-\stackrel{¯}{h}\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}+1\right)}^{-1}\\ =\left(\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)\ast \left(R\left(x\right)+1\right)\ast {\left(R\left({X}_{j}\right)+1\right)}^{-1}\end{array}$

Through the series expansion,

$\begin{array}{c}{\left(R\left({X}_{j}\right)+1\right)}^{-1}=\frac{1}{R\left({X}_{j}\right)+1}=\frac{1}{1-\left(-R\left({X}_{j}\right)\right)}=\underset{n=0}{\overset{\infty }{\sum }}{\left[-R\left({X}_{j}\right)\right]}^{n}\\ =1-R\left({X}_{j}\right)+R{\left({X}_{j}\right)}^{2}+\cdots \end{array}$

${R}_{j}\left(x\right)=\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\ast \left[1+R\left(x\right)-R\left({X}_{j}\right)+{r}_{j}\left(x,{X}_{j}\right)\right]$

where ${r}_{j}\left(x,{X}_{j}\right)$ collects the higher order remainder terms of the expansion.

Replacing ${Y}_{j}=h\left({X}_{j}\right)+{ϵ}_{j}$ and ${R}_{j}\left(x\right)$ in (16), we obtain

$\begin{array}{c}\stackrel{^}{h}\left(x\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left[1+R\left(x\right)-R\left({X}_{j}\right)+{r}_{j}\left(x,{X}_{j}\right)\right]\left(h\left({X}_{j}\right)+{ϵ}_{j}\right)\\ =\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left({ϵ}_{j}+h\left({X}_{j}\right)\right)\left(R\left(x\right)-R\left({X}_{j}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left(R\left(x\right)-R\left({X}_{j}\right)\right){ϵ}_{j}+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}{r}_{j}\left(x,{X}_{j}\right)\left(h\left({X}_{j}\right)+{ϵ}_{j}\right)\end{array}$

Using the assumption $nh\to \infty$ , the remainder term tends to zero in probability and the expression reduces to

$\begin{array}{c}\stackrel{^}{h}\left(x\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left({ϵ}_{j}+h\left({X}_{j}\right)\right)\left(R\left(x\right)-R\left({X}_{j}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left(R\left(x\right)-R\left({X}_{j}\right)\right){ϵ}_{j}+{0}_{p}\left(\frac{1}{nh}\right)\end{array}$

To evaluate Equation (13), we need $E\left(\stackrel{^}{h}\left({x}_{j}\right)\right)$ ; hence,

$\begin{array}{l}E\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=E\left[\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left({ϵ}_{j}+h\left({X}_{j}\right)\right)\left(R\left(x\right)-R\left({X}_{j}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left(R\left(x\right)-R\left({X}_{j}\right)\right){ϵ}_{j}+{0}_{p}\left(\frac{1}{nh}\right)\right]\\ =\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}E\left(h\left({X}_{j}\right)\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}E\left({ϵ}_{j}\right)+\underset{j=1}{\overset{n}{\sum }}{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}×E\left(R\left(x\right)-R\left({X}_{j}\right)\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left(R\left(x\right)-R\left({X}_{j}\right)\right)E\left({ϵ}_{j}\right)+{0}_{p}\left(\frac{1}{nh}\right)\\ =\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)+\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ 
}{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)E\left(\frac{\stackrel{˜}{h}\left(x\right)}{\stackrel{¯}{h}\left(x\right)}-\frac{\stackrel{˜}{h}\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)+{0}_{p}\left(\frac{1}{nh}\right)\end{array}$

since $E\left({ϵ}_{j}\right)=0$ .

$E\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}h\left({X}_{j}\right)+{o}_{p}\left(\frac{1}{nh}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{since}\text{\hspace{0.17em}}\stackrel{¯}{h}\left(x\right)=E\left(\stackrel{˜}{h}\left(x\right)\right)$ (17)

Hence,

$\begin{array}{c}E\left[{\stackrel{^}{T}}_{MBC}\right]=\underset{i\in s}{\sum }\frac{1}{{\pi }_{i}}E\left({y}_{i}\right)-\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{i}\right)}h\left({X}_{i}\right)+{o}_{p}\left(\frac{1}{nh}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{U|s}{\sum }\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{i}\right)}h\left({X}_{j}\right)\right)+{o}_{p}\left(\frac{1}{nh}\right)\end{array}$ (18)

The above expression can be reduced by considering a truncated Taylor expansion of $\frac{h\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}$ about the point x. Hence

$\frac{h\left({X}_{j}\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}=\frac{h\left(x\right)}{\stackrel{¯}{h}\left(x\right)}+\left({X}_{j}-x\right){\left(\frac{h\left(x\right)}{\stackrel{¯}{h}\left(x\right)}\right)}^{\prime }+\frac{{\left({X}_{j}-x\right)}^{2}}{2}{\left(\frac{h\left(x\right)}{\stackrel{¯}{h}\left(x\right)}\right)}^{\prime \prime }+{o}_{p}\left(1\right)$ (19)

Now, substituting the first two terms in (18) gives

$E\left[{\stackrel{^}{T}}_{MBC}\right]=\underset{i\in s}{\sum }\frac{1}{{\pi }_{i}}\left(E\left({y}_{i}\right)-E\left(\stackrel{^}{h}\left({x}_{i}\right)\right)\right)+\underset{U|s}{\sum }\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\stackrel{¯}{h}\left(x\right)\left(\frac{h\left(x\right)}{\stackrel{¯}{h}\left(x\right)}+\left({X}_{j}-x\right){\left(\frac{h\left(x\right)}{\stackrel{¯}{h}\left(x\right)}\right)}^{\prime }\right)\right)+{o}_{p}\left(\frac{1}{nh}\right)$ (20)

But ${\sum }_{j=1}^{n}\text{ }{w}_{xj}=1$ and ${\sum }_{j=1}^{n}\left({X}_{j}-x\right){w}_{xj}=0$ , therefore

$E\left[{\stackrel{^}{T}}_{MBC}\right]=\underset{U|s}{\sum }\text{ }\text{ }h\left(x\right)+{o}_{p}\left(\frac{1}{nh}\right)$ (21)

Furthermore,

$E\left(T\right)=\underset{i\in s}{\sum }\text{ }\text{ }E\left({y}_{i}\right)+\underset{j\in r}{\sum }\text{ }E\left({y}_{j}\right)=\underset{i\in s}{\sum }\text{ }\text{ }\stackrel{¯}{y}+\underset{j\in r}{\sum }\text{ }\text{ }h\left( x \right)$

Hence the asymptotic bias of the estimator is given by

$BIAS\left({\stackrel{^}{T}}_{MBC}\right)=E\left(\frac{{\stackrel{^}{T}}_{MBC}-T}{N}\right)=\frac{1}{N}\underset{i\in s}{\sum }\text{ }\text{ }\stackrel{¯}{y}+{o}_{p}\left(\frac{1}{nh}\right)$

The bias of ${\stackrel{^}{T}}_{MBC}$ is of order ${o}_{p}\left(\frac{1}{nh}\right)$ . Thus it converges to zero at a faster rate than the existing nonparametric estimators, whose bias is generally of order $O\left({h}^{2}\right)$ .

3.3. Asymptotic Variance of the Proposed Estimator

The variance of the proposed estimator of the finite population total is given by

$\begin{array}{c}Var\left[{\stackrel{^}{T}}_{MBC}\right]=Var\left[\underset{i\in s}{\sum }\frac{{y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)}{{\pi }_{i}}+\underset{j\in r}{\sum }\text{ }\text{ }\stackrel{^}{h}\left({x}_{j}\right)\right]\\ =Var\left[\underset{i\in s}{\sum }\frac{{y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)}{{\pi }_{i}}\right]+Var\left[\underset{j\in r}{\sum }\text{ }\text{ }\stackrel{^}{h}\left({x}_{j}\right)\right]\\ =\underset{i\in s}{\sum }{\left(\frac{1}{{\pi }_{i}}\right)}^{2}Var\left({y}_{i}-\stackrel{^}{h}\left({x}_{i}\right)\right)+\underset{U|s}{\sum }\text{ }\text{ }Var\left(\stackrel{^}{h}\left({x}_{j}\right)\right)\end{array}$

Firstly,

$Var\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=Var\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left[1+R\left(x\right)-R\left({X}_{j}\right)+{r}_{j}\left(x,{X}_{j}\right)\right]\left(h\left({X}_{j}\right)+{ϵ}_{j}\right)\right)$ (22)

Using the assumption $nh\to \infty$ , the remainder terms converge to zero in probability. Therefore ${r}_{j}\left(x,{X}_{j}\right)\left(h\left({X}_{j}\right)+{ϵ}_{j}\right)={o}_{p}\left(\frac{1}{nh}\right)$ and Equation (22) reduces to

$Var\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=Var\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\left[1+R\left(x\right)-R\left({X}_{j}\right)\right]\left(h\left({X}_{j}\right)+{ϵ}_{j}\right)+{0}_{p}\left(\frac{1}{nh}\right)\right)$ (23)

Truncating the binomial expansion at the first term yields

$\begin{array}{c}Var\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=Var\left(\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}{y}_{j}\right)+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)\\ =\underset{j=1}{\overset{n}{\sum }}{\left({w}_{xj}\frac{\stackrel{¯}{h}\left(x\right)}{\stackrel{¯}{h}\left({X}_{j}\right)}\right)}^{2}{\sigma }^{2}\left({x}_{j}\right)+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)\end{array}$

The above expression can be simplified by considering the first two terms of the Taylor expansion of $\frac{{\sigma }^{2}\left({x}_{j}\right)}{{\stackrel{¯}{h}}^{2}\left({X}_{j}\right)}$ , which gives

$Var\left(\stackrel{^}{h}\left({x}_{j}\right)\right)=\underset{j=1}{\overset{n}{\sum }}{\left({w}_{xj}\right)}^{2}{\sigma }^{2}\left({x}_{j}\right)+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)$ (24)

Therefore,

$Var\left[{\stackrel{^}{T}}_{MBC}\right]=\underset{i\in s}{\sum }{\left(\frac{1}{{\pi }_{i}}\right)}^{2}{\sigma }^{2}\left({x}_{i}\right)+\underset{U}{\sum }\underset{j=1}{\overset{n}{\sum }}{\left({w}_{xj}\right)}^{2}{\sigma }^{2}\left({x}_{j}\right)+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)$ (25)

Thus the asymptotic variance is given by

$Var\left(\frac{{\stackrel{^}{T}}_{MBC}}{N}\right)=\frac{1}{{N}^{2}}\underset{i\in s}{\sum }{\left(\frac{1}{{\pi }_{i}}\right)}^{2}{\sigma }^{2}\left({x}_{i}\right)+\frac{1}{{N}^{2}}\underset{U}{\sum }\underset{j=1}{\overset{n}{\sum }}{\left({w}_{xj}\right)}^{2}{\sigma }^{2}\left({x}_{j}\right)+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)$ (26)

This implies that ${\stackrel{^}{T}}_{MBC}$ is more efficient than the usual non-parametric regression estimator proposed by Dorfman (1992).

3.4. Asymptotic Mean Square Error

The asymptotic mean square error of the estimator ${\stackrel{^}{T}}_{MBC}$ is given by

$MSE\left[{\stackrel{^}{T}}_{MBC}\right]=Var\left[{\stackrel{^}{T}}_{MBC}\right]+{\left[Bias\left({\stackrel{^}{T}}_{MBC}\right)\right]}^{2}$ (27)

$\begin{array}{c}MSE\left[{\stackrel{^}{T}}_{MBC}\right]=\frac{1}{{N}^{2}}\underset{i\in s}{\sum }{\left(\frac{1}{{\pi }_{i}}\right)}^{2}{\sigma }^{2}\left({x}_{i}\right)+\frac{1}{{N}^{2}}\underset{U}{\sum }\underset{j=1}{\overset{n}{\sum }}{\left({w}_{xj}\right)}^{2}{\sigma }^{2}\left({x}_{j}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{0}_{p}\left(\frac{1}{{\left(nh\right)}^{2}}\right)+{\left[\frac{1}{N}\underset{i\in s}{\sum }\text{ }\text{ }\stackrel{¯}{y}+{o}_{p}\left(\frac{1}{nh}\right)\right]}^{2}\end{array}$ (28)

As $n\to \infty$ and $h\to 0$ with $nh\to \infty$ , $MSE\left[{\stackrel{^}{T}}_{MBC}\right]$ tends to 0, indicating that the proposed estimator is statistically consistent.

4. Empirical Study

4.1. Population

In this section, the theory developed above is tested using a set of simulation studies, with a mix of survey designs and various approaches to selecting the bandwidth. We employ a population U of the countries of the world, of size N = 188, with auxiliary variable x = gross national income (GNI) and variable of interest y = human development index (HDI). Of interest is the population total of the HDI, $T={\sum }_{l\in U}\text{ }{y}_{l}$ .

Figure 1 below shows the scatter diagram of the population, with HDI on the vertical axis and GNI on the horizontal axis; there exists a quadratic relationship between the two variables.

We suppose that for each run of the experiment two samples are taken:

Sample 1 ( ${s}_{1}$ ): simple random sampling without replacement (srswor), with ${n}_{1}=32$ .

Sample 2 ( ${s}_{2}$ ): stratified simple random sampling, with four equal strata and 8 units taken at random in each, so that ${n}_{2}=32$ .

Figure 1. Scatter diagram.

The total experiment consists of 500 runs of pairs of samples. Table 1 gives the estimators considered.
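The two sampling designs can be sketched as follows, assuming the population units are indexed 0 to 187 and the strata are formed by splitting that index range into four equal blocks (an illustrative assumption; the paper does not specify how the strata were formed).

```python
import numpy as np

rng = np.random.default_rng(42)
N = 188

# Sample 1: simple random sampling without replacement, n1 = 32
s1 = rng.choice(N, size=32, replace=False)

# Sample 2: stratified SRS -- four equal strata of 47 units,
# with 8 units drawn at random in each, so n2 = 32
strata = np.array_split(np.arange(N), 4)
s2 = np.concatenate([rng.choice(st, size=8, replace=False) for st in strata])
```

Repeating this draw 500 times produces the pairs of samples over which the performance measures below are computed.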

For an estimator $\stackrel{^}{T}$ we considered two measures of relative success across the 500 runs:

i) Unconditional relative bias measured as ratio of mean value (across runs) to target

$\text{Bias}=\frac{1}{500}{\sum }_{runs}\left(\stackrel{^}{T}-T\right)/T$

ii) Unconditional relative root mean square error divided by target

$\text{rmse}=\sqrt{\frac{1}{500}{\sum }_{runs}{\left(\stackrel{^}{T}-T\right)}^{2}}/T$
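The two measures above amount to the mean deviation and root mean square deviation across runs, each scaled by the target total; a minimal sketch:

```python
import numpy as np

def relative_bias(estimates, T):
    # mean deviation of the per-run estimates from the target, relative to T
    return np.mean(np.asarray(estimates) - T) / T

def relative_rmse(estimates, T):
    # root mean square deviation of the per-run estimates, relative to T
    return np.sqrt(np.mean((np.asarray(estimates) - T) ** 2)) / T
```

For example, two runs returning 11 and 9 against a target of 10 have zero relative bias but a relative rmse of 0.1.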

4.2. Results

Results obtained are tabulated in Table 2.

From the results obtained, we observe that unbiased cross-validation is a viable means of selecting the bandwidth, as it gives the lowest bias and root mean square error across all the estimators. The proposed estimator for the two sample problem gives better estimates of the population total than those realized using the estimators proposed by   and  , respectively.

Furthermore, we study the conditional performance of the selected estimators. The 500 samples obtained were sorted by the values of the mean of the auxiliary variable and put into 25 groups, each containing 20 values. We then computed the bias and root mean square error of each group and plotted the conditional performances against the group averages of the sorted mean auxiliary variable. We then report the behaviour of the conditional bias for the different bandwidths.
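The grouping step above can be sketched as follows; the function name and the tuple layout of the output are illustrative assumptions.

```python
import numpy as np

def conditional_performance(xbar, estimates, T, n_groups=25):
    """Sort the runs by the sample mean of the auxiliary variable,
    split them into equal groups, and compute the relative bias and
    rmse within each group.  Returns (group mean of xbar, bias, rmse)
    tuples, one per group."""
    xbar, estimates = np.asarray(xbar), np.asarray(estimates)
    order = np.argsort(xbar)
    rows = []
    for g in np.array_split(order, n_groups):
        e = estimates[g]
        rows.append((xbar[g].mean(),
                     np.mean(e - T) / T,
                     np.sqrt(np.mean((e - T) ** 2)) / T))
    return rows
```

Plotting the second and third components against the first gives the conditional bias and conditional rmse curves reported in Figures 2 and 3.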

Table 1. Estimators.

Table 2. Empirical results.

Figure 2 and Figure 3 show the conditional bias and conditional root mean square error respectively, with each plot drawn at a different bandwidth. The population mean of the auxiliary variable x was found to be 1.701. The conditional bias plots show that the proposed estimator outperforms the two currently used estimators in terms of conditional bias, especially with the unbiased cross-validation and biased cross-validation methods of selecting the bandwidth. This trend persists in the case of the conditional root mean square error.


Figure 2. Plots indicating the conditional biases of three estimators. (a) Biased cross-validation (bcv); (b) Rule of thumb for choosing the bandwidth of a Gaussian kernel density estimator (nrd0); (c) Common variation given by Scott (1992) (nrd); (d) Unbiased cross-validation (ucv).


Figure 3. Plots indicating the conditional root mean square error of three estimators. (a) Biased cross-validation (bcv); (b) Rule of thumb for choosing the bandwidth of a Gaussian kernel density estimator (nrd0); (c) Common variation given by Scott (1992) (nrd); (d) Unbiased cross-validation (ucv).

5. Conclusion

The aim of this study was to develop an estimator with the lowest bias for the finite population total using the multiplicative bias correction approach to nonparametric regression. The study reveals that the proposed estimator is more efficient than the modified nonparametric estimator (NPT). With a suitable bandwidth selection (ucv), the proposed estimator has the smallest bias and root mean square error values. It has therefore proven effective in resolving the boundary problem associated with existing nonparametric smoothers.

Acknowledgements

My first appreciation goes to my supervisors, Professor Odhiambo and Doctor Mageto, for accompanying me through this work. Many thanks also to the African Union for funding this scientific research and placing such confidence in its youth. Last but not least, thanks to my family for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

Merkouris, T. (2004) Combining Independent Regression Estimators from Multiple Surveys. Journal of the American Statistical Association, 99, 1131-1139. https://doi.org/10.1198/016214504000000601

Wu, C.B. (2004) Combining Information from Multiple Surveys through the Empirical Likelihood Method. The Canadian Journal of Statistics, 32, 15-26. https://doi.org/10.2307/3315996

Dorfman, A.H. (2008) The Two Sample Problem. Proceedings of the Joint Statistical Meetings, Section on Survey Research Methods, Denver, 3-7 August 2008.

Dorfman, A.H. (2009) Nonparametric Regression and the Two Sample Problem. Proceedings of the Joint Statistical Meetings, Section on Survey Research Methods, Washington DC, 1-6 August 2009, 277-270.

Marron, J.S. and Hardle, W. (1986) Random Approximations to Some Measures of Accuracy in Nonparametric Curve Estimation. Journal of Multivariate Analysis, 20, 91-113. https://doi.org/10.1016/0047-259X(86)90021-7

Bierens, H.J. (1987) Kernel Estimators of Regression Functions. Advances in Econometrics: Fifth World Congress, Cambridge University Press, Cambridge, 99-144. https://doi.org/10.1017/CCOL0521344301.003

Muller, H.-G. and Stadtmuller, U. (1987) Variable Bandwidth Kernel Estimators of Regression Curves. The Annals of Statistics, 15, 182-201. https://doi.org/10.1214/aos/1176350260

Linton, O. and Nielsen, J.P. (1994) A Multiplicative Bias Reduction Method for Nonparametric Regression. Statistics & Probability Letters, 19, 181-187. https://doi.org/10.1016/0167-7152(94)90102-3

Fan, J.Q. (1992) Design-Adaptive Nonparametric Regression. Journal of the American Statistical Association, 87, 998-1004. https://doi.org/10.1080/01621459.1992.10476255

Hirukawa, M. and Sakudo, M. (2014) Nonnegative Bias Reduction Methods for Density Estimation Using Asymmetric Kernels. Computational Statistics and Data Analysis, 92, 112-123. https://doi.org/10.1016/j.csda.2014.01.012

Hengartner, N., Matzner-Löber, E., Rouvière, L. and Burr, T. (2009) Multiplicative Bias Corrected Nonparametric Smoothers. arXiv Preprint arXiv:0908.0128.

Dorfman, A.H. (1992) Nonparametric Regression for Estimating Totals in Finite Populations. Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, 622-625.