Coordinate-Free Prediction in Finite Populations with Correlated Observations

In this paper, we obtain the best linear unbiased predictor of any linear function of the elements of a finite population under coordinate-free models. The optimal predictor of such quantities was obtained in earlier work for models with a known diagonal covariance matrix; we extend this result to any known positive definite covariance matrix. We show that, in the particular case of coordinatized models, this general predictor coincides with the optimal predictor of the population total under a regression superpopulation model with correlated observations.


1. Introduction
A coordinate-free approach in finite populations was introduced by [1] as an alternative to the Gauss-Markov setup, with the purpose of predicting linear functions. The Gauss-Markov approach is characterized by its dependence on a particular basis matrix, whereas in the coordinate-free language we need only describe a parametric subspace of $\mathbb{R}^N$, where $N$ is the size of the finite population. Coordinate-free models in the linear models context are discussed by [2] and [3].
In a finite population $\{1, 2, \ldots, N\}$, let $y_i$ be the value of a random variable $y$ associated with each population unit $i$. Under the superpopulation approach, we will assume that $Y = (y_1, \ldots, y_N)'$ is a random vector such that $Y \in Q$, where $Q$ is an $N$-dimensional real vector space with the usual inner product.
The superpopulation model is expressed by
$$E(Y) = \mu \in \Omega, \qquad \operatorname{Var}(Y) = \sigma^2 V, \tag{1.1}$$
where $\Omega$ is a $p$-dimensional subspace of $Q$, $\sigma^2$ is an unknown positive parameter and $V$ is a known positive definite matrix. The considered model is coordinate free, in the sense that no basis is defined for $\Omega$, the parametric space of $\mu$. For instance, in the common-mean model $E(y_i) = \mu$ for all $i$, $\Omega$ is the one-dimensional subspace spanned by the vector of ones.
Our main objective is to predict $\ell' Y$, a linear combination of the elements of $Y$. With this purpose, a sample of $n$ observations is drawn from the population and the values $y_i$ in $Y$ become known for the sample elements. Let $s$ and $r$ be the sets of sample and non-sample elements, respectively, so that the population is $s \cup r$.
We will consider, without loss of generality, that $Y$ and $V$ are reordered as
$$Y = \begin{pmatrix} Y_s \\ Y_r \end{pmatrix}, \qquad V = \begin{pmatrix} V_s & V_{sr} \\ V_{rs} & V_r \end{pmatrix}, \tag{1.2}$$
with $Y_s$ containing the $n$ observed sample elements and $Y_r$ containing the $N - n$ unobserved elements. Under a less general model, with $\operatorname{Var}(Y) = \sigma^2 D$, $D$ a known diagonal matrix, [1] presented the optimal linear predictor of $\ell' Y$. In the next section, we extend that result, obtaining the best linear unbiased predictor of $\ell' Y$ in the model (1.1); this is the main contribution of the paper. In Section 3, we show that under the coordinatized model this predictor coincides with that given by [4]. Finally, we conclude the paper with some examples in Section 4.
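To fix ideas, the following is a minimal numerical sketch of this partitioning (Python with numpy assumed; the sizes, index sets and matrix below are illustrative, not taken from the paper):

```python
import numpy as np

N, n = 5, 3                        # population and sample sizes (illustrative)
rng = np.random.default_rng(0)

B = rng.normal(size=(N, N))
V = B @ B.T + N * np.eye(N)        # a known positive definite covariance matrix

s, r = [0, 1, 2], [3, 4]           # sample and non-sample units, already reordered

V_s  = V[np.ix_(s, s)]             # n x n block for the sample
V_sr = V[np.ix_(s, r)]             # n x (N - n) cross-covariance block
V_rs = V_sr.T                      # equals V[np.ix_(r, s)] by symmetry of V
V_r  = V[np.ix_(r, r)]             # (N - n) x (N - n) block for the remainder
```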

2. Best Linear Unbiased Predictor of Linear Functions
The linear function to be predicted may be written as
$$\theta = \ell' Y = \ell' I_s Y + \ell' I_r Y,$$
where $I_s = \operatorname{diag}(\delta_1, \ldots, \delta_N)$, with $\delta_i = 1$ if $i \in s$ and $\delta_i = 0$ otherwise, and $I_r = I - I_s$. We note that, with this notation, $\ell' I_s Y$ becomes known once the sample is observed, while $\ell' I_r Y$ must be predicted. Since $I_s Y$ will be known after the sample is observed, we restrict our attention to linear predictors of $\ell' Y$ of the form $\hat\theta = a' I_s Y$, with $a$ an $N$-dimensional vector. A linear predictor $\hat\theta$ of $\theta$ is unbiased if and only if $E(\hat\theta) = E(\theta)$ for every $\mu \in \Omega$. The class of all linear unbiased predictors of $\ell' Y$ will be denoted by $U$. Finally, the next definition states the concept of optimality of the linear predictor of $\theta$.
Definition. The linear predictor $\hat\theta_0$ is the best linear unbiased predictor of $\theta$, or the optimal linear predictor of $\theta$, if $\hat\theta_0 \in U$ and
$$E(\hat\theta_0 - \theta)^2 \le E(\hat\theta - \theta)^2 \quad \text{for every } \hat\theta \in U.$$
Throughout, $P_{\Omega}$ denotes the orthogonal projector onto $\Omega$.
Returning to the model (1.1), with a non-diagonal covariance matrix $V$, let us consider the decomposition $V = PP'$, with $P$ a lower triangular matrix. As shown by [5] (Theorem 7.2.1), there is a unique lower triangular matrix $P$ with positive diagonal entries such that $V = PP'$; in addition, $P$ is nonsingular. Then we define the random vector
$$Z = P^{-1} Y$$
and, as a consequence of the properties of the covariance matrix of linear transformations of random vectors,
$$\operatorname{Var}(Z) = P^{-1} \operatorname{Var}(Y) (P^{-1})' = \sigma^2 P^{-1} V (P')^{-1} = \sigma^2 I.$$
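A quick numerical check of this whitening step (again a sketch, with numpy assumed and illustrative values) also exhibits the triangularity property exploited in the proof of Theorem 1 below: since $P^{-1}$ is lower triangular, the first $n$ coordinates of $Z$ depend only on $Y_s$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 4, 2.5
B = rng.normal(size=(N, N))
V = B @ B.T + N * np.eye(N)         # a known positive definite V

P = np.linalg.cholesky(V)           # the unique lower triangular P with V = P P'
P_inv = np.linalg.inv(P)

# Var(Z) = P^{-1} (sigma^2 V) (P^{-1})' = sigma^2 I
var_Z = P_inv @ (sigma2 * V) @ P_inv.T
assert np.allclose(var_Z, sigma2 * np.eye(N))

# P^{-1} is lower triangular, so Z_s depends only on the sampled values Y_s
assert np.allclose(P_inv, np.tril(P_inv))
```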
Theorem 1. In the model (1.1), with $V$ a known positive definite matrix, the optimal linear predictor of any linear function $\ell' Y$ is
$$\hat\theta_0 = \ell' I_s Y + \ell' I_r \hat Y, \tag{2.1}$$
where $\hat Y = (Y_s', \hat Y_r')'$ and $\hat Y_r$ is the solution in $Y_r$ of the system of linear equations
$$I_r (I - P_{\Omega^*}) P^{-1} Y = 0, \tag{2.2}$$
with $0$ the null vector of dimension $N$, $P$ the lower triangular matrix such that $V = PP'$, $\Omega^* = P^{-1}\Omega$, and $P_{\Omega^*}$ the orthogonal projection matrix onto $\Omega^*$.
Proof. Let $Z = P^{-1} Y$, with $P$ the lower triangular matrix such that $V = PP'$. Then $E(Z) = P^{-1}\mu \in \Omega^*$ and $\operatorname{Var}(Z) = \sigma^2 I$, so $Z$ follows a model of the form considered by [1]. Moreover, since $P^{-1}$ is lower triangular, the first $n$ coordinates of $Z$ depend only on $Y_s$, and observing the sample in $Y$ is equivalent to observing it in $Z$. Writing $h = P'\ell$, we have $\ell' Y = \ell' P Z = h' Z$ and, by the results of [1], the optimal linear predictor of $h' Z$ is
$$\Gamma = h' I_s Z + h' I_r \hat Z,$$
where $\hat Z = (Z_s', \hat Z_r')'$ and $\hat Z_r$ is the solution in $Z_r$ of the system of linear equations $I_r (I - P_{\Omega^*}) Z = 0$, with $0$ the null vector of dimension $N$. We have just proved that $\Gamma$ is the optimal linear predictor of $h' Z = \ell' Y$.
To finish the proof, it is enough to show that $\Gamma = \ell' I_s Y + \ell' I_r \hat Y$, with $\hat Y$ as stated in the theorem. For this purpose, we write some of the matrices already defined in partitioned form,
$$P = \begin{pmatrix} P_s & 0 \\ P_{rs} & P_r \end{pmatrix}, \qquad P^{-1} = \begin{pmatrix} P_s^{-1} & 0 \\ -P_r^{-1} P_{rs} P_s^{-1} & P_r^{-1} \end{pmatrix},$$
where the leading submatrices are of dimension $n \times n$. Define $\hat Y = P \hat Z$. Since $\hat Z_s = Z_s$, we have $I_s Z + I_r \hat Z = \hat Z$ and, after some calculations,
$$\Gamma = h' \hat Z = \ell' P \hat Z = \ell' \hat Y, \qquad \hat Y_s = P_s \hat Z_s = P_s P_s^{-1} Y_s = Y_s,$$
the last equalities following because $P$ is lower triangular and $Z_s = P_s^{-1} Y_s$. Hence $\Gamma = \ell' I_s \hat Y + \ell' I_r \hat Y = \ell' I_s Y + \ell' I_r \hat Y$. Finally, substituting $\hat Z = P^{-1} \hat Y$ in the system $I_r (I - P_{\Omega^*}) \hat Z = 0$ shows that $\hat Y_r$ is the solution in $Y_r$ of (2.2), which completes the proof.
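The predictor of Theorem 1 is straightforward to compute. The sketch below (numpy assumed; the function name and interface are ours, not the paper's) whitens $Y$ with the Cholesky factor of $V$, forms $P_{\Omega^*}$ from a basis matrix $A$ of $\Omega$, and solves the rows $r$ of the system (2.2) for $\hat Y_r$:

```python
import numpy as np

def optimal_predictor(ell, Y_s, V, A, s, r):
    """Theorem 1: predict ell'Y from the sampled values Y_s under
    E(Y) in Omega = col(A) and Var(Y) = sigma^2 V, V positive definite."""
    P = np.linalg.cholesky(V)                  # V = P P', P lower triangular
    P_inv = np.linalg.inv(P)

    A_star = P_inv @ A                         # basis matrix of Omega* = P^{-1} Omega
    H = A_star @ np.linalg.solve(A_star.T @ A_star, A_star.T)   # P_{Omega*}

    # rows r of (I - P_{Omega*}) P^{-1} Y = 0, solved for the unknowns Y_r
    M = (np.eye(len(V)) - H) @ P_inv
    Y_r_hat = np.linalg.solve(M[np.ix_(r, r)], -M[np.ix_(r, s)] @ Y_s)

    ell = np.asarray(ell, dtype=float)
    return ell[s] @ Y_s + ell[r] @ Y_r_hat
```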
It is important to observe that $P_{\Omega}$ has $N(N+1)/2$ unknown elements and it may be difficult to calculate directly from the definition above. But it can be obtained as $P_{\Omega} = A(A'A)^{-1}A'$, where $A$ is a basis matrix for $\Omega$; the same formula applied to a basis matrix of $\Omega^*$ gives $P_{\Omega^*}$.
Some applications of the result in Theorem 1 will be presented in the examples.

3. Best Linear Unbiased Predictor in the Coordinatized Model
We now consider a coordinatized version of the model (1.1), given by
$$E(Y) = X\beta, \qquad \operatorname{Var}(Y) = \sigma^2 V, \qquad \sigma^2 > 0, \tag{3.1}$$
with $V$ a known positive definite matrix and $X$ a basis matrix of $\Omega$.
Under this formulation, $X$ is an $N \times p$ matrix of full rank $p$ and there exists a unique $\beta \in \mathbb{R}^p$ such that $\mu = X\beta$. Regression models are included in the class of models defined in (3.1).
[4] derived the best linear unbiased predictor of the population total $T = \sum_{i=1}^{N} y_i$ under model (3.1). This predictor, adapted to the notation introduced here and to the prediction of any linear combination of $Y$, is given by
$$\hat\theta_R = \ell' I_s Y + \ell' I_r \tilde Y, \tag{3.2}$$
where $\tilde Y = (Y_s', \tilde Y_r')'$,
$$\tilde Y_r = X_r \hat\beta + V_{rs} V_s^{-1} (Y_s - X_s \hat\beta), \qquad \hat\beta = (X_s' V_s^{-1} X_s)^{-1} X_s' V_s^{-1} Y_s,$$
and $X_s$ and $X_r$ are the sample and non-sample blocks of $X$. The next theorem shows that, in the coordinatized model (3.1), the optimal linear predictor obtained in Theorem 1 reduces to Royall's predictor (3.2).
Theorem 2. Under the model (3.1), the optimal linear predictor (2.1) coincides with (3.2).
Proof. We must show that $\hat Y_r$ in (2.1) is equal to $\tilde Y_r = X_r \hat\beta + V_{rs} V_s^{-1} (Y_s - X_s \hat\beta)$. As proved in Theorem 1, $\hat Y_r$ is the solution in $Y_r$ of the system (2.2). Applying (A.3), (A.1) and (A.2) of the appendix, and employing (A.2) once more, the system reduces to an expression in the sample blocks of $X$ and $V$; finally, using (A.5), we obtain $\hat Y_r = X_r \hat\beta + V_{rs} V_s^{-1} (Y_s - X_s \hat\beta)$, which completes the proof.
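Theorem 2 can also be checked numerically. The sketch below (numpy assumed; it reuses the hypothetical optimal_predictor from the sketch in Section 2) computes Royall's predictor (3.2) directly and compares the two predictors on a randomly generated instance:

```python
import numpy as np

def royall_predictor(ell, Y_s, V, X, s, r):
    """Royall's BLUP (3.2) of ell'Y under the coordinatized model (3.1)."""
    X_s, X_r = X[s], X[r]
    V_s_inv = np.linalg.inv(V[np.ix_(s, s)])
    V_rs = V[np.ix_(r, s)]

    # generalized least squares estimate of beta based on the sample
    beta_hat = np.linalg.solve(X_s.T @ V_s_inv @ X_s, X_s.T @ V_s_inv @ Y_s)
    Y_r_tilde = X_r @ beta_hat + V_rs @ V_s_inv @ (Y_s - X_s @ beta_hat)

    ell = np.asarray(ell, dtype=float)
    return ell[s] @ Y_s + ell[r] @ Y_r_tilde

# Theorem 2 on a random instance: the two predictors coincide.
rng = np.random.default_rng(2)
N, s, r = 6, [0, 1, 2, 3], [4, 5]
X = np.column_stack([np.ones(N), rng.normal(size=N)])   # basis matrix of Omega
B = rng.normal(size=(N, N))
V = B @ B.T + N * np.eye(N)
Y_s, ell = rng.normal(size=len(s)), np.ones(N)          # predict the total T = 1'Y

assert np.isclose(royall_predictor(ell, Y_s, V, X, s, r),
                  optimal_predictor(ell, Y_s, V, X, s, r))
```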

4. Examples
In this section, we present two examples to illustrate the optimal predictors obtained in the theorems.
In the first one, we consider a coordinate-free model and the predictor is derived by applying Theorem 1. The second example shows an application of Theorem 2 in a particular coordinatized model.
Example 1. Our objective is to predict the population total $T = \sum_{i=1}^{N} y_i$ under the common-mean model $E(y_i) = \mu$, with $\operatorname{Var}(Y) = \sigma^2 V$, where $V$ has unit diagonal elements and off-diagonal elements determined by a known correlation parameter $\rho$, and $\sigma^2 > 0$.
Because of the great quantity of calculations, and without loss of generality, we restrict attention to the situation where $N = 4$. In this case, a basis for $\Omega$ is given by $A = (1, 1, 1, 1)'$, and it is easy to see that $P_{\Omega} = A(A'A)^{-1}A' = \frac{1}{4}\mathbf{1}\mathbf{1}'$. By Theorem 1, the optimal linear predictor of $T$ is $\hat T = \mathbf{1}' I_s Y + \mathbf{1}' I_r \hat Y$, with $\hat Y_r$ the solution of the system (2.2). It is interesting to note that if $\rho = 0$, so that $V = I$ and $y_i$ and $y_j$ are uncorrelated for $i \neq j$, then $\hat T = 4 \bar y_s$, where $\bar y_s$ is the sample mean. In this case, $\hat T$ is the expansion predictor, which was found by [1] under the model with uncorrelated observations.
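The reduction at $\rho = 0$ can be verified with the optimal_predictor sketch from Section 2. For concreteness, the snippet below takes the equicorrelated form $V = (1-\rho)I + \rho \mathbf{1}\mathbf{1}'$, which is one simple choice consistent with $V = I$ at $\rho = 0$ and is an assumption of ours; the sampled values are illustrative.

```python
import numpy as np

N, s, r = 4, [0, 1], [2, 3]
A = np.ones((N, 1))                 # basis matrix of Omega (common-mean model)
ell = np.ones(N)                    # T = 1'Y, the population total

rho = 0.0                           # uncorrelated case, V = I
V = (1 - rho) * np.eye(N) + rho * np.ones((N, N))   # assumed structure for V

Y_s = np.array([3.0, 5.0])
T_hat = optimal_predictor(ell, Y_s, V, A, s, r)
assert np.isclose(T_hat, N * Y_s.mean())   # expansion predictor: 4 * 4.0 = 16.0
```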
Example 2. Our objective is to calculate the best linear unbiased predictor of the population total under a regression model of the form (3.1). In this situation, the model is coordinatized and, by Theorem 2, it is enough to obtain the value of Royall's predictor (3.2).