Model Averaging by Stacking

The paper introduces a new Frequentist model averaging estimation procedure, based on a stacked OLS estimator across models, implementable with cross-sectional, panel, and time series data. The proposed estimator shows the same optimal properties as the OLS estimator under the usual set of assumptions concerning the population regression model. Relative to available alternative approaches, it has the advantage of performing model averaging ex-ante in a single step, optimally selecting model weights according to the MSE metric, i.e., by minimizing the squared Euclidean distance between actual and predicted value vectors. Moreover, it is straightforward to implement, only requiring the estimation of a single OLS augmented regression. By exploiting ex-ante a broader information set and benefiting from more degrees of freedom, the proposed approach yields more accurate and (relatively) more efficient estimation than available ex-post methods.


Introduction
The Classical Linear Regression Model (CLRM) is grounded in a basic set of assumptions concerning its specification and the distributional properties of the control variables and error term. In this respect, under what is usually held as Assumption 1, the population regression model is required to be linear in the parameters, and the control variables are known and all included in the model. However, the latter correct specification assumption might not always be appropriate in Economics; for instance, there might be more than a single set of variables, i.e., more than a single candidate model, which could be employed in estimation, even when economic theory has clear-cut implications for the causal linkage of interest.
Think of the relationship linking y to x, when both variables can be measured in different ways, i.e., when there exist y_g and x_h, g = 1, ..., G, h = 1, ..., H; then, in principle, up to G × H different models could be estimated.¹ Two solutions have so far been proposed in the literature for the above model selection problem. On the one hand, by maintaining the assumption of correct specification, a selection of a single model out of the G × H candidates can be performed, based on various specification strategies (see [2] for a general account; see also [3] for recent developments in model selection). Alternatively, all of the G × H models can be estimated, and a weighted average across models computed ex-post for the parameters of interest. In the latter case, the assumption of correct specification does not necessarily have to be maintained.
Several model averaging procedures have been proposed in the literature, making use of either Bayesian or Frequentist procedures (see [4], [5]). Admittedly, relative to the Bayesian approach, the Frequentist approach to model averaging is fairly underdeveloped. The current paper then aims at filling this gap in the literature, by proposing an ex-ante, Mean Square Error-optimal, model averaging procedure. The proposed procedure is grounded in a stacked OLS estimator across models, implementing model averaging ex-ante in a single step and optimally selecting model weights according to the MSE metric, i.e., by minimizing the squared Euclidean distance between actual and predicted value vectors. Moreover, it is straightforward to compute, only requiring the estimation of a single OLS augmented regression. By exploiting a broader information set ex-ante, i.e., by making use of all the available information jointly, and benefiting from more degrees of freedom, the proposed estimator then yields more accurate and (relatively) more efficient estimation than available ex-post methods. Extension to other estimation frameworks, e.g., GIVE or GMM, is also straightforward.
The rest of the paper is organized as follows. In Section 2 the proposed approach is illustrated by means of a simple example. The econometric methodology is then outlined in full in Section 3, while Section 4 deals with its statistical properties. Finally, Section 5 concludes.

Ex-ante model averaging: An example
For the sake of clarity, consider the following bivariate example, where the dependent variable y is a linear function of the independent variable x, and two candidate measures are available for each variable, i.e., y_g and x_h, g, h = 1, 2, yielding the four candidate models y_{g,t} = β x_{h,t} + ε_{gh,t}, g, h = 1, 2, t = 1, ..., T.²
Ex-post model averaging then yields a robust, consistent estimate β̄ of β, by computing a weighted average of the four available estimates β̂_11, β̂_12, β̂_21, β̂_22, with weights determined according to Bayesian or Frequentist approaches.

¹ In Economics the above situation is not unusual. For instance, let y be a measure of income distribution inequality and x the degree of financial development of a country; in that case, the Gini index, net or gross, or top-to-bottom income distribution quantile ratios (top to bottom 1% or 10%) would all be valid candidate dependent variables; moreover, concerning financial deepening, the GDP share of liquidity (M2 or M3), stock market capitalization, or credit to the private sector might alternatively be employed (see [1]).
² The index t is not necessarily temporal; applications to cross-sectional data are as viable as to time series.
For instance, within a Frequentist model averaging approach [2], one has β̄ = Σ_{g,h} w_gh β̂_gh, where the weights w_gh can be computed by means of information criteria as in [6], setting w_gh = exp(−IC_gh/2) / Σ_{g,h} exp(−IC_gh/2), where IC_gh is the Akaike or Schwarz-Bayes information criterion for model gh. Other approaches are also available, based on Mallows' criterion [7] or cross-validation [8].
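As a numerical illustration, information-criterion weighting of this kind can be sketched as follows (a minimal sketch using the standard exponential smoothed-IC form; the AIC values and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def ic_weights(ic):
    """Exponential (smoothed) information-criterion weights:
    w_m = exp(-IC_m / 2) / sum_j exp(-IC_j / 2).
    Subtracting the minimum IC first avoids numerical underflow
    and leaves the weights unchanged."""
    ic = np.asarray(ic, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-delta / 2.0)
    return w / w.sum()

# Four candidate models with hypothetical AIC values
aic = [100.0, 102.0, 101.0, 110.0]
w = ic_weights(aic)
# Weights are positive, sum to one, and favor the lowest-AIC model
```

The shift by the minimum IC is a common numerical safeguard: only IC differences matter for the resulting weights.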
On the other hand, the proposed model averaging strategy is single-step and implemented by means of an augmented regression model using all the available data jointly. It then requires the construction of the auxiliary dependent (y_P) and independent (x_P) variables, by appropriately stacking the actual data y_g and x_h into single column vectors.
With reference to the above set of models, consider the stacked model obtained from their union, i.e., y_P = x_P β + ε_P, where y_P = (y_1′, y_2′, y_1′, y_2′)′ and x_P = (x_1′, x_1′, x_2′, x_2′)′ stack the candidate data in single column vectors. The stacked OLS problem is then stated as min_β (y_P − x_P β)′(y_P − x_P β), yielding, after some algebra, β̂_P = (x_P′x_P)^{−1} x_P′y_P = w_1 (β̂_11 + β̂_21)/2 + w_2 (β̂_12 + β̂_22)/2, with w_h = x_h′x_h / (x_1′x_1 + x_2′x_2), h = 1, 2. The ex-ante model averaging or stacked OLS estimator of β is then equivalent to its ex-post counterpart, with weights determined according to the relative variation of the candidate regressors.
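The equivalence between the stacked (ex-ante) estimator and the weighted ex-post average can be checked numerically in the bivariate example (a sketch with simulated data; all names and the data-generating process are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
beta = 1.5
# Two candidate regressors and two candidate dependent variables
x1 = rng.normal(size=T)
x2 = 2.0 * rng.normal(size=T)          # different scale -> different weight
y1 = beta * x1 + rng.normal(size=T)
y2 = beta * x1 + rng.normal(size=T)

ols = lambda y, x: (x @ y) / (x @ x)   # no-intercept OLS slope

# Disjoint estimates beta_hat_{gh}: g indexes y, h indexes x
b = {(g, h): ols(y, x) for g, y in [(1, y1), (2, y2)]
                       for h, x in [(1, x1), (2, x2)]}

# Ex-ante (stacked) OLS: stack the four models' data into single vectors
y_P = np.concatenate([y1, y2, y1, y2])
x_P = np.concatenate([x1, x1, x2, x2])
b_stacked = ols(y_P, x_P)

# Ex-post counterpart: weights proportional to regressor variation x_h'x_h
w1 = (x1 @ x1) / (x1 @ x1 + x2 @ x2)
w2 = 1.0 - w1
b_expost = w1 * (b[(1, 1)] + b[(2, 1)]) / 2 + w2 * (b[(1, 2)] + b[(2, 2)]) / 2
# b_stacked and b_expost coincide up to floating-point error
```

The agreement here is an exact algebraic identity, so it holds for any data, not just this draw.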
Moreover, consistent OLS estimation of σ² from the generic gh-th disjoint model yields σ̂²_gh = ε̂_gh′ε̂_gh / (T − 1). Hence, the stacked OLS estimator of σ² is equivalent to the arithmetic mean, across models, of the disjoint OLS estimators of σ².
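The variance result can also be checked numerically (a sketch under the assumption that all candidate pairs measure the same underlying relationship, so the disjoint estimates agree asymptotically; the equivalence below therefore holds up to terms that vanish as T grows, and the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
x1 = rng.normal(size=T)
x2 = x1 + 0.1 * rng.normal(size=T)      # second, noisy measure of the regressor
y1 = 1.5 * x1 + rng.normal(size=T)      # two measures of the dependent variable
y2 = 1.5 * x1 + rng.normal(size=T)

ols = lambda y, x: (x @ y) / (x @ x)

# Disjoint sigma^2 estimates from the four candidate models
pairs = [(y, x) for y in (y1, y2) for x in (x1, x2)]
s2_disjoint = [((y - ols(y, x) * x) ** 2).sum() / (T - 1) for y, x in pairs]

# Stacked sigma^2 estimate from the single augmented regression
y_P = np.concatenate([y1, y2, y1, y2])
x_P = np.concatenate([x1, x1, x2, x2])
e_P = y_P - ols(y_P, x_P) * x_P
s2_stacked = (e_P @ e_P) / (4 * T - 1)
# s2_stacked is close to the arithmetic mean of the disjoint estimates
```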
Issues related to the (relative) efficiency of the stacked OLS estimator and the gain in terms of higher degrees of freedom are discussed below.

Ex-ante model averaging by stacking
Consider the regression function y = Xβ + ε and suppose that G candidate dependent variables are available, i.e., y_1, y_2, ..., y_G, where y_g, g = 1, ..., G, is a T × 1 column vector of observations. For simplicity, three cases for the specification of the design matrix are considered:
1. The case of a single T × k design matrix X, common to all candidate dependent variables.
2. The case of H candidates for one of the k regressors in the model, ordered first for simplicity, i.e., x_1h, h = 1, ..., H, yielding up to H different X_h design matrices.
3. The case of H candidates for each of the k regressors in the model, yielding up to H^k different design matrices X_h, h = 1, ..., H^k.

The case of a single design matrix
In case 1 up to  models could be estimated, i.e., vector of observations on the  available candidate dependent variables, obtained by stacking the  column vectors y  on top of one other; X P1 is the ( ×  ) ×  joint design matrix obtained by staking  times the matrix X on top of itself, i.e., Hence, the sample size of the stacked model is  =  ×  Disjoint OLS estimation of the th generic model in (15) yields (see [9]) while for the variance, in large samples,

The ex-ante model averaging estimator
Ex-ante model averaging is obtained by OLS estimation of the stacked model, yielding β̂_P = (X_P1′X_P1)^{−1} X_P1′y_P. The linkage between ex-ante and ex-post model averaging can then be gauged by noting that the estimator can be stated as β̂_P = (G X′X)^{−1} X′ Σ_g y_g = (1/G) Σ_g β̂_g. Hence, in this case, ex-ante OLS model averaging is equivalent to ex-post arithmetic model averaging across the G disjoint OLS estimators β̂_g.
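For the single-design-matrix case, the equality between the stacked estimator and the arithmetic average of the disjoint estimators is an exact identity, which can be verified directly (a sketch; dimensions and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T, k, G = 100, 3, 4
X = rng.normal(size=(T, k))
beta = np.array([1.0, -0.5, 2.0])
# G candidate dependent variables sharing the same design matrix X
Y = [X @ beta + rng.normal(size=T) for _ in range(G)]

ols = lambda y, X: np.linalg.solve(X.T @ X, X.T @ y)

b_disjoint = np.array([ols(y, X) for y in Y])   # G disjoint OLS estimates

# Stacked model: the y_g stacked on top of one another, X replicated G times
y_P = np.concatenate(Y)
X_P1 = np.vstack([X] * G)
b_stacked = ols(y_P, X_P1)
# b_stacked equals the arithmetic average of the disjoint estimators
```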
The stacked OLS estimation of the error variance then yields σ̂²_P = ε̂_P′ε̂_P / (N − k), which also is the arithmetic average, across the G available models, of the disjoint estimators σ̂²_g.

The case of multiple design matrices
In the case of multiple design matrices, up to M regression models can be estimated for each candidate dependent variable, with M = H in case 2 and M = H^k in case 3, i.e., y_g = X_h β + ε_gh, g = 1, ..., G, h = 1, ..., M.
The disjoint OLS estimator for the generic gh-th model, g = 1, ..., G, h = 1, ..., M, is β̂_gh = (X_h′X_h)^{−1} X_h′y_g, while for the variance, in large samples, Var(β̂_gh) = σ²_gh (X_h′X_h)^{−1}. On the other hand, the union of the above disjoint models yields the stacked model y_PG = X_PG β + ε_PG, where β is the k × 1 vector of parameters; y_PG is obtained from the G candidate (T × 1) vectors y_g, g = 1, ..., G, which are stacked on top of one another and then replicated M times, i.e., y_PG = i_M ⊗ vec([y_1 ... y_G]), where vec(·) is the vectorization operator, ⊗ is the Kronecker product and i_M an M × 1 unitary vector. Denoting by X* the (T × M) × k matrix obtained by stacking the M candidate design matrices on top of one another, X_PG is then the (T × G × M) × k design matrix yielded by stacking the matrix X* G times on top of itself, conformably with y_PG. The stacked OLS estimator is then computed as β̂_PG = (X_PG′X_PG)^{−1} X_PG′y_PG.

The case of a single candidate dependent variable
For the sake of simplicity, consider first the case where G = 1; hence, N = T × M, y_PG = y_1G = i_M ⊗ y_1, and the design matrix in the stacked model is X* = (X_1′, X_2′, ..., X_M′)′. The stacked OLS estimator can then be stated as β̂_PG = (Σ_h X_h′X_h)^{−1} Σ_h X_h′y_1, and, substituting X_h′y_1 = X_h′X_h β̂_1h, h = 1, ..., M, it follows that β̂_PG = Σ_h W*_h β̂_1h, where W*_h = (Σ_j X_j′X_j)^{−1} X_h′X_h.⁴ Optimal ex-ante weights, contained in the k × k matrices W*_h, h = 1, ..., M, are then computed by taking into account all the information available on the various candidate regressors, being proportional to their relative variation; note that the weighting matrices sum to the identity matrix, i.e., Σ_h W*_h = I_k. Moreover, given ε̂_PG = ε̂_1G, the stacked estimator σ̂²_PG is the arithmetic average, across the available M models, of the disjoint estimators σ̂²_1h.
⁴ Given matrices A and B, nonsingular and of proper dimensions for their sum, it holds (A + B)^{−1} = A^{−1} − A^{−1}B(A + B)^{−1}.
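The matrix-weighted decomposition of the stacked estimator can be verified numerically for G = 1 (a sketch; dimensions and data are illustrative, and the identity is exact for any data):

```python
import numpy as np

rng = np.random.default_rng(3)
T, k, M = 100, 2, 3
beta = np.array([1.0, 2.0])
Xs = [rng.normal(size=(T, k)) for _ in range(M)]   # M candidate design matrices
y1 = Xs[0] @ beta + rng.normal(size=T)             # single dependent variable

ols = lambda y, X: np.linalg.solve(X.T @ X, X.T @ y)
b_disjoint = [ols(y1, X) for X in Xs]

# Stacked OLS: y1 replicated M times against the stacked design matrices
y_stack = np.tile(y1, M)
X_stack = np.vstack(Xs)
b_stacked = ols(y_stack, X_stack)

# Matrix weights W*_h = (sum_j X_j'X_j)^{-1} X_h'X_h; they sum to the identity
S = sum(X.T @ X for X in Xs)
W = [np.linalg.solve(S, X.T @ X) for X in Xs]
b_weighted = sum(Wh @ bh for Wh, bh in zip(W, b_disjoint))
# b_stacked equals b_weighted, and sum(W) equals the k x k identity
```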

The case of multiple candidate dependent variables
Consider now the case in which more than a single candidate dependent variable is available, i.e., G > 1.

The stacked OLS estimator is then β̂_PG = (G Σ_{h=1}^{M} X_h′X_h)^{−1} Σ_{h=1}^{M} X_h′ Σ_{g=1}^{G} y_g. By substitution, one eventually has β̂_PG = Σ_{h=1}^{M} W*_h ((1/G) Σ_{g=1}^{G} β̂_gh), where, as for the previous case, W*_h = (Σ_{j=1}^{M} X_j′X_j)^{−1} X_h′X_h. The optimal ex-ante weights, contained in the k × k matrices W*_h, h = 1, ..., M, are again computed by taking into account all the information available on the various candidate regressors and are proportional to their relative variation. Averaging is then performed across all possible models which can be estimated according to the G candidate dependent variables.
Ex-ante model averaging estimation of the variance σ²_ε is then computed as the arithmetic average, across all the G × M models, of the disjoint estimators σ̂²_gh.
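The general-case decomposition, arithmetic averaging over the G dependent variables combined with X′X-weighted averaging over the M design matrices, can be checked as follows (a sketch; dimensions and data are illustrative, and the slope identity is exact):

```python
import numpy as np

rng = np.random.default_rng(4)
T, k, G, M = 80, 2, 3, 2
beta = np.array([0.5, -1.0])
Xs = [rng.normal(size=(T, k)) for _ in range(M)]           # M design matrices
Ys = [Xs[0] @ beta + rng.normal(size=T) for _ in range(G)]  # G dependent variables

ols = lambda y, X: np.linalg.solve(X.T @ X, X.T @ y)

# Stack all G x M candidate models into one augmented regression
y_PG = np.concatenate([y for X in Xs for y in Ys])
X_PG = np.vstack([X for X in Xs for _ in Ys])
b_stacked = ols(y_PG, X_PG)

# Ex-post counterpart: arithmetic averaging over g, X'X-weighted over h
S = sum(X.T @ X for X in Xs)
b_expost = sum(
    np.linalg.solve(S, X.T @ X) @ np.mean([ols(y, X) for y in Ys], axis=0)
    for X in Xs
)
# b_stacked equals b_expost up to floating-point error
```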

Statistical properties
Assume that the properties of the classical linear regression model hold, i.e.:
1. The population regression function is linear in the k parameters, i.e., y = Xβ + ε.
2. {y_gt, x_ht} is a candidate stationary and ergodic process, g = 1, ..., G; h = 1, ..., M; t = 1, ..., T; where x_ht is a k × 1 vector of regressors (belonging to the h-th design matrix X_h) at observation t.
3. The regressors x_ht are at least contemporaneously orthogonal to the residuals, i.e., E[ε_gh,t | x_ht] = 0, where ε_gh,t is the residual from the generic gh-th model at observation t.
4. Each of the candidate design matrices X_h has rank equal to k with probability 1, with plim T^{−1} X_h′X_h a finite, symmetric, invertible, positive definite matrix.
5. The conditional variance-covariance matrix of the residuals ε_gh is a scalar identity matrix, i.e., E[ε_gh ε_gh′ | X_h] = σ²_gh I_T.
Under the above assumptions (even relaxing the conditional homoskedasticity property), the disjoint OLS estimators β̂_gh and σ̂²_gh are consistent and asymptotically normal (see [9]). The same properties hold for the stacked OLS estimator. Proofs for the most general case are reported below; results for the intermediate cases can be straightforwardly derived from those provided, by setting G = 1 or M = 1.

Large sample properties
In so far as plim N^{−1} X_PG′X_PG is finite and invertible, and plim N^{−1} X_PG′ε_PG = 0, the stacked OLS estimator β̂_PG is consistent. Under properties 1 to 5, by means of a CLT (see [9]), asymptotic normality also obtains; the asymptotic distribution of β̂_PG then follows, as well as its feasible form, obtained by replacing σ² with its consistent estimator. In the case of conditional heteroskedasticity (Σ = diag(σ²_gh)), it would be straightforward to prove that the same results hold with the usual robust form of the asymptotic variance. Moreover, the difference between the asymptotic variances of the disjoint and stacked OLS estimators is a finite, symmetric, positive semidefinite k × k matrix, as σ² > 0 and N^{−1} > 0, both finite, and (I + K*_h)(X_h′X_h)^{−1} is a finite, symmetric, positive semidefinite k × k matrix by construction (X_h is real and of full column rank k for any h).
Finally, the gain in terms of degrees of freedom yielded by the stacked over the disjoint OLS estimator is equal to (G × M − 1) × T. In fact, by recalling that the number of degrees of freedom of the residual term is equal to the rank of the annihilator matrix (see [9]), the gain yielded by stacked over disjoint OLS estimation can be established by comparing the rank of the annihilator matrix in the two cases, i.e., T × G × M − k for the stacked model versus T − k for the generic disjoint model. The increase in degrees of freedom yielded by stacked over disjoint OLS estimation is then (T × G × M − k) − (T − k) = (G × M − 1) × T.
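The degrees-of-freedom comparison can be verified directly via the rank of the annihilator matrix (a numerical sketch; the dimensions chosen are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
T, k, G, M = 30, 3, 2, 2
X = rng.normal(size=(T, k))

def annihilator_rank(X):
    """Rank of M_X = I - X (X'X)^{-1} X', i.e., residual degrees of freedom."""
    n = X.shape[0]
    M_X = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    return np.linalg.matrix_rank(M_X)

df_disjoint = annihilator_rank(X)            # T - k
X_stacked = np.vstack([X] * (G * M))         # stacked design, G*M blocks
df_stacked = annihilator_rank(X_stacked)     # T*G*M - k
gain = df_stacked - df_disjoint              # (G*M - 1) * T
```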

Small sample properties
If the stronger assumption of strict exogeneity is made in 3. above, i.e., E[ε_gh | X_h] = 0, the disjoint OLS estimators β̂_gh and σ̂²_gh = ε̂_gh′ε̂_gh / (T − k) are also (conditionally and unconditionally) BLUE, i.e., best unbiased and efficient within the class of linear estimators (see [9]). Moreover, if the assumption of conditional Normality of the error term is added, i.e., ε_gh | X* ∼ N(0, σ²_gh I), OLS is (conditionally and unconditionally) BUE, i.e., best unbiased within the class of linear and nonlinear estimators, as well as (conditionally and unconditionally) Normally distributed, with a finite, nonsingular, symmetric, positive definite conditional variance matrix. The above properties can also be established for the stacked OLS estimator, in the same way as for the disjoint OLS estimator (see [9]), yielding a finite, nonsingular, symmetric, positive definite conditional variance matrix and its feasible form. Then, by comparing the conditional variances of β̂_gh and β̂_PG, the same ranking as for the asymptotic case obtains; the difference is again a finite, symmetric and positive semidefinite k × k matrix by construction. Finally, the gain in terms of degrees of freedom yielded by stacked over disjoint OLS estimation is again (T × G × M − k) − (T − k) = (G × M − 1) × T, as already shown for the asymptotic case.

Conclusions
The paper introduces an ex-ante model averaging approach, requiring the estimation of a single augmented model obtained from the union of all the possible candidate models, rather than their disjoint estimation. In this framework, optimal weights are implicitly computed according to the MSE metric, i.e., by minimizing the squared Euclidean distance between actual and predicted value vectors, and are proportional to the relative variation of the regressors. By exploiting ex-ante all the available information on the various candidate sets of variables, and relying on more degrees of freedom, the approach leads to more accurate and (relatively) more efficient estimation than available ex-post methods. Moreover, the proposed estimator shows the same optimal properties as the disjoint OLS estimator, under the usual set of assumptions concerning the population regression model. While the method is proposed within the OLS framework, extension to GIVE and GMM is straightforward. We point to [1] for an empirical application, using OLS and GMM estimation.