Selection of Macroeconomic Forecasting Models: One Size Fits All?

Abstract

The main distinction between this paper and the traditional approach is the assumption that variables affect the economy over different horizons. Under this alternative hypothesis, a variable dismissed as an unimportant detail from a short-horizon perspective may become an essential factor from a long-horizon standpoint; this paper therefore suggests selecting variables specific to the horizon. My findings confirm that a model that allows the variables to be particular to the horizon has a lower Schwarz Bayesian Information Criterion (SBIC) value than a model that does not. My results also show that the vector autoregression (VAR) model in general forecasts poorly compared with my approach. I further contribute to the literature by setting predictions equal to the sample mean as a benchmark and showing that the out-of-sample forecasts of VAR models with lag length greater than one fail to outperform the sample mean. Additionally, I select principal components derived from 190 different time series to forecast a single time series as the time horizon varies. Again, the results show that some principal components may be more important at some horizons than at others, so I suggest selecting the principal components in a factor-augmented VAR (FAVAR) model specific to the horizon. From these results, I conclude that long-horizon, deep-rooted economic problems cannot be fixed with short-horizon, surface-level interventions. I also support my argument via simulation.

Citation:

Lv, Y. (2017) Selection of Macroeconomic Forecasting Models: One Size Fits All?. Theoretical Economics Letters, 7, 643-682. doi: 10.4236/tel.2017.74048.

1. Introduction

The standard practice in vector autoregression (VAR) modeling is to select the lag length and the variables to be included using a one-step-ahead model. That model is then applied to make forecasts at all time horizons. This is optimal if the one-step-ahead model, including the distribution of the error term, is correctly specified. Nevertheless, it is well known in the forecasting literature that this procedure may not be optimal in practice.

The papers in the literature review provide the theoretical background for how different time series may affect the economy over multiple time scales. Given the possibility that variables affect the economy over various time spans, we likely need to include different variables in different-step-ahead forecasting models as the forecasting horizon increases, because a traditional one-step-ahead model cannot accommodate all key variables at their own horizons. This assumption raises the question of whether the common practice of using the same variables at all horizons in a VAR model is appropriate. It is straightforward to test this doubt without making any strong assumptions.

My primary goal in this paper is to determine whether a model that allows different variables at a given horizon has a lower Schwarz Bayesian Information Criterion (SBIC) value than a model that does not. I also gauge whether a multiple-step-ahead model in which variables are selected according to their horizons achieves a lower out-of-sample mean squared error (MSE) ratio under iterated forecasts than the standard VAR model. Does a model allowing different variables specific to the horizon have a lower out-of-sample MSE than a model that does not?

Furthermore, for a factor model, conventional methods focus on selecting the principal components from the front. They ignore the possibility that the selected principal components may vary as the corresponding horizon differs. I use 190 different time series to calculate the principal components and then select the optimal principal components, one by one, to forecast a single time series at a given horizon. I check whether a model allowing different principal components at each horizon has a lower out-of-sample MSE under direct forecasts than a model that does not. Do the principal components at the end have lower out-of-sample MSEs than the principal components at the front if we allow the horizon to differ? As I shall argue, I find my assumption more appealing than the conventional assumption that the variables should be the same at all horizons.

I contribute to the literature in four ways:

・ This paper constructs a novel framework providing a systematic way to select variables specific to the horizon, with fewer coefficients than a VAR model. I demonstrate that variables should be modeled specific to the horizon: including all variables in a one-step-ahead model is not sufficient to resolve the question of the relative importance of different variables, which may change as the horizon varies.

・ I also set the sample mean as a benchmark for judging the forecasting performance of VAR models and find that the sample mean forecasts better than traditional VAR models with more than one lag. I demonstrate that the one-step-ahead VAR model forecasts GDP poorly during recessions relative to the multi-step-ahead models I select. This in turn indicates that a model allowing variables specific to the horizon enhances the out-of-sample predictive ability of the VAR model.

・ My results indicate that we should reselect principal components as the time horizon changes. The principal components from the front do not necessarily forecast better than the principal components from the end as the time horizon varies.

・ Finally, I repeat the selection exercise to see whether the same problem plagues factor-augmented VAR (FAVAR) models. The FAVAR model is meant to summarize the information in all variables with a few factors. My results suggest that since some principal components may be more important at some horizons than at others, we have to select the principal components in a FAVAR model specific to the horizon.

A potential criticism of my approach is that I select variables somewhat arbitrarily, by a criterion applied through computer programs. Since my primary focus is to demonstrate that the variables in forecasting models change as the horizon changes, the variables are clearly the same at all time horizons only if my selection results are the same at all horizons. Conversely, if any variable changes with the time horizon, it is possible to show that the importance of variables may depend on the exact horizon. The forecasting models selected by computer programs need further regression analysis, but I do not consider this limitation overly problematic. I am interested in verifying the possibility that variables vary as the horizon changes, rather than in explaining it. In other words, I try to show that we should build scale-wise models with variables specific to the horizon, not to have computers build an actual theoretical model in this paper.

Additionally, as Box (1979) [1] noted, “All models are false, but some are useful.” Stock and Watson (1999) [2] mention that even a misspecified model may still produce reasonable one-period-ahead forecasts. In this paper, I claim that forecasts obtained by iterating forward multi-step-ahead projections, with variables selected in the multi-step-ahead model, may improve the forecast accuracy of some time series during recessions, even though these variables may be ignored by traditional one-step-ahead analysis. Even though the omitted variables in the error term that affect the economy directly at other horizons may be correlated with the variables on the right-hand side (R.H.S.) of a model1, I claim that my approach may be appropriate for selecting forecasting models for recessions relative to the conventional models. The remainder of the paper is organized as follows. Section 2 presents a literature review in support of my assumption that the variables in a model need not be the same at all horizons. Section 3 outlines my methodology, and Section 4 conducts a small simulation experiment to determine the probability that my argument is spurious. Section 5 compares my results with the out-of-sample forecasts of an SVAR model, while Section 6 analyzes the selected forecasting models with principal components. To provide further empirical evidence, I investigate the forecasts of US real GDP and the inflation rate during recessions and discuss the implications of the results. Concluding comments and directions for future research are given in Section 7. The Appendix summarizes the data sources.

2. Literature Review

Since Sims (1980) [4], Doan et al. (1984) [5], and Litterman (1986a) [6], the VAR model has been a useful tool for making out-of-sample forecasts in macroeconomics: it approximately captures the joint dynamics of multiple variables in a one-step-ahead model and predicts their future fluctuations. VAR models tend to suffer from over-parameterization and problematic predictions caused by an excess of free, insignificant parameters. Shrinkage methods have been proposed to resolve this problem of VAR modelling, such as variable selection, factor models (Stock and Watson, 2005) [7], FAVAR models (Bernanke et al., 2004) [8], and so on. For variable selection, traditional methods focus on which, and how many, variables to include from the candidates, but they ignore the possibility that the selected variables may vary as the corresponding horizon differs. The same problem may also plague factor models and FAVAR models. Models, especially macroeconomic models, will always omit variables; the key is knowing whether the omitted variables are important and how they impact our models. By selecting the variables specific to the horizon, I try to include the important variables at each horizon and thereby reduce the number of free, insignificant parameters.

The assumption that the variables should be the same at all horizons is in fact almost always subject to serious challenges, such as variance decomposition evidence. Despite this, the poor forecasting performance of VAR models has not been attributed to variation in the relevant variables across horizons. For example, Friedman (1961) [9] argues that, over the eighteen non-war business cycles since 1870, monetary policy affects economic conditions only after a lag that is long and variable.

Likewise, Blanchard and Quah (1989) [10] make an analogous argument that some variables are more important at some horizons than at others. When they examine the forecast error variance decompositions of output at multiple horizons, they find that the contribution of demand disturbances to output is above 80% before the 8th forecast period but drops sharply to 39.3% after 40 periods, indicating a declining contribution of demand disturbances in explaining movements in output. At the same time, the contribution of supply disturbances to output increases over time. They point out that demand disturbances have a hump-shaped effect on output that disappears after about two years, while supply disturbances have a continually increasing effect that reaches a plateau after five years. Their findings are consistent with my argument that supply and demand shocks may play important roles in explaining output at different horizons. If we focus only on short-horizon evidence, we may analyze only demand interventions in the model, while many variables that affect output persistently and strongly over long horizons may be ignored.

Kilian (2009a) [11] identifies oil-specific demand shocks and oil supply shocks. He postulates that global oil production does not respond to oil demand shocks contemporaneously, based on the costs of adjusting production and anecdotal evidence on OPEC production decisions. Furthermore, his model imposes a delay restriction on feedback from fluctuations in the real price of oil to global real activity, ruling out instantaneous feedback within the month. His delay restrictions suggest that not all variables affect other variables in the economy immediately. Lippi and Nobili (2009) [12] implement a closely related approach to decompose oil demand and oil supply shocks. In their Table 3, they provide compelling evidence that US aggregate demand shocks explain the largest share at short horizons (1-6 months), while their role becomes smaller than that of US aggregate supply shocks at all subsequent horizons. The variance decompositions in an extensive number of studies identifying different kinds of shocks support the idea that the variables considered essential swing across time horizons.

Cassou and Vázquez (2012) [13] contribute to the VAR literature by showing that the well-known lead and lag patterns between output and inflation arise mostly over medium-term forecast horizons.

These papers provide evidence for my assumption that variables need not be related at only one step ahead, which is the foundational principle behind the approach outlined in the following section.

3. Methodology

In the usual approach to multi-step-ahead forecasting, economists select a one-step-ahead VAR model and use the same model to make forecasts at every horizon. Researchers typically proceed as if certain that the variables are the same at all forecasting horizons, disregarding the alternative view that a variable's substantial contribution may change with the horizon. Though using the same variables at all horizons is extremely prevalent in the literature, this paper selects the macroeconomic variables and lag lengths of the multiple-step-ahead VAR model using a criterion specific to the horizon, an approach mostly disregarded in mainstream discussion.

Equation (1) displays one equation in my multiple-step-ahead VAR model. Each equation has a single variable on the left-hand side, denoted $y_{t+s}$; the variables on the right-hand side (R.H.S.) serve as explanatory variables, denoted $X_{t-i}$. Equation (1) regresses a multi-period-ahead value of the predicted variable on its past values and the past values of the explanatory variables selected by the lowest SBIC, a criterion recommended by Diebold (2015) [14] because it is based on the full sample. I employ the following forecast equation:

$$y_{t+s} = \alpha^{s} + \sum_{i=1}^{P} \gamma_i^{s+1} y_{t-i} + \sum_{i=1}^{P} \beta_i^{s+1} X_{t-i} + \varepsilon_{t+s}^{s}, \quad s = 0, 1, 2, \ldots, h \quad (1)$$

where $y_{t+s}$ is the dependent variable we want to forecast $s$ steps ahead for $h$ different forecast horizons, $y_{t-i}$ is the $i$th lag of the left-hand-side variable, $X_{t-i}$ is a vector of explanatory variables for each lag $i$, and the number of variables in $X$ is $N$. $P$ denotes the number of lags, with $P \le 12$ in this paper. $\alpha^{s}$ is the constant in the s-step-ahead forecasting equation, $\beta_i^{s+1}$ denotes the matrix of parameters corresponding to the $i$th lag of the $N$ variables in $X_{t-i}$, and $\varepsilon_{t+s}^{s}$ is the forecast error term of the s-step-ahead equation.

I will use Equation (1) to reselect the variables in $X$ from all combinations of variables, and the lag length, by SBIC. The lag length $P$, the variables in $X$, and the number of variables $N$ need not be the same across horizons.

For instance, suppose I have $m$ variables: $x_1, x_2, \ldots, x_m$. To select the variables and lag length in the model for $s$ steps ahead, I proceed in the following steps:

1) In step one, for the s-step-ahead forecasting model with one variable in $X$ in Equation (1):

a) First, I regress the s-horizon-ahead value of the variable, $y_{t+s}$, on its past value $y_{t-1}$ and the past value $x_{1,t-1}$, with the lag length equal to one, and estimate this equation to calculate the SBIC.

b) Second, I keep the same variable $x_1$ but change the lag length $P$ in Equation (1) from one to two and so on up to twelve, obtaining twelve SBICs in total.

c) Third, I repeat this with another variable, $x_2$, for lag lengths one to twelve, which yields twelve more SBICs, and so on for each candidate. The final SBICs with only one variable in $X$ come from regressions of $y_{t+s}$ on its past values and the past values of the last variable $x_m$, with lag lengths from one to twelve. The total number of SBICs with only one variable in $X$ is thus $12m$.

2) In step two, for two variables in $X$, I regress $y_{t+s}$ on its past values and the past values of every combination of two variables, with lag lengths from one to twelve. This yields $12\binom{m}{2} = 6\,m!/(m-2)!$ SBICs.

3) Continuing in this way, in step m, with all $m$ variables in $X$, the last SBICs are calculated from regressions of $y_{t+s}$ on its past values and the past values of all $m$ variables, with lag length $P$ from 1 to 12.

After I collect the SBICs for all combinations of variables at horizon $s$, I select the most favorable model by the lowest SBIC. I then reselect the variables and lag length for each of the other horizons. Finally, I can compare the selection outcomes across time horizons and check whether the selected variables change as the time horizon changes.
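To make the search concrete, the following minimal Python sketch enumerates all variable combinations and lag lengths for a given horizon and returns the specification with the lowest SBIC. The helper names are hypothetical and the Gaussian SBIC formula is a standard textbook form; this illustrates the procedure above rather than reproducing the code used for the paper's computations.

```python
import itertools
import numpy as np

def sbic(y, X):
    # Gaussian SBIC for an OLS regression of y on X (X includes the constant):
    # T*ln(RSS/T) + k*ln(T), where k is the number of estimated coefficients.
    T = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return T * np.log(resid @ resid / T) + X.shape[1] * np.log(T)

def build_regressors(y, candidates, combo, p, s):
    # Align y_{t+s} with a constant, p lags of y, and p lags of each
    # candidate variable in `combo` (Equation (1); s = 0 is one step ahead).
    T = len(y)
    ts = range(p, T - s)
    lhs = np.array([y[t + s] for t in ts])
    rhs = np.array([[1.0]
                    + [y[t - i] for i in range(1, p + 1)]
                    + [candidates[j][t - i] for j in combo
                       for i in range(1, p + 1)]
                    for t in ts])
    return lhs, rhs

def select_model(y, candidates, s, max_lag=12):
    # Exhaustive search over variable subsets and lag lengths, as in steps 1-m.
    best = (np.inf, None, None)                     # (SBIC, combo, p)
    m = len(candidates)
    for r in range(1, m + 1):
        for combo in itertools.combinations(range(m), r):
            for p in range(1, max_lag + 1):
                lhs, rhs = build_regressors(y, candidates, combo, p, s)
                crit = sbic(lhs, rhs)
                if crit < best[0]:
                    best = (crit, combo, p)
    return best
```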

In brief, the coefficients of the unselected variables are automatically set to zero by the data. This means that some variables may make no substantial contribution at some horizons. I set the sample mean as the out-of-sample forecasting benchmark.

4. Simulation Evidence

I use simulation to argue that, under my assumption, we need to include different variables in the model at different horizons. The conventional tenet is that the only relevant information for output today is localized in the short run; that nothing is gained by directly incorporating long-horizon information and coefficients estimated from long-horizon models; and that we can use the variables selected one step ahead to mechanically infer the dynamics of the dependent variable through forward aggregation with the same variables at all horizons. The simulation is designed to show that if the data generation process (DGP) is such that two shocks affect the economy differently at different horizons, the SBIC of a model that allows the variables to change at each horizon will be lower than that of a model whose variables do not change. In that case, mechanically generating the dynamics of the dependent variable by forward aggregation of the variables selected one step ahead will miss the effect of the variables that affect the dependent variable over long horizons2.

First, I assume there are two types of shocks, each uncorrelated with the other, which affect the economy over different horizons. I interpret the disturbances that affect output $y$ at horizons $h = 0, \ldots, 6$ as demand disturbances, and those that affect output only at horizons $h = 14, \ldots, 20$ as supply disturbances. The effect of demand disturbances on $y$ turns negative at $h = 4, \ldots, 6$. Moreover, the supply disturbances are assumed to have a lower frequency than the demand disturbances. The DGP is as follows:

$$y_t = 0.4 e_{D,t} + 0.3 e_{D,t-1} + 0.2 e_{D,t-2} + 0.1 e_{D,t-3} - 0.3 e_{D,t-4} - 0.2 e_{D,t-5} - 0.1 e_{D,t-6} + 0.4 e_{S,t-14} + 0.3 e_{S,t-16} + 0.2 e_{S,t-18} + 0.1 e_{S,t-20} \quad (2)$$

where $e_{D,t}, e_{S,t} \sim N(0,1)$. I also postulate that the demand variable $u_t$ and the supply variable $z_t$ are driven by the demand shocks and the supply shocks, respectively:

$$u_t = 0.4 e_{D,t} + 0.2 e_{D,t-1} \quad (3)$$

$$z_t = 0.4 e_{S,t} + 0.2 e_{S,t-1} \quad (4)$$

To simulate data, I first draw 500 normally distributed values for each type of shock and use the functions above to compute $y_t$, $u_t$, and $z_t$. I then select the variables and the lag length ($\le 12$) from all combinations of variables to forecast output $y_t$ using Equation (1) and the SBIC at each horizon. I repeat this process 1000 times and count how many times $u$, $z$, and both $u$ and $z$ are selected at each horizon.
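As a sketch, the DGP in Equations (2)-(4) and the counting exercise can be written as follows, reusing the hypothetical select_model helper sketched in Section 3 (the horizon range shown is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=500, burn=20):
    # Draw the two shock series and build y, u, z from Equations (2)-(4);
    # `burn` extra draws guarantee that the 20th lag of e_S exists at t = 0.
    eD = rng.standard_normal(T + burn)
    eS = rng.standard_normal(T + burn)
    y, u, z = np.empty(T), np.empty(T), np.empty(T)
    for t in range(T):
        i = t + burn
        y[t] = (0.4*eD[i] + 0.3*eD[i-1] + 0.2*eD[i-2] + 0.1*eD[i-3]
                - 0.3*eD[i-4] - 0.2*eD[i-5] - 0.1*eD[i-6]
                + 0.4*eS[i-14] + 0.3*eS[i-16] + 0.2*eS[i-18] + 0.1*eS[i-20])
        u[t] = 0.4*eD[i] + 0.2*eD[i-1]
        z[t] = 0.4*eS[i] + 0.2*eS[i-1]
    return y, u, z

# Count how often u, z, or both are selected at each horizon over 1000 draws.
counts = {s: {"u": 0, "z": 0, "both": 0} for s in range(8)}
for rep in range(1000):
    y, u, z = simulate()
    for s in counts:
        _, combo, _ = select_model(y, [u, z], s)   # sketched in Section 3
        label = {(0,): "u", (1,): "z", (0, 1): "both"}[combo]
        counts[s][label] += 1
```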

Table 1 reports how often each candidate set of variables (denoted by $X$ in Equation (1)) is selected from all combinations of $u$ and $z$ by the lowest SBIC to forecast $y$ at each horizon over 1000 simulations. Column 1 lists the time horizon, and columns 2 to 4 list the number of times $u$, $z$, and both $u$ and $z$ are selected at that horizon, respectively. The selected lag length $p$ is restricted to be at most 12.

Table 1. Selected variables specific to the horizon (out of 1000 times).

According to Table 1, the demand variable $u$ is always selected in the one-step-ahead forecasting model, and for the two-step-ahead model it is included 999 times out of 1000. For the three-step-ahead model, however, $u$ alone is selected with probability 29.2% and both $u$ and $z$ with probability 70.8%. For horizons longer than four, the supply variable has a higher probability of being selected for forecasting output than the demand variable.

The simulation results show that we may need to include different variables in the forecasting model at different horizons if different shocks affect the economy over different horizons. If the supply variable $z$ causes recessions over long horizons, the variable $u$ selected from the near term is unlikely to forecast $y$ well during recessions. According to Equation (2), even though demand increases can stimulate $y$ in the short term, they may also hurt output in the long term. Moreover, we cannot fix problems in $z$ by increasing $u$. Maximizing output $y$ by investing in the demand variable $u$ on the basis of one-step-ahead evidence may crowd out investment in the supply variable $z$ and limit the potential growth of GDP in the long run. In other words, chasing short-horizon benefits and solutions instead of more difficult long-term solutions may actually cause recessions.

Finally, the crucial logic from $u$ to $y$ over short horizons may be replaced by the logic from $z$ to $y$ as the time horizon changes. As more factors enter the system, that crucial logic becomes harder to identify. Hence, I argue that we should reveal the importance of long-horizon factors through long-horizon models, combine the crucial logic from different horizons to evaluate the effects of a shock, and pursue a healthy economy in the long run.

5. Application to the Small Data Set

Christiano et al. (CEE, 2005) [16] construct a model with a moderate degree of nominal rigidities that prevent a sharp rise in marginal costs, generating inertial inflation and persistent output movements after an expansionary shock to monetary policy.

The goal of this section is to use empirical evidence from the multiple-step-ahead model to answer the question: does a multiple-step-ahead model in which variables are selected according to the horizon have a lower out-of-sample MSE than the VAR model? I focus on comparing the out-of-sample forecasts from my approach with those of the best VAR that includes the same variables at all horizons. I first briefly summarize the CEE methodology.

The form of the CEE model is as follows:

$$S_t = k_0 + B_1 S_{t-1} + \cdots + B_p S_{t-p} + C \varepsilon_t \quad (5)$$

$S_t$ contains nine quarterly series, and the lag length $p$ of the model is set to 4. The ordering of the variables is: real gross domestic product (GDP), real consumption (RPCE), the GDP deflator (GDPDEF), real investment (INVEST), the real wage (WAGE), labor productivity (PROD), the federal funds rate (FEDFUNDS), real profits (PROFIT), and the growth rate of M2 (M2).

The matrix $C$ is taken to be lower triangular with ones along the principal diagonal. This implies that the variables, except real profits and the M2 growth rate, do not respond instantaneously to monetary policy innovations.

All estimates reported in this paper are based on the original 1965Q3-1995Q2 dataset of CEE (2005) [16]3. All data can be downloaded from the Federal Reserve Economic Database (FRED) provided by the Federal Reserve Bank of St. Louis. Real GDP, real consumption, real investment, the real wage, labor productivity, and real profits are measured as 100 times the natural logarithm of the original data. The federal funds rate is expressed in annualized percentage points. Inflation is 100 times the natural logarithm of the ratio of $CPI_t$ to $CPI_{t-1}$. M2 growth is the first difference of 100 times the natural logarithm of the original series. Because the first eight transformed variables are still not stationary, I use their first differences and leave M2 growth as is.
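A minimal sketch of these transformations (hypothetical helper names; the stationarity decision follows the description above):

```python
import numpy as np

def percent_log(x):
    # 100 times the natural logarithm, applied to the first eight series.
    return 100.0 * np.log(np.asarray(x, dtype=float))

def to_stationary(levels):
    # First difference of 100*ln(x). For the first eight series this removes
    # the remaining nonstationarity; for M2 it is the growth rate itself.
    return np.diff(percent_log(levels))
```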

One way to gauge whether my approach enhances the predictability of the VAR model is through the evaluation of out-of-sample forecasts, which divides the dataset into two subsamples. My estimation period is 1965Q3 to 1982Q4, and the remaining data from 1983Q1 to 1995Q2 is set aside for evaluating the forecast performance. To assess the out-of-sample forecast performance, I employ the out-of-sample MSE ratio.

First, I set the out-of-sample forecasting benchmark. Since all of the data are now stationary, I calculate the mean of each variable over 1965Q3 to 1982Q4 and use this sample mean as the forecast of the corresponding variable for 1983Q1 to 1995Q2. The mean squared difference between the sample mean and the realized data serves as the benchmark: if a model produces a higher out-of-sample MSE than this benchmark, I consider the sample mean preferable to that model for forecasting.
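In code, the benchmark is simply the in-sample mean held fixed over the evaluation window (a sketch with a hypothetical split index):

```python
import numpy as np

def benchmark_mse(series, split):
    # Forecast every post-split observation with the pre-split sample mean
    # and report the mean squared difference from the realized data.
    mean = series[:split].mean()
    return np.mean((series[split:] - mean) ** 2)
```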

I use the in-sample data to estimate the coefficients of the CEE model and obtain the one-step-ahead out-of-sample forecasts from a recursive forecasting scheme as shown below. I recursively iterate on the estimated VAR model, using the observations back to 1982Q4−p for the first forecast and back to 1995Q1−p for the last, where p is the selected lag length of the model. To complete the analysis, I also vary the lag length of the CEE model and compare the out-of-sample MSEs of VAR models with different lag lengths.

I begin with the MSE results of the selected VAR models with different lags in Table 2. The first row lists, for each variable, the mean squared difference between the sample mean over 1965Q3-1982Q4 and the realized data over 1983Q1-1995Q2, which serves as the benchmark. The maximum lag length is given in the 2nd column, and the lag length selected by the AIC is listed in the 3rd column. For a maximum lag length of 1, 2, or 3, the selected lag length equals 1. From the outcomes in Table 2, if we set the maximum lag length to 6, the AIC selects 6 lags rather than 1. Nevertheless, the forecasts from the model with one lag outperform those from models with longer lags.

To construct the multiple-step-ahead model with my approach for a single horizon, I use the same 1965Q3-1982Q4 sample to select the explanatory variables and lag length specific to that horizon. Forecasts for 1 to 50 quarters ahead are then generated using a recursive updating scheme. That is, for horizon h = 1, the forecast for 1983Q1 is based on data through 1982Q4. Next, I append the forecast to the data, keep the same coefficients, and compute the forecast for 1983Q2 based on data through 1983Q1, and so on. In other words, I recursively iterate on the estimated model, with the estimated coefficients from each horizon held constant, to compute the out-of-sample forecasts. I can then compare the forecasting performance of the VAR(1) model with the predictions of the multiple-step-ahead models whose variables are particular to each horizon.
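For the VAR(1) case, this recursive updating scheme with constant coefficients reduces to the following sketch (the intercept and coefficient matrix are assumed to have been estimated on the 1965Q3-1982Q4 sample):

```python
import numpy as np

def iterated_forecasts(intercept, coef, last_obs, n_periods):
    # Iterated (recursive) forecasts from an estimated VAR(1):
    # x_{t+1} = intercept + coef @ x_t. Each forecast is fed back in as data
    # while the estimated coefficients are held constant throughout.
    x = np.asarray(last_obs, dtype=float)
    path = []
    for _ in range(n_periods):
        x = intercept + coef @ x
        path.append(x.copy())
    return np.array(path)
```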

Table 3 reports the selection results. Instead of reporting the selected variables for all horizons, I report only the results for horizons h = 1 and h = 2. The outcomes across all horizons demonstrate that the procedure that allows the variables to change with the horizon attains a lower SBIC than the VAR model. Notably, the GDP deflator is not selected in my one-step-ahead model but is included in the two-step-ahead model.

Table 2. The out-of-sample MSEs of the VAR models (1983Q1-1995Q2).

Table 3. Selected multiple-step-ahead models by my approach.

Table 4 reports the out-of-sample MSE for the VAR model with one lag in the second row and, in the third row, the out-of-sample MSE of my selected one-step-ahead model with all variables imposed on the R.H.S. with only one lag.

Table 4. The out-of-sample MSEs of the selected models with constant coefficients (1983Q1-1995Q2).

Since different variables may affect the economy over different horizons, a one-step-ahead model may not be the best model for predicting recessions. I therefore also use my models from 1 to 50 steps ahead to make forecasts and compute the out-of-sample MSE for each step-ahead equation. The fourth row reports the lowest MSE among the 1-to-50-step-ahead models. I also vary the lag length of my model from one to four, reselect the variables, and recompute the forecasting results. From the evidence in Table 4, the VAR model fails to improve on the benchmark for GDPDEF, the real wage, labor productivity, and the federal funds rate.

Turning to the multiple-step-ahead models with different variables, the models selected by my approach outperform the benchmark. The minimum MSE for investment is 2.423, much lower than 3.502, indicating that using variables appropriate to the horizon is vital for enhancing the forecasting ability of economic models. Most notably, for the GDP deflator, the real wage, and labor productivity, the one-step-ahead model forecasts poorly relative to the benchmark regardless of how many lags are used; the multiple-step-ahead models yield substantive forecasting gains.

Stock and Watson (1996) [17] discuss the instability of the VAR system and suggest forecasting with parameters that are allowed to change. In line with this concern, I recursively iterate on the estimated model with varying coefficients from each horizon to compute the out-of-sample forecasts. That is, I append the forecasts to the data, reestimate the coefficients with the newly forecasted data, compute the forecast for 1983Q2 based on data through 1983Q1, and so on. The out-of-sample MSE results of this variant, which allows varying coefficients, are displayed in Table 5. Allowing the coefficients to change does not necessarily improve the out-of-sample forecasts once the variables are specific to the horizon.

In summary, the iterated forecasts of the models with variables specific to their equation horizons dominate the forecasts from a VAR model with the same variables at all horizons. I turn next to the principal components on the large data set.

6. Application to the Large Data Set

In this section, I forecast a single time series, US real GDP, by selecting principal components specific to the horizon and investigate whether the principal components selected by the SBIC change as the time horizon changes. The empirical evidence appears to favor the hypothesis that different principal components should be used to construct forecasts at different time horizons. Traditional methods, which postulate that the principal components are the same for all time horizons, are therefore not appropriate.

Table 5. The out-of-sample MSEs of the selected models with varying coefficients (1983Q1-1995Q2).

Moreover, comparing the optimal principal components across time horizons yields a result that a large body of literature has overlooked: contrary to the assumption that the principal components from the front are more important for forecasting than the others, the principal components with large variances from the front are not necessarily more useful than the principal components with small variances as the time horizon extends.

The use of big data and large numbers of variables, both to summarize the information in many economic time series and to capture concepts and characteristics of economic activity such as demand and supply, has become an important subject in macroeconomic applications in recent years. One key issue is which, and how many, principal components to include in the model. Bai and Ng (2002) [18] determine the number of principal components by model selection. Interestingly, when I use the simulated data, Bai and Ng's method delivers the right number of principal components. Nevertheless, when I use my data or other real macroeconomic data, the Akaike Information Criterion (AIC) obtained from their method traces a curve with no global minimum: the AICs go up and down, and I can only choose local minima. This in fact implies that the principal components at the end are also important. In addition, Preacher et al. (2013) [19] note that the common factor model cannot perfectly describe the population factor structure, so there does not exist a correct, finite number of principal components.

Bernanke et al. (2004) [8] construct a FAVAR model to extract common factors from a large number of variables. They choose the number of principal components and use the first several principal components to make multi-step-ahead forecasts at all time horizons. Their methodology thus assumes that the principal components are the same for all time horizons and that the leading principal components forecast better than the others, which may not hold as the time horizon changes.

6.1. Methodology

This paper uses 190 different time series to calculate the principal components and then selects the optimal principal components, one by one, to forecast a single time series using the lowest SBIC specific to the horizon. I check whether the model that allows different principal components at each horizon has a lower out-of-sample MSE than the model that does not, and whether the principal components at the end have lower out-of-sample MSEs than those at the front once the horizon is allowed to change.

My forecasting can be carried out in two steps. First, the principal components are estimated from the original data; second, the principal components are selected to make forecasts specific to horizons.

To be specific, following the notation for the principal components in Bai and Ng (2002) [18], let $\underline{X}_i$ be a $T \times 1$ vector for the $i$th cross-section unit. For a given $i$, I have

$$\underline{X}_i = F^0 \lambda_i^0 + \underline{e}_i \quad (6)$$

where $\underline{X}_i = (X_{i1}, X_{i2}, \ldots, X_{iT})'$, $F^0 = (F_1^0, F_2^0, \ldots, F_T^0)'$, and $\underline{e}_i = (e_{i1}, e_{i2}, \ldots, e_{iT})'$. In Equation (6), the $X$ are transformed to be stationary and standardized, so the transformed data do not have high correlations and all have the same variance. $F_t^0$ and $\lambda_i^0$ denote the principal components and the loadings, respectively. Following Bai and Ng (2002) [18], if $T > N$, the loadings are $\bar{\Lambda} = \sqrt{N} \times \mathrm{eigenvectors}(X'X)$ and the principal components are $\bar{F} = X\bar{\Lambda}/N$, corresponding to the eigenvalues of the $N \times N$ matrix $X'X$ sorted from largest to smallest. The loading is the correlation between an original variable and a principal component, and the largest loading is the key to understanding the underlying nature of the factor. I first calculate all principal components $F_t = (PC_{1t}, PC_{2t}, \ldots, PC_{Nt})$; the first principal component PC1 corresponds to the largest eigenvalue of $X'X$ at each time horizon. Stock and Watson (2002) [20] show that principal components estimated from candidate predictor series are consistent even in the presence of time variation in the factor model. This article assumes that the principal components and the loadings do not change when the time horizon changes.
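Under the $T > N$ convention above, the loadings and principal components can be computed with a few lines of linear algebra (a sketch, where X is the $T \times N$ matrix of stationary, standardized series):

```python
import numpy as np

def principal_components(X):
    # Bai and Ng (2002) estimator for T > N: eigendecompose the N x N
    # matrix X'X, sort eigenvalues from largest to smallest, and set
    # Lambda-bar = sqrt(N) * eigenvectors and F-bar = X Lambda-bar / N.
    T, N = X.shape
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # largest first -> PC1
    loadings = np.sqrt(N) * eigvecs[:, order]
    factors = X @ loadings / N
    return factors, loadings
```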

Since my approach was fully described above, here I provide only a brief recap. Stock and Watson (1998) [21] use a criterion that minimizes the mean squared forecast error of the forecasting variable to determine the number of principal components, whereas I use the lowest SBIC to determine the lag length and the principal components at each time horizon. This is analogous to the local projection approach of Jordà (2005) [22], which computes long-horizon impulse responses by adding h time periods on the left-hand side of the traditional VAR model. He suggests changing the time horizon and estimating the local projection at each period directly, but he does not change the model as the time horizon changes. I add h time periods on the left-hand side of the model and reselect the principal components used to make forecasts at each time horizon.

The regression takes the form:

$$y_{t+s} = \alpha^{s} + \sum_{i=1}^{P} \gamma_i^{s+1} y_{t-i} + \sum_{i=1}^{P} \beta_i^{s+1} PC_{t-i} + \varepsilon_{t+s}^{s}, \quad s = 0, 1, 2, \ldots, h \quad (7)$$

where the $\beta_i^{s+1}$ are the matrices of the principal components' coefficients for each lag $i$ and horizon $s+1$, $P$ is the number of lags with $P \le 12$, and $PC_{t-i}$ is the vector of principal components for each lag $i$ and horizon $s+1$. The lag lengths for all principal components in the model are the same.

In Equation (7), I replace the variables $X_{t-i}$ in Equation (1) with the principal components $PC_{t-i}$, and I select the best principal components one by one by the lowest SBIC from the 190 principal components.

I use a parsimonious method to select the principal components. The advantage of this approach is that it selects principal components from among all of them rather than only from the first few. The results may differ from those of the all-combinations search in Section 3, but the method suffices to demonstrate that the selected principal components change as the time horizon changes.

Because the number of combinations is too large, I employ a parsimonious method to select the principal components with the lowest SBIC at each time horizon. For instance, suppose we have $m$ variables $x_1, x_2, \ldots, x_m$ and employ Equation (6) to calculate $m$ principal components $pc_1, pc_2, \ldots, pc_m$; the principal components and loadings do not change as the horizon changes. To select the principal components and lag length for the h-step-ahead forecasting equation, I follow the steps below:

1) In step one, for the h-step-ahead forecasting model with one principal component in Equation (7):

a) First, I regress the h-step-ahead value $y_{t+h}$ on its past values and the past values of $pc_1$, with lag lengths from one to twelve, to obtain twelve SBICs.

b) Second, I use the principal component $pc_2$ in Equation (7), again with lag lengths from one to twelve, to obtain twelve more SBICs, and so on; the final SBICs with only one principal component come from regressions of $y_{t+h}$ on its past values and the past values of the last principal component $pc_m$, with lag lengths from one to twelve. The total number of SBICs with only one principal component is thus $12m$.

c) Third, I select the optimal principal component $\overline{pc}_1$ with the lowest $\overline{SBIC}_1$ from the results for these $m$ principal components.

2) In step two, for the h-step-ahead forecasting model, I apply the same procedure to the remaining $m-1$ principal components to select the second most favorable one.

a) First, I regress the h-period-ahead value $y_{t+h}$ on its past value $y_{t-1}$, the selected principal component $\overline{pc}_{1,t-1}$, and the past value of the first of the remaining $m-1$ principal components, with the lag length equal to one, and estimate this equation to calculate the SBIC. Next, keeping the same remaining principal component, I change the lag length $P$ from one to two and finally to twelve, obtaining twelve SBICs in total.

b) Second, I substitute the second of the remaining principal components for the first and obtain another twelve SBICs. After calculating the $12(m-1)$ SBICs for all $m-1$ remaining principal components, I choose the second most favorable principal component $\overline{pc}_2$ with the lowest $\overline{SBIC}_2$.

3) Continuing in this way to step m, with only one principal component remaining, I regress $y_{t+h}$ on its past values, the past values of the selected principal components $\overline{pc}_1, \overline{pc}_2, \ldots, \overline{pc}_{m-1}$, and the past values of the remaining principal component, with lag lengths from one to twelve, and record the lowest $\overline{SBIC}_m$.

Lastly, I choose the set of principal components, and how many of them to use, by the lowest $\overline{SBIC}$ across the m steps. The chosen principal components are reported in the order in which they were selected.
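The greedy search can be sketched as below, reusing the hypothetical sbic and build_regressors helpers from Section 3, with the candidate list now holding principal components:

```python
import numpy as np

def greedy_pc_selection(y, pcs, h, max_lag=12):
    # Forward selection: at each step, add the remaining principal component
    # (and the lag length) that yields the lowest SBIC; the final model is
    # the best SBIC attained across all m steps.
    remaining = list(range(len(pcs)))
    selected = []
    best_overall = (np.inf, [], None)            # (SBIC, ordered PCs, p)
    while remaining:
        step_best = (np.inf, None, None)
        for j in remaining:
            for p in range(1, max_lag + 1):
                lhs, rhs = build_regressors(y, pcs, selected + [j], p, h)
                crit = sbic(lhs, rhs)
                if crit < step_best[0]:
                    step_best = (crit, j, p)
        selected.append(step_best[1])
        remaining.remove(step_best[1])
        if step_best[0] < best_overall[0]:
            best_overall = (step_best[0], list(selected), step_best[2])
    return best_overall
```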

6.2. Data

The data are 190 quarterly time series for the US from 1965Q2 to 2013Q1. Series 1-113 are standard gauges of economic activity in empirical studies (e.g., Marcellino et al., 2006) [23]. Series 114-172 come from balance sheet data for different sectors. Baker et al. (2005) [24] argue that past asset performance is the best guide to future returns, and they explore how rates of return on assets are causally connected with rates of economic growth; for this reason, I include series from asset and liability accounts in the forecasts of GDP. The data sources include the Federal Reserve Economic Database, the Board of Governors of the Federal Reserve System, the Bureau of Economic Analysis, the US Bureau of Labor Statistics, Yahoo Finance, and John Fernald's web page. More details are provided in the Appendix.

Before proceeding with the analysis, I check the stationarity of the variables with an augmented Dickey-Fuller (ADF) test and transform all variables to be stationary. A summary of these transformations and sources is given for each series in the Appendix. After these transformations, all series are further standardized to have sample mean zero and unit sample variance, so the transformed data are stationary and do not have high correlations. Table 6 reports the principal components and the corresponding variables with the highest component loadings; different principal components load mainly on different variables.

6.3. Empirical Results

I use data from 1965Q2 to 2013Q1 to select principal components and make in-sample forecasts and out-of-sample forecasts after 2013Q1.

Table 6. Principal components and the corresponding variables.

Table 7. Selected principal components by SBIC (1965Q2-2013Q1).

Table 7 presents the selected variables and lag lengths for the one-step-ahead and two-step-ahead equations. Consistent with expectations, the empirical evidence appears to favor the hypothesis that different principal components are selected to make forecasts at different time horizons. It also indicates that many factors, rather than a few, should be modeled to predict GDP. It is also noteworthy that the first principal component does not necessarily yield forecasts superior to those of other principal components.

To summarize, my analysis does not support the common assumption that the first few principal components can be viewed as the essential factors contributing to US real GDP. More principal components, as expected, may be viewed as predetermined factors, which may even be the tip of the iceberg representing the intermediate results of crucial factors. This is reasonable because an economy that depended on only a single shock would be fragile.

Moreover, the FAVAR model is considered to include information on all the variables using a few factors. My results suggest that since some of the principal components may be more important at some horizons than at others, we have to select the principal components in a FAVAR model specific to the horizon.

I use the data from 1965Q2 to 2013Q1 to select variables with the lowest SBIC specific to the horizon and to make in-sample and out-of-sample direct forecasts. I then use the out-of-sample MSE from 2013Q1 to 2015Q3 as the criterion to select the long-horizon forecasting model. Figure 1 plots the in-sample and out-of-sample direct forecasts from my 49-step-ahead quarterly model, which has the lowest out-of-sample MSE over 2013Q1 to 2015Q3. All of my models at the various horizons predict real GDP with low in-sample MSEs. The policy intuition behind this graph is that it may take more than ten years for a recession to form. The result in Figure 1 suggests that the fundamental factors that affect the US economy over long horizons should be invested in over the next few years to prepare for a potential economic downturn around 2022. However, this figure indicates only a possibility and needs to be studied further.

Figure 1. Forecasting the real GDP by one of my multi-step-ahead models.

6.4. Statistical Evidence for the Long-Horizon Causes of Recessions

I now change the criterion to the out-of-sample mean squared error (MSE). To illustrate, to calculate the out-of-sample MSE over the period 2000Q1 to 2013Q1, I run the regression from the beginning of the available data to 1999Q4 to obtain the parameters and use past values to forecast $y_{2000Q1}$. I then run a regression from the beginning of the available data to 2000Q1 to obtain new parameters and use the values of the regressors to forecast $y_{2000Q2}$. All parameters, prediction errors, and so on are reestimated in this way until the final out-of-sample forecast is made for $y_{2013Q1}$. Following Diebold (2015) [14], we may use the out-of-sample MSE to select the long-horizon causes of recessions.
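A sketch of this expanding-window calculation for one equation (lhs and rhs are assumed to be aligned as in Equation (7), with the constant already included in rhs):

```python
import numpy as np

def expanding_window_mse(lhs, rhs, first_oos):
    # At each out-of-sample date t, reestimate by OLS on all observations
    # through t-1 and predict observation t; return the MSE of the errors.
    errors = []
    for t in range(first_oos, len(lhs)):
        beta, *_ = np.linalg.lstsq(rhs[:t], lhs[:t], rcond=None)
        errors.append(lhs[t] - rhs[t] @ beta)
    return np.mean(np.square(errors))
```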

I first separate the 190 quarterly time series mentioned above into 8 main groups of macroeconomic time series: production group, cost group, unemployment group, monetary policy group, household and nonprofit organization group, nonfinancial business group, government group, and export and import group. More data details can be found in the Appendix. I use the variables in each group to construct the principal components.

The first three groups contain supply-side variables. The production group reflects production information, including industrial production indexes for different industries, housing starts in different geographic areas, new orders, inventories, and corporate profits by industry. The cost group describes the cost side of production, including oil price shocks, technology shocks, working hours, labor costs, producer price indexes, production costs, and stock market data; technology shocks are included here because I assume a new technology can decrease production costs. The unemployment group provides employment and unemployment information, including employment by sector, unemployment by duration and age group, and the civilian labor force.

The remaining five groups contain demand-side variables. The monetary policy group includes interest rates, the money base and velocity, consumer price indexes for all urban consumers in different sectors, personal income, and personal consumption expenditures by sector; personal income and personal consumption expenditures are included because they influence the economy on the demand side, as does the interest rate. I also include the financial asset and liability series for households and nonprofit organizations, nonfinancial businesses, and state, local, and federal governments, which have been ignored in the literature. The export and import group includes export and import information for different sectors. Since these groups correspond to different fields of economics, I compare the prediction performance of variables across groups only in general terms.

Figure 2 compares the out-of-sample forecast errors of the SPF forecasts and my results. The SPF forecasts are the median one-year-ahead real GDP forecasts from the Survey of Professional Forecasters (SPF) of the Philadelphia Federal Reserve Bank. I take the first difference of the natural logarithm of the SPF series to calculate its forecast errors relative to real GDP. I then use the out-of-sample MSE over the period 2000Q1 to 2013Q1 as the criterion to select the principal components at different horizons by Equation (7). I compare the SPF forecast errors with the forecast errors of the variables selected from the production group in a 19-step-ahead model (solid line), the government group in an 18-step-ahead model (dashed line), and the monetary policy group in a 19-step-ahead model (long dashed line). In the circled areas around 2004 and 2009, my forecast errors are much lower than the SPF results. Based on Figure 2, my approach, which selects variables through long-horizon models, performs better than the SPF forecasts, especially during recessions. This implies that the SPF method may fail to identify the key variables for forecasting recessions through the right horizons, and that the real causes of recessions may be the variables selected by the multiple-step-ahead models.

Figure 2. Comparison of the out-of-sample forecast errors of the SPF from the Fed and my models.

Figure 3 illustrates the out-of-sample MSEs of all groups for horizons from 1 to 80. For every group except the unemployment group and the nonfinancial business group, the out-of-sample MSE of the one-step-ahead model is not the lowest compared with the multiple-step-ahead models. This implies that the one-step-ahead model may not always be the best choice for prediction.

The out-of-sample MSEs of the monetary policy group, the cost group, and the production group fall below $2.5 \times 10^{-5}$. For the monetary policy group, consistent with Friedman (1961) [9], I find that monetary policy may affect the economy only after a lag of around 5 years. Bernanke et al. (1997) [25] note that each of the first four recessions in their Figure 1 was immediately preceded by a period of tight monetary policy. On my evidence, tight monetary policy may be just the last straw that breaks the camel's back: the low federal funds rate did not save the US economy from the Great Recession of 2007-2009. Likewise, expansionary monetary policy may be correlated with recessions, given its lowest out-of-sample MSE. Finally, the variables in the nonfinancial business group can predict recessions one step ahead; their real-time data can help the government make short-horizon decisions.

Next, I sort the same 190 data series from 1965Q2 to 2013Q1 into 43 small groups. These classifications are meant only to simplify the interpretation of the forecasting equations by broadly grouping variables by economic activity. I select the variables from all combinations within each group to forecast the inflation rate for all items with the lowest MSE at each horizon by Equation (1). I calculate the lowest out-of-sample MSE during the periods 1990Q1 to 2007Q3 and 2000Q1 to 2013Q1 for each group and each horizon, and then compare the out-of-sample MSE trends for the two periods.

Figure 3. The out-of-sample MSEs of the multiple-step-ahead models for each group.

Figure 4 compares the lowest out-of-sample MSE trends of the 43 groups at each horizon for the two periods above. In particular, for the out-of-sample period 2000Q1 to 2013Q1, which contains recessions, the MSE of the one-step-ahead model for each group is not the lowest relative to other horizons. The results imply that the Federal Reserve may not be able to use real-time data in a one-step-ahead model to forecast the inflation rate when a recession is coming, even though real-time data may forecast the inflation rate with low errors when the economy is not in a recession.

Figure 4. The out-of-sample MSE trends of multiple-step-ahead models for small groups: CPI.

7. Conclusions

This paper constructs a novel framework that provides a systematic way to employ different variables specific to the horizon, using fewer coefficients than the VAR model. My methodology is specifically designed to select deep variables for VAR models, variables whose importance appears only over long horizons. The application is theoretically grounded and outperforms the VAR model in out-of-sample forecasting. I also set the sample mean as a benchmark for judging the forecasting performance of VAR models and investigate whether the sample mean outperforms traditional VAR models with more than one lag in out-of-sample forecasting.

The results also indicate that we should reselect principal components as the time horizon changes: the principal components from the front do not necessarily forecast better than those at the end. Additionally, the selection results are examined to see whether the same problem plagues FAVAR models; based on my results, I suggest extending the FAVAR model to incorporate my approach.

Since some historical data can forecast US GDP during recessions well out of sample through long horizons, whereas traditional one-step-ahead models cannot, I conclude that long-term economic problems cannot be fixed with short-term interventions. In other words, some variables that play small roles from a short-run perspective may affect the economy strongly over a long horizon, which indicates that they may be the key to fixing recessions. I argue that the causes of recessions obtained from traditional economic models may merely be the last in a series of causes and that the real issues remain concealed in the error term.

The deficiency of my approach is that the way I select data and build models is relatively arbitrary compared with theoretical papers. My selection results are substantially more agnostic about the modeled variables than traditional models, as they let the data select the variables automatically, which is not common in conventional variable selection. I only want to offer possibilities from a new perspective rather than explain them. In other words, this paper uses statistical methods to analyze the economy at a surface level.

The forecasting models selected by my approach may only be suited to forecasting time series during recessions, and I suggest that the selected results be interpreted cautiously. In particular, I suspect that making the data stationary may tilt the economic models toward short horizons and affect the variable selection results. This hypothesis needs further study before my approach can be developed further or compared with alternative methods, which is beyond the scope of this paper.

The practical social significance of this paper and some thoughts for future research are as follows:

First, the reason we study economics determines the long-run direction of economic theories: do we build up people by creating more wealth, or do we increase wealth by sacrificing people's lives and future generations? These two attitudes look similar but produce totally different results over different horizons. For example, all people have the same value, but their market prices differ; under the second attitude, some laborers' real wages may even be lower than machines' or negative. The former attitude does not mean free welfare for everyone without truth, and the latter may create fake prosperity over short horizons but ends up somewhere we never want to be.

Second, the complexity and paradoxes of economics may be attributed to the human world combining different things: sense and emotion, spirit and body, and so on. The short-horizon needs of the human body may draw us to the short-horizon economic world. When our spirit is also attracted by things in the short-horizon world, the space of our spirit may be twisted: we may still think in a straightforward and logical manner, but in a twisted space. For instance, the mind of a money-grubber may be twisted by the attraction of money, and his body may work to earn it beyond the normal range of human beings. It is easier for people to evaluate and focus on the obvious short-horizon prize than on long-horizon value, and to substitute short-horizon fixes for the real solutions that take more effort. The problem is that the world may not be able to provide enough short-horizon solutions to meet unlimited desires, since the real problems persist without real solutions. Chasing short-horizon benefits and solutions may cause recessions in the long run.

Third, by twisting the long-horizon factors that seem unimportant over short horizons, there is always a dramatic structure that can change outcomes indirectly by twisting the background. For instance, the financial crisis happened without warning because business operations and other economic indicators looked much as before while the long-horizon factors were gradually twisting the background of the economy. Some inappropriate economic behaviors had no significant devastating results over short horizons and so were easily ignored by supervisors.

For my future studies, I will try to identify the factors that mislead people into making inappropriate short-horizon decisions without considering the long-term costs. Given the above arguments, we may need long-term methods to prepare for recessions even if those methods have no significant effects over short horizons. Moreover, if we fix the long-horizon problems first, the effects of the short-horizon problems may decrease automatically. The key is to identify the substantive problems through their dynamic effects, by considering the fundamentals of the economy over different horizons.

The academic significance of this paper is that my assumption may explain the plausible yet contradictory results of some economic puzzles. For example, the instability of the empirical relation between the oil price and output has provoked a debate over whether the oil-price-GDP relationship still exists. Under my assumption, the oil price, which has been recognized as an important factor in recessions, may only be the coup de grâce. It is not surprising that existing methods, which focus on the crucial variables selected at a particular time horizon, cannot document the whole structure of the relationships across all time horizons. We cannot include all variables and their lags in a model, which implies that the error term is correlated with the model's variables through different horizons, and the variables inside the model can absorb the contributions of the omitted variables into their coefficients. Even adding lags may not save the model from biased coefficients. It is possible that the estimated coefficients of some variables in the model are merely projections of the effects of external variables, and that their magnitudes will change as the combined effect of those outside variables changes. Some variables may retain the capacity to link with external variables no matter how the background of the economy changes, provided they can induce enough omitted variables to follow a certain trend, so that they play an influential and significant role in explaining movements in the economy.
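This projection logic can be demonstrated with a minimal simulation (a hypothetical Python sketch of classical omitted-variable bias, not the simulation reported in this paper). A variable x2 is left out of the regression; because it co-moves with the included variable x1, the estimated coefficient on x1 absorbs x2's effect, and the estimate shifts as the co-movement parameter delta changes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def ols_slope(x, y):
    # Slope from a univariate OLS regression of y on x (with intercept).
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

beta1, beta2 = 1.0, 0.5          # true coefficients of x1 and x2
for delta in (0.0, 1.0, 2.0):    # co-movement of the omitted x2 with x1
    x1 = rng.normal(size=n)
    x2 = delta * x1 + rng.normal(size=n)   # x2 is omitted from the model
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    # Regressing y on x1 alone, the slope converges to beta1 + beta2*delta:
    print(f"delta={delta:.0f}: estimated b1 = {ols_slope(x1, y):.3f}, "
          f"plim = {beta1 + beta2 * delta:.3f}")
```

Here the estimated slope converges to β1 + β2δ, so the "coefficient of x1" is partly a projection of x2's effect; if δ drifts over time, or differs across horizons, the apparent relationship drifts with it even though the true β1 never changes.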

Overall, conclusions inferred from the biased coefficients of traditional models are by no means settled. Some dominant views in existing studies may themselves rest on models' biased coefficients and remain open to skepticism. It is also essential to emphasize that my findings are not limited to economics; they may help explain contradictory results in other scientific fields, since the basic assumptions of traditional methods may be wrong in fundamental ways. The simple simulation in this paper shows that, under my assumption, the results of science may be arguable, especially in economics.

Again, I must emphasize that this research does not aim to solve recessions but to shed light on the long-horizon journey of economics. Since the economy is complex, I do not know how much this new assumption will affect economic models; it is possible that it does not affect traditional models at all, but that question is beyond the scope of this paper.

Acknowledgements

I thank the Editor and the referee for their comments. I am grateful to Lance Bachmeier for his suggestions; he was instrumental in the development of my research. I also want to thank Lloyd Thomas for his comments. All errors are my sole responsibility.

NOTES

1. See Lv (2017) [3].

2. See Lv (2017) [15].

3. The data are provided by Martin Eichenbaum.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Box, G.E. (1979) Some Problems of Statistics and Everyday Life. Journal of the American Statistical Association, 74, 1-4.
https://doi.org/10.1080/01621459.1979.10481600
[2] Stock, J.H. and Watson, M.W. (1999) Forecasting Inflation. Journal of Monetary Economics, 44, 293-335.
https://doi.org/10.3386/w7023
[3] Lv, Y. Y. (2017) How Can the Error Term Be Correlated with the Explanatory Variables on the R.H.S. of a Model? Theoretical Economics Letters, 7, 448-453.
[4] Sims, C.A. (1980) Macroeconomics and Reality. Econometrica, 48, 1-48.
https://doi.org/10.2307/1912017
[5] Doan, T., Litterman, R.B. and Sims, C.A. (1984) Forecasting and Conditional Projection Using Realistic Prior Distribution. Econometric Review, 3, 1-100.
https://doi.org/10.1080/07474938408800053
[6] Litterman, R.B. (1986) A Statistical Approach to Economic Forecasting. Journal of Business and Economic Statistics, 4, 1-4.
[7] Stock, J.H. and Watson, M.W. (2005) Implications of Dynamic Factor Models for VAR Analysis. NBER Working Paper, No. W11467.
https://doi.org/10.3386/w11467
[8] Bernanke, B.S., Boivin, J. and Eliasz, P. (2004) Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach. NBER Working Paper, No. W10220.
https://doi.org/10.3386/w10220
[9] Friedman, M. (1961) The Lag in Effect of Monetary Policy. The Journal of Political Economy, 69, 447-466.
https://doi.org/10.1086/258537
[10] Blanchard, O.J. and Quah, D. (1988) The Dynamic Effects of Aggregate Demand and Supply Disturbances. NBER Working Paper, No. 2737.
[11] Kilian, L. (2009) Not All Oil Price Shocks Are Alike: Disentangling Demand and Supply Shocks in the Crude Oil Market. American Economic Review, 99, 1053-1069.
https://doi.org/10.1257/aer.99.3.1053
[12] Lippi, F. and Nobili, A. (2012) Oil and the Macroeconomy: A Quantitative Structural Analysis. Journal of the European Economic Association, 10, 1059-1083.
https://doi.org/10.1111/j.1542-4774.2012.01079.x
[13] Cassou, S.P. and Vázquez, J. (2014) Small-Scale New Keynesian Model Features That Can Reproduce Lead, Lag and Persistence Patterns. The B.E. Journal of Macroeconomics, 14, 267-300.
https://doi.org/10.1515/bejm-2012-0037
[14] Diebold, F.X. (2015) Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests. Journal of Business & Economic Statistics, 33, 1.
https://doi.org/10.1080/07350015.2014.983236
[15] Lv, Y. Y. (2017) Does the Biased Coefficient Problem Plague the VAR Model? Theoretical Economics Letters, 7, 454-463.
[16] Christiano, L.J., Eichenbaum, M. and Evans, C.L. (2005) Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy. Journal of Political Economy, 113, 1-45.
https://doi.org/10.1086/426038
[17] Stock, J.H. and Watson, M.W. (1996) Evidence on Structural Instability in Macroeconomic Time Series Relations. Journal of Business & Economic Statistics, 14, 11-30.
[18] Bai, J.S. and Ng, S. (2002) Determining the Number of Factors in Approximate Factor Models. Econometrica, 70, 191-221.
https://doi.org/10.1111/1468-0262.00273
[19] Preacher, K.J., Zhang, G., Kim, C. and Mels, G. (2013) Choosing the Optimal Number of Factors in Exploratory Factor Analysis: A Model Selection Perspective. Multivariate Behavioral Research, 48, 28-56.
https://doi.org/10.1080/00273171.2012.710386
[20] Stock, J.H. and Watson, M.W. (2002) Forecasting Using Principal Components from a Large Number of Predictors. Journal of the American Statistical Association, 97, 1167-1179.
https://doi.org/10.1198/016214502388618960
[21] Stock, J.H. and Watson, M.W. (1998) Diffusion Indexes. NBER Working Paper, No. W6702.
https://doi.org/10.3386/w6702
[22] Jordà, Ò. (2005) Estimation and Inference of Impulse Responses by Local Projections. American Economic Review, 95, 161-182.
[23] Marcellino, M., Stock, J.H. and Watson, M.W. (2006) A Comparison of Direct and Iterated Multistep AR Methods for Forecasting Macroeconomic Time Series. Journal of Econometrics, 135, 499-526.
[24] Baker, D., De Long, J.B. and Krugman, P.R. (2005) Asset Returns and Economic Growth. Brookings Papers on Economic Activity, 2005, 289-330.
https://doi.org/10.1353/eca.2005.0011
[25] Bernanke, B.S., Gertler, M., Watson, M., Sims, C.A. and Friedman, B.M. (1997) Systematic Monetary Policy and the Effects of Oil Price Shocks. Brookings Papers on Economic Activity, 1997, 91-157.
https://doi.org/10.2307/2534702
[26] Hamilton, J.D. (1996) This Is What Happened to the Oil Price—Macroeconomy Relationship. Journal of Monetary Economics, 38, 215-220.
[27] Basu, S., Fernald, J. and Kimball, M. (2006) Are Technology Improvements Contractionary? American Economic Review, 96, 1418-1448.
https://doi.org/10.1257/aer.96.5.1418
