Monetary Policy and Unemployment: A Study of Their Relationship in the United States

This paper investigates the relationship between monetary policy and unemployment in the United States from the first quarter of 1983 to the second quarter of 2018. The data are divided into two groups, a pre-crisis group and a post-crisis group, split at the 2008 global financial crisis. The paper uses an extended version of the original Taylor rule that incorporates the unemployment gap. The results suggest that in both periods the unemployment gap has a positive and consistently significant impact on the Fed interest rate. Accordingly, central banks should adopt an easy monetary policy to lift the domestic economy out of recession.


Introduction
Monetary policies have long been utilized by authorities and governments, since good monetary policy can be one of the most effective tools to stabilize consumer prices, increase economic output and, more importantly, create jobs for the public. When the number of unemployed people rises, the authority can expand credit and the money supply, which in turn stimulates investment demand from domestic companies and international capital. Consequently, as more companies are founded and production expands, demand for labour to fill the new vacancies grows rapidly, so more individuals can find a job without difficulty and earn a fixed monthly income. On the other hand, owing to increasing demand and greater capital inflows, the price of goods, as measured by the CPI (Consumer Price Index), subsequently rises, which may eventually push the unemployment rate up again. A relatively high unemployment rate is also a signal to the government that the domestic economy is at a trough and requires the central bank to inject more capital into the economic cycle. The ultimate aim of monetary policy is to stabilize the price of goods and keep the unemployment rate at a level the government can tolerate. In recent years, remarkable progress has been made in keeping the inflation rate and unemployment rate at acceptably low levels, thanks to the adoption of policy rules by independent central banks in developed countries. Taylor (1993) demonstrated that US monetary policy during the last two decades of the twentieth century could be explained by a specified rule.
A large number of later papers extended the original linear Taylor rule and argued for nonlinearities in the reaction function of central banks (Ball, 2000), attributed mainly to two causes: nonlinear macroeconomic linkages, and differing preferences or priorities among decision-makers (Castro, 2011; Taylor & Davradakis, 2006). Recent empirical studies provide evidence that governments respond to inflation and output gaps differently when nonlinear effects are present (Taylor & Davradakis, 2006; Castro, 2011; Martin & Milas, 2013). This paper uses an extended version of the Taylor rule to examine the relationship among several variables, namely the Federal funds rate, the Consumer Price Index, the unemployment rate gap and the production gap. Most data are collected quarterly, the shortest interval available from the online databases for the US. The layout of the paper is as follows: Section 2 reviews empirical studies; Section 3 introduces the methodology; Section 4 defines the variables, carries out the unit root tests and describes the statistics; Section 5 presents the results; and, last but not least, a conclusion summarizes the findings of the paper.

Literature Review
Since the start of the 1990s, many central banks have adopted an inflation targeting framework (Bernanke & Mishkin, 1997), which is regarded as beneficial in several respects. Its advantages include 1) increased independence of central banks; 2) improved accuracy in assessing the potential level of inflation; 3) better communication between policy makers and the public who follow those policies, owing to greater transparency; and 4) increased credibility of the monetary policies themselves (Bernanke & Mishkin, 1997; Svensson, 2000). As the study of monetary policy has developed, researchers have devoted growing attention to Taylor's (1993) rule, which has held their interest for more than a decade. Taylor (1993) suggested that Federal Reserve monetary policy could be described by an interest rate rule based on the deviations of output and inflation from their expected values (Orphanides, 2002). The adoption of this rule initially brought enormous improvements in performance in the US (Siegfried, 2000). Gerlach and Schnabel (1999) found that monetary policy in the Economic and Monetary Union area, as well as in the United Kingdom, could be well described by Taylor's rule, whereas the rule does not fit the Canadian economy well, as none of the Taylor-rule studies for Canada is robust enough to support such a conclusion. As Anderson (2009) argues, Taylor's rule is a linear algebraic interest rate rule that specifies how a central bank or reserve must adjust the federal funds rate in response to inflation and the output gap.
Svensson (2000) argued that central banks should announce and adjust to a less complex instrument rule, a concept earlier discussed by Judd (1998). There are many objections to Taylor's rule: a substantial number of papers, including Ball (2000), Svensson (2000) and Ko et al. (2011), criticize following it mechanically as undesirable. During the Asian financial crisis of 1997 and 1998 and the 2008 global financial crisis, the Federal Reserve reduced its interest rate dramatically, which supports the criticism made in the papers above.
Another example is that the Bank of England cut its interest rate by 4.5 percentage points, from 5 per cent to only 0.5 per cent, within a year of the 2008 global financial crisis, one of the largest reductions since the rate's creation in the late 17th century (Astley, 2009). Hence, to make the best use of Taylor's rule and adapt it to a country's own monetary policy and institutions, new transformations are required as soon as new information arrives (Woodford, 2001). As Martin & Milas (2013) indicated, the Bank of England relinquished its rule-based policy during the 2008 global financial crisis in order to achieve financial stability.
The new-Keynesian (NK) model has been a useful and effective framework for analyzing monetary policy because of the presence of nominal rigidities. Blanchard (2010) introduced labour market frictions similar to those of the Diamond-Mortensen-Pissarides (DMP) search model, simulating a more realistic labour market and thus better capturing the effects of productivity shocks on inflation and unemployment, as well as how these effects depend on federal policy and labour market frictions; the optimal monetary policy can then be derived. Blanchard and Galí (2007) also found that wage rigidities, labour market frictions and staggered price setting are three irreplaceable elements of the model if one wants to explain unemployment dynamics, the effects of productivity shocks, and the role monetary policy plays in shaping those effects. The central determinant in an economy is the degree of labour market tightness: a tighter market raises marginal cost and thereby induces inflation, which links unemployment to labour market tightness. In terms of inflation stabilization, the model does not deliver the optimal monetary policy, because labour market frictions and real wage rigidities mean that distortions change with shocks, as Blanchard and Galí (2007) suggest. Optimal monetary policy tolerates a certain amount of inflation while restricting unemployment fluctuations to a smaller range.
Gertler, Sala and Trigari (2008) used a DSGE (dynamic stochastic general equilibrium) model based on the new-Keynesian paradigm to investigate the propagation of shocks and the dynamics of inflation.
Under this framework, price rigidities build a link between real and nominal activity. Clarida et al. (1999) indicated that inflation dynamics are strongly connected to firms' marginal cost, represented by unit labour cost. Some papers assume a frictionless labour market, for example Clarida, Gali and Gertler (1999); by contrast, Faccini, Millard and Zanetti (2011) argued that research should be based on a labour market with frictions, for two main reasons: first, such a market is comprehensive and makes it easy for researchers to introduce unemployment determinants into the model; second, more empirical papers have introduced labour market frictions into their models to gain better fit and accuracy. Charpe and Kühn (2012), allowing for unemployment and staggered nominal wages, found a result contrasting with many existing empirical studies: the employment of existing workers remains efficient, which makes the model immune to the criticism of Barros et al. (2016) that models relying on wage rigidity ignore the mutual gains from trade between firms and workers with ongoing contracts. Significant evidence shows that a DSGE model with wage rigidity describes the data better than one with flexible wages. Faccini, Millard and Zanetti (2011) reached different findings: marginal cost depends on unit labour cost along with the frictional cost of job search, and wage rigidities versus flexible-wage specifications produce opposite responses in the frictional cost of employment and in unit labour cost.
Krause and Lubik (2003) reached similar conclusions to Faccini, Millard and Zanetti. In their study of the UK economy, which revealed key features of the British data, wage rigidities turn out to be irrelevant to inflation dynamics, a result that depends on the parameter estimates of the model.
Other issues raised in earlier empirical studies can also affect the research, such as delivering accurate estimates of expected output and the discrepancy between real-time and revised data (Orphanides, 2002). Wrong policy decisions can be made when the output-gap forecast is uncertain, through either over-forecasting or under-forecasting. The most commonly used filtering method is the Hodrick-Prescott (HP) filter, which has several vulnerabilities, such as limited accuracy and the risk of misspecifying the underlying economic structure, since the suggested smoothing value is specific to US data and may cause significant error in other countries. Besides, output varies more frequently in most emerging countries, where economic stability depends heavily on external factors, so fluctuations in expected output can be stronger (Levin et al., 1999). Moreover, the basic Taylor rule does not allow central banks to smooth interest rate movements, whereas a smoothing term in the reaction function can play a crucial role in gaining credibility and avoiding disruptions or noise from the financial market itself (Clarida et al., 1999).

Basic Taylor's Rule
The basic Taylor rule can be written as

i_t = r* + π_t + f_π(π_t − π*) + f_y y_t,   (1)

where i_t is the central bank's policy interest rate, r* is the equilibrium real interest rate, π_t is the one-year inflation rate, π* is the target inflation rate set by the central bank and y_t is the production gap. According to Taylor's rule, if the parameters are set as r* = 2%, π* = 2%, f_π = 0.5 and f_y = 0.5, the equation fits the US Fed rate between 1987 and 1992. Taylor's policy rule is designed so that the inflation gap and production gap are minimized through the policy makers' objective function. The expected value is determined by a quadratic loss function of the form

L = (π_t − π*)² + λ(y_t − y*)²,   (2)

where (y_t − y*) is the production gap, π_t is the real-time inflation rate at time t and π* is the target inflation rate. The rule stabilizes aggregate output and ensures that the real inflation rate fluctuates only slightly around the target rate. Two points about the policy objectives in Taylor's rule are worth mentioning. First, the output target level is not determined by the policy model: the assumption of a zero production gap refers to potential output, a point also supported by Rotemberg and Woodford (1997). In an imperfectly competitive goods market the economic system tends to become inefficient because of the fundamental market mechanism, yet government subsidies or direct support would have to remain high to keep production efficient. At the same time, the natural production level will be lower than the equilibrium production level, but the government may not stimulate the economy enough to raise output to the first-best level. The most important feature of Taylor's rule is that a country's monetary policy is not a device for evaluating the efficiency of gross production but an effective tool for keeping production volatility at a reasonable level.
In Taylor's rule a zero production gap is reasonable because such an output level fits the natural-rate hypothesis. The model should reflect the view that policy makers expect the economy to return automatically to the natural rate of unemployment. Meanwhile, there is no stable long-run relationship between inflation and the production gap, so in the long run policy makers need not worry about deviations of output from its natural level. Setting the output-gap target to zero therefore seems to reflect the real economic structure better.
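As a small numerical sketch of Equation (1), the rule can be evaluated directly with Taylor's illustrative parameter values r* = 2%, π* = 2%, f_π = f_y = 0.5 (the function name is ours, for illustration only):

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                f_pi=0.5, f_y=0.5):
    """Basic Taylor (1993) rule: i = r* + pi + f_pi*(pi - pi*) + f_y*y."""
    return r_star + inflation + f_pi * (inflation - pi_star) + f_y * output_gap

# With inflation at 3% and output 1% above potential:
# i = 2 + 3 + 0.5*(3 - 2) + 0.5*1 = 6.0%
print(taylor_rate(3.0, 1.0))  # -> 6.0
```

Note that when inflation sits exactly on target and the output gap is zero, the rule returns r* + π* = 4%, the neutral nominal rate.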

Evolution of Basic Taylor's Rule
Imposing restrictions on the parameters in Equation (1), let α = r* + π* and β_π = 1 + f_π. The rule can then be rewritten as

i_t = α + β_π(π_t − π*) + β_y y_t.   (3)

According to Equation (3), the central bank's policy rate i_t can be divided into three parts: the fixed real interest rate plus the target inflation rate, α; the deviation of current inflation from target inflation, β_π(π_t − π*); and the production gap term (with initial value 0), β_y y_t. We can then define a smoothing parameter ρ to describe the gradual adjustment of the policy rate by the central bank:

i_t = ρ i_{t−1} + (1 − ρ)[α + β_π(π_t − π*) + β_y y_t].   (4)

If ρ is relatively high and close to 1, the central bank adjusts the interest rate relatively slowly. Taylor (1993) does not give a formal econometric analysis of monetary policy; the paper uses Equation (1) to simulate the Fed rate between 1987 and 1992. Taylor (1998) adjusts Taylor (1993) by introducing Ordinary Least Squares (OLS) estimation over different sample periods in the U.S., applying

i_t = δ + β_π π_t + β_y y_t + ε_t,   (6)

where ε_t is the error term.
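The smoothing in Equation (4) can be sketched as a partial-adjustment recursion; the ρ, starting rate and target values below are illustrative, not estimates from the paper:

```python
def smoothed_path(targets, rho=0.8, i_prev=5.0):
    """Partial adjustment: i_t = rho * i_{t-1} + (1 - rho) * target_t."""
    path = []
    for target in targets:
        i_prev = rho * i_prev + (1 - rho) * target
        path.append(round(i_prev, 3))
    return path

# A high rho (close to 1) means the rate moves only slowly toward its target:
# starting at 5% with a constant 3% target, the rate closes the gap gradually.
print(smoothed_path([3.0, 3.0, 3.0]))  # -> [4.6, 4.28, 4.024]
```

Each quarter the rate covers only a fraction (1 − ρ) of the remaining distance to the target, which is the smoothing behaviour Clarida et al. (1999) argue helps central banks gain credibility.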

Analysis Framework
In this paper, our research is based on Taylor (1998)'s rule, extended by introducing the unemployment gap into Equation (6):

i_t = α + β_π(π_t − π*) + β_u(u_t − u*) + β_y(y_t − y*) + ε_t,   (7)

where (u_t − u*) represents the unemployment gap between the real unemployment rate u_t and the target unemployment rate u*. We first define the variables in Equation (7).

Potential Production Level
We apply the Hodrick-Prescott (HP) filter to separate the trend component from the cyclical component of aggregate production. Given a production level y_t at time t, we can write

y_t = y_t^T + y_t^C,

where y_t^T is the trend part and y_t^C is the cyclical part. The trend is obtained by solving

min Σ_t (y_t − y_t^T)² + λ Σ_t [(y_{t+1}^T − y_t^T) − (y_t^T − y_{t−1}^T)]²,   (10)

where λ is a parameter determining the tracking and smoothing degree of the trend part {y_t^T}. In Equation (10), λ is exogenous, and a higher λ yields a smoother trend {y_t^T}; as λ goes to infinity, {y_t^T} approaches a linear function. Since we use quarterly data in the empirical part, we set λ = 100 according to empirical values.
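The decomposition can be reproduced with statsmodels' HP filter. The series below is simulated purely for illustration; note that statsmodels' conventional smoothing value for quarterly data is lamb=1600, while this paper sets λ = 100:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Simulated quarterly log-output: a linear trend plus noise (illustrative only)
rng = np.random.default_rng(0)
y = 0.01 * np.arange(120) + 0.02 * rng.standard_normal(120)

cycle, trend = hpfilter(y, lamb=1600)  # hpfilter returns (cycle, trend)

# The decomposition is exact: y_t = y_t^T + y_t^C
print(np.allclose(y, trend + cycle))  # -> True
```

The cycle series is what this paper treats as the production gap; the trend series is the expected level of output.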

Theoretical Foundations for Stationary Analysis
For a stationary time series, numerical characteristics such as the expectation and variance are stable over time, so past information can be used effectively to predict the future. Economic variables such as GDP and the amount of savings held by banks, however, are commonly non-stationary: they appear random at any given point in time, so applying past information to them may not generate reliable and meaningful predictions. One effective way to deal with non-stationary variables is to difference them until they become integrated of a finite order, e.g. I(1). In time series analysis, "spurious regression" refers to the situation in which two non-stationary variables exhibit an apparent long-term mathematical relationship merely because both contain a time trend. Therefore, before performing cointegration analysis on time-series variables, a unit root test must be conducted on the original data to see whether each series is stationary. To check for a unit root, the Augmented Dickey-Fuller (ADF) method is applied. Assume a time series y_t with first-order autocorrelation,

y_t = ϕ y_{t−1} + ε_t.

Subtracting y_{t−1} from both sides gives

Δy_t = (ϕ − 1) y_{t−1} + ε_t.

A necessary and sufficient condition for y_t to be stationary is that ϕ − 1 is significantly different from zero (i.e. ϕ < 1). The null hypothesis of the test is that the series has a unit root.

Theoretical Foundations for Cointegration Analysis
In this paper, we employ cointegration analysis for our variables. Cointegration is a statistical description of a long-term equilibrium relationship between several non-stationary economic variables. This long-term stable relationship can exist between two variables, or pairwise among three or more variables, though the latter case is more complicated than the former.
In this paper, we will conduct the cointegration analysis on more than two variables. Based on the basic theory of stationarity and cointegration, we first run unit root tests on all variables to check whether they are stationary. If a variable is not stationary, we need to determine its order of differencing.
The premise of cointegration analysis is that all variables have the same order of integration; the cointegration relationships among them can then be stable. If the orders of integration differ, it is necessary to determine whether linear combinations of the variables are stationary.

Definition of Variables
In this paper, we investigate the impact of unemployment on the policy rate using a pre-crisis sample (1983-2007) and a post-crisis sample (2008-2018). Our model is based on Taylor (1998)'s rule, expanded to include the influence of unemployment. Specifically, the model is

i_t = α + β_π(π_t − π*) + β_u(u_t − u*) + β_y(y_t − y*) + ε_t.

Before conducting the time series analysis, we need to define our variables.

The Policy Interest Rate i_t
In this paper, we follow Taylor (1998)'s framework by choosing the Fed Rate, a market-determined rate, as our policy interest rate i_t. To match the sample range and the frequency of the other variables, we choose quarterly data ranging from 1983Q1 to 2018Q2. The Fed Rate is annualized and the unit is per cent (%).

The Inflation Gap (π_t − π*)
In the existing literature on Taylor's rule, indicators such as the Consumer Price Index (CPI), core CPI and the GDP deflator are usually used to measure inflation. In this paper, we apply the quarterly CPI deflator on a year-to-year basis as the measure of real inflation π_t. For the target inflation level, we follow Taylor's basic model by setting π* = 2% annually, or equivalently 0.5% quarterly.

The Unemployment Rate Gap (u_t − u*)
Only a limited number of papers discuss the natural unemployment rate of the United States, but there is a consensus that unemployment rose after 1992, peaking at 5.6% in 2002, and has fluctuated between 4.8% and 5.6% since 2000, remaining relatively stable. Compared with countries at a similar level of development, the US has relatively low unemployment. Based on existing research conclusions, we take the average of the listed unemployment rates, i.e. 4.5%, as the target rate. For the real unemployment rate, we use the quarterly unemployment rate disclosed by the U.S. Social Security Administration.

The Production Gap (y_t − y*)
There is no consensus on the measurement of aggregate production, since there is no unique method for computing potential output. Among the existing approaches, one uses basic macroeconomic theory to separate the structural and cyclical factors that affect the aggregate output level. A typical example is the production function method, which considers the combined impact of capital, labour and technological progress on output; its shortcoming, however, lies in the complicated computation process.
In this paper, we mainly adopt the Hodrick-Prescott filter, which decomposes a time series into components by minimizing the variance of its fluctuations. For aggregate output, we use real quarterly Gross Domestic Product (GDP) in billions of US dollars.

Data Preparation and Descriptive Statistics
In this paper, we use quarterly data between 1983Q1 and 2018Q2. Our data source is the Bloomberg database, and we use E-Views 8.0 for data processing.
We first apply the Hodrick-Prescott filter to decompose real GDP into two parts. As shown in Figure 1, the red line is the trend part, which tracks real GDP closely, and the green line is the cycle part. We take the trend part as the expected level of real output; the production gap is therefore defined as the cycle part, i.e. the green line in Figure 1.
The descriptive statistics for all variables are shown in Table 1. It is worth mentioning that the probabilities corresponding to the Jarque-Bera statistics for i_t, (π_t − π*) and (u_t − u*) are less than 0.05, so the null hypothesis of normality is rejected for these variables at the 95% confidence level. However, we cannot reject the null hypothesis that (y_t − y*) is normally distributed.

Unit Root Test
The results of the unit root tests using the Augmented Dickey-Fuller (ADF) method are shown in Table 2. According to Table 2, the probabilities corresponding to the ADF t-statistics are all smaller than 0.1, meaning that at the 90% confidence level none of the variables has a unit root in our sample range. Thus, we can use them directly in the time-series analysis.

Johansen Cointegration Test
Starting from this part, we divide our sample into two groups, i.e., the pre-crisis group and the post-crisis group. The pre-crisis group contains the data from 1983 to 2007 and the post-crisis group contains data from 2008 to 2018.
According to the empirical framework in the previous section, there may be more than one pair of long-term cointegrating relationships among our variables. Thus, it is necessary to conduct the Johansen cointegration test to determine the long-term cointegrating relationships among them.
The Johansen cointegration test starts from the null hypothesis that no cointegrating relationship exists among the variables. The trace statistic is then computed to check the significance of this null hypothesis. If it is rejected, we assume that at most one relationship exists and compute the trace statistic again, repeating the process until we fail to reject the hypothesis that there are at most N cointegrating relationships. We can then say that there are N cointegrating relationships among the variables.
According to Table 3, we fail to reject the hypothesis of at most two cointegrating relationships in the pre-crisis period, but only one in the post-crisis period.

Granger Causality Test
We conduct the Granger causality test in this part; the corresponding results are shown in Table 4. Table 4 shows two Granger-causal relationships in the pre-crisis period. First, i_t Granger-causes y_t, indicating that monetary policy effectively stimulated the economy of the United States before the 2008 financial crisis. Second, y_t Granger-causes u_t, indicating that the domestic economy has a lagged effect on unemployment conditions in the US.
In the post-crisis group there is only one Granger-causal relationship: y_t Granger-causes u_t. This result is consistent with the pre-crisis group and again indicates the lagged impact of the economy on unemployment in the U.S. Based on the preceding analysis, we conclude that monetary policy has a significant impact on unemployment in the pre-crisis group. The mechanism behind this relationship is that monetary policy first stimulates the domestic economy effectively; the economy then has a lagged effect on employment status in the U.S.

Regression Result and Conclusion
We apply the Ordinary Least Squares (OLS) method to the expanded Taylor (1998) function

i_t = α + β_π(π_t − π*) + β_u(u_t − u*) + β_y(y_t − y*) + ε_t,

and the regression results for the two groups are reported in Table 5.
The regression coefficients in the pre-crisis group can be interpreted as follows:
1) a 1% increase in (π_t − π*) raises i_t by 1.002%;
2) a 1% increase in (u_t − u*) raises i_t by 0.578%;
3) a 1% increase in (y_t − y*) raises i_t by 0.010%.
Similarly, for the post-crisis group:
1) a 1% increase in (π_t − π*) raises i_t by 0.033%;
2) a 1% increase in (u_t − u*) raises i_t by 0.119%;
3) a 1% increase in (y_t − y*) raises i_t by 0.003%.
According to Table 5, in both periods the unemployment gap has a positive impact on the Fed interest rate: the higher the unemployment, the higher the observed Fed Rate. Based on the Granger causality analysis in the previous part, economic development has a lagged impact on employment, and when the unemployment rate is high, domestic economic development may be slow.
This suggests that the Fed Rate can be decreased so as to adopt an easy monetary policy.
Meanwhile, the significance of this relationship has not decreased in the post-crisis group. Thus, in the post-crisis period the unemployment rate still reflects economic recession, and an easy monetary policy can significantly stimulate the domestic economy of the US.

Conclusion
This paper has examined monetary policy and unemployment variables, namely the Federal interest rate; the inflation gap, measured with the CPI deflator; the unemployment rate gap, based on the real unemployment rate disclosed by the United States Social Security Administration; and the production gap, based on real quarterly Gross Domestic Product in billions of US dollars. In conclusion, the results generated from the collected data show a significantly positive link between the unemployment gap and the Federal interest rate in both the pre-crisis and post-crisis periods for the US. In other words, the observed Federal interest rate is higher when the unemployment rate is higher, which slows the domestic economy and implies that the authorities should implement an easy monetary policy and decrease the Fed rate. Another finding is that the inflation gap affected the Federal interest rate strongly and positively before the financial crisis, but its effect dropped dramatically afterwards. The production gap has a relatively small connection with the Federal interest rate pre-crisis, which shrinks further post-crisis. The significance of the link between the unemployment rate and the Fed interest rate has not decreased since the 2008 global financial shock, indicating that the unemployment rate still reflects downturns in the domestic economy of the United States even after the crisis. Consequently, implementing an easy monetary policy could strongly stimulate an economy in recession and have positive impacts on the domestic economy of the US. To gain a broader and deeper understanding of the relationship between monetary policy and unemployment, future studies are recommended to carry out cross-country research and compare the differences between countries.
Countries could be chosen in specific groups, for example a group of developed countries or a group of emerging countries. Although there are many candidate countries, it can be hard to find data at shorter intervals, i.e. monthly or quarterly data, especially for emerging countries.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.