We investigate state-dependent effects of fiscal multipliers and allow for endogenous sample splitting to determine whether the U.S. economy is in a slack state. When the endogenized slack state is estimated as the period of the unemployment rate higher than about 12%, the estimated cumulative multipliers are significantly larger during slack periods than nonslack periods and are above unity. We also examine the possibility of time-varying regimes of slackness and find that our empirical results are robust under a more flexible framework. Our estimation results point out the importance of the heterogeneous effects of fiscal policy and shed light on the prospect of fiscal policy in response to economic shocks from the current COVID-19 pandemic. (JEL C32, E62, H20, H62)
I. INTRODUCTION
The debate over the role of fiscal policy during a recession has recently taken center stage again in macroeconomics. One particular topic that has received substantial attention is whether the multiplier effect of government spending is state-dependent. On the one hand, in a series of papers, Auerbach and Gorodnichenko (2012, 2013a, 2013b) used data from the United States as well as from the Organisation for Economic Co-operation and Development (OECD) countries and provided empirical evidence supporting the view that the fiscal multiplier might be larger during recessions than during expansions. On the other hand, Ramey and Zubairy (2018) constructed
new quarterly historical U.S. data and reported that their estimates of the fiscal multipliers were below unity irrespective of the state of the economy.

We would like to thank the Seoul National University Research Grant in 2020, the Social Sciences and Humanities Research Council of Canada (SSHRC-435-2018-0275), the European Research Council (ERC-2014-CoG-646917-ROMIA), and the UK Economic and Social Research Council (research grant ES/P008909/1 to CeMMAP) for financial support.

Lee: Professor, Department of Economics, Columbia University, New York, NY 10027; Research Staff, Institute for Fiscal Studies, London, WC1E 7AE.
Liao: Associate Professor, Department of Economics, Rutgers University, New Brunswick, NJ 08901.
Seo: Associate Professor, Department of Economics, Seoul National University, Seoul, 08826, Republic of Korea.
Shin: Associate Professor, Department of Economics, McMaster University, Hamilton, ON L8S 4L8, Canada.
In this paper, we contribute to this debate by estimating a threshold regression model that determines the states of the economy endogenously. Auerbach and Gorodnichenko (2012) estimated smooth regime-switching models using a seven-quarter moving average of the output growth rate as the threshold variable. Their primary results relied on a fixed level of intensity of regime switching. Instead of estimating the level of intensity jointly with the other parameters of their model, they calibrated it so that the U.S. economy spends about 20% of the time in a recessionary regime. In Ramey and Zubairy (2018), the baseline results assume that the U.S. economy is in a slack state if the unemployment rate is above 6.5%. To check the baseline results, Ramey and Zubairy (2018) conducted various robustness checks using different thresholds.
To be consistent with the empirical litera-
ture, we build on Ramey and Zubairy (2018):
we use their dataset and follow their methodol-
ogy closely. Our main departure from the recent
empirical literature is that we split the sample in a
data-dependent way so that the choice of thresh-
old level is determined endogenously. It turns out
that the endogenized threshold level of the unemployment rate is estimated at 11.97%, which is much higher than the 6.5% adopted in Ramey and Zubairy (2018).

ABBREVIATIONS
GDP: gross domestic product
MIO: mixed integer optimization

Economic Inquiry (ISSN 0095-2583), doi:10.1111/ecin.12919
© 2020 The Authors. Economic Inquiry published by Wiley Periodicals LLC on behalf of the Western Economic Association International. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Using this new threshold level
combined with the same data and specifications as in Ramey and Zubairy (2018), we find that the estimated fiscal multipliers are significantly different between the two states and are above unity for the high unemployment state. Specifically, if the threshold level is 6.5%, the estimates of the 2-year integral multipliers are around 0.6 regardless of the state of the economy. However, if the threshold level is 11.97%, the estimates are 1.58 for the high unemployment state and 0.55 for the low unemployment state, respectively. If we look at the observations used in estimation, there is no period after World War II with an unemployment rate higher than 11.97%. In fact, there is only one timespan of severe slack periods, in the 1930s. In other words, the period of the Great Depression is isolated from other periods as an outcome of our estimation procedure. Therefore, our estimation results suggest that (1) the fiscal multiplier can be larger than unity if the slackness of the economy is very severe and that (2) the post-World War II period does not include the severe slack state, and thus our estimates for the high unemployment state are not applicable to moderate recessions in the post-WWII period. However, after the outbreak of the COVID-19 pandemic, the U.S. unemployment rate rose to 14.7% in April 2020.¹ Therefore, the estimation results in this paper shed light on the prospect of fiscal policy in response to the current economic shocks. We also examine the possibility of time-varying regimes of slackness by including a time dummy for the post-WWII period and find that our empirical results are robust under this more flexible framework. All the computer codes and data files for replication are available at https://
The remainder of the paper is organized as
follows. In Section II, we describe the econo-
metric model and present empirical results. In
Section III, we give concluding remarks.
II. ECONOMETRIC MODEL AND EMPIRICAL RESULTS

In this section, we give a brief description of the methodology developed by Ramey and Zubairy (2018, RZ hereafter). They consider the state-dependent local projection method of Jordà (2005). Their baseline regression model for each horizon h has the following form (see equation (2) in RZ):

x_{t+h} = I_{t−1}[α_{A,h} + ψ_{A,h}(L) z_{t−1} + β_{A,h} shock_t] + (1 − I_{t−1})[α_{B,h} + ψ_{B,h}(L) z_{t−1} + β_{B,h} shock_t] + ε_{t+h},   (2.1)

where I_t(·) is a dummy variable denoting the state of the economy, x_t is the variable of interest, z_t is a vector of control variables including GDP, government spending, and lags of the defense news variable, ψ(L) is a polynomial of order 4 in the lag operator, and shock_t is the defense news variable.

1. Source: U.S. Bureau of Labor Statistics, https://, accessed on May 25, 2020.
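To fix ideas, the local projection principle behind (2.1) can be illustrated with a minimal single-regime simulation: for each horizon h, the outcome at t+h is regressed directly on the shock at t, so the slope estimates trace out the impulse response. The sketch below is illustrative only (simulated data and generic variable names, not the authors' replication code):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 400, 8
shock = rng.normal(size=T)

# Simulate an outcome whose true impulse response at horizon h is 0.9**h.
y = np.zeros(T)
for t in range(T):
    for h in range(min(t + 1, 40)):
        y[t] += 0.9 ** h * shock[t - h]
y += 0.1 * rng.normal(size=T)

# Local projection: for each horizon h, regress y_{t+h} directly on shock_t.
irf = []
for h in range(H):
    X = np.column_stack([np.ones(T - h), shock[: T - h]])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    irf.append(beta[1])
irf = np.array(irf)
print(np.round(irf, 2))
```

The estimated coefficients decline geometrically toward the true response 0.9^h; in the state-dependent version (2.1), the same regression is simply run with separate coefficients for each regime.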
Recall that RZ assume that the economy is in the slack state when the unemployment rate is above 6.5%. We instead adopt a threshold regression model and parameterize I_t = 1{unemp_t > τ}, where 1{·} is an indicator function and unemp_t denotes the unemployment rate. In other words, we estimate a model that endogenously determines the slack states that fit the data best. Specifically, we estimate the following model by least squares (see, e.g., Hansen 2000; Hidalgo, Lee, and Seo 2019):

GDP_t = I_{t−1}[α_A + ψ_A(L) z_{t−1} + β_A shock_t] + (1 − I_{t−1})[α_B + ψ_B(L) z_{t−1} + β_B shock_t] + ε_t,  I_{t−1} = 1{unemp_{t−1} > τ}.   (2.2)

To estimate the threshold regression model in (2.2), define the least squares objective function

Q_T(θ, τ) := Σ_t ( GDP_t − I_{t−1}[α_A + ψ_A(L) z_{t−1} + β_A shock_t] − (1 − I_{t−1})[α_B + ψ_B(L) z_{t−1} + β_B shock_t] )²,

where θ := (α_A, ψ_A(L), β_A, α_B, ψ_B(L), β_B). Note that the model (2.2) is linear in θ conditional on τ. Thus, we obtain the (restricted) ordinary least squares estimator θ̂(τ) easily for any given τ. Then, the threshold parameter τ can be estimated by minimizing the profiled objective function:

τ̂ := argmin_τ Q_T(θ̂(τ), τ).

To estimate the model, it is necessary to specify the parameter space for τ. We set it to be the interval between the 5th and 95th percentiles of the unemployment rates in the dataset and estimate τ̂ by the grid search method.
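The profiling step amounts to a few lines of computation: for each candidate τ on a grid, split the sample by the indicator, run OLS within each regime, and keep the τ with the smallest sum of squared residuals. A minimal simulation in Python (illustrative variable names and data, not the paper's code) recovers a known threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
unemp = rng.uniform(3, 15, size=T)   # threshold variable
z = rng.normal(size=T)
tau_true = 12.0
# Regime-dependent intercept and slope, as in a two-regime threshold model.
y = np.where(unemp > tau_true, 2.0 + 1.5 * z, 0.5 + 0.2 * z) + 0.3 * rng.normal(size=T)

def ssr(tau):
    """Profiled SSR: OLS within each regime for a given tau."""
    total = 0.0
    for regime in (unemp > tau, unemp <= tau):
        X = np.column_stack([np.ones(regime.sum()), z[regime]])
        beta, *_ = np.linalg.lstsq(X, y[regime], rcond=None)
        total += np.sum((y[regime] - X @ beta) ** 2)
    return total

# Grid between the 5th and 95th percentiles of the threshold variable.
grid = np.quantile(unemp, np.linspace(0.05, 0.95, 181))
tau_hat = grid[np.argmin([ssr(t) for t in grid])]
print(round(tau_hat, 2))
```

Restricting the grid to interior quantiles guarantees that both regimes contain enough observations for OLS, mirroring the 5th–95th percentile parameter space used in the paper.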
In our view, the threshold regression model above provides a natural way to endogenize the level of slackness, since there is a change point at τ for GDP in the model. Note that the level of slackness is determined endogenously by fitting the regression model for GDP in (2.2), and it is then imposed in the specification of I_{t−1} in (2.1). Considering that both RZ and Auerbach and Gorodnichenko (2012, 2013a, 2013b) determine the criterion for economic slackness at the researchers' discretion, it is novel to determine the threshold point endogenously. Furthermore, as we will see in the next section, the endogenous threshold estimate is beyond the range of values that RZ considered for a robustness check.

In general, estimating the change point τ tends to be robust to model misspecification. Specifically, in our context, the local projection argument may imply that the model (2.2) is potentially misspecified; however, it is worth emphasizing that change-point estimation tends to be robust against mild misspecification of the regression function employed in each regime, as shown, for example, by Bai et al. (2008).
Before looking at the estimation results, we briefly describe the dataset adopted in our empirical analysis. RZ constructed new quarterly U.S. data from 1889 to 2015 for their analysis. The main variables include real GDP, real government spending, the unemployment rate, and the defense news series. The real GDP data come from the Historical Statistics of the United States for 1889–1928 and from the National Income and Product Accounts from 1929 to 2015. Real government spending is calculated by dividing all federal, state, and local purchases by the GDP deflator. The unemployment rates before 1948 were calculated by interpolating Weir's (1992) series and the NBER Macrohistory database. Finally, the defense news series is constructed by the narrative method of Ramey (2011), which measures changes in the expected present discounted value of government spending. For additional details of the dataset, we refer to Ramey and Zubairy (2018).
A. Endogenous Sample Splitting
Using the same dataset constructed by RZ, we obtain τ̂ = 11.97% for the threshold parameter. This estimate is even higher than 8%, which RZ used for their robustness check. To appreciate our estimation result, we plot the profiled least squares objective function (1 − R²) as a function of τ in the left panel of Figure 1. It can be seen that the minimizer is well separated at 11.97%, which gives graphical verification of τ̂. In contrast, there is not even a local minimum around RZ's threshold value of 6.5%. To check the possibility of a second threshold level below 11.97%, we re-estimated the model on the subsample for which the unemployment rate is lower than 11.97%. The right-hand panel indicates that there could be a second threshold around 4%, but not around 6.5%.
We test for the existence of the threshold for the whole sample and for the subsample with unemp < 11.97% by adopting the sup-Wald test of Hansen (1996). Figure 2 gives a graphical summary of the testing results. We set the number of bootstrap replications to 2,000 and the trimming ratio to 5%, and we use the heteroskedasticity-robust test statistic. The bootstrap p-value for the whole sample is 0.053, so we can reject the null hypothesis of no threshold effect at the 10% significance level. For the subsample with the unemployment rate below 11.97%, the bootstrap p-value for the same test is 20.3%. Thus, we conclude that there is mild evidence for a single threshold in the data. Finally, the 95% confidence interval for the threshold parameter is (11.97, 13.56).
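The logic of the sup-Wald test can be sketched compactly: compute a Wald-type statistic for "no threshold effect" at every candidate τ on a trimmed grid, take the supremum, and obtain a p-value by simulating the null distribution. The toy version below uses a homoskedastic LM/Wald-type statistic and a wild (Rademacher) bootstrap in the spirit of Hansen's (1996) fixed-regressor bootstrap; it is a stylized sketch, not the paper's heteroskedasticity-robust implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
q = rng.uniform(0, 1, size=T)           # threshold variable
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)  # data generated under the null: no threshold

def sup_wald(y, q, x):
    """Sup over tau of an LM/Wald-type statistic for delta = 0 in
    y = beta0 + beta1*x + (delta0 + delta1*x) * 1{q > tau} + e."""
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    ssr0 = np.sum((y - X0 @ b0) ** 2)
    stats = []
    for tau in np.quantile(q, np.linspace(0.05, 0.95, 30)):  # 5% trimming
        d = (q > tau).astype(float)
        X1 = np.column_stack([X0, X0 * d[:, None]])
        b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
        ssr1 = np.sum((y - X1 @ b1) ** 2)
        stats.append(n * (ssr0 - ssr1) / ssr1)
    return max(stats)

stat = sup_wald(y, q, x)

# Bootstrap the null distribution: resample residuals with random signs
# around the fitted no-threshold model.
X0 = np.column_stack([np.ones(T), x])
b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
resid = y - X0 @ b0
boot = [sup_wald(X0 @ b0 + resid * rng.choice([-1.0, 1.0], size=T), q, x)
        for _ in range(99)]
pval = np.mean([b >= stat for b in boot])
print(round(float(pval), 2))
```

The bootstrap is needed because τ is unidentified under the null, so the sup statistic has a nonstandard distribution; the paper uses 2,000 replications rather than the 99 used here for speed.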
Periods with high unemployment rates are relatively rare: the U.S. economy spent less than 10% of the time in the new slack regime defined by the 11.97% threshold. The shaded areas in Figure 3 show slack periods over GDP and unemployment rates. There is only one timespan of severe slack periods, from 1930Q3 to 1940Q3, namely the Great Depression. We call these new slack periods severe slack states ("hard times"), as compared to the moderate slack states in RZ. No period after WWII belongs to the hard times in this dataset. However, the current recession belongs to the hard times, as the unemployment rate rose to 14.7% in April 2020.
B. State-Dependent Cumulative Multipliers
We now report the estimation results for the cumulative multipliers under endogenous sample splitting. It turns out that the new regime classification produces quite different implications. Following RZ, we adopt the local projection method in Jordà (2005) and use the military news as an instrument. Figure 4 reports the cumulative multipliers over 5 years (20 quarters) in each regime. To make the comparison straightforward, we also show the estimation results of Ramey and Zubairy (2018) next to our results.

FIGURE 1. Least Squares Objective Function
Note: In the left-hand panel, the long-dashed vertical lines are the 5th and 95th percentiles of the empirical distribution of the unemployment rate. The dashed vertical lines are the 10th and 90th percentiles, and the dotted lines are the 15th and 85th percentiles.
When the 6.5% threshold is used in the classification of the slack state (i.e., the moderate slack state), the multipliers in the high-unemployment state are negative up to 3 quarters and are indistinguishable from those in the low-unemployment state after 6 quarters. It is counterintuitive to observe that the multipliers are higher for the low-unemployment state. On the other hand, if the 11.97% threshold is adopted (i.e., the severe slack state), the multipliers in the high-unemployment state are mostly positive, lie largely above those in the low-unemployment state, and are around unity after 10 quarters. In other words, the multipliers are all less than unity in the case of the moderate slack state; however, they are substantially higher in the case of the severe slack state. These results are robust to the choice of the instrumental variable. As additional empirical results, Figure 5 depicts the impulse response functions in nonslack and slack periods, respectively. Both government spending and GDP responses are much higher in slack periods.
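The cumulative multiplier at horizon h is the ratio of the integral (cumulative sum) of the GDP response to the integral of the government spending response up to h, which is how the impulse responses in Figure 5 map into the multipliers in Figure 4. Given two response paths, the conversion is a one-liner (the numbers below are illustrative, not the paper's estimates):

```python
import numpy as np

# Hypothetical impulse responses to a news shock equal to 1% of GDP.
gdp_irf = np.array([0.2, 0.5, 0.7, 0.8, 0.8, 0.7])
gov_irf = np.array([0.6, 0.7, 0.7, 0.6, 0.5, 0.4])

# Cumulative multiplier: ratio of cumulative sums up to each horizon.
cum_mult = np.cumsum(gdp_irf) / np.cumsum(gov_irf)
print(np.round(cum_mult, 2))
```

In this illustration the multiplier starts below one and rises above it as the GDP response outlasts the spending response, which is the qualitative pattern reported for the severe slack state.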
In Table 1, we report the 2-year and 4-year cumulative multipliers when we use the military news, the Blanchard and Perotti (2002) shock, and the combination of these two as instruments, respectively. The basic implication does not change. The estimates of the 2-year multiplier vary from 1.58 to 2.21, and the 4-year multipliers are around 1. The main implication of our empirical results is that fiscal multipliers can be significantly larger during severe recessions than in normal periods.
We illustrate the difference between our results and those in RZ by comparing the effects of the COVID-19 stimulus package. The COVID-19 pandemic and the ensuing economic lockdown increased the U.S. unemployment rate to 14.7% in April 2020, the highest unemployment rate since World War II. To mitigate the economic hardship, the U.S. Congress passed the COVID-19 stimulus package (the CARES Act), whose total amount is 2 trillion dollars. In Table 2, we report the difference in the estimated multiyear integral effects of the stimulus package when we use the multipliers in this paper and those in RZ. We assume that 25% of the total amount (500 billion dollars) will be spent in the immediate quarter and use the cumulative multiplier estimates based on the
military news shock. The two approaches provide quite different results for the policy effect.

FIGURE 2. Inference for Multiple Regimes
Note: The red dashed line denotes the 95% critical value for the existence of the threshold point. In the left panel, we confirm that the Wald test statistic at τ = 11.97 is very close to the 95% critical value. In the right panel, we use the subsample and test whether there exists an additional threshold point; the result confirms that there is no additional threshold point in the subsample.

FIGURE 3. Periods of Slack States over GDP and Unemployment
Note: GDP denotes real per capita GDP divided by trend GDP. The red dashed line in the right panel is the change-point estimate, τ̂ = 11.97. The blue shaded area denotes the slack states estimated from the data.

FIGURE 4. Cumulative Multipliers
Note: The blue solid line denotes cumulative multipliers for slack states (high unemployment) and the red dashed line those for nonslack states (low unemployment). The 95% pointwise confidence bands are also presented along with the cumulative multipliers. We also draw a dot-dashed horizontal line at multiplier = 1.

FIGURE 5. Government Spending and GDP Responses to News Shock
Note: A news shock is equal to 1% of GDP. The red line with circles denotes the impulse response function in nonslack periods, and the blue solid line denotes the same function in slack periods. The related 95% pointwise confidence bands are also provided. The threshold point dividing slack/nonslack periods is τ̂ = 11.97, estimated from the data.

Over 2 years, the difference between the two estimates
is 490 billion dollars. The gap decreases over
time but it is still 70 billion dollars after 5 years.
Therefore, we conclude that the endogenous threshold estimate yields quite different estimates of the fiscal policy effect, especially when the slackness of the economy is severe.
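The Table 2 entries follow from scaling the assumed immediate outlay of 500 billion dollars by the corresponding cumulative multiplier estimates: over 2 years, the severe-slack estimate implies 1.58 × 500 ≈ 790 billion, versus 0.60 × 500 = 300 billion under the RZ threshold. The arithmetic can be checked directly (multiplier values taken from Table 1, military news shock):

```python
spend = 500  # $ bn assumed to be spent in the immediate quarter

# Military-news cumulative multipliers from Table 1.
mult_severe = {2: 1.58, 4: 0.94}    # threshold at 11.97%
mult_moderate = {2: 0.60, 4: 0.68}  # threshold at 6.5% (Ramey and Zubairy)

gains = {}
for h in (2, 4):
    severe = mult_severe[h] * spend
    moderate = mult_moderate[h] * spend
    gains[h] = (severe, moderate, severe - moderate)
    print(f"{h}-year integral: {severe:.0f} vs {moderate:.0f}, difference {severe - moderate:.0f}")
```

The 2-year and 4-year differences of 490 and 130 billion dollars match the corresponding rows of Table 2.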
C. Possibly Time-Varying Regimes
In this subsection, we explore the possibility of time-varying regimes of slackness. One might worry that the U.S. economy changed after WWII, so that the level of slackness differs between the pre-WWII and post-WWII periods. To deal with this issue, we extend the endogenous sample splitting to the following specification:

I_{t−1} = 1{unemp_{t−1} > τ_0 + τ_1 d_{t−1}},

where d_t = 1 if t is greater than or equal to 1945Q4. The resulting regression model has the following form:

GDP_t = I_{t−1}[α_A + ψ_A(L) z_{t−1} + β_A shock_t] + (1 − I_{t−1})[α_B + ψ_B(L) z_{t−1} + β_B shock_t] + ε_t.
To estimate this model, we need to optimize the least squares objective function with respect to all unknown parameters jointly. The parameters could be estimated through the profiling method explained in Section II: one may first estimate the slope parameters for each candidate τ := (τ_0, τ_1) and then optimize the profiled objective function over τ by a two-dimensional grid search.
We adopt the more efficient computational algorithms developed in our previous work (Lee et al. 2018) with the aid of mixed integer optimization (MIO). To explain the algorithm, we first define some notation: y_t := GDP_t, f_t := (unemp_{t−1}, d_{t−1}, 1)′, and x_t := (1, z_{t−1}, shock_t)′. Then, the least squares estimator can be written as

(τ̂, θ̂_B, δ̂) := argmin_{τ, θ_B, δ} Σ_t ( y_t − x_t′θ_B − x_t′δ · 1{unemp_{t−1} > τ_0 + τ_1 d_{t−1}} )²,

where δ := θ_A − θ_B. Instead of a multidimensional grid search over τ, Lee et al. (2018) propose an equivalent optimization problem by introducing a set of binary parameters e_t := 1{unemp_{t−1} > τ_0 + τ_1 d_{t−1}} and auxiliary variables ℓ_{j,t} := δ_j e_t for j = 1, …, d_x, where d_x is the dimension of x_t. The new objective function can be written as

Σ_t ( y_t − x_t′θ_B − Σ_{j=1}^{d_x} x_{j,t} ℓ_{j,t} )².

The equivalent optimization problem becomes a mixed integer programming problem with some additional constraints. The new optimization problem can be solved efficiently by modern MIO solvers such as CPLEX and GUROBI. One can solve the optimization jointly or by iterating between (θ_B, δ) and the remaining parameters. The advantage of the new algorithm is that one can construct and estimate a model in which the regimes are determined in a more sophisticated way by a multidimensional factor f_t. We refer to Lee et al. (2018) for additional details.

TABLE 1
Estimates of Cumulative Multipliers

                          Slack state    Nonslack state   p-value for difference in multipliers
Panel A: Threshold at 11.97%
Military news shock
  2-year integral         1.58 (0.099)   0.55 (0.064)     0.000
  4-year integral         0.94 (0.017)   0.61 (0.050)     0.000
Blanchard–Perotti shock
  2-year integral         1.65 (0.425)   0.34 (0.105)     0.005
  4-year integral         1.23 (0.130)   0.40 (0.104)     0.000
Combined shocks
  2-year integral         2.21 (0.406)   0.35 (0.092)     0.000
  4-year integral         1.11 (0.108)   0.46 (0.086)     0.000
Panel B: Threshold at 6.5%
Military news shock
  2-year integral         0.60 (0.095)   0.59 (0.091)     0.954
  4-year integral         0.68 (0.052)   0.67 (0.121)     0.924
Blanchard–Perotti shock
  2-year integral         0.68 (0.102)   0.30 (0.111)     0.005
  4-year integral         0.77 (0.075)   0.35 (0.107)     0.001
Combined shocks
  2-year integral         0.62 (0.098)   0.33 (0.110)     0.099
  4-year integral         0.68 (0.052)   0.39 (0.110)     0.021

Note: Standard errors are in parentheses. The p-values for the difference in multipliers are calculated using the HAC-robust standard errors of Newey and West (1987). Panel A is based on our threshold estimate (11.97%). Panel B comes from Ramey and Zubairy (2018), where the threshold point (6.5%) was chosen by the authors. The "Combined shocks" rows use the military news and Blanchard–Perotti shocks jointly as instruments.
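For intuition about what MIO replaces, the brute-force two-dimensional profiling can be sketched directly. Under one plausible reading of the time-varying specification, the regime is 1{unemp_{t−1} > τ_0 + τ_1·d_{t−1}}; the toy simulation below (illustrative names and data, not the paper's code) profiles the SSR over a (τ_0, τ_1) grid:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 600
unemp = rng.uniform(3, 15, size=T)
d = (np.arange(T) > T // 2).astype(float)   # "post-war" period dummy
z = rng.normal(size=T)
# True regime: unemp > 12 + 1.0 * d (threshold shifts up in the later period).
regime = unemp > 12.0 + 1.0 * d
y = np.where(regime, 2.0 + 1.5 * z, 0.5 + 0.2 * z) + 0.3 * rng.normal(size=T)

def ssr(tau0, tau1):
    """Profiled SSR for the candidate pair (tau0, tau1)."""
    r = unemp > tau0 + tau1 * d
    if r.sum() < 5 or (~r).sum() < 5:   # require enough observations per regime
        return np.inf
    total = 0.0
    for m in (r, ~r):
        X = np.column_stack([np.ones(m.sum()), z[m]])
        b, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        total += np.sum((y[m] - X @ b) ** 2)
    return total

grid0 = np.linspace(10, 14, 41)
grid1 = np.linspace(-2, 2, 41)
best = min((ssr(a, b), a, b) for a in grid0 for b in grid1)
print(round(best[1], 2), round(best[2], 2))
```

The cost of this approach grows geometrically with the dimension of the grid, which is precisely why the MIO reformulation with binary regime indicators becomes attractive for a multidimensional factor f_t.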
By applying the joint and iterative algorithms proposed in that paper, we obtain the following estimates:

Joint algorithm: (τ̂_1, τ̂_0) = (1.82, 11.97), obj = 0.0002636456;
Iterative algorithm: (τ̂_1, τ̂_0) = (0.56, 11.97), obj = 0.0002636456.

That is, the two algorithms yield different estimates but the same objective function value. It turns out that the regimes determined by the two estimates are identical; that is, τ̂_1 plays no role in determining the slack periods.

TABLE 2
GDP Increases Caused by the COVID-19 Stimulus Package (in $ bn)

                    Severe slack           Moderate slack
                    (threshold at 11.97%)  (threshold at 6.5%)  Difference
2-year integral     790                    300                  490
3-year integral     510                    355                  155
4-year integral     470                    340                  130
5-year integral     465                    395                   70

Note: The estimates denote the increase in cumulative GDP when the U.S. government spends 500 billion dollars in a period of high unemployment (14.7%). Military news shocks are used as the instrument.
In addition, we apply the model selection algorithm proposed in our previous work (Lee et al. 2018). Specifically, we specify the penalized least squares objective function with a penalty term consisting of a tuning parameter λ > 0 times the number of nonzero coefficients. The resulting specification of the endogenous sample splitting rule is as follows:

min_{θ, τ} Q_T(θ, τ) + λ |τ_1|_0,

where |·|_0 is the ℓ_0 norm of a vector (the number of nonzero elements). We implement it using MIO with λ = σ̂² log(T)/T, where T is the sample size and σ̂² = 0.00027 is estimated from the baseline model with a single threshold at 11.97%. When we apply the penalized estimation algorithm, we find that the τ_1 estimate becomes zero and is dropped from the model. Therefore, there is no empirical evidence that supports time-varying regimes of slackness.
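Schematically, the ℓ_0 penalty reduces the selection step to a single comparison: keep τ_1 only if including it lowers the least squares objective by more than λ. The check below uses the objective values reported above; T is illustrative (only the sign of the comparison matters here, since the two objective values are equal):

```python
import math

sigma2 = 0.00027  # residual variance from the single-threshold baseline
T = 500           # illustrative sample size of the same order as RZ's quarterly data
lam = sigma2 * math.log(T) / T

obj_without_tau1 = 0.0002636456  # single threshold at 11.97%
obj_with_tau1 = 0.0002636456     # adding tau1 does not improve the fit (regimes identical)

# Penalized comparison: tau1 is kept only if its fit improvement exceeds lam.
keep_tau1 = obj_with_tau1 + lam < obj_without_tau1
print(keep_tau1)  # False: tau1 is set to zero and dropped
```

Because τ_1 buys no reduction in the objective, any strictly positive λ eliminates it, which is exactly the conclusion reached by the penalized MIO estimation.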
III. CONCLUDING REMARKS

We have investigated state-dependent effects of fiscal multipliers and have found that it is crucial how one determines whether the U.S. economy is in a slack state. When the slack state is defined as a period with an unemployment rate higher than about 12%, the estimated cumulative multipliers are significantly larger during slack periods than nonslack periods and are above unity. Our estimation results emphasize the importance of endogenous sample splitting. Furthermore, the effect of fiscal policy may be heterogeneous with respect to the level of slackness in the economy, calling for more research on understanding the heterogeneous effects of fiscal policy. Finally, our paper sheds light on the prospect of fiscal policy in response to economic shocks from the current COVID-19 pandemic.
REFERENCES

Auerbach, A. J., and Y. Gorodnichenko. "Measuring the Output Responses to Fiscal Policy." American Economic Journal: Economic Policy, 4(2), 2012, 1–27.
———. "Fiscal Multipliers in Recession and Expansion," in Fiscal Policy after the Financial Crisis, edited by A. Alesina and F. Giavazzi. Chicago, IL: University of Chicago Press, 2013a, 63–98.
———. "Output Spillovers from Fiscal Policy." American Economic Review, 103(3), 2013b, 141–46.
Bai, J., H. Chen, T. Tai-Leung Chong, and S. Xin Wang. "Generic Consistency of the Break-Point Estimators under Specification Errors in a Multiple-Break Model." The Econometrics Journal, 11(2), 2008, 287–307.
Blanchard, O., and R. Perotti. "An Empirical Characterization of the Dynamic Effects of Changes in Government Spending and Taxes on Output." Quarterly Journal of Economics, 117(4), 2002, 1329–68.
Hansen, B. E. "Inference When a Nuisance Parameter Is Not Identified under the Null Hypothesis." Econometrica, 64(2), 1996, 413–30.
———. "Sample Splitting and Threshold Estimation." Econometrica, 68(3), 2000, 575–603.
Hidalgo, J., J. Lee, and M. H. Seo. "Robust Inference for Threshold Regression Models." Journal of Econometrics, 210(2), 2019, 291–309.
Jordà, Ò. "Estimation and Inference of Impulse Responses by Local Projections." American Economic Review, 95(1), 2005, 161–82.
Lee, S., Y. Liao, M. H. Seo, and Y. Shin. "Factor-Driven Two-Regime Regression." 2018. arXiv preprint, https://arxiv
Newey, W., and K. West. "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix." Econometrica, 55(3), 1987, 703–8.
Ramey, V. A. "Identifying Government Spending Shocks: It's All in the Timing." Quarterly Journal of Economics, 126(1), 2011, 1–50.
Ramey, V., and S. Zubairy. "Government Spending Multipliers in Good Times and in Bad: Evidence from US Historical Data." Journal of Political Economy, 126(2), 2018, 850–901.
Weir, D. R. "A Century of U.S. Unemployment, 1890–1990: Revised Estimates and Evidence for Stabilization." Research in Economic History, 14(1), 1992, 301–46.
... Faria e Castro (2021) analyzes the effects of coronavirus outbreak on the United States economy, through an econometric model, relating fiscal policies, public debt, household income, and unemployment. Also, Lee, Liao, Seo, and Shin (2020) investigate the effects of fiscal multipliers on the United States economy. Auray and Eyquem (2020) discuss the effects of lockdown on Euro Area economy on inflation and unemployment, as well as government spending and unemployment insurance. ...
The COVID‐19 outbreak has affected everyday lives worldwide. As governments started to implement confinement and business closure measures, the economic impact was felt by entire societies immediately. The urgency of such a theme has led researchers to study the phenomenon. Accordingly, the purpose of this research is to provide the state of the art on relevant dimensions and hot topics of research to understand the economic impacts of COVID‐19. In this survey, we conduct a text mining analysis of 301 articles published during 2020 which analyzed such economic impacts. By defining a set of relevant dimensions grounded on existing literature, we were able to extract a set of coherent topics that aggregate the collected articles, characterized by the predominance of a few sets of dimensions. We found that the impact on “financial markets” was widely studied, especially in relation to Asia. Next, we found a more diverse range of themes analyzed in Europe, from “government measures” to “macroeconomic variables.” We also discovered that America has not received the same degree of attention, and “institutions,” “Africa,” or “other pandemics” were studied less. We anticipate that future research will proliferate focusing on several themes, from environmental issues to the effectiveness of government measures.
Pandemia es una palabra con origen profundo que en el tiempo se ha cargado con la historia de pueblos y naciones, alcanzando su plena expresión en nuestra sociedad globalizada. No solamente por las estadísticas que atraen la atención de todo el mundo y nos obligan a razonar de manera lógico-numérica, cuantitativa, olvidando que muchas veces los números son personas, y no rinden la justa cuenta de todos aquellos seres humanos que representan estas cifras. Una pandemia logra su máxima expresión al alcanzar el mayor número de personas, esto refleja en su etimología proveniente del griego pan (todo) y demos (pueblo) y desde sus orígenes ha significado principalmente una cosa, la “reunión del pueblo”. Dicha noción invita a reflexionar sobre el porqué de las medidas sanitarias, del aislamiento y confinamiento, y, más aún, nos recuerda la importancia de la unión como decisión ancestral al origen de nuestras sociedades, columna portante de la familia, fortaleza de cualquier sistema en desequilibrio. Reconociendo esto, se evidencia la capacidad de transformar juntos las debilidades en fortalezas, de saber demostrar nuestro mejor impulso a la solidaridad y colaboración cuando sentimos que nos necesitan, que permite terminar con alegría un capítulo con la seguridad de que el próximo será mucho mejor. Este espíritu es responsable de la iniciativa de quienes enfrentamos los desafíos del contexto desde la unión, con la aspiración de formar seres humanos en un espacio compartido de generación de conocimiento y conformación de valores humanos y sociales que llamamos Universidad.
We examine the impact of Economic Policy Uncertainty (EPU) on the effectiveness of government spending using a local projection method with an endogenously determined threshold parameter. The empirical analysis reveals that government spending multipliers are larger during low EPU periods compared to those in high EPU periods. This result is robust when government spending shocks are identified using government spending news constructed based on survey of professional forecasters data. Our study calls for policy expectation management as a company effort together with fiscal stimulus.
Full-text available
This study seeks to evaluate the efficacy of macroeconomic revamping policies operationalized after the pandemic by fiscal and monetary regulators to fight the pandemic in China. This study aims to assess what the Chinese economic recovery implies after the pandemic regarding economic expansion and energy consumption of different economies utilizing an econometric approximation relying on data throughout the COVID-19 phase. Within the extended stage, Chinese economic development spillover impacts attain the same effect on upper-middle-income nations' economic expansion of 0.18 percent, next to the economic development, of lower-middle-income countries of 0.15 percent and high-income nations. We discover proofs of robust direct provincial spillovers, implying that provinces tend to construct a cluster of high-performing and low-performing areas, a procedure that accentuates regional earnings variances. Applying the experience of revamping previous financial crisis, we replicate the impact of the pandemic on the competence of these, and by far, other upper limit income nations to build back better from the pandemic to jobs occasioned by proofs of the pandemic. The spillover impact of China’s economic revival past the pandemic phase's carries a critical effect on the expansion in energy consumption in high-income nations, subsequently middle-income nations. As total factor productivity headwinds underpin economic growth, fiscal policy is the only policy that probably sustains the pollution intensities and concurrently advances household well-being regarding consumption and jobs.
Full-text available
This paper characterizes the dynamic effects of shocks in government spending and taxes on U. S. activity in the postwar period. It does so by using a mixed structural VAR/event study approach. Identification is achieved by using institutional information about the tax and transfer systems to identify the automatic response of taxes and spending to activity, and, by implication, to infer fiscal shocks. The results consistently show positive government spending shocks as having a positive effect on output, and positive tax shocks as having a negative effect. One result has a distinctly nonstandard flavor: both increases in taxes and increases in government spending have a strong negative effect on investment spending. © 2001 the President and Fellows of Harvard College and the Massachusetts Institute of Technology
This paper considers robust inference in threshold regression models when the practitioner does not know whether the true specification has a kink or a jump at the threshold point, nesting previous works that assume either continuity or discontinuity at the threshold. We find that the parameter values under the kink restriction are irregular points of the Hessian matrix, destroying the asymptotic normality and inducing the cube-root convergence rate for the threshold estimate. However, we are able to obtain the same asymptotic distribution as Hansen (2000) for the quasi-likelihood ratio statistic for the unknown threshold. We propose to construct confidence intervals for the threshold by bootstrap test inversion. Finite sample performances of the proposed procedures are examined through Monte Carlo simulations and an economic empirical application is given.
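The least-squares threshold estimate that the robust inference above is built around can be sketched as a grid search over observed values of the threshold variable: fit a regime-shift regression at each candidate threshold and keep the candidate with the smallest sum of squared residuals. This is a minimal illustration in the spirit of Hansen (2000), not the paper's bootstrap inference procedure; the function name and the synthetic-data calibration below are our own.

```python
import numpy as np

def estimate_threshold(y, x, q, trim=0.15):
    """Least-squares threshold estimate: grid-search the observed values of
    the threshold variable q, fitting a regime-shift regression at each
    candidate and keeping the one with the smallest SSR."""
    n = len(q)
    qs = np.sort(q)
    # trim the tails so each regime keeps a minimum share of observations
    candidates = qs[int(trim * n): int((1.0 - trim) * n)]
    best_ssr, best_gamma = np.inf, None
    for gamma in candidates:
        d = (q > gamma).astype(float)                  # regime indicator
        X = np.column_stack([np.ones(n), x, d, d * x])  # shift in intercept and slope
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        ssr = resid @ resid
        if ssr < best_ssr:
            best_ssr, best_gamma = ssr, gamma
    return best_gamma
```

The point estimate is the easy part; the paper's contribution is inference on it (confidence intervals by bootstrap test inversion that remain valid whether the threshold is a kink or a jump), which this sketch omits.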
We investigate whether US government spending multipliers are higher during periods of economic slack or when interest rates are near the zero lower bound. Using new quarterly historical US data covering multiple large wars and deep recessions, we estimate multipliers that are below unity irrespective of the amount of slack in the economy. These results are robust to two leading identification schemes, two different estimation methodologies, and many alternative specifications. In contrast, the results are more mixed for the zero lower bound state, with a few specifications implying multipliers as high as 1.5.
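Cumulative multipliers of the kind compared to unity above are conventionally computed as the ratio of the integrated output response to the integrated government-spending response over each horizon. A minimal sketch (the function name and the illustrative impulse-response values in the usage below are our own):

```python
import numpy as np

def cumulative_multiplier(irf_output, irf_spending):
    """Cumulative fiscal multiplier at each horizon h: the sum of the output
    responses up to h divided by the sum of the spending responses up to h
    (both measured in the same units, e.g. dollars)."""
    return np.cumsum(irf_output) / np.cumsum(irf_spending)
```

For example, `cumulative_multiplier(np.array([0.6, 0.5, 0.4]), np.array([1.0, 0.7, 0.5]))` yields a multiplier below unity at every horizon, the kind of result the abstract reports irrespective of slack.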
In this paper, we estimate the cross-country spillover effects of government purchases on output for a large number of OECD countries. Following the methodology in Auerbach and Gorodnichenko (2012a, b), we allow these multipliers to vary smoothly according to the state of the economy and use real-time forecast data to purge policy innovations of their predictable components. We also consider the responses of other key macroeconomic variables. Our findings suggest that cross-country spillovers have an important impact, and also confirm those of our earlier papers that fiscal shocks have a larger impact when the affected country is in recession.
Standard vector autoregression (VAR) identification methods find that government spending raises consumption and real wages; the Ramey-Shapiro narrative approach finds the opposite. I show that a key difference in the approaches is the timing. Both professional forecasts and the narrative-approach shocks Granger-cause the VAR shocks, implying that these shocks are missing the timing of the news. Motivated by the importance of measuring anticipations, I use a narrative method to construct richer government spending news variables from 1939 to 2008. The implied government spending multipliers range from 0.6 to 1.2. Copyright 2011, Oxford University Press.
A key issue in current research and policy is the size of fiscal multipliers when the economy is in recession. Using a variety of methods and data sources, we provide three insights. First, using regime-switching models, we estimate effects of tax and spending policies that can vary over the business cycle; we find large differences in the size of fiscal multipliers in recessions and expansions with fiscal policy being considerably more effective in recessions than in expansions. Second, we estimate multipliers for more disaggregate spending variables which behave differently in relation to aggregate fiscal policy shocks, with military spending having the largest multiplier. Third, we show that controlling for predictable components of fiscal shocks tends to increase the size of the multipliers.
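In this regime-switching line of work, the recession and expansion regimes are typically blended by a logistic weight on a business-cycle indicator z (e.g. a moving average of output growth), so that the recession weight rises smoothly as z falls. A minimal sketch, with gamma = 1.5 as an illustrative calibration (the function name is our own):

```python
import numpy as np

def recession_weight(z, gamma=1.5):
    """Smooth-transition weight F(z) = exp(-gamma*z) / (1 + exp(-gamma*z)):
    close to 1 in deep recessions (z very negative), close to 0 in strong
    expansions (z very positive), and exactly 0.5 at z = 0."""
    return np.exp(-gamma * z) / (1.0 + np.exp(-gamma * z))
```

State-dependent responses then combine the two regimes as `F(z) * recession_response + (1 - F(z)) * expansion_response`, which is how multipliers can differ in recessions and expansions within a single estimated model.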
This paper introduces methods to compute impulse responses without specification and estimation of the underlying multivariate dynamic system. The central idea consists in estimating local projections at each period of interest rather than extrapolating into increasingly distant horizons from a given model, as it is done with vector autoregressions (VAR). The advantages of local projections are numerous: (1) they can be estimated by simple regression techniques with standard regression packages; (2) they are more robust to misspecification; (3) joint or point-wise analytic inference is simple; and (4) they easily accommodate experimentation with highly nonlinear and flexible specifications that may be impractical in a multivariate context. Therefore, these methods are a natural alternative to estimating impulse responses from VARs. Monte Carlo evidence and an application to a simple, closed-economy, new-Keynesian model clarify these numerous advantages.
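The local-projection recipe described above reduces to one OLS regression per horizon: regress the outcome h periods ahead on the identified shock and a few controls, and read the impulse response off the shock coefficient. A minimal sketch, assuming the shock series is already identified (the function name and lag choice are illustrative):

```python
import numpy as np

def local_projection_irf(y, shock, horizons, lags=1):
    """Impulse responses by local projections: for each horizon h, regress
    y_{t+h} on a constant, the shock at t, and `lags` lags of y, and take
    the shock coefficient. One simple regression per horizon; no
    multivariate dynamic system is specified or estimated."""
    T = len(y)
    irf = np.empty(horizons + 1)
    for h in range(horizons + 1):
        t = np.arange(lags, T - h)  # usable observations at this horizon
        X = np.column_stack(
            [np.ones(len(t)), shock[t]] + [y[t - j] for j in range(1, lags + 1)]
        )
        beta, *_ = np.linalg.lstsq(X, y[t + h], rcond=None)
        irf[h] = beta[1]            # coefficient on the shock
    return irf
```

On simulated AR(1) data with coefficient rho and observed innovations, the estimates recover the true response of approximately rho**h at each horizon, illustrating why local projections need no extrapolation from a fitted dynamic model.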
This paper considers the estimation of multiple-structural-break models under specification errors. A common example in economics is that the true model is measured in levels, but a linear-log model is estimated. We show that, under specification errors, if there is more than one break point and a single-break model is estimated, the estimated break point is consistent for one of the true break points. This consistency result applies to models with multiple regressors where some or all of the regressors are misspecified. Another important contribution of this paper is that we construct a Sup-Wald test whose limiting distribution is not affected by model misspecification. Using this robust test, we show that the break points can be estimated sequentially one at a time. Simulation evidence and an empirical application are provided. Copyright © 2008 The Author(s). Journal compilation © Royal Economic Society 2008
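The sequential one-at-a-time strategy above rests on a simple building block: a least-squares single-break estimator, which (per the consistency result) lands on one of the true breaks even when several exist, after which the sample is split at that point and the estimator is re-applied within each segment. A minimal sketch for a break in the mean (the function name, trimming, and mean-shift setup are our own illustrative assumptions; the paper's robust Sup-Wald test is not reproduced here):

```python
import numpy as np

def find_break(y, trim=10):
    """Least-squares estimate of a single break in the mean of y: choose the
    split point that minimizes the combined SSR of the two segments."""
    n = len(y)
    best_ssr, best_k = np.inf, None
    for k in range(trim, n - trim):
        left, right = y[:k], y[k:]
        ssr = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if ssr < best_ssr:
            best_ssr, best_k = ssr, k
    return best_k
```

With two true breaks, a first pass picks up one of them; splitting the sample there and running `find_break` on the segment that still contains a break recovers the other, mirroring the sequential estimation the abstract describes.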