Chapter

Low-Frequency Econometrics

Authors: Ulrich K. Müller and Mark W. Watson

Abstract

Many questions in economics involve long-run or “trend” variation and covariation in time series. Yet, time series of typical lengths contain only limited information about this long-run variation. This paper suggests that long-run sample information can be isolated using a small number of low-frequency trigonometric weighted averages, which in turn can be used to conduct inference about long-run variability and covariability. Because the low-frequency weighted averages have large-sample normal distributions, large-sample valid inference can often be conducted using familiar small-sample normal inference procedures. Moreover, the general approach is applicable to a wide range of persistent stochastic processes that go beyond the familiar I(0) and I(1) models.

Introduction

This paper discusses inference about trends in economic time series. By “trend” we mean the low-frequency variability evident in a time series after forming moving averages such as low-pass (cf. Baxter and King, 1999) or Hodrick and Prescott (1997) filters. To measure this low-frequency variability we rely on projections of the series onto a small number of trigonometric functions (e.g., discrete Fourier, sine, or cosine transforms). The fact that a small number of projection coefficients capture low-frequency variability reflects the scarcity of low-frequency information in the data, leading to what is effectively a “small-sample” econometric problem. As we show, it is still relatively straightforward to conduct statistical inference using the small sample of low-frequency data summaries. Moreover, these low-frequency methods are appropriate for both weakly and highly persistent processes.

Before getting into the details, it is useful to fix ideas by looking at some data. Figure 1 plots per-capita GDP growth rates (panel A) and price inflation (panel B) for the United States using quarterly data from 1947 through 2014, with both expressed in percentage points at an annual rate. The plots show the raw series and two “trends.” The first trend was constructed using a band-pass moving-average filter designed to pass cyclical components with periods longer than T/6 ≈ 11 years; the second is the full-sample projection of the series onto a constant and twelve cosine functions with periods 2T/j for j = 1, …, 12, also designed to capture variability at periods longer than 11 years.
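As a concrete illustration of the projection just described, here is a minimal sketch (in Python, assuming numpy) of extracting a low-frequency trend by regressing a series on a constant and K cosine functions with periods 2T/j. The function name and the default K = 12 are illustrative choices for quarterly post-war data, not code from the paper.

```python
import numpy as np

def cosine_trend(x, K=12):
    """Project a series onto a constant and K cosine functions with periods
    2T/j (j = 1, ..., K); the fitted values serve as a low-frequency trend."""
    x = np.asarray(x, dtype=float)
    T = x.size
    t = np.arange(1, T + 1)
    # Regressors: constant plus sqrt(2) * cos(j * pi * (t - 1/2) / T), j = 1, ..., K.
    # The sqrt(2) scaling only affects the coefficient scale, not the fitted trend.
    Z = np.column_stack(
        [np.ones(T)]
        + [np.sqrt(2) * np.cos(j * np.pi * (t - 0.5) / T) for j in range(1, K + 1)]
    )
    beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
    trend = Z @ beta          # low-frequency component
    coeffs = beta[1:]         # the K low-frequency projection coefficients
    return trend, coeffs

# With quarterly data for 1947-2014 (T around 272) and K = 12, the retained
# cosines have periods 2T/j of at least 2*272/12 quarters, i.e. roughly 11 years.
```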

... To do so, we produce output gap estimates from a suite of models, including the models considered in Orphanides and van Norden (2002), the relatively new de-trending methods from Mueller and Watson (2017) and Hamilton (2018), the judgmental output gap from the Federal Reserve's staff, as well as a number of univariate and multi-variate unobserved components models. We next propose a decomposition of the estimates from models that can be written as a linear Gaussian state-space system. ...
... The GLMS methodology aims to reduce the end-point problem by applying an HP filter to a data series that has been augmented with a forecast from an AR model. The Mueller and Watson (2017) (MW) approach extracts a trend from a univariate series by projecting the series onto a small number of trigonometric functions intended to isolate the low-frequency information in the series. Finally, Hamilton (2018) provides a new regression-based approach to trend extraction. ...
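The excerpt above points to Hamilton's (2018) regression-based alternative to the HP filter. A minimal sketch of that idea as it is commonly described for quarterly data (regress the series at date t+h on a constant and its p most recent values through date t, with h = 8 and p = 4, and treat the regression residual as the cyclical component) is given below; the function and defaults are illustrative, not the cited authors' code.

```python
import numpy as np

def hamilton_cycle(y, h=8, p=4):
    """Regression-filter sketch: regress y_{t+h} on a constant and
    y_t, ..., y_{t-p+1}; fitted values are the trend and residuals the
    cycle, both dated t + h."""
    y = np.asarray(y, dtype=float)
    T = y.size
    t_idx = np.arange(p - 1, T - h)        # dates with a full lag window and lead
    X = np.column_stack([np.ones(t_idx.size)] + [y[t_idx - i] for i in range(p)])
    target = y[t_idx + h]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    trend = X @ beta
    return trend, target - trend
```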
... Table 1 provides summary statistics, which reinforce the observation that output gap estimates differ dramatically across models: the real-time estimates have notably different means and volatilities, and often have different first derivatives and signs. GLMS refers to Garratt et al. (2008), and MW to Mueller and Watson (2017). Grey shaded areas denote NBER-defined recessions. ...
Article
Output gaps that are estimated in real time can differ substantially from those estimated after the fact. We aim to understand the real-time instability of output gap estimates by comparing a suite of reduced-form models. We propose a new statistical decomposition and find that including an Okun's law relationship improves real-time stability by alleviating the end-point problem. Models that include the unemployment rate also produce output gaps with relevant economic content. However, we find that no model of the output gap is clearly superior to the others along each metric we consider.
... Most prominently, inflation rates are best modelled by an integration order d between zero and one (Hassler and Wolters, 1995; Baillie, 1996; Tschernig et al., 2013, 2020). For real GDP, Müller and Watson (2017) find that the likelihood is flat around d = 1, yielding a 90% confidence interval for d that is given by [0.51, 1.44]. Inference for d > 1 is found in Chambers (1998) for low frequency transformations of income, consumption, investment, exports, and imports for the UK. ...
... For log GDP, empirical evidence for the exact value of the persistence parameter d is mixed. Diebold and Rudebusch (1989) and Tschernig et al. (2013) estimate d to be slightly smaller than one, whereas Müller and Watson (2017) find that the likelihood is flat around d = 1, such that a 90% confidence interval yields d ∈ [0.51, 1.44]. From the exact local Whittle estimator of Shimotsu and Phillips (2005) and the method of Geweke and Porter-Hudak (1983) we obtain the estimates d_EW = 1.24 and d_GPH = 1.24 with tuning parameter α = 0.65 as in Shimotsu and Phillips (2005). ...
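The Geweke and Porter-Hudak (1983) estimate quoted above comes from a log-periodogram regression. The sketch below is a textbook version of that estimator (bandwidth m = T^α, regression of the log periodogram on log(4 sin²(λ_j/2))), not the code used in the cited studies; for nonstationary series such as log GDP one would typically apply it to first differences and add one to the result.

```python
import numpy as np

def gph_estimate(x, alpha=0.65):
    """Textbook log-periodogram (GPH) estimate of the memory parameter d:
    regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)) over the first
    m = floor(T^alpha) Fourier frequencies; d-hat is minus the slope."""
    x = np.asarray(x, dtype=float)
    T = x.size
    m = int(np.floor(T ** alpha))
    lam = 2 * np.pi * np.arange(1, m + 1) / T        # Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * T)           # periodogram ordinates
    X = np.column_stack([np.ones(m), np.log(4 * np.sin(lam / 2) ** 2)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return -beta[1]
```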
... For the fractional trend-cycle model the ML estimator yields d_FT-FC = 1.32, implying that log US real GDP is a non-stable, nonstationary fractional process. As figure 5 shows, the log likelihood is considerably flat around d_FT-FC, which explains the different results for the persistence parameter in the literature and confirms the findings in Müller and Watson (2017). Nonetheless, most of the probability mass clearly lies at d ≥ 1. Contrary to the benchmark, the FT-FC specification attributes more volatility to the transitory shocks, whereas σ_η is estimated to be smaller than in the T-C specification. ...
Preprint
We develop a generalization of correlated trend-cycle decompositions that avoids prior assumptions about the long-run dynamic characteristics by modelling the permanent component as a fractionally integrated process and incorporating a fractional lag operator into the autoregressive polynomial of the cyclical component. We relate the model to the Beveridge-Nelson decomposition and derive a modified Kalman filter estimator for the fractional components. Identification and consistency of the maximum likelihood estimator are shown. For US macroeconomic data we demonstrate that, unlike non-fractional correlated unobserved components models, the new model estimates a smooth trend together with a cycle hitting all NBER recessions.
... In addition, their asymptotic distribution theory differs under the null (no predictability) and alternative hypothesis, and it depends on the first-stage filtering. Müller & Watson (2017) study the different problem of drawing inference and generating predictions about the very long run. For that purpose, they estimate a bivariate spectral density using low-frequency trigonometric averages of observations, thus extracting information about the "long-run" coherence between two series from a fixed number of frequencies in the vicinity of the origin. ...
... This complements the comprehensive simulation study in Ferson et al. (2003), showing that size distortions can be very severe, up to 70% for joint significance tests, in our general setting. Our LCM test for joint significance, on the other hand, has excellent size and power in finite samples. [Footnote: Despite achieving a memory-dependent and potentially very slow rate of convergence as well as providing less general inference within fractionally (co)integrated systems, it is important to note that the Müller & Watson (2017) procedure also applies to local-level and local-to-unity models, thus adding generality along that dimension.] [Footnote: See, for example, Baillie, Bollerslev & Mikkelsen (1996), Comte & Renault (1998), Andersen, Bollerslev, Diebold & Ebens (2001), Andersen, Bollerslev, Diebold & Labys (2001), Christensen & Nielsen (2006), Andersen, Bollerslev & Diebold (2007), Corsi (2009), Bollerslev, Osterrieder, Sizova & Tauchen (2013), and Varneskov & Perron (2018).] ...
... There are high frequency financial variables and low frequency macroeconomic variables. Classifying these variables depending on their frequency is a common approximation in the time series econometrics literature (see, e.g., Bańbura et al. (2010), Müller and Watson (2015), Engle (2000)). We follow this approach. ...
Article
Full-text available
In this paper, we propose a two-step approach for conducting statistical inference in financial networks of volatility, applied to a network of European sovereign debt markets. The static results highlight that, contrarily to the intuition, southern European bonds exhibiting most volatility during the European debt crisis were not necessarily net transmitters to the network. We also find that the best monetary and macroprudential policy stances to achieve low volatility transmission are to target low inflation and low financial stress. The dynamics of the model show that the central bank should adjust which variable targets depending on the time period.
... Each series is passed through the filter proposed in Mueller and Watson (2017) to remove the low frequency variations in the mean. This is equivalent to adding a set of cosine predictors in the VAR. ...
Preprint
The paper provides three results for SVARs under the assumption that the primitive shocks are mutually independent. First, a framework is proposed to study the dynamic effects of disaster-type shocks with infinite variance. We show that the least squares estimates of the VAR are consistent but have non-standard properties. Second, it is shown that the restrictions imposed on a SVAR can be validated by testing independence of the identified shocks. The test can be applied whether the data have fat or thin tails, and to over as well as exactly identified models. Third, the disaster shock is identified as the component with the largest kurtosis, where the mutually independent components are estimated using an estimator that is valid even in the presence of an infinite variance shock. Two applications are considered. In the first, the independence test is used to shed light on the conflicting evidence regarding the role of uncertainty in economic fluctuations. In the second, disaster shocks are shown to have short term economic impact arising mostly from feedback dynamics.
... The low-frequency component captures the long-run movements of the original data with periodicity longer than 2T/j for j = 1, ..., K years of cycles. A useful rule of thumb introduced in Müller (2014) and Müller and Watson (2017) suggests a choice of K = 16 to capture the low-frequency movements of T = 65 years of post-World War II macro data with periodicity longer than the commonly accepted business cycle period of T/(K/2) ≈ 8 years. The low-frequency transformation also has substantive empirical content in the context of the cointegration regression system in (1)-(2), as the cointegration model itself seeks a long-run relation among economic time series. ...
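To make the rule of thumb in the excerpt concrete, the cutoff period implied by T = 65 years of data and K = 16 cosine transforms is just the arithmetic below (using only the numbers quoted above):

```latex
% Shortest period retained by the first K = 16 cosine transforms of T = 65 years of data:
\[
  \frac{2T}{K} \;=\; \frac{T}{K/2} \;=\; \frac{2 \times 65}{16} \;\approx\; 8.1 \text{ years},
\]
% i.e. the low-frequency block keeps variation with periods longer than roughly
% the conventional 8-year business-cycle cutoff.
```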
Preprint
Full-text available
This paper develops robust t and F inferences on a triangular cointegrated system when one may not be sure that the economic variables are exact unit root processes. We show that the low-frequency transformed augmented (TA) OLS method possesses an asymptotic bias term in the limiting distribution, and the corresponding t and F inferences in Hwang and Sun (2017) are asymptotically invalid. As a result, the size of the test for the cointegration vector can be extremely large even for very small deviations from the unit root regressors. We develop a method to correct the asymptotic bias of the TA-OLS test statistics for the cointegration vector. Our modified statistics not only adjust the locational bias but also reflect the estimation uncertainty of the long-run endogeneity parameter in the bias correction term, and they have asymptotic t and F limits. Based on the modified TA-OLS test statistics, the paper provides a simple Bonferroni method to test for the cointegration parameter. JEL Classification: C12, C13, C32
... Implementing those methods, however, requires making specific assumptions about the trend models, and those assumptions are often difficult to verify. Moreover, trend assumptions that are difficult to distinguish empirically can lead to substantially different inferences (Elliott (1998), Müller and Watson (2017)). Thus, the absence of random assignment of radiative forcing leads to biased estimation of the TCR, a situation further complicated by the trends in the data. ...
Article
Full-text available
The transient climate response (TCR) is the change in global mean temperature at the time of an exogenous doubling in atmospheric CO2 concentration increasing at a rate of 1% per year. A problem with estimating the TCR using observational data is that observed CO2 concentrations depend in turn on temperature. Therefore, the observed concentration data are endogenous, potentially leading to simultaneous causation bias of regression estimates of the TCR. We address this problem by employing instrumental variables regression, which uses changes in radiative forcing external to earth systems to provide quasi-experiments that can be used to estimate the TCR. Because the modern instrumental record is short, we focus on decadal fluctuations (up to 30-year changes), which also mitigate some statistical issues associated with highly persistent temperature and concentration data. Our estimates of the TCR for these shorter horizons, normalized to be comparable to the traditional 70-year TCR, fall within the range in the IPCC-AR5 and provide new observational confirmation of model-based estimates.
... We also evaluate the quantitative implications of different measures of volatility in explaining growth and then discuss some possible explanations for the importance of LR volatility. It is important to mention that LR volatility can also be interpreted as persistence in volatility (Ascari and Sbordone 2014; Levy and Dezhbakhsh 2003; Müller and Watson 2017). ...
Article
This paper revisits the empirical relationship between volatility and long-run growth, but the key contribution lies in decomposing growth volatility into its business-cycle and trend components. This volatility decomposition also accounts for enormous heterogeneity among countries in terms of their long-run growth trajectories. We identify a negative effect of trend volatility, which we refer to as long-run volatility, on growth, but no effect of business-cycle volatility. However, if long-run volatility is omitted, there would be a spurious (negative) effect of business-cycle volatility. Our results draw attention to a crucial question about different volatility measures and their implications in macroeconomic analyses.
... Their focus was on constructing confidence sets with good coverage for small and large breaks, by inverting structural break tests. Recently, Müller and Watson (2017) proposed new methods for detecting low-frequency mean or trend changes. Our paper is different as it highlights the properties of the existing sup Wald test for a break in the unconditional and conditional mean and variance of a time series. ...
Article
Full-text available
Structural break tests for regression models are sensitive to model misspecification. We show—analytically and through simulations—that the sup Wald test for breaks in the conditional mean and variance of a time series process exhibits severe size distortions when the conditional mean dynamics are misspecified. We also show that the sup Wald test for breaks in the unconditional mean and variance does not have the same size distortions, yet benefits from similar power to its conditional counterpart in correctly specified models. Hence, we propose using it as an alternative and complementary test for breaks. We apply the unconditional and conditional mean and variance tests to three US series: unemployment, industrial production growth and interest rates. Both the unconditional and the conditional mean tests detect a break in the mean of interest rates. However, for the other two series, the unconditional mean test does not detect a break, while the conditional mean tests based on dynamic regression models occasionally detect a break, with the implied break-point estimator varying across different dynamic specifications. For all series, the unconditional variance does not detect a break while most tests for the conditional variance do detect a break which also varies across specifications.
Article
Output gaps that are estimated in real time can differ substantially from those estimated after the fact. We provide a comprehensive comparison of real‐time output gap estimates, with the aim of understanding this real‐time instability. Using a statistical decomposition, we find that including Okun's law relationship improves real‐time stability by alleviating the end‐point problem. Models that include the unemployment rate also produce output gaps with relevant economic content.
Article
This paper develops new t and F tests in a low-frequency transformed triangular cointegrating regression when one may not be certain that the economic variables are exact unit root processes. We first show that the low-frequency transformed and augmented OLS (TA-OLS) method exhibits an asymptotic bias term in its limiting distribution. As a result, the test for the cointegration vector can have substantially large size distortion, even with minor deviations from the unit root regressors. To correct the asymptotic bias of the TA-OLS statistics for the cointegration vector, we develop modified TA-OLS statistics that adjust the bias and take account of the estimation uncertainty of the long-run endogeneity arising from the bias correction. Based on the modified test statistics, we provide Bonferroni-based tests of the cointegration vector using standard t and F critical values. Monte Carlo results show that our approach has the correct size and reasonable power for a wide range of local-to-unity parameters. Additionally, our method has advantages over the IVX approach when the serial dependence and the long-run endogeneity in the cointegration system are important.
Article
This paper provides three results for SVARs under the assumption that the primitive shocks are mutually independent. First, a framework is proposed to accommodate a disaster-type variable with infinite variance into a SVAR. We show that the least squares estimates of the SVAR are consistent but have non-standard asymptotics. Second, the disaster shock is identified as the component with the largest kurtosis. An estimator that is robust to infinite variance is used to recover the mutually independent components. Third, an independence test on the residuals pre-whitened by the Choleski decomposition is proposed to test the restrictions imposed on a SVAR. The test can be applied whether the data have fat or thin tails, and to over as well as exactly identified models. Three applications are considered. In the first, the independence test is used to shed light on the conflicting evidence regarding the role of uncertainty in economic fluctuations. In the second, disaster shocks are shown to have short term economic impact arising mostly from feedback dynamics. The third uses the framework to study the dynamic effects of economic shocks post-covid.
Article
We survey the literature on spectral regression estimation. We present a cohesive framework designed to model dependence on frequency in the response of economic time series to changes in the explanatory variables. Our emphasis is on the statistical structure and on the economic interpretation of time-domain specifications needed to obtain horizon effects over frequencies, over scales, or upon aggregation. To this end, we articulate our discussion around the role played by lead-lag effects in the explanatory variables as drivers of differential information across horizons. We provide perspectives for future work throughout.
Article
This paper provides a long-run cycle perspective to explain the behavior of the annual flow of inheritance. Based on the low- and medium-frequency properties of long time series of bequests in Sweden, France, the UK, and Germany, we explore the extent to which a two-sector Barro-type OLG model is consistent with such empirical regularities. As long as agents are sufficiently impatient and preferences are non-separable, we show that endogenous fluctuations are likely to occur through two mechanisms, which can generate, independently or together, either period-2 cycles or Hopf bifurcations. The first mechanism relies on the elasticity of intertemporal substitution, or equivalently the sign of the cross-derivative of the utility function, whereas the second rests on sectoral technologies through the sign of the capital intensity difference across the two sectors. Furthermore, building on the quasi-palindromic nature of the degree-4 characteristic equation, we derive some meaningful sufficient conditions associated with the occurrence of complex roots and a Hopf bifurcation in a two-sector OLG model.
Article
A factor stochastic volatility model estimates the common component to output gap estimates produced by the staff of the Federal Reserve, its time-varying volatility, and time-varying, horizon-specific forecast uncertainty. The output gap estimates are uncertain even well after the fact. Nevertheless, the common component is clearly procyclical, and positive innovations to the common component produce movements in macroeconomic variables consistent with an increase in aggregate demand. Heightened macroeconomic uncertainty, as measured by the common component's volatility, leads to persistently negative economic responses.
Article
Using monthly data on costly natural disasters affecting the United States over the last 40 years, we estimate two time series models and use them to generate predictions about the impact of COVID-19. We find that while our models yield reasonable estimates of the impact on industrial production and the number of scheduled flight departures, they underestimate the unprecedented changes in the labor market.
Article
This paper studies standard predictive regressions in economic systems governed by persistent vector autoregressive dynamics for the state variables. In particular, all – or a subset – of the variables may be fractionally integrated, which induces a spurious regression problem. We propose a new inference and testing procedure – the Local speCtruM (LCM) approach – for joint significance of the regressors, that is robust against the variables having different integration orders and remains valid regardless of whether predictors are significant and, if they are, whether they induce cointegration. Specifically, the LCM procedure is based on fractional filtering and band spectrum regression using a suitably selected set of frequency ordinates. Contrary to existing procedures, we establish a uniform Gaussian limit theory and a standard χ2-distributed test statistic. Using the LCM inference and testing techniques, we explore predictive regressions for the realized return variation. Standard least squares inference indicates that popular financial and macroeconomic variables convey valuable information about future return volatility. In contrast, we find no significant evidence using our robust LCM procedure. If anything, our tests support a reverse chain of causality, with rising financial volatility predating adverse innovations to key macroeconomic variables. Simulations are employed to illustrate the relevance of the theoretical arguments for finite-sample inference.
Article
We develop a Bayesian latent factor model of the joint long-run evolution of GDP per capita for 113 countries over the 118 years from 1900 to 2017. We find considerable heterogeneity in rates of convergence, including rates for some countries that are so slow that they might not converge (or diverge) in century-long samples, and a sparse correlation pattern (“convergence clubs”) between countries. The joint Bayesian structure allows us to compute a joint predictive distribution for the output paths of these countries over the next 100 years. This predictive distribution can be used for simulations requiring projections into the deep future, such as estimating the costs of climate change. The model's pooling of information across countries results in tighter prediction intervals than are achieved using univariate information sets. Still, even using more than a century of data on many countries, the 100-year growth paths exhibit very wide uncertainty.
Article
Full-text available
The usual t test, the t test based on heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators, and the heteroskedasticity and autocorrelation robust (HAR) test are three statistics that are widely used in applied econometric work. The use of these significance tests in trend regression is of particular interest given the potential for spurious relationships in trend formulations. Following a longstanding tradition in the spurious regression literature, this paper investigates the asymptotic and finite sample properties of these test statistics in several spurious regression contexts, including regression of stochastic trends on time polynomials and regressions among independent random walks. Concordant with existing theory (Phillips, 1986, 1998; Sun, 2004, 2014b), the usual t test and HAC standardized test fail to control size as the sample size n → ∞ in these spurious formulations, whereas HAR tests converge to well-defined limit distributions in each case and therefore have the capacity to be consistent and control size. However, it is shown that when the number of trend regressors K → ∞, all three statistics, including the HAR test, diverge and fail to control size as n → ∞. These findings are relevant to high dimensional nonstationary time series regressions where machine learning methods may be employed.
Article
Measuring economic growth is complicated by seasonality, the regular fluctuation in economic activity that depends on the season of the year. The Bureau of Economic Analysis uses statistical techniques to remove seasonality from its estimates of GDP, and, in 2015, it took steps to improve the seasonal adjustment of data back to 2012. I show that residual seasonality in GDP growth remains even after these adjustments, has been a longer-term phenomenon, and is particularly noticeable in the 1990s. The size of this residual seasonality is economically meaningful and has the ability to change the interpretation of recent economic activity.
Article
Measuring economic growth is complicated by seasonality, the regular fluctuation in economic activity that depends on the season of the year. The BEA uses statistical techniques to remove seasonality from its estimates of GDP, but some research has indicated that seasonality remains. As a result, the BEA began a three-phase plan in 2015 to improve its seasonal-adjustment techniques, and in July 2018, it completed phase 3. Our analysis indicates that even after these latest improvements by the BEA, residual seasonality in GDP growth remains. On average, this residual seasonality makes GDP growth appear to be slower in the first quarter of the year and more rapid in the second quarter of the year. Rapid second-quarter growth is particularly noticeable in recent years. As a result, business economists and policymakers may want to take seasonality into account when using GDP to assess the health of the economy.
Article
Full-text available
This paper proposes growth rate transformations with targeted lag selection in order to improve the long-horizon forecast accuracy. The method targets lower frequencies of the data that correspond to particular forecast horizons, and is applied to models of the real price of crude oil. Targeted growth rates can improve the forecast precision significantly at horizons of up to five years. For the real price of crude oil, the method can achieve a degree of accuracy up to five years ahead that previously has been achieved only at shorter horizons.
Article
Inference for statistics of a stationary time series often involves nuisance parameters and sampling distributions that are difficult to estimate. In this paper, we propose the method of orthogonal samples, which can be used to address some of these issues. For a broad class of statistics, an orthogonal sample is constructed through a slight modification of the original statistic, such that it shares similar distributional properties as the centralised statistic of interest. We use the orthogonal sample to estimate nuisance parameters of weighted average periodogram estimators and L2-type spectral statistics. Further, the orthogonal sample is utilized to estimate the finite sampling distribution of various test statistics under the null hypothesis. The proposed method is simple and computationally fast to implement. The viability of the method is illustrated with various simulations.
Article
Significant contributions have been made since the World Health Organization published Brian Abel-Smith's pioneering comparative study of national health expenditures more than 50 years ago. There have been major advances in theories, model specifications, methodological approaches, and data structures. This introductory essay provides a historical context for this line of work, highlights four newly published studies that move health economics research forward, and indicates several important areas of challenging but potentially fruitful research to strengthen future contributions to the literature and make empirical findings more useful for evaluating health policy decisions. Copyright © 2016 John Wiley & Sons, Ltd.