Article

Progress in the Study of Nonstationary Political Time Series: A Comment


Abstract

Cointegration was introduced to our discipline by Renée Smith and Charles Ostrom Jr. and by Robert Durr more than two decades ago, at political methodology meetings at Washington University in St. Louis and Florida State University. Their articles, along with comments by Neal Beck and John T. Williams, were published in a symposium like this one in the fourth volume of Political Analysis. Keele, Linn, and Webb (2016; hereafter KLW) and Grant and Lebo (2016; hereafter GL) show how, in the years that followed, cointegration was further evaluated by political scientists and the related idea of error correction was subsequently applied. Have the last twenty-plus years witnessed significant progress in modeling nonstationary political time series? In some respects, the answer is yes. The present symposium represents progress in understanding equation balance, analyzing bounded variables, and decomposing short- and long-term causal effects. In these respects KLW's and GL's articles deserve wide dissemination. But KLW and GL leave important methodological issues unresolved and fail to address some critical methodological challenges. From a historical perspective, the present symposium shows that we have made relatively little progress in modeling nonstationary political time series.
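For reference in what follows: the model at the center of the symposium is the single-equation general error correction model (GECM). In its common bivariate form (notation assumed here for exposition, not quoted from any one contribution):

\[
\Delta Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \beta_0 \Delta X_t + \beta_1 X_{t-1} + \varepsilon_t,
\qquad k_1 = \frac{\beta_1}{-\alpha_1},
\]

where k_1 is the long-run multiplier and error correction requires \alpha_1 < 0. Equation balance, the recurring theme below, requires that the regressand be of the same order of integration as the regressors or some linear combination of them.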


... Many physical phenomena have unknown underlying dynamical systems, leaving us with a sequence of measurements of a quantity associated with the system. Because of their vast applications, spanning from ecology to political science, the modeling and forecasting of such time series are an active research topic [4,18]. In many cases, the present value of a time series is determined by its prior values. ...
Chapter
Full-text available
Chaotic systems are sensitive to initial conditions. The exponential divergence of nearby trajectories limits the accuracy of the long-term prediction of a chaotic time series. Because of the deterministic governing equations of the underlying system, accurate short-term prediction of a chaotic time series is possible. Various approaches have been used to forecast a chaotic time series, the most popular of which is the first-order approximation of the local dynamics in the embedded state space. Simultaneously, various strategies for modeling a time series by a complex network have also been developed. Nonetheless, time-series induced networks have received little attention in terms of forecasting time series. This paper proposes a method based on symbolic dynamics for constructing a weighted network from a given time series and provides a strategy for forecasting a time series using the weighted network. We demonstrate the approach's effectiveness by predicting a chaotic time series. The results are then compared to those obtained using the linear first-order approximation method. The proposed method is straightforward, computationally efficient, and parameter-free. Keywords: Complex networks, Chaotic time series prediction, Chaos
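The chapter's full construction is not reproduced above; the following is a minimal sketch of the general idea under simplifying assumptions: symbols from quantile bins, a first-order weighted transition network, and a one-step forecast taken as the transition-weighted mean of successor bin values. All function names are illustrative, not the authors'.

```python
import numpy as np

def symbolize(x, n_symbols=8):
    """Coarse-grain a series into integer symbols using quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)  # symbols in 0..n_symbols-1

def transition_network(symbols, n_symbols):
    """Weighted adjacency matrix of first-order symbol transitions."""
    W = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        W[a, b] += 1.0
    return W

def forecast_one_step(x, n_symbols=8):
    """Predict the next value as the transition-weighted mean of bin values."""
    s = symbolize(x, n_symbols)
    W = transition_network(s, n_symbols)
    # representative value of each symbol: mean of observations in that bin
    centers = np.array([x[s == k].mean() if np.any(s == k) else np.nan
                        for k in range(n_symbols)])
    row = W[s[-1]]
    if row.sum() == 0:
        return x[-1]                   # unseen state: persistence fallback
    p = row / row.sum()
    return np.nansum(p * centers)

# demo on a chaotic series (logistic map)
x = np.empty(2000); x[0] = 0.4
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
print(forecast_one_step(x[:-1]), "vs actual", x[-1])
```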
... Very little empirical work in political science that uses longitudinal analysis raises the issue of balance to defend the theoretical or empirical model used. The discussion of balance has been mostly relegated to methodological discussions amongst a small group of academics (Freeman, 2016; Keele et al., 2016a,b; Lebo and Grant, 2016; Enns and Wlezien, 2017), many of whom are involved in this symposium. ...
Article
Full-text available
The papers in this symposium use Monte Carlo simulations to demonstrate the consequences of estimating time series models with variables that are of different orders of integration. In this summary, I do the following: very briefly outline what we learn from the papers; identify an apparent contradiction that might increase, rather than decrease, confusion around the concept of a balanced time series model; suggest a resolution; and identify a few areas of research that could further increase our understanding of how variables with different dynamics might be combined. In doing these things, I suggest there is still a lack of clarity around how a research practitioner demonstrates balance, and demonstrates what Pickup and Kellstedt (2021) call I(0) balance.
... Nonstationarity is a pervasive and persistent challenge for modeling and forecasting real-world data, ranging from economic to ecological systems and from political time series to machine learning, as most theories are developed on the assumption that the phenomenon under investigation is stationary and fluctuates around a time-independent mean [1][2][3][4][5][6]. It has been argued that all systems are nonstationary at some scale, which often corresponds to macrosystems spanning ...
Article
Full-text available
The classical surrogate data tests, which are used to differentiate linear noise processes from nonlinear processes, are not suitable for nonstationary time series. In this paper, we propose a surrogate data test that can be applied both to stationary time series and to nonstationary time series with short-term fluctuations. The method is based on the idea of constructing a network from the time series, employing a generalized symbolic dynamics method introduced in this work, and using any one of the several easily computable network parameters as discriminating statistics. The construction of the network is designed to remove the long-term trends in the data automatically. The network-based test statistics pick up only the short-term variations, unlike the discriminating statistics of the traditional methods, which are influenced by nonstationary trends in the data. The method is tested on several systems generated by linear or nonlinear processes and with deterministic or stochastic trends, and in all cases it is found to be able to differentiate between linear stochastic processes and nonlinear processes quite accurately, especially in cases where the common methods would lead to false rejections of the null hypothesis due to nonstationarity being interpreted as nonlinearity. The method is also found to be robust to the presence of experimental or dynamical noise of a moderate level in an otherwise nonlinear system.
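For readers unfamiliar with surrogate testing, the logic is: compute a discriminating statistic on the data and compare it with that statistic's distribution over surrogates that preserve the linear properties of the series. The sketch below uses classical phase-randomized surrogates and a simple time-reversal statistic as stand-ins; the paper's network-based statistics and trend-robust construction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: keeps the power spectrum (linear
    correlations), destroys any nonlinear structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def discriminating_stat(x):
    """Placeholder statistic: time-reversal asymmetry, a common simple
    choice; the paper's network parameters would go here instead."""
    return np.mean((x[1:] - x[:-1]) ** 3)

# data: a nonlinear (logistic-map) series
x = np.empty(1000); x[0] = 0.3
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

stat = discriminating_stat(x)
null = np.array([discriminating_stat(ft_surrogate(x, rng)) for _ in range(200)])
p = (1 + np.sum(np.abs(null) >= np.abs(stat))) / 201
print(f"statistic={stat:.4f}, two-sided surrogate p~{p:.3f}")
```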
... Box-Steffensmeier and Helgason (2016, 2) make the point by stating, "when studying the relationship between two (or more) series, the analyst must ensure that they are of the same level of integration; that is, they have to be balanced." Although Freeman (2016) offers a more nuanced perspective on equation balance, many of the symposium contributors could be interpreted as recommending that scholars never mix orders of integration. Indeed, in their concluding article, Lebo and Grant write, "One point of agreement among the papers here is that equation balance is an important and neglected topic. ...
Article
Full-text available
Most contributors to a recent Political Analysis symposium on time series analysis suggest that in order to maintain equation balance, one cannot combine stationary, integrated, and/or fractionally integrated variables with general error correction models (GECMs) and the equivalent autoregressive distributed lag (ADL) models. This definition of equation balance implicates most previous uses of these models in political science and circumscribes their use moving forward. The claim thus is of real consequence and worthy of empirical substantiation, which the contributors did not provide. Here we address the issue. First, we highlight the difference between estimating unbalanced equations and mixing orders of integration, the former of which clearly is a problem and the latter of which is not, at least not necessarily. Second, we assess some of the consequences of mixing orders of integration by conducting simulations using stationary, integrated, and combined (stationary plus integrated) time series. Our simulations show that with an appropriately specified model, regressing a stationary variable on an integrated one or the reverse does not increase the risk of spurious results and that such regressions can detect true relationships when they exist. We then illustrate the potential importance of these conclusions with an applied example, income inequality in the United States. Political Analysis (PA) recently hosted a symposium on time series analysis that built upon De Boef and Keele's (2008) influential time series article in the American Journal of Political Science. Equation balance was an important point of emphasis throughout the symposium. In their classic work on the subject, Banerjee, Dolado, Galbraith, and Hendry (1993, 164) explain that an unbalanced equation is a regression "in which the regressand is not the same order of integration as the regressors, or any linear combination of the regressors." The contributors to this symposium were right to emphasize the importance of equation balance, as unbalanced equations can produce serially correlated residuals (e.g., Pagan and Wickens 1989) and spurious relationships (e.g., Banerjee et al. 1993, 79). Throughout the PA symposium, however, equation balance is defined and applied in different ways. Grant and Lebo (2016, 7) follow Banerjee et al.'s definition when they explain that a general error correction model (GECM), or autoregressive distributed lag (ADL) model, is balanced if cointegration is present. Keele, Linn, and Webb (2016a, 83) implicitly make this same point in their second contribution to the symposium when they cite Banerjee et al. (1993) in their discussion of equation balance. Yet other parts of the symposium seem to apply a stricter standard of equation balance, stating that when estimating a GECM/ADL all time series must be the same order of integration. As Grant and Lebo write in the abstract of their first article, "Time series of various orders of integration (stationary, non-stationary, explosive, near- and fractionally integrated) should not be analyzed together...
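The simulation logic is easy to restate in a few lines (a sketch with assumed parameter values, not the authors' exact design): regress a stationary AR(1) on an independent random walk within an ADL(1,1) and check that the false-rejection rate stays near the nominal level, in line with the results the abstract reports.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T, reps, rejections = 100, 1000, 0

for _ in range(reps):
    x = np.cumsum(rng.standard_normal(T))          # integrated, I(1)
    e = rng.standard_normal(T)
    y = np.empty(T); y[0] = e[0]
    for t in range(1, T):                          # stationary AR(1), I(0)
        y[t] = 0.5 * y[t - 1] + e[t]
    # ADL(1,1): y_t on y_{t-1}, x_t, x_{t-1}; x is truly irrelevant
    X = sm.add_constant(np.column_stack([y[:-1], x[1:], x[:-1]]))
    res = sm.OLS(y[1:], X).fit()
    # joint test that x enters (contemporaneously and lagged)
    if res.f_test("x2 = 0, x3 = 0").pvalue < 0.05:
        rejections += 1

# should be near the nominal 0.05 if the authors' claim holds
print("false-rejection rate:", rejections / reps)
```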
... The comments of Keele, Linn, and Webb (2016, KL&W hereafter), Freeman (2016), Helgason (2016), and Esarey (2016) on the issues raised in our paper "Error Correction Methods with Political Time Series" are extremely useful. In particular, this symposium provides some much-needed discussion about relating different types of time series to various modeling strategies. ...
Article
The papers in this symposium agree on several points. In this article, we sort through some remaining areas of disagreement and discuss some of the practical issues of time series modeling we think deserve further explanation. In particular, we have five points: (1) clarifying our stance on the general error correction model in light of the comments in this issue; (2) clarifying equation balance and discussing how bounded series affect our thinking about stationarity, balance, and modeling choices; (3) answering lingering questions about our Monte Carlo simulations and exploring potential problems in the inferences drawn from long-run multipliers; (4) reviewing and defending fractional integration methods in light of the questions raised in this symposium and elsewhere; and (5) providing a short practical guide to estimating a multivariate autoregressive fractionally integrated moving average model with or without an error correction term.
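Point (4) rests on the fractional difference operator (1 - L)^d. A minimal sketch of applying it via the truncated binomial expansion (my own illustration, not the authors' code):

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1-L)^d via the truncated binomial expansion.
    Weights: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0}^{t} w_k * x_{t-k}
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

# demo: fractionally difference a random walk with d = 0.4
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(300))   # I(1)
y = frac_diff(x, 0.4)                      # roughly I(0.6): still long-memoried
```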
Article
Full-text available
It is understood that ensuring equation balance is a necessary condition for a valid model of time series data. Yet the definition of balance provided so far has been incomplete, and there has not been a consistent understanding of exactly why balance is important or how it can be applied. The discussion to date has focused on the estimates produced by the general error correction model (GECM). In this paper, we go beyond the GECM and beyond model estimates. We treat equation balance as a theoretical matter, not merely an empirical one, and describe how to use the concept of balance to test theoretical propositions before longitudinal data have been gathered. We explain how equation balance can be used to check if your theoretical or empirical model is either wrong or incomplete in a way that will prevent a meaningful interpretation of the model. We also raise the issue of "I(0) balance" and its importance.
Article
Full-text available
It is fairly well-known that proper time series analysis requires that estimated equations be balanced. Numerous scholars mistake this to mean that one cannot mix orders of integration. Previous work has clarified the distinction between equation balance and having different orders of integration, and shown that mixing orders of integration does not increase the risk of Type I error when using the GECM/ADL, that is, so long as equations are balanced (and other modeling assumptions are met). This paper builds on that research to assess the consequences for Type II error when employing those models. Specifically, we consider cases where a true relationship exists, the left- and right-hand sides of the equation mix orders of integration, and the equation still is balanced. Using the asymptotic case, we find that the different orders of integration do not preclude identification of the true relationship using the GECM/ADL. We then highlight that estimation is trickier in practice, over finite time, as data sometimes do not reveal the underlying process. But, simulations show that even in these cases, researchers will typically draw accurate inferences as long as they select their models based on the observed characteristics of the data and test to be sure that standard model assumptions are met. We conclude by considering the implications for researchers analyzing or conducting simulations with time series data.
Article
In recent years, political science has seen a boom in the use of sophisticated methodological tools for time series analysis. One such tool is the general error correction model (GECM), originally introduced to political scientists in the pages of this journal over 20 years ago (Durr 1992; Ostrom and Smith 1992) and re-introduced by De Boef and Keele (2008), who advocate its use for a wider set of time series data than previously considered appropriate. Their article has proven quite influential, with numerous papers justifying their methodological choices with reference to De Boef and Keele's contribution. Grant and Lebo (2016) take issue with the increasing use of the GECM in political science and argue that the methodology is widely misused and abused by practitioners. Given the recent surge of research conducted using error correction methods, there is every reason to take their suggestions seriously and provide a fuller discussion of the points they raise in their paper. The present symposium serves such a role. It consists of Grant and Lebo's critique, a detailed response by Keele, Linn, and Webb (2016b), and shorter comments by Esarey (2016), Freeman (2016), and Helgason (2016). Finally, Lebo and Grant (2016) and Keele, Linn, and Webb (2016a) reflect on the contributions made in the symposium, as well as discuss outstanding issues.
Article
This issue began as an exchange between Grant and Lebo (2016) and ourselves (Keele, Linn, and Webb 2016) about the utility of the general error correction model (GECM) in political science. The exchange evolved into a debate about Grant and Lebo’s proposed alternative to the GECM and the utility of fractional integration methods (FIM). Esarey (2016) and Helgason (2016) weigh in on this part of the debate. Freeman (2016) offers his views on the exchange as well. In the end, the issue leaves readers with a lot to consider. In his comment, Freeman (2016) argues that the exchange has produced little significant progress because of the contributors’ failures to consider a wide array of topics not directly related to the GECM or FIM. We are less pessimistic. In what follows, we distill what we believe are the most important elements of the exchange—the importance of balance, the costs and benefits of FIM, and the vagaries of pre-testing.
Article
Full-text available
While traditionally considered for non-stationary and cointegrated data, De Boef and Keele suggest applying a general error correction model (GECM) to stationary data with or without cointegration. The GECM has since become extremely popular in political science but practitioners have confused essential points. For one, the model is treated as perfectly flexible when, in fact, the opposite is true. Time series of various orders of integration—stationary, non-stationary, explosive, near- and fractionally integrated—should not be analyzed together but researchers consistently make this mistake. That is, without equation balance the model is misspecified and hypothesis tests and long-run-multipliers are unreliable. Another problem is that the error correction term's sampling distribution moves dramatically depending upon the order of integration, sample size, number of covariates, and the boundedness of Y_t. This means that practitioners are likely to overstate evidence of error correction, especially when using a traditional t-test. We evaluate common GECM practices with six types of data, 746 simulations, and five paper replications.
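The overstated-error-correction problem is straightforward to reproduce (a sketch with assumed settings, not the article's 746-simulation design): fit a GECM to two independent random walks and count how often a traditional t-test on the error correction term appears significant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, reps, hits = 60, 1000, 0

for _ in range(reps):
    y = np.cumsum(rng.standard_normal(T))    # two independent random walks:
    x = np.cumsum(rng.standard_normal(T))    # no cointegration by construction
    dy = np.diff(y)
    # GECM: dy_t = a0 + a1*y_{t-1} + b0*dx_t + b1*x_{t-1} + e_t
    X = sm.add_constant(np.column_stack([y[:-1], np.diff(x), x[:-1]]))
    res = sm.OLS(dy, X).fit()
    if res.tvalues[1] < -1.96:               # "error correction" by a t-test
        hits += 1

print("apparent error-correction rate:", hits / reps)
# far above 0.05: the t-ratio on y_{t-1} is not t-distributed here,
# which is the basis of the warning against the traditional t-test
```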
Article
Full-text available
We analyze democratic accountability in open economies based on different hypotheses about political evaluations and government responsiveness. Specifically, we assess whether citizens primarily rely on government policies or if they focus on economic outcomes resulting from these policies to evaluate governments. Our empirical analysis relies on Bayesian structural vector autoregression models for the British economy, aggregate monthly measures of public opinion, and economic evaluations from 1984 to 2006. We find that voters continuously monitor and strongly respond contemporaneously to changes in monetary and fiscal policies, but less to changes in macroeconomic outcomes. Voters also respond to policies differently when institutions change. When the Bank of England became politically independent, citizens shifted their attention toward fiscal policy, and the role of monetary policy in their evaluations decreased significantly. Finally, politicians respond to voting behavior by adjusting their policies in a sensible way. When vote intentions and approval decrease, the government reacts to the public by adjusting fiscal policy and, before the Bank of England became independent, also monetary policy.
Article
Full-text available
Do public opinion dynamics play an important role in understanding conflict trajectories between democratic governments and other rival groups? The majority of previous research has assumed either that public opinion is irrelevant to conflict processes or that the relationships are one-way causal chains. In this paper, we argue that neither of these assumptions is theoretically or empirically necessary. Instead, we interpret several theories of opinion dynamics and government behavior as particular causal links in models of reciprocity, accountability and credibility relationships. Theoretical expectations about the character of these linkages are translated into four distinct Bayesian structural time series models. These models allow us to include novel domestic public information where available, as well as relax the strict recursive structure that previous time series models have assumed. The models are fit to events data from the Israeli-Palestinian conflict with provisions for U.S. intervention and public support for peace. We find that a credibility model, which allows domestic public opinion to influence U.S., Palestinian and Israeli behavior within a given month, fits the data best. This credibility model supports research that predicts asymmetric reciprocity between democratic and non-democratic belligerents. For the credibility model there is evidence that more pacific Israeli opinion leads to more immediate hostility by the Palestinians toward the Israelis. The direction of this response suggests a negative feedback mechanism where low level conflict is maintained and momentum toward either all out war or dramatic peace is slowed.
Article
Full-text available
Bayesian approaches to the study of politics are increasingly popular. But Bayesian approaches to modeling multiple time series have not been critically evaluated. This is in spite of the potential value of these models in international relations, political economy, and other fields of our discipline. We review recent developments in Bayesian multi-equation time series modeling in theory testing, forecasting, and policy analysis. Methods for constructing Bayesian measures of uncertainty of impulse responses (Bayesian shape error bands) are explained. A reference prior for these models that has proven useful in short- and medium-term forecasting in macroeconomics is described. Once modified to incorporate our experience analyzing political data and our theories, this prior can enhance our ability to forecast, over the short and medium terms, complex political dynamics like those exhibited by certain international conflicts. In addition, we explain how contingent Bayesian forecasts, forecasts that embody policy counterfactuals, can be constructed. The value of these new Bayesian methods is illustrated in a reanalysis of the Israeli-Palestinian conflict of the 1980s.
Article
Full-text available
Analyzing macro-political processes is complicated by four interrelated problems: model scale, endogeneity, persistence, and specification uncertainty. These problems are endemic in the study of political economy, public opinion, international relations, and other kinds of macro-political research. We show how a Bayesian structural time series approach addresses them. Our illustration is a structurally identified, nine-equation model of the U.S. political-economic system. It combines key features of the model of Erikson, MacKuen, and Stimson (2002) of the American macropolity with those of a leading macroeconomic model of the United States (Sims and Zha, 1998; Leeper, Sims, and Zha, 1996). This Bayesian structural model, with a loosely informed prior, yields the best performance in terms of model fit and dynamics. This model 1) confirms existing results about the countercyclical nature of monetary policy (Williams 1990); 2) reveals informational sources of approval dynamics: innovations in information variables affect consumer sentiment and approval and the impacts on consumer sentiment feed-forward into subsequent approval changes; 3) finds that the real economy does not have any major impacts on key macropolity variables; and 4) concludes, contrary to Erikson, MacKuen, and Stimson (2002), that macropartisanship does not depend on the evolution of the real economy in the short or medium term and only very weakly on informational variables in the long term.
Article
Full-text available
This article deals with a variety of dynamic issues in the analysis of time-series–cross-section (TSCS) data. Although the issues raised are general, we focus on applications to comparative political economy, which frequently uses TSCS data. We begin with a discussion of specification and lay out the theoretical differences implied by the various types of dynamic models that can be estimated. It is shown that there is nothing pernicious in using a lagged dependent variable and that all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick, it is hard to differentiate between the various models; with slower speeds of adjustment, the various models make sufficiently different predictions that they can be tested against each other. As the speed of adjustment gets slower and slower, specification (and estimation) gets more and more tricky. We then turn to a discussion of estimation. It is noted that models with both a lagged dependent variable and serially correlated errors can easily be estimated; it is only ordinary least squares that is inconsistent in this situation. There is a brief discussion of lagged dependent variables combined with fixed effects and issues related to non-stationarity. We then show how our favored method of modeling dynamics combines nicely with methods for dealing with other TSCS issues, such as parameter heterogeneity and spatial dependence. We conclude with two examples.
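The claim that all dynamic models implicitly or explicitly contain a lagged dependent variable can be seen in one line of standard algebra: the static regression with AR(1) errors,

\[
y_t = \beta x_t + u_t, \qquad u_t = \rho u_{t-1} + e_t,
\]

is algebraically identical to the ADL(1,1)

\[
y_t = \rho y_{t-1} + \beta x_t - \rho \beta x_{t-1} + e_t,
\]

with the "common factor" restriction that the coefficient on x_{t-1} equals minus the product of the other two. Testing that restriction is one way to adjudicate among the dynamic specifications the article compares.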
Book
Full-text available
This book provides a wide-ranging account of the literature on co-integration and the modelling of integrated processes (those which accumulate the effects of past shocks). Data series which display integrated behaviour are common in economics, although techniques appropriate to analysing such data are of recent origin and there are few existing expositions of the literature. This book focuses on the exploration of relationships among integrated data series and the exploitation of these relationships in dynamic econometric modelling. The concepts of co-integration and error-correction models are fundamental components of the modelling strategy. This area of time-series econometrics has grown in importance over the past decade and is of interest to econometric theorists and applied econometricians alike. By explaining the important concepts informally, but also presenting them formally, the book bridges the gap between purely descriptive and purely theoretical accounts of the literature. The asymptotic theory of integrated processes is described and the tools provided by this theory are used to develop the distributions of estimators and test statistics. Practical modelling advice, and the use of techniques for systems estimation, are also emphasized. A knowledge of econometrics, statistics, and matrix algebra at the level of a final-year undergraduate or first-year graduate course in econometrics is sufficient for most of the book. Other mathematical tools are described as they occur. Available in OSO: http://www.oxfordscholarship.com/oso/public/content/economicsfinance/0198288107/toc.html
Article
Full-text available
This paper considers estimation and inference in panel vector autoregressions where (i) the individual effects are either random or fixed, (ii) the time-series properties of the model variables are unknown a priori and may feature unit roots and cointegrating relations, and (iii) the time dimension of the panel is short and its cross-sectional dimension is large. Generalized method of moments (GMM) and quasi maximum likelihood (QML) estimators are obtained and compared in terms of their asymptotic and finite-sample properties. It is shown that the asymptotic variances of the GMM estimators that are based on levels in addition to first differences of the model variables depend on the variance of the individual effects, whereas by construction the fixed effects QML estimator is not subject to this problem. Monte Carlo evidence is provided showing that the fixed effects QML estimator tends to outperform the various GMM estimators in finite sample under both normal and nonnormal errors. The paper also shows how the fixed effects QML estimator can be successfully used for unit root and cointegration tests in short panels.
Article
Full-text available
This paper provides an overview of topics in nonstationary panels: panel unit root tests, panel cointegration tests, and estimation of panel cointegration models. In addition, it surveys recent developments in dynamic panel data models.
Article
Full-text available
This paper overviews some recent developments in panel data asymptotics, concentrating on the nonstationary panel case and gives a new result for models with individual effects. Underlying recent theory are asymptotics for multi-indexed processes in which both indexes may pass to infinity. We review some of the new limit theory that has been developed, show how it can be applied and give a new interpretation of individual effects in nonstationary panel data. Fundamental to the interpretation of much of the asymptotics is the concept of a panel regression coefficient which measures the long run average relation across a section of the panel. This concept is analogous to the statistical interpretation of the coefficient in a classical regression relation. A variety of nonstationary panel data models are discussed and the paper reviews the asymptotic properties of estimators in these various models. Some recent developments in panel unit root tests and stationary dynamic panel regression models are also reviewed.
Book
This systematic and integrated framework for econometric modelling is organized in terms of three levels of knowledge: probability, estimation, and modelling. All necessary concepts of econometrics (including exogeneity and encompassing), models, processes, estimators, and inference procedures (centred on maximum likelihood) are discussed with solved examples and exercises. Practical problems in empirical modelling, such as model discovery, evaluation, and data mining are addressed, and illustrated using the software system PcGive. Background analyses cover matrix algebra, probability theory, multiple regression, stationary and non-stationary stochastic processes, asymptotic distribution theory, Monte Carlo methods, numerical optimization, and macro-econometric models. The reader will master the theory and practice of modelling non-stationary (cointegrated) economic time series, based on a rigorous theory of reduction.
Article
In this article, we highlight three points. First, we counter Grant and Lebo's claim that the error correction model (ECM) cannot be applied to stationary data. We maintain that when data are properly stationary, the ECM is an entirely appropriate model. We clarify that for a model to be properly stationary, it must be balanced. Second, we contend that while fractional integration techniques can be useful, they also have important weaknesses, especially when applied to many time series typical in political science. We also highlight two related but often ignored complications in time series: low power and overfitting. We argue that the statistical tests used in time-series analyses have little power to detect differences at many of the sample sizes typical in political science. Moreover, given the small sample sizes, many analysts overfit their time-series models. Overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. We argue that the results in the Grant and Lebo replications could easily be a function of overfitting.
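The low-power point can be checked directly (a sketch with assumed parameters): simulate a persistent but stationary AR(1) at a sample size common in political science and count how often the augmented Dickey-Fuller test correctly rejects the unit root.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
T, reps, rejections = 60, 500, 0

for _ in range(reps):
    e = rng.standard_normal(T)
    y = np.empty(T); y[0] = e[0]
    for t in range(1, T):
        y[t] = 0.95 * y[t - 1] + e[t]   # stationary, but near the unit root
    if adfuller(y, autolag="AIC")[1] < 0.05:
        rejections += 1

print("power against rho = 0.95 at T = 60:", rejections / reps)
# typically well under 0.5: at these sample sizes the test usually
# cannot tell this stationary series from a unit root
```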
Article
The authors examine the conditions under which democratic events, including elections, cabinet formations, and government dissolutions, affect asset markets. Where these events have less predictable outcomes, market returns are depressed and volatility increases. In contrast, where market actors can forecast the result, returns do not exhibit any unusual behavior. Further, political expectations condition how markets respond to the political process. When news causes market actors to update their political beliefs, market actors reallocate their portfolios, and overall market behavior changes. To measure political information, Professors Bernhard and Leblang employ sophisticated models of the political process. They draw on a variety of models of market behavior, including the efficient markets hypothesis, capital asset pricing model, and arbitrage pricing theory, to trace the impact of political events on currency, stock, and bond markets. The analysis will appeal to academics, graduate students, and advanced undergraduates across political science, economics, and finance.
Article
Theory: It has been argued that because researchers have not taken into account the long-memoried natures of certain political processes - especially the fact that some political time series appear to contain unit roots - some users of level Vector Autoregressions may have reached erroneous conclusions about the validity of important causal relationships and model specifications. Hypothesis: For the first time, this argument is evaluated. The difficulties associated with modeling long-memoried political processes are reviewed. Then several approaches to dealing with them are discussed. One of the most promising approaches, Fully-Modified Vector Autoregression (FM-VAR) is studied in detail. Method: The usefulness of FM-VAR is evaluated in a stylized Monte Carlo investigation and in reanalyses of major existing studies in political science - reanalyses that are representative of the ways in which level-VARs are employed in our discipline. Results: Our experiments indicate that FM-VAR performs well (particularly in terms of size) in small and large samples, in fully and near-integrated systems, and in stationary systems. Most important, use of FM-VAR calls into question some of the major causal findings and specification test results in published studies. The implication, therefore, is that taking into account the trend properties of political processes is essential in theory building in political science.
Article
We examine some of the consequences of financial globalization for democratization in emerging market economies by focusing on the currency markets of four Asian countries at different stages of democratic development. Using political data of various kinds—including a new events data series—and the Markov regime switching model from empirical macroeconomics, we show that in young and incipient democracies politics continuously causes changes in the probability of experiencing two different currency market equilibria: a high volatility “contagion” regime and a low volatility “fundamentals” regime. The kind of political events that affect currency market equilibration varies cross-nationally depending on the degree to which the polity of a country is democratic and its policymaking transparent. The results help us better gauge how and the extent to which democratization is compatible with financial globalization.
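The two-regime volatility setup the article describes can be fit with the Markov switching tools in statsmodels; a minimal sketch on simulated returns (data and regime labels are illustrative, not the authors' currency series):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
# simulate returns that alternate between calm and volatile periods
calm = rng.normal(0.0, 0.5, 300)
volatile = rng.normal(0.0, 2.0, 100)
returns = np.concatenate([calm[:200], volatile, calm[200:]])

# two-regime model: constant mean, regime-specific variance
# (loosely, "fundamentals" vs "contagion" in the article's terms)
mod = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())
# smoothed probability of one regime at each date; which column is the
# high-variance regime depends on the estimated ordering
print(res.smoothed_marginal_probabilities[:, 1])
```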
Article
The analysis of time-series data is fraught with problems of specification uncertainty and dynamic instability. Vector autoregression (VAR) is one attempt to overcome specification problems in time-series analysis, but this methodology has been criticized for being unparsimonious and potentially unstable through time. This article describes an important extension of VAR, one using Bayesian methods and allowing for time-varying parameters. These extensions improve VAR, making analysis less vulnerable to these criticisms. These VAR methods, developed by Doan, Litterman, and Sims (1984), provide a reasonable method for dealing with general time variation when theory does not provide useful a priori specification restrictions.
Article
Beginning with Mueller's (1970) seminal work, researchers have wrestled with explanations of the movement of presidential approval over time. In his initial argument, Mueller states that in tandem, the concepts underlying the coalition of minorities and rally round the flag variables predict that the president's popularity will continually decline over time and that international crises and similar events will explain short-term bumps and wiggles in this otherwise inexorable descent. (1970, 22) From this basis, Mueller posits "… a general downward trend in each president's popularity" (1970, 19) that is linear and deterministic over the course of a term. Others later moved away from arguments of linearity (e.g., Stimson 1976) and from the coalition of minorities concept (e.g., Kernell 1978), but these early characterizations of approval's time path, perpetuated in the "myth of the inexorable descent," remain to this day.
Article
At one time, the lag time for the implementation of methods from economics and other disciplines in political science was quite long, reflecting the newness of political methodology as well as a lack of statistical training. The articles by Ostrom and Smith and by Durr (in this volume) represent a departure from this longstanding lag time associated with political methodology. These articles, as well as others (e.g., Beck 1992), apply the methodology of unit root econometrics and error correction models with a much smaller lag time. The Ostrom and Smith article represents little lag time at all, as some of their results use methods not yet printed in econometrics journals!
Article
I am pleased to participate in this symposium because I agree that mutual criticism of our theories and of the methods used to test them helps to make social science objective and rational (Popper 1976). Space limitations preclude me from responding to all of the comments and suggestions of Beck and Williams (in this volume), but my interpretation of their key points is as follows: (1) estimating multiequation error correction models (ECMs) is unnecessary either because many theories provide us with exogeneity restrictions that imply single equation ECMs (Beck) or because statistical inference is unaffected by integration and cointegration in vector autoregressive systems (Williams); (2) using OLS in one step to estimate a single equation ECM is statistically superior to using the Engle-Granger two-step estimator (Beck); (3) commonly used classical hypothesis tests for nonstationarity favor the null hypothesis of a unit root, and therefore cannot be believed, but Bayesian inference with a flat prior solves this and other problems of inference (Williams); and (4) presidential approval is neither statistically nor conceptually an integrated random walk, but is either long memoried (Beck) or stationary (Williams). These are thought-provoking comments, and to one of them I will reply mea culpa. However, some of them need to be qualified, and others are incorrect. I begin by responding to comments (1) and (2) assuming that two or more time-series are integrated and cointegrated and then address comments (3) and (4), which question assumptions of and tests for integration and cointegration.
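For readers new to the estimators at issue in points (1) and (2), a sketch of both on simulated cointegrated data (parameter values are illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(11)
T = 200
x = np.cumsum(rng.standard_normal(T))            # I(1) driver
y = 1.0 + 0.8 * x + rng.standard_normal(T)       # cointegrated with x

# Engle-Granger two-step: (1) estimate the long-run relation by OLS,
# (2) put the lagged residual into a differenced regression
step1 = sm.OLS(y, sm.add_constant(x)).fit()
z = step1.resid
# note: residual-based cointegration tests require Engle-Granger
# critical values, not the standard ADF p-value printed here
print("ADF p-value on residuals:", adfuller(z)[1])
X2 = sm.add_constant(np.column_stack([z[:-1], np.diff(x)]))
step2 = sm.OLS(np.diff(y), X2).fit()
print("two-step error-correction coefficient:", step2.params[1])

# One-step single-equation ECM (the GECM), estimated by OLS in one pass
X1 = sm.add_constant(np.column_stack([y[:-1], np.diff(x), x[:-1]]))
onestep = sm.OLS(np.diff(y), X1).fit()
print("one-step error-correction coefficient:", onestep.params[1])
```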
Article
For political scientists who engage in longitudinal analyses, the question of how best to deal with nonstationary time-series is anything but settled. While many believe that little is lost when the focus of empirical models shifts from the nonstationary levels to the stationary changes of a series, others argue that such an approach erases any evidence of a long-term relationship among the variables of interest. But the pitfalls of working directly with integrated series are well known, and post-hoc corrections for serially correlated errors often seem inadequate. Compounding (or perhaps alleviating, if one believes in the power of selective perception) the difficult question of whether to difference a time-series is the fact that analysts have been forced to rely on subjective diagnoses of the stationarity of their data. Thus, even if one felt strongly about the superiority of one modeling approach over another, the procedure for determining whether that approach is even applicable can be frustrating.
Article
Dramatic world change has stimulated interest in research questions about the dynamics of politics. We have seen increases in the number of time series data sets and the length of typical time series. But three shortcomings are prevalent in published time series analysis. First, analysts often estimate models without testing restrictions implied by their specification. Second, researchers link the theoretical concept of equilibrium with cointegration and error correction models. Third, analysts often do a poor job of interpreting results. The consequences include weak connections between theory and tests, biased estimates, and incorrect inferences. We outline techniques for estimating linear dynamic regressions with stationary data and weakly exogenous regressors. We recommend analysts (1) start with general dynamic models and test restrictions before adopting a particular specification and (2) use the wide array of information available from dynamic specifications. We illustrate this strategy with data on Congressional approval and tax rates across OECD countries.
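Recommendation (1) translates directly into code (a sketch with hypothetical variable names): estimate a general ADL first, then test the restrictions a simpler specification would impose before adopting it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
T = 120
x = rng.standard_normal(T)
y = np.empty(T); y[0] = 0.0
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t] - 0.3 * x[t - 1] + rng.standard_normal()

df = pd.DataFrame({"y": y, "x": x})
df["y_l1"], df["x_l1"] = df["y"].shift(1), df["x"].shift(1)
df = df.dropna()

# start general: ADL(1,1)
general = smf.ols("y ~ y_l1 + x + x_l1", data=df).fit()
# would a static model (no dynamics) survive? test y_l1 = 0 and x_l1 = 0
print(general.f_test("y_l1 = 0, x_l1 = 0"))
# would an AR(1)-error ("common factor") model survive? that restriction
# requires the x_l1 coefficient to equal -(y_l1 coefficient * x coefficient)
b = general.params
print("common-factor discrepancy:", b["x_l1"] + b["y_l1"] * b["x"])
```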
Article
Has there been a structural change in the way U.S. presidents use force abroad since the nineteenth century? In this article, I investigate historical changes in the use of force by U.S. presidents using Bayesian changepoint analysis. In doing so, I present an integrated Bayesian approach for analyzing changepoint problems in a Poisson regression model. To find the nature of the breaks, I estimate parameters of the Poisson regression changepoint model using Chib's (1998) hidden Markov model algorithm and Frühwirth-Schnatter and Wagner's (2006) data augmentation method. Then, I utilize transdimensional Markov chain Monte Carlo methods to detect the number of breaks. Analyzing yearly use of force data from 1890 to 1995, I find that, controlling for the effects of the Great Depression and the two world wars, the relationship between domestic conditions and the frequency of the use of force abroad fundamentally shifted in the 1940s.
Article
The lack of temporal disaggregation in conflict data has so far presented a strong obstacle to analyzing the short-term dynamics of military conflict. Using a novel data set of hourly dyadic conflict intensity scores drawn from Twitter and other social media sources during the Gaza Conflict (2008-2009), the author attempts to fill a gap in existing studies. The author employs a vector autoregression (VAR) to measure changes in Israel's and Hamas's military response dynamics immediately following two important junctures in the conflict: the introduction of Israeli ground troops and the UN Security Council vote. The author finds that both Hamas's and Israel's response to provocations by the other side increase (both by about twofold) immediately after the ground invasion, but following the UN Security Council vote, Israel's response is cut in half, while Hamas's slightly increases. In addition, the author provides a template for researchers to harness social media to capture the micro-dynamics of conflict.
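The modeling step itself is standard; a minimal sketch of a two-variable VAR with impulse responses on simulated intensity scores (illustrative data, not the author's Twitter-derived series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(13)
T = 300
israel, hamas = np.zeros(T), np.zeros(T)
for t in range(1, T):   # each side partly reciprocates the other's last move
    israel[t] = 0.4 * israel[t-1] + 0.3 * hamas[t-1] + rng.standard_normal()
    hamas[t] = 0.5 * hamas[t-1] + 0.2 * israel[t-1] + rng.standard_normal()

data = pd.DataFrame({"israel": israel, "hamas": hamas})
res = VAR(data).fit(maxlags=8, ic="aic")   # lag length chosen by AIC
print(res.summary())
irf = res.irf(24)                          # responses over 24 periods
irf.plot(orth=True)                        # orthogonalized impulse responses
```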
Article
In recent years, more and more social scientists have begun to view the world as inherently probabilistic (Suppes 1984; Gigerenzer 1987). Without detailing the philosophical underpinnings of such a view, this subtle movement away from deterministic positivism has been fed, in large part, by a recognition of the indeterminacy of strategic interaction among individuals and the inevitability of uncertainty in social relations (Boudon 1986). Chaos theory provides an alternative viewpoint from which to view indeterminacy because the complexity we see in the real world may, in theory, be a reflection of chaotic dynamics resulting from simple deterministic structures (Huckfeldt 1990). Particularly when processes are generated by social or strategic interaction among actors, nonlinear models provide useful representations and chaotic outcomes become conceivable. In short, the existence of a complex social reality is in itself inadequate evidence of indeterminacy.
Article
This paper considers estimation and hypothesis testing in linear time series when some or all of the variables have (possibly multiple) unit roots. The motivating example is a vector autoregression with some unit roots in the companion matrix, which might include polynomials in time as regressors. Parameters that can be written as coefficients on mean zero, nonintegrated regressors have a jointly normal asymptotic distribution, converging at the rate of T^(1/2). In general, the other coefficients (including the coefficient on polynomials in time), and associated t and F test statistics, have nonstandard asymptotic distributions. Copyright 1990 by The Econometric Society.
Article
The unit root hypothesis is examined allowing a possible one-time change in the level or in the slope of the trend function. When fluctuations are stationary around a breaking trend function, standard tests cannot reject the unit root, even asymptotically. Consistent tests are derived and applied to the Nelson-Plosser data set (allowing a change in level for the 1929 crash) and to the postwar quarterly real GNP series (allowing a change in slope after 1973). The unit root hypothesis is rejected at a high confidence level for most series. Fluctuations are stationary. The only persistent "shocks" are the 1929 crash and the 1973 oil price shock. Copyright 1989 by The Econometric Society.
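Perron's insight, that a one-time break can masquerade as a unit root, has a direct descendant in statsmodels: the Zivot-Andrews test, which allows an estimated break. A minimal usage sketch on simulated data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

rng = np.random.default_rng(17)
T = 200
# trend-stationary series with a one-time level shift at t = 100
y = 0.05 * np.arange(T) + rng.standard_normal(T)
y[100:] += 5.0

print("ADF p-value:          ", adfuller(y, regression="ct")[1])
print("Zivot-Andrews p-value:", zivot_andrews(y, regression="c")[1])
# the plain ADF test often fails to reject the unit root here, while
# the break-robust test usually rejects, echoing Perron's point
```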
... make a serious attempt to provide some theoretical motivation for their fractionally cointegrated model of strategic party government
  • Lebo
  • McGlynn
  • Koger
but they consider sample sizes of 500 and 1000. GL essentially focus on the series used by students of macro-American politics, assuming sample sizes of less than 100. With such small sample sizes, estimation bias could be a serious problem. See, for instance,
  • Banerjee
... note that models of stationary relationships, the departures from which have nonconstant variance, could be useful, but they do not explain how
  • Banerjee
... are more equivocal. They recommend against estimating unbalanced equations, but Maddala and Kim quote Banerjee et al. to say that unbalanced equations are "valid tools of inference as long as the correct critical values are used"; see, for instance,
  • Banerjee
A discussion of these methods along with applications from the study of international relations and American politics can be found in
  • Banerjee
Reaction in a rational expectations arms race model of U.S.-Soviet rivalry
  • Williams
The methodology of cointegration
  • Beck
GL (fn. 35) mention a companion study that analyzes "how multiple endogenous variables re-equilibrate to each other"
  • McGlynn
  • Lebo
What moves macropartisanship?
  • Erikson
Electoral participation: A cause or a consequence of elite polarization
  • B. Dan Wood
  • Soren Jordan
Empirical regime-specific models of international, inter-group conflict and politics
  • Patrick Brandt
... In KLW's and GL's articles, an unbalanced regression is one in which "the regressand is not of the same order of integration as the regressors, or any linear combination of the regressors"
  • Banerjee