About
Publications: 485
Reads: 142,411
Citations: 35,907
Introduction
Current institution: Climate Econometrics
Current position: Deputy Director
Publications (485)
UK top income shares have varied hugely over the past two centuries, ranging from more than 30% to less than 7% of pre‐tax national income allocated to the top 1 percentile. We build a congruent dynamic linear regression model of the top 1% income share allowing for economic, political and social factors. Saturation estimation is used to model outl...
Net zero greenhouse gas emissions by 2050, the UK’s current target, requires bridging a dramatic energy transition and eliminating all other net sources of emissions while ensuring a just transition. Key components like renewable electricity generation and electric vehicles are well developed, but many issues remain. Public support for a green econ...
The UK relationship between nominal wage inflation and the unemployment rate is unstable. Over sub‐periods of the last 160 years of turbulent data, Phillips curve slopes range across strongly negative, slightly negative, flat, slightly positive and strongly positive values. Our constant‐parameter congruent model of real wages explains these instabilities, y...
We review key stages in the development of general‐to‐specific modelling (Gets). Selecting a simplified model from a more general specification was initially implemented manually, then through computer programs to its present automated machine learning role to discover a viable empirical model. Throughout, Gets applications faced many criticisms,...
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life...
Comparisons between alternative scenarios are used in many disciplines, from macroeconomics through epidemiology to climate science, to help with planning future responses. Differences between scenario paths are often interpreted as signifying likely differences between outcomes that would materialise in reality. However, even when using correctly...
By its emissions of greenhouse gases, economic activity is the source of climate change which affects pandemics that in turn can impact badly on economies. Across the three highly interacting disciplines in our title, time-series observations are measured at vastly different data frequencies: very low frequency at 1000-year intervals for paleoclima...
Objective: We analyze the number of recorded cases and deaths of COVID-19 in many parts of the world, aiming to understand the complexities of the data and produce regular forecasts. Methods: The SARS-CoV-2 virus that causes COVID-19 has affected societies in all corners of the globe but with vastly differing experiences across countries....
We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoreti...
Successful modeling of observational data requires jointly discovering the determinants of the underlying process and the observations from which it can be reliably estimated, given the near impossibility of pre-specifying both. To do so requires avoiding many potential problems, including substantive omitted variables; unmodeled non-stationarity a...
Faculty and graduates of the University of Oxford have played a significant role in the history of econometrics from an early date. The term econometrics was only formulated by Ragnar Frisch in the 1930s, but in the seventeenth century, William Petty created a discipline that he called Political Arithmetick, a forerunner of quantitative economics t...
The Covid-19 pandemic has put forecasting under the spotlight, pitting epidemiological models against extrapolative time-series devices. We have been producing real-time short-term forecasts of confirmed cases and deaths using robust statistical models since 20 March 2020. The forecasts are adaptive to abrupt structural change, a major feature of t...
Economic forecasting is difficult, largely because of the many sources of nonstationarity influencing observational time series. Forecasting competitions aim to improve the practice of economic forecasting by providing very large data sets on which the efficacy of forecasting methods can be evaluated. We consider the general principles that seem to...
We have been publishing real-time forecasts of confirmed cases and deaths for COVID-19 from mid-March 2020 onwards, published at www.doornik.com/COVID-19. These forecasts are short-term statistical extrapolations of past and current data. They assume that the underlying trend is informative of short term developments, without requiring other assump...
‘Fat big data’ characterise data sets that contain many more variables than observations. We discuss the use of both principal components analysis and equilibrium correction models to identify cointegrating relations that handle stochastic trends in non-stationary fat data. However, most time series are wide-sense non-stationary—induced by the join...
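A minimal sketch of the first step for such 'fat' data: with far more variables than observations, principal components can still be extracted (at most T of them) and the leading component can pick up a shared stochastic trend. This is an illustrative Python toy under assumed dimensions and a single-trend design, not the paper's empirical setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 40, 200                      # 'fat': many more variables than observations
common_trend = rng.standard_normal(T).cumsum()        # one shared stochastic trend
X = np.outer(common_trend, rng.standard_normal(N)) + rng.standard_normal((T, N))

Xs = (X - X.mean(0)) / X.std(0)     # standardize each series
# The SVD yields at most T principal components even though N >> T.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
pc1 = U[:, 0] * s[0]                # first PC tracks the common trend (up to sign)
print("correlation with trend:", np.corrcoef(pc1, common_trend)[0, 1])
```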
There are numerous possible approaches to building a model of a given data set, whether it be time series, cross section or panel. In economics, imposing a ‘theory model’ on the data, by simply estimating its parameters, is common. In ‘big data’ analyses, various methods of selecting relationships are used (aka ‘data mining’), but in practice, mode...
While empirical modelling is primarily concerned with understanding the interactions between variables to recover the underlying ‘truth’, the aim of forecasts is to generate useful predictions about the future regardless of the model. We explain why models must be different in non-stationary processes from those that are ‘optimal’ under stationarity...
In a world that is always changing, ‘conclusion’ seems an oxymoron. But we can summarize the story. First, that non-stationary data are pervasive in observational disciplines. Second, there are two main sources of non-stationarity deriving from evolutionary change leading to stochastic trends that cumulate past shocks and abrupt changes, especially...
Empirical models used in disciplines as diverse as economics through to climatology analyze data assuming observations are from stationary processes even though the means and variances of most ‘real world’ time series change. We discuss some key sources of non-stationarity in demography, economics, politics and the environment, noting that (say) no...
Structural changes are pervasive from innovations affecting many disciplines. These can shift distributions, altering relationships and causing forecast failure. Many empirical models also have outliers: both can distort inference. When the dates of shifts are not known, they need to be detected to be handled, usually by creating an indicator varia...
This chapter provides four primers. The first considers what a time series is and notes some of the major properties that time series might exhibit. The second extends that to distinguish stationary from non-stationary time series, where the latter are the prevalent form, and indeed provide the rationale for this book. The third describes a specifi...
The previous Chapter noted there are benefits of non-stationarity, so we now consider that aspect in detail. Non-stationarity can be caused by stochastic trends and shifts of data distributions. The simplest example of the first is a random walk of the kind created by Yule, where the current observation equals the previous one perturbed by a random...
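The random-walk mechanism described here, where the current observation equals the previous one perturbed by a random shock, is easy to illustrate by simulation. A minimal Python sketch (parameter values are illustrative, not from the book), contrasting the growing variance of the random walk with a stationary AR(1):

```python
import numpy as np

rng = np.random.default_rng(42)
T, n_reps = 100, 1000

# Random walk: y_t = y_{t-1} + e_t, so shocks cumulate permanently.
shocks = rng.standard_normal((n_reps, T))
random_walks = shocks.cumsum(axis=1)

# Stationary AR(1) driven by the same shocks, for comparison.
ar1 = np.zeros((n_reps, T))
for t in range(1, T):
    ar1[:, t] = 0.5 * ar1[:, t - 1] + shocks[:, t]

# The random walk's variance grows roughly linearly with t,
# while the AR(1) variance settles near sigma^2 / (1 - 0.5**2).
print("variance at t=99:", random_walks[:, -1].var(), ar1[:, -1].var())
```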
It may be thought that testing for outliers and shifts everywhere in a sample might adversely affect statistical inference. Fortunately, the rigorous and innovative analysis by Søren Johansen and Bent Nielsen for impulse-indicator saturation (IIS) allays such concerns. Under the null of no outliers, the limiting distribution of the IIS estimator of...
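The simplest version of IIS, a 'split-half' pass, can be sketched in a few lines: add an impulse dummy for every observation in each half of the sample in turn, keep the significant ones, and read the retained dates as candidate outliers or shifts. The following Python fragment is an illustrative toy (the function name and fixed t-ratio cut-off are mine; the practical implementation in Autometrics uses a multi-block tree search):

```python
import numpy as np

def iis_split_half(y, X, crit=2.0):
    """Sketch of split-half impulse-indicator saturation: saturate each
    half of the sample with impulse dummies, retain those whose t-ratios
    exceed `crit`. Illustrative only, not the Autometrics algorithm."""
    T = len(y)
    retained = []
    for half in (range(T // 2), range(T // 2, T)):
        D = np.zeros((T, len(half)))          # one impulse per date in this half
        for j, t in enumerate(half):
            D[t, j] = 1.0
        Z = np.column_stack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        s2 = resid @ resid / (T - Z.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.pinv(Z.T @ Z)))
        tvals = beta / se
        retained += [t for j, t in enumerate(half)
                     if abs(tvals[X.shape[1] + j]) > crit]
    return retained                            # candidate outlier/shift dates
```

For a mean-only model, pass X = np.ones((T, 1)); step-indicator variants replace the impulses with cumulative (step) dummies.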
The M4 forecast competition required forecasts of 100,000 time series at different frequencies. We provide a detailed description of the calibrated average of Rho and Delta (Card) forecasting method that we developed for this purpose. Delta estimates a dampened trend from the growth rates, while Rho estimates an adaptive but simple autoregressive m...
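As a rough illustration of the two ingredients named in the abstract, not the authors' calibrated implementation (whose damping scheme and weights are documented in the paper), one can average a dampened-trend extrapolation of growth rates with a simple autoregression:

```python
import numpy as np

def card_style_forecast(y, h=12, damp=0.9, rho_max=0.9, w=0.5):
    """Toy combination in the spirit of Card: a dampened-trend path from
    the average growth rate ('Delta') is averaged with a simple,
    stability-clipped AR(1) in levels ('Rho'). All constants here are
    illustrative, not the calibrated M4 values."""
    y = np.asarray(y, dtype=float)
    g = np.mean(np.diff(y))                        # average growth rate
    steps = np.cumsum(damp ** np.arange(1, h + 1))
    delta = y[-1] + g * steps                      # dampened-trend path
    yc = y - y.mean()
    rho = np.clip(yc[1:] @ yc[:-1] / (yc[:-1] @ yc[:-1]), -rho_max, rho_max)
    ar = y.mean() + (y[-1] - y.mean()) * rho ** np.arange(1, h + 1)
    return w * delta + (1 - w) * ar                # simple average of the two
```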
This open access book focuses on the concepts, tools and techniques needed to successfully model ever-changing time-series data. It emphasizes the need for general models to account for the complexities of the modern world and how these can be applied to a range of issues facing Earth, from modelling volcanic eruptions, carbon dioxide emissions and...
Ordinary least squares estimation of an impulse-indicator coefficient is inconsistent, but its variance can be consistently estimated. Although the ratio of the inconsistent estimator to its standard error has a t-distribution, that test is inconsistent: one solution is to form an index of indicators. We provide Monte Carlo evidence that including...
The adoption as policy models by central banks of representative agent New Keynesian dynamic stochastic general equilibrium models has been widely criticised, including for their simplistic micro-foundations. At the Bank of England, the previous generation of policy models is seen in its 1999 medium-term macro model (MTMM). Instead of improving tha...
During his period at LSE from the early 1960s to the mid-1980s, John Denis Sargan rose to international prominence and LSE emerged as the world’s leading centre for econometrics. Within this context, we examine the life of Denis Sargan, describe his major research accomplishments, recount the work of his many doctoral students and track this remark...
Macroeconomic time-series data are aggregated, inaccurate, non-stationary, collinear and rarely match theoretical concepts. Macroeconomic theories are incomplete, incorrect and changeable: location shifts invalidate the law of iterated expectations and 'rational expectations' are then systematically biased. Empirical macro-econometric models are no...
Economic policy agencies produce forecasts with accompanying narratives, and base policy changes on the resulting anticipated developments in the target variables. Systematic forecast failure, defined as large, persistent deviations of the outturns from the numerical forecasts, can make the associated narrative false, which would in turn question t...
When empirically modelling the U.S. demand for money, Milton Friedman more than doubled the observed initial stock of money to account for a "changing degree of financial sophistication" in the United States relative to the United Kingdom. This note discusses effects of this adjustment on Friedman's empirical models.
This paper develops a new approach for evaluating multi-step system forecasts with relatively few forecast-error observations. It extends the work of Clements and Hendry (1993) by using that of Abadir et al. (2014) to generate “design-free” estimates of the general forecast-error second-moment matrix when there are relatively few forecast-er...
Data spanning long time periods, such as that over 1860–2012 for the UK, seem likely to have substantial errors of measurement that may even be integrated of order one, but which are probably cointegrated for cognate variables. We analyze and simulate the impacts of such measurement errors on parameter estimates and tests in a bivariate cointegrate...
The concept of a ‘rational expectation’ (RE), defined as the conditional expectation of next period’s outcome, given all relevant information today and assuming a known distribution, should be forgotten despite its widespread use in economics to model how agents form expectations about future outcomes. This critique applies most forcefully to ‘pres...
We present a methodology for detecting breaks at any point in time-series regression models using an indicator saturation approach, applied here to modelling climate change. Building on recent developments in econometric model selection for more variables than observations, we saturate a regression model with a full set of designed break functions....
Denis Sargan was the leading British econometrician of his generation, playing a central role in establishing the technical basis for modern time-series econometric analysis. In a distinguished 40-year career as teacher, researcher, and practitioner Sargan transformed the role of econometrics in the analysis of macroeconomic time series and the tea...
Economic forecasting may go badly awry when there are structural breaks, such that the relationships between variables that held in the past are a poor basis for making predictions about the future. We review a body of research that seeks to provide viable strategies for economic forecasting when past relationships can no longer be relied upon. We...
It is argued that model selection and robust estimation should be handled jointly. Impulse indicator saturation makes that possible, but leads to the situation where there are more variables than observations. This is illustrated by revisiting the analysis of Tobin's food data.
We recommend a major shift in the Econometrics curriculum for both graduate and undergraduate teaching. It is essential to include a range of topics that are still rarely addressed in such teaching, but are now vital for understanding and conducting empirical macroeconomic research. We focus on a new approach to macro-econometrics teaching, since e...
To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a ‘split-half’ analysis, the simplest specialization of a multipl...
Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role...
We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium-correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and inc...
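A familiar example of the contrast is between forecasts that revert to an estimated equilibrium mean and a device that extrapolates the most recent change. A minimal sketch (illustrative functions under assumed parameter values, not the paper's device class):

```python
import numpy as np

def eqcm_forecast(y, mu, lam=0.8, h=4):
    """Equilibrium-correction forecasts that revert to the in-sample
    mean `mu`: systematically biased if the equilibrium has shifted."""
    path, last = [], y[-1]
    for _ in range(h):
        last = mu + lam * (last - mu)
        path.append(last)
    return np.array(path)

def robust_device(y, h=4):
    """A simple robust device: extrapolate the latest level by the most
    recent change (a 'differenced' forecast), so a location shift at the
    forecast origin feeds into the forecasts after one observation."""
    return y[-1] + np.diff(y)[-1] * np.arange(1, h + 1)
```

When the equilibrium mean shifts, eqcm_forecast keeps pulling forecasts back to the old mu, producing systematic failure, while the robust device adapts to the new level, at the cost of occasionally extrapolating a one-off jump.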
A synthesis of the authors' groundbreaking econometric research on automatic model selection, which uses powerful computational algorithms and theory evaluation.
Economic models of empirical phenomena are developed for a variety of reasons, the most obvious of which is the numerical characterization of available evidence, in a suitably parsimonious...
Economic theories are often fitted directly to data to avoid any issues of model selection. This is an excellent strategy when the theory is complete and correct, but less successful otherwise. We consider embedding a theory model that specifies the set of n relevant exogenous variables, x_t, within a larger set of n+k candidate variables, (x_t; w_t),...
This book is a collection of 14 original research articles presented at the conference Nonlinear Time Series Econometrics that was held in Ebeltoft, Denmark, in June 2012. The conference gathered several eminent time series econometricians to celebrate the work and outstanding career of Professor Timo Teräsvirta, one of the leading scholars in the...
Although a general unrestricted model may under-specify the data generation process, especially when breaks occur, model selection can still improve over estimating a prior specification. Impulse-indicator saturation (IIS) can ‘correct’ non-constant intercepts induced by location shifts in omitted variables, which surprisingly leave slope parameter...
We outline six important hazards that can be encountered in econometric modelling of time-series data, and apply that analysis to demonstrate errors in the empirical modelling of climate data in Beenstock et al. (2012). We show that the claim made in Beenstock et al. (2012) as to the different degrees of integrability of CO2 and temperature is inco...
We develop forecast-error taxonomies when there are unmodeled variables, forecast ‘off-line’. We establish three surprising results. Even when an open system is correctly specified in-sample with zero intercepts, despite known future values of strongly exogenous variables, changes in dynamics can induce forecast failure when they have non-zero mean...
High dimensional general unrestricted models (GUMs) may include important individual determinants, many small relevant effects, and irrelevant variables. Automatic model selection procedures can handle more candidate variables than observations, allowing substantial dimension reduction from GUMs with salient regressors, lags, non-linear transformat...
We demonstrate major flaws in the statistical analysis of Beenstock et al. (2012), discrediting their initial claims as to the different degrees of integrability of CO2 and temperature.
This article, which proposes a new approach to forecasting breaks by focusing on the role of information, begins by outlining the eight conditions that are necessary to successfully forecast a break. Section 2 then considers the concepts of unpredictability and information to address the first necessary condition. Section 3 separates information in...
This text provides up-to-date coverage of both new developments and well-established fields in the sphere of economic forecasting. The articles aim to provide accounts of the key concepts, subject matter, and techniques in a number of diverse but related areas. It covers the ways in which the availability of ever more plentiful data and computation...
We consider selecting an econometric model when there is uncertainty over both the choice of variables and the occurrence and timing of multiple location shifts. The theory of general-to-simple (Gets) selection is outlined and its efficacy demonstrated in a new set of simulation experiments first for a constant model in orthogonal variables, wher...