# Chris Chatfield's research while affiliated with University of Bath and other places

**What is this page?**

This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.

If you're a ResearchGate member, you can follow this page to keep up with this author's work.

If you are this author, and you don't want us to display this page anymore, please let us know.


## Publications (48)

This paper presents a general framework for constructing a predictive distribution of the exposure to an environmental hazard
sustained by a randomly selected member of a designated population. The individual’s exposure is assumed to arise from random
movement through the environment, resulting in a distribution of exposure that can be used for env...

Chris Chatfield draws on his experience of over 40 years of forecasting to make some practical recommendations about the choice and implementation of forecasting methods and to offer advice to forecasting practitioners and consultants. Copyright International Institute of Forecasters, 2007

The modeling of complex environmental stochastic systems is a difficult task. There may be little or no background theory, the data may be of doubtful quality, and it is not always clear what the objectives are. Nevertheless, scientists have made substantial progress in getting a better insight into environmental relationships and changes. However,...

This paper identifies a portfolio of variable combinations that can be used as sets of control variables in research on U.S. state economic growth. I examine 31 variables that have either been used or suggested by previous research. These yield over 2 billion variable combinations, each representing a potential set of control variables. I manipulat...

As we go through life, everyone makes forecasts all the time, often without realising it. Sadly these forecasts are often (very) inaccurate. Chris Chatfield looks at the chequered history of forecasting and asks how we might do it better using time-series data, and what statistical techniques and models might help us.

This article describes the use of a probabilistic model to estimate personal exposure to airborne pollutants. Such estimates are important when assessing, for example, the potential effects of air pollution on health and in developing related policy. An individual's personal exposure will be determined by local pollution sources which will change t...

Predicting future values of a time series poses special problems because successive observations are not usually independent but are correlated through time. Consequently, methods such as multiple regression are not generally suitable. The models that are used often have to cope with two sources of variation, trend and seasonality. In this entry, s...

This paper describes a model framework for estimating the exposure to pollutants for a randomly selected member of a designated population. Such exposures can then be used in studies of the effects on health. The model makes intrinsic use of the random time-activity patterns of such individuals that have an important role in determining variations...

This paper describes a conceptual framework within which models can be developed for predicting the exposure to specified pollutants, of a randomly selected member of a designated population. Such predicted exposures can in turn be used for such things as the analysis of their impacts on human health. The model uses randomly selected time-activity...

First order stationary autoregressive (AR(1)) models are introduced for which there exists a linear relation between the expectations of the observations, and where it is readily possible to arrange the marginal distributions to be other than normal.
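The non-normal construction itself is in the paper; as a baseline, the familiar Gaussian AR(1) recursion can be simulated as follows (a minimal sketch with illustrative names, not the paper's method):

```python
import random

def simulate_ar1(phi, n, seed=0):
    """Simulate a stationary Gaussian AR(1) process
    x_t = phi * x_{t-1} + e_t,  e_t ~ N(0, 1),  |phi| < 1.
    (Baseline illustration only; the paper's point is that the
    marginal distributions can be arranged to be non-normal.)"""
    rng = random.Random(seed)
    # Draw x_0 from the stationary marginal N(0, 1 / (1 - phi^2))
    x = rng.gauss(0.0, (1.0 / (1.0 - phi ** 2)) ** 0.5)
    series = [x]
    for _ in range(n - 1):
        x = phi * x + rng.gauss(0.0, 1.0)
        series.append(x)
    return series
```

Starting from the stationary marginal (rather than an arbitrary value) means the whole simulated series has the same distribution at every time point, with no burn-in needed.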

The paper reflects on the author's experience and discusses how statistical theory, sound judgment and knowledge of the context can work together to best advantage when tackling the wide range of statistical problems that can arise in practice. The phrase `pragmatic statistical inference' is introduced.

This paper reviews recent developments in time series forecasting with particular emphasis on the use of multivariate and non-linear models, the results of recent forecasting competitions, the computation of prediction intervals and the effects of model uncertainty.

Exponential smoothing (ES) forecasting methods are widely used but are often discussed without recourse to a formal statistical framework. This paper reviews and compares a variety of potential models for ES. As well as autoregressive integrated moving average and structural models, a promising class of dynamic non-linear state space models is des...
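The simplest member of this family, simple exponential smoothing, reduces to a single recursion on the level; a minimal sketch (illustrative names, not the paper's notation):

```python
def simple_exponential_smoothing(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing:
    level_t = alpha * y_t + (1 - alpha) * level_{t-1}."""
    level = y[0]            # initialise the level at the first observation
    forecasts = [level]     # forecasts[0] is the forecast of y[1]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
        forecasts.append(level)
    return forecasts        # forecasts[t] is the forecast of y[t + 1]
```

With alpha = 1 the method degenerates to the naive "last value" forecast; with alpha = 0 the forecast never moves from its initial level.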

Computing prediction intervals (PIs) is an important part of the forecasting process intended to indicate the likely uncertainty in point forecasts. The commonest method of calculating PIs is to use theoretical formulae conditional on a best-fitting model. If a normality assumption is used, it needs to be checked. Alternative computational procedur...
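Under the normality assumption mentioned above, the theoretical formulae reduce to a symmetric interval around the point forecast; a minimal illustration (function name and inputs are hypothetical):

```python
def normal_prediction_interval(point_forecast, forecast_sd, z=1.96):
    """Symmetric prediction interval under a normality assumption:
    point forecast +/- z * standard deviation of the forecast error.
    z = 1.96 gives a nominal 95% interval."""
    half_width = z * forecast_sd
    return (point_forecast - half_width, point_forecast + half_width)

lo, hi = normal_prediction_interval(100.0, 5.0)  # approx (90.2, 109.8)
```

The interval is only as trustworthy as the normality assumption and the fitted model behind `forecast_sd`, which is exactly why the paper stresses checking that assumption and considering computational alternatives.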

This case-study fits a variety of neural network (NN) models to the well-known air line data and compares the resulting forecasts with those obtained from the Box–Jenkins and Holt–Winters methods. Many potential problems in fitting NN models were revealed such as the possibility that the fitting routine may not converge or may converge to a local m...

Neural networks are not a universal panacea for time series
forecasting. The empirical evidence is mixed. Various problems in
modelling and fitting time series with NNs are identified. There is a
need for greater co-operation between statisticians, forecasters and
computer scientists with their widely different skills and backgrounds.

In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least...

Based on the Chatfield–Prothero data, several neural networks (NNs) are developed and the resulting forecasts are compared with those from Box–Jenkins and Holt–Winters models. The results show that NN models have limitations and that it is not reasonable to use them in an automatic fashion. Ultimately,...

Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantially more accurate than those from the relevant benchmark method. An inspection of global temperature data suggests that temperature is subject to irregular variations on all relevant time scales, and...

This paper takes a broad, pragmatic view of statistical inference to include all aspects of model formulation. The estimation of model parameters traditionally assumes that a model has a prespecified known form and takes no account of possible uncertainty regarding the model structure. This implicitly assumes the existence of a 'true' model, which...

The purpose of the M2-Competition is to determine the post sample accuracy of various forecasting methods. It is an empirical study organized in such a way as to avoid the major criticism of the M-Competition that forecasters in real situations can use additional information to improve the predictive accuracy of quantitative methods. Such informati...

Several general approaches to calculating interval forecasts are described and compared. They include the use of theoretical formulae based on a fitted probability model, various "approximate" formulae (which should be avoided), and empirically-based, simulation, and resampling procedures. Some general comments are made as to why prediction interv...

Commentary on Armstrong, J. Scott, and Collopy, Fred, (1992), “Error measures for generalizing about forecasting
methods: Empirical comparisons,” International Journal of Forecasting, 8, 69-80.

Yar and Chatfield (1990) have proposed a method of constructing prediction intervals for the additive Holt-Winters forecasting procedure and this companion paper extends the results to the multiplicative seasonal case. In contrast to the additive case, it is shown that the width of ‘multiplicative’ prediction intervals will depend on the time origi...

Prediction interval formulae are derived for the Holt-Winters forecasting procedure with an additive seasonal effect. The formulae make no assumptions about the ‘true’ underlying model. The results are contrasted with those obtained from various alternative approaches to the calculation of prediction intervals. Some large discrepancies are noted an...

Time-series forecasting methods may be classified into univariate (projection) methods and multivariate methods. The choice amongst methods depends on a variety of considerations including the objectives, the type of data, and whether an automatic or non-automatic approach is to be used. The value of forecasting competitions is reviewed and suggestio...

The Holt-Winters forecasting procedure is a variant of exponential smoothing which is simple, yet generally works well in practice, and is particularly suitable for producing short-term forecasts for sales or demand time-series data. Some practical problems in implementing the method are discussed, including the normalization of seasonal indices, t...
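The additive variant of the procedure can be sketched with the standard level/trend/seasonal recursions (a textbook form with a crude initialisation; the practical refinements the paper discusses, such as normalising the seasonal indices, are not reproduced here):

```python
def holt_winters_additive(y, alpha, beta, gamma, period):
    """One-step-ahead forecasts from additive Holt-Winters
    (textbook recursions; illustrative sketch only)."""
    # Crude initialisation from the first two seasonal cycles
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    seasonal = [y[i] - level for i in range(period)]
    forecasts = []
    for t in range(2 * period, len(y)):
        s = seasonal[t % period]
        forecasts.append(level + trend + s)           # one-step-ahead forecast
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seasonal[t % period] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    return forecasts  # forecasts[k] predicts y[2 * period + k]
```

On a perfectly periodic series with no trend, the recursions leave the level, trend and seasonal indices unchanged and the forecasts reproduce the pattern exactly.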

Forecasting methods are reviewed. They may be classified into univariate, multivariate and judgemental methods, and also by whether an automatic or non-automatic approach is adopted. The choice of ‘best’ method depends on a wide variety of considerations. The use of forecasting competitions to compare the accuracy of univariate methods is discussed...

Experienced statisticians like to get the `feel' of a given set of data by subjecting them to an informal initial examination (abbreviated IDA for Initial Data Analysis). This data-scrutiny is vital, not only for detecting `oddities' in the data, but also more generally for data description and model formulation. IDA is briefly reviewed and compare...

It is usually wise to begin any statistical analysis with an informal, exploratory examination of the data, and this is often called exploratory data analysis (abbreviated EDA). The ingredients of EDA are discussed, and two main objectives are delineated, namely data description and model-formulation. It is suggested that it is important to see EDA...

This paper describes a conceptual framework within which models can be developed for predicting the exposure of randomly selected individuals to a specified pollutant (e.g. for health impact analysis). They can help answer questions like: (i) What fraction of the population sustained 'high' levels of exposure? (ii) How many sustained such exposure...

## Citations

... The methods commonly used in time series prediction include regression, autoregressive integrated moving average (ARIMA), recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) [7]- [9]. Prediction results from time series are either a point or an interval [10]. Evaluation methods are used to predict the accuracy values [11]. ...

... He further argued that IDA should be carried out before attempting formal inference because this would avoid the use of inappropriate techniques or models. Chatfield and Schimek (1987) give an example of IDA for time series analysis. Chatfield proposed that, although the general approach for a regression analysis is determined a priori, IDA is crucial when making assumptions about the model to be fitted. ...

... DIMEX is built on the probabilistic framework, known as pCNEM, developed by [5], [7], [6]. pCNEM itself was built upon the NAAQS Exposure Model (NEM), which was deterministic, and pNEM, a probabilistic version [8]. ...

... In order to understand the impact of fraudulent financial reporting, a sample of 68 frauds, from Europe and the US, in the period 1925-2020 is analysed. As is pointed out by Chatfield (1996) ... In this context, the audit firms were split into Big 4 and non-Big 4 companies, in order to observe the impact of the biggest audit firms in the audit world. Figure 1 shows ... with the exception of Italy, which was audited by Grant Thornton, a non-Big 4 audit firm. ...

... In this paper we develop a model that estimates the ERF by relating personal exposures to daily health counts (aggregated over the entire population), and follows on from work by Holloman et al. (2004) and Shaddick et al. (2005). In particular we investigate the potential of using the pCNEM exposure simulator ) to generate personal exposures, and compare the results to the CRFs estimated using routinely collected ambient levels. ...

... Bitcoin is an intriguing analogy since it is a time series prediction issue in a market that is still in its early stages. Traditional time series prediction approaches, such as the Holt-Winters exponential smoothing models, rely on linear assumptions and need data that may be classified as a trend, seasonal, or noise (Chatfield and Yar, 1988). This technique is better suited for tasks including seasonal impacts, such as sales forecasting. ...

... The title of this section emulates the successfully "eyecatching" title "What is the 'best' method of forecasting?" that was given to the seminal review paper by Chatfield (1988) from the forecasting field. This same paper begins by stating that the reader who expects a simple answer to the question consisting the paper's title might eventually get disappointed by the contents of the paper, although some general guidelines are still provided in it. ...

... It should be noted that one of the risks of the methodology proposed by Box and Jenkins is over-differencing (Makridakis and Hibon, 1997). This danger is accentuated when working with time series for which it is difficult to distinguish whether or not they are stationary (Chatfield, 1997). The BJ models obtained for the series under study are shown in Table 1. In that table, it can be seen that the BJ models found for the four time series incorporate the annual seasonality (order 12) present in them. ...

... The ability of the model to predict out-of-sample (i.e., predict data not included within the calibration dataset) was assessed through crossvalidation (Chatfield 2006). Here, cross-validation is performed by estimating model parameters from one subset (i.e., fold) of data and then using those parameters to make predictions on the remaining data. ...
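The cross-validation being cited here is the rolling-origin, out-of-sample kind; a minimal sketch with an illustrative naive forecaster (names and the `min_train` parameter are assumptions, not from the cited work):

```python
def rolling_origin_errors(y, fit_and_forecast, min_train=10):
    """Rolling-origin (out-of-sample) evaluation: repeatedly fit on
    the history y[:t] and score the one-step-ahead forecast of y[t]."""
    errors = []
    for t in range(min_train, len(y)):
        forecast = fit_and_forecast(y[:t])    # user-supplied forecaster
        errors.append(y[t] - forecast)
    return errors

# Usage with a naive "last observed value" forecaster:
naive = lambda history: history[-1]
errors = rolling_origin_errors(list(range(20)), naive)
# the series rises by 1 each step, so every naive error is exactly 1
```

Because each forecast uses only data observed before the target point, the resulting errors measure genuine out-of-sample accuracy rather than in-sample fit.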

... Early TSER would consist of predicting the value of the numerical target variable as soon as possible, while ensuring proper reliability. Another example of a supervised task for which ML-EDM approaches could be developed is time series forecasting [28]. Basically, a forecasting model aims to predict the next measurements of a time series up to a horizon ν, Y = x t+1 , x t+2 , . . . ...