Bayesian Distributed Lag Models: Estimating Effects of Particulate Matter Air Pollution on Daily Mortality

Department of Preventive Medicine, Northwestern University, Feinberg School of Medicine, 680 North Lake Shore Drive, Suite 1102, Chicago, Illinois 60611, USA.
Biometrics (Impact Factor: 1.57). 05/2008; 65(1):282-91. DOI: 10.1111/j.1541-0420.2007.01039.x
Source: PubMed


A distributed lag model (DLagM) is a regression model that includes lagged exposure variables as covariates; its corresponding distributed lag (DL) function describes the relationship between the lag and the coefficient of the lagged exposure variable. DLagMs have recently been used in environmental epidemiology for quantifying the cumulative effects of weather and air pollution on mortality and morbidity. Standard methods for formulating DLagMs include unconstrained, polynomial, and penalized spline DLagMs. These methods may fail to take full advantage of prior information about the shape of the DL function for environmental exposures, or for any other exposure with effects that are believed to smoothly approach zero as lag increases, and are therefore at risk of producing suboptimal estimates. In this article, we propose a Bayesian DLagM (BDLagM) that incorporates prior knowledge about the shape of the DL function and also allows the degree of smoothness of the DL function to be estimated from the data. We apply our BDLagM to its motivating data from the National Morbidity, Mortality, and Air Pollution Study to estimate the short-term health effects of particulate matter air pollution on mortality from 1987 to 2000 for Chicago, Illinois. In a simulation study, we compare our Bayesian approach with alternative methods that use unconstrained, polynomial, and penalized spline DLagMs. We also illustrate the connection between BDLagMs and penalized spline DLagMs. Software for fitting BDLagM models and the data used in this article are available online.
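To make the unconstrained DLagM concrete, the following minimal sketch (simulated data; the exposure series, effect sizes, and lag length are hypothetical) regresses a response on lagged copies of an exposure and reads the cumulative effect off the fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily series: exposure x_t (e.g., PM10) and a mortality-like
# response y_t, with a true DL function that decays smoothly to zero.
n, max_lag = 500, 14
x = rng.normal(size=n)
true_theta = 0.5 * np.exp(-np.arange(max_lag + 1) / 3)
X_lag = np.column_stack([np.roll(x, k) for k in range(max_lag + 1)])[max_lag:]
y = X_lag @ true_theta + rng.normal(scale=0.5, size=n - max_lag)

# Unconstrained DLagM: ordinary least squares on the lagged design matrix.
theta_hat, *_ = np.linalg.lstsq(X_lag, y, rcond=None)
cumulative_effect = theta_hat.sum()  # effect of a sustained unit increase in exposure
```

With white-noise exposure the lagged columns are nearly orthogonal, so OLS recovers the DL shape; with strongly autocorrelated real exposures the unconstrained estimates become noisy, which is what motivates the constrained and Bayesian formulations above.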

Citations
    • "We conducted a simulation study to compare the existing geometric (GEO) and negative binomial (NB) delays as used in Toktay et al. (2000) and exponential (E) delay as used in Clottey et al. (2012), with the gamma delay (G), using the M–H algorithm for estimation. Data were generated under six different scenarios for the delay function coefficients based on specifications that had previously been employed to evaluate DLMs (Toktay, 2004; Welty et al., 2009). We categorized the delay functions by two characteristics: (i) type: discrete (i.e., geometric and negative binomial delays) or continuous (i.e., exponential and gamma delays) and (ii) shape: 1, 2, or 3, indicating whether earlier or later lags had the largest coefficients. "
    ABSTRACT: An important problem faced in closed-loop supply chains is ensuring a sufficient supply of reusable products (i.e., cores) to support reuse activities. Accurate forecasting of used product returns can assist in effectively managing the sourcing activities for cores. The application of existing forecasting models to actual data provided by an Original Equipment Manufacturer (OEM) remanufacturer resulted in the following challenges: (i) inherent difficulties in estimation due to long return lags in the data and (ii) required adjustments for initial conditions. This article develops methods to address these issues and illustrates the proposed approach using the data provided by the OEM remanufacturer. The cost implications of using the proposed method to source cores are also investigated. Results of the performed analysis showed that the proposed forecasting approach performed best when the product is in the maturity or decline stages of its life cycle, with the rate of product returns balanced with the demand volume for the remanufactured product. Forecasting product returns can therefore be best leveraged for reducing the acquisition costs of cores in such settings.
    IIE Transactions 06/2014; 46(9). DOI:10.1080/0740817X.2014.882531 · 1.37 Impact Factor
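The four delay families compared in the quoted simulation can be written down directly as lag weights; a minimal sketch (parameter values are illustrative, not those of the cited studies):

```python
import numpy as np
from scipy import stats

lags = np.arange(20)

# Discrete delays: probability mass at each integer lag.
geo = stats.geom.pmf(lags + 1, p=0.3)    # geometric, shifted so support starts at lag 0
nb = stats.nbinom.pmf(lags, n=3, p=0.4)  # negative binomial

# Continuous delays: density integrated over each unit lag interval.
expo = stats.expon.cdf(lags + 1, scale=4.0) - stats.expon.cdf(lags, scale=4.0)
gamma = stats.gamma.cdf(lags + 1, a=2.0, scale=2.0) - stats.gamma.cdf(lags, a=2.0, scale=2.0)

# Shape differs across families: geometric and exponential delays peak at lag 0,
# while negative binomial and gamma delays can place their largest weight later.
```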
    • "How to incorporate multiple lagged exposures in a high-dimensional response-exposure surface remains a problem where no consensus has been reached. Selecting predictors/complex models under a distributed lag structure remains an issue of ongoing research [88,89]. In many health effects studies, where direct measurements of personal exposure to multiple pollutants are not practical, ambient pollutant concentrations are often used as proxies for personal exposure [6,13]. "
    ABSTRACT: As public awareness of the consequences of environmental exposures has grown, estimating the adverse health effects of simultaneous exposure to multiple pollutants has become an important topic. The challenges of evaluating the health impacts of environmental factors in a multipollutant model include, but are not limited to: identification of the most critical components of the pollutant mixture, examination of potential interaction effects, and attribution of health effects to individual pollutants in the presence of multicollinearity. In this paper, we reviewed five methods available in the statistical literature that are potentially helpful for constructing multipollutant models. We conducted a simulation study and presented two data examples to assess the performance of these methods on feature selection, effect estimation, and interaction identification using both cross-sectional and time-series designs. We also proposed and evaluated a two-step strategy employing an initial screening by a tree-based method, followed by further dimension reduction/variable selection by the aforementioned five approaches at the second step. Among the five methods, least absolute shrinkage and selection operator (LASSO) regression performs well in general for identifying important exposures, but yields biased estimates and a slightly larger model dimension given many correlated candidate exposures and a modest sample size. Bayesian model averaging and supervised principal component analysis are also useful for variable selection when there is a moderately strong exposure-response association. Substantial improvements in reducing model dimension and identifying important variables were observed for all five statistical methods using the two-step modeling strategy when the number of candidate variables is large. No method uniformly dominates across all simulation scenarios and criteria. The performances differ according to the nature of the response variable, the sample size, the number of pollutants involved, and the strength of the exposure-response association/interaction. However, the two-step modeling strategy proposed here is potentially applicable under a multipollutant framework with many covariates, taking advantage of both the screening feature of an initial tree-based method and the dimension reduction/variable selection property of the subsequent method. The choice of method should also depend on the goal of the study: risk prediction, effect estimation, or screening for important predictors and their interactions.
    Environmental Health 10/2013; 12(1):85. DOI:10.1186/1476-069X-12-85 · 3.37 Impact Factor
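The LASSO behaviour described in this abstract is easy to demonstrate; a minimal sketch (simulated data; the number of pollutants, correlation structure, and effect sizes are hypothetical) using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# Simulated multipollutant data: 10 correlated exposures, but only
# exposures 0 and 3 truly affect the outcome.
n, p = 300, 10
shared = rng.normal(size=(n, 1))           # shared component induces collinearity
X = 0.6 * shared + rng.normal(size=(n, p))
y = 1.0 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(size=n)

# LASSO with a cross-validated penalty shrinks unimportant coefficients
# exactly to zero, performing variable selection and estimation jointly.
model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)
```

As the abstract notes, the cross-validation-minimizing penalty tends to over-select: the truly active exposures are recovered, but some null coefficients may remain nonzero, and the retained estimates are shrunk toward zero.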
    • "In this setting, interest lies in specifying plausible shapes for DL curves and in particular the "mortality displacement" effect, a phenomenon characterized by negative coefficients in the tail of the estimated lag structure. Zanobetti et al. (2000) propose a generalized model taking a penalized spline approach to modeling DL curves while Welty et al. (2009) discuss a Bayesian approach with penalties on parameters determined by carefully chosen priors. Others (Muggeo et al., 2008; Gasparrini et al., 2010) allow lag coefficients to change with temperature in addition to lying on a smooth curve, so that a surface of lag coefficients results. "
    ABSTRACT: The distributed lag model (DLM), used most prominently in air pollution studies, finds application wherever the effect of a covariate is delayed and distributed through time. We specify modified formulations of DLMs to provide computationally attractive, flexible varying-coefficient models that are applicable in any setting in which lagged covariates are regressed on a time-dependent response. We investigate the application of such models to rainfall and river flow and in particular their role in understanding the impact of hidden variables at work in river systems. We apply two models to data from a Scottish mountain river, and we fit to some simulated data to check the efficacy of our model approach. During heavy rainfall conditions, changes in the influence of rainfall on flow arise through a complex interaction between antecedent ground wetness and a time-delay in rainfall. The models identify subtle changes in responsiveness to rainfall, particularly in the location of peak influence in the lag structure.
    Biometrics 02/2013; 69(2). DOI:10.1111/biom.12008 · 1.57 Impact Factor
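The constrained approaches recurring throughout these citations (polynomial, penalized spline, and Bayesian priors) all force the lag coefficients onto a smooth curve. A minimal sketch of the simplest such constraint, the polynomial (Almon) lag, on simulated data with an assumed smoothly decaying true DL function (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily data with a true DL function that decays smoothly to zero.
n, max_lag, degree = 500, 14, 3
x = rng.normal(size=n)
true_theta = 0.5 * np.exp(-np.arange(max_lag + 1) / 3)
X_lag = np.column_stack([np.roll(x, k) for k in range(max_lag + 1)])[max_lag:]
y = X_lag @ true_theta + rng.normal(scale=0.5, size=n - max_lag)

# Almon (polynomial) constraint: theta = B @ alpha, so only degree+1
# parameters are estimated instead of max_lag+1 unconstrained ones.
B = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True).astype(float)
alpha_hat, *_ = np.linalg.lstsq(X_lag @ B, y, rcond=None)
theta_hat = B @ alpha_hat  # smooth estimated DL function
```

Penalized spline and Bayesian DLagMs generalize this idea: instead of a hard low-degree basis, they use a rich basis with a roughness penalty (or, equivalently, a smoothing prior) whose strength is estimated from the data.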

