Article

U.S. census unit population exposures to ambient air pollutants

National Center for Environmental Health, Centers for Disease Control and Prevention, Atlanta, Georgia, USA.
International Journal of Health Geographics (Impact Factor: 2.62). 01/2012; 11(1):3. DOI: 10.1186/1476-072X-11-3
Source: PubMed

ABSTRACT

Recent progress in estimating ambient PM2.5 (particulate matter with aerodynamic diameter < 2.5 μm) and ozone concentrations from various data sources and advanced modeling techniques has produced gridded concentration surfaces. However, epidemiologic and health impact studies often require population exposures to ambient air pollutants at an appropriate census geographic unit (CGU), the level at which health data are usually released to maintain the confidentiality of individual health records. We aim to generate estimates of population exposure to ambient PM2.5 and ozone for U.S. CGUs.
We converted 2001-2006 gridded data, generated by the U.S. Environmental Protection Agency (EPA) for the Environmental Public Health Tracking Network (EPHTN) of the Centers for Disease Control and Prevention (CDC), to census block groups (BGs) based on the spatial proximity between each BG and its four nearest grid points. We then used a bottom-up (fine-to-coarse) strategy to generate population exposure estimates for larger CGUs by aggregating the BG estimates, weighted by population distribution.
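
The abstract does not give the exact interpolation or aggregation formulas; the sketch below assumes inverse-distance weighting (IDW) over the four nearest grid points and a population-weighted mean for the bottom-up step. Function and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch of the two-step estimation described above.
# Assumption: "spatial proximity" means inverse-distance weights over the
# four nearest grid points; the paper may use a different scheme.
import numpy as np
from scipy.spatial import cKDTree

def bg_estimates(grid_xy, grid_vals, bg_xy, k=4):
    """Interpolate gridded daily concentrations to block-group centroids."""
    tree = cKDTree(grid_xy)                 # spatial index over the model grid
    dist, idx = tree.query(bg_xy, k=k)      # k nearest grid points per BG
    w = 1.0 / np.maximum(dist, 1e-9)        # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)       # normalize weights per BG
    return (w * grid_vals[idx]).sum(axis=1)

def aggregate_up(bg_vals, bg_pop, parent_ids):
    """Bottom-up: population-weighted mean of BG estimates per larger CGU."""
    return {pid: np.average(bg_vals[parent_ids == pid],
                            weights=bg_pop[parent_ids == pid])
            for pid in np.unique(parent_ids)}
```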
The BG daily estimates were comparable to monitoring data: on average, they deviated by 2 μg/m³ (for PM2.5) and 3 ppb (for ozone) from the corresponding observed values. Population exposures to ambient PM2.5 and ozone varied greatly across the U.S. In 2006, estimated daily potential population exposure to ambient PM2.5 in West Coast states, the Northwest, and a few areas in the East, and estimated daily potential population exposure to ambient ozone in most of California and a few areas in the East/Southeast, exceeded the National Ambient Air Quality Standards (NAAQS) on at least 7 days.
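
A minimal sketch of the two summary figures above: the mean absolute deviation between paired daily estimates and monitor observations, and the count of days exceeding a standard. The 35 μg/m³ default matches the 24-hour PM2.5 NAAQS adopted in 2006, but the paper's exact exceedance rules (averaging windows, ozone metric) are not given here, so treat the threshold as illustrative.

```python
import numpy as np

def mean_abs_deviation(est, obs):
    """Average absolute gap between paired estimates and observations."""
    return float(np.mean(np.abs(np.asarray(est) - np.asarray(obs))))

def exceedance_days(daily_vals, standard=35.0):
    """Number of days on which the daily value exceeds the standard."""
    return int(np.sum(np.asarray(daily_vals) > standard))
```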
These estimates may be useful for assessing health impacts through linkage studies and for communicating with the public and policy makers about potential interventions.

    • "In general, most of the studies represent only the impact caused by the pollutant in various time scales. It is indeed important to assess the population exposure to various levels of the pollutant in the spatial and the temporal scales [14] [15] [16] [17]. Air pollution exposure assessment studies are limited in developing countries like India, due to their widespread geographical coverage, increased urban population sprawl, and limited number of air pollution monitoring stations. "
    ABSTRACT: Research outcomes from epidemiological studies have found that coarse (PM10) and fine (PM2.5) particulate matter are mainly responsible for various respiratory health effects in humans. Population-weighted exposure assessment is used as a vital decision-making tool to identify vulnerable areas where the population is exposed to critical concentrations of pollutants. Systematic sampling was carried out at strategic locations in Chennai to estimate concentration levels of particulate pollution during November 2013–January 2014. The pollutant concentrations were classified according to the World Health Organization interim target (IT) guidelines. Using geospatial information systems, the pollution data and high-resolution population data were interpolated to study the extent of the pollutants at the urban scale. The results show that approximately 28% of the population resides in vulnerable locations where coarse particulate matter exceeds the prescribed standards. Alarmingly, about 94% of the inhabitants live in critical areas where the concentration of fine particulates exceeds the IT guidelines. The human exposure analysis shows that vulnerability is greatest in zones surrounded by prominent sources of pollution. (A minimal sketch of the population-weighted exceedance calculation follows this entry.)
    Article · Jun 2015 · The Scientific World Journal
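
The population-weighted exceedance share described in the entry above reduces to a simple calculation once pollution and population are on a common grid. A minimal sketch, assuming co-registered raster cells and the annual WHO interim target 1 value of 35 μg/m³ for PM2.5; the study's exact interpolation method and guideline tier are not reproduced here.

```python
import numpy as np

def pct_population_exposed(conc, pop, guideline):
    """Share of residents living in cells above the guideline value."""
    conc, pop = np.asarray(conc, float), np.asarray(pop, float)
    return 100.0 * pop[conc > guideline].sum() / pop.sum()

# Illustrative cells: PM2.5 concentration (ug/m3) and residents per cell.
conc_pm25 = np.array([20.0, 42.5, 38.0, 15.5])
residents = np.array([1200, 5400, 3100, 900])
print(pct_population_exposed(conc_pm25, residents, guideline=35.0))  # ~80.2
```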
    • "All of this has led to the mean absolute deviation being used routinely in a number of areas other than the astronomy of Eddington, including biology, engineering, IT, physics, imaging, geography, and environmental science (Anand and Narasimha 2013; Hao et al. 2012; Hižak and Logožar 2011; Sari, Roslan, and Shimamura 2012). In each case, M|D| is preferred for its ease of understanding, unbiased treatment of all scores (whether extreme or not), accuracy or efficiency, or simply because the results are found to be easier to portray. "
    ABSTRACT: This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme values. The paper then proposes an easy-to-comprehend effect size based on the mean difference between treatment groups, divided by the mean absolute deviation of all scores. Using a simulation of 1656 randomised controlled trials, each with 100 cases and a before-and-after design, the paper shows that the substantive findings from any such trial would be the same whether raw-score differences, a traditional effect size like Cohen's d, or the mean absolute deviation effect size is used. The same would be true for any comparison, whether for a trial or a simpler cross-sectional design. There is therefore a clear choice over which effect size to use. The main advantage of using raw scores as an outcome measure is that they are easy to comprehend; however, they might be misleading and so perhaps require more judgement to interpret than traditional ‘effect’ sizes. Among the advantages of the mean absolute deviation effect size are its relative simplicity, everyday meaning, and freedom from the distortion of extreme scores caused by the squaring involved in computing the standard deviation. Given that working with absolute values is no longer the barrier to computation that it apparently was before the advent of digital calculators, there is a clear place for the mean absolute deviation effect size (termed ‘A’). (A minimal computation of ‘A’ follows this entry.)
    Article · Dec 2014 · International Journal of Research & Method in Education
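
The effect size proposed above, ‘A’, is simply the difference between group means divided by the mean absolute deviation of all scores pooled together. A minimal sketch with made-up scores (variable names are illustrative):

```python
import numpy as np

def mad(x):
    """Mean absolute deviation around the arithmetic mean."""
    x = np.asarray(x, float)
    return float(np.mean(np.abs(x - x.mean())))

def effect_size_A(treatment, control):
    """'A': mean group difference over the MAD of all pooled scores."""
    pooled = np.concatenate([treatment, control])
    return (np.mean(treatment) - np.mean(control)) / mad(pooled)

t = np.array([52.0, 61.0, 58.0, 66.0])   # illustrative post-test scores
c = np.array([48.0, 55.0, 50.0, 57.0])
print(effect_size_A(t, c))               # 6.75 / 4.625 ~= 1.46
```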
    • "All of this has led to the mean absolute deviation being used routinely in a number of areas other than the astronomy of Eddington, including biology, engineering, IT, physics, imaging, geography, and environmental science (Anand and Narasimha 2013; Hao et al. 2012; Hižak and Logožar 2011; Sari, Roslan, and Shimamura 2012). In each case, M|D| is preferred for its ease of understanding, unbiased treatment of all scores (whether extreme or not), accuracy or efficiency, or simply because the results are found to be easier to portray. "
    [Show abstract] [Hide abstract]
    • ABSTRACT: This paper discusses the reliance of numerical analysis on the concept of the standard deviation, and its close relative the variance. It suggests that the original reasons why the standard deviation has permeated traditional statistics are no longer clearly valid, if they ever were. The mean absolute deviation, it is argued here, has many advantages over the standard deviation. It is more efficient as an estimate of a population parameter in the real-life situation where the data contain tiny errors or do not form a perfectly normal distribution. It is easier to use, and more tolerant of extreme values, in the majority of real-life situations where population parameters are not required. It is easier for new researchers to learn and understand, and it is closely linked to a number of arithmetic techniques already used in the sociology of education and elsewhere. We could continue to use the standard deviation instead, as we do at present, because so much of the rest of traditional statistics is based upon it (effect sizes and the F-test, for example). However, we should weigh the convenience of this solution for some against the possibility of creating a much simpler and more widespread form of numeric analysis for many. (A small simulation of the efficiency point follows this entry.)
    Article · Nov 2005 · British Journal of Educational Studies
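
The efficiency claim in the entry above can be checked with a quick simulation: when a small share of values carries inflated errors, the MAD-based scale estimate fluctuates less across repeated samples than the standard deviation. This is a sketch under assumed contamination settings (2% of points scaled threefold), not a reproduction of any analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sds, mads = [], []
for _ in range(2000):
    x = rng.normal(0.0, 1.0, 100)
    x[rng.random(100) < 0.02] *= 3.0        # ~2% of values carry 3x errors
    sds.append(x.std())
    mads.append(np.mean(np.abs(x - x.mean())))

# Compare sampling variability via the coefficient of variation:
for name, v in (("SD", sds), ("MAD", mads)):
    print(name, np.std(v) / np.mean(v))     # MAD's CV should be smaller
```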