Conference Paper

Modelo Evolutivo para Determinar el Comportamiento Espacio Temporal de la Concentración de Material Particulado PMx (Evolutionary Model for Determining the Spatio-Temporal Behavior of the Concentration of Particulate Matter PMx)


Abstract

One of the main questions when describing the behavior of a pollutant in the atmosphere is how to determine its concentration at an arbitrary point of a study zone, when the zone contains areas that are difficult to access or where measurement campaigns cannot be carried out. This paper presents an integrated model, based on the principles of evolutionary computation, that determines the spatio-temporal behavior of the PM10 particulate matter concentration over a study zone. The model couples an adaptive Takagi-Sugeno interpolation model with a dispersion model that shapes the basis functions of the interpolator. The proposed model was validated against other interpolation methods, from geostatistics and from computational intelligence, commonly used to represent this type of spatial phenomenon.
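To make the interpolation stage concrete, here is a minimal Python sketch of a zero-order Takagi-Sugeno style interpolator over monitoring-station data. This is not the authors' implementation: the Gaussian membership functions, the sigma bandwidth, and the station layout are illustrative assumptions (the paper instead derives the interpolator's basis functions from a dispersion model).

```python
import numpy as np

def ts_interpolate(stations, concentrations, query, sigma=500.0):
    """Zero-order Takagi-Sugeno style interpolation: each monitoring
    station contributes one rule whose firing strength is a Gaussian
    membership of the distance to the query point; the output is the
    firing-strength-weighted average of the station concentrations.
    All parameters here are illustrative, not the paper's."""
    stations = np.asarray(stations, dtype=float)      # (m, 2) x/y coordinates
    c = np.asarray(concentrations, dtype=float)       # (m,) measured PM10
    d2 = np.sum((stations - np.asarray(query, dtype=float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))              # rule firing strengths
    return float(np.dot(w, c) / (w.sum() + 1e-12))    # normalized TS output

# Example: three stations on a 2 km transect, queried off-axis.
stations = [(0.0, 0.0), (1000.0, 0.0), (2000.0, 0.0)]
pm10 = [35.0, 50.0, 42.0]
print(ts_interpolate(stations, pm10, query=(1200.0, 300.0)))
```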


...  An evolutionary model for estimating the emissions of a set of n sources, from a series of PMx concentration measurements obtained at a set of m monitoring stations spatially distributed over a zone. This model will be based on the principles of evolutionary computation and on a Lagrangian dispersion model of the backward Gaussian puff tracking type [6], [13]. ...
Conference Paper
In this paper, a research proposal is developed and analyzed for building a model, based on the principles of evolutionary computation, to determine the spatio-temporal behavior of the concentration of particulate matter PMx over a georeferenced study zone. Determining this behavior first requires estimating the emissions of n emission sources from a series of concentration measurements obtained at m air-quality monitoring stations spatially located in the zone, using an evolutionary algorithm. As a result of this estimation, the model densifies the study zone through a series of analytical concentration surfaces for PMx, and it also yields an emission pattern that describes the behavior of the emission sources as a stochastic process over the duration of a measurement campaign or air-quality monitoring period, thereby turning the algorithm into an evolution strategy. Likewise, mapping this emission pattern onto the study zone produces a series of forecast maps that describe the analytical spatial behavior of the PMx concentration over time, in terms of a series of actions aimed at mitigating the effects that PMx particulate matter concentrations may cause in a specific area of the zone.
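As an illustration of the kind of evolution strategy the proposal describes, the sketch below estimates emission rates for n sources from m station measurements by minimizing the misfit between modeled and observed concentrations. The Gaussian source-receptor kernel is a stand-in for the backward Gaussian puff tracking model, and every name and parameter (transfer_matrix, population sizes, mutation step) is a hypothetical choice, not the proposal's.

```python
import numpy as np

rng = np.random.default_rng(0)

def transfer_matrix(sources, stations, sigma=800.0):
    """Illustrative source-receptor kernel D[i, j]: contribution of a unit
    emission at source j to the concentration at station i (a Gaussian
    stand-in for the backward-puff dispersion model)."""
    s = np.asarray(sources, float)[None, :, :]   # (1, n, 2)
    r = np.asarray(stations, float)[:, None, :]  # (m, 1, 2)
    d2 = np.sum((r - s) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def estimate_emissions(D, c_obs, gens=300, mu=10, lam=40, step=1.0):
    """(mu + lambda) evolution strategy minimizing the misfit between
    modeled (D @ q) and observed station concentrations."""
    n = D.shape[1]
    pop = rng.uniform(0.0, 10.0, size=(mu, n))   # candidate emission vectors
    for _ in range(gens):
        parents = pop[rng.integers(0, mu, size=lam)]
        kids = np.clip(parents + rng.normal(0.0, step, size=(lam, n)), 0.0, None)
        both = np.vstack([pop, kids])
        err = np.linalg.norm(both @ D.T - c_obs, axis=1)
        pop = both[np.argsort(err)[:mu]]         # elitist survivor selection
    return pop[0]

sources = [(0, 0), (1500, 500), (500, 1500)]
stations = [(200, 100), (900, 400), (1300, 1200), (400, 900)]
D = transfer_matrix(sources, stations)
q_true = np.array([3.0, 7.0, 1.5])
print(estimate_emissions(D, D @ q_true))  # should approach q_true
```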
Article
Full-text available
A methodology combining Bayesian inference with Markov chain Monte Carlo (MCMC) sampling is applied to a real accidental radioactive release that occurred on a continental scale at the end of May 1998 near Algeciras, Spain. The source parameters (i.e., source location and strength) are reconstructed from a limited set of measurements of the release. Annealing and adaptive procedures are implemented to ensure a robust and effective parameter-space exploration. The simulation setup is similar to an emergency response scenario, with the simplifying assumptions that the source geometry and release time are known. The Bayesian stochastic algorithm provides likely source locations within 100 km of the true source, after exploring a domain covering an area of approximately 1800 km × 3600 km. The source strength is reconstructed with a distribution of values of the same order of magnitude as the upper end of the range reported by the Spanish Nuclear Security Agency. By running the Bayesian MCMC algorithm on a large parallel cluster, the inversion results could be obtained in a few hours, as required for emergency response to continental-scale releases. With additional testing and refinement of the methodology (e.g., tests that also include the source geometry and release time among the unknown source parameters), as well as with the continuous and rapid growth of computational power, the approach can potentially be used for real-world emergency response in the near future.
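A minimal sketch of this kind of Markov chain Monte Carlo source reconstruction, assuming a toy Gaussian forward model in place of the continental-scale transport model; the proposal scales, positivity prior, and noise level are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(src_xy, q, stations, sigma=700.0):
    """Toy forward model: Gaussian kernel standing in for the full
    atmospheric transport model."""
    d2 = np.sum((np.asarray(stations, float) - np.asarray(src_xy, float)) ** 2,
                axis=1)
    return q * np.exp(-d2 / (2.0 * sigma ** 2))

def mcmc_source(c_obs, stations, steps=20000, noise=0.05):
    """Random-walk Metropolis sampling of (x, y, q) under a Gaussian
    likelihood; returns the chain of visited states."""
    theta = np.array([0.0, 0.0, 1.0])            # initial guess
    def log_like(t):
        resid = c_obs - forward(t[:2], t[2], stations)
        return -0.5 * np.sum(resid ** 2) / noise ** 2
    ll = log_like(theta)
    chain = []
    for _ in range(steps):
        prop = theta + rng.normal(0.0, [50.0, 50.0, 0.1])
        if prop[2] > 0:                          # positivity prior on strength
            ll_p = log_like(prop)
            if np.log(rng.uniform()) < ll_p - ll:
                theta, ll = prop, ll_p
        chain.append(theta.copy())
    return np.array(chain)

stations = [(500, 0), (-300, 400), (200, -600), (800, 700)]
c_obs = forward((250.0, 150.0), 4.0, stations) + rng.normal(0, 0.05, 4)
chain = mcmc_source(c_obs, stations)
print(chain[len(chain) // 2:].mean(axis=0))  # should be near (250, 150, 4)
```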
Chapter
Full-text available
In this article, a computational model for the interpolation and exploration of complex response surfaces is described and analyzed. The model consists of two stages: an initial stage in which a group of measured points is interpolated, by combining characteristic concepts of vector geometry, numerical methods, and evolutionary computation, to construct a response surface; and a second stage, in which a series of good trajectories is determined by exploring the interpolated surface. In this stage, an evolutionary algorithm equipped with a mutation operator that incorporates the fundamental concepts of cylindrical coordinates is used to identify a trajectory containing the best combinations among the variables of the particular process the surface represents.
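The cylindrical-coordinate mutation operator lends itself to a direct sketch: convert a candidate point to (r, theta, z), perturb each component, and convert back. This is a plausible reading of the description, with invented step sizes, not the article's exact operator.

```python
import numpy as np

rng = np.random.default_rng(2)

def cylindrical_mutation(point, dr=0.5, dtheta=0.2, dz=0.5):
    """Mutate a 3-D point in cylindrical coordinates: perturb the radius,
    angle, and height separately, then map back to Cartesian. The step
    sizes dr, dtheta, dz are illustrative assumptions."""
    x, y, z = point
    r = np.hypot(x, y) + rng.normal(0.0, dr)
    theta = np.arctan2(y, x) + rng.normal(0.0, dtheta)
    z = z + rng.normal(0.0, dz)
    r = max(r, 0.0)                       # keep a valid radius
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

print(cylindrical_mutation(np.array([1.0, 1.0, 0.5])))
```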
Article
An event reconstruction technology system has been designed and implemented at Lawrence Livermore National Laboratory (LLNL). This system integrates sensor observations, which may be sparse and/or conflicting, with transport and dispersion models via Bayesian stochastic sampling methodologies to characterize the sources of atmospheric releases of hazardous materials. We demonstrate the application of this event reconstruction technology system to designing sensor networks for detecting and responding to atmospheric releases of hazardous materials. The quantitative measure of the reduction in uncertainty, or benefit of a given network, can be utilized by policy makers to determine the cost/benefit of certain networks. Herein we present two numerical experiments demonstrating the utility of the event reconstruction methodology for sensor network design. In the first set of experiments, only the time resolution of the sensors varies between three candidate networks. The most "expensive" sensor network offers few advantages over the moderately priced network for reconstructing the release examined here. The second set of experiments explores the significance of the sensors' detection limit, which can have a significant impact on sensor cost. In this experiment, the expensive network can most clearly define the source location and source release rate. The other networks provide data insufficient for distinguishing between two possible clusters of source locations. When the reconstructions from all networks are aggregated into a composite plume, a decision-maker can distinguish the utility of the expensive sensor network.
Article
The objective of this paper is to show the methodology developed to estimate particle emissions from several typical activities of bulk handling in harbours. It is based on several experimental campaigns carried out in the Harbour of Tarragona, where high time resolution monitors were deployed close to different areas of solid bulk handling. Monitors recorded particle concentrations and meteorological variables. A high resolution dispersion model is used to estimate the emission rates that best fit the observations. These emission estimates are used as input for an emission model called EMIPORT. The model is complemented with the AP-42 (EPA) emission factors. This work is one of the activities of the LIFE project called HADA (Herramienta Automática de Diagnóstico Ambiental, or in English, Automatic Tool for Environmental Diagnostic).
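When the dispersion model is linear in the source term, the emission rate that best fits the observations has a closed form, which is the essence of fitting modeled to measured concentrations. The sketch below shows that one-parameter least-squares fit under this linearity assumption; the actual EMIPORT/AP-42 workflow described in the paper is more involved.

```python
import numpy as np

def fit_emission_rate(c_obs, c_unit):
    """Least-squares emission rate: if the dispersion model is linear in
    the source term, the concentration for emission rate q is q * c_unit,
    where c_unit is the modeled concentration for a unit emission.
    Minimizing ||c_obs - q * c_unit||^2 over q gives a closed form."""
    c_obs = np.asarray(c_obs, float)
    c_unit = np.asarray(c_unit, float)
    return float(np.dot(c_unit, c_obs) / np.dot(c_unit, c_unit))

# Example: modeled unit-emission signal vs. monitor readings (values invented).
c_unit = [0.8, 1.3, 0.4, 0.9]   # µg/m3 per unit emission rate
c_obs = [4.1, 6.4, 2.2, 4.3]    # measured µg/m3
print(fit_emission_rate(c_obs, c_unit))  # ≈ 5 in the unit-emission units
```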
Article
In order to suggest a new methodology for selecting an appropriate dispersion model, various statistical measures, each with its own characteristics and recommended value ranges, were integrated into a new single index by fuzzy inference, where eight statistical measures for the various model results were taken as premise-part variables: fractional bias (FB), normalized mean square error (NMSE), geometric mean bias (MG), geometric variance (VG), fraction within a factor of two (FAC2), index of agreement (IOA), unpaired accuracy of the peak concentration (UAPC), and mean relative error (MRE). The new single-index methodology was applied to the prediction of 1-h average ground-level SO2 concentrations in coastal areas, where eight modeling combinations were organized from fumigation models, σy schemes for pre-fumigation, and modification schemes for σy during fumigation. As a result, the fumigation model of Lyons and Cole was found to have better predictability than the modified Gaussian model, which assumes the whole plume is immersed in the Thermal Internal Boundary Layer (TIBL). A better σy (fumigation) scheme was likewise discerned. This approach, employing the new integrated index, appears applicable to model evaluation or selection in various settings, including complex coastal areas.
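For reference, several of the eight measures have standard definitions that are easy to state in code. The sketch below computes FB, NMSE, and FAC2 only, with their usual formulas, and omits the fuzzy-inference aggregation into the single index.

```python
import numpy as np

def model_scores(obs, pred):
    """Three of the eight measures named in the article, in their usual
    definitions; the fuzzy aggregation into one index is not shown."""
    o = np.asarray(obs, float)
    p = np.asarray(pred, float)
    fb = 2.0 * (o.mean() - p.mean()) / (o.mean() + p.mean())  # fractional bias
    nmse = np.mean((o - p) ** 2) / (o.mean() * p.mean())      # normalized MSE
    fac2 = np.mean((p >= 0.5 * o) & (p <= 2.0 * o))           # factor-of-two rate
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

print(model_scores([10, 14, 8, 20], [12, 11, 9, 16]))
```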
Article
Single-particle mass spectrometry data collected during the Pittsburgh Supersite experiment was used to isolate an episode on 27 October 2001 when the measurement site was primarily influenced by emissions from coal combustion sources. Results showed that (a) 60–80% of the particles detected during this event belonged to the Na/Si/K/Ca/Fe/Ga/Pb particle class associated with coal combustion emissions, (b) observation of this class was an isolated event occurring only during the hours of 06:00–14:00 EST, and (c) the detection of these particles was highly correlated with shifts in wind direction. Coincident SMPS, TEOM PM2.5, SO2, NOx, and O3 measurements were in excellent agreement with the single-particle results in terms of both identifying and characterizing this event. The three most likely point sources of these particles were isolated and Gaussian plume dispersion models were used in reverse to predict their particle number, particle mass, and gas phase emissions. Calculated mass emission rates were in general agreement with the US EPA National Emissions Inventory (NEI) database emissions estimates and the Title V PM10 limit. The largest of the three sources emits about 2.4×10¹⁷ fine and ultrafine particles per second.
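Using a Gaussian plume model "in reverse" reduces, for a single source and receptor, to dividing the measured concentration by the modeled unit-emission concentration, since the plume equation is linear in the emission rate Q. A sketch under textbook assumptions (the wind speed, dispersion parameters, and stack height below are invented, not those of the Pittsburgh sources):

```python
import numpy as np

def gaussian_plume_unit(y, z, u, sigma_y, sigma_z, H=50.0):
    """Ground-reflected Gaussian plume concentration per unit emission
    rate (Q = 1), in the standard textbook form; H is the effective
    stack height. In practice sigma_y and sigma_z would be evaluated
    from stability-class curves at the receptor's downwind distance."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

def emission_from_measurement(c_meas, **plume_kwargs):
    """'Reverse' plume use: with everything except Q fixed, linearity
    gives Q = C_measured / C(Q = 1)."""
    return c_meas / gaussian_plume_unit(**plume_kwargs)

# Hypothetical ground-level receptor on the plume centerline.
q = emission_from_measurement(35e-6,                     # measured g/m3
                              y=0.0, z=0.0, u=4.0,
                              sigma_y=160.0, sigma_z=80.0)
print(q, "g/s")                                          # ≈ 6.8 g/s here
```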
Article
Bulk material handling can be a significant source of particles in harbor areas. The atmospheric impact of a number of loading/unloading activities involving diverse raw materials has been assessed from continuous measurements of ambient particle concentrations recorded close to the emission sources. Two experimental campaigns were carried out in the Tarragona port to document the impact of specific handling operations and bulk materials. Dusty bulk materials such as silica–manganese powder, tapioca, coal, clinker and lucerne were handled during the experiments. The highest impacts on ambient particle concentrations were recorded during handling of clinker. For this material and for silica–manganese powder, high concentrations were recorded in the fine grain size (<2.5 μm). The lowest impacts on particulate matter concentrations were recorded during handling of tapioca and lucerne, mainly in the coarse grain size (2.5–10 μm). Several emission abatement measures, such as ground watering to diminish coal particle resuspension, were shown to reduce ambient concentrations by up to two orders of magnitude. The importance of other good practices in specific handling operations, such as controlling the height of the shovel discharge, was also evidenced by these experiments. The results obtained can be further utilized as a useful experimental database for emission factor estimations.
Article
In homeland security applications, it is often necessary to characterize the source location and strength of a potentially harmful contaminant. Correct source characterization requires accurate meteorological data such as wind direction. Unfortunately, available meteorological data is often inaccurate or unrepresentative, having insufficient spatial and temporal resolution for precise modeling of pollutant dispersion. To address this issue, a method is presented that simultaneously determines the surface wind direction and the pollutant source characteristics. This method compares monitored receptor data to pollutant dispersion model output and uses a genetic algorithm (GA) to find the combination of source location, source strength, and surface wind direction that best matches the dispersion model output to the receptor data. A GA optimizes variables using principles from genetics and evolution. The approach is validated with an identical twin experiment using synthetic receptor data and a Gaussian plume equation as the dispersion model. Given sufficient receptor data, the GA is able to reproduce the wind direction, source location, and source strength. Additional runs incorporating white noise into the receptor data to simulate real-world variability demonstrate that the GA is still capable of computing the correct solution, as long as the magnitude of the noise does not exceed that of the receptor data.
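A compact genetic algorithm in the spirit of the identical twin experiment, with wind direction included as a fourth gene alongside the source coordinates and strength. The toy plume function, the operators, and all parameter values are illustrative rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def plume(src, q, wind_dir, receptors, sigma=300.0):
    """Toy plume: a Gaussian blob displaced downwind, standing in for
    the Gaussian plume equation used in the experiment."""
    offset = 800.0 * np.array([np.cos(wind_dir), np.sin(wind_dir)])
    d2 = np.sum((np.asarray(receptors, float)
                 - (np.asarray(src) + offset)) ** 2, axis=1)
    return q * np.exp(-d2 / (2 * sigma ** 2))

def decode(g):
    return g[:2], g[2], g[3]   # (x, y), strength, wind direction

def ga_search(c_obs, receptors, pop_size=60, gens=200):
    pop = np.column_stack([rng.uniform(-1000, 1000, (pop_size, 2)),
                           rng.uniform(0.1, 10, pop_size),
                           rng.uniform(-np.pi, np.pi, pop_size)])
    for _ in range(gens):
        fit = np.array([np.linalg.norm(plume(*decode(g), receptors) - c_obs)
                        for g in pop])
        elite = pop[np.argsort(fit)[:pop_size // 2]]
        # crossover: average random parent pairs, then mutate each gene
        k = pop_size - len(elite)
        pairs = elite[rng.integers(0, len(elite), (k, 2))]
        kids = pairs.mean(axis=1) + rng.normal(0, [20, 20, 0.1, 0.05], (k, 4))
        pop = np.vstack([elite, kids])
    return pop[0]   # best individual from the last selection

receptors = [(0, 0), (400, 300), (-200, 500), (600, -100), (-500, -400)]
truth = ((100.0, -50.0), 5.0, 0.8)
c_obs = plume(*truth, receptors)
print(ga_search(c_obs, receptors))  # should approach (100, -50, 5.0, 0.8)
```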
Article
We have developed an integrated artificial neural network model to forecast the maxima of the 24-h average PM10 concentrations one day in advance, and we have applied it to five monitoring stations in the city of Santiago, Chile. Inputs to the model are the concentrations measured until 7 PM at the five stations on the present day, plus measured and forecast values of meteorological variables. Outputs are the expected maximum concentrations for the following day at the same five stations. The greatest of the five forecast concentrations defines the air quality for the following day. According to the range where the concentrations fall, three levels or classes of air quality are defined: good (A), bad (B) and critical (C). We adjusted the parameters of the models using 2001 and 2002 data to forecast 2003 conditions, and 2002 and 2003 data to forecast 2004 values. Forecasts from the neural model are compared with the results obtained with a linear model with the same input variables, and with persistence. According to the results reported here, the neural model is overall more accurate, although a good choice of input variables appears to be very important.
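A minimal scikit-learn sketch of this forecasting setup, trained on fabricated data standing in for the Santiago measurements; the feature layout, the A/B/C thresholds, and the network size are assumptions, not the published model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in data: rows are days, columns mimic the kinds of
# inputs the paper describes (PM10 measured up to 7 PM at five stations
# plus meteorological variables); targets are next-day maxima.
X = rng.uniform(0, 150, size=(730, 8))                   # two fabricated years
y = 0.6 * X[:, :5].max(axis=1) + rng.normal(0, 5, 730)   # fabricated target

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:365], y[:365])          # train on "year 1"
pred = model.predict(X[365:])        # forecast "year 2"

# Classify forecasts into the paper's three levels; these thresholds are
# illustrative, not the official Chilean episode criteria.
levels = np.digitize(pred, bins=[100, 200])  # 0=A good, 1=B bad, 2=C critical
print(pred[:5], levels[:5])
```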
Article
Lagrangian techniques have previously been employed to extend initial mixing calculations beyond the near field, either alone or in combination with Eulerian models. Computational efficiency and accuracy are of prime importance in designing these ‘hybrid’ approaches to simulating a pollutant discharge, and we characterize three relatively simple Lagrangian techniques in this regard: random walk particle tracking (RWPT), forward Gaussian puff tracking (FGPT), and backward Gaussian puff tracking (BGPT). RWPT is generally the most accurate, capable of handling complexities in the flow field and domain geometry. It is also the most computationally expensive, as a large number of particles are generally required to generate a smooth concentration distribution. FGPT and BGPT offer dramatic savings in computational expense, but their applicability is limited by accuracy concerns in the presence of spatially variable flow or diffusivity fields or complex no-flux or open boundary conditions. For long simulations, particle and/or puff methods can transition to an Eulerian model if appropriate, since the relative computational expense of Lagrangian methods increases with time for continuous sources. Although we focus on simple Lagrangian models that are not suitable to all environmental applications, many of the implementation and computational efficiency concerns outlined herein would also be relevant to using higher order particle and puff methods to extend the near field.
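Of the three techniques, RWPT is the simplest to sketch: particles advect with the mean flow and take Gaussian jumps whose variance encodes the diffusivity, and a smooth concentration field requires many particles, which is the computational cost the abstract notes. A 1-D example with an analytical check (all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def rwpt(n_particles=10000, n_steps=200, dt=1.0, u=0.5, K=0.1):
    """Random walk particle tracking for 1-D advection-diffusion: each
    particle moves by the mean flow u*dt plus a Gaussian jump with
    variance 2*K*dt, which reproduces diffusivity K."""
    x = np.zeros(n_particles)   # instantaneous point release at x = 0
    for _ in range(n_steps):
        x += u * dt + rng.normal(0.0, np.sqrt(2.0 * K * dt), n_particles)
    return x

x = rwpt()
# Analytical solution: Gaussian with mean u*t and std sqrt(2*K*t).
print(x.mean(), x.std())   # ≈ 100.0 and ≈ 6.32 for t = 200 s
```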
Article
The problem of determining the source of an emission from the limited information provided by a finite and noisy set of concentration measurements obtained from real-time sensors is an ill-posed inverse problem. In general, this problem cannot be solved uniquely without additional information. A Bayesian probabilistic inferential framework, which provides a natural means for incorporating both errors (model and observational) and prior (additional) information about the source, is presented. Here, Bayesian inference is applied to find the posterior probability density function of the source parameters (location and strength) given a set of concentration measurements. It is shown how the source–receptor relationship required in the determination of the likelihood function can be efficiently calculated using the adjoint of the transport equation for the scalar concentration. The posterior distribution of the source parameters is sampled using a Markov chain Monte Carlo method. The inverse source determination method is validated against real data sets acquired in a highly disturbed flow field in an urban environment. The data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. These two examples demonstrate the utility of the proposed approach for inverse source determination.
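The computational payoff of the adjoint formulation is that, for linear transport, each receptor's concentration is an inner product between a precomputed influence (adjoint) field and the candidate source field, so the likelihood inside the MCMC loop can be evaluated without rerunning the transport model. A schematic illustration with a fabricated influence matrix:

```python
import numpy as np

# For linear transport, the concentration at receptor i is c_i = <a_i, s>,
# where a_i is an influence field obtained from one backward (adjoint) run
# per receptor and s is the discretized source field. Any candidate
# source can then be scored without a new forward run.
rng = np.random.default_rng(6)

n_cells = 500                        # discretized source field
A = rng.random((4, n_cells)) * 1e-3  # fabricated adjoint fields, one per receptor

def log_likelihood(s, c_obs, noise=0.05):
    """Gaussian log-likelihood of a candidate source field s."""
    resid = c_obs - A @ s
    return -0.5 * np.sum(resid ** 2) / noise ** 2

s_true = np.zeros(n_cells)
s_true[123] = 40.0                   # point source in cell 123
c_obs = A @ s_true + rng.normal(0, 0.05, 4)
print(log_likelihood(s_true, c_obs))  # near its maximum here
```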
Article
M.S. Thesis, Pennsylvania State University, 2006.
Algoritmos de Estimación e Interpolación de Parámetros Geofísicos
  • J J Cruzado
Cruzado, J. J. (2004). Algoritmos de Estimación e Interpolación de Parámetros Geofísicos. Unpublished Magíster thesis, Universidad de Puerto Rico, San Juan. Retrieved from http://grad.uprm.edu/tesis/cruzadojapan.pdf
Modelos de dispersión de contaminantes atmosféricos
  • K L Gallardo
Gallardo, K. L. (1997). Modelos de dispersión de contaminantes atmosféricos. Santiago de Chile: Comisión Nacional del Medio Ambiente (CONAMA).
Redes de Neuronas Artificiales Un Enfoque Práctico
  • V P Isazi
  • L I Galván
Isazi, V. P., & Galván, L. I. (2004). Redes de Neuronas Artificiales: Un Enfoque Práctico. Upper Saddle River, NJ: Pearson Prentice Hall.
Inventario de modelos utilizados para calidad de aire
  • L F Martin
Martin, L. F. (2006). Inventario de modelos utilizados para calidad de aire. Paper presented at V Seminario de Calidad de Aire en España, Santander, 2006.
Sistema Informático para el Control y Prevención de la Contaminación Atmosférica en Huelva
  • L F Martín
  • C González
  • I Palomino
Martín, L. F., González, C., Palomino, I., et al. (2002) Sistema Informático para el Control y Prevención de la Contaminación Atmosférica en Huelva. Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT.
Semi-analytical model for pollution dispersion in the planetary boundary layer
  • D M Moreira
  • U Rizza
  • M T Villena
  • A Goulart
Moreira, D. M., Rizza, U., Villena, M. T., & Goulart, A. (2005). Semi-analytical model for pollution dispersion in the planetary boundary layer. Atmospheric Environment, 39(14), 2673-2681. doi:10.1016/j.atmosenv.2005.03.004
Event reconstruction for atmospheric releases employing urban puff model UDM with stochastic inversion methodology
  • S Neuman
  • L Glascoe
  • B Kosovik
Neuman, S., Glascoe, L., Kosovik, B., et al. (2006). Event reconstruction for atmospheric releases employing urban puff model UDM with stochastic inversion methodology. In Proceedings of the American Meteorological Society Annual Meeting, Atlanta, January 29-February 2, 2006.
Compression of Free Surface Based on the Evolutionary Optimisation of a NURBS-Takagi Sugeno System
  • P Peña
  • R J Hernández
Peña, P., & Hernández, R. J. (2007b). Compression of Free Surface Based on the Evolutionary Optimisation of a NURBS-Takagi Sugeno System. Paper presented at the International Conference on CAD/CAM, Robotics & Factories of the Future, Bogotá.
Compresión de imágenes basada en la optimización metaheurística de un sistema Takagi Sugeno Kang
Compresión de imágenes basada en la optimización metaheurística de un sistema Takagi Sugeno Kang. Paper presented at V Congreso Español sobre Metaheurísticas, Algoritmos Evolutivos y Bioinspirados (MAEB 2007), Tenerife, Spain.