Conference Paper

Data-driven models to forecast PM10 concentration

Politecnico di Torino, Torino
DOI: 10.1109/IJCNN.2007.4370953 Conference: Proceedings of the International Joint Conference on Neural Networks, IJCNN 2007, Celebrating 20 years of neural networks, Orlando, Florida, USA, August 12-17, 2007
Source: DBLP

ABSTRACT The research activity described in this paper concerns the study of the phenomena responsible for urban and suburban air pollution. The analysis builds on the work already developed by the NeMeFo (neural meteo forecasting) research project for short-term forecasting of meteorological data. The study analyzed the principal causes of air pollution and identified the best subset of features (meteorological data and air-pollutant concentrations) for each air pollutant in order to predict its medium-term concentration (in particular for particulate matter with an aerodynamic diameter of up to 10 µm, called PM10). The selection of the best subset of features was implemented by means of a backward selection algorithm based on the information-theoretic notion of relative entropy. The final aim of the research is the implementation of a prognostic tool able to reduce the risk that air-pollutant concentrations rise above the alarm thresholds fixed by law. This tool will be implemented using data-driven models based on some of the most widespread statistical data-learning techniques (artificial neural networks and support vector machines).
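The abstract describes backward feature selection driven by relative entropy. A minimal sketch of how such a scheme could look is below; the function names, the greedy loop, and the idea of scoring candidate subsets are illustrative assumptions, not the authors' exact algorithm (the paper does not specify how the relative-entropy criterion is turned into a subset score):

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions.
    In a selection criterion, p and q could be target distributions
    conditioned on including vs. excluding a feature (an assumption here)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def backward_select(score, n_features, n_keep):
    """Greedy backward elimination: repeatedly drop the feature whose
    removal hurts the subset score the least, until n_keep remain.
    `score` maps a list of feature indices to a relevance value
    (e.g. built from relative_entropy above)."""
    subset = list(range(n_features))
    while len(subset) > n_keep:
        # score every candidate subset obtained by removing one feature
        candidates = [(score([f for f in subset if f != j]), j) for j in subset]
        best_score, worst_feature = max(candidates)  # highest remaining score
        subset.remove(worst_feature)
    return subset
```

For example, with a toy additive score (each feature contributes a fixed weight), `backward_select` discards the lowest-weight features first and keeps the most relevant ones.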

ABSTRACT: Most algorithms for the least-squares estimation of non-linear parameters have centered about either of two approaches. On the one hand, the model may be expanded as a Taylor series and corrections to the several parameters calculated at each iteration on the assumption of local linearity. On the other hand, various modifications of the method of steepest descent have been used. Both methods not infrequently run aground, the Taylor series method because of divergence of the successive iterates, the steepest-descent (or gradient) methods because of agonizingly slow convergence after the first few iterations. In this paper a maximum neighborhood method is developed which, in effect, performs an optimum interpolation between the Taylor series method and the gradient method, the interpolation being based upon the maximum neighborhood in which the truncated Taylor series gives an adequate representation of the nonlinear model. The results are extended to the problem of solving a set of nonlinear algebraic equations.
    Journal of the Society for Industrial and Applied Mathematics 06/1963; 11(2):431-441. DOI:10.1137/0111030
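The interpolation between the Taylor-series (Gauss-Newton) step and the gradient step described in this abstract is what is now known as the Levenberg-Marquardt update. A minimal sketch of a single such step follows; the function name and the toy usage are illustrative, not taken from the paper:

```python
import numpy as np

def lm_step(jacobian, residuals, damping):
    """One Levenberg-Marquardt parameter update: solve
        (J^T J + damping * I) delta = -J^T r.
    Small damping approaches the Gauss-Newton (Taylor-series) step;
    large damping approaches a short steepest-descent step."""
    J = np.atleast_2d(np.asarray(jacobian, dtype=float))
    r = np.asarray(residuals, dtype=float)
    A = J.T @ J + damping * np.eye(J.shape[1])
    g = J.T @ r  # gradient of 0.5 * ||r||^2
    return np.linalg.solve(A, -g)
```

As a sanity check, for the linear model y = a*x with data x = (1, 2, 3), y = (2, 4, 6) and initial a = 0, the undamped step recovers a = 2 in one iteration, while a large damping value shrinks the step toward zero along the negative gradient.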
ABSTRACT: Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.
Journal of Machine Learning Research 01/2003; 3:1157-1182. DOI:10.1162/153244303322753616
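Among the approaches this abstract lists, feature ranking is the simplest: score each variable independently against the target and sort. A minimal univariate sketch using absolute Pearson correlation as the criterion (one common filter; the function name and criterion choice are illustrative, not from the paper):

```python
import numpy as np

def rank_features(X, y):
    """Rank columns of X by absolute Pearson correlation with y.
    Returns (indices sorted best-first, per-feature scores)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1]  # highest correlation first
    return order, scores
```

Univariate ranking is fast but, as the abstract's mention of multivariate selection implies, it can miss features that are only informative jointly.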