Chapter

Abstract

The LEXIS Weather and Climate Large-scale Pilot will deliver a system for the prediction of water-food-energy nexus phenomena and their associated socio-economic impacts. The system will be based on multiple model layers chained together, namely global weather and climate models, high-resolution regional weather models, domain-specific application models (such as hydrological and forest fire risk forecasts), and impact models providing information for key decision and policy makers (such as air quality, agricultural crop production, and extreme rainfall detection for flood mapping). This paper reports on the first results of this pilot in terms of serving model output data and products with Cloud and High Performance Data Analytics (HPDA) environments, on top of the Weather and Climate Data APIs (ECMWF), as well as the porting of models onto the LEXIS infrastructure via different virtualization strategies (virtual machines and containers).
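As an illustration of the model chaining described in the abstract, the following minimal Python sketch shows how such a pipeline could be orchestrated stage by stage; all function names and data structures are hypothetical placeholders and do not reflect the actual LEXIS implementation.

```python
# Hypothetical sketch of a LEXIS-style model chain: each stage consumes the
# previous stage's output. Function names are illustrative placeholders only.

def run_global_model(initial_conditions):
    """Global weather/climate model: coarse fields over the whole domain."""
    return {"coarse_fields": initial_conditions}

def run_regional_model(coarse_fields):
    """High-resolution regional weather model nested in the global fields."""
    return {"regional_fields": coarse_fields}

def run_application_models(regional_fields):
    """Domain-specific models, e.g. hydrological or forest-fire risk forecasts."""
    return {
        "hydrology": f"runoff derived from {regional_fields}",
        "fire_risk": f"risk index derived from {regional_fields}",
    }

def run_impact_models(application_output):
    """Impact layers for decision makers: air quality, crops, flood mapping."""
    return {name: f"impact product for {name}" for name in application_output}

if __name__ == "__main__":
    coarse = run_global_model(initial_conditions="analysis at t0")
    regional = run_regional_model(coarse["coarse_fields"])
    applications = run_application_models(regional["regional_fields"])
    impacts = run_impact_models(applications)
    print(impacts)
```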

Article
The evolution of High-Performance Computing (HPC) platforms enables the design and execution of progressively larger and more complex workflow applications in these systems. The complexity comes not only from the number of elements that compose the workflows but also from the type of computations they perform. While traditional HPC workflows target simulations and modelling of physical phenomena, current needs require in addition data analytics (DA) and artificial intelligence (AI) tasks. However, the development of these workflows is hampered by the lack of proper programming models and environments that support the integration of HPC, DA, and AI, as well as the lack of tools to easily deploy and execute the workflows in HPC systems. To progress in this direction, this paper presents use cases where complex workflows are required and investigates the main issues to be addressed for the HPC/DA/AI convergence. Based on this study, the paper identifies the challenges of a new workflow platform to manage complex workflows. Finally, it proposes a development approach for such a workflow platform addressing these challenges in two directions: first, by defining a software stack that provides the functionalities to manage these complex workflows; and second, by proposing the HPC Workflow as a Service (HPCWaaS) paradigm, which leverages the software stack to facilitate the reusability of complex workflows in federated HPC infrastructures. Proposals presented in this work are subject to study and development as part of the EuroHPC eFlows4HPC project.
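To illustrate the HPC Workflow-as-a-Service idea outlined above, the short Python sketch below registers a reusable workflow in a catalogue and invokes it by name; the registry class, workflow, and field names are hypothetical and are not the eFlows4HPC software stack.

```python
# Illustrative sketch of the HPC Workflow-as-a-Service (HPCWaaS) idea: a
# workflow is registered once and then invoked by name through a thin service
# layer, so users do not interact with the HPC system directly.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class WorkflowRegistry:
    """Stores reusable workflows keyed by name, mimicking a federated catalogue."""
    workflows: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def register(self, name: str, workflow: Callable[[dict], dict]) -> None:
        self.workflows[name] = workflow

    def execute(self, name: str, inputs: dict) -> dict:
        # A real service would submit to a remote HPC site; here we simply
        # call the workflow locally to show the invocation pattern.
        return self.workflows[name](inputs)

def simulation_plus_analytics(inputs: dict) -> dict:
    simulated = {"fields": f"simulation of {inputs['case']}"}
    return {"summary": f"analytics over {simulated['fields']}"}

registry = WorkflowRegistry()
registry.register("sim-da-ai", simulation_plus_analytics)
print(registry.execute("sim-da-ai", {"case": "storm ensemble"}))
```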
Chapter
The LEXIS (Large-scale EXecution for Industry & Society) H2020 project is building an advanced engineering platform that takes advantage of HPC, Cloud solutions, and Big Data, leveraging existing HPC infrastructures. In the framework of the LEXIS project, CIMA Research Foundation is running a WRF model with three nested domains covering Europe, with radar data assimilation over Italy. The WRF output is then processed by the ITHACA Extreme Rainfall Detection System (ERDS), an early warning system developed for the monitoring of heavy rainfall events. The WRF-ERDS workflow has been applied to the heavy rainfall event that affected Southern Italy, in particular the Calabria region, at the end of November 2020. Rainfall depths obtained using global-scale rainfall datasets and WRF data have been compared both with rain gauge data and with the daily bulletins issued by the Italian Civil Protection Department. The data obtained by running the WRF-ERDS workflow show how an advanced engineering platform based on HPC and Cloud solutions can provide more detailed forecasts to an early warning system like ERDS.
Article
Full-text available
In this work, we describe the integration of Weather Research and Forecasting (WRF) forecasts produced by CIMA Research Foundation within the ITHACA Extreme Rainfall Detection System (ERDS) to increase the forecasting skill of the overall early warning system. The entire workflow is applied to the heavy rainfall event that affected the city of Palermo on 15 July 2020, causing urban flooding due to an exceptional rainfall amount of more than 130 mm recorded in about 2.5 h. This rainfall event was not properly forecast by the meteorological models operational at the time, so an adequate alert could not be issued over that area. The results highlight that the improvement in quantitative precipitation forecast skill, supported by the adoption of the H2020 LEXIS computing facilities and by the assimilation of in situ observations, allowed ERDS to improve the prediction of the peak rainfall depths, thus paving the way for the potential issuing of an alert over the Palermo area.
Chapter
The Extreme Rainfall Detection System (ERDS) is an early warning system (EWS) developed for the monitoring and forecasting of rainfall events on a global scale. Within ERDS, near real-time rainfall monitoring is performed using Global Precipitation Measurement (GPM) data, while rainfall forecasts are provided by the Global Forecast System model. Rainfall depths determined on the basis of these data are then compared with a set of rainfall thresholds to evaluate the presence of heavy rainfall events: in places where the rainfall depth is higher than the corresponding threshold, an alert of a severe rainfall event is issued. The information provided by ERDS is accessible through a WebGIS application (http://erds.ithacaweb.org) in the form of maps of rainfall depths and related alerts, providing immediate and intuitive information also for non-specialized users. This chapter describes the input data and the extreme rainfall detection methodology currently implemented in ERDS. Furthermore, several case studies (the 2019 Queensland flood event, the 2017 Atlantic hurricane season, and the 2017 Eastern Pacific hurricane season) are included to highlight the strengths and weaknesses of this EWS based on global-scale rainfall datasets.
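The threshold comparison described above can be illustrated with a minimal Python sketch; the grid, rainfall depths, and threshold value are invented for illustration and are not ERDS data.

```python
# Minimal sketch of threshold-based alerting: an alert is raised wherever the
# accumulated rainfall depth exceeds the corresponding threshold.

import numpy as np

def detect_extreme_rainfall(rainfall_depth: np.ndarray,
                            thresholds: np.ndarray) -> np.ndarray:
    """Return a boolean alert mask: True where depth exceeds the threshold."""
    return rainfall_depth > thresholds

# Example on a tiny 2x3 grid (mm accumulated over a given interval).
depth = np.array([[10.0, 85.0, 40.0],
                  [120.0, 5.0, 60.0]])
threshold = np.full_like(depth, 50.0)  # single illustrative threshold value
print(detect_extreme_rainfall(depth, threshold))
```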
Article
Full-text available
The Mediterranean region is frequently struck by severe rainfall events causing numerous casualties and several million euros of damages every year. Thus, improving forecast accuracy is a fundamental goal to limit social and economic damages. Numerical Weather Prediction (NWP) models are currently able to produce forecasts at km-scale grid spacing, but unreliable surface information and poor knowledge of the initial state of the atmosphere may produce inaccurate simulations of weather phenomena. The STEAM (SaTellite Earth observation for Atmospheric Modelling) project aims to investigate whether weather observations from the Sentinel satellite constellation, in combination with Global Navigation Satellite System (GNSS) observations, can be used to better understand and predict, at higher spatio-temporal resolution, the atmospheric phenomena resulting in severe weather events. Two heavy rainfall events that occurred in Italy in the autumn of 2017 are studied: a localized and short-lived event and a long-lived one. By assimilating a wide range of Sentinel and GNSS observations in a state-of-the-art NWP model, it is found that the forecasts benefit the most when the model is provided with information on the wind field and/or the water vapor content.
Chapter
Full-text available
High Performance Computing (HPC) infrastructures (also referred to as supercomputing infrastructures) are at the basis of modern scientific discoveries and allow engineers to greatly optimize their designs. The large amounts of data (Big Data) to be treated during simulations are pushing HPC managers to introduce more heterogeneity in their architectures, ranging from different processor families to specialized hardware devices (e.g., GPU computing, many-cores, FPGAs). Furthermore, there is also a growing demand for providing access to supercomputing resources as in common public Clouds. All three of these elements (i.e., HPC resources, Big Data, and Cloud) make “converged” approaches mandatory to address challenges emerging in scientific and technical domains.
Article
Full-text available
The typically complex orography of the Mediterranean coastal areas supports the formation of so-called back-building Mesoscale Convective Systems (MCSs), which produce torrential rainfall often resulting in flash floods. As these events are usually very small-scale and localized, they are hardly predictable from a hydro-meteorological standpoint, frequently causing significant numbers of fatalities and socio-economic damages. Liguria, a north-western Italian region, is characterized by small catchments with very short hydrological response times, and is thus extremely prone to the impacts of back-building MCSs. Indeed, Liguria was hit by three intense back-building MCSs between 2011 and 2014, causing a total death toll of 20 people and several hundred million euros of damage. Consequently, the use of hydro-meteorological forecasting frameworks coupling fine-scale Numerical Weather Prediction (NWP) outputs with rainfall-runoff models is essential to provide timely and accurate streamflow forecasts. Concerning the aforementioned back-building MCS episodes that recently occurred in Liguria, this work assesses the predictive capability of a hydro-meteorological forecasting framework composed of a km-scale cloud-resolving NWP model (WRF), including a 6-hour cycling 3DVAR assimilation of radar reflectivity and conventional weather station data, a rainfall downscaling model (the rainfall filtered autoregressive model, RainFARM), and a fully distributed hydrological model (Continuum). A rich portfolio of WRF 3DVAR direct and indirect reflectivity operators has been explored to drive the meteorological component of the proposed forecasting framework. The results confirm the importance of rapidly refreshing and data-intensive 3DVAR for improving the quantitative precipitation forecast and, subsequently, flash-flood prediction in the case of back-building MCS events.
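As a rough illustration of the coupling described above, the following Python sketch downscales a coarse rainfall value into an ensemble and feeds each member to a toy rainfall-runoff relation; the functions and numbers are placeholders and are not RainFARM or Continuum.

```python
# Toy sketch of an NWP -> downscaling -> rainfall-runoff chain producing an
# ensemble spread of a peak-discharge proxy. Purely illustrative.

import random

def downscale_rainfall(coarse_mm: float, members: int) -> list:
    """Stand-in for a stochastic downscaling model: perturb the coarse value."""
    return [coarse_mm * random.uniform(0.7, 1.3) for _ in range(members)]

def rainfall_runoff(rain_mm: float, runoff_coefficient: float = 0.4) -> float:
    """Toy rainfall-runoff relation returning a peak-discharge proxy."""
    return runoff_coefficient * rain_mm

random.seed(0)
ensemble = downscale_rainfall(coarse_mm=80.0, members=10)
peaks = [rainfall_runoff(r) for r in ensemble]
print(f"peak-discharge proxy range: {min(peaks):.1f} - {max(peaks):.1f}")
```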
Article
Full-text available
Many studies have shown a growing trend in terms of frequency and severity of extreme events. As never before, having tools capable of monitoring the amount of rain that reaches the Earth’s surface has become a key point for the identification of areas potentially affected by floods. In order to guarantee almost global spatial coverage, NASA Global Precipitation Measurement (GPM) IMERG products proved to be the most appropriate source of information for satellite-based precipitation retrieval. This study is aimed at defining the accuracy of IMERG in representing extreme rainfall events for varying time aggregation intervals. This is performed by comparing IMERG data with rain gauge data. The outcomes demonstrate that satellite precipitation data guarantee good results when the rainfall aggregation interval is equal to or greater than 12 h. More specifically, a 24-h aggregation interval ensures a probability of detection (defined as the number of hits divided by the total number of observed events) greater than 80%. The outcomes of this analysis supported the development of the updated version of the ITHACA Extreme Rainfall Detection System (ERDS: erds.ithacaweb.org). This system is now able to provide near real-time alerts about extreme rainfall events using a threshold methodology based on the mean annual precipitation.
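The probability of detection defined above can be computed with a few lines of Python; the sample event series below are invented purely for illustration.

```python
# Probability of detection (POD) as defined above: hits divided by the total
# number of observed events, with two aligned boolean series as input.

def probability_of_detection(observed_events, detected_events):
    """Both arguments are sequences of booleans aligned in time."""
    hits = sum(o and d for o, d in zip(observed_events, detected_events))
    observed_total = sum(observed_events)
    return hits / observed_total if observed_total else float("nan")

observed = [True, True, False, True, False]   # rain-gauge exceedances
detected = [True, False, False, True, False]  # satellite-based exceedances
print(f"POD = {probability_of_detection(observed, detected):.2f}")  # 0.67
```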
Article
Full-text available
From 1970 to 2012, about 9000 high-impact weather events were reported globally, causing the loss of 1.94 million lives and damage of US$ 2.4 trillion (United Nations International Strategy for Disaster Reduction, UNISDR report 2014). The scientific community is called to action to improve the predictive ability for such events and to communicate forecasts and associated risks both to affected populations and to those making decisions. At the heart of this challenge lies the ability to have easy access to hydrometeorological data and models, and to facilitate the necessary collaboration between meteorologists, hydrologists, and computer science experts to achieve accelerated scientific advances. Two EU-funded projects, DRIHM and DRIHM2US, sought to help address this challenge by developing a prototype e-Science environment providing advanced end-to-end services (models, datasets, and post-processing tools), with the aim of paving the way to a step change in how scientists can approach studying these events, with a special focus on flood events in areas of complex topography. This paper describes the motivation and philosophy behind this prototype e-Science environment together with certain key components, focusing on hydro-meteorological aspects, which are then illustrated through actionable research on a critical flash flood event that occurred in October 2014 in Liguria, Italy.
Article
Full-text available
TOSCA offers a structured (XML-based) language that defines the different components of an application and the relations between them using an application topology, while capturing all management tasks in management plans. The main motivation behind this document is to provide an informational overview of TOSCA to people who are new to the recent developments in the field. As such, this document contains a description of a representative set of works in the literature that have made contributions to TOSCA.
Article
Full-text available
Distributed hydrological models with full process description are very useful tools in hydrology, as they can be applied in different contexts and for a wide range of aims, such as flood and drought forecasting, water management, and prediction of the impact of natural and human-induced changes on the hydrologic cycle. Since they must mimic a variety of physical processes, they can be very complex, with a high degree of parameterization. This complexity can be increased by the need to augment the number of observable state variables in order to improve model validation or to allow data assimilation. In this work, a model aiming to balance the need to reproduce the physical processes with the practical goal of avoiding over-parameterization is presented. The model is designed to be implemented in different contexts, with a special focus on data-scarce environments, e.g. those with no streamflow data. All the main hydrological phenomena are modeled in a distributed way. Mass and energy balances are solved explicitly. Land surface temperature, which is particularly suited to being extensively observed and assimilated, is an explicit state variable. A performance evaluation, based on both traditional and satellite-derived data, is presented with specific reference to an application in an Italian catchment. The model has first been calibrated and validated following a standard approach based on streamflow data. The capability of the model to reproduce both the streamflow measurements and the satellite-derived land surface temperature has been investigated. The model has then been calibrated using satellite data and geomorphologic characteristics of the basin, in order to test its application on a basin where standard hydrologic observations (e.g., streamflow data) are not available. The results have been compared with those obtained by the standard calibration strategy based on streamflow data.
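A streamflow-based calibration of the kind described above can be sketched as follows; the Nash-Sutcliffe efficiency is used here only as a common example of a skill score, and the placeholder model is not the one presented in the paper.

```python
# Hedged sketch of a streamflow-based calibration loop: try parameter values,
# score each simulation against observed discharge, keep the best one.
# `run_hydrological_model` is a hypothetical placeholder.

import numpy as np

def nash_sutcliffe(observed: np.ndarray, simulated: np.ndarray) -> float:
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def run_hydrological_model(parameter: float, observed: np.ndarray) -> np.ndarray:
    # Placeholder "model": scales the observed series to mimic a simulation.
    return parameter * observed

observed_q = np.array([1.0, 3.0, 8.0, 4.0, 2.0])  # invented discharge series
best = max(
    (nash_sutcliffe(observed_q, run_hydrological_model(p, observed_q)), p)
    for p in np.linspace(0.5, 1.5, 11)
)
print(f"best NSE = {best[0]:.3f} at parameter = {best[1]:.2f}")
```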
Chapter
The HPC-as-a-Service concept is to provide users with simple and intuitive access to a supercomputing infrastructure without the need to buy and manage their own physical servers or data centers. This article presents the commonly used services and implementations of this concept and introduces our own in-house application framework called High-End Application Execution Middleware (HEAppE Middleware). HEAppE’s universally designed software architecture enables unified access to different HPC systems through simple object-oriented web-based APIs, thus providing HPC capabilities to users without the need to manage running jobs from the command-line interface of the HPC scheduler directly on the cluster. This article also lists several pilot use cases from a number of thematic domains in which the HEAppE Platform was successfully used. Two of those pilots, focusing on satellite image analysis and bioimage informatics, are presented in more detail.
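The HPC-as-a-Service pattern can be illustrated with a small Python sketch in which a job is described as a plain object and submitted through a web API rather than the scheduler's command line; the endpoint, payload fields, and client class are hypothetical and do not reflect the actual HEAppE API.

```python
# Illustrative sketch of the HPC-as-a-Service pattern: jobs are described as
# plain objects and submitted through a web API instead of the scheduler CLI.
# All names and fields below are hypothetical (dry-run only, no HTTP call).

import json
from dataclasses import dataclass, asdict

@dataclass
class JobSpecification:
    name: str
    command: str
    nodes: int
    walltime_minutes: int

class HpcServiceClient:
    """Builds the REST request a middleware client would send."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def submit(self, job: JobSpecification) -> str:
        payload = json.dumps(asdict(job))
        # A real client would POST `payload` to something like f"{base_url}/jobs";
        # here we only show the request that would be issued.
        return f"POST {self.base_url}/jobs  body={payload}"

client = HpcServiceClient("https://hpc-gateway.example.org/api")
job = JobSpecification(name="wrf-run", command="run_wrf.sh", nodes=4,
                       walltime_minutes=120)
print(client.submit(job))
```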
Article
Since its initial release in 2000, the Weather Research and Forecasting (WRF) Model has become one of the world’s most widely used numerical weather prediction models. Designed to serve both research and operational needs, it has grown to offer a spectrum of options and capabilities for a wide range of applications. In addition, it underlies a number of tailored systems that address Earth system modeling beyond weather. While the WRF Model has a centralized support effort, it has become a truly community model, driven by the developments and contributions of an active worldwide user base. The WRF Model sees significant use for operational forecasting, and its research implementations are pushing the boundaries of finescale atmospheric simulation. Future model directions include developments in physics, exploiting emerging compute technologies, and ever-innovative applications. From its contributions to research, forecasting, educational, and commercial efforts worldwide, the WRF Model has made a significant mark on numerical weather prediction and atmospheric science.
Article
Forecasting severe hydro-meteorological events under time constraints requires reliable and robust models, which facilitate energy-aware allocations on high performance computing infrastructures. We present an innovative approach to quantify the performance of six heuristics in selecting optimal allocations to distributed HPC resources for an ensemble of meteorological forecasts for a flash-flood producing storm in Genoa (Liguria, Italy) in October 2014. The computing environments are expected to be dynamic and heterogeneous in nature with varying availability, performance and energy-to-solution. The results of the allocations are assessed and compared. The aim is to provide a robust, reliable and energy-aware resource allocation for ensembles of forecasts for time-critical decision support.
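One simple heuristic of the general kind compared in the study can be sketched as follows: among the resources that can meet the deadline, pick the one with the lowest estimated energy-to-solution. The resource data are invented and the heuristic is purely illustrative, not one of the six from the paper.

```python
# Greedy, energy-aware allocation sketch: filter resources by deadline, then
# choose the most energy-efficient one for all ensemble members.

resources = [
    {"name": "site-A", "runtime_h": 2.0, "energy_kwh": 30.0},
    {"name": "site-B", "runtime_h": 1.0, "energy_kwh": 55.0},
    {"name": "site-C", "runtime_h": 3.5, "energy_kwh": 20.0},
]

def allocate(members: int, deadline_h: float) -> list:
    """Assign each ensemble member to the most energy-efficient feasible site."""
    feasible = [r for r in resources if r["runtime_h"] <= deadline_h]
    if not feasible:
        raise ValueError("no resource meets the deadline")
    best = min(feasible, key=lambda r: r["energy_kwh"])
    return [best["name"]] * members

print(allocate(members=4, deadline_h=2.5))  # ['site-A', 'site-A', 'site-A', 'site-A']
```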
Conference Paper
DEWETRA is a fully operational platform used by the Italian Civil Protection Department and designed by CIMA Research Foundation to support operational activities at national or international scale. The system is a web-GIS platform aimed at multi-risk mapping, forecasting, and monitoring. Using the platform's tools, it is possible to aggregate data both temporally and spatially and to build scenarios of risk and damage. Two case studies are presented: the synthetic design of a risk scenario for the catastrophic rainstorm that occurred on October 25th, 2011 in the easternmost part of Liguria and north-western Tuscany, and the contribution to emergency management of the 2012 Emilia Romagna earthquake. The usefulness of the platform has also been proven by very short-notice deployments during sudden crises in Pakistan and Japan, for instance, and by full implementations now operational at the Lebanese, Albanian, and Bolivian national civil protection agencies, in addition to Barbados and the Organization of Eastern Caribbean States (OECS), in the framework of the ERC Project (Enhancing Resilience to Reduce Vulnerability in the Caribbean). On March 25th, 2014, the World Meteorological Organization (WMO) signed a cooperation agreement with the Italian Civil Protection Department in order to install and deploy Dewetra in countries requesting it through the WMO: so far, the Philippines, Ecuador, and Guyana have requested it.
Article
Many countries perform national air quality assessments using grid-based numerical air dispersion models, generally referred to as 'regional' models. Advantages of these models include the ability to use temporally and spatially varying meteorology and to model chemical reactions over large temporal and spatial scales. These models usually perform reasonably well against rural and urban background monitors, but their predictions at roadside monitors are underestimated. City-scale air dispersion models have been developed to give high spatial resolution, but are usually restricted to using spatially homogeneous meteorological data and to modelling simplified chemical reactions over short time scales. Thus, regional and city-scale air dispersion models have complementary strengths, and a system where a city-scale model is nested within a regional model allows accurate air dispersion modelling over a range of spatial scales. This paper presents preliminary modelling results from a system where the local model ADMS-Urban is nested within the regional model CMAQ.
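The nesting idea can be illustrated with a toy Python sketch in which the regional model supplies a background concentration and the local model adds a near-road increment; both functions are crude placeholders, not CMAQ or ADMS-Urban.

```python
# Toy sketch of nesting a city-scale model within a regional one: the
# city-scale result is the regional background plus a locally resolved term.

def regional_background(hour: int) -> float:
    """Placeholder hourly background concentration (ug/m3) from a regional model."""
    daytime = 6 < hour % 24 < 20
    return 25.0 if daytime else 20.0

def local_increment(distance_to_road_m: float) -> float:
    """Placeholder near-road increment decaying with distance from the source."""
    return 40.0 / (1.0 + distance_to_road_m / 10.0)

def nested_concentration(hour: int, distance_to_road_m: float) -> float:
    return regional_background(hour) + local_increment(distance_to_road_m)

print(f"roadside at 09:00: {nested_concentration(9, 5.0):.1f} ug/m3")
print(f"urban background at 09:00: {nested_concentration(9, 200.0):.1f} ug/m3")
```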
Article
In this paper, the architecture and the application of a system designed for the assessment of the distribution of dynamic wildland fire risk over the whole Italian territory are presented. The assessment takes place on the basis of static information concerning the vegetation cover and the topography of the territory, and of dynamic information consisting of the meteorological forecast, over a certain time horizon, provided by a Limited Area Model. The risk distribution assessment is obtained through the use of two different models, namely the fuel moisture model and the potential fire spread model, which are applied, at each time interval, to each of the cells into which the area of study is discretized. In this paper, the structure of these models is presented in detail, and similarities and differences with respect to other existing models are discussed. As the performance of the overall system is heavily dependent on the choice of the values of the parameters appearing in these models, a careful calibration of the parameters is needed. In this respect, the results of a preliminary calibration are presented, carried out with the objective of maximizing the agreement between the outputs of the presented risk assessment system and the records of fires that actually occurred.
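A highly simplified per-cell evaluation of the kind described above might look like the following Python sketch; the formulas and weights are illustrative placeholders, not the paper's fuel moisture or fire spread models.

```python
# Simplified per-cell risk evaluation: derive a fuel-moisture proxy from the
# forecast weather and combine it with static vegetation/topography data.

def fuel_moisture_proxy(rel_humidity: float, temperature_c: float) -> float:
    """Crude proxy: moisture rises with humidity and falls with temperature."""
    return max(0.0, min(1.0, rel_humidity / 100.0 - 0.005 * temperature_c))

def cell_risk(vegetation_load: float, slope: float,
              rel_humidity: float, temperature_c: float, wind_ms: float) -> float:
    dryness = 1.0 - fuel_moisture_proxy(rel_humidity, temperature_c)
    spread_potential = wind_ms * (1.0 + slope)       # stand-in for a spread model
    return vegetation_load * dryness * spread_potential

# One forecast time step over two illustrative cells.
cells = [
    {"vegetation_load": 0.8, "slope": 0.2},
    {"vegetation_load": 0.3, "slope": 0.05},
]
forecast = {"rel_humidity": 35.0, "temperature_c": 30.0, "wind_ms": 6.0}
for i, cell in enumerate(cells):
    print(f"cell {i}: risk = {cell_risk(**cell, **forecast):.2f}")
```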
Article
ADMS and AERMOD are the two most widely used dispersion models for regulatory purposes. It is, therefore, important to understand the differences in the predictions of the models and the causes of these differences. The treatment by the models of flat terrain has been discussed previously; in this paper the focus is on their treatment of complex terrain. The paper includes a discussion of the impacts of complex terrain on airflow and dispersion and how these are treated in ADMS and AERMOD, followed by calculations for two distinct cases: (i) sources above a deep valley within a relatively flat plateau area (Clifty Creek power station, USA); (ii) sources in a valley in hilly terrain where the terrain rises well above the stack tops (Ribblesdale cement works, England). In both cases the model predictions are markedly different. At Clifty Creek, ADMS suggests that the terrain markedly increases maximum surface concentrations, whereas the AERMOD complex terrain module has little impact. At Ribblesdale, AERMOD predicts very large increases (a factor of 18) in the maximum hourly average surface concentrations due to plume impaction onto the neighboring hill; although plume impaction is predicted by ADMS, the increases in concentration are much less marked as the airflow model in ADMS predicts some lateral deviation of the streamlines around the hill.
Supporting Keycloak in iRODS systems with OpenID authentication
  • R. J. García-Hernández
  • M. Golasowski
Integrated Multi-satellitE Retrievals for GPM (IMERG) technical documentation
  • G. J. Huffman
  • D. T. Bolvin
  • E. J. Nelkin