Table 1 - uploaded by Korbinian Breinl


## Source publication

Human settlements are often at risk from multiple hydro-meteorological hazards, including fluvial floods, short-duration extreme precipitation (leading to 'pluvial' floods), and coastal floods. In the past, considerable scientific effort has been devoted to assessing fluvial floods. Only recently have methods been developed to assess the hazard and...

## Contexts in source publication

**Context 1**

... our model experiment, daily totals of precipitation and daily mean temperature were available from 1987 to 2010. For precipitation, stations 1-19 were used; for temperature, stations 15, 16 and 19 (see Figure 2 and Table 1). ...

**Context 2**

... The focus is on urban flooding, i.e. the main urbanised area of Salzburg (area approximately 30 km²). • Rainfall that is relevant for pluvial floods is represented by a single, centrally located urban precipitation gauge [Salzburg-Freisaal (gauge 19), see Figure 2 and Table 1]. ...

**Context 3**

... larger cities, additional precipitation gauges would likely be required to capture the spatial variability of precipitation. Table 1). ...

**Context 4**

... such a wide window does not represent a 'combined' flood in its actual sense, the analyses are useful for the fire service or civil protection authorities with regard to preparedness and logistics. Table 10 summarises the simulation results of combined floods. ...

**Context 5**

... same delay can be observed for combined flood days. The catchment delay also explains the low probability of combined floods on the same day, as the probability of combined floods significantly increases with only a small increase of the observation window (see Table 10). The right panel of Figure 12 shows the same analysis, but conducted for the simulated daily urban precipitation. ...

**Context 6**

... analysis of combined flood days has been conducted assuming three different thresholds (20, 25 and 30 mm/h). The framework is designed for urban rainstorms at hourly resolution, but could be adapted to simulate rainstorms at finer scales. ...

*Table 10: Return periods of combined fluvial-pluvial flood days, assuming three different thresholds for pluvial days (5th percentile, mean and 95th percentile). A window of 1 means combined events on the same day; a window of 7 days means at least one fluvial and one pluvial event within 1 week, etc.*
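The window concept behind Table 10 can be illustrated with a short, hypothetical sketch. The function name, the toy event series, and the interpretation that a window of *n* days allows the two events to be at most *n − 1* days apart are all assumptions for illustration, not taken from the paper:

```python
def combined_event_days(fluvial, pluvial, window):
    """Count fluvial event days that have at least one pluvial event
    within the given window (window=1 means the same day)."""
    tolerance = window - 1  # maximum allowed offset in days (assumed reading)
    pluvial_days = [i for i, p in enumerate(pluvial) if p]
    return sum(
        1
        for day, f in enumerate(fluvial)
        if f and any(abs(day - p) <= tolerance for p in pluvial_days)
    )

# Toy series: one fluvial event on day 3, one pluvial event on day 5
fluvial = [0, 0, 0, 1, 0, 0, 0, 0]
pluvial = [0, 0, 0, 0, 0, 1, 0, 0]
print(combined_event_days(fluvial, pluvial, window=1))  # 0 (not the same day)
print(combined_event_days(fluvial, pluvial, window=3))  # 1 (two days apart)
```

Widening the window from 1 to 3 days turns the near-miss into a combined event, mirroring how the probability of combined floods increases with the observation window.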

## Similar publications

Floods in the City of Osijek area and its surroundings have been recorded since the 18th century. Since that period, part of the surrounding wetland area has been transformed into settlements and agricultural surfaces, whereas the Drava course has been shortened and its inundation surfaces reduced. The vicinity of the Drava and Danube confluenc...

## Citations

... The cluster-based Poisson process for stochastic rainfall modeling was studied by Cowpertwait et al. (1996), Koutsoyiannis et al. (2003), Onof and Wang (2020) and Rodriguez-Iturbe et al. (1987). Wójcik and Buishand (2003), Westra et al. (2012), Breinl et al. (2015) and Breinl and Di Baldassarre (2019) put forward another model based on the method of fragments. Olsson (1998) modeled the scaling behavior of rainfall using a cascade process called the microcanonical multiplicative random cascade (MMRC) model. ...

Temporal disaggregation of rainfall has received particular focus because long-duration records of higher-resolution rainfall are rarely available, while fine temporal resolution rainfall is used in a multitude of hydrological applications. Researchers have proposed various disaggregation models to disaggregate coarse temporal resolution rainfall. In this paper, firstly, the microcanonical multiplicative random cascade (MMRC) model is applied for disaggregation from daily rainfall to a one-hour scale. The model is applied at four rainfall stations with varying rainfall patterns and characteristics. It is observed that the MMRC model can generate statistically reliable rainfall time series; however, the extreme rainfall characteristics are not well conserved by the model for all the stations.
This paper then describes a new model based on a random multiplicative cascade process in which classification and parameter generation are done by k-means clustering, such that it can better conserve extreme rainfall conditions and generate a reliable rainfall time series (MMRC-K). K-means clustering is a vector quantization method that divides the observations into a particular number of clusters based on the nearest mean, called the cluster centroid. The novel approach is tested with the same four Indian cities. The use of k-means clustering has made the classification and parameter generation of the model robust, such that it can work with data sets of varying characteristics. It is found that MMRC-K provides improved conservation of extreme rainfall characteristics compared to the MMRC model for all four stations. The MMRC-K model reproduces the IDF curves of the Delhi and Mumbai stations quite well; however, a slight discrepancy was observed at higher resolution and larger return periods at the Kolkata and Chennai stations. Extreme rainfall at finer resolution is used in various hydrological analyses and design problems, such as urban drainage design and stormwater management. The overall superior conservation of the extreme rainfall characteristics in the rainfall time series generated by the MMRC-K model, compared to the MMRC model, supports the potential applicability of the model for temporal disaggregation.
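To make the cascade idea concrete, here is a minimal, illustrative micro-canonical cascade step. The branching probabilities `p01`/`p10` and the uniform weight draw are stand-ins for the empirically fitted weight distributions of the MMRC model, not the published parameterisation; the defining property shown is that every split conserves the parent mass exactly:

```python
import random

def cascade_disaggregate(values, levels, p01=0.2, p10=0.2, seed=42):
    """Illustrative micro-canonical cascade: each value is repeatedly
    split into two halves whose sum equals the parent exactly."""
    rng = random.Random(seed)
    for _ in range(levels):
        finer = []
        for v in values:
            if v == 0.0:
                finer += [0.0, 0.0]  # dry steps stay dry
                continue
            u = rng.random()
            if u < p01:
                w = 0.0              # all mass into the second half
            elif u < p01 + p10:
                w = 1.0              # all mass into the first half
            else:
                w = rng.random()     # x / (1 - x) split; uniform stand-in
            finer += [v * w, v * (1.0 - w)]
        values = finer
    return values

daily = [24.0, 0.0, 12.0]
sub_daily = cascade_disaggregate(daily, levels=3)   # 2**3 = 8 sub-steps per day
print(len(sub_daily))                               # 24
print(abs(sum(sub_daily) - sum(daily)) < 1e-9)      # True: mass conserved
```

In the MMRC-K variant described above, the split parameters would additionally depend on the k-means cluster that the parent time step is assigned to.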

... These models consider the pairwise dependence of peak discharges at multiple locations and generate synthetic series of multiple dependent flow peaks. The second possibility is based on the generation of spatially distributed meteorological fields by a weather generator, either station-based with subsequent interpolation (Falter, 2016; Breinl et al., 2017; Evin et al., 2018; Raynaud et al., 2019) or raster-based (Buishand and Brandsma, 2001; Peleg et al., 2017). Synthetic meteorological fields are subsequently used to drive hydrological simulations to generate streamflow values across the study area. ...

Flood risk assessment is an important prerequisite for risk management decisions. To estimate the risk, i.e. the probability of damage, flood damage needs to be either systematically recorded over a long period or modelled for a series of synthetically generated flood events. Since damage records are typically rare, time series of plausible, spatially coherent event precipitation or peak discharges need to be generated to drive the chain of process models. In the present study, synthetic flood events are generated by two different approaches to modelling flood risk in a meso-scale alpine study area (Vorarlberg, Austria). The first approach is based on the semi-conditional multi-variate dependence model applied to discharge series. The second approach relies on the continuous hydrological modelling of synthetic meteorological fields generated by a multi-site weather generator and using an hourly disaggregation scheme. The results of the two approaches are compared in terms of simulated spatial patterns of peak discharges and overall flood risk estimates. It could be demonstrated that both methods are valid approaches for risk assessment with specific advantages and disadvantages. Both methods are superior to the traditional assumption of a uniform return period, where risk is computed by assuming a homogeneous return period (e.g. 100-year flood) across the entire study area.

... To generate high intensity rainfall events at different spatio-temporal scales, stochastic rainfall modeling of these phenomena requires long-term observation (Apel et al., 2016;Breinl et al., 2017). This is also the case for alternative rainfall databases such as re-analysis and/or Global Climate Model (GCM) simulation data as their target period is generally as long as the observation. ...

... Several methods exist for the temporal disaggregation, e.g. the method of fragments (Wójcik and Buishand, 2003; Westra et al., 2012; Breinl et al., 2015; Breinl and Di Baldassarre, 2019), rectangular pulse models (Koutsoyiannis and Onof, 2001) and cascade models. Cascade models are well-known disaggregation models for the generation of high-resolution rainfall time series and were developed originally in the field of turbulence theory (Mandelbrot, 1974). ...

In urban hydrology rainfall time series of high resolution in time are crucial. Such time series with sufficient length can be generated through the disaggregation of daily data with a micro-canonical cascade model. A well-known problem of time series generated in this way is the inadequate representation of the autocorrelation. In this paper two cascade model modifications are analysed regarding their ability to improve the autocorrelation in disaggregated time series with 5 min resolution. Both modifications are based on a state-of-the-art reference cascade model (method A). In the first modification, a position dependency is introduced in the first disaggregation step (method B). In the second modification the position of a wet time step is redefined in addition by taking into account the disaggregated finer time steps of the previous time step instead of the previous time step itself (method C). Both modifications led to an improvement of the autocorrelation, especially the position redefinition (e.g. for lag-1 autocorrelation, relative errors of −3 % (method B) and 1 % (method C) instead of −4 % for method A). To ensure the conservation of a minimum rainfall amount in the wet time steps, the mimicry of a measurement device is simulated after the disaggregation process. Simulated annealing as a post-processing strategy was tested as an alternative as well as an addition to the modifications in methods B and C. For the resampling, a special focus was given to the conservation of the extreme rainfall values. Therefore, a universal extreme event definition was introduced to define extreme events a priori without knowing their occurrence in time or magnitude. The resampling algorithm is capable of improving the autocorrelation, independent of the previously applied cascade model variant (e.g. for lag-1 autocorrelation the relative error of −4 % for method A is reduced to 0.9 %). Also, the improvement of the autocorrelation by the resampling was higher than by the choice of the cascade model modification. The best overall representation of the autocorrelation was achieved by method C in combination with the resampling algorithm. The study was carried out for 24 rain gauges in Lower Saxony, Germany.
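The headline metric of this abstract, the relative error of the lag-1 autocorrelation, is straightforward to compute. The following self-contained sketch uses toy series (not the Lower Saxony data) purely to show the calculation:

```python
def lag1_autocorr(series):
    """Sample lag-1 autocorrelation coefficient of a time series."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

def relative_error(simulated, observed):
    """Signed relative error, as reported in the abstract (e.g. -4 %)."""
    return (simulated - observed) / observed

obs = [0.0, 1.2, 3.4, 2.0, 0.0, 0.0, 5.1, 4.0, 1.0, 0.0]  # toy 'observed'
sim = [0.0, 1.0, 3.0, 2.5, 0.0, 0.2, 4.8, 3.5, 1.2, 0.0]  # toy 'disaggregated'
print(f"{relative_error(lag1_autocorr(sim), lag1_autocorr(obs)):+.1%}")
```

A negative value, as for methods A and B above, means the disaggregated series is less autocorrelated than the observations.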


... Yang and Zhang [159] studied the joint probability distribution of extreme winds and simulated waves using the Gumbel copula. The combination of fluvial and pluvial floods is rarely studied in the literature [5,24]. Apel et al. [5] consider fluvial and pluvial floods to be completely independent. ...

Modelling combinations of flooding phenomena is a topical problem for the scientific community, which focuses primarily on urban and nuclear sites. Indeed, the deterministic approach, which explores a certain number of scenarios, very likely has limitations, because these deterministic scenarios ensure a conservatism that is often excessive. Probabilistic approaches add precision by relying on statistics and probability to complement the deterministic approaches. These probabilistic approaches aim to identify and combine several possible hazard scenarios in order to cover several possible sources of risk. The probabilistic flood hazard assessment (PFHA) approach proposed in this thesis characterises one or more quantities of interest (water level, volume, immersion duration, etc.) at different points of a site, based on the distributions of the various flooding phenomena as well as the characteristics of the site. The main steps of the PFHA are: i) identification of the possible phenomena (rainfall, sea level, waves, etc.); ii) identification and probabilisation of the parameters associated with the selected flooding phenomena; iii) propagation of these phenomena from their sources to the points of interest on the site; iv) construction of hazard curves by aggregating the contributions of the flooding phenomena. Uncertainties are an important point of the thesis insofar as they are taken into account in all steps of the probabilistic approach. The work of this thesis is based on the study of the conjunction of rainfall and sea level and provides a new method for taking into account the temporal phase shift between phenomena (coincidence). An aggregation model was developed to combine the contributions of the different flooding phenomena.
The question of uncertainties was studied, and a method based on belief function theory was used because it offers various advantages over other concepts (faithful modelling in cases of total ignorance and of lack of information, the possibility of combining information of different origins and natures, etc.). The proposed methodology is applied to the Le Havre site, in France.

... Several methods exist for the temporal disaggregation, e.g. the method of fragments (Wójcik and Buishand, 2003; Breinl et al., 2015; Breinl and Di Baldassarre, 2019), rectangular pulse models (Koutsoyiannis and Onof, 2001) and cascade models. Cascade models are well-known disaggregation models for the generation of high-resolution rainfall time series and were developed originally in the field of turbulence theory (Mandelbrot, 1974). ...

In urban hydrology rainfall time series of high resolution in time are crucial. Such time series with sufficient length can be generated through the disaggregation of daily data with a micro-canonical cascade model. A well-known problem of time series generated in this way is the underestimation of the autocorrelation. In this paper two cascade model modifications are analysed regarding their ability to improve the autocorrelation. Both modifications are based on a state-of-the-art reference cascade model. In the first modification, a position dependency is introduced in the first disaggregation step. In the second modification the position of a wet time step is redefined in addition. Both modifications led to an improvement of the autocorrelation, especially the position redefinition. Simultaneously, two approaches are investigated to avoid the generation of time steps with too small rainfall intensities: the conservation of a minimum rainfall amount during the disaggregation process itself, and the mimicry of a measurement device after the disaggregation process. The mimicry approach shows slightly better results for the autocorrelation and hence was kept for a subsequent resampling investigation using simulated annealing. For the resampling, a special focus was given to the conservation of the extreme rainfall values. Therefore, a universal extreme event definition was introduced to define extreme events a priori without knowing their occurrence in time or magnitude. The resampling algorithm is capable of improving the autocorrelation, independent of the previously applied cascade model variant. Also, the improvement of the autocorrelation by the resampling was higher than by the choice of the cascade model modification. The best overall representation of the autocorrelation was achieved by method C in combination with the resampling algorithm. The study was carried out for 24 rain gauges in Lower Saxony, Germany.

... A recent study by Breinl (2016), with the focus on exploring the effect of weather generators of differing complexity, concluded that FFA and CMA can achieve comparable results on a daily scale. The CMA studies were limited to either single-site applications on a sub-daily time scale (Grimaldi et al. 2012b, Arnaud et al. 2017) or multi-site applications on a daily time scale (Hundecha and Merz 2012, Falter et al. 2015, Breinl 2016, Breinl et al. 2017). ...

... In recent years, different methods have been developed to disaggregate daily rainfall to sub-daily time steps. They include resampling techniques based upon the method of fragments (Buishand and Brandsma 2001, Sharma and Srikanthan 2006, Leander and Buishand 2009, Pui et al. 2012, Westra et al. 2013, Breinl et al. 2017), multiplicative cascade models (Olsson 1998, Güntner et al. 2001, Haberlandt and Radtke 2014, Förster et al. 2016, Müller and Haberlandt 2018), and more complex stochastic disaggregation procedures, for example based on the Bartlett-Lewis rectangular pulse model (Koutsoyiannis et al. 2003, Kossieris et al. 2016). ...

... The applied disaggregation method generally follows the modelling steps proposed by Lall and Sharma (Sharma and Srikanthan 2006). In contrast to other resampling procedures (Sharma and Srikanthan 2006, Nowak et al. 2010, Pui et al. 2012, Breinl et al. 2017), the temperature is disaggregated simultaneously with the precipitation. The course of temperature during the day is thereby expressed as a relative difference to the daily mean temperature. ...

Design flood estimation is an essential part of flood risk assessment. Commonly applied are flood frequency analyses and design storm approaches, while the derived flood frequency using continuous simulation has been getting more attention recently. In this study, a continuous hydrological modelling approach on an hourly time scale, driven by a multi-site weather generator in combination with a k-nearest neighbour resampling procedure, based on the method of fragments, is applied. The derived 100-year flood estimates in 16 catchments in Vorarlberg (Austria) are compared to (a) the flood frequency analysis based on observed discharges, and (b) a design storm approach. Besides the peak flows, the corresponding runoff volumes are analysed. The spatial dependence structure of the synthetically generated flood peaks is validated against observations. It can be demonstrated that the continuous modelling approach can achieve plausible results and shows a large variability in runoff volume across the flood events.
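The method-of-fragments resampling at the core of this approach can be sketched in a deliberately simplified, single-site form. The function, the k-nearest-neighbour selection by daily total, and the toy reference days are illustrative assumptions, not the multi-site weather-generator procedure of the paper:

```python
def disaggregate_mof(daily_value, reference_days, k=3, pick=0):
    """Method-of-fragments sketch: find the k reference days whose daily
    totals are closest to `daily_value`, pick one, and rescale its hourly
    fragments (hourly / daily-total ratios) to the target daily total.
    `reference_days` is a list of 24-value hourly observations."""
    totals = [sum(hours) for hours in reference_days]
    # k nearest neighbours by absolute difference of daily totals
    nearest = sorted(range(len(totals)),
                     key=lambda i: abs(totals[i] - daily_value))[:k]
    chosen = reference_days[nearest[pick % k]]  # deterministic pick for the sketch
    total = sum(chosen)
    fragments = [h / total for h in chosen]     # fragments sum to 1
    return [daily_value * f for f in fragments]

ref = [
    [0] * 6 + [1, 3, 6, 4, 2, 1] + [0] * 12,    # 17 mm convective-type day
    [0.5] * 24,                                  # 12 mm drizzle day
]
hourly = disaggregate_mof(10.0, ref, k=1)
print(round(sum(hourly), 6))  # 10.0: the daily total is conserved
```

In practice the neighbour would be drawn at random from the k candidates (and conditioned on season, wet/dry state and neighbouring stations), rather than picked deterministically as here.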

... (14)), which also turned out to work well with nearest neighbor algorithms for univariate precipitation disaggregation (e.g. Breinl et al., 2017b). ...

Study region: This study focuses on two study areas: the Province of Trento (Italy; 6,200 km²) and the whole of Sweden (447,000 km²). The Province of Trento is a complex mountainous area including subarctic, humid continental and tundra climates. Sweden, instead, is mainly dominated by a subarctic climate in the north and an oceanic climate in the south.
Study focus: Hydrological predictions often require long weather time series of high temporal resolution. Daily observations typically exceed the length of sub-daily observations, and daily gauges are more widely available than sub-daily gauges. The issue can be overcome by disaggregating daily into sub-daily values. We present an open-source tool for the non-parametric space-time disaggregation of daily precipitation and temperature into hourly values called spatial method of fragments (S-MOF). A large number of comparative experiments was conducted for both S-MOF and MOF in the two study regions.
New hydrological insights for the region: Our experiments demonstrate the applicability of the univariate and spatial method of fragments in the two temperate/subarctic study regions where snow processes are important. S-MOF is able to produce consistent precipitation and temperature fields at sub-daily resolution with an acceptable method-related bias. For precipitation, although it is climatologically more complex, the Province of Trento generally yields better S-MOF results than Sweden, mainly due to the smaller spatial extent of the former region.
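S-MOF disaggregates temperature alongside precipitation. One common way to borrow a sub-daily temperature course from a reference day, expressing it as deviations from the daily mean, can be sketched as follows; this is a hypothetical, additive single-site illustration (function name and toy values are not from the paper, and a ratio-based variant of the "relative difference" idea would be equally plausible):

```python
def disaggregate_temperature(daily_mean, fragment_day):
    """Temperature analogue of the method of fragments: the sub-daily
    course is expressed as deviations from the reference day's mean and
    added onto the target daily mean, which is preserved exactly."""
    ref_mean = sum(fragment_day) / len(fragment_day)
    deltas = [t - ref_mean for t in fragment_day]
    return [daily_mean + d for d in deltas]

ref_day = [2.0, 1.0, 5.0, 9.0, 7.0, 6.0]    # toy 6-step reference course, mean 5.0
course = disaggregate_temperature(12.0, ref_day)
print(course)                                # [9.0, 8.0, 12.0, 16.0, 14.0, 13.0]
```

Because the deltas sum to zero, the disaggregated course reproduces the target daily mean while keeping the diurnal shape of the reference day.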

... Past uncertainty analysis efforts via unsteady models have been based largely on a joint flood frequency-shape analysis, with only a very limited number of studies using ensembles of hydrological models to provide uncertain hydraulic model boundary conditions (e.g. Pappenberger et al. 2005, Bermúdez et al. 2017, Breinl et al. 2017). However, those studies that have used hydrological models to provide boundary conditions in this way have not done so with the intention of creating design hydrographs. ...

Prediction of design hydrographs is key in floodplain mapping using hydraulic models, which are either steady-state or unsteady. The former, which only requires an input peak, substantially overestimates the volume of water entering the floodplain compared to the more realistic dynamic case simulated by the unsteady models that require the full hydrograph. Past efforts to account for the uncertainty of boundary conditions using unsteady hydraulic modeling have largely been based on a joint flood frequency-shape analysis, with only a very limited number of studies using hydrologic modeling to produce the design hydrographs. This study therefore presents a generic probabilistic framework that couples a hydrologic model with an unsteady hydraulic model to estimate the uncertainty of flood characteristics. The framework is demonstrated on the Swannanoa River watershed in North Carolina, USA. Given its flexibility, the framework can be applied to study other sources of uncertainty in hydrologic models and other watersheds.