Article

A framework for flood risk assessment under nonstationary conditions or in the absence of historic data

Wiley
Journal of Flood Risk Management
Authors:
  • WEST Consultants Inc

Abstract

We present a diagnostic framework to assess changes in flood risk across multiple scales in a river network, under nonstationary conditions or in the absence of historical hydro-meteorological data. The framework combines calibration-free hydrological and hydraulic models with urban development information to demonstrate altered flood risk. Our models utilize hydraulic geometry data and high-resolution remote-sensing information provided on a nearly global basis. The need for calibration is eliminated because model parameters are directly related to the physical properties of the system. We apply the methodology in a case study for Mecklenburg County, NC, in which we assess the effects of land cover changes on flood frequency. We obtained maps of expected inundated zones under different conditions of land cover and storm return periods and compared them with the 100-year return period inundation maps developed by the Federal Emergency Management Agency, which are based on more complex hydraulic models. The close agreement supports our framework's applicability and generality.
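As an illustration of the kind of calibration-free computation the abstract describes, here is a minimal Python sketch in which power-law hydraulic-geometry relations translate a simulated peak discharge into channel width and depth for inundation estimation; the functional form is standard in the hydraulic-geometry literature, but the coefficient values below are hypothetical placeholders, not values from the paper.

    # Hypothetical at-a-station hydraulic geometry: w = a*Q**b, d = c*Q**f.
    # Coefficients are illustrative placeholders, not the paper's parameters.
    def channel_geometry(q_peak_m3s, a=4.0, b=0.5, c=0.3, f=0.35):
        """Return (width_m, depth_m) for a peak discharge via power laws."""
        width = a * q_peak_m3s ** b
        depth = c * q_peak_m3s ** f
        return width, depth

    w, d = channel_geometry(150.0)  # e.g. a 150 m^3/s flood peak
    print(f"width ~ {w:.1f} m, depth ~ {d:.2f} m")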


... Effective flood risk management activities (risk reduction, emergency response, recovery) require an accurate and timely characterization of the hazard and its possible consequences (losses) at a given location and for the entire affected region [3]; that is, the flood loss footprint. The significant annual economic damage and human losses currently caused by riverine floods, combined with projected increases in flood intensity and frequency due to climate change and land cover change [4,5,46], highlight the need for such information. However, methods that are able to accurately simulate or observe flood magnitudes over large areas, across multiple spatial scales, and in a timely manner are typically unavailable. ...
... We simulate streamflow using CUENCAS, a spatially explicit, physically based hydrological model. Prior flood research using CUENCAS has been presented by Mantilla and Gupta [24], Mandapaka et al. [25], Cunha et al. [5], Cunha et al. [26], Seo et al. [27], Ayalew et al. [28], and Ayalew et al. [29]. Even though other hydrological models that simulate floods are available, we chose CUENCAS due to its computational efficiency, its valid and detailed representation of the river network, its simplified parameter estimation based on measurable physical properties, and its ability to simulate floods across a large range of scales (from hillslope to large watersheds). ...
... From model 1, if a census tract were to increase its observed maximum FPR by one unit, the expected number of claims from an event would increase by a factor of 1.81, holding all other variables in the model constant. From model 3, census tracts experiencing FPRs classified as action, minor, or moderate have expected numbers of claims that are, respectively, 72%, 66%, and 56% of the number of claims expected for tracts experiencing major FPR, holding all other variables in the model constant. As expected, from the inflated portion of the model (supplementary material table 3), a higher observed FPR value is not a statistically significant driver of a lower likelihood of a zero-flood-claim outcome. ...
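The factor of 1.81 quoted in the snippet is the standard rate-ratio reading of a count-regression coefficient; a minimal sketch of the arithmetic (the coefficient value is back-calculated from the quoted factor, not taken from the model output):

    import math

    # In a Poisson/negative-binomial model, a one-unit increase in a predictor
    # multiplies the expected count by exp(beta), the rate ratio.
    beta = math.log(1.81)                        # coefficient implied by the quoted factor
    print(f"beta ~ {beta:.3f}")                  # ~0.593
    print(f"rate ratio = {math.exp(beta):.2f}")  # 1.81: expected claims x1.81 per unit FPR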
Article
Full-text available
Among all natural disasters, floods have historically been the primary cause of human and economic losses around the world. Improving flood risk management requires a multi-scale characterization of the hazard and associated losses, the flood loss footprint, but such information is typically not yet available in a precise and timely manner. To overcome this challenge, we propose a novel and multidisciplinary approach which relies on a computationally efficient hydrological model that simulates streamflow for scales ranging from small creeks to large rivers. We adopt a normalized index, the flood peak ratio (FPR), to characterize flood magnitude across multiple spatial scales. The simulated FPR is then shown to be a key statistical driver of the associated economic flood losses, represented by the number of insurance claims. Importantly, because it is based on a simulation procedure that utilizes generally readily available, physically based data, our flood simulation approach has the potential to be broadly utilized, even for ungauged and poorly gauged basins, thus providing the necessary information for public and private sector actors to effectively reduce flood losses and save lives.
... As such, its accurate representation and modeling are important for understanding how peak discharges are organized at different scales and how their scaling property is controlled by different catchment physical properties. CUENCAS has been extensively tested across multiple scales ranging from a single hillslope (0.01 km²) to mesoscale watersheds (20,000 km²) and has been found to produce reasonable results [30]. ...
... We systematically organized the simulation experiment into four distinctive groups in order to separately study the effect of v_r values that range from 0.1 to 2 m/s, and set k_1 = 0.3 and k_2 = -0.1. These k_1 and k_2 values are supported by field data [30,31] and, as shown in Fig. 3, are also appropriate for our study catchments. For a given v_r, these k_1 and k_2 values generally lead to v_c values that, in addition to increasing with increasing rainfall intensity, increase in the downstream direction. ...
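A minimal sketch of the power-law channel-velocity parameterization that these k_1 and k_2 values appear to belong to; the functional form v_c = v_r * Q**k1 * A**k2 is an assumption based on the CUENCAS literature the snippet cites, and the input values are illustrative.

    # Assumed form: channel velocity scales with discharge Q and drainage area A.
    def channel_velocity(v_r, q_m3s, area_km2, k1=0.3, k2=-0.1):
        """v_c = v_r * Q**k1 * A**k2 (assumed power-law parameterization)."""
        return v_r * q_m3s ** k1 * area_km2 ** k2

    # With k1 > 0, v_c rises with discharge (hence with rainfall intensity);
    # k2 = -0.1 is small enough that the downstream increase in Q dominates.
    print(channel_velocity(v_r=0.5, q_m3s=10.0, area_km2=50.0))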
... According to Askew (1997), floods cause about one-third of all deaths, one-third of all injuries, and one-third of all damages from natural disasters. Since 1990, in the United States alone, floods have caused more than 10,000 deaths, and economic losses are greater than US$470 billion (Cunha et al., 2011). In Europe, there were 157 major floods during the period 1971-1995, and the cost of damages in the period 1991-1995 was estimated at 99 billion European currency units (ECU) (European Environment Agency, 2001). ...
... Recently in 2010, a destructive flood in Pakistan affected up to 20 million people and left more than 1800 dead (Ahmad et al., 2011). The impacts of floods are expected to increase all over the world because of climate change, population growth and migration to coastal areas (Cunha et al., 2011). ...
Article
Traditional flood design methods are increasingly supplemented by risk-oriented methods based on comprehensive risk analysis. This analysis requires: (1) the estimation of the flood hazard, which represents the intensity of a flood, (2) the estimation of vulnerability, e.g. the percentage of damage to total property as a function of flood depth and duration, and (3) the estimation of the consequences of flooding, e.g. loss of life and damage to property. In this study, flood hazard maps of the Balu-Tongikhal River system within the eastern part of Dhaka City are prepared using geoprocessing tools and a hydrodynamic model. Raster-based vulnerability maps and expected damage maps for several return period floods are then produced. In comparison with classical inundation maps, these damage maps provide more information about the flooding events. Consequently, the produced maps are useful in evaluating policy alternatives and minimising property loss due to floods in the study area.
... This presents an opportunity to improve the method by incorporating learning over time: we utilize the predicted future values of state variables to essentially reweight SOWs over time according to updated beliefs about which ones are more likely. By taking advantage of new observations and learning which type of SOW is being experienced over time, policies can be modified as the underlying veil of ignorance is lifted gradually (Cunha et al., 2011; van der Pol et al., 2014). ...
Article
Full-text available
Direct policy search (DPS) is a method for identifying optimal policies (i.e., rules) for managing a system in response to changing conditions. In this article, we introduce a new adaptive way to incorporate learning into DPS. The standard DPS approach identifies “robust” policies by optimizing their average performance over a large ensemble of future states of the world (SOW). Our approach exploits information gained over time, updating prior beliefs about the kind of SOW being experienced. We first run the standard DPS approach multiple times, but with varying sets of weights applied to the SOWs when calculating average performance. Adaptive “metapolicies” then further improve performance by specifying how control of the system should switch between policies identified using different weight sets, depending on our updated beliefs about the relative likelihood of being in certain SOWs. We outline the general method and illustrate it using a case study of efficient dike heightening that simultaneously minimizes protection system costs and flood damage resulting from rising sea levels and storm surge. The solutions identified by our adaptive algorithm dominate the standard DPS on these two objectives, with an average marginal damage reduction of 35.1% for policies with similar costs; improvements are largest in SOWs with relatively lower sea level rise. We also evaluate how performance varies under different ways of implementing the algorithm, such as changing the frequency with which beliefs are updated.
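A minimal sketch of the two ingredients the abstract describes: weighted-average performance over SOWs, and a Bayesian-style reweighting of SOWs as observations arrive. The likelihood values here are hypothetical placeholders, not the study's model.

    import numpy as np

    def weighted_performance(perf_by_sow, weights):
        """Average a policy's performance over SOWs with given weights."""
        w = np.asarray(weights, dtype=float)
        return np.dot(perf_by_sow, w / w.sum())

    def update_weights(weights, likelihoods):
        """Reweight SOWs by how well each explains a new observation."""
        posterior = np.asarray(weights) * np.asarray(likelihoods)
        return posterior / posterior.sum()

    w = np.ones(3) / 3                      # prior: 3 SOWs equally likely
    w = update_weights(w, [0.1, 0.6, 0.3])  # hypothetical likelihoods of an observation
    print(w)                                # beliefs shift toward the second SOW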
... The indicators involved in the risk assessment are complex and subject to uncertainties in both time and space (Meyer et al. 2009). The main challenge is acquiring and collecting data for the selected indicators (hydrological, meteorological, LULC, etc.), as this is labour-intensive and highly time-consuming for flood-prone areas (Cunha et al. 2011; Emanuelsson et al. 2014; Mishra and Sinha 2020). ...
Article
Floods are frequently occurring events in the Assam region due to the presence of the Brahmaputra River and the heavy monsoon period. An efficient and reliable methodology is utilized to prepare a GIS-based flood risk map for the Assam region, India. At the regional and administrative level, the flood hazard index (FHI), flood vulnerability index (FVI), and flood risk index (FRI) are developed using multi-criteria decision analysis (MCDA) – analytical hierarchy process (AHP). The selected indicators define the topographical, geological, meteorological, drainage characteristics, land use land cover, and demographical features of Assam. The results show that more than 70%, 57.37%, and 50% of the total area lie in moderate to very high FHI, FVI, and FRI classes, respectively. The proposed methodology can be applied to identify high flood risk zones and to carry out effective flood risk management and mitigation strategies in vulnerable areas.
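A minimal sketch of the MCDA-AHP overlay step that such an index computation typically reduces to: normalized indicator rasters combined with AHP-derived weights into a single index. The layer names and weight values are illustrative, not the study's.

    import numpy as np

    # Hypothetical normalized (0-1) indicator rasters for a small grid.
    rainfall  = np.array([[0.9, 0.7], [0.4, 0.2]])
    elevation = np.array([[0.8, 0.5], [0.3, 0.1]])  # inverted: low ground scores high
    drainage  = np.array([[0.6, 0.6], [0.5, 0.2]])

    layers  = [rainfall, elevation, drainage]
    weights = [0.5, 0.3, 0.2]  # AHP-derived weights (placeholders; sum to 1)

    fhi = sum(w * layer for w, layer in zip(weights, layers))  # weighted overlay
    print(fhi)  # higher cells fall in higher flood hazard classes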
... In place of these hydrologic models, conceptual models are often employed, as they are very fast in simulation and thus enable the use of long time series, many meteorological scenarios, or different initial conditions (Blazkova & Beven, 2004; Cameron et al., 1999; Haberlandt & Radtke, 2014; Paquet et al., 2013; Winter et al., 2019; Zeimetz et al., 2018). They thus appear to be more suitable for PRFA than physically based models, which, although they do not need direct calibration (Cunha et al., 2011), are slow in simulation. Such conceptual models require calibration with observed data and are thus linked with uncertainty (Renard et al., 2011; Sikorska & Renard, 2017). ...
Article
Full-text available
Hydrologic models are often employed in flood risk studies to simulate possible hydrologic responses. They are, however, linked with uncertainty that is commonly represented with uncertainty intervals constructed from a simulation ensemble. This work adapts an alternative clustering-based approach to, first, learn about hydrological responses in the frequency space and, second, select an optimal number of clusters and corresponding representative parameter sets for a hydrologic model. Each cluster is described with three parameter sets, which enable percentile and prediction intervals to be constructed. Based on a small Swiss catchment with 10,000 years of daily pseudo-discharge simulations, it was found that clustering the ensemble of 1000 members into 5-7 groups is optimal to derive reliable flood prediction intervals in the frequency space. This lowers the computational costs of using a hydrological model by 98%. The developed approach is suitable for probabilistic flood risk analysis with current or future climate conditions to assess hydrologic changes.
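A minimal sketch of the clustering step the abstract describes, assuming k-means over ensemble flood frequency curves and keeping the member nearest each centroid as a representative parameter set; the array shapes, synthetic data, and cluster count are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # 1000 ensemble members x flood quantiles at 6 return periods (synthetic).
    freq_curves = np.cumsum(rng.lognormal(size=(1000, 6)), axis=1)

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(freq_curves)

    # For each cluster, keep the member closest to the centroid as representative.
    reps = [np.argmin(np.linalg.norm(freq_curves - c, axis=1))
            for c in km.cluster_centers_]
    print(reps)  # indices of representative parameter sets, one per cluster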
... The quantitative empirical models depend upon data analysis aimed at evaluating the relationship between flood occurrence and flood-causing/contributing factors, hereinafter referred to as Flood Influencing Factors (FIFs), while the qualitative approach relies upon experts' opinion of the same. Flood prediction through empirical modelling strives to connect the physical processes of flood generation, through regression equations, with parameters capable of supporting flood frequency analysis or delineating flood spatial extent (Cunha et al., 2011; Feloni et al., 2019; Skakun et al., 2014). The qualitative approach is usually subject to experts' opinion, which is why the popular MCDM, even though its weights are empirically derived, remains a semiquantitative method. ...
Article
Flood hazard mapping (FHM) has undergone significant development in terms of approach and in the capacity of its results to meet policymakers' targets for accurate prediction and identification of flood-prone or affected regions. FHM is a vital tool in flood hazard and risk management analysis. Previous review studies have focused on flood inundation modelling methods. The present study provides a thorough and current review of the physically based, empirical, and physical modelling methods in FHM. The study gives insight into the strengths, limitations, case studies, and uncertainties associated with these methods. It further discusses the approaches to handling the uncertainties related to each method and recent developments. The review is aimed at giving researchers and decision-makers an extensive understanding of the methods and of recent improvements in FHM, thereby empowering flood management agencies, decision-makers, design engineers, early warning system agencies, and responders in addressing and making accurate decisions on flood-related problems, employing best management practices in flood management, and adapting climate decision-making towards building resilient infrastructure.
... Due to climate change, population growth, and urban development, flooding will continue to occur more often, with severe consequences (Zhang et al., 2018; Sadler et al., 2017). Soil conditions and topography of an area contribute to increased flood risk (Cunha et al., 2011). Floods can also occur as a result of human-made causes. ...
Article
Natural disasters, such as flooding, can cause severe social, environmental, and economic damage to a community. Transportation infrastructure plays an essential role in flood response and recovery efforts. However, flooding may disturb road functionality and generate direct and indirect adverse impacts, including the loss of access to essential services. This paper presents a comprehensive analysis of flood impacts on road network topology and accessibility to amenities for major communities in the State of Iowa using graph-theoretic methods, including single-source shortest path analyses. We assessed the disruption of transportation networks and its effect on accessibility to critical amenities (e.g., hospitals) under 100 and 500-year flood scenarios. Our analysis methodology leads toward the development of an integrated real-time decision support system that will allow decision-makers to explore "what if" flood scenarios to identify vulnerable areas and populations in their jurisdiction. These analyses could promote possible improvements (e.g., temporary relocation of critical services) to mitigate the consequences of road system failure during flooding. Due to varying environmental conditions at specific locations and effects on road topology under flood events, the results show differential impacts in edge and node losses as well as in access to critical services. Results indicate that floods can lead to edge losses of up to 18%, and that not only large cities but also some small cities can experience significant vulnerability to flooding. Some new or reconstructed bridges have failed to operate during the analyzed flood events. Under the 100 and 500-year flood return periods, the total numbers of inaccessible bridges within the selected cities are 184 and 294, respectively. Our work found that the shortest path to the closest critical amenity under baseline conditions can flip to a second- or higher-order alternative during flooding. Many critical amenities were found to be at risk of flooding in the studied cities.
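A minimal sketch of the single-source shortest-path accessibility test the abstract describes, using networkx on a toy graph; the nodes, edge weights, and flooded edge are hypothetical.

    import networkx as nx

    g = nx.Graph()
    g.add_weighted_edges_from([("home", "a", 2), ("a", "hospital", 3),
                               ("home", "b", 4), ("b", "hospital", 4)])

    # Baseline shortest path to the critical amenity.
    print(nx.shortest_path_length(g, "home", "hospital", weight="weight"))  # 5

    g.remove_edge("a", "hospital")  # edge lost under the flood scenario
    try:
        # The path flips to the second-order alternative via node "b" (length 8).
        print(nx.shortest_path_length(g, "home", "hospital", weight="weight"))
    except nx.NetworkXNoPath:
        print("amenity inaccessible under this scenario")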
... As described, flood risk is commonly defined as the product of hazard (the physical and statistical characteristics of the floods) and the vulnerability of the exposed environment (Wisner et al. 2004; Apel et al. 2009; Muller et al. 2011). The flood risk assessment process involves hydrological and hydraulic modelling to estimate the hazard, and socioeconomic studies to estimate the vulnerability to flooding (Cunha et al. 2011; Mynett and Vojinovic 2009). The following sections describe an overview of flood hazard and vulnerability assessment. ...
Chapter
Full-text available
The present paper discusses a novel approach for flood risk assessment and mitigation in areas with cultural heritage. The ambition of the present paper is to provide a 'road map' of the holistic way of working towards climate change adaptation which was introduced in some earlier publications of the authors. It is designed to provide the reader with some basic ideas of the holistic view of flood risk, its practicalities, and supporting frameworks for implementation. The work was undertaken in the Ayutthaya heritage site in Thailand. The approach combined qualitative and quantitative data and methods. The qualitative part of the analysis involved a more active role of stakeholders, whereas the quantitative part was based on the use of numerical models and engineering principles. Based on the results obtained, this paper argues that perceptions of flood hazard and flood risk (i.e., the qualitative part of the analysis) yield a richer understanding of the problems and should be incorporated into the engineering analysis (i.e., the quantitative part) to achieve more effective climate change adaptation and flood risk mitigation. Several benefits can be achieved by applying the approach advocated in this paper. First, the combination of qualitative and quantitative data and methods opens up new views for risk analysis and the selection of measures. Second, since it is based on more active stakeholder participation, the potential for success of this novel approach should be higher than that of any of the traditional approaches. Finally, the design of measures can generate more favourable alternatives, as it employs a combination of measures that can deliver multiple benefits to stakeholders.
... Setting a reference site is fundamental in biomonitoring, as successful and manageable bioassessment encompasses more than index estimation: it also requires ranking the measures against a reference or benchmark, which calls for at least one reference site (Simpson and Norris, 2000). Establishing a reference site is an ultimate goal of environmental management, protection, and recreational efforts (Cunha et al., 2011). According to the conclusion of Fishar and Williams (2008), the Nile River lacked a reference site, as they classified the site just below the Aswan High Dam (the cleanest site in the Nile) as very polluted. ...
Article
Full-text available
Designing successful biomonitoring and bioassessment programs is an important basis for the rehabilitation and restoration of aquatic ecosystems. Developed countries have made great progress in this regard; meanwhile, developing countries still have much to do to improve the ecological status of their aquatic systems. Many shortages and deficits are common in developing countries, among them the development of comprehensive and coherent bioassessment programs. Researchers in many developing countries, as in Egypt, apply biotic indices primarily established in developed countries, and sometimes develop their own, but to date they have been unable to make noticeable progress in their water quality. This review aims to evaluate biomonitoring and bioassessment attempts and to reappraise the application of both nationally and foreign-developed biotic indices. Their deficits and shortages are clarified and discussed, with some recommendations.
... Several methods have been proposed to address nonstationarities in time series (Cunha et al., 2011; Yilmaz and Perera, 2013; Jang et al., 2015; Moon et al., 2016), and many studies have been conducted to examine changes in design rainfall depth or return levels under nonstationary conditions (Salvadori and DeMichele, 2010; Graler et al., 2013; Hassanzadeh et al., 2013; Salas and Obeysekera, 2013; Shin et al., 2014; Choi et al., 2019). Looking at the probability distributions and parameters applied in the above studies, most nonstationary frequency analyses are performed by expressing specific parameters of the Gumbel or generalized extreme value (GEV) distribution as a function of a covariate, including time. ...
Article
Full-text available
Several methods have been proposed to analyze the frequency of nonstationary anomalies. The applicability of nonstationary frequency analysis has mainly been evaluated based on the agreement between the time series data and the applied probability distribution. However, since the uncertainty in the parameter estimates of the probability distribution is the main source of uncertainty in frequency analysis, the uncertainty in the correspondence between samples and the probability distribution is inevitably large. In this study, an extreme rainfall frequency analysis is performed that fits the peak-over-threshold series to a covariate-based nonstationary generalized Pareto distribution. By quantitatively evaluating the uncertainty of daily rainfall quantile estimates at 13 sites of the Korea Meteorological Administration using a Bayesian approach, we evaluate the applicability of nonstationary frequency analysis with a focus on uncertainty. The results indicated that the inclusion of dew point temperature (DPT) or surface air temperature (SAT) generally improved the goodness of fit of the model for the observed samples. The uncertainty of the estimated rainfall quantiles was evaluated by the confidence interval of the ensemble generated by Markov chain Monte Carlo. The results showed that the width of the confidence interval of quantiles can be greatly amplified by extreme values of the covariate. To compensate for this weakness of the nonstationary model, a method of specifying a reference value of the covariate corresponding to a nonexceedance probability is proposed. The results of the study revealed that the reference covariate plays an important role in the reliability of the nonstationary model. In addition, when the reference covariate was given, it was confirmed that the uncertainty reduction in quantile estimates with increasing sample size was more pronounced in the nonstationary model. Finally, it is discussed how information on global temperature rise can be integrated with a DPT- or SAT-based nonstationary frequency analysis. Thus, a method to quantify the uncertainty of the rate of change in future quantiles due to global warming, using the rainfall quantile ensembles obtained in the uncertainty analysis, is formulated.
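A minimal sketch of a covariate-based nonstationary GPD return level of the kind the abstract describes, assuming a log-linear scale model sigma = exp(a + b*DPT) for a peak-over-threshold analysis; all parameter values are illustrative assumptions, not the study's estimates.

    import math

    def gpd_return_level(T, u, lam, sigma, xi):
        """POT return level: u + (sigma/xi) * ((lam*T)**xi - 1)."""
        return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

    def sigma_of_covariate(dpt, a=2.0, b=0.05):
        """Hypothetical log-linear scale model: sigma = exp(a + b*DPT)."""
        return math.exp(a + b * dpt)

    # 100-year daily-rainfall quantile under two dew point temperatures.
    for dpt in (20.0, 24.0):
        q = gpd_return_level(T=100, u=80.0, lam=3.0,
                             sigma=sigma_of_covariate(dpt), xi=0.1)
        print(f"DPT={dpt}: {q:.0f} mm")  # quantile grows with the covariate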
... Flood management components derived from Cunha, Krajewski, Mantilla, and Cunha (2011). ...
Conference Paper
Full-text available
Effective flood management practices lead to flood mitigation and plausibly minimal physical and other losses. The objective of this research paper is to propose a framework for flood management that reduces flood effects to manageable limits. Flood management is a complex task involving various uncertain parameters; it can be supported by systematically assembling, in chronological order within a suitable framework, the data, methods, and rigorous processes that occur in real-life situations. To achieve this objective, a total of 13 frameworks from around the world were studied extensively, based on parameters considered significant. The paper presents the various methods of flood management in a systematic and logical order, starting from the main goal and proceeding through problem identification, risk assessment, data collection, model simulation, simulation results, DSS, implementation, and finally mitigation. This paper proposes a Framework for flood management (FFM) that can be used effectively in any watershed basin, with any hydrological model, to achieve a predetermined goal of managing floods effectively.
... et al., 2013), and land use or climatic change (Cunha et al., 2011). Third, at least some "blame" for poor model performance lies with the precipitation and other meteorological inputs. ...
Article
Full-text available
Stochastic Storm Transposition (SST) involves resampling and random geospatial shifting (i.e. transposition) of observed storm events to generate hypothetical but realistic rainstorms. Though developed as a probabilistic alternative to probable maximum precipitation (PMP) and sharing PMP's storm transposition characteristic, SST can also be used in more typical rainfall frequency analysis (RFA) and flood frequency analysis (FFA) applications. This paper explains the method, discusses its origins and linkages to both PMP and RFA/FFA, and reviews the development of SST research over the past six decades. Discussion topics include the relevance of recent advances in precipitation remote sensing to frequency analysis, numerical weather prediction, and distributed rainfall-runoff modeling; uncertainty and boundedness in rainfall and floods; the flood frequency challenges posed by climatic and land use change; and the concept of multi-scale flood frequency. Recent literature has shown that process-based multiscale FFA, in which the joint distributions of flood-producing meteorological and hydrological processes are synthesized and resolved using distributed physics-based rainfall-runoff models, provides a useful framework for translating nonstationary hydroclimatic conditions into flood frequency estimates. SST pairs well with the process-based approaches. This pairing is promising because it can leverage advances from other branches of hydrology and hydrometeorology that appear to be difficult to integrate into better-known RFA and FFA approaches. The paper closes with several recommendations for future SST research and applications.
... The SCS-CN method estimates direct event runoff for a rainfall amount of known quantity using a CN value obtained from catchment characteristics and the antecedent rainfall 5 days prior to the event. The CN values for an ungauged catchment are derived from lookup tables based on the soil hydrologic group (a function of soil type, land cover, and land use) and the antecedent moisture condition (AMC) (Cunha et al., 2010; Maidment, 1993; Mishra et al., 2013; USDA, 1986; Woodward et al., 2003). Numerous studies have shown that CN values, either theoretical or those obtained using the SCS-CN guidelines, can be substantially different from values calculated empirically from observed rainfall-runoff events (Banasik and Woodward, 2010; Ebrahimian et al., 2012; Epps et al., 2013a; King and Balogh, 2008; Soulis and Valiantzas, 2013; Walega and Salata, 2019), warranting a need for their improvement (Caviedes-Voullième et al., 2012; Ponce and Hawkins, 1996; Walega and Rutkowska, 2015), including in the application to events with less than a 24-h duration (Meadows, 2016). ...
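For reference, the standard SCS-CN runoff relation the snippet refers to, as a minimal sketch in SI units with the conventional initial abstraction Ia = 0.2S:

    def scs_cn_runoff(p_mm, cn):
        """Direct runoff Q (mm) from rainfall P (mm) via the SCS-CN method."""
        s = 25400.0 / cn - 254.0  # potential maximum retention S (mm)
        ia = 0.2 * s              # initial abstraction (conventional ratio)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=100.0, cn=75))  # ~41 mm of direct runoff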
Article
Full-text available
Study region: Southeastern United States. Study focus: The objective was to evaluate the ability of two modified SCS-CN models to predict direct runoff (DRO) and peak discharge rate (Q_p) for selected storm events in three forested watersheds in the region: one low-gradient system in South Carolina, two high-gradient upland systems in North Carolina, and a mid-gradient upland system in Arkansas. New hydrological insights for the region: The Q_p values calculated by all methods were unsatisfactory when using the default pond and swamp adjustment factor (F_p) value of 0.72 recommended in the SCS TR-55 guideline, indicating that use of the default F_p value may result in erroneous Q_p estimates for forest watersheds with high retention capacities. These findings, indicating the superiority of the modified Sahu-Mishra-Eldho (SME_m) method over the original SCS method in runoff calculations and substantially lower F_p values in the associated Q_p method, are significant for hydrologists and engineers who frequently apply the methods in the design of storm water management structures, including road culverts, on forested landscapes.
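A minimal sketch of the TR-55 graphical peak-discharge equation in which the pond and swamp adjustment factor F_p discussed above enters multiplicatively (qp = qu * Am * Q * Fp, in the English units TR-55 uses); the input values here are illustrative.

    def tr55_peak_discharge(qu_csm_per_in, area_mi2, runoff_in, fp=0.72):
        """TR-55 graphical method: qp = qu * Am * Q * Fp (cfs)."""
        return qu_csm_per_in * area_mi2 * runoff_in * fp

    # Same storm with the F_p value quoted in the abstract vs. a lower,
    # watershed-specific value for a high-retention forested basin.
    print(tr55_peak_discharge(300.0, 5.0, 1.5, fp=0.72))
    print(tr55_peak_discharge(300.0, 5.0, 1.5, fp=0.30))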
... Long-term records of such variables, particularly soil moisture, are virtually nonexistent. There have been numerous examples within the FFA literature pointing to situations in which discharge-based analyses can be inferior to those based on hydrologic modeling, including cases of basin storage "discontinuities" (Rogger et al., 2012), reservoirs (Ayalew et al., 2013), and land use change (Cunha et al., 2011). ...
Article
Full-text available
Floods are the product of complex interactions among processes including precipitation, soil moisture, and watershed morphology. Conventional flood frequency analysis (FFA) methods such as design storms and discharge-based statistical methods offer few insights into these process interactions and how they “shape” the probability distributions of floods. Understanding and projecting flood frequency in conditions of nonstationary hydroclimate and land use require deeper understanding of these processes, some or all of which may be changing in ways that will be undersampled in observational records. This study presents an alternative “process-based” FFA approach that uses stochastic storm transposition to generate large numbers of realistic rainstorm “scenarios” based on relatively short rainfall remote sensing records. Long-term continuous hydrologic model simulations are used to derive seasonally varying distributions of watershed antecedent conditions. We couple rainstorm scenarios with seasonally appropriate antecedent conditions to simulate flood frequency. The methodology is applied to the 4002 km² Turkey River watershed in the Midwestern United States, which is undergoing significant climatic and hydrologic change. We show that, using only 15 years of rainfall records, our methodology can produce accurate estimates of “present-day” flood frequency. We found that shifts in the seasonality of soil moisture, snow, and extreme rainfall in the Turkey River exert important controls on flood frequency. We also demonstrate that process-based techniques may be prone to errors due to inadequate representation of specific seasonal processes within hydrologic models. If such mistakes are avoided, however, process-based approaches can provide a useful pathway toward understanding current and future flood frequency in nonstationary conditions and thus be valuable for supplementing existing FFA practices.
... Floods represent about 40% of the natural disasters in the world (Ohl & Tapsell, 2000) and had killed the most people as of 2016 (Guha-Sapir, Hoyois, & Below, 2011). In addition, extraordinary river floods have increased considerably in recent years, with associated economic losses (Cunha & Krajewski, 2011; Dottori et al., 2018; Gilles, Young, & Schroeder, 2012). Hydraulic models allow analysing and mapping floods, which helps to prevent, mitigate, and reduce flood risks and their consequences. ...
Article
Full-text available
Floods represent a severe cause of deaths and economic loss. In order to prevent, mitigate, and reduce flood risks and their consequences, hydraulic models allow analysing and mapping floods. The results of an appropriate model that works under local conditions are a valuable tool for local governments, leading to sustainable management of floodplains. Around the world, high-mountain rivers have been poorly modelled; their orography and data scarcity present an extra research difficulty. Considering that all one-dimensional models assume that the river bed slope is small, this study evaluated two widely applied one-dimensional models, Mike11 and HEC-RAS, for modelling a high-mountain river. Their best configuration under complex topographical conditions and their potential use were assessed by calibration and validation of the models. We found that the HEC-RAS model was not able to reach a stable solution for the hydrodynamic modelling of the river, while Mike11 yielded stable results. Furthermore, the validation of the Mike11 model showed good performance. This study sets a precedent in the 1D modelling of high-mountain rivers with data scarcity.
... Long-term records of such variables, particularly soil moisture, are virtually nonexistent. There have been numerous examples within the FFA literature pointing to situations in which discharge-based analyses are inferior to others based on hydrologic modeling, including cases of basin storage "discontinuities" (Rogger et al., 2012), reservoirs (Ayalew et al., 2013), and land use change (Cunha et al., 2011). ...
Article
Full-text available
Floods are the product of complex interactions of processes including rainfall, soil moisture, and watershed morphology. Conventional flood frequency analysis (FFA) methods such as design storms and discharge-based statistical methods offer few insights into process interactions and how they shape the probability distributions of floods. Understanding and projecting flood frequency in conditions of nonstationary hydroclimate and land use require deeper understanding of these processes, some or all of which may be changing in ways that will be undersampled in observational records. This study presents an alternative process-based FFA approach that uses stochastic storm transposition to generate large numbers of realistic rainstorm scenarios based on relatively short rainfall remote sensing records. Long-term continuous hydrologic model simulations are used to derive seasonally varying distributions of watershed antecedent conditions. We couple rainstorm scenarios with seasonally appropriate antecedent conditions to simulate flood frequency. The methodology is applied to the Turkey River in the Midwestern United States, a watershed that is undergoing significant climatic and hydrologic change. We show that, using only 15 years of rainfall records, our methodology can produce more accurate estimates of present-day flood frequency than is possible using longer discharge or rainfall records. We found that shifts in the seasonality of soil moisture conditions and extreme rainfall in the Turkey River exert important controls on flood frequency. We also demonstrate that process-based techniques may be prone to errors due to inadequate representation of specific seasonal processes within hydrologic models. Such mistakes are avoidable, however, and our approach may provide a clearer pathway toward understanding current and future flood frequency in nonstationary conditions compared with more conventional methods.
... Mathematically, risk can be defined as the product of hazard and vulnerability (or negative consequences) (Li et al. 2012; UN 1992). This definition has been used extensively in the academic literature for flood risk assessment (Aerts et al. 2013; Cunha et al. 2011; Grossi 2005; Kron 2005; Meyer et al. 2009). The USACE conceptualized flood risk as a function of hazard, performance, exposure, vulnerability, and consequences (USACE 2017b). ...
Article
Full-text available
Flooding is one of the most devastating and most deadly natural disasters. Every year, millions of people are affected by flooding worldwide, with massive economic losses that have averaged around $50 billion annually. Over the years, the frequency of floods and the resulting economic losses have increased considerably around the world. This trend is projected to continue in the coming years due to factors such as population growth, increasing urbanization, infrastructure decay, and the potential impact of climate change. In recent years, there have been attempts by researchers to incorporate these factors in flood risk modeling. However, several challenges still exist. This paper aims to review current approaches to modeling the potential impact of climate change, population growth, increasing urbanization, and infrastructure decay on flood risk, with the goal of identifying future research needs. We also examine the current approach to flood risk communication and public risk perception, with the aim of identifying the challenges to effective risk communication. Issues identified for future research include uncertainty propagation, decoupling changes in flood patterns caused by climate change from those caused by other factors such as population and economic growth, consideration of infrastructure decay in future flood risk modeling, and consideration of intangible benefits of risk mitigation.
... For example, one of the most devastating floods occurred in China in 1941, caused by the overflowing of the Huang He (Yellow River). It completely inundated eighty-eight thousand square kilometres of land, caused four million deaths, and left eighty million people homeless [2]. Since 1990, floods have caused more than 10,000 deaths, and economic losses greater than US$70 billion, in the United States alone. ...
Conference Paper
Full-text available
Natural calamities such as flooding, volcanic eruptions, and tornadoes hamper our daily life and cause much suffering. Floods are among the most catastrophic of natural calamities. Assessing flood risk helps us take necessary steps and save human lives. Several heterogeneous factors are used to assess flood risk to the livelihood of an area, and several types of uncertainty can be associated with each factor. In this paper, we propose a web-based flood risk assessment expert system that combines a belief rule base with the capability of reading data and generating web-based output. This paper also introduces a generic RESTful API which can be used without writing a belief-rule-based expert system from scratch. This expert system will facilitate the monitoring of the various flood risk factors contributing to the flood risk to the livelihood of an area. Eventually, decision makers should be able to take measures to control those factors and reduce the risk of flooding in an area. Data for the expert system were collected from a case study area by conducting interviews.
... With increasing urbanization, a growth in flood incidence has been observed, and this, together with population increase, has led to greater numbers of deaths and more property destruction (Cunha et al., 2011; National Research Council, 2000). With these increased dangers from urban flooding, it is important to review and understand what has generated increased flooding in the urban environment. One of the most important causes of flooding has always been understood to be the increase in impervious surfaces due to the process of urbanization (N ...
Thesis
Full-text available
Observations of the Los Angeles River Watershed over the past 83 years (1930-2012) provide insight into the impact of changes in the impermeable surface coefficient (ISC) on urban discharge. The California ISC developed by California's Office of Environmental Health Hazard Assessment (OEHHA) was employed to determine the ISC value for each parcel within the Los Angeles River Watershed and to associate it with the change in discharge over the 83 years. The average ISC value increased by over 225 percent, from 0.193 to 0.438. Discharge of the Los Angeles River has exhibited an increasing trend, yet precipitation did not show a significant influence on discharge. Analyzing the connection between ISC and discharge resulted in a Kendall tau correlation of τ = 0.460 (p < 0.05), depicting a strong positive influence of ISC on discharge from the Los Angeles River. The results support the understanding that urbanization leads to increased discharge of urban rivers.
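A minimal sketch of the Kendall tau test used in the thesis, applied to hypothetical ISC/discharge pairs (the values below are stand-ins, not the study's data):

    from scipy.stats import kendalltau

    isc       = [0.19, 0.24, 0.30, 0.36, 0.41, 0.44]  # hypothetical yearly ISC
    discharge = [210, 250, 240, 310, 330, 360]        # hypothetical discharge (cfs)

    tau, p_value = kendalltau(isc, discharge)
    print(f"tau = {tau:.3f}, p = {p_value:.3f}")  # positive tau: concordant trend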
... Meanwhile, as discussed in Wright et al. (2014b), combining SST (or other rainfall-based approaches, e.g. Cunha et al., 2011) with a distributed hazard model allows the analyst to incorporate changes in land use and land cover into nonstationary hazard estimates. ...
Article
RainyDay is a Python-based platform that couples rainfall remote sensing data with Stochastic Storm Transposition (SST) for modeling rainfall-driven hazards such as floods and landslides. SST effectively lengthens the extreme rainfall record through temporal resampling and spatial transposition of observed storms from the surrounding region to create many extreme rainfall scenarios. Intensity-Duration-Frequency (IDF) curves are often used for hazard modeling but require long records to describe the distribution of rainfall depth and duration and do not provide information regarding rainfall space-time structure, limiting their usefulness to small scales. In contrast, RainyDay can be used for many hazard applications with 1–2 decades of data, and output rainfall scenarios incorporate detailed space-time structure from remote sensing. Thanks to global satellite coverage, RainyDay can be used in inaccessible areas and developing countries lacking ground measurements, though results are impacted by remote sensing errors. RainyDay can be useful for hazard modeling under nonstationary conditions.
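A minimal sketch of the SST resampling loop the abstract describes (not RainyDay's actual API): storms are resampled from a catalog, a random transposition factor stands in for how much of each shifted storm falls over the basin, and annual maxima from many synthetic years give an empirical frequency curve. All numbers are synthetic stand-ins.

    import numpy as np

    rng = np.random.default_rng(1)
    catalog = rng.gamma(2.0, 25.0, size=200)  # stand-in storm-catalog basin depths (mm)

    def synthetic_annual_max(storms_per_year=10):
        """One synthetic year: resample storms, apply a random transposition."""
        picks = rng.choice(catalog, size=storms_per_year)
        # Crude stand-in for transposition: shifting the storm core off the
        # basin reduces the basin-average depth by a random factor.
        transposed = picks * rng.uniform(0.5, 1.0, size=storms_per_year)
        return transposed.max()

    annual_maxima = np.sort([synthetic_annual_max() for _ in range(1000)])
    print(annual_maxima[-10])  # ~100-year rainfall from 1000 synthetic years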
... In the absence of more sophisticated physically based approaches, which rely on detailed information on each specific cross-section that is rarely available due to limited field surveys, the literature suggests estimating the bankfull flow as the flood having a 1.5- to 2-year return period (Carpenter et al., 1999; Reed et al., 2007; Harman et al., 2008; Wilkerson, 2008; Hapuarachchi et al., 2011; Cunha et al., 2011; Ward et al., 2013), and a flow that is slightly higher than bankfull may be identified with the 2-year return period flood (Carpenter et al., 1999; Reed et al., 2007). ...
Article
Full-text available
In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment based on a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Of course, such a reduction comes at the expense of increasing underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
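A minimal sketch of the asymmetric squared-error function the abstract describes, weighting overprediction errors more heavily than underprediction errors; the weight values are illustrative.

    import numpy as np

    def asymmetric_sq_error(pred, obs, w_over=3.0, w_under=1.0):
        """Squared error weighted more heavily when pred > obs (overprediction)."""
        err = np.asarray(pred) - np.asarray(obs)
        w = np.where(err > 0, w_over, w_under)
        return np.mean(w * err ** 2)

    obs = np.array([10.0, 12.0, 8.0])
    print(asymmetric_sq_error([12.0, 11.0, 8.0], obs))  # overestimate costs ~4.33
    print(asymmetric_sq_error([ 8.0, 11.0, 8.0], obs))  # same-size underestimate ~1.67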
... Cunha et al. [7] proposed a calibration-free framework to analyze changes in flood hazard across multiple scales in a river network under nonstationary conditions. The framework combines hydrological and hydraulic models with urban development information to show altered flood hazard. ...
Article
Full-text available
Jungrimsa's Pagoda (built in the 5th century) is one of the major heritage structures of the Baekje dynasty (18 BC-AD 660). In this study, the flood hazard of Jungrimsa's Pagoda is assessed using flood hazard assessment tools and scenarios. The tools comprise a rainfall-runoff model (HEC-HMS), a river hydraulic model (HEC-RAS), a flood inundation model, and a geographical information system. Flood hazard is indicated with the expected inundation maps of 12 flood hazard scenarios. The most severe scenario combines high tide at the estuary barrage on the West Sea, maximum discharge from Daecheong Dam (located on the upper Geum River), and a breach of the embankment in Buyeo. The inundation risk was estimated for a breach of the Geum River embankment under these extreme scenarios. The results of this flood hazard assessment show, however, that Jungrimsa's Pagoda is safe under all flood hazard scenarios. This study suggests a direction for the flood hazard management of architectural heritage sites based on scenario analysis.
... As described, flood risk is commonly defined as the product of hazard (the physical and statistical characteristics of the floods) and the vulnerability of the exposed environment (Wisner et al. 2004; Apel et al. 2009; Muller et al. 2011). The flood risk assessment process involves hydrological and hydraulic modelling to estimate the hazard, and socioeconomic studies to estimate the vulnerability to flooding (Cunha et al. 2011; Mynett and Vojinovic 2009). The following sections describe an overview of flood hazard and vulnerability assessment. ...
Article
Full-text available
This research proposes a holistic approach to flood risk assessment that combines quantitative and qualitative aspects. This approach was developed and applied in the Ayutthaya region in Thailand, which is a UNESCO World Heritage Site. First, flood risk was assessed traditionally as a product of hazard and vulnerability. Both qualitative and quantitative data were gathered from publicly available sources and through interviews, questionnaires, and focus group discussions to assess the vulnerability, using various weights for the different vulnerability dimensions. The hazard was assessed using a coupled 1D-2D flood model, and the resulting vulnerability and risk were mapped. Second, an alternative flood risk map was produced based on group mapping exercises with local residents, which captures the level of perceived risk. The traditional flood risk map was adjusted by varying the vulnerability weights to better match the perceived risk map. The analysis of these two maps revealed that the two approaches to flood risk assessment can be used effectively to gain different insights into the phenomena, and as such, both should be used in flood risk management planning.
... It is computationally efficient, the required inputs are generally available, and it relates the runoff to the soil type, land use, and management practices. To derive CN values (valid for storm durations shorter than 1 day) for an ungauged catchment, the NRCS provides tables based on the soil type, land cover and land use, hydrological conditions, and antecedent moisture condition (AMC) (Cunha et al. 2011; Maidment 1993; Mishra et al. 2013). ...
Article
Full-text available
The aim of this study was to evaluate the usefulness of modified methods, developed on the basis of the NRCS-CN method, in determining the size of the effective rainfall (direct runoff). The analyses were performed for the mountain catchment of the Kamienica river, a right-hand tributary of the Dunajec. The amount of direct runoff was calculated using the following methods: (1) the original NRCS-CN model, (2) the Mishra-Singh model (MS model), (3) the Sahu-Mishra-Eldho model (SME model), (4) the Sahu 1-p model, (5) the Sahu 3-p model, and (6) the Q_base model. The study results indicated that the amount of direct runoff determined on the basis of the original NRCS-CN method may differ significantly from the actually observed values. The best results were achieved when the direct runoff was determined using the SME and Sahu 3-p models.
... 1) extraction of the basin river network using a 30-m DEM (USGS) and CUENCAS, a geographical information system developed for river network analyses (Mantilla and Gupta 2005) and hydrological simulation (Cunha et al. 2011, 2012); 2) estimation of hourly rain-rate time series for each hillslope using the hillslope mask and different rainfall datasets (Stage IV, IFC-SP, and Merged-DP); ...
Article
The NEXRAD program has recently upgraded the WSR-88D network observational capability with dual polarization (DP). In this study, DP quantitative precipitation estimates (QPEs) provided by the current version of the NWS system are evaluated using a dense rain gauge network and two other single-polarization (SP) rainfall products. The analyses are performed for the period and spatial domain of the Iowa Flood Studies (IFloodS) campaign. It is demonstrated that the current version (2014) of QPE from DP is not superior to that from SP, mainly because DP QPE equations introduce larger bias than the conventional rainfall-reflectivity [i.e., R(Z)] relationship for some hydrometeor types. Moreover, since the QPE algorithm is based on hydrometeor type, abrupt transitions in the phase of hydrometeors introduce errors in QPE with surprising variation in space that cannot be easily corrected using rain gauge data. In addition, the propagation of QPE uncertainties across multiple hydrological scales is investigated using a diagnostic framework. The proposed method allows us to quantify QPE uncertainties at hydrologically relevant scales and provides information for the evaluation of hydrological studies forced by these rainfall datasets.
... Ivanov et al., 2004; Martina and Entekhabi, 2006; Noto et al., 2008), with specific regard to the application of the width function (WF) geomorphic approach (Rodriguez-Iturbe et al., 1994; Rodríguez-Iturbe and Rinaldo, 1997), also considering varying geomorphoclimatic (Graf, 1977; Rodriguez-Iturbe and Valdes, 1979; Rodriguez-Iturbe et al., 1982; Naden, 1992; Di Lazzaro and Volpi, 2011; Volpi et al., 2013) and urban settings (Veitzer and Gupta, 2001; Smith et al., 2002; Ogden et al., 2011), to provide flood risk managers and decision makers with an accurate informative framework for developing safe river basin and urban area development and management plans (e.g. Cunha et al., 2011; Faulkner et al., 2012; Pedersen et al., 2012; Ciervo et al., 2014; Emanuelsson et al., 2014). ...
Article
The small ungauged basins of the highly urbanized area of the city of Rome are often subject to critical flood conditions owing to significant human-made transformations. In this work the EBA4SUB framework, implementing the hydrogeomorphic WFIUH rainfall-runoff model and using the DEM, land use, and synthetic precipitation as its main input information, is applied to evaluate extreme hydrologic forcing conditions at the basin scale. The goal is to understand the rationale behind the observed increasing frequency of local urban inundations, which are also observed in the uplands. Results present the impact of urbanization as expressed by the runoff coefficient, the artificial drainage (affected by paved surfaces and a dramatic number of river-road intersections, i.e. culverts), and the non-natural upstream-to-downstream scaling behavior of hydrologic parameters, in particular the peak discharge per unit drainage area.
... Furthermore, the flood-generating processes may have changed or may be anticipated to change in the future (e.g. land use change and channelisation) so that the statistics of historical floods cannot easily be used to represent future flood risk (Cunha et al., 2011). ...
Article
Coastal floods can result from multiple forcing variables, such as rainfall and storm tides that are simultaneously extreme. In these situations, flood risk estimation methods must account for the joint dependence between the forcing variables. The design variable method is a statistically rigorous, flexible and efficient approach for evaluating the joint probability distribution. However, in practice a number of factors need to be considered in order to produce accurate estimates of flood risk; these include data selection and pairing, temporal variability of dependence, dependence parameter inference and bias, the estimation of confidence intervals, and the incorporation of possible time-varying changes to each of the forcing variables due to climate change. This paper addresses these factors using a case study from Perth, Western Australia, to show how the design variable method can be applied to coastal flood risk under historical and future climates.
... Climate-related nonstationarity in a time series of precipitation extremes can be associated with climate variability as well as climate change [39]. There are different parametric and non-parametric methods to address nonstationarity in time series [40-48]. In this paper, we assess the effect of nonstationarity on IDF curves and the occurrence of extremes. ...
Article
Full-text available
Extreme climatic events are growing more severe and frequent, calling into question how prepared our infrastructure is to deal with these changes. Current infrastructure design is primarily based on precipitation Intensity-Duration-Frequency (IDF) curves with the so-called stationary assumption, meaning extremes will not vary significantly over time. However, climate change is expected to alter climatic extremes, a concept termed nonstationarity. Here we show that given nonstationarity, current IDF curves can substantially underestimate precipitation extremes and thus, they may not be suitable for infrastructure design in a changing climate. We show that a stationary climate assumption may lead to underestimation of extreme precipitation by as much as 60%, which increases the flood risk and failure risk in infrastructure systems. We then present a generalized framework for estimating nonstationary IDF curves and their uncertainties using Bayesian inference. The methodology can potentially be integrated in future design concepts.
... In general, hydrological models that use a semi-discrete approximation of the flow transport of the Saint-Venant equations or the Muskingum method fall in this category (Kampf and Burges, 2007). This type of application is relevant to the hydrology of floods (see Mantilla and Gupta (2005); Mandapaka et al. (2009); Mantilla et al. (2011); Cunha et al. (2011)), but the numerical methods developed by Small et al. (2012) are more general and can be extended to other types of physical problems where the connectivity between equations is determined by a directed tree structure. ...
Article
Full-text available
A recently developed parallel asynchronous solver for systems of ordinary differential equations (ODEs) is used to simulate flows along the channels in a river network. In our model, precipitation is applied over the hillslopes adjacent to the river network links, and water movement from hillslope to link and along the river network is represented as a system of ODEs. The numerical solver is based on dense-output Runge-Kutta methods that allow for asynchronous integration. A static partition method is used to distribute the workload among different processes, enabling a parallel implementation that capitalizes on a distributed memory system. Communication between processes is performed asynchronously. We illustrate the solver capabilities by integrating flow transport equations for a ~32,000 km² river basin subdivided into 574,000 sub-watersheds that are interconnected by the river network. We show that the runtime for an eight-month-long simulation forced by 1-km resolution NEXRAD rainfall is completed in under 4 minutes using 64 computing nodes. In addition, we include equations to simulate small reservoirs spread throughout the river network and estimate changes in hydrographs at multiple locations. Our results provide a firm theoretical basis for the concept of distributed flood control systems.
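A minimal sketch of the kind of ODE system the abstract describes, assuming simple linear-reservoir routing on a three-link river tree; the paper's solver is a parallel asynchronous Runge-Kutta scheme, whereas this sketch uses scipy's standard serial integrator.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Tiny river tree: links 0 and 1 drain into link 2. dS_i/dt = inflow_i - k*S_i.
    k = 0.5                              # linear-reservoir rate constant (1/h)
    runoff = np.array([2.0, 1.0, 0.0])   # hillslope runoff into each link (m^3/s)

    def rhs(t, s):
        out = k * s                      # outflow of each link
        inflow = runoff.copy()
        inflow[2] += out[0] + out[1]     # upstream links feed the downstream link
        return inflow - out

    sol = solve_ivp(rhs, (0.0, 24.0), y0=[0.0, 0.0, 0.0], dense_output=True)
    print(k * sol.y[:, -1])  # link outflows after 24 h approach [2, 1, 3]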
... Likewise, the economic losses caused by extraordinary river floods have increased considerably in recent years (Douben, 2006; Mosquera-Machado and Ahmad, 2007; Cunha et al., 2011; Erwann and Howard, 2011; Gilles et al., 2012). The increase in flood risk has been attributed mainly to human activities, new human settlements that include protection measures (e.g. ...
Article
Full-text available
Flood hazard mapping in mountain rivers: a case study of the Burgay River. HEC-RAS, a one-dimensional hydraulic model, was used to simulate and map floods along a 10 km stretch of the Burgay river. Analysis of the results reveals that the model is capable of simulating the flood and inundation situation along rivers in the Andean region, notwithstanding scarcity of information. Local governments (e.g. municipalities) can use the flood hazard zoning maps for the sustainable management of alluvial plains, through the planning and implementation of structural or non-structural measures (e.g. land use planning) considering the physical conditions of the riverbanks.
... For the Xi River, Ling River, You River, Chishui River, Yuxian River and Shidi River sub-basins, there were no hydrological stations, and the calibration and validation of rainfall-runoff in these sub-basins could not be carried out. Due to the similarity of topography and land cover, the calibrated parameters of the model in the Ba River basin were transferred to the six sub-basins that do not have any stations (Cunha et al., 2011). The areal average rainfall in each sub-basin was used as the input to the model, and the streamflow in each sub-basin was obtained on the basis of the transferred parameters. ...
Article
As the largest tributary of the Yellow River and the 'Mother River' of the Guanzhong Plain in Shaanxi Province, the Wei River plays a major role in the development of West China. The lower reach of the Wei River is the key economic district of this region. In this study, TOPMODEL with infiltration-excess runoff was modified and applied to simulate the inflow process between the Xianyang and Huaxian (outlet of the study area) hydrological stations for an unclosed river basin with a complex river system and ungauged tributaries. At the Qinduzhen and Muduwang hydrological stations, the Nash-Sutcliffe efficiencies (NE) were between 0.70 and 0.84, and the relative errors (RE) were between -2% and 12%. Integrated with the observed streamflow in the Jing River and Shichuan River and the segmented Muskingum flow routing method, streamflow at Huaxian was simulated. Comparison of the simulation results with and without the south inflow showed that the simulated streamflow at the Lintong hydrological station with the south inflow between Xianyang and Lintong was closer to the observed streamflow than the one without the south inflow. However, the difference between the simulated streamflow at Huaxian with and without the south inflow was insignificant. This demonstrates that the streamflow simulation at Lintong is important for reliably simulating flows in the study area.
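The Muskingum routing mentioned above reduces, for a single reach, to the standard three-coefficient recursion; a minimal sketch follows, with K, X, and the time step chosen arbitrarily for illustration:

```python
import numpy as np

def muskingum(inflow, K=12.0, X=0.2, dt=6.0):
    """Route a hydrograph through one reach with the Muskingum method.

    K (storage constant) and dt share the same time unit; 0 <= X <= 0.5.
    """
    d = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / d
    c1 = (dt + 2 * K * X) / d
    c2 = (2 * K * (1 - X) - dt) / d
    out = np.empty_like(inflow)
    out[0] = inflow[0]                    # assume an initial steady state
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

inflow = np.array([10, 30, 70, 50, 30, 20, 15, 12, 10], dtype=float)
print(muskingum(inflow))                  # attenuated, delayed outflow hydrograph
```

For numerical stability the time step should satisfy 2KX <= dt <= 2K(1 - X), which the values above respect.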
... When detailed information is available on catchment topography, land use and soil type, river bed geometry, hydraulic regulation, and floodplain conditions, physically based models can produce accurate estimates of flooding from rainfall input. In spite of their cost, such models are being increasingly used by water authorities for flood management and forecasting of specific river basins and subbasins [Willems et al., 2002; Wermer, 2004; Pender and Neelz, 2007; Cunha et al., 2011; Van Steenbergen et al., 2012]. ...
Article
We develop a probabilistic model to estimate the rate of flood-induced losses for a set of properties distributed over a large geographical region (e.g., a portfolio of insured properties within a country). The use of detailed physically based models over large areas becomes difficult due to the vast amount of data needed and the high implementation cost. The proposed model allows one to incorporate results from such detailed models but can also be used in regions that have not been studied in much detail. Minimal required information includes the rate and spatial extent of severe precipitation, the topography and river network from which regions at risk of flooding can be identified, and information on historical floods with an approximate delineation of the flooded area and associated aggregate losses for at least a few major events. An application to river flood losses for residential buildings in Belgium is presented.
... It is likely that uncertainty about the structural increase in extreme water level observations and the related flood risk under climate change will be reduced over time as time series grow longer, as 'low-data' methods are developed and as model uncertainties in climate and hydrological models are reduced (cf. Wagener et al., 2003; IPCC, 2007; Lenderink et al., 2007; Cunha et al., 2011). We refer to the event of obtaining new information as learning. ...
Article
Water level extremes for seas and rivers are crucial to determine optimal dike heights. Future development in extremes under climate change is, however, uncertain. In this paper, we explore impacts of uncertainty and learning about increasing water levels on dike investment. We extend previous work in which a constant rate of structural water level increase is assumed. We introduce a probability distribution for this rate and study the impact of learning about this rate. We model learning as a single stochastic event where full information becomes available. Numerical solutions are obtained with dynamic programming. We find that the expected value of information can be substantial. Before information arrives, investment size is reduced as compared with the benchmark without learning, but investment frequency may be increased. The impact of learning on the initial investment strategy, however, is small as compared with the impact of uncertainty about increasing water levels by itself.
... More flexible assessment methods that can manage uncertainty, such as real options analysis (Dobes 2008; Ranger et al. 2010), robust decision-making (Lempert and Collins 2007) and pathways assessments (Haasnoot 2012), have been applied to the assessment of large infrastructure projects, but have yet to be applied in New Zealand due to resourcing and capacity constraints. While simplified methods are beginning to emerge (Cunha et al. 2011; Hall et al. 2012; Lawrence et al. 2013), further development of simple and less resource-intensive methods is required. ...
... Physics-based distributed rainfall-runoff modeling approaches, such as that described in Cunha et al. [2011] and the SST-based framework presented here, have advantages over statistical methods, particularly in nonstationary conditions and in heterogeneous settings such as urban watersheds. Namely, the modeler can incorporate current or projected land use or rainfall changes in order to simulate flooding along the drainage network. ...
Article
Flooding is the product of complex interactions among spatially and temporally varying rainfall, heterogeneous land surface properties, and drainage network structure. Conventional approaches to flood frequency analysis rely on assumptions regarding these interactions across a range of scales. The impacts of these assumptions on flood risk estimates are poorly understood. In this study, we present an alternative flood frequency analysis framework based on stochastic storm transposition (SST). We use SST to synthesize long records of rainfall over the Charlotte, North Carolina, USA metropolitan area by “reshuffling” radar rainfall fields, within a probabilistic framework, from a 10-year (2001–2010) high-resolution (15 min, 1 km²) radar data set. We use these resampled fields to drive a physics-based distributed hydrologic model for a heavily urbanized watershed in Charlotte. The approach makes it possible to estimate discharge return periods for all points along the drainage network without the assumptions regarding rainfall structure and its interactions with watershed features that are required by conventional methods. We develop discharge estimates for return periods from 10 to 1000 years for a range of watershed scales up to 110 km². SST reveals that flood risk in the larger subwatersheds is dominated by tropical storms, while organized thunderstorm systems dominate flood risk in the smaller subwatersheds. We contrast these analyses with examples of potential problems that can arise from conventional frequency analysis approaches. SST provides an approach for examining the spatial extent of flooding and for incorporating nonstationarities in rainfall or land use into flood risk estimates.
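A heavily simplified sketch of the SST resampling idea (a storm catalog reduced to scalar depths, Poisson storm counts, and a random areal-coverage factor standing in for transposition; all parameters invented) shows how annual maxima and empirical return periods emerge from reshuffling:

```python
import numpy as np

rng = np.random.default_rng(1)
catalog = rng.gamma(shape=2.0, scale=30.0, size=200)  # stand-in storm-total depths (mm)
lam = 6.0            # mean number of transposable storms per synthetic year
n_years = 10_000

annual_max = np.zeros(n_years)
for yr in range(n_years):
    n = rng.poisson(lam)
    if n == 0:
        continue
    storms = rng.choice(catalog, size=n)      # "reshuffle" storms from the catalog
    frac = rng.uniform(0.2, 1.0, size=n)      # basin coverage after random transposition
    annual_max[yr] = (storms * frac).max()

# Empirical return periods from ranked synthetic annual maxima
sorted_max = np.sort(annual_max)[::-1]
T = (n_years + 1) / (np.arange(n_years) + 1)
for target in (10, 100, 1000):
    print(f"{target}-yr basin rainfall ~ {sorted_max[np.argmin(np.abs(T - target))]:.0f} mm")
```

The real method transposes full space-time radar fields and routes them through a hydrologic model; the sketch only conveys how long synthetic records are built from a short observed catalog.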
... In general, hydrological models that use a semi-discrete approximation of flow transport of the Saint-Venant equations or the Muskingum method fall into this category [3]. Although this application is of principal importance to the authors (see [1,4-6]), the results in this paper are more general and extend to other types of physical problems where the connectivity between equations is determined by a directed tree structure. Thus, the problem and algorithms will make no assumptions about the underlying application, except for the presence of discontinuities. ...
Article
Floods persist as a recurring and daunting peril in the Brahmaputra plain of Assam. Notwithstanding advancements, Bongaigaon remains a highly flood-afflicted district in the lower part of this region, suffering significant damage to both lives and property almost every year. Hence, the delineation of precise and reliable flood risk susceptibility zones within the district constitutes the foremost concern of the study. The present work considered a total of sixteen multi-collinearity-free parameters, integrated with the GIS-based Multi-Criteria Decision Making-Analytical Hierarchy Process (MCDM-AHP) model, for identifying potential flood hazard zones (FHZ), flood vulnerability zones (FVZ), and flood risk zones (FRZ) for the region. The results revealed that over 28% of the district's total area falls under the high to very high flood risk zone. Srijangram circle contains the largest flood risk zone, at 343.19 sq. km. The FHZ map of the district demonstrated reliability exceeding 90% in ROC-AUC and below 40% in MSE and RMSE. Additionally, sensitivity analyses reveal the role of each indicator in the predictive model, addressing a gap in studies of the region. Moreover, a multivariate correlation statistic is used to examine the potential risk zones and temporal flood effects on different Revenue Circles (RC), showing R² over 0.6 in each category. The model yields robust and sensible findings that can help fortify sustainable flood management strategies to mitigate risks at different levels. The approach adopted here holds potential for current and future research in similar domains, and the study may provide valuable insights for decision-makers, planners, administrators, and developers working in this region.
Article
Full-text available
In this study, non-stationary frequency analysis was carried out to capture non-stationarity of extreme rainfall driven by climate change, by expressing the scale parameter of the two-parameter Gumbel distribution (GUM) as a function of a co-variate. The surface air temperature (SAT) or dew-point temperature (DPT) is applied as the co-variate. The optimal model was selected by comparing AIC values, and 17 of 60 sites were found to be suitable for the non-stationary GUM model. In addition, SAT was chosen as the more appropriate co-variate at 13 of the 17 sites. Estimating changes in design rainfall depth under future SAT rises at these 13 sites indicates increases of about 10% by 2040 and 18% by 2070. HIGHLIGHTS: Rainfall extremes in Korea show non-stationary behaviour with surface air temperature or dew-point temperature as a co-variate. Non-stationary frequency analysis using SAT or DPT can be a reasonable approach for analyzing future rainfall extremes.
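A minimal sketch of the co-variate construction (synthetic data; the log-link for the scale parameter and all parameter values are assumptions): fit a stationary Gumbel and a SAT-co-variate Gumbel by maximum likelihood and compare AIC:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 60
sat = 12 + 0.06 * np.arange(n) + rng.normal(0, 0.5, n)   # synthetic SAT series (degC)
x = rng.gumbel(loc=80, scale=np.exp(1.0 + 0.2 * sat))    # extremes whose scale grows with SAT

def nll(p):
    mu, a, *rest = p                    # 2 params: stationary; 3 params: SAT co-variate
    b = rest[0] if rest else 0.0
    sigma = np.exp(a + b * sat)         # log-link keeps the scale parameter positive
    z = (x - mu) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

stat = minimize(nll, [80.0, 3.5], method="Nelder-Mead")
covar = minimize(nll, [80.0, 1.0, 0.2], method="Nelder-Mead")

aic = lambda fit, k: 2 * k + 2 * fit.fun     # lower AIC -> preferred model
print("AIC, stationary GUM:    ", aic(stat, 2))
print("AIC, SAT co-variate GUM:", aic(covar, 3))
```

The extra parameter is only justified when the AIC reduction outweighs the added complexity, which mirrors the site-by-site selection reported above.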
Article
Flood hydrologic response is influenced by rainfall structure (i.e., variability in space and time). How this structure shapes flood frequency is unknown, and flood frequency analyses typically neglect or simplify potentially important aspects of rainfall variability. This study seeks to understand how rainfall structure impacts flood frequency. We use stochastic storm transposition combined with a 15-year record of hourly, 4-km² radar rainfall to generate 10,000 synthetic extreme rain events. These events are resampled into four “scenarios” with differing spatial and temporal resolutions, which are used as input to a distributed hydrologic model. Analysis of variance is used to identify the proportions of total flood peak variability attributable to spatial and to temporal rainfall variability under two antecedent soil moisture conditions. We simulate peak discharges for recurrence intervals of 2 to 500 years for 1,343 subwatersheds ranging in size from 16 to 4,400 km² in the Turkey River basin in the Midwestern United States, which is situated in a typically humid continental climatic region. Antecedent soil moisture modulates the role of rainfall structure in simulated flood response, particularly for more frequent events and large watershed scales. Rainfall spatial structure is more important than temporal structure for drainage areas larger than approximately 2,000 km² under wet initial soil conditions and approximately 200 km² under dry conditions, while the reverse is true for smaller subwatersheds. The results appear to be related to the differing propensities for surface and subsurface runoff production as a function of basin scale, event magnitude, and soil saturation. Our results suggest that hydrologic model-based flood frequency analyses, particularly efforts attempting to span a range of scales, must carefully consider rainfall structure.
Chapter
The modeling of rainfall‐driven floods at continental and global scales for both short‐term forecasting and long‐term hazard and risk assessment is increasingly feasible due to increases in data availability and computational power. Flood modeling is difficult or impossible, however, if key information regarding rainfall (when, where, and how much) is of poor or unknown accuracy. This chapter discusses the basic requirements of rainfall information for flood modeling and introduces the different classes of rainfall information, including rain gauges, satellites, and atmospheric models. It also argues that the large‐scale rainfall data sets that are available today, including from rain gauges, satellites, and atmospheric models, face serious constraints for flood modeling due to limited accuracy, coarse resolution, high latency (the time elapsed between a rainfall observation and its subsequent availability for modeling), and the lack of long, consistent data records. There are a number of developments under way, however, that should dramatically improve the quality of such data sets in the coming years, paving the way for exciting opportunities in global flood modeling to better inform early warnings, situational awareness, and long‐term resiliency decision‐making. For these developments to be realized, sustained support and outreach from the international remote sensing, weather, and hydrology communities will be critical.
Article
The study was conducted in the Kamienica River catchment, which spans three mesoregions: the Beskid Sądecki (upper part of the catchment) and the Beskid Niski and Kotlina Sądecka (middle and lower parts). To verify the suitability of the NRCS method for computing direct runoff, seven flood events that occurred between 1997 and 2010 in the upper part of the Kamienica catchment were selected. The CN parameter of the NRCS method was determined from observed rainfall-runoff events, by separating the total runoff hydrograph into groundwater (base) flow and direct runoff. The results confirm reports by other authors that the empirically determined CN parameter is considerably higher than the theoretical value for normal conditions. It was therefore assumed that during rainless periods, or under normal rainfall, streams are fed by groundwater from the uppermost aquifer. The analyses showed that using baseflow as a measure of antecedent soil moisture when computing the CN parameter is justified for mountain catchments. Baseflow appears to characterize catchment wetness better than the antecedent precipitation total, as it more fully describes the hydraulic connections between groundwater and surface water and, to some extent, the retention capacity of the catchment.
Article
The estimation procedure to generate rainfall maps in real time consists of Level II radar volume data collection, quality checks of the acquired data, and rainfall estimation algorithms such as non-meteorological target detection, advection correction, Z-R conversion, and grid transformation. The rainfall intensity map that is generated using data from seven radars around the State of Iowa is updated at nominal 5-min intervals, and the accumulation map is produced at 15-min, 1-h, and daily intervals. These rainfall products are fed into a physically-based flood forecasting model called CUENCAS that uses landscape decomposition into hillslopes and channel links. The authors present preliminary results of analyses of real-time radar-rainfall products using rain gauge data and hydrological simulations from flood events in 2008 and 2009. They also show how differences in rainfall forcing affect peak flow discharge.
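The Z-R conversion step maps reflectivity to rain rate through a power law Z = aR^b; a small sketch using the conventional a = 300, b = 1.4 relation (other coefficient pairs apply to stratiform or tropical rain):

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=300.0, b=1.4):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.

    a=300, b=1.4 is the conventional deep-convective relation; the choice of
    coefficients is a significant source of radar-rainfall uncertainty.
    """
    z = 10.0 ** (dbz / 10.0)        # reflectivity factor Z in mm^6 / m^3
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {dbz_to_rain_rate(dbz):.1f} mm/h")
```

Because the relation is strongly nonlinear, small calibration offsets in dBZ translate into large relative errors in rain rate, which is one reason the quality-check steps above matter.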
Article
Flood modelling of a large basin like the Fitzroy is a difficult task due to its large catchment size, the long duration of flood events, the non-uniform spatial distribution of rainfall and a lack of required data for modelling purposes. This paper presents a systematic methodology for flood modelling of the Fitzroy Basin using a hydrologic and hydrodynamic modelling approach with Geographic Information System (GIS) capabilities. This study developed five flood scenarios analysing historical flood events and considering three impacts of climate change: upstream subcatchment flooding, local rainfall fluctuations and sea level rise. A hydrologic model was developed within the wider Fitzroy Basin with the five upstream subcatchments and the upper Fitzroy subcatchment in order to simulate discharge data for these scenarios at the Gap measurement location. An integrated hydrologic-hydrodynamic model was developed for the lower Fitzroy subcatchment where output discharges of the hydrologic model were used as the upstream boundaries. The peak flood levels, peak flow rates and flood inundation durations at Rockhampton city were identified using this integrated model. GIS capabilities were especially used for automatic watershed delineation and river cross-section extraction from Digital Elevation Model (DEM) data. The methodology proposed here is demonstrated as a case study and can be applied to other similar basins or catchments.
Article
Full-text available
In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more heavily. The estimates of models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment on a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the traditional squared error. Such a reduction comes at the expense of increased underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
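The asymmetric-error idea can be sketched with a plain linear model in place of the paper's feedforward neural networks (synthetic descriptors and quantiles; the weight ratio is an arbitrary choice): residuals on the overprediction side are simply weighted more heavily in the squared-error gradient:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 2))                   # standardized catchment descriptors
y = 5 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, 200)   # 2-yr flood quantile

def fit(X, y, w_over=1.0, lr=0.1, epochs=5000):
    """Linear regression minimizing squared error, with overpredictions
    (residual > 0) weighted w_over times more than underpredictions."""
    Xb = np.c_[np.ones(len(X)), X]
    beta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        r = Xb @ beta - y
        w = np.where(r > 0, w_over, 1.0)          # asymmetric penalty
        beta -= lr * (Xb.T @ (w * r)) / len(y)    # gradient step
    return beta

Xb = np.c_[np.ones(len(X)), X]
for w_over in (1.0, 4.0):
    pred = Xb @ fit(X, y, w_over)
    print(f"w_over={w_over}: fraction overpredicted = {np.mean(pred > y):.2f}")
```

Raising the overprediction weight pushes the fitted threshold downward, trading a few extra false alarms for fewer missed ones, exactly the trade-off argued for above.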
Book
The Trzebinia municipality is part of the Chrzanów area, which, together with the Olkusz area, belonged to the Kraków Industrial District. Metal ore extraction in the area dates back to the end of the Bronze Age, and documented underground mining of lead ores dates to the beginning of the 15th century. Centuries of ore extraction, processing, smelting and transport, fuel extraction, and the production and use of building materials have led to considerable degradation of the natural environment of the Trzebinia municipality and to deteriorated living conditions for its residents. In urbanized areas subject to long-term industrial pressure, places that offer residents opportunities for rest and recreation while preserving high natural value are particularly important. Water bodies and their surroundings are well suited to reconciling these functions: water provides opportunities for active recreation as well as for enjoying nature while fishing or walking, and reservoirs and watercourses are rich ecosystems hosting many plant and animal species. Not only natural ecosystems are valuable, but also man-made ones, which become naturalized over time through succession. Excessive and uncontrolled recreational pressure, however, can threaten both the recreational attractiveness of a site and its natural environment. In the Trzebinia municipality, an important recreation site is the Chechło reservoir, created in the 1940s. Use of the reservoir's shores that is uncoordinated with its natural values degrades its aesthetics, lowers its recreational attractiveness, and erodes the area's ecological function. Maintaining the recreational attractiveness of the Chechło reservoir therefore requires active measures for the sustainable development of the water body and its surroundings. Many ecosystems in the vicinity of the reservoir are semi-natural, and preserving their full natural richness requires carefully considered interventions. Work on a comprehensive project for the multifunctional use of the Chechło reservoir must begin with an analysis and assessment of its current state. Therefore, in 2012, as part of cooperation between the Faculty of Environmental Engineering and Land Surveying of the University of Agriculture in Krakow and the Trzebinia Town Office, studies were initiated to provide a basis for a revitalization project for the Chechło reservoir. The technical characteristics of the dam and reservoir, the geological and geotechnical conditions, and studies of the physico-chemical and toxicological properties of the bottom sediments were presented in the monograph Technical conditions for the revitalization of the Chechło water reservoir in the Trzebinia municipality [Zawisza et al. 2014]. The present monograph provides a natural-science and hydrochemical analysis of the reservoir and its surroundings. Adequate water quality, assessed here on the basis of physicochemical and biological parameters, is fundamental both to the reservoir's uses and to the proper functioning of aquatic and water-dependent ecosystems. The level of heavy-metal contamination of soils and plants around the reservoir is also presented, the vegetation in and around the reservoir is characterized, and the reservoir's significance as a bird habitat is assessed.
Both monographs are intended to present the state of the art on the Chechło reservoir and to provide a basis for developing a revitalization concept and for seeking co-financing from European Union funds.
Article
Full-text available
Adaptation to climate change has been reviewed in several developed nations, but in none where consideration of the effects of climate change is required by statute and devolved to local government. We examine the institutional arrangements, the players operating under them, and the barriers and enablers for adaptation decision-making in the developed nation of New Zealand. We examine how the roles and responsibilities between national, regional and local governments influence the ability of local government to deliver long-term flexible responses to changing climate risk. We found that the disciplinary practices of law, engineering and planning, within legal frameworks, result in the use of static mechanisms which create inflexible responses to changing risk. Several enablers are identified that could create greater integration between the different scales of government, including better use of national policy instruments, shared professional experience, standardised information collection and risk assessment methods that address uncertainties. Framing climate risk as dynamic and changing, differentiating activities over their lifetime, and developing mechanisms to fund transitions towards transformational change are identified as necessary conditions for delivering flexible responses over time.
Article
Key Points: A significant potential source of error exists in mosaicked radar-rainfall maps; different radar calibration offsets lead to misestimation of rainfall amounts; systematic error in rainfall significantly affects hydrologic predictions.
Article
Full-text available
Dual-polarization radars are expected to provide better rainfall estimates than single-polarization radars because of their ability to characterize hydrometeor type. The goal of this study is to evaluate single- and dual-polarization radar rainfall fields based on two overlapping radars (Kansas City, Missouri, and Topeka, Kansas) and a dense rain gauge network in Kansas City. The study area is located at different distances from the two radars (23-72 km for Kansas City and 104-157 km for Topeka), allowing for the investigation of radar range effects. The temporal and spatial scales of radar rainfall uncertainty based on three significant rainfall events are also examined. It is concluded that the improvements in rainfall estimation achieved by polarimetric radars are not consistent for all events or radars. The nature of the improvement depends fundamentally on range-dependent sampling of the vertical structure of the storms and hydrometeor types. While polarimetric algorithms reduce range effects, they are not able to completely resolve issues associated with range-dependent sampling. Radar rainfall error is demonstrated to decrease as temporal and spatial scales increase. However, errors in the estimation of total storm accumulations based on polarimetric radars remain significant (up to 25%) for scales of approximately 650 km².
Article
A statistical objective analysis (SOA) scheme is used to reanalyze the stage III estimate of rainfall, an hourly mosaic of digital precipitation arrays produced by a network of WSR-88Ds. The technique also uses rainfall measurements from the Oklahoma Mesonetwork that are taken over the Lake Altus area in southwest Oklahoma. The Lake Altus area is monitored by four WSR-88D radars: Frederick, Oklahoma; Twin Lakes, Oklahoma; Amarillo, Texas; and Lubbock, Texas. A total of 185 hourly maps of precipitation accumulation between June 1995 and July 1996 are used in the reanalysis. The results indicate that the stage III analysis underestimates total rainfall accumulations by as much as 40% when compared to the SOA reanalysis. Furthermore, the largest discrepancies between the stage III analysis and the SOA reanalysis coincide with overlapping areas of coverage between WSR-88D umbrellas. Some stage III precipitation fields used in this study clearly show fictitious high gradients of rainfall that exactly coincide with the maximum range ring of adjacent WSR-88Ds. Currently, stage II precipitation fields are mosaicked by averaging nonzero rainfall accumulations, regardless of their respective distance from a WSR-88D, to generate the stage III analysis. It is shown that wherever WSR-88D surveillance areas overlap, analysis errors, introduced solely by the radar-range effect, will adversely affect the accuracy of the stage III estimate. An error-weighted averaging method is proposed to eliminate this problem.
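The error-weighted averaging proposed above can be sketched as follows, with an assumed exponential decay of weight with range standing in for a real range-dependent error model (the decay scale and toy fields are invented):

```python
import numpy as np

def mosaic(estimates, ranges, range_err_km=150.0):
    """Error-weighted mosaic of overlapping radar rainfall estimates.

    estimates: (n_radars, ny, nx) rainfall fields, NaN outside each umbrella
    ranges:    (n_radars, ny, nx) distance of each pixel from each radar (km)
    Weight decays with range as a stand-in for a range-dependent error model.
    """
    w = np.exp(-ranges / range_err_km)
    w = np.where(np.isnan(estimates), 0.0, w)       # no weight where no coverage
    est = np.where(np.isnan(estimates), 0.0, estimates)
    wsum = w.sum(axis=0)
    return np.where(wsum > 0,
                    (w * est).sum(axis=0) / np.where(wsum > 0, wsum, 1.0),
                    np.nan)

# Two radars over a tiny 1x2 grid; the second radar has no echo at pixel 1
est = np.array([[[2.0, 4.0]], [[6.0, np.nan]]])
rng_km = np.array([[[50.0, 200.0]], [[120.0, 80.0]]])
print(mosaic(est, rng_km))    # near-radar estimates dominate the overlap pixel
```

Unlike a plain average of nonzero values, the near radar dominates where umbrellas overlap, which removes the fictitious gradients at maximum-range rings described above.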
Article
Full-text available
Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nation-wide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework which allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30 m Digital Elevation Model (DEM) from which slope, aspect and slope position are derived, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is now being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government with planned completion by 2006.
Article
Full-text available
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land-cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. A multi-support approach is needed because these objectives require spatial units of different sizes for reference data collection and analysis. Determining a sampling design that meets the full suite of desirable objectives for the NLCD 2001 accuracy assessment requires reconciling potentially conflicting design features that arise from targeting the different objectives. Multi-stage cluster sampling provides the general structure to achieve a multi-support assessment, and the flexibility to target different objectives at different stages of the design. We describe the implementation of two-stage cluster sampling for the initial phase of the NLCD 2001 assessment, and identify gaps in existing knowledge where research is needed to allow full implementation of a multi-objective, multi-support assessment.
Article
Full-text available
The Niobrara River of Nebraska is a geologically, ecologically, and economically significant resource. The State of Nebraska has recognized the need to better manage the surface- and ground-water resources of the Niobrara River so they are sustainable in the long term. In cooperation with the Nebraska Game and Parks Commission, the U.S. Geological Survey is investigating the hydrogeomorphic settings and hydraulic geometry of the Niobrara River to assist in characterizing the types of broad-scale physical habitat attributes that may be of importance to the ecological resources of the river system. This report includes an inventory of surface-water and ground-water hydrology data, surface water-quality data, a longitudinal geomorphic segmentation and characterization of the main channel and its valley, and hydraulic geometry relations for the 330-mile section of the Niobrara River from Dunlap Diversion Dam in western Nebraska to the Missouri River confluence. Hydraulic microhabitats also were analyzed using available data from discharge measurements to demonstrate the potential application of these data and analysis methods. The main channel of the Niobrara was partitioned into three distinct fluvial geomorphic provinces: an upper province characterized by open valleys and a sinuous, equiwidth channel; a central province characterized by mixed valley and channel settings, including several entrenched canyon reaches; and a lower province where the valley is wide, yet restricted, but the river also is wide and persistently braided. Within the three fluvial geomorphic provinces, 36 geomorphic segments were identified using a customized, process-orientated classification scheme, which described the basic physical characteristics of the Niobrara River and its valley. Analysis of the longitudinal slope characteristics indicated that the Niobrara River longitudinal profile may be largely bedrock-controlled, with slope inflections co-located at changes in bedrock type at river level. Hydraulic geometry relations indicated that local (at-a-station) channel adjustments of the Niobrara River to changing discharge are accommodated mainly by changes in velocity, and streamwise adjustments are accommodated through changes in channel width. Downstream hydraulic geometry relations are in general agreement with values previously published for rivers of the Great Plains, but coefficients are likely skewed low because the streamflow-gaging stations used in this analysis are located at natural or engineered constrictions and may not accurately represent downstream adjustment processes of the Niobrara River. A demonstration analysis of hydraulic microhabitat attributes at a single station indicated that changes in velocity-related habitat types are the primary microhabitat adjustment over a range of discharges, but the magnitude of that adjustment for any particular discharge is temporally variable.
Article
Full-text available
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures.
Article
Full-text available
In a little-known series of papers beginning in 1966, Tokunaga introduced an infinite class of tree graphs based on the Strahler ordering scheme. As recognized by Tokunaga (1984), these trees are characterized by a self-similarity property, so we will refer to them as self-similar trees, or SSTs. SSTs are defined in terms of a generator matrix which acts as a "blueprint" for constructing different trees. Many familiar tree constructions are absorbed as special cases. However, in Tokunaga's work an additional assumption is imposed which restricts from SSTs to a much smaller class. We will refer to this subclass as Tokunaga's trees. This paper presents several new and unifying results for SSTs. In particular, the conditions under which SSTs have well-defined Horton-Strahler stream ratios are given, as well as a general method for computing these ratios. It is also shown that the diameters of SSTs grow like m^β, where m is the number of leaves. In contrast to many other tree constructions, here β need not equal 1/2; thus SSTs offer an explanation for Hack's law. Finally, it is demonstrated that large discrepancies exist between the predictions of Shreve's well-known model and detailed measurements for large river networks, while other SSTs fit the data quite well. Other potential applications of the SST framework include diffusion-limited aggregation (DLA), lightning, bronchial passages, neural networks, and botanical trees.
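Since everything above is indexed by Horton-Strahler order, a small recursive implementation of the ordering rule may be a useful companion (the rule itself is standard; the example tree is arbitrary):

```python
def strahler_order(children, node):
    """Compute the Horton-Strahler order of a tree given a child list per node."""
    kids = children.get(node, [])
    if not kids:
        return 1  # exterior (source) link
    orders = sorted((strahler_order(children, k) for k in kids), reverse=True)
    # Order increases only when the two largest child orders tie
    return orders[0] + 1 if len(orders) > 1 and orders[0] == orders[1] else orders[0]

# A small binary tree: node 0 is the outlet, nodes 3-6 are sources
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
print(strahler_order(children, 0))  # -> 3
```

Stream-number and diameter statistics of SSTs are then obtained by aggregating such orders over ensembles of trees built from a generator matrix.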
Article
Full-text available
Stream flow information is essential for many important uses across a broad range of scales, including global water balances, engineering design, flood forecasting, reservoir operations, navigation, water supply, recreation, and environmental management. Growing populations and competing priorities for water, including preservation and restoration of aquatic habitat, are spurring demand for more accurate, timely, and accessible water data. To be most useful, stream flow information must be collected in a standardized manner, with a known accuracy and for a long and continuous time period. The U.S. Geological Survey (USGS) operates over 7000 stream gauges nationwide, which constitute over 90% of the nation's stream gauges that provide daily stream flow records and that are accessible to the public. Most stream flow records are not based on direct measurement of river discharge, but are derived from continuous measurements of river elevations or stage. These stage data, recorded to 3-mm accuracy, are then converted into discharge by use of a stage/discharge relation (rating) that is unique for each stream gauging location. Because stream beds and banks are not static, neither is the stage/discharge rating. Much of the effort and cost associated with stream gauging lies in establishing and updating this relation. Ten years ago, USGS personnel would visit stream gauging stations 8 to 10 times a year to make direct measurements of river depth, width, and velocity using mechanical instruments: a sounding rod or cable, a tagline, and a current meter. From these data, flow rates were computed. The range of measured flow and concurrent river stages were then used to build the rating curve for each site and to track changes to the rating curve.
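The stage/discharge rating described above is commonly modeled as Q = a(h - h0)^b; a sketch of fitting such a curve to synthetic field measurements (all numbers invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stage (m) / discharge (m3/s) pairs from direct field measurements
stage = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.3, 4.1])
discharge = np.array([3.0, 8.0, 20.0, 45.0, 90.0, 170.0, 290.0])

def rating(h, a, b, h0):
    """Power-law rating: discharge as a function of stage above a datum h0."""
    return a * (h - h0) ** b

# Bound h0 below the lowest observed stage so (h - h0) stays positive
p, _ = curve_fit(rating, stage, discharge, p0=(10.0, 2.0, 0.3),
                 bounds=([0.0, 0.5, 0.0], [1000.0, 5.0, 0.7]))
a, b, h0 = p
print(f"Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
print("Q at stage 3.0 m:", rating(3.0, *p))
```

Because beds and banks shift, the fitted parameters drift over time, which is why repeat field measurements remain necessary to track the rating.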
Article
Full-text available
Analyses of mean daily discharges and annual peak discharges for streams in 14 of the United States, spanning a wide range of climates, show that frequency of occurrence relationships for the large-discharge tails of both follow power laws. The number N(Q) of days on which the discharge exceeds Q, or the number of years in which the peak discharge exceeds Q, is related to Q by N(Q) ∝ Q^(-α). Values of the exponent α (1 < α < 6) decrease in magnitude with increasing aridity, so that the ratio of the frequency of occurrence of very large discharges to that of smaller discharges is higher in arid than in humid environments of the United States. To examine the effect of climate change on bed load transport and river incision, we obtain a curve fit relating mean annual discharge per unit area of drainage basin (an effective precipitation rate P̄) to α: α - 1 ∝ P̄^n, where n ≈ 1.6. Using this relationship, we confirm that rivers in arid regions should incise less rapidly as climate becomes yet more arid. (If no water flows, the "river" transports no sediment.) Whether aridification of an initially humid environment leads to increased or decreased incision rates, however, depends on the minimum (threshold) discharge capable not only of transporting bed load but also sufficient to scour alluvium from the riverbed and then erode the bedrock. The curve fit relating P̄ to α implies that for aridification to accelerate incision, floods that recur only once or twice per millennium (or less frequently) must carry out most of the incision. An overestimate of n could permit smaller, more frequent floods to incise, but it appears that only in special circumstances will aridification accelerate stream incision.
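A quick way to recover the exponent α from a daily-discharge record is a log-log regression of exceedance counts N(Q) against thresholds Q; the sketch below uses synthetic Pareto-tailed discharges with a known α = 2.5 (binned regression is the simplest estimator; maximum-likelihood estimators are generally preferred for real records):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic daily discharges with an exact power-law tail: Pareto(xm=10, alpha=2.5)
q = (rng.pareto(2.5, size=20_000) + 1.0) * 10.0

thresholds = np.logspace(np.log10(20), np.log10(q.max() / 2), 15)
counts = np.array([(q > t).sum() for t in thresholds])   # N(Q): days exceeding Q

# Slope of log N(Q) versus log Q estimates -alpha
slope, _ = np.polyfit(np.log(thresholds), np.log(counts), 1)
print("estimated alpha:", -slope)                        # close to 2.5
```

A steep slope (large α) means rare floods are not much larger than common ones; a shallow slope, typical of arid records, means the opposite.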
Article
Full-text available
This paper presents evidence of frequency-size power-laws in several groups of snow avalanche paths. Other natural hazards, such as earthquakes and forest fires, exhibit similar power-law relationships. In addition, an analysis of the response of one group of snow avalanche paths to storms through time demonstrates a power-law between the response of the system and the binned frequency of those responses. Our results, as well as our experience with these complex, non-linear systems, are consistent with self-organized criticality. The practical implication of this work is that the frequency-size relationship for small and medium sized avalanches may be useful for quantifying the risk of large snow avalanches within a group of avalanche paths.
Article
Full-text available
Flood frequency analysis (FFA) is a form of risk analysis, yet a risk analysis of the activity of FFA itself is rarely undertaken. The recent literature of FFA has been characterized by: (1) a proliferation of mathematical models, lacking theoretical hydrologic justification, but used to extrapolate the return periods of floods beyond the gauged record; (2) official mandating of particular models, which has resulted in (3) research focused on increasingly reductionist and statistically sophisticated procedures for parameter fitting to these models from the limited gauged data. These trends have evolved to such a refined state that FFA may be approaching the 'limits of splitting'; at the very least, the emphasis was shifted early in the history of FFA from predicting and explaining extreme flood events to the more soluble issue of fitting distributions to the bulk of the data. However, recent evidence indicates that the very modelling basis itself may be ripe for revision. Self-similar (power law) models are not only analytically simpler than conventional models, but they also offer a plausible theoretical basis in complexity theory. Of most significance, however, is the empirical evidence for self-similarity in flood behaviour. Self-similarity is difficult to detect in gauged records of limited length; however, one positive aspect of the application of statistics to FFA has been the refinement of techniques for the incorporation of historical and palaeoflood data. It is these data types, even over modest timescales such as 100 years, which offer the best promise for testing alternative models of extreme flood behaviour across a wider range of basins. At stake is the accurate estimation of flood magnitude, used widely for design purposes: the power law model produces far more conservative estimates of return period of large floods compared to conventional models, and deserves closer study.
Article
Full-text available
National Oceanic and Atmospheric Administration Technical Paper-29, published in the late 1950s, remains the most commonly used reference for estimating extreme areal precipitation from station data in the United States. Although a number of alternative methods have been proposed over the intervening years, a rigorous evaluation of the assumptions used in the compilation of TP-29 has not been presented. Overall, TP-29 areal reduction factors provide a conservative means of relating station precipitation extremes to basin average values. For watershed areas less than 1000 km², reevaluated areal reduction factors are in close agreement with the TP-29 values. For larger watersheds, which TP-29 does not address, the areal reduction factors continue to decay exponentially. The areal reduction factors were found to be particularly sensitive to return period and season, with less extreme areal precipitation relative to the corresponding station precipitation at longer return periods and during the warm season. The reevaluated factors exhibit modest differences between study areas in North Carolina and New Jersey. The influence of station density, interpolation method, and topographical rainfall biases appears insignificant.
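An areal reduction factor is, in essence, the ratio of the basin-average T-year rainfall to the mean point T-year rainfall; the sketch below computes it on a synthetic gridded annual-maximum dataset (the spatial model, a shared storm signal plus cell-wise noise, is a strong simplification of real storm structure):

```python
import numpy as np

rng = np.random.default_rng(5)
n_years, ny, nx = 200, 10, 10
# Stand-in gridded annual-maximum 24-h rainfall (mm): common storm + local noise
storm = rng.gumbel(60, 15, size=(n_years, 1, 1))
field = storm + rng.normal(0, 10, size=(n_years, ny, nx))

T = 25
rank = n_years // T                                 # index of the ~T-year event

point_q = np.sort(field, axis=0)[-rank].mean()      # mean of per-cell T-yr values
areal_q = np.sort(field.mean(axis=(1, 2)))[-rank]   # T-yr value of basin averages
print("areal reduction factor:", areal_q / point_q)
```

Because spatial averaging damps local extremes, the factor falls below 1, and it decays further with basin area and return period, consistent with the sensitivities reported above.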
Article
Full-text available
Runoff data were analyzed from the semihumid 21.2 km² Goodwin Creek Experimental Watershed (GCEW) in northern Mississippi to examine watershed response over a range of scales. Runoff is monitored at the GCEW outlet and in 13 subcatchments, ranging in area from 0.06 to 17.6 km². Previous data-based studies have shown that simple scaling theory fails to describe scaling of flood quantiles in large watersheds, and there is a fundamental change in scaling behavior in semihumid watersheds at an area of approximately 100 km². It has been found that flood quantiles in nearly all subbasins in the GCEW are self-similar as described by simple scaling theory. It has also been found that expected values of peak flows during single runoff events are described by a power law function of catchment area. The primary reasons why flood quantiles are self-similar on Goodwin Creek are that precipitation is relatively uniform over the basin; peak discharges in smaller catchments are highly correlated with rainfall rates; nearly the entire watershed regularly contributes to runoff; and the groundwater table plays little role in runoff production.
Article
Full-text available
The central hypothesis of a nonlinear geophysical flood theory postulates that, given space-time rainfall intensity for a rainfall-runoff event, solutions of coupled mass and momentum conservation differential equations governing runoff generation and transport in a self-similar river network produce spatial scaling, or a power law, relation between peak discharge and drainage area in the limit of large area. The excellent fit of a power law for the destructive flood event of June 2008 in the 32,400-km2 Iowa River basin over four orders of magnitude variation in drainage areas supports the central hypothesis. The challenge of predicting observed scaling exponent and intercept from physical processes is explained. We show scaling in mean annual peak discharges, and briefly discuss that it is physically connected with scaling in multiple rainfall-runoff events. Scaling in peak discharges would hold in a non-stationary climate due to global warming but its slope and intercept would change.
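The scaling relation itself is straightforward to estimate from peak-area pairs; a sketch with hypothetical values (the exponent 0.58 is planted in the synthetic data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical (drainage area km2, peak discharge m3/s) pairs along a network
area = np.array([5, 20, 90, 350, 1200, 5000, 15000, 32000], dtype=float)
peak = 3.2 * area ** 0.58 * np.exp(rng.normal(0, 0.1, area.size))

# Fit log Q_peak = log(intercept) + exponent * log A
exponent, log_c = np.polyfit(np.log(area), np.log(peak), 1)
print(f"Q_peak ~ {np.exp(log_c):.2f} * A^{exponent:.2f}")
```

The scientific challenge highlighted above is not the fit itself but predicting the exponent and intercept from rainfall and runoff-generation physics.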
Article
Full-text available
In general, different mechanisms may be identified as responsible for runoff generation during ordinary events or extraordinary events at the basin scale. In a simplified scheme these mechanisms may be represented by different runoff thresholds. In this context, the derived flood frequency model, based on the effect of partial contributing areas on peak flow, proposed by Iacobellis and Fiorentino (2000), was generalized by providing a new formulation of the derived distribution in which two runoff components are explicitly considered. The model was tested on a group of basins in Southern Italy characterized by highly skewed annual maximum flood distributions. The application of the proposed model provided good results in terms of descriptive ability. Model parameters were also found to be well correlated with geomorphological basin descriptors. Two different threshold mechanisms, associated respectively with ordinary and extraordinary events, were identified. In fact, we found that ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, responsible for the high skewness of flood distributions, are triggered when severe rainfalls exceed a threshold storage in a large portion of the basin.
Article
The Soil Conservation Service (SCS) curve number (CN) method is one of the most popular methods for computing the runoff volume from a rainstorm. It is popular because it is simple, easy to understand and apply, and stable, and accounts for most of the runoff-producing watershed characteristics, such as soil type, land use, hydrologic condition, and antecedent moisture condition. The SCS-CN method was originally developed for use on small agricultural watersheds and has since been extended and applied to rural, forest and urban watersheds. Since the inception of the method, it has been applied to a wide range of environments. In recent years, the method has received much attention in the hydrologic literature. The SCS-CN method was first published in 1956 in Section 4 of the National Engineering Handbook of the Soil Conservation Service (now called the Natural Resources Conservation Service), U.S. Department of Agriculture. The publication has since been revised several times. However, the contents of the methodology have nonetheless remained more or less the same. Being an agency methodology, the method has not passed through the process of peer review and is, in general, accepted in the form in which it exists. Despite several limitations of the method, and even questionable credibility at times, it has been in continuous use for the simple reason that it works fairly well at the field level.
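For reference, the core of the method is the pair of equations S = 1000/CN - 10 and Q = (P - Ia)^2 / (P - Ia + S) with Ia = 0.2S (depths in inches); a direct transcription:

```python
def scs_runoff(p_in, cn, ia_ratio=0.2):
    """SCS-CN direct runoff depth (inches) from storm rainfall P (inches).

    S = 1000/CN - 10 is the potential maximum retention; Ia = ia_ratio * S is
    the initial abstraction (0.2 is the traditional value, often debated).
    """
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_in <= ia:
        return 0.0                      # all rainfall lost to initial abstraction
    return (p_in - ia) ** 2 / (p_in - ia + s)

# 4 inches of rain on a suburban watershed (CN ~ 75)
print(scs_runoff(4.0, 75))              # ~1.7 inches of direct runoff
```

The single CN parameter is what makes the method both convenient and contentious: it folds soil, land use, and antecedent moisture into one tabulated number.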
Chapter
Whether processes in the natural world are dependent or independent of the scale at which they operate is one of the major issues in hydrologic science. In this volume, leading hydrologists present their views on the role of scale effects in hydrologic phenomena occurring in a range of field settings, from the land surface to deep fractured rock. Self-contained and thought-provoking chapters cover both theoretical and applied hydrology. They provide critical insights into important topics such as general circulation models, floods, river networks, vadose-zone processes, groundwater transport, and fluid flow through fractured media. This book is intended as an accessible introduction for graduate students and researchers to some of the most significant questions and challenges that will face hydrologic science in the twenty-first century.
Article
Flood frequency analysis plays a very significant role in the design of many hydraulic structures such as dams, culverts and urban drainage systems. In this study, the Godavari basin at the Polavam gauging station was selected as the study area. Flood frequency analysis was carried out using five distributions, namely the normal, log-normal, exponential, two-parameter gamma, and logistic distributions. To identify the best distribution for the Polavam gauging station, goodness-of-fit tests, namely the Chi-squared and Kolmogorov-Smirnov tests, were conducted. From these two tests, the two-parameter gamma distribution was identified as the best distribution for the Polavam gauging station.
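The fit-and-test workflow generalizes directly; a sketch using scipy (synthetic peak flows; note that applying Kolmogorov-Smirnov with parameters estimated from the same sample biases the test, so the statistics below serve only for relative comparison):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
peaks = rng.lognormal(mean=7.5, sigma=0.5, size=60)   # synthetic annual peak flows

candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "exponential": stats.expon, "gamma": stats.gamma,
              "logistic": stats.logistic}

for name, dist in candidates.items():
    params = dist.fit(peaks)                          # maximum-likelihood fit
    ks = stats.kstest(peaks, dist.cdf, args=params)   # smaller statistic = closer fit
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}")
```

The distribution with the smallest statistic that also passes the Chi-squared check would be retained for quantile estimation.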
Article
The model bankfull discharge recurrence interval (annual series) (Ta) in streams has been approximated as a 1.5-year flow event. This study tests the linkage between regional factors (climate, physiography, and ecoregion) and the frequency of bankfull discharge events in the Pacific Northwest (PNW). Patterns of Ta were found to be significant when stratified by EPA Ecoregion. The mean value for Ta in the PNW is 1.4 years; however, when the data are stratified by ecoregion, the humid areas of western Oregon and Washington have a mean value of 1.2 years, while the drier areas of Idaho and eastern Oregon and Washington have a mean value of 1.4 to 1.5 years. Among the four factors evaluated, vegetation association and average annual precipitation are the primary factors related to channel form and Ta. Based on the results of the Ta analyses, regional hydraulic geometry relationships of streams were developed for the PNW, which relate variables, such as bankfull cross-sectional area, width, depth, and velocity, to bankfull discharge and drainage area. The verification of Ta values, combined with the development of regional hydraulic geometry relationships, provides geographically relevant information that will result in more accurate estimates of hydraulic geometry variables in the PNW.
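Downstream hydraulic geometry expresses width, depth, and velocity as power laws of discharge, w = aQ^b, d = cQ^f, v = kQ^m, with b + f + m = 1 by continuity (Q = w d v); a sketch with synthetic data (all coefficients invented) recovers the exponents and checks the constraint:

```python
import numpy as np

rng = np.random.default_rng(7)
Q = np.logspace(0, 3, 40)                                  # bankfull discharge (m3/s)
width = 4.0 * Q ** 0.5 * np.exp(rng.normal(0, 0.05, Q.size))
depth = 0.4 * Q ** 0.35 * np.exp(rng.normal(0, 0.05, Q.size))
velocity = Q / (width * depth)                             # continuity closes the system

b, _ = np.polyfit(np.log(Q), np.log(width), 1)
f, _ = np.polyfit(np.log(Q), np.log(depth), 1)
m, _ = np.polyfit(np.log(Q), np.log(velocity), 1)
print(f"b={b:.2f}, f={f:.2f}, m={m:.2f}, b+f+m={b + f + m:.2f}")   # sums to ~1
```

Regional curves of this form are what allow channel dimensions to be predicted from discharge or drainage area where no survey exists.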
Article
The definition of a fractal distribution is that the number of objects N with a characteristic size greater than r scales with the relation N ∝ r^(-D). The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. Fractals were originally introduced by Mandelbrot to relate the length of a coastline to the length of the measuring stick. This application illustrates a fundamental aspect of fractal distributions, scale invariance. The requirement of an object to define a scale in photographs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can also lead to fractal clustering. Geophysical spectra can also be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the earth's topography and geoid.
Article
Linearity of basin runoff and peak response as a function of watershed scale was examined for a set of 29 nested semiarid watersheds within the U.S. Department of Agriculture–Agricultural Research Service Walnut Gulch Experimental Watershed, located in southeastern Arizona. Watershed drainage areas range from 1.83 × 10³ to 1.48 × 10⁸ m² (0.183–14,800 ha), and all stream channels are ephemeral. Observations of mean annual runoff, database-derived 2- and 100-year peak runoff rates, ephemeral channel area, and areal rainfall characteristics derived from 304 events were examined to assess the nature of runoff response behavior over this range of watershed scales. Two types of distributed rainfall-runoff models of differing complexity were applied to a subset of the watersheds to further investigate the scale-dependent nature of the collected data. Contrary to the conclusions of numerous studies in more humid regions, it was found that watershed runoff response becomes more nonlinear with increasing watershed scale, with a critical transition threshold area occurring roughly around the range of 3.7 × 10⁵ to 6.0 × 10⁵ m² (37–60 ha). The primary causes of increasingly nonlinear response are the increasing importance of ephemeral channel losses and partial storm area coverage. The modeling results indicate that significant error will result in model estimates of peak runoff rates when rainfall inputs from depth area-frequency relationships are applied beyond the area of typical storm coverage. For runoff modeling in Walnut Gulch and similar semiarid environments, explicit treatment of channel routing and transmission losses from channel infiltration will be required for watersheds larger than the critical drainage area.
Article
Now in a greatly expanded second edition, this book relates fractals and chaos to a variety of geological and geophysical applications and introduces the fundamental concepts of fractal geometry and chaotic dynamics. In this new edition, Turcotte expands coverage of self-organized criticality and includes statistics and time series to provide a broad background for the reader. Topics include drainage networks and erosion, floods, earthquakes, mineral and petroleum resources, fragmentation, mantle convection, and magnetic field generation. The author introduces each concept at the lowest level of mathematics consistent with understanding it, so that the reader requires only a background in basic physics and mathematics. He includes problems for the reader to solve. This book will appeal to a broad range of readers interested in complex natural phenomena.
Article
Estimation of fine-sediment delivery rates at the watershed scale is a long-standing problem. We investigated this problem using data from the Goodwin Creek Experimental Watershed (GCEW), located in north-central Mississippi. The GCEW is a Hortonian catchment with 13 nested sub-catchments and a total area of 21.3 km². Previous studies in this catchment showed that flood quantiles in nearly all sub-catchments are self-similar as described by simple scaling theory. We analyzed sixteen years of peak suspended sediment data from 12 gauging stations. Results showed that simple scaling holds better for peak sediment flow quantiles for the larger sub-catchments with slopes similar to the average slope of the catchment. In the same manner, predictions using derived power-law relations showed better performance for the larger sub-catchments. The log-transform of peak sediment flow and the computed event-total sediment volume showed a strong relation, which could be exploited to estimate the total sediment volume from a single measured sediment concentration. We hypothesize that the relatively uniform distribution of precipitation and land use and the similarity in slope of many of the sub-catchments contribute to simple scaling behavior in the GCEW. Moreover, the balance between deposition and re-suspension of sediment along the river reach, which enables channels to have a relatively uniform sediment concentration, helps the peak suspended sediment flux to be proportional to the flow peak and area for larger sub-catchments with similar slopes.
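The reported log-log relation between peak sediment flux and event-total sediment volume suggests a simple estimator: regress the log of event volume on the log of peak flux and invert. A minimal sketch with hypothetical event data (not GCEW measurements):

```python
import numpy as np

# Hypothetical event data at one gauging station: peak suspended-sediment
# flux (kg/s) and event-total sediment volume (t)
peak_flux = np.array([12., 45., 8., 110., 30., 70., 18., 95.])
event_vol = np.array([35., 160., 20., 430., 100., 260., 55., 360.])

# Fit log(V) = alpha + beta * log(Qs_peak); a strong log-log relation
# implies beta is stable enough for prediction
beta, alpha = np.polyfit(np.log(peak_flux), np.log(event_vol), 1)

def predict_volume(qs_peak):
    """Estimate event-total sediment volume from a single peak-flux value."""
    return np.exp(alpha) * qs_peak**beta

print(f"V ~ {np.exp(alpha):.2f} * Qs^{beta:.2f}")
print(f"predicted volume for Qs = 60 kg/s: {predict_volume(60.0):.0f} t")
```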
Article
Few studies have attempted to examine explicitly the factors that produce variation in hydraulic geometry parameters. A continuously varying parameter model of downstream hydraulic geometry is formulated to address this issue. Parameter variation is viewed as a function of channel sediment characteristics and flood magnitude. Results for the Missouri River basin show that these factors are strongly associated with variation in hydraulic geometry coefficients and exponents. The results also indicate that bivariate relationships are biased by these associations. The analysis yields site-specific parameters that uniquely describe the relationship between active channel form and mean discharge at a particular location within the basin. The continuously varying parameter approach has several advantages: (1) it isolates the separate influences of each explanatory variable on each hydraulic geometry parameter, (2) it maintains the precision of parameter estimates by including the entire data set in the analysis, and (3) it allows testing of the conception that hydraulic geometry parameters vary continuously rather than discretely, yet permits analysis of discrete change through the use of dummy variables.
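A continuously varying parameter model of this kind can be estimated as an ordinary regression in log space in which interaction terms let both the coefficient and the exponent of the width-discharge relation vary with the explanatory variables. A minimal sketch with synthetic data (the variable names and coefficient values are illustrative, not those of the Missouri River basin analysis):

```python
import numpy as np

# Hypothetical reach data: width W (m), mean discharge Q (m^3/s),
# median sediment size d50 (mm), and a flood-magnitude index F
rng = np.random.default_rng(2)
n = 60
logQ = rng.uniform(0.0, 4.0, n)
logd = rng.uniform(-1.0, 2.0, n)
logF = rng.uniform(1.0, 5.0, n)
logW = 0.8 + (0.40 + 0.03 * logd + 0.02 * logF) * logQ \
       + 0.05 * rng.normal(size=n)

# Continuously varying parameter model: both the coefficient and the
# exponent of W = a * Q^b are linear functions of log(d50) and log(F)
X = np.column_stack([np.ones(n), logd, logF,            # -> log a
                     logQ, logQ * logd, logQ * logF])   # -> b
coef, *_ = np.linalg.lstsq(X, logW, rcond=None)

def width_exponent(logd_i, logF_i):
    """Site-specific exponent b implied by the fitted model."""
    return coef[3] + coef[4] * logd_i + coef[5] * logF_i

print(f"b at d50 = 10 mm, mid-range F: {width_exponent(np.log(10), 3.0):.2f}")
```

Replacing the interaction terms with dummy variables in the same design matrix would test discrete rather than continuous parameter change, as the paper notes.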
Article
This paper uses generalized expressions for both cross-section geometry and hydraulics (generalization of the Chézy and Manning relations) to derive explicit equations for the exponents and coefficients in the power-law at-a-station hydraulic geometry relations. The exponents are shown to depend only on the depth exponent in the hydraulic relation (p) and the exponent that reflects the form of the cross-section (r). The coefficients depend on p and r, but also on the slope exponent in the generalized hydraulic relation and on the physical characteristics of the section: bankfull width, bankfull maximum depth, hydraulic conductance, and slope. The theoretical ranges of coefficient and exponent values derived herein are generally consistent with the averages and individual observed values reported in previous studies. However, observed values of the exponents at particular cross-sections commonly fall outside the theoretical ranges. In particular, the observed value of the velocity exponent m is commonly greater than the theoretical value, suggesting that hydraulic conductance often increases more strongly with discharge than predicted by the assumed hydraulic relations. The developments presented here provide new theoretical insight into the ways in which hydraulic and geometric factors determine hydraulic geometry. This insight should help to explain the variation of at-a-station hydraulic geometry and may facilitate prediction of hydraulic geometry at river reaches where detailed measurements are unavailable.
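For reference, the standard power-law at-a-station relations and the continuity constraints that any consistent set of coefficients and exponents must satisfy (because Q = wdv) are:

```latex
% At-a-station hydraulic geometry relations and the continuity constraints
% they must satisfy, since discharge is Q = w d v.
\begin{align}
  w &= a\,Q^{b}, \qquad d = c\,Q^{f}, \qquad v = k\,Q^{m}, \\
  Q &= w\,d\,v = (a\,c\,k)\,Q^{\,b+f+m}
     \quad\Longrightarrow\quad a\,c\,k = 1, \qquad b+f+m = 1.
\end{align}
```

The paper's contribution is to derive b, f, and m analytically, showing they depend only on the depth exponent p of the generalized hydraulic relation and the cross-section form exponent r, with the coefficients additionally depending on the physical characteristics of the section.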
Article
Low-flow stream chemistry and rainfall event responses of nested catchments in the mesoscale Dreisam catchment in the Black Forest Mountains, southwest Germany, were analyzed to investigate how the dominant runoff generation processes change with scale. The catchment sizes range from a 0.015 km² headwater catchment to the 258 km² Dreisam catchment. Synoptic sampling during low flows was used to address the spatial heterogeneity of the investigated tracers. Six events were sampled using three different experimental designs with the environmental tracers dissolved silica, oxygen-18, deuterium, and potassium to investigate event processes at different scales. Results are interpreted with respect to the widely discussed Representative Elementary Area (REA) concept. Most of the observed differences between the catchments could be related to changes in the topography and other catchment properties (i.e., soils, geology, land use). The test site catchments smaller than 1–2 km² were found to be nonrepresentative of the runoff generation in larger catchments due to the topographic structure and a reduced number of hydrological response units (HRUs) and, consequently, generated runoff components. In catchments larger than 40 km², an additional runoff component, surface runoff from urban areas, became increasingly important. However, surprisingly small differences in the tracer responses of catchments between 1 and 40 km² were observed. Although the lower threshold (1–2 km²) was similar for both methods (low-flow and event investigations), results suggest that the thresholds depend on the investigated scale and hydrological parameters as well as the catchment properties. Applying microscale tracer methods at the mesoscale provided detailed insights into the scaling of the dominant runoff processes. The results show that this approach is an important complement to the numerous microscale studies and modeling approaches when addressing scaling behavior. However, quantitative interpretations are limited owing to inherent heterogeneity at this scale.
Article
This study extends the earlier contribution of Julien and Wargadalam (1995). A larger database for the downstream hydraulic geometry of alluvial channels is examined through a nonlinear regression analysis. The database consists of a total of 1,485 measurements, 1,125 of which describe field data used for model calibration. The remaining 360 field and laboratory measurements are used for validation. The data used for validation include sand-bed, gravel-bed, and cobble-bed streams with meandering to braided planform geometry. The five parameters describing downstream hydraulic geometry are: channel width W, average flow depth h, mean flow velocity V, Shields parameter τ*, and channel slope S. The three independent variables are discharge Q, median bed particle diameter ds, and either channel slope S or Shields parameter τ* for dominant discharge conditions. The regression equations were tested for channel widths ranging from 0.2 to 1,100 m, flow depths from 0.01 to 16 m, flow velocities from 0.02 to 7 m/s, channel slopes from 0.0001 to 0.08, and Shields parameters from 0.001 to 35. The exponents of the proposed equations are comparable to those of Julien and Wargadalam (1995), but based on R² values of the validation analysis, the proposed regression equations perform slightly better. DOI: 10.1061/(ASCE)0733-9429(2006)132:12(1347)
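Although the paper uses nonlinear regression, the structure of such downstream hydraulic geometry equations can be illustrated with a log-linear fit of a power law in the three independent variables. A minimal sketch with synthetic records (the exponent values are illustrative, not the paper's calibrated equations):

```python
import numpy as np

# Hypothetical calibration records: discharge Q (m^3/s), median grain
# size ds (m), slope S, and observed channel width W (m)
rng = np.random.default_rng(3)
n = 200
Q = 10 ** rng.uniform(-1, 4, n)
ds = 10 ** rng.uniform(-4, -1, n)
S = 10 ** rng.uniform(-4, -1.1, n)
W = 3.0 * Q**0.45 * ds**-0.10 * S**-0.20 * np.exp(0.1 * rng.normal(size=n))

# Power law W = alpha * Q^beta * ds^gamma * S^delta, fitted as a
# linear regression in log space
X = np.column_stack([np.ones(n), np.log(Q), np.log(ds), np.log(S)])
(lna, beta, gamma, delta), *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
print(f"W = {np.exp(lna):.2f} Q^{beta:.2f} ds^{gamma:.2f} S^{delta:.2f}")
```

Analogous fits would be run for depth, velocity, and the remaining dependent variables, with a held-out subset used for validation as in the study.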
Article
Two-dimensional hydraulic models of floodplain flow are at the forefront of current research into flood inundation mechanisms, but they are currently constrained by inadequate parameterization of topography and friction, primarily due to insufficient or inaccurate data. This paper reviews the effects of topographic representation on flood inundation extent prediction and presents initial results of a flood simulation using a new topographic parameterization surface produced from airborne LIDAR (light detection and ranging) data.
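At its simplest, the sensitivity of predicted extent to topographic representation can be seen by intersecting a water surface with a DEM: any elevation error maps directly into the inundation boundary. A deliberately simplified sketch (a planar water surface and a synthetic DEM, not a 2D hydraulic model):

```python
import numpy as np

def inundation_extent(dem, water_surface_elev):
    """Boolean inundation mask: cells whose ground elevation lies below
    a (here spatially uniform) water-surface elevation.

    A planar water surface is a strong simplification; it only serves to
    show how DEM quality propagates directly into predicted extent.
    """
    return dem < water_surface_elev

# Hypothetical 5 m resolution DEM tile (elevations in m), built as a
# correlated random surface for illustration
rng = np.random.default_rng(4)
dem = 50.0 + np.cumsum(rng.normal(0, 0.2, (200, 200)), axis=0)

mask = inundation_extent(dem, 52.0)
print(f"inundated fraction: {mask.mean():.1%}")
```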
Article
The SCS-CN method was first published in 1956 in Section 4 of the National Engineering Handbook of the Soil Conservation Service (now the Natural Resources Conservation Service), U.S. Department of Agriculture. The publication has since been revised several times, but the contents of the methodology have remained more or less the same. Being an agency methodology, the method is generally accepted in the form in which it exists. Despite several limitations and even questionable credibility at times, it has remained in continuous use for the simple reason that it works fairly well at the field level. Recent contributions have significantly enhanced the understanding of the SCS-CN method and, consequently, its potential for wider application. In its simplest form, the fundamental proportionality concept of the method relates the two orthogonal hydrological processes of surface water and groundwater, while a second hypothesis relates to the atmospheric process. Qualitatively, the method broadly integrates all three major processes of the hydrologic cycle, and it can thus form one of the fundamental concepts of hydrology.
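The core of the method reduces to a few lines. A minimal implementation in SI units, assuming the conventional initial abstraction ratio of 0.2:

```python
def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from rainfall P (mm) via the SCS-CN method.

    S  = potential maximum retention implied by the curve number CN (SI form)
    Ia = initial abstraction, conventionally 0.2 * S
    Q  = (P - Ia)^2 / (P - Ia + S) once rainfall exceeds Ia, else zero
    """
    S = 25400.0 / CN - 254.0
    Ia = ia_ratio * S
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# An 80 mm storm on a moderately developed watershed (CN = 75)
print(f"runoff = {scs_cn_runoff(80.0, 75):.1f} mm")   # roughly 27 mm
```

The proportionality concept the abstract refers to is visible in the structure of Q: actual retention (P − Ia − Q) relative to potential retention S equals runoff Q relative to effective rainfall (P − Ia).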
Article
This paper presents key issues associated with uncertainty in flood inundation mapping. Currently, flood inundation extent is represented as a deterministic map without consideration of the inherent uncertainties in the variables used to produce it (precipitation, streamflow, topographic representation, modeling parameters and techniques, and geospatial operations). It is therefore unknown how the uncertainties associated with topographic representation, flow prediction, the hydraulic model, and inundation mapping techniques are transferred to the flood inundation map. In addition, the propagation of these individual uncertainties, and how they affect the overall uncertainty in the final flood inundation map, is not well understood. Using a sample data set for Strouds Creek, N.C., this paper highlights key uncertainties associated with flood inundation mapping. In addition, the idea of a probabilistic flood inundation map is articulated, and an integrated framework that connects data, models, and uncertainty analysis techniques to produce probabilistic flood inundation maps is presented.
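One way to realize the probabilistic map the paper argues for is a Monte Carlo ensemble: sample the uncertain inputs, compute an inundation extent for each draw, and report the fraction of draws in which each cell is wet. A heavily simplified sketch that collapses all uncertainty sources into a single uncertain water stage over a synthetic DEM:

```python
import numpy as np

def probabilistic_inundation(dem, stage_samples):
    """Probability-of-inundation map from an ensemble of water stages.

    Each stage sample stands in for one draw from the combined uncertainty
    in flow, roughness, topography, and model structure; a full framework
    would sample those sources separately and rerun the hydraulic model.
    """
    counts = np.zeros(dem.shape)
    for stage in stage_samples:
        counts += dem < stage              # wet cells for this realization
    return counts / len(stage_samples)

rng = np.random.default_rng(5)
dem = 50.0 + np.cumsum(rng.normal(0, 0.2, (100, 100)), axis=0)
stages = rng.normal(52.0, 0.5, 500)        # uncertain design water surface

prob_map = probabilistic_inundation(dem, stages)
print(f"cells with P(inundation) > 0.9: {(prob_map > 0.9).mean():.1%}")
```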