Article

A novel spatiotemporal multi-attribute method for assessing flood risks in urban spaces under climate change and demographic scenarios


Abstract

The combined effects of global warming and population growth compel policymakers to enhance flood risk management practices so as to substantially reduce the causes of floods while simultaneously mitigating their multiple future impacts. To this end, this paper puts forward a new multi-attribute, non-stationary decision model for managing flood risks in urban areas under climatic and demographic changes, which we call Non-Stationary Multi-Attribute Utility Theory (NSMAUT). Under expected utilities, our model inserts time-dependency into the decision-maker's (DM's) preference statements, based on his/her trade-offs regarding the psychological distance induced by delayed prospects. NSMAUT analyzes flood risks under five attributes, linked to the environmental, financial, human, mobility, and social concerns of society. By combining climate and demographic scenarios for this century, our findings draw on statistical, graphical, and risk performance measures that sharpen the DM's perception of risk, supporting the control and monitoring of changing risk circumstances and thus informing future decisions on adapting cities to adverse flood events. A numerical application in a Brazilian municipality over 2021–2100 is used to validate our approach. Moreover, the model can be replicated in other urban contexts.
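The abstract's time-dependent preference idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the exponential single-attribute utility shape, the equal trade-off weights, and the exponential discount standing in for psychological distance are not the authors' actual elicitation or formulation.

```python
import math

# Illustrative sketch (not the authors' exact NSMAUT formulation): aggregate
# per-attribute utilities with trade-off weights, then discount delayed
# consequences to mimic time-dependent preferences.
ATTRIBUTES = ["environmental", "financial", "human", "mobility", "social"]

def exponential_utility(x, c=0.5):
    """Risk-averse single-attribute utility on [0, 1] (hypothetical shape)."""
    return (1.0 - math.exp(-c * x)) / (1.0 - math.exp(-c))

def nsmaut_value(consequences, weights, years_ahead, discount=0.03):
    """Additive multi-attribute utility with an exponential time discount
    standing in for the model's non-stationary preference statements."""
    u = sum(weights[a] * exponential_utility(consequences[a]) for a in ATTRIBUTES)
    return u * math.exp(-discount * years_ahead)

weights = {a: 0.2 for a in ATTRIBUTES}       # equal trade-off weights (assumption)
consequences = {a: 0.6 for a in ATTRIBUTES}  # normalized outcomes (assumption)
now = nsmaut_value(consequences, weights, years_ahead=0)
later = nsmaut_value(consequences, weights, years_ahead=30)
```

The same prospect is valued less when delayed, which is the qualitative behavior the non-stationary model encodes.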




... Possible future climate scenarios exacerbate the changes already observed in measured hydro-meteorological data series in the past (IPCC 2021, 2022), stressing the human world, especially populated areas exposed to natural hazards like river floods (Rojas et al. 2017; Wing et al. 2022). The International Emergency Events Database (CRED 2021; da Silva et al. 2022) reveals that hydrological disasters, mainly floods, tend to be among the most recurrent hazards that disturb urban systems. The mechanisms and links between climate change and floods are reasonably well understood (Pour et al. 2020; Alifu et al. 2022), and different scenarios can be described in detail (Farid et al. 2023; Kim et al. 2023). ...
... Wang et al., 2015). This shift reflects advancements in technology and a deeper grasp of hydrological processes (da Silva et al., 2022; Islam & Sado, 2000; Lyu et al., 2019). ...
Article
Full-text available
This study employs Geospatial Artificial Intelligence (GeoAI) and the Random Forest Machine Learning (ML) algorithm to enhance flood hazard assessments in Portugal. It utilizes NASA's LP DAAC (2023) Digital Elevation Model (DEM) and slope data from EPIC WEBGIS PORTUGAL DATA, offering detailed topographical insights for environmental planning. Additionally, it incorporates data on proximity to water bodies from the Portuguese Environment Agency and the European Environment Agency, and soil characteristics from EPIC WEBGIS PORTUGAL DATA, facilitating a thorough examination of flood risks. This approach prioritizes long-term land features over short-term weather patterns, providing a comprehensive understanding of flood vulnerability. The study processes data at a 1 km x 1 km resolution, adapting TIFF maps for compatibility with the Random Forest model. The produced flood hazard maps identify potential flood hotspots at both national and city levels, crucial for urban planning. These maps aid in assessing the vulnerability of key infrastructure and assets, such as transport networks and buildings. The research highlights the importance of integrating additional data on assets and socioeconomic factors to enhance urban resilience. It sets the stage for future research aimed at improving predictive accuracy and underscores the necessity of extensive geospatial analytics in managing infrastructure risks.
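The hazard-mapping step described above (terrain and proximity features fed to a Random Forest) can be sketched roughly as follows. The features, thresholds, and flood labels here are synthetic stand-ins for the study's 1 km raster data, not its actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for a raster stack: each cell is described by
# elevation, slope, distance to the nearest water body, and a soil-drainage
# score. Labels (flood-prone or not) follow an invented rule for illustration.
rng = np.random.default_rng(42)
n = 2000
elevation = rng.uniform(0, 500, n)    # m
slope = rng.uniform(0, 30, n)         # degrees
dist_water = rng.uniform(0, 5000, n)  # m
drainage = rng.uniform(0, 1, n)       # 0 = poor, 1 = good
X = np.column_stack([elevation, slope, dist_water, drainage])

# Synthetic rule: low, flat cells near water with poor drainage flood.
y = ((elevation < 150) & (slope < 8) & (dist_water < 1500) & (drainage < 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
hazard_prob = model.predict_proba(X)[:, 1]  # per-cell flood-hazard score
```

In practice the per-cell scores would be written back to a georeferenced grid to produce the hazard map.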
... Addressing disaster risk stemming from natural hazards such as floods, earthquakes, and tropical cyclones in urban areas is primarily a developmental concern and necessitates integration within a broader urban development framework. Mitigating disaster risk will bolster urban resilience and foster sustainable urban development (Rentschler et al. 2022; Da Silva et al. 2022). ...
Article
Full-text available
Alluvial fan surfaces present unique challenges for urban development due to their susceptibility to flooding and other environmental hazards. This study focuses on the Pardisan settlement in Qom, which houses over 200,000 residents and is particularly prone to flood risks due to its geographical characteristics. The primary aim of this research is to identify and manage areas within Pardisan that are optimal for safe and sustainable urban expansion. To address this goal, we utilized a comprehensive methodology that includes flood risk modeling with HEC-RAS 6, using the Soil Conservation Service (SCS) method for a hypothetical 100-year flood event. Additionally, we incorporated 13 development criteria, developed from expert consultations, to assess land suitability for urban development. These criteria were analyzed using the Analytic Hierarchy Process (AHP) combined with Fuzzy logic, facilitating a nuanced comparison of potential sites for expansion. Our findings reveal that, despite existing watershed management and road infrastructure adjustments, current settlement areas in Pardisan remain vulnerable to flooding. The research highlights deficiencies in current flood prevention measures and underscores the need for enhanced strategies and further studies. Flood modeling showed that major infrastructures, such as bridges in the eastern sector of the settlement, are likely to withstand a 100-year flood, whereas smaller railway bridges require ongoing monitoring to manage debris accumulation. Furthermore, the channel within Pardisan is inadequate for managing the water flow and sediment from such a flood event, and upstream dams do not sufficiently address the volume and sedimentation challenges posed by potential floods. The outcomes of the AHP-Fuzzy model identified the southeast section of Pardisan as the most suitable area for future urban development. This recommendation is based on a thorough evaluation of the specified criteria, pointing to significant potential for targeted, optimal urban expansion in this region.
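The AHP weighting step used in studies like this one derives criterion weights from a pairwise comparison matrix. A minimal sketch with a hypothetical 3-criterion matrix (not the study's 13 criteria): weights come from the principal eigenvector, and the consistency ratio (CR) checks the coherence of the expert judgments.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()            # priority vector (criterion weights)

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)     # consistency index
CR = CI / 0.58                   # random index RI = 0.58 for n = 3 (Saaty)
```

A CR below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent; the fuzzy extension used in the study replaces the crisp scale values with fuzzy numbers.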
... (Gebresellase et al. 2022; Hartmann 2016; Shrestha & Pradhanang 2022). Evaluating climate change and supporting decision-making in mitigating climate change impacts are crucial (da Silva et al. 2022; Illangasingha et al. 2023). Numerous climate investigations have been undertaken using the Coupled Model Intercomparison Project Phase 6 (CMIP6) GCM simulation data (Peng et al. 2022; Wu et al. 2022). ...
Article
Full-text available
This research assesses the efficacy of thirteen bias correction methods, including traditional and machine learning-based approaches, in downscaling four chosen GCMs of the Coupled Model Intercomparison Project 6 (CMIP6) in Nigeria. The 0.5° resolution gridded rainfall, maximum temperature (Tmx), and minimum temperature (Tmn) of the Climate Research Unit (CRU) for the period 1975–2014 were used as the reference. The Compromise Programming Index (CPI) was used to assess the performance of the bias correction methods based on three statistical metrics. The optimal bias-correction technique was employed to rectify bias and project the spatiotemporal variations in rainfall, Tmx, and Tmn over Nigeria for two distinct future timeframes: the near future (2021–2059) and the distant future (2060–2099). The study's findings indicate that the Random Forest (RF) machine learning technique corrects the bias of all three climate variables best for the chosen GCMs. The CPIs of RF for rainfall, Tmx, and Tmn were 0.62, 0.0, and 0.0, followed by the Power Transformation approach with CPIs of 0.74, 0.36, and 0.29, respectively. The geographic distribution of rainfall and temperatures improved significantly compared to the original GCMs when using RF. The mean bias-corrected projections from the multimodel ensemble of the GCMs indicated a rainfall increase in the near future, particularly in the north by 2.7–12.7%, and a reduction in the south in the far future by -3.3% to -10% for different SSPs. The temperature projections indicated a rise in Tmx and Tmn from 0.71 °C and 0.63 °C for SSP126 to 2.71 °C and 3.13 °C for SSP585. This work highlights the significance of comparing bias correction approaches to determine the most suitable approach for adjusting biases in GCM estimations for climate change research.
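One of the traditional bias-correction families such studies compare, empirical quantile mapping, can be sketched as follows. The "observed" and GCM series below are synthetic with an invented bias, not CRU or CMIP6 data; the point is only the mechanics of mapping model values through the reference distribution.

```python
import numpy as np

# Synthetic daily rainfall: an "observed" series, a biased GCM historical
# series, and a biased GCM future series carrying a +10% climate signal.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)
gcm_hist = obs * 1.4 + 2.0          # invented multiplicative + additive bias
gcm_fut = (obs * 1.4 + 2.0) * 1.1

def quantile_map(x, model_hist, reference):
    """Replace each model value by the reference value at the same
    empirical quantile (rank) within the model's historical distribution."""
    q = np.searchsorted(np.sort(model_hist), x) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(reference, q)

corrected = quantile_map(gcm_fut, gcm_hist, obs)
bias_before = gcm_fut.mean() - obs.mean()
bias_after = corrected.mean() - obs.mean()
```

The corrected series keeps a residual positive shift (the future signal) while the systematic historical bias is largely removed, which is the intended behavior of quantile mapping.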
... According to Luo and Zhang (2022), LULC change due to urban growth has had significant effects on the hydrological regime, thereby leading to increased flood risk. da Silva et al. (2022) indicated that climate change and population growth are important factors causing higher flood risk, as confirmed by Lin et al. (2020). According to the latter authors, urban growth and climate change increased flood risk from 2015 to 2050 in the Guangzhou Metropolitan Area of China. ...
... In turn, Andrés-Doménech et al. (2018) monitored and evaluated the performance of green roofs at building and city scales in Benaguasil, Valencia, Spain, concluding that green roofs can potentially reduce flood risks at the building and city scales even in relatively dry climates such as the Mediterranean. Other examples of recent advancements in flood regulation by means of GI include the assessment of the effectiveness of planning and building regulations in coping with urban flooding under precipitation uncertainty (Piyumi et al., 2021), a multi-dimensional urban flood risk assessment supported by stakeholders' perceptions for the ranking/prioritization of districts (Ekmekcioglu et al., 2022), an innovative multi-attribute and non-stationary decision model for managing flood risks in urban areas under climatic and demographic changes (da Silva et al., 2022), and the development of an urban flood vulnerability index based on an interlinked social-ecological-technological systems vulnerability framework (Chang et al., 2021). ...
... The disruptions brought on by flood events pose substantial obstacles to sustainable economic and societal development (Ye et al., 2019). These costly disasters have become more frequent and intense in the last decade as a result of global climate change (da Silva et al., 2022), especially in urban areas with high exposure of material wealth, poor surface-water permeability, and inadequate drainage systems (Anni et al., 2020). For instance, on July 21, 2012, a rainstorm in Beijing, China, claimed the lives of 79 people, affected nearly 1.9 million people, and caused a $16.94 billion economic loss (Wang, H. et al., 2021). ...
Article
Timely understanding of affected areas during disasters is essential for the implementation of emergency response activities. As a low-cost, information-rich form of volunteered geographic information, social media data can reflect geographic events through human behavior, making it a powerful supplementary source for fine-grained flood monitoring in urban areas. However, the value of social media data has not been fully exploited, as potential location and water depth information may be embedded in both text and images. In this study, we propose a novel framework for fine-grained information extraction and dynamic spatial-temporal awareness in disaster-stricken areas based on Sina Weibo. First, we construct a novel fine-grained location corpus specifically for urban flooding contexts. The corpus summarizes characteristics of address descriptions in flood-related Weibo texts, including standard address entities and spatial relationship entities, based on the named entity recognition (NER) model. Then, water depth information in texts and images is obtained based on different deep learning modules and fused at the decision level. Specifically, in the text analysis module, we summarize and extract diverse descriptions of water depth, and in the image analysis module, we develop a water level hierarchical mapping method. Finally, we analyze the spatio-temporal distribution characteristics and variation patterns of the extracted information to enhance situational awareness. Taking an urban flood that occurred in Anhui, China as a case study, we find that the variation of flooding hotspot areas on Sina Weibo and rainfall centers show significant spatial and temporal consistency, and that the fusion of text- and image-based information can facilitate dynamic perception of flood processes. The framework presented in this study provides a feasible way to implement refined situational awareness and timely spatio-temporal evolution analysis of urban floods at the city level.
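The text-analysis step above uses a trained NER model over Chinese Weibo posts; as a much simpler stand-in, a toy regular-expression extractor over English sentences illustrates what "pulling water-depth mentions out of posts" means. The pattern and example posts are entirely illustrative.

```python
import re

# Toy water-depth extractor: a flood-related trigger word, then a number
# with a cm/m unit later in the same sentence. Not the study's NER pipeline.
DEPTH_PATTERN = re.compile(
    r"(?:water|flood(?:ing)?|inundat\w*)[^.]*?(\d+(?:\.\d+)?)\s*(cm|m)\b",
    re.IGNORECASE,
)

def extract_depths_m(text):
    """Return all water depths mentioned in a post, converted to metres."""
    depths = []
    for value, unit in DEPTH_PATTERN.findall(text):
        d = float(value)
        depths.append(d / 100.0 if unit.lower() == "cm" else d)
    return depths

posts = [
    "Flooding on Main St, water about 40 cm deep near the station.",
    "River overflowed; inundation reached 1.2 m in the underpass.",
    "Heavy rain all night but no standing water reported.",
]
all_depths = [extract_depths_m(p) for p in posts]
```

A production pipeline would instead classify and geocode posts with learned models, but the extracted (location, depth, time) triples feed the same kind of spatio-temporal analysis.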
... Cities are widely affected by flood risk worldwide (Jha et al., 2012), especially when considering urban areas located in floodplains or riverines or exposed to coastal events or flash floods (Nixon, 2016; Bernardini et al., 2021; Young & Jorge Papini, 2020; da Silva et al., 2022; Kvočka et al., 2016). Given the complexity of the urban built environment, flood risk assessments should thoroughly evaluate the conditions of each urban component, by integrating contextual (geographical) data with mesoscale information (Bernardini et al., 2021; Sharifi, 2019). ...
Article
Flood risk in an urban built environment depends on the combination of the hazard, the vulnerability of the built environment itself and its infrastructure (referred to as physical vulnerability), and the exposure and vulnerability of the people residing, working or visiting it (i.e., their human condition). However, factors affecting those people vary over space and time depending on the uses of the built environment. This research offers a methodology for combined spatiotemporal flood risk assessment, providing hourly variations in risks due to hazard, physical vulnerability, users’ exposure, and vulnerability. A mesoscale approach is adopted by collecting and managing data for each open space in the urban layout (e.g., street, square) and the facing buildings. In particular, users’ exposure and vulnerability are investigated for indoor and outdoor uses and their temporalities, providing hourly distributions of users’ density, age, familiarity with the built environment, and direct exposure to the floodwaters. Then, the Analytical Hierarchy process is used to combine risk factors. Finally, the application to a case study application (an urban district in Guimarães, Portugal) demonstrates how users’ factors alter the risk over the day within the same mesoscale element and considers different elements which share the same hazard and physical vulnerability.
... Flood discharge is associated with hydrological factors as well as an increase in the imperviousness of surfaces (Ress et al., 2020). Flood risk also varies with population growth in urbanized areas, and the risk is expected to rise with the increasing frequency of extreme storms as well as a growing human population (da Silva et al., 2022). Aidi (2019) stated that precipitation rate, land use/land cover, slope, and surface flow/discharge are the most influential factors in extreme floods (Yousuf and Romshoo, 2022). ...
Article
Extreme floods resulting from one of the heaviest rain storms ever documented over several urbanized coastal sub-districts in north-central Türkiye in August 2021 caused severe destruction, considerable property damage and tens of casualties. Peak discharges following the event were mostly greater than the projected 50-year and 100-year ones. This study aimed to examine the causes and consequences of this catastrophic event, which occurred in the sub-districts of Bozkurt and Catalzeytin within Kastamonu province, and Ayancik within Sinop province. Real-time 24-h precipitation follow-up by two neighboring Doppler radar stations disclosed the spatial distribution of the phenomenon as it unfolded. Relative damages that occurred within each district following the event were evaluated, taking into account the meteorological, topographical and hydrological variables. Precipitation, steadily increasing from 0 mm/h at midnight on August 11th to 35-40 mm/h around 4 p.m. in all sub-districts, kept falling without slowing down until the following midnight, even increasing further to 45-47 mm/h in Bozkurt. The heavily forested feeder basins (91.4% in Ayancik, 92.2% in Bozkurt and 88.2% in Catalzeytin) were overwhelmed by the amount of precipitation, and the quickly saturated surfaces allowed the excessive runoff to reach the sub-districts down the main channel. Tracing the entire phenomenon as it progressed over the sub-districts showed how site selection and unfavorable topography worsened the outcome. Topographic index calculations showed that the main channels and their immediate surroundings in all sub-districts were confirmed no-go zones for urban development. The events highlighted several lessons for city and flood managers regarding flood planning, mitigation, response and recovery.
... Results from the scatterplots agree with previous studies on LULC (indicated through NDVI and NDBI) and LST: loss of vegetation and increases in BUA and IC result in relatively higher temperatures in the expanding urban areas (da Silva et al., 2022; Dai et al., 2018; Fu & Weng, 2016; Miguez & Veról, 2017). ...
Article
Full-text available
Increased urbanization and growing needs in urban areas have caused extensive changes to the land use land cover (LULC) of urban areas. Many previous studies have observed that LULC changes in urban areas result in short- and long-term repercussions, including increased land surface temperature (LST) and urban heat islands. However, the established relationship between LULC changes and LST needs a deeper understanding. This study examines and explores the relationship between the LULC changes in Bengaluru, India, and the corresponding impact on the LST. LULC classification done using Landsat images revealed that over 50% of vegetation had been lost between 2003 and 2021, while the built-up area had nearly doubled during the same period. Subsequently, the city experienced an increase in the minimum temperature from 16 °C in 2003 to 21 °C in 2021, whereas the mean LST increased from 26 °C in 2003 to 29 °C in 2021. Spatial aggregation of the LST is examined using Moran's I spatial autocorrelation, and the results suggest moderately clustered LST values in the study area. Spatial indicators like the normalized difference vegetation index (NDVI) and normalized difference built-up index (NDBI) are calculated and compared against the LST using scatter plots. The relationship is identified using simple linear regression (SLR) and geographically weighted regression (GWR), considering the non-spatial and spatial attributes inherently present within the data. Results from GWR suggest a more defined relationship between LULC and LST, NDVI, and NDBI than SLR. While NDVI exhibited a negative relationship with LST, NDBI displayed a positive relationship with LST. The results from this study highlight the need for planned growth of urban areas with sustained growth in vegetated areas through urban planning and design strategies, which can help contain the increase in LST in future scenarios.
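The index-and-regression workflow above can be sketched compactly. Band reflectances and the LST relationship below are synthetic (with signs chosen to match the reported findings: cooler where vegetated, warmer where built-up); the standard NDVI/NDBI formulas are real.

```python
import numpy as np

# Synthetic per-pixel reflectances standing in for Landsat red, NIR, SWIR1.
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.3, 500)
nir = rng.uniform(0.1, 0.6, 500)
swir1 = rng.uniform(0.05, 0.5, 500)

ndvi = (nir - red) / (nir + red)      # vegetation index, in (-1, 1)
ndbi = (swir1 - nir) / (swir1 + nir)  # built-up index, in (-1, 1)

# Synthetic LST (deg C) consistent with the reported signs of the relationships.
lst = 27.0 - 4.0 * ndvi + 3.0 * ndbi + rng.normal(0, 0.3, 500)

# Simple linear regression slopes, as in the study's SLR step.
slope_ndvi = np.polyfit(ndvi, lst, 1)[0]  # expected negative
slope_ndbi = np.polyfit(ndbi, lst, 1)[0]  # expected positive
```

GWR would additionally fit these slopes locally, letting the NDVI-LST and NDBI-LST relationships vary across the city.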
... Climate change is a significant issue with undeniable consequences of various scales and magnitudes. Some of the effects are weather extremes, high temperatures, wildfires, a lack of rain, torrential rain, devastating floods, and dust-based air pollution [1][2][3]. According to IPCC's report [4], climate change could have a negative impact on urban development and socioeconomics, including decreasing accessibility to water, water security, and water quality for a large part of the world [5]. ...
Article
Full-text available
The upward trajectory of urbanization, coupled with the ever-growing demand for more water resources, has led to increased pressure on limited water resources, particularly in cities with dry climates such as Tehran. Since the balance of Tehran's water ecosystems has been disturbed, and the quality and quantity of water resources have been affected in recent years, conducting an assessment of water environment carrying capacity (WECC) was deemed vital for this city. WECC was used as the basis of water supply sustainability evaluation concerning Tehran's land use and demographic characteristics on a neighborhood scale. Therefore, the effect size and correlation of 12 types of land use and six variables derived from the literature with water consumption patterns were examined in warm and cold seasons. The results show that land use, population density, percentage of deteriorated area, percentage of buildings over 30 years old, residential-commercial land use, and green spaces correlate significantly with water consumption. The percentage of deteriorated areas and buildings over 30 years old have a negative impact, and the rest have a positive impact, on water consumption. It is also recommended to use the research findings to improve Tehran's water environment carrying capacity and to apply the proposed evaluation procedure to other cities. The results of this research can be used in planning large and densely populated cities with a neighborhood-oriented approach, in which local institutions play an essential role in attracting people's participation and inclusive urban planning.
... They have emphasized the necessity for a community-focused disaster response plan (Goyal, 2019). Many studies on community behaviour have focused on the interaction of physical, social, and economic infrastructures (Silva, Alencar & Almeida, 2022). Most of these studies relied on in-person research or social media analysis (Quagliarini et al., 2022). ...
Article
Flooding is a significant hazard responsible for substantial damage and risks to human life worldwide. Effective emergency evacuation to a safer location remains a concern even when the crisis can be predicted and warnings are given. During a calamity, most residents cannot quickly and securely flee. As it is crucial to start evacuation at the right time for a safe evacuation, this study focuses on a machine learning-based model for predicting a household's evacuation preparation time in the event of a flood. The study is based on data collected from flood-affected people in Kerala, India, through a questionnaire. The study indicates that people's demographic, geographical, and behavioural aspects, together with awareness of natural hazards and their management, are the critical components of improved emergency actions. Further, the article also analyses the characteristics of the respondents and successfully creates clusters to which the respondents broadly belong, which will help the rescue team operationalize the evacuation process.
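The respondent-clustering step mentioned above can be sketched with a standard k-means run. The features (household size, distance to shelter, prior flood experience) and the two synthetic groups are invented for illustration; the study's actual questionnaire variables differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two invented, well-separated respondent groups: columns are household size,
# distance to shelter (km), and a prior-flood-experience flag.
rng = np.random.default_rng(7)
group_a = np.column_stack([rng.normal(3, 0.5, 60), rng.normal(1.0, 0.2, 60), np.zeros(60)])
group_b = np.column_stack([rng.normal(6, 0.5, 60), rng.normal(4.0, 0.2, 60), np.ones(60)])
respondents = np.vstack([group_a, group_b])

# k-means recovers the two broad respondent clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(respondents)
labels = km.labels_
```

In practice the cluster profiles (typical preparation times, awareness levels) would then guide differentiated evacuation support.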
... China's Henan province was also hit by a rainstorm that caused severe flooding in several cities, including Zhengzhou, which suffered 380 casualties and huge economic losses. Urban flooding is gradually becoming a key factor restricting the sustainable and healthy development of cities and has aroused widespread concern worldwide (Bajabaa et al., 2014; da Silva et al., 2022; Salvadore et al., 2015; Wan et al., 2020). ...
Article
Full-text available
Urban flooding research cannot be overemphasized, given its environmental, economic and societal implications. Research on the response of urban flooding under different rainstorm scenarios is conducive to the design of drainage projects and helps government departments anticipate the waterlogging process in advance. Thus, this paper introduces a new rainstorm-pattern design method using Copula theory, the Most-Likely design realization method and Monte Carlo simulation, based on rainfall data from a Chinese megacity, Zhengzhou. Furthermore, a coupled one-dimensional and two-dimensional hydrodynamic model, MIKE FLOOD, was used to determine the urban flooding response to various rainstorm scenarios. The results show that more than 75 percent of waterlogging disasters in Zhengzhou city were caused by short-duration rainstorms of within 3 hours, with the one-hour single-peak rainstorm process playing the main role. The rainstorm volume (RV) under the Kendall return period decreased significantly compared with the radical “Or” return period, while the RV under the survival Kendall return period was higher than under the conservative “And” return period. Moreover, the rainstorm scenario corresponding to the “Or” return period brought about the most serious waterlogging disaster, with a peak inundation volume of 1.8 × 10⁵ m³ and a maximum water depth of 1.86 m.
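The "Or" and "And" joint return periods contrasted above can be illustrated with a short sketch. A Gumbel copula with an arbitrary dependence parameter is assumed here purely for illustration; the study fits copulas to observed rainfall characteristics.

```python
import math

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def return_periods(u, v, theta=2.0, mu=1.0):
    """Joint return periods; mu = mean event interarrival time (years)."""
    c = gumbel_copula(u, v, theta)
    t_or = mu / (1.0 - c)           # "Or": either variable exceeds its level
    t_and = mu / (1.0 - u - v + c)  # "And": both variables exceed their levels
    return t_or, t_and

# Non-exceedance probabilities of the 50-year marginal level for each variable
# (e.g., rainstorm volume and peak intensity).
u = v = 1.0 - 1.0 / 50.0
t_or, t_and = return_periods(u, v)
```

As expected, the "Or" event is more frequent (shorter return period) and the "And" event rarer than either 50-year marginal event, which is why the "Or" definition is the radical one and "And" the conservative one.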
... Such hurdles are further exacerbated when accounting for the unprecedented impact of climate change combined with ever-increasing urbanization (Casal-Campos et al., 2018; Sitzenfrei et al., 2022; Webber et al., 2022). These deviations often manifest themselves in heatwaves, flooding and droughts plaguing cities across the globe (AghaKouchak et al., 2014; da Silva et al., 2022; Najafi et al., 2021). For this reason, it is necessary to learn to adapt infrastructures to mitigate the effects of such exceptional conditions (Shukla et al., 2019); therefore, a paradigm shift from reliability to resilience for urban water infrastructures during all stages, including planning, operation, and rehabilitation, is needed (Sweetapple et al., 2022). ...
Article
Full-text available
Recent research underpinned the effectiveness of topological decentralization for urban stormwater networks (USNs) during the planning stage in terms of both capital savings and resilience enhancement. However, how centralized and decentralized USNs’ structures with various degrees of redundancy (i.e., redundant water flow pathways) project resilience under functional and structural failure remains an unresolved issue. In this work, we present a systemic and generic framework to investigate the impact of adding redundant flow paths on resilience based on three strategies for optimal centralized versus decentralized USNs. Furthermore, a tailored graph-theory based measure (i.e., eigenvector centrality) is proposed to introduce redundant paths to the critical locations of USNs. The proposed framework is then applied to a real large-scale case study. The results confirm the critical role of layout decentralization under both functional (e.g., extreme precipitation events), and structural failure (e.g., pipe collapse). Moreover, the findings indicate that the implementation of redundant paths could increase resilience performance by up to 8% under functional failure without changing the network's structural characteristics (i.e., sewer diameters, lengths, and storage capacity), only by leveraging the effective flow redistribution. The scheme proposed in this study can be a fruitful initiative for further improving the USNs’ resilience during both planning and rehabilitation stages.
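The eigenvector-centrality measure used above to place redundant paths can be computed by power iteration on the network's adjacency matrix. The 5-node toy graph below is an invented stand-in for a stormwater-network graph.

```python
import numpy as np

# Adjacency matrix of a toy undirected network (node 4 is a peripheral leaf).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Power iteration: repeatedly multiply by A and renormalize; the vector
# converges to the principal eigenvector (the centrality scores).
x = np.ones(A.shape[0])
for _ in range(200):
    x = A @ x
    x = x / np.linalg.norm(x)
centrality = x

most_central = int(np.argmax(centrality))  # candidate node for a redundant path
```

Nodes whose neighbors are themselves well connected score highest, so redundant flow paths added there redistribute flow most effectively.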
Article
Full-text available
In order to accurately assess urban flood losses under changing environments, a flood loss assessment framework is proposed based on a bidirectionally coupled 1D/2D hydrodynamic model. Using the proposed framework and the derived future storm intensity formula under four SSP–RCP emission scenarios, flood inundation processes in the Qingshan district of Wuhan City were modelled under the present climate, an intermediate-future climate (2041–2070) and a distant-future climate (2071–2100). Flood economic losses and the affected population were estimated on the basis of precise hydrodynamic simulations, the inundation depth-loss rate relationship and socioeconomic data for the present and the future under four SSP scenarios. These results indicate that, compared with the present climate scenario, the percentage of inundated area on the urban surface would increase by 3.7–14.1% and 8.7–18.2% under the intermediate-future and distant-future climates, respectively. Flood economic losses from 2030 to 2100 would show increasing trends under the four SSPs, rising by 0.4–21.7 times compared with the present scenario. The flood-affected population from 2030 to 2100 would show increasing trends under SSP2 and SSP3, and increasing-then-decreasing trends under SSP1 and SSP5, with a maximum decrease of 8% and a maximum increase of 552% compared with the present scenario. The results can provide useful insight into the evolution of flood losses and a scientific basis for the formulation of disaster mitigation measures.
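The inundation depth-loss rate step described above can be sketched as a piecewise-linear depth-damage curve applied to simulated depths and exposed asset values. All numbers below are invented for illustration.

```python
# Hypothetical depth-loss-rate curve: (depth in m, fraction of value lost).
DEPTH_LOSS_CURVE = [(0.0, 0.0), (0.5, 0.15), (1.0, 0.35), (2.0, 0.60), (3.0, 0.80)]

def loss_rate(depth_m):
    """Piecewise-linear interpolation of the depth-loss-rate curve,
    clamped to the curve's endpoints."""
    pts = DEPTH_LOSS_CURVE
    if depth_m <= pts[0][0]:
        return pts[0][1]
    for (d0, r0), (d1, r1) in zip(pts, pts[1:]):
        if depth_m <= d1:
            return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
    return pts[-1][1]

# Grid cells: (simulated inundation depth in m, exposed asset value, e.g.
# million CNY). Total loss sums value times the depth-dependent loss rate.
cells = [(0.2, 10.0), (0.8, 25.0), (1.5, 40.0), (0.0, 30.0)]
economic_loss = sum(value * loss_rate(depth) for depth, value in cells)
```

The affected-population estimate works the same way, with population counts per cell in place of asset values.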
Article
Previous research has primarily focused on evaluating flood risks influenced by single factors, often neglecting the combined impacts of climate change and urbanization on historical and future flood risks. This study utilized 12 assessment indicators across three criteria layers. It employed the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) and an optimized Multi-scale Geographically Weighted Regression (MGWR) to develop a comprehensive assessment framework for historical and projected urban flood risks under combined scenarios of RCPs and SSPs. The risk of future extreme precipitation events increased under 4 RCPs. Compared with 2010, Beijing’s urban land area is expected to increase by an average of 53.64 % in 2060 under 5 SSPs, accompanied by rapid socioeconomic development. MGWR based on K-means optimization improves operational speed by reducing the number of patches. The areas classified as high and highest flood risk levels increased by 38.87 % and 60.17 % from the 2010s to the 2060s, indicating an upward trend in future flood risk. Higher flood risk is observed under the high emissions combination scenario. Structural equation modeling (SEM) revealed that road network density (RND), runoff depth (RD), and distance to road (DRO) were the main influencing factors contributing to flood risks in Beijing.
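The TOPSIS step named above is a standard ranking-by-closeness procedure and can be shown in full on a toy decision matrix. The alternatives, indicator values, and weights below are invented, not the study's 12-indicator data.

```python
import numpy as np

# Rows: alternatives (e.g., districts); columns: risk indicators (invented).
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, True])  # all criteria increase the risk score here

R = X / np.sqrt((X ** 2).sum(axis=0))   # vector normalization
V = R * weights                          # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)              # relative closeness in [0, 1]
ranking = np.argsort(-closeness)                 # highest-risk alternative first
```

Here the third alternative dominates every criterion and ranks first, while the second is dominated on every criterion and ranks last, matching the TOPSIS intuition.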
Article
Full-text available
Regions vulnerable to drought require resource allocation to develop projects that combat the negative impacts of low rainfall. From this perspective, this study proposes a multicriteria sorting model to categorize municipalities affected by drought in order to allocate resources according to criticality. The proposed model is based on the FlowSort method, sorting municipalities into highly critical, moderately critical, and slightly critical levels of capacity to cope with drought. The model was applied to sort 14 municipalities in the Apodi-Mossoró river basin in Rio Grande do Norte, Brazil. The findings revealed that the public administration could effectively focus its resources on municipalities experiencing more severe drought conditions, developing strategies, public policies, and actions for coping with extreme drought. Keywords: drought; multicriteria decision aiding; sorting
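The sorting idea can be illustrated with a heavily simplified sketch: each municipality's weighted criticality score is compared against category limit profiles. Real FlowSort assigns categories via PROMETHEE flows against reference profiles; the weights, profiles, and scores below are all invented.

```python
# Highly simplified sorting in the spirit of FlowSort (not the actual method):
# hypothetical criteria could be rainfall deficit, reservoir level, exposure.
CRITERIA_WEIGHTS = [0.5, 0.3, 0.2]
LIMIT_PROFILES = {            # lower score bound of each category (invented)
    "highly critical": 0.66,
    "moderately critical": 0.33,
    "slightly critical": 0.0,
}

def severity_score(criteria):
    """Weighted aggregate of normalized criticality criteria in [0, 1]."""
    return sum(w * c for w, c in zip(CRITERIA_WEIGHTS, criteria))

def sort_municipality(criteria):
    """Assign the first (most critical) category whose lower bound is met."""
    s = severity_score(criteria)
    for category, lower in LIMIT_PROFILES.items():
        if s >= lower:
            return category
    return "slightly critical"

municipalities = {"A": [0.9, 0.8, 0.7], "B": [0.4, 0.3, 0.5], "C": [0.1, 0.2, 0.0]}
assignments = {name: sort_municipality(c) for name, c in municipalities.items()}
```

The categorical (rather than ranked) output is what lets the administration direct resources to whole criticality classes at once.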
Article
Floods have been causing the world’s costliest weather-related catastrophes, and their magnitude and frequency are projected to increase even further due to climate change. Current flood risk quantification procedures include the use of complex and highly uncertain hydrologic-hydraulic models for hazard mapping and computationally tedious manipulations for vulnerability evaluation, hindering urban centers' climate resilience planning. Adopting a novel approach that bypasses such time-consuming procedures, this study presents a deep learning-based rapid and accurate flood risk prediction tool, RAPFLO, to directly relate flood risk characteristics (level, extent, and likelihood) to their main drivers (e.g., climate, topography, and land cover). The approach employed to develop RAPFLO is generic in nature, and the associated methodology is not site-dependent. To demonstrate its utility, RAPFLO is deployed on the City of Calgary, Canada, and is used to reproduce the fluvial flood risk across the city between the years 2010 and 2020. RAPFLO efficiently replicated the risk level with an overall accuracy of 80 % and the risk likelihood with a coefficient of determination of 0.96. Subsequently, RAPFLO was employed for predicting future fluvial flood risk from the year 2025 to 2100 under the RCP 8.5 climate scenario. RAPFLO presents a valuable computationally efficient, accurate, and rapid decision support system that empowers city managers and infrastructure operators to devise effective climate resilience strategies considering different climate projections and future what-if scenarios.
Article
Full-text available
The adoption of models based on key performance indicators to diagnose and evaluate the competitiveness of companies has become a trend in operations management. These models are structured with different variables in complex interrelationships, making diagnosis and monitoring difficult because of the number of variables involved, which is one of the main management challenges of Small and Medium-sized Enterprises. Accordingly, this article proposes the Gain Information Artificial Neural Network (GIANN) method, which optimizes the number of variables in models that assess the competitiveness and operational performance of Small and Medium-sized Enterprises. GIANN is a hybrid methodology combining Multi-attribute Utility Theory with Entropy and Information Gain concepts and computational modeling through a Multilayer Perceptron Artificial Neural Network. The model used in this article integrates variables such as fundamental points of view, critical success factors, and key performance indicators. GIANN was validated through a survey of managers of Small and Medium-sized Enterprises in Southern Brazil. The initial model was adjusted, reducing the number of key performance indicators by 39% while maintaining the accuracy of the competitiveness measurement. With GIANN, the number of variables to be monitored decreases considerably, facilitating the management of Small and Medium-sized Enterprises.
Article
Public Open Spaces (POSs) in our cities, such as streets and squares, are characterized by spatio-temporal variations in users' vulnerability and exposure in view of the social, governmental, religious, and commercial functions they host. Single- and multi-risk conditions in POSs can hence vary over time. This work proposes a methodology for local-scale analyses of use patterns in real-world POSs, pursuing a quick-to-apply approach based on remote analysis tools and easy-to-apply surveys that can also be used by non-expert technicians. The main literature-based factors concerning users' vulnerability/exposure, and methods for collecting them, are identified. Rules to define typological (that is, recurring) scenarios are provided through specific key performance indicators relating to overall POS use and daily/hourly temporalities. The methodology's capabilities are preliminarily assessed on a sample of 56 squares in historic Italian cities, considering working days and holidays. Results trace the overall typological characterization of the sample of squares, adopting a robust-to-outliers approach, and provide the basis for expeditious assessment of users' vulnerability and exposure scenarios. The typological scenarios can then be used to support rapid risk assessment in POSs by safety designers and local authority technicians, and employed as input to simulation-based analyses to include users' features in the related evaluations.
Article
Full-text available
Iran is one of the flood-prone areas of the world, with unfavorable climatic patterns. In this study, flood risk maps for three scenarios were prepared for the Haraz watershed in northern Iran by combining the Analytic Hierarchy Process (AHP), Analytical Network Process (ANP), and Fuzzy Analytic Hierarchy Process (FAHP) models with the Ordered Weighted Average (OWA), Weighted Linear Combination (WLC), and Local Weighted Linear Combination (LWLC) models and with two new models, Weighted Multi-Criteria Analysis (WMCA) and the Geo Technique for Order of Preference by Similarity to Ideal Solution (Geo TOPSIS). The analysis of the results of the AHP model in the first scenario, the ANP model in the second scenario, and the FAHP model in the third scenario shows that precipitation, slope, land use, elevation, drainage density, and distance to the river are the most important criteria for the occurrence of floods in the Haraz basin. Evaluation of the flood risk models shows that, on average, about 70%, 20%, 8%, and 2% of the Haraz basin are at medium, low, high, and no flood risk, respectively. Geographically, the southeastern and central parts are at high and low flood risk, respectively, and the other parts of the basin are at medium risk. Many forest lands, pastures, agricultural areas, and population centers in this basin are at medium or high risk of flooding. Overall, the results indicate that the WMCA and Geo TOPSIS models, along with the WLC, LWLC, and OWA models, are effective methods for flood risk studies, and that the Haraz basin needs proper planning for flood risk management.
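The AHP step in studies like this reduces to extracting a priority vector from a pairwise-comparison matrix of criteria. A minimal sketch via power iteration; the 3×3 matrix and the judgments behind it are invented for illustration, not taken from the Haraz study:

```python
def ahp_weights(pairwise, iters=100):
    """Approximate the AHP priority vector as the principal eigenvector
    of a pairwise-comparison matrix, via power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Hypothetical judgments: precipitation is 3x as important as slope
# and 5x as important as land use; slope is 2x as important as land use.
A = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
weights = ahp_weights(A)  # weights sum to 1; precipitation ranks first
```

The resulting weights would then feed an aggregation rule such as WLC over the criterion layers.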
Article
Flooding due to overflowing rivers affects the construction elements of many buildings. Although significant progress has been made in predicting this damage, the issue still needs study. For this reason, the main goal of this research is to find out whether, based on a small dataset of cases from a given area, it is possible to predict at least three degrees of affection in buildings, considering only three environmental factors (minimum distance from the river, unevenness, and possible water communication). To meet this goal, the methodological approach combines a scientific literature review with the collection and analysis of a small dataset from 101 buildings that have been affected by floods in the Guadalquivir River basin (Andalusia, Spain). After analyzing this data, machine learning (ML) algorithms are applied to predict the degree of affection. The results, analysis, and conclusions indicate that, if the study focuses on a specific area and similar buildings, using a correlation matrix and ML algorithms such as a Decision Tree with cross-validation, around 90% can be achieved in the Recall and Precision of the High-Level-Affection class, and an overall Accuracy of around 80%.
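The per-class Recall and Precision figures quoted above are simple ratios over confusion counts. A self-contained sketch with made-up labels (not the paper's data) showing how the three metrics are computed:

```python
def precision_recall(y_true, y_pred, label):
    """Per-class precision and recall from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented degrees of affection for six buildings (purely illustrative).
y_true = ["high", "high", "mid", "low", "high", "mid"]
y_pred = ["high", "mid", "mid", "low", "high", "mid"]

prec_high, rec_high = precision_recall(y_true, y_pred, "high")
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

In practice these metrics would be averaged over cross-validation folds, as the abstract describes.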
Article
Full-text available
The coastal city of Guasave, Sinaloa, located on the Mexican Pacific coast, is subject to extreme precipitation events, which have caused flooding with damage to the city’s infrastructure. The factors that influence flooding are vegetation, geology, degree of soil saturation, drainage characteristics of the watershed, and the shape of the topographic relief. Of these factors, the topographic relief, which is the subject of this study, has been partially modified in some areas by infrastructure works (from 20.2 m to 17.6 m), and the population of the urban area has grown by 51.8% in 17 years (2004–2021); the objective is therefore to evaluate the potential flood risk due to changes in this factor and the growth of the urban area. The potential flood risk was determined by considering four extreme events: 1982, 1990, 1998, and 2019. It was found that the potential risk increases for the whole city, most intensely in sector III, which, before the modification of the topographic relief, was the area with the lowest risk of flooding. In an extreme event such as Hurricane Paul in 1982, practically the entire city would be flooded.
Article
Floods are among the most destructive sudden-onset disasters affecting communities and society worldwide. Pedestrians can be forced to evacuate affected areas, thus being exposed to multiple risks. Flood risk analyses of the outdoor built environment should be performed with rapid, easy, and sustainable tools to speed up and support risk assessment and mitigation. Custom evacuation simulators have been developed, but they are generally used in research, are not user-friendly, and need high-level training. On the contrary, generic (e.g. commercial) software tools seem more suitable for low-trained technicians but should be modified to include the effects of human behaviors, especially during evacuation, when people's choices depend on interactions with floodwaters and the built environment's layout and composing elements. This work provides preliminary setups of a generic software tool to perform quick and sustainable assessments of pedestrians’ flood safety in outdoor spaces, using an easy-to-apply no-code modification approach to include flood-specific behaviors. Simulation outputs of the setup-based generic software are compared with a custom simulator relying on the same modelling approach, and with real-world observations, using an idealized literature-based outdoor scenario. Results provide the best setup of the generic software for reliably representing evacuation phenomena, thus encouraging its future application by local authorities as well.
Article
Strategies for reducing flood risks and adapting urban systems involve estimating parameters and conducting difficult trade-offs among human, financial, and environmental issues, which usually conflict with each other. Multicriteria models are therefore useful, as they can aid risk-based decision-making by dealing with all these aspects simultaneously, while the decision-maker (DM) exerts a great influence when establishing his/her preferences. However, this problem is usually associated with uncertainties about defining the required variables, and these certainly affect the credibility of the decision. Hence, sensitivity analysis (SA) is a powerful tool for assessing whether changes in these variables still lead to robust results. In this context, this paper compiles a SA protocol that includes using Monte Carlo Simulation in a multicriteria decision model. It aims to prioritize flood risks in urban areas under climate effects. The model quantifies the risk using Multi-Attribute Utility Theory and aggregates five criteria: accessibility to public services, economic, human, sanitary conditions, and the need for social assistance. Through a critical analysis, the SA links risk and uncertainty so as to deal with climate risks adequately. It simulates the behavior of three groups of input data: climatic variability, exposure to risk, and the DM's preference statements. Our findings explore graphical and statistical tools to provide the DM with a broad range of evidence with the potential to increase confidence in his/her own decisions. Innovative insights also emerged from this study, leading us to suggest further improvements to the multicriteria model.
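The kind of sensitivity analysis described, simulating perturbations of the DM's weights and checking whether the risk ranking survives, can be sketched as follows. All utilities, weights, and the perturbation range are invented for illustration; the paper's actual protocol also perturbs climatic and exposure inputs:

```python
import random

def maut_score(utilities, weights):
    # Additive multi-attribute utility: weighted sum of single-attribute utilities.
    return sum(u * w for u, w in zip(utilities, weights))

# Hypothetical utilities of two urban areas on five criteria
# (accessibility, economic, human, sanitary, social), scaled to [0, 1].
area_a = [0.8, 0.6, 0.9, 0.5, 0.7]
area_b = [0.6, 0.7, 0.5, 0.8, 0.6]
base_weights = [0.2, 0.2, 0.3, 0.15, 0.15]

rng = random.Random(42)  # seeded for reproducibility
a_wins, trials = 0, 1000
for _ in range(trials):
    # Perturb the DM's weights by up to +/-20% and renormalize to sum to 1.
    w = [wi * rng.uniform(0.8, 1.2) for wi in base_weights]
    total = sum(w)
    w = [wi / total for wi in w]
    if maut_score(area_a, w) > maut_score(area_b, w):
        a_wins += 1

robustness = a_wins / trials  # share of simulations preserving the ranking
```

A robustness close to 1 indicates the prioritization is insensitive to plausible imprecision in the elicited weights.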
Article
Urban waterlogging is a severe hazard that can directly damage environmental quality and human well-being. It would be desirable for hazard mitigation planning and sustainable urban design if potential waterlogging-prone areas under dynamic land use change could be appropriately predicted. However, previous related studies did not simultaneously consider the reliability of negative samples and the future influence of fine-scale land use change. To fill the knowledge gap, this research has developed a robust method for predicting future waterlogging-prone areas by coupling the maximum entropy (MAXENT) and the Future Land Use Simulation (FLUS) model. The former can ensure that no extra sampling bias will be introduced, while the latter can accurately forecast the spatio-temporal pattern of land use. This case study has confirmed the accuracy and feasibility of this method. It was found that the proportion of impervious surfaces, population density, and proportion of green areas are key spatial drivers behind urban waterlogging issues. In addition, the future hazard potential map provided by the MAXENT and FLUS implies that a large proportion of impervious surfaces will face huge waterlogging risks. Therefore, policymakers should focus more on places with a higher probability of urban waterlogging. In summary, this research is expected to offer a practical tool for future urban design and waterlogging risk prevention.
Article
Full-text available
Although densification of urban areas is being proposed as a sustainable urbanisation strategy, frameworks for detailed large-scale analysis of densification potentials and their evaluation are lacking. A geospatial simulation framework is presented to assess and evaluate densification potentials at the neighbourhood level of already built-up residential areas. The focus is on post-war neighbourhoods, which are particularly promising for sustainable densification. Neighbourhoods are localised using geospatial analysis and based on literature and architectural designs, potentials are estimated for different neighbourhood archetypes and densification strategies. Potentials are simulated at a national scale using supervised archetype classification. The embeddedness into current mobility infrastructure is used as a proxy for evaluating the sustainability of neighbourhood densification. The developed framework is tested for Switzerland. Depending on the densification strategy, the simulated additional inhabitants for populating post-war urban neighbourhoods range between 4–15% of the current population. More than half of this potential is located in central areas and is well connected by public transportation. The presented approach is suitable for assessing spatially explicit densification potential and for prioritising densification locations. We show that in countries with a high number of post-war neighbourhoods in well-connected locations, considerable densification opportunities could be realised in already built-up residential areas.
Article
Full-text available
This systematic review of reviews was conducted to examine housing precarity and homelessness in relation to climate change and weather extremes internationally. In a thematic analysis of 15 reviews (5 systematic and 10 non-systematic), the following themes emerged: risk factors for homelessness/housing precarity, temperature extremes, health concerns, structural factors, natural disasters, and housing. First, an increased risk of homelessness has been found for people who are vulnerably housed and populations in lower socio-economic positions due to energy insecurity and climate change-induced natural hazards. Second, homeless/vulnerably-housed populations are disproportionately exposed to climatic events (temperature extremes and natural disasters). Third, the physical and mental health of homeless/vulnerably-housed populations is projected to be impacted by weather extremes and climate change. Fourth, while green infrastructure may have positive effects for homeless/vulnerably-housed populations, housing remains a major concern in urban environments. Finally, structural changes must be implemented. Recommendations for addressing the impact of climate change on homelessness and housing precarity were generated, including interventions focusing on homelessness/housing precarity and reducing the effects of weather extremes, improved housing and urban planning, and further research on homelessness/housing precarity and climate change. To further enhance the impact of these initiatives, we suggest employing the Human Rights-Based Approach (HRBA).
Article
Full-text available
Aged earthworks constitute a major proportion of European rail infrastructures, the replacement and remediation of which poses a serious problem. Considering the scale of the networks involved, it is infeasible both in terms of track downtime and money to replace all of these assets. It is, therefore, imperative to develop a rational means of managing slope infrastructure to determine the best use of available resources and plan maintenance in order of criticality. To do so, it is necessary to not just consider the structural performance of the asset but also to consider the safety and security of its users, the socioeconomic impact of remediation/failure and the relative importance of the asset to the network. This paper addresses this by looking at maintenance planning on a network level using multi-attribute utility theory (MAUT). MAUT is a methodology that allows one to balance the priorities of different objectives in a harmonious fashion allowing for a holistic means of ranking assets and, subsequently, a rational means of investing in maintenance. In this situation, three different attributes are considered when examining the utility of different maintenance options, namely availability (the user cost), economy (the financial implications) and structural reliability (the structural performance and subsequent safety of the structure). The main impact of this paper is to showcase that network maintenance planning can be carried out proactively in a manner that is balanced against the needs of the organization.
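The additive MAUT form behind this kind of ranking combines normalized single-attribute utilities with trade-off weights. A minimal sketch with hypothetical maintenance options and simple linear utilities; a real application would elicit utility functions and weights from the infrastructure owner:

```python
def utility(x, x_worst, x_best):
    """Linear single-attribute utility normalized to [0, 1]."""
    return (x - x_worst) / (x_best - x_worst)

# Hypothetical options scored on availability (track downtime, days),
# economy (cost, k-euro) and a structural reliability index. Lower downtime
# and cost are better, so they are negated before normalization.
options = {
    "do_nothing": {"downtime": 0,  "cost": 0,   "reliability": 0.60},
    "repair":     {"downtime": 5,  "cost": 120, "reliability": 0.85},
    "replace":    {"downtime": 20, "cost": 400, "reliability": 0.99},
}
weights = {"availability": 0.3, "economy": 0.3, "reliability": 0.4}

scores = {}
for name, o in options.items():
    u_avail = utility(-o["downtime"], -20, 0)
    u_econ = utility(-o["cost"], -400, 0)
    u_rel = utility(o["reliability"], 0.60, 0.99)
    scores[name] = (weights["availability"] * u_avail
                    + weights["economy"] * u_econ
                    + weights["reliability"] * u_rel)

best = max(scores, key=scores.get)
```

With these invented numbers the intermediate "repair" option wins, illustrating how MAUT balances cost and downtime against reliability rather than optimizing any single attribute.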
Article
Full-text available
The 2015 Paris Agreement is falling short of its aspirations, as signatory countries are struggling to implement the policies required to meet the targets. The global scenario framework formed by the Shared Socioeconomic Pathways (SSPs) and the Representative Concentration Pathways (RCPs) places little emphasis on the dynamics of climate policy implementation. Social science approaches to understanding these dynamics are not well-integrated into climate scenario research. We apply an implementation research approach to analyse the transition to clean energy in the US and China, as well as two examples from Europe – Germany and Spain – which have shown markedly diverging implementation trajectories. We propose four implementation scenarios (ISs) for clean energy worldwide which relate to different configurations of actors in the policy system. These are: (1) Civil Society Takes Control (IS1) – where ideologically opposed governments are marginalised by citizens and forward-thinking investors; (2) Strong-arm Transition (IS2) – where a single party state drives the transition without the involvement of civil society; (3) Systemic Limits (IS3) – which highlights the need to focus on the whole energy system, not just renewables; and (4) Renewable Austerity (IS4) – where an economic downturn offers powerful anti-transition actors the opportunity to advocate removal of support for climate mitigation, as they did after the 2007–2008 financial crisis. This scenario could be repeated as countries seek to recover from the Covid-19 pandemic. Our study offers a framework for structured analysis of real-world constraints faced by implementing actors, which we argue is urgently needed to help national and international policy makers achieve climate goals.
Key policy insights:
• The world is struggling to implement the Paris Agreement, partly because the complex dynamics of climate policy implementation are poorly understood.
• Social science approaches to understanding these dynamics are not well-integrated into climate scenario research.
• Implementation research focussing on the actors and context provides a useful framework for analysis of implementation efforts from major global carbon emitters.
• The approach offers new and distinctive scenario narratives that go beyond Representative Concentration Pathways (RCPs) and Shared Socioeconomic Pathways (SSPs).
• These new scenarios can help policy makers evaluate likely outcomes of climate policy implementation based on information about actors and context.
Article
Full-text available
Many day-to-day decisions involve risky outcomes that occur at some delay after a decision has been made; we refer to such scenarios as delayed lotteries. Although human choice often involves delayed lotteries, past research has primarily focused on decisions with outcomes that are only delayed or only risky. Comparatively less research has explored how delay and probability interact to influence decisions. Within research on delayed lotteries, rigorous comparisons of models that describe choice from the discounting framework have not been conducted. We performed two experiments to determine how gain or loss outcomes are devalued when delayed and risky. Experiment 1 used delay and probability ranges similar to past research on delayed lotteries. Experiment 2 used individually calibrated delay and probability ranges. Ten discounting models were fit to the data using a genetic algorithm. Candidate models were derived from past research on discounting delayed or probabilistic outcomes. We found that participants' behavior was best described primarily by a three-parameter multiplicative model. Measures based on information criteria pointed to a solution in which only delay and probability were psychophysically scaled. Absolute measures based on residuals pointed to a solution in which amount, delay, and probability are simultaneously scaled. Our research suggests that separate scaling parameters for different discounting factors may not be necessary with delayed lotteries.
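One common form of multiplicative model for delayed lotteries discounts delay and odds-against independently and multiplies the two factors. A sketch of that general form, with illustrative, unfitted parameters; the paper's best-fitting three-parameter model is not necessarily this exact specification:

```python
def discounted_value(amount, delay, prob, k_delay=0.1, k_odds=0.5):
    """Multiplicative hyperbolic discounting of a delayed, risky reward:
    value = amount / [(1 + k_delay * delay) * (1 + k_odds * odds_against)],
    where odds_against = (1 - p) / p. Parameters are illustrative."""
    odds_against = (1 - prob) / prob
    return amount / ((1 + k_delay * delay) * (1 + k_odds * odds_against))

# An immediate, certain 100 keeps its face value...
v_sure_now = discounted_value(100, delay=0, prob=1.0)
# ...while the same amount delayed 12 periods at p = 0.5 is devalued.
v_risky_late = discounted_value(100, delay=12, prob=0.5)
```

Fitting such a model amounts to estimating the discounting parameters (and any psychophysical scaling exponents) from choice data, as the abstract's genetic-algorithm procedure does.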
Article
Full-text available
This paper examines the interaction between probabilistic and delayed rewards. In decision-making processes, the Expected Utility (EU) model has been employed to assess risky choices, whereas the Discounted Utility (DU) model has been applied to intertemporal choices. Although the two models are different, they are based on the same theoretical principle: rewards are assessed by taking into account the sum of their utilities, and similar anomalies have been revealed in both models. The aim of this paper is to characterize particular cases of the Probability and Time Trade-Off (PTT) model and to show that they correspond to the EU and DU models. Additionally, we build a PTT model starting from a discounted and an expected utility model that is able to overcome the limitations pointed out by Baucells and Heukamp.
Article
Full-text available
We conducted an artefactual field experiment in Vietnam to investigate whether and how experiencing a natural disaster affects individual attitudes toward risk. Using experimental and real household data, we show that households in villages affected by a flood in recent years exhibit more risk aversion than individuals living in similar but unaffected villages. Interestingly, this result holds for the loss domain but not the gain domain. In line with Prospect Theory, Vietnamese households distort probabilities. The distortion is related to aid received and participation in social networks, but is unrelated to flood experience.
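The probability distortion invoked here is usually modeled with an inverse-S weighting function. A sketch using the Tversky–Kahneman (1992) form; the parameter value is the commonly cited estimate for gains, not one fitted to the Vietnamese data:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function. With gamma < 1,
    small probabilities are overweighted and large ones underweighted,
    producing the characteristic inverse-S shape."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

w_small = tk_weight(0.01)  # overweighted: w(p) > p
w_mid = tk_weight(0.50)    # underweighted: w(p) < p
w_large = tk_weight(0.90)  # underweighted: w(p) < p
```

Estimating gamma separately for affected and unaffected households is one way to quantify the distortion the abstract reports.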
Article
Full-text available
The aim of this paper is to derive an index able to indicate whether a discount function exhibits increasing or decreasing impatience and, in the latter case, whether the decreasing impatience is moderate or strong. Moreover, it is shown that the sign of this indicator coincides with the sign of the convexity index of the discount function when considering only the cases of increasing and decreasing impatience. Consequently, this parameter improves on Prelec's index of convexity. The main advantage of this novel measure is that, like Prelec's index, it uses differential calculus and, moreover, can be easily plotted, showing the changes from one type of impatience to another over time.
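An index of this family behaves like the Prelec-style ratio -(ln f)''(t) / (ln f)'(t): positive under decreasing impatience and zero for exponential (constant-impatience) discounting. A numerical finite-difference sketch, with arbitrary discount rates, rather than the paper's exact index:

```python
import math

def impatience_index(f, t, h=1e-4):
    """Numerical -(ln f)''(t) / (ln f)'(t) via central differences:
    positive for decreasing impatience, ~0 for exponential discounting."""
    lf = lambda s: math.log(f(s))
    d1 = (lf(t + h) - lf(t - h)) / (2 * h)
    d2 = (lf(t + h) - 2 * lf(t) + lf(t - h)) / (h * h)
    return -d2 / d1

hyperbolic = lambda t: 1.0 / (1.0 + 0.2 * t)  # k = 0.2, arbitrary
exponential = lambda t: math.exp(-0.2 * t)

i_hyp = impatience_index(hyperbolic, 5.0)   # analytically k/(1+kt) = 0.1
i_exp = impatience_index(exponential, 5.0)  # analytically 0
```

Plotting such an index against t is exactly the kind of visual diagnostic of changing impatience the abstract advocates.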
Article
Full-text available
Climate change is having an increasing effect on human society and ecosystems. The United Nations has established 17 sustainable development goals, one of which is to cope with climate change. How to scientifically explore the uncertainties and hazards brought about by future climate change is therefore crucial. The Intergovernmental Panel on Climate Change (IPCC) has proposed new shared socioeconomic pathways (SSPs) to project climate change scenarios. SSPs have been analyzed globally, but how regions and nations respond to global climate change and mitigation policies is seldom explored, which does not meet the demand for regional environmental assessment and sustainable social development. Therefore, in this paper, we reviewed and discussed how SSPs have been applied to regions, which can be summarized into four main categories: (1) integrated assessment model (IAM) scenario analysis, (2) SSPs-RCPs-SPAs framework scenario analysis, (3) downscaling of global impact assessment models, and (4) regional impact assessment model simulation. The study provides alternative ways to project land use, water resources, energy, and ecosystem services in regions, which can inform related policies and actions to address climate change in advance and help achieve sustainable development.
Article
Full-text available
Downscaled climate projections need to be linked to downscaled projections of population and economic growth to fully develop implications for land, natural resources, and ecosystems for future scenarios. We develop an empirical spatiotemporal approach for jointly projecting population and income at the county scale in the United States that is consistent with neoclassical economic growth theory and overlapping labor markets and that accounts for labor migration and spatial spillovers. Downscaled projections generated for the five Shared Socioeconomic Pathways used to support global scenario analysis generally show growth focused around relatively few centers especially in the southeast and western regions, with some areas in the Midwest and northeast experiencing population declines. Results are consistent with economic growth theory and with historical trends in population change and convergence of per capita personal income across US counties.
Article
Full-text available
There is an increasing awareness and recognition of the importance of reflecting knowledge and lack of knowledge in relation to the understanding, assessment and management of risk. Substantial research work has been initiated to better link risk and knowledge. The present paper aims to contribute to this work by distinguishing between different types of knowledge: general knowledge and specific knowledge. For example, in relation to an offshore installation, the former captures knowledge about what could happen and why on offshore installations in general, whereas the latter covers more detailed knowledge related to the specific installation of interest and its operation. Risk management is viewed as the process of making sure that the general knowledge is sufficiently and efficiently used, including the identification of the specific knowledge needed, and ensuring that we have sufficient specific knowledge and control when assessing risk and making decisions. In the paper, we present a risk management framework built on these ideas and knowledge distinction. This framework clarifies interactions between the two knowledge bases, and how these bases can be used to improve the foundation and practice of risk assessment and management.
Article
Full-text available
Hydro-climatic extremes are influenced by climate change and by climate variability associated with large-scale oscillations. Non-stationary frequency models integrate trends and climate variability by introducing covariates in the distribution parameters. These models often assume that the distribution function and the shape of the distribution do not change. However, these assumptions are rarely verified in practice. We propose here an approach based on L-moment ratio diagrams to analyze changes in the distribution function and shape parameter of hydro-climatic extremes. We found that important changes occur in the distribution of annual maximum streamflow and extreme temperatures. Possible relations between the shapes of the distributions of extremes and climate indices are also identified. We provide an example of a non-stationary frequency model applied to flood flows. Results show that a model with a shape parameter dependent on climate indices, in combination with a scale parameter dependent on time, significantly improves the goodness-of-fit.
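An L-moment ratio diagram plots sample L-skewness (t3) against L-kurtosis (t4), both computable from probability-weighted moments. A minimal sketch using the standard unbiased PWM estimators, with an invented annual-maximum sample:

```python
from math import comb

def l_moment_ratios(sample):
    """Sample L-skewness (t3) and L-kurtosis (t4) from probability-weighted
    moments b_r; the (t3, t4) pair locates a sample on an L-moment
    ratio diagram."""
    x = sorted(sample)
    n = len(x)
    # b_r = (1/n) * sum_i [C(i, r) / C(n-1, r)] * x_(i), x sorted ascending.
    b = [sum(comb(i, r) * x[i] for i in range(n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l3 / l2, l4 / l2

# Illustrative annual-maximum flows (m^3/s): a right-skewed sample.
flows = [120, 135, 150, 160, 180, 210, 260, 400]
t3, t4 = l_moment_ratios(flows)
```

Tracking how (t3, t4) drifts across sliding windows of the record is one way to detect the changes in distribution shape the abstract describes.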
Article
Full-text available
General Circulation Models (GCMs) are advanced tools for impact assessment and climate change studies. Previous studies show that the performance of GCMs in simulating climate variables varies significantly over different regions. This study evaluates the performance of the Coupled Model Intercomparison Project phase 5 (CMIP5) GCMs in simulating temperature and precipitation over Iran. Simulations from 37 GCMs and observations from the Climatic Research Unit (CRU) were obtained for the period 1901–2005. Six performance measures (mean bias, root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), linear correlation coefficient (r), the Kolmogorov-Smirnov statistic (KS), and Sen's slope estimator), together with the Taylor diagram, are used for the evaluation. GCMs are ranked based on each statistic at seasonal and annual time scales. Results show that most GCMs perform reasonably well in simulating annual and seasonal temperature over Iran, while the majority have poor skill in simulating precipitation, particularly at the seasonal scale. Based on the results, the best GCMs for representing temperature and precipitation over Iran are the CMCC-CMS (Euro-Mediterranean Center on Climate Change) and the MRI-CGCM3 (Meteorological Research Institute), respectively. The results are valuable for climate and hydrometeorological studies and can help water resources planners and managers choose the proper GCM based on their criteria.
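Two of the listed measures, RMSE and the Nash-Sutcliffe efficiency, are straightforward to compute from paired observed and simulated series. A sketch with invented monthly temperatures (not CRU or CMIP5 data):

```python
def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than the observed mean, negative means worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

# Invented monthly temperatures (deg C): observations vs. one model run.
obs = [10.0, 14.0, 19.0, 24.0, 27.0, 25.0]
sim = [11.0, 13.5, 20.0, 23.0, 28.0, 24.0]
score_rmse = rmse(obs, sim)
score_nse = nse(obs, sim)
```

Ranking candidate GCMs then amounts to sorting them by each such statistic per season, as the study does.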
Article
Full-text available
Assessing the condition of sewer pipelines is a backbone process in planning rehabilitation and maintenance work. The closed-circuit television (CCTV) method is the widely adopted way to record the inner condition of pipelines, which is then interpreted by a practitioner. This paper presents a condition assessment framework for sewer pipelines using multiattribute utility theory (MAUT). The condition assessment model uses MAUT to generate utility functions for four sewer pipeline defects: deformation, settled deposits, infiltration, and surface damage. A surface damage evaluation methodology is proposed to assess the surface damage defect for three different materials: reinforced concrete, vitrified clay, and ductile iron. An aggregated condition index is computed based on the relative importance weights of the studied defects and tested with several rounding types. The rounding-up type produced the optimum results, and the values were compared with the methodology suggested by the Concordia Sewer Protocol (CSP), yielding an average difference between the two approaches of 3.33% and a mean absolute error (MAE) of 0.33. A sensitivity analysis was then carried out to check the impact of changes in the relative importance weights on the overall index. The proposed methodology aims to inform asset managers about the severity of sewer defects existing in sewer pipelines. In addition, it reinforces their rehabilitation and maintenance plans by indicating the existing condition of the sewer pipelines.
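The aggregation step, per-defect utilities combined with importance weights and then rounded up, can be sketched in a few lines. The defect utilities, weights, and 1-5 scale below are invented, not the paper's calibrated values:

```python
import math

def condition_index(defect_utilities, weights):
    """Aggregate per-defect utilities on an illustrative 1-5 scale into one
    index via a weighted sum, rounding up as in the abstract."""
    raw = sum(u * w for u, w in zip(defect_utilities, weights))
    return math.ceil(raw), raw

# Hypothetical utilities for deformation, settled deposits, infiltration,
# and surface damage; weights are illustrative, summing to 1.
utilities = [3.0, 4.0, 2.0, 3.0]
weights = [0.30, 0.20, 0.25, 0.25]

index, raw = condition_index(utilities, weights)
```

Rounding up is the conservative choice here: a pipeline segment is never assigned a better condition grade than its weighted score supports.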
Article
Full-text available
Transport networks underpin economic activity by enabling the movement of goods and people. During extreme weather events, transport infrastructure can be directly or indirectly damaged, posing a threat to human safety and causing significant disruption with associated economic and social impacts. Flooding, especially as a result of intense precipitation, is the predominant cause of weather-related disruption to the transport sector. Existing approaches to assessing the disruptive impact of flooding on road transport fail to capture the interactions between floodwater and the transport system, typically assuming a road is either fully operational or fully blocked, which is not supported by observations. In this paper we develop a relationship between the depth of standing water and vehicle speed. The function that describes this relationship was constructed by fitting a curve to video analysis, supplemented by a range of quantitative data extracted from existing studies and other safety literature. The proposed relationship is a good fit to the observed data, with an R-squared of 0.95. The significance of this work is that our function is simple to incorporate into existing transport models to produce better estimates of flood-induced delays, and we demonstrate this with an example from the 28 June 2012 flood in Newcastle upon Tyne, UK.
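A depth-disruption function of this kind is typically a quadratic decay of safe vehicle speed with standing-water depth, clipped at an impassable threshold. The sketch below uses coefficients of the order commonly quoted for this curve, but they should be treated as illustrative stand-ins rather than the published fit:

```python
def safe_speed_kph(depth_mm):
    """Illustrative depth-disruption curve: vehicle speed (km/h) as a
    quadratic function of standing-water depth (mm), clipped at zero and
    treated as impassable beyond roughly 30 cm. Coefficients are
    approximate stand-ins, not the paper's fitted values."""
    if depth_mm >= 300:
        return 0.0
    v = 0.0009 * depth_mm ** 2 - 0.5529 * depth_mm + 86.9448
    return max(v, 0.0)

# Free-flow speed on a dry road vs. progressively flooded conditions.
v_dry = safe_speed_kph(0)
v_50mm = safe_speed_kph(50)
v_150mm = safe_speed_kph(150)
```

Dropping such a function into a network model simply means rescaling each flooded link's speed (and hence travel time) by the local water depth, instead of toggling links between open and closed.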
Article
Full-text available
This paper considers preferences over risky timed outcomes and proposes the weighted temporal utility (WTU) model which separates anticipated subjective evaluations of outcomes from attitudes toward psychological distance induced by risks and delays. Anticipating the subjective evaluation of an outcome requires the decision maker to project himself to the future and to imagine how much he will appreciate the outcome once he receives it. This projection may, but need not, be accurate. We provide a characterization of the WTU model in a static setting and propose a nonparametric method to measure its weighting and utility functions. We also consider a dynamic setting which allows for a varying decision time. The dynamic WTU model can accommodate the standard discounted expected utility model as well as observed deviations from stationarity, time invariance, and time consistency. It therefore enhances our understanding of the drivers of these behavioral phenomena.
Article
Full-text available
Most cost-benefit analysis (CBA) textbooks and guidelines recognize the objective of CBAs to improve social welfare—a function of well-being of all individuals, conceptualized by utility. However, today's common practice to value flood risk management benefits as the reduction of the expected annual damages does not comply with this concept of social welfare, since it erroneously focuses on money instead of well-being (utility). Diminishing marginal utility of money implies that risk aversion and income differences should be taken into account while calculating the social welfare benefits of flood risk management. This is especially important when social vulnerability is high, damage compensation is incomplete and the distribution of income is regarded as unfair and income is not redistributed in other ways. Disagreement, misconception, complexity, untrained professionals, political economy and failing guidance are potential reasons why these concepts are not being applied. Compared to the common practice, a theoretically more sound social welfare approach to CBA for flood risk management leads to different conclusions on who to target, what to do, how much to invest and how to share risks, with increased emphasis on resiliency measures for population segments with low income and high social vulnerability. The social welfare approach to CBA, illustrated in this study in the context of floods, can be applied to other climate risks as well, such as storms, droughts, and landslides. WIREs Clim Change 2017, 8:e446. doi: 10.1002/wcc.446
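The welfare point can be made concrete with a constant-relative-risk-aversion (CRRA) utility: valuing flood risk as the household's certainty-equivalent loss rather than the expected annual damage yields a risk premium that grows when damages are large relative to income. All parameter values here are illustrative assumptions, not figures from the paper.

```python
import math

def crra_utility(c, eta=1.5):
    """CRRA utility of consumption c > 0; eta is the coefficient of
    relative risk aversion (assumed value)."""
    return math.log(c) if eta == 1.0 else (c ** (1.0 - eta) - 1.0) / (1.0 - eta)

def crra_inverse(u, eta=1.5):
    """Consumption level whose utility equals u."""
    return math.exp(u) if eta == 1.0 else ((1.0 - eta) * u + 1.0) ** (1.0 / (1.0 - eta))

def risk_premium(income, damage, prob, eta=1.5):
    """Certainty-equivalent loss minus expected damage: the extra
    welfare cost of bearing the flood risk, which the expected-annual-
    damage convention ignores."""
    eu = (prob * crra_utility(income - damage, eta)
          + (1.0 - prob) * crra_utility(income, eta))
    certain_loss = income - crra_inverse(eu, eta)
    return certain_loss - prob * damage
```

For a household with income 50,000 facing a 1% chance of a 40,000 loss, the premium is large and shrinks as eta falls, which is why ignoring risk aversion understates benefits most for low-income, highly vulnerable groups.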
Article
Full-text available
Substantial evidence shows that the frequency of hydrological extremes has been changing and is likely to continue to change in the near future. Non-stationary models for flood frequency analyses are one method of accounting for these changes in estimating design values. The objective of the present study is to compare four models in terms of goodness of fit, their uncertainties, the parameter estimation methods and the implications for estimating flood quantiles. Stationary and non-stationary models using the GEV distribution were considered, with parameters dependent on time and on annual precipitation. Furthermore, in order to study the influence of the parameter estimation approach on the results, the maximum likelihood (MLE) and Bayesian Monte Carlo Markov chain (MCMC) methods were compared. The methods were tested for two gauging stations in Slovenia that exhibit significantly increasing trends in annual maximum (AM) discharge series. The comparison of the models suggests that the stationary model tends to underestimate flood quantiles relative to the non-stationary models in recent years. The model with annual precipitation as a covariate exhibits the best goodness-of-fit performance. For a 10% increase in annual precipitation, the 10-year flood increases by 8%. Use of the model for design purposes requires scenarios of future annual precipitation. It is argued that these may be obtained more reliably than scenarios of extreme event precipitation which makes the proposed model more practically useful than alternative models.
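A minimal sketch of such a covariate model: the GEV location parameter is made linear in annual precipitation, so each year has its own return level. The parameter values below are placeholders for illustration, not the fitted Slovenian values; only the GEV quantile formula itself is standard.

```python
import math

def gev_quantile(p, loc, scale, shape):
    """Return level x with non-exceedance probability p under a GEV
    distribution; shape == 0 is the Gumbel limit."""
    if shape == 0.0:
        return loc - scale * math.log(-math.log(p))
    return loc + scale * ((-math.log(p)) ** (-shape) - 1.0) / shape

def nonstationary_return_level(T, annual_precip, mu0, mu1, scale, shape):
    """T-year flood in a year with the given annual precipitation:
    the location parameter is mu0 + mu1 * annual_precip."""
    return gev_quantile(1.0 - 1.0 / T, mu0 + mu1 * annual_precip, scale, shape)
```

Under a fitted model of this form, the abstract's finding that a 10% rise in annual precipitation raises the 10-year flood by 8% corresponds to a particular estimated mu1.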
Conference Paper
Full-text available
The response of the global climate system to atmospheric CO2 concentration increase over time is scrutinized employing the Brazilian Earth System Model Ocean–Atmosphere version 2.3 (BESM-OA2.3). Through the achievement of over 2000 yr of coupled model integrations in ensemble mode, it is shown that the model simulates the signal of recent changes of global climate trends, depicting a steady atmospheric and oceanic temperature increase and corresponding marine ice retreat. The model simulations encompass the time period from 1960 to 2105, following phase 5 of the Coupled Model Intercomparison Project (CMIP5) protocol. Notwithstanding the accurate reproduction of large-scale ocean–atmosphere coupled phenomena, like the ENSO phenomena over the equatorial Pacific and the interhemispheric gradient mode over the tropical Atlantic, the BESM-OA2.3 coupled model shows systematic errors on sea surface temperature and precipitation that resemble those of other global coupled climate models. Yet, the simulations demonstrate the model's potential to contribute to the international efforts on global climate change research, sparking interest in global climate change research within the Brazilian climate modeling community, constituting a building block of the Brazilian Framework for Global Climate Change Research.
Article
Full-text available
The increasing effort to develop and apply nonstationary models in hydrologic frequency analyses under changing environmental conditions can be frustrated when the additional uncertainty related to the model complexity is accounted for along with the sampling uncertainty. In order to show the practical implications and possible problems of using nonstationary models and provide critical guidelines, in this study we review the main tools developed in this field (such as nonstationary distribution functions, return periods, and risk of failure) highlighting advantages and disadvantages. The discussion is supported by three case studies that revise three illustrative examples reported in the scientific and technical literature referring to the Little Sugar Creek (at Charlotte, North Carolina), Red River of the North (North Dakota/Minnesota), and the Assunpink Creek (at Trenton, New Jersey). The uncertainty of the results is assessed by complementing point estimates with confidence intervals (CIs) and emphasizing critical aspects such as the subjectivity affecting the choice of the models’ structure. 
Our results show that (1) nonstationary frequency analyses should not only be based on at-site time series but require additional information and detailed exploratory data analyses (EDA); (2) as nonstationary models imply that the time-varying model structure holds true for the entire future design life period, an appropriate modeling strategy requires that EDA identifies a well-defined deterministic mechanism leading the examined process; (3) when the model structure cannot be inferred in a deductive manner and nonstationary models are fitted by inductive inference, model structure introduces an additional source of uncertainty so that the resulting nonstationary models can provide no practical enhancement of the credibility and accuracy of the predicted extreme quantiles, whereas possible model misspecification can easily lead to physically inconsistent results; (4) when the model structure is uncertain, stationary models and a suitable assessment of the uncertainty accounting for possible temporal persistence should be retained as more theoretically coherent and reliable options for practical applications in real-world design and management problems; (5) a clear understanding of the actual probabilistic meaning of stationary and nonstationary return periods and risk of failure is required for a correct risk assessment and communication.
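One of the tools this study reviews, the risk of failure over a design life, has a compact form: with annual exceedance probabilities p_t (constant under stationarity, time-varying otherwise), the probability of at least one exceedance in n years is 1 minus the product of the yearly non-exceedance probabilities. A short sketch with illustrative numbers:

```python
def risk_of_failure(annual_exceedance_probs):
    """Probability of at least one exceedance over the design life,
    given per-year exceedance probabilities (which a nonstationary
    model lets vary from year to year)."""
    survival = 1.0
    for p in annual_exceedance_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Stationary 100-year event over a 50-year design life, versus the same
# event whose annual probability drifts upward over time (assumed drift).
stationary = risk_of_failure([0.01] * 50)
drifting = risk_of_failure([0.01 + 0.0004 * t for t in range(50)])
```

As the review stresses, the nonstationary version is only as credible as the deterministic mechanism assumed to drive p_t over the entire design life.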
Article
Full-text available
Risk and time are intertwined. The present is known while the future is inherently risky. This is problematic when studying time preferences since uncontrolled risk can generate apparently present-biased behavior. We systematically manipulate risk in an intertemporal choice experiment. Discounted expected utility performs well with risk, but when certainty is added common ratio predictions fail sharply. The data cannot be explained by prospect theory, hyperbolic discounting, or preferences for resolution of uncertainty, but seem consistent with a direct preference for certainty. The data suggest strongly a difference between risk and time preferences.
Article
Full-text available
We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.
Article
Full-text available
Consumers often make decisions about outcomes and events that occur over time. This research examines consumers’ sensitivity to the prospective duration relevant to their decisions and the implications of such sensitivity for intertemporal trade-offs, especially the degree of present bias (i.e., hyperbolic discounting). The authors show that participants’ subjective perceptions of prospective duration are not sufficiently sensitive to changes in objective duration and are nonlinear and concave in objective time, consistent with psychophysical principles. More important, this lack of sensitivity can explain hyperbolic discounting. The results replicate standard hyperbolic discounting effects with respect to objective time but show a relatively constant rate of discounting with respect to subjective time perceptions. The results are replicated between subjects (Experiment 1) and within subjects (Experiment 2), with multiple time horizons and multiple descriptors, and with different measurement orders. Furthermore, the authors show that when duration is primed, subjective time perception is altered (Experiment 4) and hyperbolic discounting is reduced (Experiment 3).
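The mechanism can be illustrated in a few lines: discounting at a constant rate over a concave subjective clock produces the familiar hyperbolic pattern of steep near-term and shallow long-term discounting when measured against objective time. The power form and the parameter values are assumptions for the sketch.

```python
import math

def subjective_time(t, alpha=0.5):
    """Concave psychophysical mapping of objective delay t
    (power form assumed, alpha < 1)."""
    return t ** alpha

def subjective_discount(t, r=0.25, alpha=0.5):
    """Constant-rate discounting applied to subjective, not
    objective, duration."""
    return math.exp(-r * subjective_time(t, alpha))

def implied_rate(t1, t2, r=0.25, alpha=0.5):
    """Average per-period discount rate implied between delays t1 < t2,
    measured on the objective time axis."""
    return -math.log(subjective_discount(t2, r, alpha)
                     / subjective_discount(t1, r, alpha)) / (t2 - t1)
```

The implied objective-time rate declines with delay even though the rate per unit of subjective time is constant, which is the paper's central point.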
Article
Full-text available
Intertemporal decision making under risk involves two dimensions: time preferences and risk preferences. This paper focuses on the impact of time on risk preferences, independent of the intertemporal trade-off of outcomes, i.e., time preferences. It reports the results of an experimental study that examines how delayed resolution and payment of risky options influence individual choice. We used a simple experimental design based on the comparison of two-outcome monetary lotteries with the same delay. Raw data clearly reveal that subjects become more risk tolerant for delayed lotteries. Assuming a prospect theory–like model under risk, we analyze the impact of time on utility and decision weights, independent of time preferences. We show that the subjective treatment of outcomes (i.e., utility) is not significantly affected by time. In fact, the impact of time is completely absorbed by the probability weighting function. The effect of time on risk preferences was found to generate probabilistic optimism resulting in a higher risk tolerance for delayed lotteries.
Article
Full-text available
A new version of the atmosphere–ocean general circulation model cooperatively produced by the Japanese research community, known as the Model for Interdisciplinary Research on Climate (MIROC), has recently been developed. A century-long control experiment was performed using the new version (MIROC5) with the standard resolution of the T85 atmosphere and 1° ocean models. The climatological mean state and variability are then compared with observations and those in a previous version (MIROC3.2) with two different resolutions (medres, hires), coarser and finer than the resolution of MIROC5. A few aspects of the mean fields in MIROC5 are similar to or slightly worse than MIROC3.2, but otherwise the climatological features are considerably better. In particular, improvements are found in precipitation, zonal mean atmospheric fields, equatorial ocean subsurface fields, and the simulation of El Niño–Southern Oscillation. The difference between MIROC5 and the previous model is larger than that between the two MIROC3.2 versions, indicating a greater effect of updating parameterization schemes on the model climate than increasing the model resolution. The mean cloud property obtained from the sophisticated prognostic schemes in MIROC5 shows good agreement with satellite measurements. MIROC5 reveals an equilibrium climate sensitivity of 2.6 K, which is lower than that in MIROC3.2 by 1 K. This is probably due to the negative feedback of low clouds to the increasing concentration of CO2, which is opposite to that in MIROC3.2.
Article
Enhancing safety and maintaining profitable operations in various types of organizations, including in gas transmission and distribution companies, is a challenging task. Multidimensional risk analysis of Natural Gas Pipelines (NGP) has been carried out in decision-making in order to guide the decision-maker (DM) in managing resource allocation and prioritizing risks in pipeline sections. Although the literature puts a spotlight on Expected Utility (EU) methods for assessing the DM's preferences, the NGP problem involves hazard scenarios whose probability of occurrence is small but whose impacts are high when a failure occurs. That is why this paper proposes a new multidimensional model for assessing NGP risks: the MRDU model. To the best of our knowledge, there is an absence in the literature of studies on using non-Expected Utility (non-EU) methods. Non-EU is a newer approach based on Utility Theory. Deviations of utilities are explored and this incorporates contributions from the Rank-Dependent Utility (RDU)-based risk approach. Relevant results are compared and an extensive sensitivity analysis is conducted. Results show that the risk approach based on non-EU gives greater support to the recommendations made to the DM with regard to prioritizing NGP sections.
Article
The inevitability of more frequent and more extensive floods, phenomena that display one aspect of the inherent variability of Nature, has been increasingly accepted by managers and policymakers. Climate changes due to global warming add considerable complexity to dealing with flooding, since increases in average temperature alter the patterns and intensity of precipitation. Thus, adverse events arising from natural hazards have tended to become even more frequent and their impacts more severe. Consequently, urban centers face a huge challenge and statistics already show the urgent need for all levels of government to review the adequacy of their plans for anticipating and combating extreme events. Therefore, this paper undertakes a systematic review of the literature, using well-founded and established search strategies, in order to determine the state-of-the-art as presented in 52 peer-reviewed articles strategically selected from two main sources of scientific data. Cross-matching important research metrics from these papers has enabled two axes of discussion to be detailed in this article: a bibliometric analysis and a content analysis. Among many findings, those that stand out are the lack of a formal methodology for climate modeling and the use and sharing of GIS tools, these being key factors for developing new ways of visualizing risk. Finally, some guidelines for decision-making are put forward in order to share insights and to provide DMs, scholars, experts, stakeholders, and other professionals with an overall background of managerial aspects when engaging in applying flood risk management in urban areas.
Article
The severity and frequency of short-duration, but damaging, urban area floods have increased in recent years across the world. Alteration to the urban micro-climate due to global climate change impacts may also exacerbate the situation in future. Sustainable urban stormwater management using low impact development (LID) techniques, along with conventional urban stormwater management systems, can be implemented to mitigate climate-change-induced flood impacts. In this study, the effectiveness of LIDs in the mitigation of urban flood are analyzed to identify their limitations. Further research on the success of these techniques in urban flood mitigation planning is also recommended. The results revealed that LIDs can be an efficient method for mitigating urban flood impacts. Most of the LID methods developed so far, however, are found to be effective only for small flood peaks. They also often fail due to non-optimization of the site-specific and time-varying climatic conditions. Major challenges include identification of the best LID practices for the region of interest, efficiency improvements in technical areas, and site-specific optimization of LID parameters. Improvements in these areas will allow better mitigation of climate-change-induced urban floods in a cost-effective manner and will also assist in the achievement of sustainable development goals for cities.
Article
This paper highlights the effects of sealed surfaces on flood hazard in order to identify population exposure to heavy rain within the Cabras watershed, located in the Campinas Metropolitan Area (Sao Paulo). We applied a set of methods compiled into an integrated system by the Department of Hydraulic Engineering of the University of Sao Paulo (USP). We observed that, for relatively frequent flood events with return periods of 10 years, changes in LULC represent an increase in maximum water depth of 0.036 m to 0.043 m. For less frequent events, with return periods of 100 years, the maximum increase in water depth varies from 0.064 m to 0.071 m. Although the water depth reached is not considered risky (e.g., for an adult walking in flows), the water velocity (v = 2.32 m/s) and the amount of water downhill (~5,000 litres) are considered high (i.e., pedestrians can be swept along streets during major storm events). Our scenarios project more than 5,000 people exposed, with a significant parcel in slums. Summarizing, an explicit innovation strategy helped us to identify the number of people at risk in ungauged basins, which can be beneficial for municipal decisions, especially if effectively implemented to reduce disaster risk, including through nature-based solutions.
Article
Flooding can be an unexpected event which causes high impacts on everyone in society. These can include loss of life, injuries, and homelessness, as well as damage to property and disruption to commerce and public services. Hence, seeking to reduce disasters caused by flooding is a significant challenge for policymakers, technical professionals, organizations, and civil society. However, ensuring that they interact with each other demands multiple objectives be set and achieved - such as those concerning economic, social, human, public health and accessibility factors, even though these factors may conflict with each other. Thus, this paper proposes a multidimensional decision model to prioritize risks from flooding in urban areas, and to assess the overall risk by using probabilistic analysis. The modeling is based on Multi-Attribute Utility Theory (MAUT). In order to validate the model, a case study that was carried out in a municipality in the Northeast of Brazil is simulated with realistic data. Furthermore, the results are discussed by taking advantage of visualizations of risk mapping developed from a Geographic Information System (GIS) that acts as a supplementary tool to guide local policies on preventing and mitigating floods. In addition, it can be used to improve practices, such as resource allocation planning, to achieve this goal. Moreover, the model is flexible and can be replicated in any region of the world. It presents an alternative proposal with potential for management improvements, since the decision-maker (DM) can access a broader range of information.
Article
With global climate change and the rapid urbanization process, there is an increase in the risk of urban floods. Therefore, undertaking risk studies of urban floods, especially the depth prediction of urban floods, is very important for urban flood control. In this study, an urban flood data warehouse was established with available structured and unstructured urban flood data. Based on this, a regression model to predict the depth of urban flooded areas was constructed with an ensemble learning algorithm, the Gradient Boosting Decision Tree (GBDT). The flood condition factors used in modeling were rainfall, rainfall duration, peak rainfall, evaporation, land use (the proportion of roads, woodlands, grasslands, water bodies and buildings), permeability, catchment area, and slope. Based on the rainfall data of different rainfall return periods, flood condition maps were produced using GIS. In addition, the feature importance of these conditioning factors was determined based on the regression model. The results demonstrated that the growth rate of the number and depth of the water accumulation points increased significantly beyond the 'once in every two years' rainfall return period in Zhengzhou City, and the flooded areas mainly occurred in the old urban areas and parts of southern Zhengzhou. The relative error of the prediction results was 11.52%, which verifies the applicability and validity of the method for the depth prediction of urban floods. The results of this study can provide a scientific basis for urban flood control and drainage.
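In practice a library implementation (e.g. scikit-learn's GradientBoostingRegressor) would be used for a model like this; the core idea, that each new tree fits the residuals of the ensemble so far, fits in a short pure-Python sketch with depth-one trees. The rainfall/depth data below are hypothetical toy values, not the Zhengzhou data.

```python
def fit_stump(X, y):
    """Depth-one regression tree: best single-feature threshold split
    minimizing squared error; returns a predict function."""
    best = None
    n = len(y)
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    _, j, t, ml, mr = best
    return lambda row: ml if row[j] <= t else mr

def gbdt_fit(X, y, n_rounds=50, lr=0.5):
    """Gradient boosting for squared loss: each stump is fitted to the
    current residuals and added with learning-rate shrinkage."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(X, resid)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, X)]
    return lambda row: sum(lr * s(row) for s in stumps)

# Toy data: [rainfall_mm] -> flood depth (m); values are hypothetical.
X = [[10.0], [20.0], [30.0], [40.0]]
y = [0.1, 0.2, 0.3, 0.4]
model = gbdt_fit(X, y)
```

With multiple features, counting how often each feature is chosen for splits (weighted by error reduction) gives the feature-importance ranking the abstract refers to.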
Article
The multi-scenario simulation of urban land can effectively reveal the characteristics and trends of changes in urban space and the contradictions of land use in urban sustainable development. By designing a model based on the random forest algorithm and CA-Markov model, we simulated the evolution of urban space in Shanghai from 2015 to 2030 under two distinct scenarios — unconstrained development and development with planning intervention. Results of model validation indicate that the model accurately simulates urban land in 2015. In Shanghai, important factors affecting urban development are population, GDP and distance to subways. Under the unconstrained scenario, urban areas in Shanghai are predicted to increase by 157.79 km² between 2015 and 2030, and the spatial expansion of urban areas follows a concentric pattern. Meanwhile, under the scenario with planning intervention, urban expansion is at a lower speed, and more compact because of constraints of ecological, cultivated and cultural protection, and urban areas are predicted to increase by 95.46 km² in 2030 compared with 2015. We observe a similar concentric pattern, although significantly smaller in magnitude, in spatial expansion under this scenario. The results show that urban development will be more sustainable under the constraints of ecological and cultivated protection.
Article
This paper provides new knowledge on how to understand and describe climate change risk. This type of risk is of the utmost importance for us all, but current approaches for conceptualizing and characterizing it, as for example used by the Intergovernmental Panel on Climate Change (IPCC), suffer from severe weaknesses, resulting in poor communication and misguidance. Two main problems are that the risk concept is too strongly associated with statistically expected values, and that the risk characterizations fail to integrate probabilities and judgments of the strength of the knowledge supporting these. The present paper points to and discusses these weaknesses. It shows how a solid risk science foundation can be formed, which clarifies the meaning of key climate change risk concepts and supports and improves the evidence-informed communication and decision-making. Specifically, the paper provides insights on the nexus between climate change risk, uncertainties and knowledge, including the potential for surprises, as well as the links between risk and vulnerability (resilience). Recommendations are provided on how to assess uncertainties in relation to risk, using precise and imprecise probabilities, combining these with strength-of-knowledge judgments, and establishing scientific processes to scrutinize the underlying knowledge basis with respect to potential surprises.
Article
Metro systems are vulnerable to flooding during rainstorms and this situation worsens when a subsiding environment prevails. This study adopts the interval fuzzy analytic hierarchy process (FAHP) incorporated with fuzzy clustering analysis (FCA) to assess the flood risk of metro systems in subsiding environments, considering both the regional subsidence of the ground and the longitudinal settlement along metro lines as assessment factors. The Shanghai metro system is analysed as an example by using the proposed method (the FAHP–FCA approach) together with the geographical information system (GIS). The assessment results show that more than 30% of metro lines in Shanghai are at a high risk of flooding, and the high-risk regions of metro lines are located in urban centres. The ratio of high-risk regions identified by the FAHP–FCA approach exceeds that obtained using the existing FAHP approach, which indicates that the FAHP–FCA approach can better capture high-risk regions of the metro system in Shanghai. Moreover, the comparison also shows that subsiding environments exacerbate the flood risk of metro systems. It is proposed that, to mitigate the risk of flooding, subsiding environments should be considered in more detail.
Article
Many studies have analyzed historical trends in annual peak flows in the United States because of the importance of flooding to bridges and other structures, and the concern that human influence may increase flooding. To help attribute causes of historical peak-flow changes, it is important to separate basins by characteristics that have different influences on peak flows. We analyzed historical trends by basin type: minimally altered basins, regulated basins (substantial reservoir storage but low urbanization), and urbanized basins (with low reservoir storage). Although many peak-flow magnitude changes were found in the last century across the conterminous United States, the trend magnitude and direction vary strongly by basin type and region. In general, there was a low percentage of significant increases and decreases for minimally altered basins while many regulated basins had significant decreases and the limited number of urbanized basins with long-term record showed a high percentage of increases. For urbanized basins, which are concentrated in the Northeast and Midwest, trend magnitude was significantly correlated with the amount of basin urbanization. For all basins regardless of type, parts of the Northeast quadrant of the U.S. had high concentrations of basins with large and significant increases while parts of the Southwest quadrant had high concentrations of basins with large and significant decreases. Basin regulation appears to have heavily influenced the decreasing trends in the Southwest quadrant; there were many large decreases for this basin type despite overall increases in heavy precipitation in this area. Changes over time in the number of 2-per-year and 1-per-5-year peaks over threshold are consistent with changes in the magnitude of annual peak flows.
Article
Upgrading urban services and maintaining their functionality, such as drainage systems, in a sustainable manner to keep up with the increasing demand and changing needs, like climate adaptation and socio-economic conditions, is a planning, implementation and management challenge. Retrofitting urban infrastructure is a socio-technical process like planned or autonomous adaptation, where the motivation and ability of the stakeholders should be ascertained to avoid implementation bottlenecks. This paper aims at improving the understanding of institutional decision making for implementing retrofitting responses using the Motivation and Abilities (MOTA) framework. The motivation and abilities of the stakeholders to retrofit the urban drainage system in Ho Chi Minh City for maintaining service levels under a changing climate and urbanization were explored. The MOTA scores, based on stakeholder consultation and surveys, were obtained for retrofitting responses. The analysis of MOTA scores revealed that the motivation and ability of the stakeholders differed based on the adaptation objectives for planning and implementing drainage retrofitting responses. Hence it is recommended to use the MOTA framework as a practical means to explore the inherent biases and internal processes of various actors to understand and communicate the need to integrate decision making in a sustainable manner.
Article
We investigate decisions in the face of emergency situations where potential consequences include loss of life, for example, the choice of methods to rescue a group of people who are trapped in a fire. In particular, we conduct two sets of experiments to depict public risk preferences for potential fatalities in a rescue mission. The first set of experiments asks binary questions on a collection of hypothetical emergency scenarios such as fires, air crashes, and marine accidents. Our results show that people are inclined to choose risky options in emergency situations, indicating risk-seeking behavior. However, we observe no significant decline in the subjects’ willingness to save one more life as the base number of people in danger increases, which contradicts the risk-seeking utility function implied by expected utility theory. We then conduct the second set of experiments to elicit possible distortions of probabilities in addition to the value function of fatalities under prospect theory. The value functions of most subjects are concave with increasing marginal effect of losing one more life, and most subjects deflate the probability of an adverse event no matter whether the probability is small or large. We compare our results with prospect theory studies involving money and time in the loss domain, and provide possible explanations for the observed discrepancies.
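The two prospect-theory ingredients these experiments elicit, a value function over lives lost with an increasing marginal effect and a weighting function that deflates probabilities, can be sketched directly. The power functional forms and parameter values are assumptions chosen to match the qualitative pattern the abstract reports.

```python
def deflating_weight(p, gamma=1.3):
    """Power probability weighting; gamma > 1 keeps w(p) below p for all
    0 < p < 1, matching subjects who deflate adverse-event probabilities
    whether the probability is small or large."""
    return p ** gamma

def fatality_value(n, beta=1.2):
    """Value (disutility) of losing n lives; beta > 1 makes the marginal
    effect of one more fatality grow, as reported for most subjects."""
    return -(n ** beta)

def prospect_value(p, n):
    """Subjective value of the prospect 'n fatalities with probability p'."""
    return deflating_weight(p) * fatality_value(n)
```

With these parameters a 50-50 chance of losing four lives is valued above a sure loss of two, reproducing the risk-seeking choices the first set of experiments reports.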
Article
Socio-economic scenarios enable us to understand the extent to which global-, national- and local-scale societal developments can influence the nature and severity of climate change risks and response options. Shared socio-economic pathways (SSPs) enable a systematic exploration of the challenges to adaptation and mitigation that alternative futures entail. However, SSPs are primarily defined for the global scale. If countries are to test their adaptation and mitigation options for robustness across plausible future socio-economic conditions, then SSPs require country-relevant detail to understand climate change risks at the national and local scales. New Zealand is used to illustrate how nationally relevant socio-economic scenarios, nested within SSPs, can be developed to inform national- and local-scale studies of climate change impacts and their implications. Shared policy assumptions were developed, involving a mix of climate-specific and non-climate-specific policies, to demonstrate how international links and global-scale developments are critical locally—local choices may accelerate, reduce or even negate the impact of global trends for extended periods. The typology was then ‘tested’ by applying it in a local context. The research challenges observed in developing credible, salient and legitimate national-scale socio-economic scenarios include issues in developing scenarios across a multidisciplinary team. Finally, recommendations for adapting shared climate policy assumptions to produce national and local scenarios, and for assessing the feasibility and effectiveness of climate change adaptation options, are presented. These include the need for: guidelines to embed national scenarios in global frameworks; a limit on the number of plausible futures; inter-operability of models; an ability to work towards effective multi-disciplinary teams and integrative research; and the opportunity to involve participatory processes where feasible.
Article
Multiple criteria decision making (MCDM/A) methods have been applied in many risk management contexts. This article aims to diagnose the state of the art and identify research directions of multicriteria models applied in risk management. A systematic literature review is structured in order to answer seven main research questions. The steps of the research methodology are highlighted. Data were extracted and analysed based on the main aspects of MCDM/A methods applied in risk management and risk analysis during the last 33 years. These include measuring the number of publications and citations; listing research areas and fields of applications; MCDM/A approaches and problematics; considering both group and individual decision making; discussing criteria and the number of criteria and the decision maker’s (DM’s) rationality, all in the context of risk management. Finally, the identification of research directions of MCDM/A models applied in risk studies aids managerial decision making since these research directions include a multidimensional view of problems and take into account the DM’s preferences structure.
Article
This paper provides a review of multi-criteria decision-making (MCDM) applications to flood risk management, seeking to highlight trends and identify research gaps. A total of 128 peer-reviewed papers published from 1995 to June 2015 were systematically analysed. Results showed that the number of flood MCDM publications has exponentially grown during this period, with over 82 % of all papers published since 2009. A wide range of applications were identified, with most papers focusing on ranking alternatives for flood mitigation, followed by risk, hazard, and vulnerability assessment. The analytical hierarchy process (AHP) was the most popular method, followed by Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and Simple Additive Weighting (SAW). Although there is greater interest in MCDM, uncertainty analysis remains an issue and was seldom applied in flood-related studies. In addition, participation of multiple stakeholders has been generally fragmented, focusing on particular stages of the decision-making process, especially on the definition of criteria weights. Therefore, addressing the uncertainties around stakeholders' judgments and endorsing an active participation in all steps of the decision-making process should be explored in future applications. This could help to increase the quality of decisions and the implementation of chosen measures.
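Of the methods the review tallies, Simple Additive Weighting (SAW) is the easiest to illustrate. The sketch below uses min-max normalization (one of several common normalization choices for SAW) and entirely hypothetical flood-mitigation alternatives, criteria, and weights.

```python
def saw_rank(alternatives, weights, benefit):
    """Rank alternatives by Simple Additive Weighting (SAW).

    alternatives: {name: [score per criterion]}
    weights: criterion weights (summing to 1)
    benefit: per criterion, True if higher is better, False if lower is better
    """
    names = list(alternatives)
    cols = list(zip(*alternatives.values()))
    norm = {name: [] for name in names}
    for j, col in enumerate(cols):
        lo, hi = min(col), max(col)
        for name in names:
            x = alternatives[name][j]
            if hi == lo:
                s = 1.0
            elif benefit[j]:
                s = (x - lo) / (hi - lo)   # min-max, higher is better
            else:
                s = (hi - x) / (hi - lo)   # min-max, lower is better
            norm[name].append(s)
    scores = {n: sum(w * s for w, s in zip(weights, norm[n])) for n in names}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical options scored on cost (lower better, in arbitrary units)
# and risk reduction (higher better); weights are illustrative only.
options = {"levee": [120, 0.8], "detention basin": [60, 0.5], "relocation": [200, 0.9]}
ranking = saw_rank(options, weights=[0.4, 0.6], benefit=[False, True])
```

With these made-up numbers the levee comes out on top; the point is only to show the mechanics of weighting and normalization that SAW shares with richer methods such as AHP and TOPSIS.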
Article
It is common to use conservatism in risk assessments, replacing uncertain quantities with values that lead to a higher level of risk. It is argued that the approach represents a practical method for dealing with uncertainties and lack of knowledge in risk assessment. If the computed probabilities meet the pre-defined criteria with the conservative quantities, there is strong support for the "real risk" to meet these criteria. In this paper we look more closely into this practice, the main aims being to clarify what it actually means and what the implications are, as well as providing some recommendations. The paper concludes that conservatism should be avoided in risk assessments - "best judgements" should be the ruling thinking, to allow for meaningful comparisons of options. By incorporating sensitivity analyses and strength of knowledge judgements for the background knowledge on which the assigned probabilities are based, the robustness of the conclusions can be more adequately assessed.
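The contrast the paper draws between conservative inputs and "best judgements", and the role of sensitivity analysis, can be made concrete with a toy Monte Carlo risk model. All quantities below are hypothetical; this is a minimal sketch of the idea, not a recommended assessment procedure.

```python
import random

def failure_prob(demand, capacity_mean, capacity_sd, n=20000, seed=1):
    """Monte Carlo estimate of P(capacity < demand) for a toy risk
    model with normally distributed capacity."""
    rng = random.Random(seed)
    exceed = sum(rng.gauss(capacity_mean, capacity_sd) < demand
                 for _ in range(n))
    return exceed / n

# "Best judgement" inputs versus deliberately conservative ones
# (weaker mean capacity, wider spread) -- all numbers hypothetical.
best = failure_prob(demand=100, capacity_mean=120, capacity_sd=15)
conservative = failure_prob(demand=100, capacity_mean=110, capacity_sd=20)

# One-at-a-time sensitivity of the best-judgement estimate:
# perturb each uncertain input by 10% and record the change.
sensitivity = {
    "capacity_mean": failure_prob(100, 120 * 0.9, 15) - best,
    "capacity_sd": failure_prob(100, 120, 15 * 1.1) - best,
}
```

The conservative inputs inflate the computed risk substantially, while the sensitivity entries show which input the best-judgement estimate is most fragile to, which is the kind of robustness information the paper argues should accompany "best judgement" assessments.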
Article
Leptospirosis is a preeminent zoonotic disease concentrated in tropical areas, and prevalent in both industrialized and rural settings. Dose-response models were generated from 22 data sets reported in 10 different studies. All of the selected studies used rodent subjects, primarily hamsters, with the predominant endpoint as mortality with the challenge strain administered intraperitoneally. Dose-response models based on a single evaluation postinfection displayed median lethal dose (LD50) estimates that ranged between 1 and 10^7 leptospirae depending upon the strain's virulence and the period elapsed since the initial exposure inoculation. Twelve of the 22 data sets measured the number of affected subjects daily over an extended period, so dose-response models with time-dependent parameters were estimated. Pooling between data sets produced seven common dose-response models and one time-dependent model. These pooled common models had data sets with different test subject hosts, and between disparate leptospiral strains tested on identical hosts. Comparative modeling was done with parallel tests to test the effects of a single different variable of either strain or test host and quantify the difference by calculating a dose multiplication factor. Statistical pooling implies that the mechanistic processes of leptospirosis can be represented by the same dose-response model for different experimental infection tests even though they may involve different host species, routes, and leptospiral strains, although the cause of this pathophysiological phenomenon has not yet been identified.
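The dose-response families typically fitted in such studies, the exponential model and the approximate beta-Poisson model, are easy to write down. The parameter values below are hypothetical, chosen only to show how an LD50 falls out of each model; they are not the paper's pooled estimates.

```python
import math

def exponential_dr(dose, k):
    """Exponential dose-response: each organism independently causes
    the response with probability k, so P = 1 - exp(-k * dose)."""
    return 1.0 - math.exp(-k * dose)

def beta_poisson_dr(dose, alpha, n50):
    """Approximate beta-Poisson model parameterized by its median
    dose N50: P = 1 - [1 + (dose/N50) * (2**(1/alpha) - 1)]**(-alpha)."""
    return 1.0 - (1.0 + (dose / n50) * (2.0 ** (1.0 / alpha) - 1.0)) ** (-alpha)

def ld50_exponential(k):
    """LD50 of the exponential model: solves 1 - exp(-k*d) = 0.5."""
    return math.log(2.0) / k

# Hypothetical rate parameter; the paper's pooled fits put LD50 anywhere
# from 1 to 10^7 leptospirae depending on strain virulence and timing.
k = 2e-4
d50 = ld50_exponential(k)
```

By construction, plugging the computed `d50` back into the exponential model returns a response probability of exactly one half, and the beta-Poisson form returns one half at `dose = n50`.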
Article
The response of the second-generation Canadian earth system model (CanESM2) to historical (1850-2005) and future (2006-2100) natural and anthropogenic forcing is assessed using the newly developed representative concentration pathways (RCPs) of greenhouse gases (GHGs) and aerosols. Allowable emissions required to achieve the future atmospheric CO2 concentration pathways are reported for the RCP 2.6, 4.5 and 8.5 scenarios. For the historical 1850-2005 period, cumulative land plus ocean carbon uptake and, consequently, cumulative diagnosed emissions compare well with observation-based estimates. The simulated historical carbon uptake is somewhat weaker for the ocean and stronger for the land relative to their observation-based estimates. The simulated historical warming of 0.9°C compares well with the observation-based estimate of 0.76 ± 0.19°C. The RCP 2.6, 4.5 and 8.5 scenarios respectively yield warmings of 1.4, 2.3, and 4.9°C and cumulative diagnosed fossil fuel emissions of 182, 643 and 1617 Pg C over the 2006-2100 period. The simulated warming of 2.3°C over the 1850-2100 period in the RCP 2.6 scenario, with the lowest concentration of GHGs, is slightly larger than the 2°C warming target set to avoid dangerous climate change by the 2009 UN Copenhagen Accord. The results of this study suggest that limiting warming to roughly 2°C by the end of this century is unlikely since it requires an immediate ramp down of emissions followed by ongoing carbon sequestration in the second half of this century.
Article
Determination of flood damage frequencies constitutes a fundamental component of any comprehensive flood-risk methodology. A time series of flood damage may contain zero values. Therefore, the probability distribution of damage should be derived taking into consideration these zero values. This distribution was derived using the total probability theorem (in conjunction with gamma, log-normal and Weibull distributions), order statistics, the kinematic diffusion (KD) model, and the Box-Cox transformation. Flood damage frequencies determined using these methods were compared with those determined empirically for Alabama, Louisiana, Mississippi, and Texas in the United States. For the four southern states studied, it is found that, of the analysis methods considered, the method based on the total probability theorem gave the best results for flood damage series containing zero values, while the KD model method is not suitable for this type of analysis.
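The total-probability construction for damage series with zero values can be sketched directly: the mixed CDF is F(x) = p0 + (1 - p0) F+(x) for x ≥ 0, where p0 is the probability of zero damage and F+ is the conditional CDF given that damage occurred. The exponential conditional model and the value of p0 below are stand-ins for illustration (the paper pairs the theorem with gamma, log-normal, and Weibull conditional distributions).

```python
import math

def zero_inflated_cdf(x, p0, cond_cdf):
    """Flood-damage CDF with a point mass at zero, via the total
    probability theorem: F(x) = p0 + (1 - p0) * F_plus(x), x >= 0,
    where p0 = P(no damage) and F_plus is the CDF of damage in
    years when damage occurs."""
    if x < 0:
        return 0.0
    return p0 + (1.0 - p0) * cond_cdf(x)

def expon_cdf(x, scale=50.0):
    """Stand-in conditional damage model (exponential, hypothetical scale)."""
    return 1.0 - math.exp(-x / scale)

p0 = 0.3  # hypothetical fraction of zero-damage years
p_below_100 = zero_inflated_cdf(100.0, p0, expon_cdf)
```

The CDF jumps to p0 at x = 0, capturing the zero-damage years as a point mass rather than forcing a continuous distribution through them.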
Article
This paper introduces time-tradeoff (TTO) sequences as a general tool to analyze intertemporal choice. We give several applications. For empirical purposes, we can measure discount functions without requiring any measurement of or assumption about utility. We can quantitatively measure time inconsistencies and simplify their qualitative tests. TTO sequences can be administered and analyzed very easily, using only pencil and paper. For theoretical purposes, we use TTO sequences to axiomatize (quasi-)hyperbolic discount functions. We demonstrate the feasibility of measuring TTO sequences in an experiment, in which we tested the axiomatizations. Our findings suggest rejections of several currently popular discount functions and call for the development of new ones. It is especially desirable that such discount functions can accommodate increasing impatience.
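A stylized simulation shows what a TTO sequence looks like under a known discount function (actual elicitation works the other way, recovering the sequence from a subject's indifference judgments without assuming the function). The simple hyperbolic form, the rate h, and the step size below are illustrative assumptions.

```python
def tto_sequence(discount, inverse, t0, n):
    """A stylized TTO sequence t0 < t1 < ... < tn: each step removes
    the same amount of discount weight, D(t_k) = D(t0) - k * step, so
    the spacing of the t_k reveals the shape of D without any
    measurement of utility -- the core idea behind TTO sequences."""
    step = discount(t0) / (2.0 * n)  # arbitrary equal decrement
    return [inverse(discount(t0) - k * step) for k in range(n + 1)]

# Simple hyperbolic discounting D(t) = 1 / (1 + h*t) and its inverse;
# h is a hypothetical discount rate.
h = 0.5

def hyperbolic(t):
    return 1.0 / (1.0 + h * t)

def hyperbolic_inv(d):
    return (1.0 / d - 1.0) / h

seq = tto_sequence(hyperbolic, hyperbolic_inv, t0=0.0, n=4)
gaps = [b - a for a, b in zip(seq, seq[1:])]
# The gaps widen: equal losses of discount weight take ever longer,
# reflecting the convexity of the hyperbolic discount curve.
```

Administering such sequences with pencil and paper, as the paper emphasizes, yields the t_k directly; the pattern of interval lengths then constrains which discount functions are admissible.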
Article
This paper considers a number of parallels between risky and intertemporal choice. We begin by demonstrating a one-to-one correspondence between the behavioral violations of the respective normative theories for the two domains (i.e., expected utility and discounted utility models). We argue that such violations (or preference reversals) are broadly consistent with three propositions about the weight that an attribute receives in both types of multiattribute choice. Specifically, it appears that: (1) if we add a constant to all values of an attribute, then that attribute becomes less important; (2) if we proportionately increase all values of an attribute, or (3) if we change the sign of an attribute from positive to negative, then that attribute becomes more important. The generality of these propositions, as well as the constraints they would impose on separable representations of multiattribute preferences, is discussed.
Article
Intertemporal choice and psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber–Fechner law; a q-exponential discount model based on Tsallis’s statistics), simple hyperbolic discounting, and Stevens’ power law–exponential discounting (exponential discounting with Stevens’ power time perception). In order to examine the fitness of the models for behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber–Fechner discounting (general hyperbola) > Stevens’ power law discounting > Simple hyperbolic discounting > Exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber–Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processes underlying temporal discounting and time perception are discussed.
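The model-comparison exercise can be miniaturized: fit discount families to indifference points at seven delays and rank them by AICc. The sketch below fits only the two one-parameter families (exponential and simple hyperbolic) by crude grid search; the delays and indifference points are hypothetical, generated from the simple hyperbola and rounded, so the hyperbolic fit wins by construction. The full study also fits the two-parameter Weber–Fechner and Stevens forms.

```python
import math

def exponential_disc(t, k):
    """Exponential discounting D(t) = exp(-k*t)."""
    return math.exp(-k * t)

def simple_hyperbolic_disc(t, k):
    """Simple hyperbolic discounting D(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def aicc(rss, n, n_params):
    """AICc for a least-squares fit: AIC plus the small-sample term."""
    aic = n * math.log(rss / n) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)

# Hypothetical indifference points (subjective value of a delayed
# reward relative to an immediate one) at seven delays, generated
# from a simple hyperbola with k = 0.15 and rounded.
delays = [1, 2, 4, 8, 16, 32, 64]
observed = [0.870, 0.769, 0.625, 0.455, 0.294, 0.172, 0.094]

def fit(model):
    """Crude one-parameter grid search minimizing the residual sum
    of squares -- enough for an illustration."""
    best_k, best_rss = None, float("inf")
    for i in range(1, 2001):
        k = i / 2000.0
        rss = sum((model(t, k) - d) ** 2 for t, d in zip(delays, observed))
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k, best_rss

k_exp, rss_exp = fit(exponential_disc)
k_hyp, rss_hyp = fit(simple_hyperbolic_disc)
aicc_exp = aicc(rss_exp, len(delays), 1)
aicc_hyp = aicc(rss_hyp, len(delays), 1)
```

Since both models carry one parameter, the AICc ordering here reduces to the RSS ordering; with the two-parameter families added, the small-sample penalty in AICc does real work, which is why the paper reports AICc rather than raw fit.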