How can transactions using cryptocurrencies be formalised, institutionalised and made more secure?
Question
  • Mar 2023
How can cryptocurrency trading be formalised, institutionalised and made more secure?
How can formalised, institutionalised and highly secure cryptocurrency trading markets be built?
In recent years, many technology startups have based their growth and competitive advantage on business, technological, product, service, marketing or other innovations. Banks are reluctant to provide investment loans to emerging startups that base their growth on innovative technologies, because the credit risk of such ventures is difficult to assess. In this situation, innovative startups are financed through external sources of funding such as investment funds, business angels, securities issuance, crowdfunding and others.

Crowdfunding is likely to develop intensively in the future as an alternative to the classic external financing offered by financial sector institutions, particularly in the segment of financing innovative startups that commercial banks, operating within classic deposit and credit banking, tend to avoid. On the other hand, cryptocurrencies, which operate outside institutionalised and centralised financial systems, are growing in importance. Perhaps in the future cryptocurrencies will displace traditional currency from online financial transactions between fintechs, financial institutions, innovative startups, online technology companies running social media portals, their customers, and the users of these portals.

In addition, it is becoming essential to improve the security of online financial transactions and settlements carried out through online and mobile banking. In this connection, blockchain technology is being developed as a means of securing online transactions and data transfer. A growing number of large companies are announcing the creation of their own cryptocurrencies; some investment banks, such as JP Morgan, have announced the creation of their own cryptocurrency for settlements with key counterparties.

The development and implementation of ICT, the advanced data processing technologies of Industry 4.0 and Internet technologies in the business activities of companies and enterprises facilitates the execution of financial operations on the Internet and supports a high level of security of Internet data transfer. Technological innovation, financing through crowdfunding, the securing of online transactions with blockchain technology and the growing use of cryptocurrencies in these settlements are likely to be important determinants of the development of innovative technology startups operating on the Internet and factors in the development of the knowledge economy in the years to come.

Consequently, the development of open innovation is correlated with the issue of innovation and entrepreneurship development in the economy: a significant proportion of innovative startups build their business models on open innovation. In macroeconomic terms, the development of open innovation can be an important determinant of economic development both in developing countries and in developed knowledge-based economies. In view of the above, research shows that the spread of open innovation and open knowledge bases is an important issue for building a sustainable economy in technologically developed and developing countries.
A number of predictive studies show that cryptocurrencies will grow in importance in financing various transactions and settlements carried out electronically: via the Internet, on social media, in investment banking, and so on. However, for the financing of new business ventures and innovative startups using cryptocurrencies to develop, it is necessary to increase the scale of systemic formalisation and institutionalisation of cryptocurrency transactions, to build formalised cryptocurrency markets, and to increase the security of transactions carried out using cryptocurrencies in the future.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the planned taxation of cryptocurrency transactions be a first step for increasing the scale of systemic formalisation and institutionalisation of cryptocurrency transactions, building formalised cryptocurrency markets and increasing the future security of cryptocurrency transactions?
How can formalised, institutionalised and highly secure cryptocurrency trading markets be built?
How can cryptocurrency transactions be formalised, institutionalised and made more secure?
What is your opinion on this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
… 
  • 882 Views
  • 16 Answers
How can ChatGPT be used to analyse the level of innovation of new business projects that new startups are planning to develop?
Question
  • Mar 2023
How can artificial intelligence such as ChatGPT and Big Data Analytics be used to analyse the level of innovation of new economic projects that startups are planning to develop, implementing innovative business solutions, technological innovations, environmental innovations, energy innovations and other types of innovation?
The economic development of a country is determined by a number of factors, including the innovativeness of economic processes and the creation of new technological solutions in research and development centres, research institutes, university laboratories and business entities, together with their implementation in the economic processes of companies and enterprises. In the modern economy, the level of innovativeness of the economy is also shaped by the effectiveness of innovation policy, which influences the formation of innovative startups and their effective development.

The economic activity of innovative startups generates high investment risk, and for the institutions financing startup development this translates into high credit risk. As a result, many banks do not finance business ventures led by innovative startups. Within systemic programmes financing startup development from national public funds or international innovation support funds, financial grants are organised, which can be provided as non-refundable financial assistance if a startup successfully develops specific business ventures according to the original plan entered in the application for external funding. Non-refundable grant programmes can thus activate the development of innovative business ventures in specific areas, sectors and industries of the economy, including, for example, innovative green business ventures that pursue sustainable development goals and are part of the green transformation of the economy.

Institutions distributing non-refundable financial grants should constantly improve their systems for analysing the innovativeness of the business ventures that startups describe as innovative in their funding applications. In improving systems for verifying the innovativeness of business ventures and the fulfilment of specific goals, e.g. sustainable development goals or green economy transformation goals, new Industry 4.0 technologies implemented in Business Intelligence analytical platforms can be used: machine learning, deep learning, artificial intelligence (including, for example, ChatGPT), Big Data Analytics, cloud computing, multi-criteria simulation models, and so on.

In view of the above, given appropriate IT equipment, including computers equipped with new-generation processors of high computing power, it is possible to use artificial intelligence such as ChatGPT, Big Data Analytics and other Industry 4.0 technologies to analyse the innovativeness of the new economic projects that startups plan to develop, implementing innovative business, technological, ecological, energy and other types of innovation.
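As a purely illustrative sketch of how such screening could be automated, the snippet below asks an LLM to score a project description against a fixed rubric. It assumes the `openai` Python package and an API key are available; the rubric, the criteria and the model choice are illustrative assumptions, not an established evaluation methodology.

```python
# Sketch: scoring the innovativeness of a startup's project description
# with an LLM. The rubric and criteria below are illustrative only and
# would need expert validation before any real funding decision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "Rate the following business project description on a 1-10 scale for "
    "each criterion: technological novelty, market differentiation, "
    "scalability, and contribution to sustainability goals. "
    "Return one line per criterion in the form 'criterion: score'."
)

def score_project(description: str) -> str:
    """Ask the model to apply the rubric to one project description."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system", "content": "You are an innovation analyst."},
            {"role": "user", "content": f"{RUBRIC}\n\nProject:\n{description}"},
        ],
        temperature=0,  # keep output repeatable for screening purposes
    )
    return response.choices[0].message.content

print(score_project("A platform that recycles used battery cells into ..."))
```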
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence such as ChatGPT and Big Data Analytics be used to analyse the level of innovation of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, ecological innovations, energy innovations and other types of innovation?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
… 
  • 13 Answers
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built?
Question
  • Mar 2023
How should a system of institutional control over the development of advanced artificial intelligence models and algorithms be built so that this development does not get out of hand and lead to negative consequences that are currently difficult to predict?
Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built?
Why are the creators of leading technology companies developing ICT, Internet technologies and Industry 4.0, including those developing artificial intelligence, now calling for the development of this technology to be deliberately slowed down for a time, so that it remains fully under control and does not get out of hand?
To the question of whether the development of artificial intelligence should be under control, the answer is probably obvious: it should. What remains debatable is how a system of institutional control over the development of advanced artificial intelligence models and algorithms should be structured so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee. And if the question is answered in the affirmative, who should exercise this control? How should an institutional system of control over the development of advanced artificial intelligence models, algorithms and their applications be constructed, so that the potential and real future negative effects of dynamic and not fully controlled technological progress do not outweigh the positive ones?

At the end of March 2023, a number of technology developers, artificial intelligence experts, businessmen and investors developing technology startups, including Apple co-founder Steve Wozniak; Elon Musk, founder or co-founder of PayPal, SpaceX, Tesla, Neuralink and the Boring Company; Stability AI chief Emad Mostaque, maker of the Stable Diffusion image generator; and artificial intelligence researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and other universities and AI labs, called in a joint letter for at least a six-month pause in the development of artificial intelligence systems more capable than the GPT-4 published in March.

The letter, acting as a kind of cautionary petition, was published on the Future of Life Institute's website. It argues that advanced artificial intelligence could represent "a profound change in the history of life on Earth" and that the development of this technology should be approached with caution. The petition warns of the unpredictable consequences of the race to create ever more powerful models and complex algorithms, the key components of artificial intelligence technology. Its signatories suggest that the development of artificial intelligence should be slowed down temporarily, as the risk has now emerged that this development could slip out of human control. An uncontrolled approach to AI development, they warn, risks a deluge of disinformation, mass automation of work, the replacement of humans by machines and even a "loss of control over civilisation". The letter suggests that if the current rapid development of artificial intelligence systems gets out of hand, the scale of disinformation on the Internet will increase significantly and the process of work automation already under way will accelerate many times over, which may cost around 300 million people their jobs within the current decade and, in consequence, may lead to a kind of loss of human control over the development of civilisation. Developers of new technologies point out that advanced artificial intelligence systems should only be developed once their development is under full control, the effects of this development are positive and the potential risks are fully controllable.
Developers of new technologies are calling for a temporary pause in the training of systems more capable than OpenAI's recently released GPT-4, which, among other things, can pass various kinds of tests at a level close to the best results achieved by humans. The letter also calls for comprehensive government regulation and oversight of new models of advanced AI algorithms, so that the development of this technology does not outpace the creation of the necessary legal regulations.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why do the creators of leading technology companies developing ICT, Internet technologies and Industry 4.0, including those developing artificial intelligence, now call for the development of this technology to be deliberately slowed down for a time, so that it remains fully under control and does not get out of hand?
Should the development of artificial intelligence be controlled? And if so, who should exercise this control? How should an institutional control system for the development of artificial intelligence applications be built?
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee?
What do you think?
What is your opinion on the subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
… 
  • 741 Views
  • 5 Answers
Is flood forecasting for rivers in Asia a scientific undertaking? Can a man-made mechanism be created to accurately predict floods in Asia?
Question
  • May 2025
Is flood forecasting for rivers in Asia a scientific undertaking? Can a man-made mechanism be created to accurately predict floods in Asia?
Central Asia is subject to frequent disasters, including earthquakes and floods (GFDRR, 2016). Furthermore, climate change, urbanisation processes, and a growing population have contributed to an increase in the frequency and severity of losses caused by natural hazard events in the last two decades (Pollner et al., 2010; Yu et al., 2019; Reyer et al., 2017). The transboundary nature of many of these events requires a regional and shared approach to support, plan, and coordinate disaster risk management (DRM) and disaster risk financing and insurance (DRFI) strategies.

Flood risk assessment is a fundamental tool in this framework, as it allows for the quantification of the expected losses caused by floods and the identification and prioritisation of interventions (Tsakiris, 2014; Merz et al., 2014). Flood risk assessment is defined as an evaluation of future losses caused by floods (riverine and/or coastal) using a set of tools, such as hydrological and flood models, exposure models, and vulnerability models, within a risk-based framework, which includes associating losses with levels of likelihood (Mitchell-Wallace et al., 2017). In particular, large-scale risk assessment is needed by governments and international institutions to drive national-scale policies to counter economic losses caused by floods and improve national resilience towards disasters caused by natural hazard events. Here, we define large-scale risk assessment as a risk evaluation study that covers an area encompassing hundreds of thousands of square kilometres, including administrative units from districts and provinces to the national or plurinational scale.

Large-scale flood hazard modelling and assessment are nowadays a well-established branch of flood engineering research and practice (Alfieri et al., 2014; Pappenberger et al., 2012; Schumann et al., 2016), albeit with caveats and limitations (Bates, 2022). Large-scale flood risk modelling and assessment have also gained traction in the past few years (Steinschneider et al., 2014; Ward et al., 2013) and are routinely used in commercial catastrophe risk models by insurance and reinsurance companies to price their products (Wing et al., 2020). Nevertheless, uncertainties remain large, and their evaluation is the subject of ongoing research (Figueiredo et al., 2018).

A key issue for large-scale model set-up and reliability is data availability. Such models are data-demanding, since they need meteorological data, river flow observations, geomorphological data, the location and protection level of defences, and macroeconomic data, among others. Such data might not always be available or cannot be obtained easily due to data restriction policies or a lack of digitalisation. In Central Asia, for example, meteorological and flow data are hard to acquire without institutional or local support. Furthermore, several flow gauges in this region were discontinued at the time of the dissolution of the Soviet Union, and most of them were not replaced; therefore, flow records covering recent times are scarce. Another frequent limitation is the absence of post-event surveys, either of the event intensity (flood footprints) or of the physical damage and economic losses (e.g. damage data and insurance claims).
In this study, a flood risk assessment model was implemented based on global, regional, and local datasets, comprising a hazard module (assessment of the frequency and intensity of floods), a vulnerability module (assessment of the relationship between event intensity and damage/losses), and an exposure module (inventory of buildings and infrastructure). The model covers the countries of Kazakhstan, the Kyrgyz Republic, Tajikistan, Turkmenistan, and Uzbekistan, in Central Asia. The model was implemented within the framework of the project "Regionally consistent risk assessment for earthquakes and floods and selective landslide scenario analysis for strengthening financial resilience and accelerating risk reduction in Central Asia" and within the implementation of the EU-funded Strengthening Financial Resilience and Accelerating Risk Reduction (SFRARR) in Central Asia programme (https://www.gfdrr.org/en/program/SFRARR-Central-Asia, last access: 15 January 2025). The project aims to advance disaster and climate resilience in Central Asian countries. The landslide susceptibility assessment, which was part of this study, can be found in Rosi et al. (2023).

The objective of this paper is twofold. First, we aim to provide guidelines for the implementation of large-scale (e.g. country-scale) flood risk models in data-scarce regions, showing that regional datasets such as reanalysis and global land maps need to be integrated with knowledge and data that can only be obtained through engagement and collaboration with local authorities and local experts and through the participation of stakeholders. Second, we aim to present, for the five countries considered in this study, the estimated levels of flood risk to support governments and decision-makers in devising a more comprehensive flood risk management strategy. Currently, the availability of risk information for disaster risk management (DRM) and disaster risk financing and insurance (DRFI) activities remains variable across the region, and information has been provided by previous projects focusing on a single country. Moreover, few of these studies have quantified multi-hazard disaster risk, and, to the best of our knowledge, none have done so for the whole region using probabilistic methods applied with the fidelity required to robustly inform the development of DRFI solutions. Fragmented and low-resolution flood risk assessment studies already exist in the region (CAC DRMI, 2009; GFDRR, 2016; UNDP, 2014; UNISDR, 2010; Umaraliev et al., 2020; Asian Development Bank, 2015; Saidov, 2020); however, a high-resolution, homogeneous transboundary flood risk assessment such as the one presented here is unprecedented for the Central Asian region.

Central Asia is highly exposed and vulnerable to a broad range of natural hazards, which frequently result in economic and human losses. Flood hazard is significant in the region, with floods being the most frequent natural disaster in the period 1988–2007 according to a recent analysis provided by the Central Asia and Caucasus Disaster Risk Management Initiative (CAC DRMI, 2009). In the same period, floods ranked second for the number of deaths caused and the population affected (1512 and 19 % respectively).
Despite the aridity of large areas in some of the target countries, natural phenomena linked to extreme precipitation can cause billions of dollars in damages every year: collectively, floods inflict the second-highest overall economic losses (USD 52 million), surpassed only by earthquakes (an annual average of USD 186 million). At the local level (e.g. in Tajikistan), floods are sometimes the dominant risk in terms of economic losses (World Bank et al., 2012). Considering the deteriorated protection infrastructure and vulnerabilities in several sectors, floods can cause considerable damage to housing, infrastructure, and agriculture (Libert, 2008).

Climatically, this region is characterised by strong rainfall gradient contrasts due to the diversity of climate and vegetation zones. The region is drained by large, partly snow- and glacier-fed mountain rivers, which cross or terminate in arid forelands. Central Asian countries are therefore affected by a significant river flood hazard, mainly in the spring and summer seasons. Land use is mainly grassland in central and southern Kazakhstan, while in most of Uzbekistan and Turkmenistan vegetation is very sparse. Arable land is concentrated in northern Kazakhstan and in the irrigated parts of the plains of Uzbekistan and Turkmenistan. Tajikistan and the Kyrgyz Republic are mainly mountainous, while the other three countries are mostly flat. The elevation of the region is shown in Fig. 1.

3 Data and models

3.1 Global datasets

In this study, a wide range of well-known and established datasets was used. Table 1 shows a complete inventory of the global data and how they were used within the model, together with a short description of the use of each dataset within the study, its resolution, and bibliographic references.

3.2 Local datasets

Table 2 shows a complete inventory of the data requested from and obtained through local experts and stakeholders. Although the number of available flow gauges might appear limited given the extensive region, it is crucial to recognise that their spatial distribution effectively encompasses the densely populated areas where the majority of exposed assets are located. Figure 2 shows the gauging station locations and the populated areas (in blue, population density > 1 km⁻²), including both local and global datasets.

Information about the characteristics of the building stock relevant to their vulnerability to flood water was collected from local sources and from the literature. In particular, characteristics such as the number of floors, the presence of a basement, the level of the ground floor above street level, and the type of building (apartment, detached, semi-detached) were collected. The distribution of the number of floors in the various countries was derived from Pittore et al. (2020) and Wieland et al. (2015), who established floor number ranges for different building categories through local surveys. Additionally, sources such as Pittore et al. (2011) and the World Bank (2017) were used to complement this information. On-site collection of unit costs for building component maintenance, removal, and replacement was facilitated by local advisors and engineers, drawing from interactions with professionals involved in building design and pricing, as well as engineering manuals and real estate catalogues (such as ENiR, the uniform norms and prices for construction, installation, and repair works).

The role of defensive protections is crucial in reducing fluvial flood hazard.
However, the availability of precise data regarding flood protection levels is very limited, as discussed earlier. To circumvent this problem, we developed a strategy to derive the hydraulic protection level of the region from the correlation between the level of protection and the population density at any given location along the river. Initially, we identified urban agglomerates and determined their maximum population density using data from HBASE (Wang et al., 2017b) and WorldPop (Tatem, 2017): the former indicates the extent of urban areas, while the latter provides population density on a 1 km² grid. For example, we identify a cluster of high population density (i.e. a city) based on WorldPop and then define the boundaries of that city based on HBASE. Following this, we identified the river portions connected to each urban agglomerate and assumed that those stretches have a certain level of protection because they drain towards a city. The level of protection was based on the FLOPROS database (Scussolini et al., 2016) area protection standards. In specific cases, we integrated this methodology with the available local data. This was the case, for example, for the level of protection of the two main Kazakh cities (Almaty and Astana, previously Nur-Sultan), for which we utilised geospatial information provided by local stakeholders.
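The population-density-based derivation of protection levels can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's implementation: the density thresholds and return periods are placeholders standing in for the FLOPROS area protection standards.

```python
# Sketch: assigning a protection level (design return period, in years) to
# river reaches based on the peak population density of the urban
# agglomerate they drain towards. Thresholds are illustrative placeholders.
import numpy as np

def protection_level(max_city_density: float) -> int:
    """Map the peak WorldPop density (people per km2) of the connected
    city to an assumed design protection return period."""
    if max_city_density > 5000:
        return 100   # large city: assume 1-in-100-year protection
    if max_city_density > 1000:
        return 50
    if max_city_density > 100:
        return 20
    return 5         # sparsely populated reach: minimal protection

# Example: three reaches linked to cities with given peak densities
densities = np.array([12000.0, 800.0, 40.0])
print([protection_level(d) for d in densities])  # [100, 20, 5]
```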
4 Flood hazard assessment

In this study, the flood hazard of the five Central Asian countries was assessed for the historical and climate change scenarios by means of a physically based numerical modelling toolset and a stochastic catalogue of flood footprints.

4.1 Numerical modelling toolset

The numerical modelling toolset is composed of two elements: the hydrological model (TOPKAPI-X) and the flood hydraulic model (CA2D). The TOPographic Kinematic APproximation and Integration (TOPKAPI) model is a fully distributed, physically based hydrological model that can provide high-resolution information on the hydrological state of a catchment (Ciarapica and Todini, 2002). TOPKAPI-X is an advanced version of the original TOPKAPI model that includes an additional soil layer for assessing subsurface flow, an improved snow melting and accumulation module that considers terrain aspect and latitude, and a groundwater component to model aquifer flow. The TOPKAPI-X model requires both precipitation and temperature data as input, as well as a description of the soil characteristics, which can be derived from land use maps (for crop factors and surface roughness) and soil type maps (for soil permeability and depth).

CA2D (Dottori and Todini, 2011) is a fully physically based flood model specifically designed for high-performance computing applications, based on the cellular automata (CA) approach and the diffusive wave equations, to simulate flood inundation events involving wide areas. The model is based on the state of the art of large-scale hydraulic modelling and has been tested extensively on several case studies. The CA2D model has an internal preprocessor that allows the user to provide as input only the digital elevation model and the surface roughness map; the network (comprising nodes and links) is generated automatically, and specific conditions (such as flood protections) can be included where present. In addition, input meteorological data must be provided in the form of hydrographs at specific points and/or rainfall maps. We ran the model using the semi-inertial formulation of the momentum equation, which was developed for the LISFLOOD-FP model (Bates et al., 2010). This approach allows for high-resolution simulations at a significantly reduced computational effort, making it possible to run hydraulic simulations at the continental and global scales (Dottori and Todini, 2011).

4.1.1 Hydrological model

ERA5-Land hourly precipitation and temperature on a 0.1° × 0.1° regular grid were processed and used to drive calibrated TOPKAPI-X set-ups (one for each catchment in the region, including catchments partially overlaying neighbouring countries such as China, Afghanistan, and Russia). The output was a set of 40-year-long hourly discharge values at numerous river sections covering the whole drainage network in the region. The TOPKAPI-X model was run on a regular 1 km × 1 km grid for all catchments in the region, based on a resampled digital elevation model consistent with MERIT Hydro in terms of flow direction and river network. The hydrological model uses an equal-area projected reference system. Rainfall and temperature values, which are defined on a regular 0.1° grid with geographical coordinates, were associated with each of the hydrological model grid cells using a simple nearest-neighbour methodology. Soil type and land use maps were also resampled to match the same grid.

The model was run on an hourly time step using hourly ERA5-Land precipitation and temperature from January 1981 to December 2020. The simulations were initiated with average soil saturation and river depth conditions, and the first year was used as a warm-up period to reach realistic soil moisture conditions; therefore, the year 1981 was not considered for calibration purposes or for the extreme value analysis. The main model output consisted of hourly simulated discharges at several locations of the river network across the entire region.

The simulations corresponding to locations where observations were available were used to perform a trial-and-error calibration that could reasonably reproduce the overall behaviour of each catchment. The TOPKAPI-X model was calibrated by adapting the initial model parameters to match the available observed discharge, using goodness-of-fit metrics such as correlation and percent bias to assess the model skill. We based the calibration mainly on the historical daily data but also used the annual maxima for the areas where daily data were not available. The calibration process focused on robustly reproducing the flow peaks, because the hydrological simulations aim to estimate the extreme discharge value distributions at river sections across the region, which are used to derive the flood footprints at different return periods via the hydraulic model. Since a physically based model better reproduces the flow peaks if all the other hydrological components are well represented, we made sure that the main hydrological processes were also correctly reproduced, with particular attention to the snow accumulation/melting component, which is the driver of most of the floods in this region. The calibration was performed independently for each catchment where historical data were available. Given the distributed and physically based nature of the TOPKAPI-X model, the calibration was not based on an automatic procedure but on the use of reasonable values of the physical parameters.
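A minimal sketch of the two goodness-of-fit metrics named above, as they might be computed on matched observed and simulated series; the data and sign convention are illustrative.

```python
# Sketch: calibration skill metrics on matched observed/simulated series
# (equal-length numpy arrays; values below are illustrative).
import numpy as np

def percent_bias(obs: np.ndarray, sim: np.ndarray) -> float:
    """Percent bias; with this convention, positive = overestimation."""
    return 100.0 * (sim - obs).sum() / obs.sum()

def correlation(obs: np.ndarray, sim: np.ndarray) -> float:
    """Pearson correlation between observed and simulated series."""
    return float(np.corrcoef(obs, sim)[0, 1])

obs = np.array([120.0, 95.0, 210.0, 160.0])   # observed annual maxima (m3/s)
sim = np.array([110.0, 100.0, 190.0, 175.0])  # simulated annual maxima (m3/s)
print(percent_bias(obs, sim), correlation(obs, sim))
```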
The trial-and-error calibration described above allows for the identification of model parameter values that provide reasonable outputs across entire catchments, avoiding under- or overfitting any of the available historical records. We assumed two soil layers (superficial and sub-superficial), and the calibrated parameters included the horizontal conductivity and depth of each of the two layers, the vertical conductivity, the potential evapotranspiration, and the snowmelt rate. The calibration period varied among historical stations, with record lengths ranging from 15 to 37 years.

4.1.2 Extreme value analysis

The extreme value analysis and regionalisation process were based on fitting the generalised extreme value (GEV) distribution at several locations along the drainage network to derive the peak flows at different return periods. The GEV distribution is a standard tool for modelling flood peaks using annual maximum series (Morrison and Smith, 2002; Rosbjerg and Madsen, 1995). Simulated flow annual maxima were used to derive a GEV distribution for a large number of river sections all over the river network. Where observed flow records were available, the GEV distribution was also fitted to the observed flow annual maxima, and the resulting distribution was compared with the distribution derived from the simulated flow values in order to evaluate the model error in the extreme values.

We observed that the largest hydrological model discrepancies occur on the main stem of large rivers because of the impact of large reservoirs and floodplains. Therefore, an adjustment of the simulated flows was implemented by computing the ratio between the observed and simulated mean annual maximum flows and then multiplying the simulated flows by this ratio; in other terms, the simulated annual maximum flows were increased or decreased by a coefficient based on the mean bias between the observed and simulated mean annual maximum flows. Where observed data were not available, the adjustment coefficient was taken from an associated station selected based on the proximity of the location and the accumulated flow area. This procedure yielded a very good fit between the observed and simulated extreme flow values and allowed for the extrapolation of the adjustment to ungauged river sections. The adjustment was particularly useful in floodplains and downstream of dams, which are features that are particularly difficult to reproduce with a hydrological model. With this procedure we obtained estimates of the extreme flow value distribution for 78 000 river sections with a drainage area of more than 100 km², from which peak flows were extracted at several levels of likelihood (1-in-5, 10, 20, 50, 100, 200, 500, and 1000 years).
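The extreme value step can be illustrated with a short sketch: fit a GEV distribution to ratio-adjusted simulated annual maxima and read off the peak flows at the study's return periods. This uses scipy's `genextreme` on synthetic data; it is a schematic of the approach, not the study's code.

```python
# Sketch: GEV fit on adjusted annual maxima and return-period peak flows.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
sim_maxima = rng.gumbel(loc=300.0, scale=80.0, size=40)   # 40 years, m3/s
obs_maxima = rng.gumbel(loc=330.0, scale=85.0, size=25)   # gauged record

# Mean-ratio adjustment: scale simulated maxima by observed/simulated means
ratio = obs_maxima.mean() / sim_maxima.mean()
adj_maxima = sim_maxima * ratio

# Fit GEV and extract quantiles at the study's eight return periods
shape, loc, scale = genextreme.fit(adj_maxima)
return_periods = np.array([5, 10, 20, 50, 100, 200, 500, 1000])
peaks = genextreme.ppf(1.0 - 1.0 / return_periods, shape, loc=loc, scale=scale)
for rp, q in zip(return_periods, peaks):
    print(f"1-in-{rp}-year flow: {q:.0f} m3/s")
```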
4.1.3 Hydraulic model

At each river section, the CA2D model was used to simulate the propagation of the river discharges generated in the extreme value analysis step, producing reach-specific water depth footprints for each of the fixed exceedance probability levels. The CA2D model was run at a 3 arcsec (∼ 90 m) spatial resolution. The simulation time step is dynamic and varies between 0.01 and 15 s, with the maximum allowable time step defined by the Courant–Friedrichs–Lewy (CFL) condition (Courant et al., 1928), which is commonly adopted to preserve stability in computational fluid dynamics models. For each of the 78 000 river reaches, eight simulations were carried out using the 1-in-5, 10, 20, 50, 100, 200, 500, and 1000-year flows resulting from the extreme value analysis as boundary conditions. The MERIT Hydro model was used as a source of elevation data, and GlobeLand30 was used to derive roughness values from land use classes (Arcement and Schneider, 1989).

The calibration of the CA2D model primarily focused on reproducing historical event hydrographs in terms of volume and peak timing: for each river section, we built a flood hydrograph based on an estimate of the time of concentration, i.e. the time needed for the water to flow from the most remote point in a watershed to the watershed outlet (Giandotti, 1934). We assumed a triangular hydrograph reaching the flood peak at two-thirds of the concentration time and going back to zero at twice the concentration time, and we assumed the bankfull discharge to be the discharge at a 2-year return period. Flood protections were not explicitly accounted for in the CA2D simulations. Instead, we adjusted the water depth maps to reflect the presence of flood protections, assuming that areas with water depths below the designated level of protection would not incur any losses.
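A small sketch of the triangular design hydrograph assumed above, peaking at two-thirds of the concentration time and receding to zero at twice the concentration time; the peak flow and concentration time values are illustrative.

```python
# Sketch: triangular design hydrograph (peak at 2/3 t_c, zero at 2 t_c).
import numpy as np

def triangular_hydrograph(q_peak: float, t_conc_h: float, dt_h: float = 0.25):
    """Return (time, discharge) arrays for the assumed triangular shape."""
    t_peak, t_end = (2.0 / 3.0) * t_conc_h, 2.0 * t_conc_h
    t = np.arange(0.0, t_end + dt_h, dt_h)
    q = np.where(
        t <= t_peak,
        q_peak * t / t_peak,                                          # rising limb
        q_peak * np.clip((t_end - t) / (t_end - t_peak), 0.0, 1.0),  # recession
    )
    return t, q

# Example: 1-in-100-year peak of 850 m3/s, 12 h concentration time
t, q = triangular_hydrograph(q_peak=850.0, t_conc_h=12.0)
print(q.max(), t[q.argmax()])  # 850.0 m3/s at t = 8.0 h
```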
4.1.4 Flood extent map validation

For a single flood event that occurred in Hamadoni, Tajikistan, in 2005, reported losses, flood footprints, and river flow time series were available (Saidov et al., 2006; JICA, 2007). The availability of such data allowed for the validation of some of the components of the present model (namely the hydraulic model and the risk model), although this was limited to only one event. Nevertheless, and despite the caveats of validating a model against a single observation, showing the performance of the risk model for such an event has informative value. In order to validate the hydraulic model, the observed flood extent and the inlet discharge data were extracted from the JICA report (JICA, 2007). The flood event was quite long and involved a dyke breach and extensive damage to the nearby villages. The JICA study provides both a probable inundation area estimated from satellite data and a simulated flood footprint. The following factors contribute to the uncertainty in both the JICA results and the results of our study:

– Satellite data from SPOT and ASTER are only available before and significantly after the peak. This likely explains why the satellite-estimated flood map underestimates the flood extent.

– The inlet discharge was estimated by the peak discharge ratio from the data recorded at a different station, located 80 km upstream.

– The simulated water depth values are not available from the JICA study; we only have a figure that we superimposed over our flood footprint for a visual comparison.

– We did not simulate the dyke breach, as information on its location and the nature of the damage was not available.

We built our hydrograph by taking the data estimated by JICA and using them as input to the CA2D model to obtain the flood footprint for the event.

4.2 Stochastic catalogue of flood footprints

The results of the flood model simulations were the hazard maps, i.e. water depth maps at fixed return periods. While hazard maps provide the depth of inundation that can occur at a given location with a certain annual probability (or, conversely, with a certain return period), they are unable to describe the likelihood of concurrent flooding across multiple sites. This caveat limits their capability to assess risk over the full range of plausible scenarios, including the most extreme ones, which are of the highest concern to stakeholders. For this purpose, risk assessment models routinely use stochastic catalogues of events, i.e. datasets of synthetic event intensity footprints. This approach is typically used for rainfall events (Salazar et al., 2009; Francés et al., 2011), tropical cyclones (Bloemendaal et al., 2020), droughts (Guillod et al., 2018), and other perils. In this study, a flood depth hazard catalogue serves this purpose by providing a stochastic ensemble of 10 000 years of hypothetical floods that may occur in the region, with their annual frequencies of occurrence. To ensure spatial coherence in the stochastic catalogue, the spatial correlation of the river flow at each gauge/station is determined by computing a cross-correlation matrix based on all the available (observed and simulated) flow time series.

The methodology followed to produce the stochastic catalogue consists of the following steps:

1. Clustering. River sections are grouped into clusters under the assumption that flows at river sections in the same cluster are highly correlated random variables. The amount of correlation depends on the historical simulated flows, the locations of the stations, and their accumulated areas.

2. Cluster activation probability. The annual probability of activation of a cluster is computed, where activation is defined as an instance in which at least one river section in a given cluster exceeds the 5-year flow. This probability is based on the activation of clusters in the historical simulated flows.

3. Activation of river sections within a cluster. The average number of active river sections in a given year and its standard deviation are computed. A section is defined as active when the flow at that location exceeds the 5-year flow. These values are also based on the activation of clusters in the historical simulated flows.

4. Generation of the stochastic catalogue. Based on all the analyses above (clusters, annual activation probability, average number of active sections) and the hazard curves at each section, a stochastic catalogue is produced with a duration equivalent to 10 000 years. Every year consists of an annual flood footprint, i.e. a map in which each pixel represents the maximum water depth during that year.
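A heavily simplified, single-cluster sketch of the generation step follows. The activation probability and the mean and standard deviation of the number of active sections stand in for the quantities estimated from the 40-year simulated flows; spatial correlation and water depth sampling are omitted.

```python
# Sketch: stochastic catalogue generation for one cluster (simplified).
import numpy as np

rng = np.random.default_rng(7)
N_YEARS = 10_000
N_SECTIONS = 12          # river sections in this cluster (illustrative)
P_ACTIVATION = 0.15      # annual probability the cluster activates
MEAN_ACTIVE, STD_ACTIVE = 4.0, 2.0   # active sections per activation

catalogue = []           # one entry per year: ids of sections > 5-year flow
for year in range(N_YEARS):
    if rng.random() >= P_ACTIVATION:
        catalogue.append([])            # quiet year: no section activates
        continue
    n_active = int(np.clip(round(rng.normal(MEAN_ACTIVE, STD_ACTIVE)),
                           1, N_SECTIONS))
    active = rng.choice(N_SECTIONS, size=n_active, replace=False)
    catalogue.append(sorted(active.tolist()))

active_years = sum(1 for y in catalogue if y)
print(f"simulated activation frequency: {active_years / N_YEARS:.3f}")
```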
4.3 Climate change scenario

The climate scenarios used in this study are detailed in Ozturk et al. (2017). The regional climate model (RCM) RegCM4.3.5 from the International Centre for Theoretical Physics (ICTP) was driven by two different Coupled Model Intercomparison Project phase 5 (CMIP5) global climate models (GCMs): HadGEM2-ES from the UK Met Office Hadley Centre and MPI-ESM-MR from the German Max Planck Institute for Meteorology, under two emission scenarios (Representative Concentration Pathways, RCPs, 4.5 and 8.5). Based on predictive performance, we selected the MPI-ESM-MR GCM. We chose RCP4.5 over RCP8.5 because it aligns more closely with current emission trends and future reduction pledges (Pielke et al., 2022). The model was run over the Central Asian domain as defined by the Coordinated Regional Climate Downscaling Experiment (CORDEX; Giorgi et al., 2009), with corner points at 54.76° N–11.05° E, 56.48° N–139.13° E, 18.34° N–42.41° E, and 19.39° N–108.44° E. The impact of climate change on flood hazard was accounted for by estimating change factors for the ERA5-Land precipitation and temperature based on a comparison of the probability density functions (PDFs) of the current climate and the 1971–2100 projection.

Bias correcting climate projections before using them in hydrological modelling is standard practice and should always be carried out to avoid propagating the climate model biases into the hydrological model results (Shrestha et al., 2017; Teutschbein and Seibert, 2012). The methodology used here belongs to the "delta change" family described by Teutschbein and Seibert (2012). The literature on this methodology and its implications for hydrological model outputs is extensive and well documented; here we cite only a few examples (Räty et al., 2014, 2018; Mudbhatkal and Mahesha, 2018; Fang et al., 2015). It is simpler than other techniques, since it does not require bias correcting the baseline climatology (which remains the observed climatology), although it has the disadvantage that some properties of the variable to be corrected remain unadjusted. For example, if the precipitation from a certain climate projection is simply multiplied by a factor in order to reproduce the annual average of the reference dataset, the distribution of the original reference dataset is maintained and only the mean values are corrected; this is also called "constant scaling". However, the approach used in this paper, which adjusts the whole distribution of precipitation and temperature rather than only the mean or the standard deviation, limits this disadvantage. Räty et al. (2014), among others, have discussed the advantages and disadvantages of such a technique, which combines the simplicity of the delta factor methodologies with the robustness of the quantile mapping methodologies. We used a probability density function matching technique to modify the distributions of the current ERA5-Land variables (Lafon et al., 2013).

To derive the hazard maps for the time horizon 2080 (i.e. the time window 2071–2100), the entire modelling chain composed of hydrological modelling (TOPKAPI-X), extreme value analysis, and hydraulic modelling (CA2D) was fed with the modified ERA5-Land-derived meteorological input data (precipitation and temperature). Although we are fully cognisant that flood hazard and risk estimates under scenarios of climate change are affected by a very large uncertainty (Bubeck et al., 2011), it is of paramount importance that the effects of climate change be considered in any disaster risk assessment.
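A quantile-mapping flavour of this delta-change idea can be sketched as follows: multiplicative change factors are derived per quantile from the RCM's current and future distributions and applied to the baseline series, so the whole distribution is adjusted rather than only the mean. All input series here are synthetic placeholders, not the study's datasets.

```python
# Sketch: per-quantile "delta change" factors applied to a baseline series.
import numpy as np

rng = np.random.default_rng(1)
era5_precip = rng.gamma(2.0, 3.0, size=14_600)   # observed baseline (mock)
rcm_current = rng.gamma(2.0, 3.2, size=14_600)   # RCM, current climate (mock)
rcm_future  = rng.gamma(2.0, 3.8, size=14_600)   # RCM, 2071-2100 (mock)

quantiles = np.linspace(0.01, 0.99, 99)
# Multiplicative change factor per quantile (future / current climate)
factors = np.quantile(rcm_future, quantiles) / np.quantile(rcm_current, quantiles)

# Apply to each baseline value the factor of its own quantile, so the
# whole distribution is adjusted, not only the mean ("constant scaling")
edges = np.quantile(era5_precip, quantiles)
ranks = np.searchsorted(edges, era5_precip).clip(0, 98)
future_precip = era5_precip * factors[ranks]
print(era5_precip.mean(), future_precip.mean())
```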
5 Flood risk assessment

5.1 Vulnerability and exposure

Flood vulnerability for buildings was assessed using a component-based flood vulnerability model called INSYDE (Dottori et al., 2016). This model accounts for different measures of the event intensity (water depth but also flow velocity, flood duration, sediment load, water quality, etc.) and different components of the building (structural, non-structural, finishing, doors/windows, systems, basement, etc.) to derive a large set of curves for each component of the damage. These curves are then combined depending on the characteristics of the building categories. INSYDE is a very flexible vulnerability model, suitable for both data-rich and data-poor contexts. In this study, a specific vulnerability function relating water depth and level of damage was set up for each of the taxonomy categories (Scaini et al., 2024a, b). Note that the categorisation is, in some cases, based on criteria that are not relevant to flood risk (e.g. earthquake-resistant design does not affect flood damage); this was done to ensure compatibility with a companion earthquake risk model.

Some flood-relevant parameters were not explicitly considered in the categorisation due to a lack of spatialised data, for example the presence of a basement, the number of storeys, or the height of the ground floor above the surrounding terrain. These parameters were treated statistically. For example, if, within a certain category, 40 % of buildings have one storey and 60 % have two storeys, the final vulnerability curve was obtained as the weighted average of two curves, one for a one-storey building and the other for a two-storey building. The distributions of such parameters were obtained from the available literature (Pittore et al., 2011, 2020; The World Bank, 2017; Wieland et al., 2015), from local institutions (for example, the Kazakh Research and Design Institute of Construction and Architecture, KazNIISA), from local surveys (for example, in Dushanbe, Tajikistan, from Pittore et al., 2020), and from polls and consultations with local experts carried out during several workshops organised by the World Bank in 2021 and 2022.
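The statistical treatment described above reduces to a weighted average of storey-specific depth-damage curves, as in this minimal sketch; the curve values themselves are illustrative, not the study's calibrated curves.

```python
# Sketch: combining storey-specific depth-damage curves into one category
# curve using the weights from the example above (40 % / 60 %).
import numpy as np

depth = np.array([0.0, 0.5, 1.0, 2.0, 3.0])              # water depth (m)
curve_1storey = np.array([0.0, 0.25, 0.45, 0.80, 0.95])  # damage ratio
curve_2storey = np.array([0.0, 0.15, 0.30, 0.55, 0.70])

weights = {"one_storey": 0.40, "two_storey": 0.60}
category_curve = (weights["one_storey"] * curve_1storey
                  + weights["two_storey"] * curve_2storey)
print(np.round(category_curve, 3))  # weighted mean damage ratio per depth
```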
The component-based approach also requires unit costs for each component, i.e. the costs per unit (usually per metre, m², or m³) of cleaning, removing, or replacing each of the components. These costs were collected on site by local advisors and engineers through inquiries with engineers and architects involved in the design and pricing of buildings, and from engineering manuals and real estate catalogues (for example, ENiR, the uniform norms and prices for construction, installation, and repair works). Local knowledge was key in the construction of the building vulnerability curves, in terms of defining the unit costs of the components, archetype buildings, materials, etc. This knowledge, together with the literature cited above and the collaboration with local institutions and experts, led to vulnerability curves that are highly suitable for the local context, as opposed to the common practice of transferring curves developed elsewhere without considering the local context. This approach also allowed for the production of separate curves for each country.

The infrastructure vulnerability (e.g. roads, power plants, airports) was taken from the global flood depth–damage dataset developed by the European Union's Joint Research Centre (Dottori et al., 2018; Huizinga et al., 2017) and from HAZard United States (HAZUS) (FEMA, 2018), the natural hazard analysis tool developed and freely distributed by the US Federal Emergency Management Agency (FEMA). Flood vulnerability for the two prevalent crops, cotton and wheat, was derived from the literature: the cotton curve was derived from Qian et al. (2020), and the wheat curve was derived from similar crops, since no specific wheat curves were found for Central Asia or Asia in general, although vulnerability curves for other cereals in Asia exist (Baky et al., 2020; Hendrawan and Komori, 2021; Kwak et al., 2015; Molinari et al., 2019; Win et al., 2018).

An exposure database for the region (Scaini et al., 2024a, b), which includes residential and non-residential buildings, transportation infrastructure, and crops, was developed by assembling available global and regional datasets with country-based information provided by local authorities and research groups, including reconstruction costs. We refer to the original paper for more information. Although this article discusses only the flood-induced losses, this effort was part of a multi-hazard risk assessment study that also included an assessment of earthquake risk (for which please refer to Salgado-Gálvez et al., 2024). To compare risk across different perils, it was necessary to use a peril-agnostic assessment methodology such as that adopted in CAPRA, which uses a common representation of the disaster risk assessment components, i.e. hazard, exposure, and vulnerability.

5.2 Risk assessment

The flood risk of the five Central Asian countries was assessed by means of the CAPRA risk assessment software (Reinoso et al., 2018) according to the methodology displayed in Fig. 4. The CAPRA platform (https://www.ecapra.org, last access: 15 January 2025) is an open-source and free platform for multi-hazard probabilistic and deterministic risk assessment, developed with the initial financial support of the World Bank, the Inter-American Development Bank, and the United Nations Office for Disaster Risk Reduction (UNISDR) (Reinoso et al., 2018). The platform, which allows for multi-peril assessment using the probabilistic methodologies described in this paper, uses geographical information (for the exposure and hazard components) and produces economic losses aligned with the risk metrics typically employed in the insurance industry. Moreover, it produces GIS-compatible geospatial data layers with metadata, describing estimated losses per administrative unit and identifying the locations of key industrial sites and critical and supply infrastructure, together with the corresponding hazard intensity values at those locations, in either raster or vector format.

The loss estimation module allows for the estimation of both the economic losses for the assets in the exposure datasets and the corresponding human losses for each of the possible future events in the stochastic catalogue. Economic losses for each exposed asset are determined by combining the flood depth distribution at the site with the corresponding damage function. This yields a distribution of mean damage ratios (repair cost divided by asset replacement cost) for each asset; scaling this distribution by the total asset value gives the loss distribution caused by a flood event, and summing these losses over all exposed assets gives the total loss for the event.

The flood risk model presented here is designed to provide all the loss metrics needed to devise risk mitigation strategies, including the design of an insurance product. Key outcomes of the long-term flood risk assessment are the year loss tables (YLTs) for each of the five analysed countries. These YLTs detail the expected value and the corresponding uncertainty of the economic loss, together with each event's annual frequency of occurrence and a timestamp (ranging from year 1 to year 10 000) for each event in the stochastic set. The YLTs can then be used to derive the loss exceedance curve (LEC), also known as the exceedance probability (EP) curve (e.g. Mitchell-Wallace et al., 2017), which encapsulates loss occurrence characteristics and informs disaster risk management activities, such as regional and national risk financing and insurance development. Additionally, the model yields estimates of key risk measures such as the average annual loss (AAL) and the probable maximum loss (PML), commonly used for risk communication.
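The chain from the year loss table to the EP curve and the AAL can be sketched with a mock YLT holding one aggregate loss per simulated year; the loss distribution here is synthetic and purely illustrative.

```python
# Sketch: empirical EP curve, return-period losses, and AAL from a YLT.
import numpy as np

rng = np.random.default_rng(3)
n_years = 10_000
annual_losses = rng.pareto(2.5, size=n_years) * 1.0e6   # USD, synthetic YLT

aal = annual_losses.mean()                 # average annual loss

# Empirical annual exceedance probability of each loss level
sorted_losses = np.sort(annual_losses)[::-1]            # descending
exceed_prob = np.arange(1, n_years + 1) / (n_years + 1) # ascending

for rp in (5, 10, 100, 1000):              # return-period losses off the curve
    loss_rp = np.interp(1.0 / rp, exceed_prob, sorted_losses)
    print(f"1-in-{rp}-year loss: USD {loss_rp / 1e6:.1f} M")
print(f"AAL: USD {aal / 1e6:.2f} M")
```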
Since the simultaneous occurrence of earthquakes and floods (at least of those causing large losses of interest to stakeholders) is highly unlikely, the losses caused by the two types of events have been assumed to be independent.

5.3 Risk model calibration

Although all the components of the risk assessment developed in the project (hazard, exposure, and vulnerability) were separately validated against observations to the extent possible, it is good practice to calibrate and validate the risk model as a whole. This further calibration step, when needed, often adjusts the exposure and vulnerability modules to ensure better agreement between historical observations of economic losses and modelled losses. Risk model calibration and validation are typically carried out by comparing modelled and observed loss estimates for historical events and adjusting some of the model parameters or components to improve the goodness of fit. Observed loss estimates are usually obtained from post-event assessment reports and surveys. However, historical flood loss data in Central Asia are limited and uncertain, and this scarcity limits the efficacy of the flood risk model calibration effort.

Given the data limitation, it was decided to reduce the number of calibration parameters to a minimum. Hence, all the vulnerability curves were increased or decreased by the same amount; i.e. no differential calibration was carried out on the vulnerability curves of different exposure classes or different countries. Furthermore, only the residential building vulnerability curves were calibrated, since residential buildings account for the majority of the exposed value. Infrastructure and crop vulnerability functions were left unadjusted, as no data were available to justify a specific calibration of such curves.

Bearing in mind these data availability limitations and the objective of the present risk assessment (which is to estimate the underlying, long-term average flood risk), the model calibration was carried out as follows:

1. A list of historical events and reported losses was collected from local governments and agencies within the project.

2. The districts/regions affected by the historical floods were identified.

3. The risk model was run using the stochastic catalogue of flood footprints as input, for all the districts/regions previously identified.

4. The exceedance probability curves of all selected districts/regions were calculated based on the results of the simulations with the stochastic catalogue.

5. Based on the resulting exceedance probability curves, the return periods of the historical losses were computed (historical losses and district/region losses are comparable under the assumption that reported events are usually large floods that either affect the whole district/region or represent economic losses that are significant at the scale of the whole district/region).

6. The resulting return periods were critically analysed under the following assumptions:

a. Reported events are typically large events that make the news and are therefore relatively rare; it is expected that their return period is at least 5 years.

b. It is relatively unlikely that a reported flood event has a return period of more than 500–1000 years.

c. If a region has more than one reported event, it is highly unlikely that all events have return periods longer than 100 years.

d. In general, it is expected that most reported floods have a return period of between 5 and 100 years, with very few outliers.

7. If some of the above criteria were not met, the vulnerability curves were adjusted to increase or decrease the losses and obtain a better alignment with the criteria.
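A sketch of the plausibility check in steps 5–7, under criteria (a)–(d) above; the district EP curve and the reported losses are mock values, and the check here is a simplified reading of the criteria rather than the project's actual decision rule.

```python
# Sketch: implied return periods of reported losses from a district EP curve.
import numpy as np

# Mock modelled EP curve for one district: losses (USD M, descending)
# and their annual exceedance probabilities (ascending)
sorted_losses = np.array([90.0, 40.0, 18.0, 7.0, 2.0, 0.5])
exceed_prob = np.array([0.001, 0.005, 0.02, 0.1, 0.3, 0.6])

def implied_return_period(reported_loss: float) -> float:
    """Invert the EP curve: loss -> exceedance probability -> return period."""
    # np.interp needs ascending x, so flip both arrays
    p = np.interp(reported_loss, sorted_losses[::-1], exceed_prob[::-1])
    return 1.0 / p

reported = [5.0, 25.0]   # two reported historical losses (USD M)
rps = [implied_return_period(x) for x in reported]
print([round(r, 1) for r in rps])

# Criteria (a)-(d): most events between 5 and 100 years, none beyond ~1000,
# and, with several events in one region, not all rarer than 100 years
plausible = all(5 <= r <= 1000 for r in rps) and any(r <= 100 for r in rps)
print("vulnerability adjustment needed:", not plausible)
```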
The reported monetary and human losses were collected from a variety of sources, including EM-DAT (EM-DAT CRED, 2010), AON's catastrophe insight reports, Swiss Re's Sigma explorer, and, in some cases, local sources. The international databases already provide loss figures in USD. Data collected from local sources were converted from local currency to USD using the average exchange rate of the year in which the disaster happened. Loss values were then trended to account for inflation and economic growth since the flood occurred, using real gross domestic product (GDP) growth as a proxy. Although affected by large uncertainties, these are the only datasets available for model calibration, as presented below.

The rationale of this methodology is that, instead of providing direct comparisons between modelled and reported losses (which is not possible given the lack of available data), the calibration process aims to demonstrate that the model provides risk estimates in line with what has been observed over the past 20 years in terms of the frequency of the events and the severity of the economic losses. Given the objective and the limitations described above, this appears to be the most tenable strategy: it both exploits the (albeit scarce) available data and yields sensible loss estimates.

Human vulnerability curves were calibrated based on national-scale statistics of fatalities caused by floods. The vulnerability functions were adjusted so that the average number of fatalities per year provided by the model was similar to the values obtained from the official statistics.

5.4 Risk model validation

5.4.1 Exceedance probability (EP) loss curves

We compared the EP loss curves for regions where partial historical loss data were available (Table 3). This procedure consisted of verifying that the reported losses for eight historical floods were consistent with the results by region, in terms of the EP curves (i.e. checking that there were no systematic under- or overestimations of losses). Reported loss values were adjusted to account for price inflation and changes in exposure using GDP as a proxy (i.e. applying a factor accounting for the increase or decrease in GDP from the year the event occurred to the year the exposure database refers to). Other factors can be considered in this type of normalisation; however, given the very large uncertainty affecting the reported losses, only GDP was taken into account in order to avoid introducing further uncertainties and complexities into the observed values. Clearly, such reference values must be considered with caution and used solely as a sanity check rather than a thorough calibration. Figure 5 shows one of the results of these comparisons.

The Hamadoni 2005 flood event

Losses for the Hamadoni event are estimated to be of the order of USD 7 to 10 M (Saidov et al., 2006). The model presented in this paper estimated losses of USD 10 M, which shows very good agreement between the modelled and reported losses. However, the present model has been set up considering 2020 prices and exposure, while the reported losses refer to 2005 prices and exposure; the reported value must therefore be corrected to account for such factors. A simple way to do this is to account for price inflation and changes in exposure using GDP as a proxy, since changes in GDP over time are a measure of the changes in the economy of a country. According to World Bank figures, Tajikistan's GDP increased 3.52-fold from 2005 to 2020 (from USD 2.31 billion to USD 8.13 billion). However, in Tajikistan, historically high inflation has been compensated for by the loss of value of the local currency (somoni) relative to the US dollar: the somoni/USD ratio was around 0.34 in 2005 and around 0.10 in 2020, i.e. 29 % of the 2005 value. Accounting for both factors results in an adjustment factor of 3.52 × 0.29 ≈ 1.02, which means that losses for a 2005 event should only be increased by 2 % when normalised to 2020 exposure.
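As a worked illustration of this normalisation, the snippet below reproduces the adjustment-factor arithmetic described above. It is a minimal sketch using the rounded factors quoted in the text; the mid-range reported loss of USD 8.5 M is an assumption for illustration only.

```python
# Normalising a 2005 reported loss to 2020 price/exposure levels,
# using GDP as a proxy (a sketch of the arithmetic in the text;
# the rounded factors quoted in the paper are used directly).

gdp_growth = 3.52   # 2005 -> 2020 GDP increase (USD 2.31 bn -> 8.13 bn)
fx_change = 0.29    # 2020 somoni/USD rate as a fraction of the 2005 rate

adjustment = gdp_growth * fx_change   # overall normalisation factor

reported_loss_2005 = 8.5   # USD million; assumed mid-range of the 7-10 M estimate
normalised_loss_2020 = reported_loss_2005 * adjustment

print(f"adjustment factor: {adjustment:.2f}")                  # 1.02, i.e. +2 %
print(f"normalised loss:   {normalised_loss_2020:.1f} M USD")  # ~8.7 M USD
```

The two factors nearly cancel: nominal growth measured in USD is largely offset by the depreciation of the somoni, so the 2005 loss needs almost no adjustment.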
6 Results

6.1 Hydrological and flood model

Table 4 shows a summary of the performance metrics (correlation and percent bias on annual maximum streamflow) for the stations where observed streamflow was available. When assessing the model's skill, it must be borne in mind that this is a very large-scale model. From this perspective, the results are satisfactory for most of the stations, with limited exceptions. Figure 6 compares the CA2D-simulated flood map (in yellow) with the satellite-estimated flood map (left) and the JICA study's simulated flood map (right).

6.2 Flood hazard

6.2.1 Hazard maps

The fluvial flood hazard maps for the current climate conditions have been computed over the entire Central Asian area and specifically for the five countries of Kazakhstan, Kyrgyz Republic, Uzbekistan, Tajikistan, and Turkmenistan for the selected eight return periods of 5, 10, 20, 50, 100, 200, 500, and 1000 years. Two sets of fluvial flood hazard maps for current climate conditions were computed, namely for the undefended and defended scenarios. Figures 7 and 8 show some of the resulting hazard maps for a return period of 100 years. A comprehensive collection of the computed results of the flood hazard model can be found at https://datacatalog.worldbank.org/search?q=SFRARR&sort=&start=0 (last access: 15 January 2025).

6.2.2 Hazard curves for selected target cities

Figure 9 shows the derived fluvial flood hazard curves for undefended conditions for five selected locations within the urban areas of the main flood-prone cities of the target countries: Turkmenabat, Tashkent, Dushanbe, Astana (previously Nur-Sultan), and Bishkek.
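Hazard curves of this kind are computed at a fixed set of return periods, so values at intermediate return periods are typically obtained by interpolation. The sketch below is a hypothetical example: the depth values are invented, and log-linear interpolation of depth against return period is assumed as a common convention; the paper does not state which scheme was actually used.

```python
import numpy as np

# Hypothetical hazard curve for one city location: flood depth (m) at the
# eight return periods used in the study. Depth values are invented.
return_periods = np.array([5, 10, 20, 50, 100, 200, 500, 1000])
depths = np.array([0.2, 0.4, 0.7, 1.1, 1.4, 1.7, 2.1, 2.4])

def depth_at(rp):
    """Interpolate flood depth at an arbitrary return period,
    linearly in log(return period)."""
    return float(np.interp(np.log(rp), np.log(return_periods), depths))

print(f"{depth_at(75):.2f} m")  # depth between the 50- and 100-yr values
```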
6.3 Flood risk

Risk metrics

The results of the flood risk assessment are presented in terms of a loss exceedance probability (EP) curve and year loss tables (YLTs), disaggregated at administrative unit 1 (ADM1, equivalent to the regional level) and administrative unit 0 (ADM0, equivalent to the country level). Furthermore, the return period loss estimates and the average annual loss (AAL) at the ADM1 and ADM0 levels and for the whole region are provided in tabular format for the same eight return periods reported earlier, ranging from 5 to 1000 years. In addition, for preparedness and mitigation planning, it is important to estimate the possible losses (economic and human) that scenario events may cause to the current exposure. The loss results have been derived in terms of both expected values and their confidence intervals. Finally, exposure levels to various hazard intensity thresholds have been assessed for populations, key industrial sites, critical infrastructure, and supply infrastructure.

These results are available for the current conditions (year 2020) and for future (2080) scenarios considering three different projections to the year 2080: the three Shared Socio-Economic Pathways, SSP1, SSP4, and SSP5, considered in the development of the exposure model (presented in detail in Scaini et al., 2024a, b). The future exposure only considers the residential sector. Losses have been calculated for physical risk (monetary) and human risk (fatalities).

Probabilistic flood risk results

Table 5 shows the fluvial flood risk results (undefended case) at national and regional levels in absolute terms and relative to the total replacement cost of the exposure dataset, for the current exposure scenario. The highest absolute fluvial risk is found in Kazakhstan and Uzbekistan. However, when assessed in relative terms, Kazakhstan, Tajikistan, and Turkmenistan have similar risk values, with an AAL above 2 ‰. The same results for the defended case, shown in Fig. 10, highlight a large risk reduction, especially for Kazakhstan, Turkmenistan, and Uzbekistan. Figure 10 also shows the fluvial flood risk results at country and regional levels (undefended case) for one of the three projected scenarios, taking the effect of climate change into account. The aforementioned tables use the ISO 3 country codes: KGZ (Kyrgyz Republic), KAZ (Kazakhstan), TJK (Tajikistan), TKM (Turkmenistan), and UZB (Uzbekistan). The largest and most systematic differences among the examined scenarios stem from whether or not flood defences are factored in. In contrast, the influence of climate change, while noteworthy, varies more with the specific geographical context. The exposure dataset used in the flood risk assessment for the 2080 projection only includes the residential sector, although in terms of absolute losses the differences between the current scenario (which includes all lines of business) and the 2080 scenario (residential assets only) are not that large.
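The EP curves and AAL values reported above are derived from year loss tables. The following sketch is a generic illustration of that derivation, not the project's actual computation: the simulated yearly losses are random placeholders, and the Weibull plotting position is one common choice among several.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical year loss table: total loss per simulated year (USD million).
# In a real model these would come from the stochastic event catalogue.
n_years = 10_000
ylt = rng.pareto(2.5, n_years) * 3.0

# Average annual loss: the mean of the yearly losses.
aal = ylt.mean()

# Empirical EP curve: sort losses in descending order and assign each an
# annual exceedance probability using the Weibull plotting position.
losses = np.sort(ylt)[::-1]
exceed_prob = np.arange(1, n_years + 1) / (n_years + 1)

# Loss at a given return period, e.g. 100 years (exceedance prob. 0.01).
loss_100yr = np.interp(0.01, exceed_prob, losses)

print(f"AAL: {aal:.2f} M USD, 100-yr loss: {loss_100yr:.1f} M USD")
```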
As a general comment on the estimated flood losses, the exceedance probability curves tend to decrease sharply with increasing frequency. Three factors contribute to this quick decrease of the EP curves: one related to the hazard, one to the vulnerability, and one to the representation of flood defences. First, flood depth hazard curves (i.e. relationships between flood depth and frequency) in this region often have a rather "flat" shape (i.e. flood depth increases only gradually as events become rarer); this is typical of frequently inundated flat areas where floods are common but the difference between water depths for low-intensity and high-intensity events is not that large, because large alluvial floodplains provide plenty of space for water propagation. Second, the flood vulnerability curves developed for this project typically saturate at 30 %–50 % of the total exposed value, mainly because they represent asset classes that bundle buildings with different numbers of storeys together. This means that losses increase very slowly beyond a certain water depth, flattening the loss vs. frequency curve. This is typical of losses calculated for assets spread out over large regions, some of which are exposed to high flood risk while others are relatively safe. Finally, another important issue is the inclusion of flood defences in the model: a reliable representation of the flood defences would necessarily lower the high-frequency losses. However, very little data were available to precisely reproduce the flood defences in the region, and the results of the model are therefore considered conservative, especially in the high-frequency part of the EP curve. Because of the characteristics of the region (many large fluvial plains with a small population) and of the model (large-scale aggregation and unavailability of data on flood defences), we believe that the quick decrease of the flood loss curves is justified.

Some of the relative losses computed for the current (2020) and future (2080) scenarios are plotted in Figs. 11–13. The risk reduction immediately apparent when comparing the results of Figs. 11 and 12 is due to the inclusion of flood defences. For the undefended scenario, the largest relative AALs are found in Kazakhstan and Tajikistan, with values above 6 ‰. In the five considered countries, the largest relative AALs by sector are found for the transport and agricultural sectors (the two crop types included in this assessment: cotton and wheat). For cotton crops, the largest relative AALs are found in Kazakhstan, Turkmenistan, and Tajikistan, with values above 6 ‰.

Regarding flood fatality risk, the highest risk is found, as expected, for the undefended case. The largest values are found for the Akmolinskaya region in Kazakhstan and the Khatlon Province in Tajikistan. On average, at the regional level, there is a 20 % decrease in flood fatality risk in the defended case. Regarding future scenarios that consider climate change, the flood fatality risk shows a variable trend at the regional level, although it is consistent among the considered SSPs. In general, risk values in future scenarios increase by a factor between 1.5 and 2.0, for example in the following regions: Syr Darya in Uzbekistan, Issyk-Kul and Jalal-Abad in the Kyrgyz Republic, and Turkistan and Karagandiskaya in Kazakhstan. However, there are extreme cases, such as the Mangistauskaya region in Kazakhstan, where the risk increases 7-fold. Conversely, there are regions, such as Lebap (Turkmenistan), the Khatlon Province (Tajikistan), Samarkand (Uzbekistan), and Batken (Kyrgyz Republic), for which decreases between 80 % and 90 % are observed for all SSPs.

As expected, flood risk is lower for the defended case, although these results should be interpreted with caution due to the assumptions about the flood defences' locations and heights discussed earlier. That being said, a comparison between the two cases at the regional level can be made, and a discussion is provided next. Overall, the region with the largest flood AAL is the Badakhshan Autonomous Mountainous Region in Tajikistan. The largest relative difference between modelling and not modelling the flood defences is found in the Batken region (Kyrgyz Republic), although for the undefended case the flood risk AAL was relatively low (0.4 ‰). A major flood risk reduction due to the inclusion of the defences is observed in the Issyk-Kul region, with a decrease of around 40 %, which is significant considering the large flood risk AAL for the undefended case.
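As an illustration of the vulnerability saturation effect discussed above, the toy depth-damage function below caps the mean damage ratio at an assumed 40 % of the exposed value (within the 30 %–50 % range stated in the text). The exponential shape and all parameters are invented for illustration and do not reproduce the project's calibrated curves.

```python
import numpy as np

def damage_ratio(depth_m, saturation=0.40, scale=1.5):
    """Toy depth-damage curve: rises with water depth but saturates at
    `saturation` (fraction of total exposed value). Parameters are
    illustrative, not the project's calibrated values."""
    return saturation * (1.0 - np.exp(-np.asarray(depth_m) / scale))

for d in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"depth {d:>4.1f} m -> damage ratio {damage_ratio(d):.2f}")
# Beyond a few metres the ratio barely grows: deeper, rarer floods add
# little extra loss, which flattens the severe end of the loss curve.
```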
7 Discussion

This study presents the first high-resolution, regional-scale, fully probabilistic transboundary risk assessment for the area, providing decision-making aids and disaster risk management resources. Notably, the involvement of local stakeholders and unprecedented access to local data enhance its significance. Within the SFRARR project, multiple workshops and meetings with local stakeholders and experts were held, in particular eight capacity-building workshops devoted to the different risk assessment components, namely five country-based workshops on exposure assessment and three regional thematic workshops on hazard, vulnerability, and risk modelling. This activity was carried out in close collaboration with local experts and representatives from all five countries. The workshops provided an opportunity for participants to learn about international best practice and the latest methodologies related to natural risk assessments. They also facilitated knowledge sharing with local experts and enabled a greater amount of locally collected information to be included in the analysis. Obtaining daily discharge and hydraulic protection data from local sources proved to be complex due to variability in data quality and format. Compiling comprehensive hydraulic protection data at the country level was hindered by their highly classified and confidential nature, posing challenges for acquisition.

In terms of model performance, when comparing the reported losses with the EP curves (in terms of return periods), we observe that, for the Khatlon region in Tajikistan, the April 2006 flood is associated with a return period of approximately 650 years and the July 2005 flood with a return period of approximately 20 years, whereas floods with lower reported losses, such as the April 2014 and May 2021 floods, have a return period of approximately 2 years. The May 2010 flood corresponds to the event with the largest reported losses and, as per the EP curve calculated for this region, has a return period longer than 1000 years. Since the Khatlon region has experienced an exceptional number of reported events in the past 20 years, which is uncommon in the rest of the regions, it is reasonable to assume that some of the reported events are associated with such short return periods. Furthermore, note that, strictly speaking, comparing region-wide losses with event-specific losses in order to assess the reasonableness of the associated return periods is not correct. Small events only affect a portion of the region, and other events might have happened during the same year. The yearly regional losses are therefore, intuitively, larger than the event-specific losses that may have occurred in that year. Hence, we expect small, localised events to be associated with short regional-scale loss return periods.

On the other side of the spectrum, the large May 2010 event in the Khatlon Province appears to lie outside the limits of the exceedance probability curve. However, it must be noted that there is a large discrepancy among the different data sources in the reported losses for this event: Swiss Re reported a loss of around USD 200 M, whereas AON reported a loss of around USD 5 M. The overestimation of observed losses by one of the sources (or perhaps the inclusion of losses from landslides and mudslides, which are not included in this model) might be the cause of the very long estimated return period. In any case, the chance that a loss with a return period of 10 000 years or longer has been observed in a 20-year window is very small.
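This can be quantified with a standard back-of-the-envelope calculation: assuming independent years, the probability of observing at least one exceedance of the T-year loss in n years is P = 1 − (1 − 1/T)^n. For T = 10 000 years and n = 20 years, P = 1 − (1 − 1/10000)^20 ≈ 0.2 %, which is why such an observation is implausible and instead points to problems in the reported loss figures.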
Such an apparent outlier can be explained by the discrepancy in the reported losses, but, in general, extremely large losses associated with very long return periods are not tenable. This is why we calibrated the model to eliminate such cases. After calibration, the reported event loss values have plausible return periods when compared with the modelled losses from the subnational EP curves. Below, we outline the main strengths and limitations of this risk assessment.

7.1 Strengths

– A main strength of this risk assessment is that a peril-agnostic methodology was used, facilitating the comparison of results across countries, sectors, and hazards (earthquakes and floods). This was achieved by using the same representation for all the key risk components and by computing the same risk metrics using the same probabilistic approach.
– This is the first study in the region that disaggregates flood risk results to the subnational level (regions), national level (countries), and regional level (five countries), providing a complete disaster risk estimation and results compatible with the overall objectives of the project.
– The regional approach adopted for this risk assessment used consistent assumptions, modelling approaches, and treatment of uncertainties. This is key considering that the final objective of this study is the regional calculation of losses caused by floods of different types (pluvial floods are not shown here) and by different kinds of events (earthquakes).
– This is the first project in the region that considers a complete exposure dataset for the estimation of flood risk. Besides buildings (considered in previous studies), other relevant types of assets, such as transportation infrastructure (roads and bridges) and key crops (cotton and wheat), have been included too.
– Given that the software used to estimate the physical and human losses has a user-friendly graphical user interface and some GIS capabilities, the obtained flood risk results are expected to facilitate the capacity-building process in disaster risk assessment in Central Asia.
– The risk results obtained in this study provide losses for floods at the subnational level with a reasonable level of accuracy. This has been achieved by using a sufficient amount of local data for hazard modelling and risk validation and by adopting a high-resolution approach to the modelling of the hazard and exposure components.
– Because the exposure dataset was developed with different lines of business, all the loss results can be disaggregated by category. This information is valuable to different stakeholders at subnational to regional levels.
– The level of attention paid to most components of the flood risk model is higher than that of previous studies carried out in the region. The refined approach has been complemented by the inclusion of additional lines of business in the exposure datasets, which enabled a more comprehensive picture of the flood risk in the region.

7.2 Limitations

– The hydrological model calibration was done without lakes/reservoirs in the model. The influence of lakes and reservoirs was taken into account by allowing an overestimation of the flood peaks and an underestimation of the low flows for stations downstream of primary reservoirs.
These reservoirs effectively attenuate flood waves by storing water during peak events and releasing it during drier periods. This behaviour was not explicitly modelled due to a lack of information on reservoir management strategies. Hence, it was anticipated that the model would not precisely replicate observed hydrographs for stations located downstream of reservoirs. This issue was dealt with by correcting the extreme-value flow distributions downstream of reservoirs based on the available observations.
– We were unable to reproduce the effect of alluvial plains, which, similarly to reservoirs, alter flood waves by flattening peak flows. To address this, the same approach was adopted as for reservoirs: we allowed the overestimation of the peaks and the underestimation of the recession curves for stations located within alluvial plains and corrected the extreme-value flow distributions in alluvial plains based on the available observations.
– The hazard model is not supported by the level of detail and data accuracy often available when developing national- and subnational-scale models in other regions. However, until more detailed analyses are performed and made available to the public, these risk estimates can be used as a first-order quantification of risks. They are suitable both for raising awareness on this topic and for guiding the development of more refined analyses within the same probabilistic framework adopted here.
– Catastrophe risk models always carry an associated level of uncertainty, even when developed for current hazard and exposure characteristics. In this project, exposure was projected to the year 2080 (for the residential sector only) for different Shared Socio-Economic Pathways (SSPs) under one climate change scenario. These results are intended to be indicative and useful for comparison purposes only; the relative results should be preferred over the absolute losses.
– The risk estimates should not be used as the only support for planning and designing specific risk management infrastructure. Such applications should be informed by flood risk studies for specific areas that use more comprehensive sets of input data, such as the confidential and highly classified datasets available to central governments.

8 Conclusions

This article presented the methodological framework used to develop a fully probabilistic flood risk assessment for Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan, and Uzbekistan in Central Asia, together with the obtained risk estimates. The results are expressed in terms of EP curves, AALs, and specific return period losses, the metrics commonly used to shape different disaster risk management strategies. The risk assessment includes several variants of the hazard component (current and future, including climate change conditions) and of the exposure component (current, all lines of business; and future, residential line only, for different Shared Socio-Economic Pathways). The results of the risk assessment are for general use but were intended primarily to inform the World Bank's engagement in supporting regional and national disaster risk financing and insurance applications, including traditional and parametric solutions for the structuring of a regional risk mitigation programme.
These risk estimates can be used by the World Bank to initiate a policy dialogue with the governments of Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan, and Uzbekistan.
… 