Nat. Hazards Earth Syst. Sci., 20, 521–547, 2020
https://doi.org/10.5194/nhess-20-521-2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.
The whole is greater than the sum of its parts: a holistic graph-based
assessment approach for natural hazard risk of complex systems
Marcello Arosio1, Mario L. V. Martina1, and Rui Figueiredo1,2
1Department of Science, Technology and Society, Scuola Universitaria Superiore IUSS Pavia,
Piazza della Vittoria 15, 27100 Pavia, Italy
2CONSTRUCT-LESE, Faculty of Engineering, University of Porto, Porto, Portugal
Correspondence: Marcello Arosio (marcello.arosio@iusspavia.it)
Received: 27 September 2018 Discussion started: 8 October 2018
Revised: 16 January 2020 Accepted: 18 January 2020 Published: 24 February 2020
Abstract. Assessing the risk of complex systems to natu-
ral hazards is an important but challenging problem. In to-
day’s intricate socio-technological world, characterized by
strong urbanization and technological trends, the connections
and interdependencies between exposed elements are crucial.
These complex relationships call for a paradigm shift in col-
lective risk assessments, from a reductionist approach to a
holistic one. Most commonly, the risk of a system is esti-
mated through a reductionist approach, based on the sum of
the risk evaluated individually at each of its elements. In con-
trast, a holistic approach considers the whole system to be a
unique entity of interconnected elements, where those con-
nections are taken into account in order to assess risk more
thoroughly. To support this paradigm shift, this paper pro-
poses a holistic approach to analyse risk in complex systems
based on the construction and study of a graph, the math-
ematical structure to model connections between elements.
We demonstrate that representing a complex system such as
an urban settlement by means of a graph, and using the tech-
niques made available by the branch of mathematics called
graph theory, will have at least two advantages. First, it is
possible to establish analogies between certain graph metrics
(e.g. authority, degree and hub values) and the risk variables
(exposure, vulnerability and resilience) and leverage these
analogies to obtain a deeper knowledge of the exposed sys-
tem to a hazard (structure, weaknesses, etc.). Second, it is
possible to use the graph as a tool to propagate the damage
into the system, for not only direct but also indirect and cas-
cading effects, and, ultimately, to better understand the risk
mechanisms of natural hazards in complex systems. The fea-
sibility of the proposed approach is illustrated by an applica-
tion to a pilot study in Mexico City.
1 Introduction
We live in a complex world: today’s societies are inter-
connected in complex and dynamic socio-technological net-
works and have become more dependent on the services pro-
vided by critical facilities. Population and assets in natu-
ral hazard-prone areas are increasing, which translates into
higher economic losses (Bouwer et al., 2007). In coming
years, climate change is expected to exacerbate these trends
(Alfieri et al., 2017). In this context, natural hazard risk is
a worldwide challenge that institutions and private individ-
uals must face at both global and local scales. Today, there
is growing attention paid to the management and reduction
of natural hazard risk, as illustrated for example by the wide
adoption of the Sendai Framework for Disaster Risk Reduc-
tion (SFDRR, 2015).
1.1 Collective disaster risk assessment: traditional
approaches
The effective implementation of strategies to manage and re-
duce collective risk, i.e. the risk assembled by a collection of
elements at risk, requires support from risk assessment (RA)
studies that quantify the impacts that hazardous events may
have on the built environment, economy and society (Grossi
and Kunreuther, 2005). The research community concerned
with disaster risk reduction (DRR), particularly in the fields
of physical risk, has generally agreed on a common approach
for the calculation of risk (R) as a function of hazard (H),
exposure (E) and vulnerability (V): R = f(H, E, V) (e.g.
Balbi et al., 2010; David, 1999; IPCC, 2012; Schneiderbauer
and Ehrlich, 2004). Hazard defines the potentially damaging
events and their probabilities of occurrence, exposure repre-
sents the population or assets located in hazard zones that are
therefore subject to potential loss, and vulnerability links the
intensity of a hazard to potential losses to exposed elements.
This framework has been in use by researchers and practi-
tioners in the field of seismic risk assessment for some time
(Bazzurro and Luco, 2005; Crowley and Bommer, 2006) and
has more recently also become standard practice for other
types of hazards, such as floods (Arrighi et al., 2013; Falter
et al., 2015).
Despite the consensus on the conceptual definition of
risk, different stakeholders tend to have their own specific
perspectives. For example, while insurance and reinsurance
companies may focus on physical vulnerability and potential
economic losses, international institutions and national gov-
ernments may be more interested in the social behaviour of
society or individuals in coping with or adapting to hazardous
events (Balbi et al., 2010). As such, even though this risk
formulation can be a powerful tool for RA, it has its limits.
For instance, it does not consider social conditions, commu-
nity adaptation or resilience (i.e. a system’s capacity to cope
with stress and failures and to return to its previous state).
In fact, resilience is still being debated, and there is not a
common and consolidated approach for assessing it (Bosetti
et al., 2016; Bruneau et al., 2004; Cutter et al., 2008, 2010).
To overcome some of these limits, different approaches
have been put forward in recent research. For example, Car-
reño et al. (2007a, b, 2012) have proposed including an
aggravating coefficient in the risk equation in order to re-
flect socio-economic and resilience features. Another exam-
ple can be found in the Global Earthquake Model, which
aims to assess so-called integrated risk by combining hazard
(seismic), exposure and vulnerability of structures with met-
rics of socio-economic vulnerability and resilience to seismic
risk (Burton and Silva, 2015). Multi-risk assessment studies
resulting from a combination of multiple hazards and vulner-
abilities are also receiving growing scientific attention (Eakin
et al., 2017; Gallina et al., 2016; Karagiorgos et al., 2016; Liu
et al., 2016; Markolf et al., 2018; Wahl et al., 2015; Zscheis-
chler et al., 2018). These new approaches are seen with in-
creasing international interest, particularly with regard to cli-
mate change adaptation (Balbi et al., 2010; Terzi et al., 2019).
While some research has explored the potential of an in-
tegrated approach to risk and multi-risk assessment of nat-
ural hazards, quantitative collective RA still requires further
development to consider the connections and interactions be-
tween exposed elements. Although holistic approaches are in
strong demand (Cardona, 2003; Carreño et al., 2007b; IPCC,
2012), the majority of methods and especially models devel-
oped so far are based on a reductionist paradigm, which esti-
mates the collective risk of an area as the sum of the risk of its
exposed elements individually, neglecting the links between
them. In fact, the reductionist approaches disregard one
of the famous conjectures attributed to Aristotle (384–322 BCE): “a whole is
greater than the sum of its parts”.
1.2 Modelling natural hazard risk in complex systems:
state of the art and limitations
Modern society increasingly relies on interconnections. The
links between elements are now crucial, especially consider-
ing current urbanization and technological trends. Complex
socio-technological networks, which increase the impact of
local events on broader crises, characterize the modern tech-
nology of present-day urban society (Pescaroli and Alexan-
der, 2016). Such aspects support the perception that collec-
tive risk assessment requires a more comprehensive approach
than the traditional reductionist one, as it needs to involve
“whole systems” and “whole life” thinking (Albano et al.,
2014). The reductionist approach, in which the “risks are an
additive product of their constituent parts” (Clark-Ginsberg
et al., 2018), contrasts with the complex nature of disas-
ters. In fact, these tend to be strongly non-linear, i.e. the ultimate
outcomes (losses) are not proportional to the initial
event (hazard intensity and extent) and are expressed by
emergent behaviour (i.e. macroscopic properties of the complex
system) that appears when a large number of single entities
(agents) operate in an environment, giving rise to more complex
behaviours as a collective (Bergström, Uhr and Frykmer,
2016). In the last decade, many disasters have shown high
levels of complexity and the presence of non-linear paths and
emergent behaviour that have led to secondary events. Exam-
ples of such large-scale extreme events are the eruption of the
Eyjafjallajökull volcano in Iceland in 2010, which affected
Europe’s entire aviation system, the flooding in Thailand in
2011, which caused a worldwide shortage of computer com-
ponents, and the energy distribution crisis triggered by Hur-
ricane Sandy in New York in 2012.
Secondary events (or indirect losses) due to dependency
and interdependency have been thoroughly analysed in the
field of critical infrastructures such as telecommunications,
electric power systems, natural gas and oil, banking and fi-
nance, transportation, water supply systems, government ser-
vices, and emergency services (Buldyrev et al., 2010). Ri-
naldi et al. (2001), in one of the most quoted papers on this
topic, proposed a comprehensive framework for identifying,
understanding and analysing the challenges and complexi-
ties of interdependency. Since then, numerous works have
focussed on the issue of systemic vulnerability due to the in-
crease in interdependencies in modern society (e.g. Lewis,
2014; Menoni et al., 2002; Setola et al., 2016). Menoni
(2001) defines systemic risk as “the risk of having not
just statistically independent failures, but interdependent, so-
called ‘cascading’ failures in a network of N interconnected
system components.” The article also highlights that “In such
cases, a localized initial failure (‘perturbation’) could have
disastrous effects and cause, in principle, unbounded dam-
age as N goes to infinity." Ouyang (2014) reviews existing
modelling approaches of interdependent critical infrastruc-
ture systems and categorizes them into six groups: empiri-
cal, agent-based, system dynamics-based, economic-theory-
based, network-based and others. This wide range of mod-
els reflects the different levels of analysis of critical infras-
tructures (physical, functional or socio-economic). Trucco
et al. (2012) propose a functional model aimed at propagating
impacts, within and between infrastructures, in terms
of disservice due to a wide set of threats, and apply
it to a pilot study in the metropolitan area of Milan. Pant
et al. (2018) proposed a spatial network model to quantify
flood impacts on infrastructures in terms of disrupted cus-
tomer services both directly and indirectly linked to flooded
assets. These analyses could help flood risk management
practitioners identify and compare critical infrastructure
risks on flooded and non-flooded land, prioritize flood pro-
tection investments, and improve the resilience of cities.
However, this well-developed branch of research is mostly
focussed on the analysis of a single infrastructure typol-
ogy, and the aim is usually to assess the efficiency of the
infrastructure itself rather than the impact that its failure
may have on society. In particular, “representations of in-
frastructure network interdependencies in existing flood risk
assessment frameworks are mostly non-existent” (Pant et al.,
2018). These interdependencies are crucial for understanding
how the impacts of natural hazards propagate across infras-
tructures and towards society.
A whole branch of research analyses the complex social–
physical–technological relationships of society from
a system-of-systems (SoS) perspective, whereby systems are
merged into one interdependent system of systems. In a SoS,
people belong to and interact within many groups, such as
households, schools, workplaces, transport, healthcare sys-
tems, corporations and governments. In a SoS, the depen-
dencies are therefore distinguished between links within the
same system or between different systems (Alexoudi et al.,
2011). The relations between different systems are mod-
elled in the literature using qualitative graphs or flow dia-
grams (Kakderi et al., 2011) and by matrices (Abele and
Dunn, 2006). Tsuruta and Kataoka (2008) use matrices to de-
termine damage propagation within infrastructure networks
(e.g. electric power, waterworks, telecommunication, road)
due to interdependency, based on past earthquake data and
expert judgement. Menoni (2001) proposes a framework
showing major systems interacting in a metropolitan envi-
ronment based on observations of the Kobe earthquake. Lane
and Valerdi (2010) provide a comparison of various SoS def-
initions and concepts, while Kakderi et al. (2011) have deliv-
ered a comprehensive literature review of methodologies to
assess the vulnerability of a SoS.
1.3 Positioning and aims
The aspects of complexity and interdependency have been
investigated by various models of critical infrastructure as a
single system, or as systems of systems, which are networks
by construction (e.g. drainage system or electric power net-
work; Holmgren, 2006; Navin and Mathur, 2015). However,
the current practice related to both the single system and SoS
needs further research, in particular when it comes to mod-
elling the complexity of interconnections between individual
elements that do not explicitly constitute a network, which
tends to be neglected by traditional reductionist risk assess-
ments. In fact, although several authors have shown how to
model risk in systems which are already networks by con-
struction (Buldyrev et al., 2010; Reed et al., 2009; Rinaldi,
2004; Zio, 2016), fewer have addressed the topic of risk mod-
elling in systems where that is not the case, i.e. systems that
are not immediately and manifestly depicted as a network
(Hammond et al., 2013; Zimmerman et al., 2019). These in-
clude cities, regions or countries, which are complex systems
made of different elements (e.g. people, services, factories)
connected in different ways among each other in order to
carry out their own activities. Therefore, in this paper we
would like to promote an approach, which has previously
received the attention of other authors, to model the inter-
connections between the elements that constitute those sys-
tems and assess collective risk in a holistic manner. The ap-
proach involves the translation of the complex system into a
graph, i.e. a mathematical structure used to model relations
between elements. This allows modelling and assessing in-
terconnected risk (due to the complex interaction between
human, environment and technological systems) and cascad-
ing risk (which results from escalation processes). The inter-
actions between elements at risk and their influence on indi-
rect impacts are assessed within the framework of graph the-
ory, the branch of mathematics concerned with graphs. The
results can be used to support more informed DRR decision-
making (Pescaroli and Alexander, 2018).
The aims of this paper can be summarized as follows:
– to call for a paradigm shift from a reductionist to a holis-
tic approach to assess natural hazard risk, supported by
the construction of a graph;
– to show the potential advantages of the use of a graph,
namely (1) understanding fundamental aspects of com-
plex systems which may have relevant implications to
natural hazard risk, leveraging well-known graph prop-
erties, and (2) using the graph as a tool to model the
propagation of impacts of a natural hazard and, eventu-
ally, assess risk in complex systems;
– to present the feasibility of implementing the approach
through a pilot study in Mexico City;
– to discuss the limitations, potentialities and future de-
velopments of this approach compared to other more
traditional approaches.
2 Methodology
In this section, which presents the methodology, we aim to
answer the three following questions:
1. How can a complex system that does not explicitly
constitute a network be “translated” into a graph?
2. Which properties of the graph could give us insights on
the risk-related properties of the system?
3. How can the impacts of a natural hazard be propagated
by means of the graph?
The answers to these questions are formulated by proposing the
workflow of the graph-based approach, which is divided into
three main steps, described in Sect. 2.1, “Construction of the
graph”; Sect. 2.2.2, “Analogy between graph properties and
risk variables”; and Sect. 2.3, “Hazard impact propagation
within the graph”.
The workflow is presented in Fig. 1.
2.1 Construction of the graph
The construction of a graph for systems already in the form
of a network is well developed and consolidated in the lit-
erature (e.g. Rinaldi, 2004; Setola et al., 2016). Instead, the
use of the graph theory and the exploitation of its diagnosis
tools for systems not already structurally in the form of a
network is relatively new. In this regard, in this section we
propose a procedure to build a graph for a complex system
such as a city by linking the individual elements constituting
it.
The graph construction phase starts by defining the hypotheses
of the analysis and the system boundaries according to the
objectives of each specific context. In particular, this phase
establishes the two main objects of the graph, vertices (nodes)
and edges (links), and their characteristics.
The nodes can theoretically represent all the entities that
the analysis wants to consider: physical elements like a sin-
gle building, bridge and electric tower; suppliers of services
such as schools, hospitals and fire brigades; or beneficiaries
such as population, students or specific vulnerable groups
such as elderly people. Due to the very wide variety of ele-
ments that can be chosen, it is necessary to select the category
of nodes most relevant to the specific context of analysis. It is
also necessary to define, for each node, the operational state,
which can be characterized from a simplistic Boolean state
(functional or non-functional) to discrete states (30 %, 60 %
or 100 % of service or functionality) or even a complete con-
tinuous function (similar to vulnerability functions). In a
graph, the states of each node depend both on the states of
the adjacent nodes and on the hazard. In this paper, we use
the term node to refer to its graph characteristics and the term
element to refer to the entity that it represents in the real world.
The links between the nodes that create the graph can range
from physical to geographical, cyber or logical connections
(Rinaldi et al., 2001). According to the different typologies of
connections and nodes selected, it is necessary to define the
direction and weight of the links. The graph will be directed
when the direction of the connection between elements is rel-
evant, and it will be weighted if the links have different
importance, intensity or capacity.
In defining the topology, it is crucial to set the level of
detail of the analysis coherently with its scope and scale, both for
the selection of elements and for the relationships between el-
ements that need to be considered. In the case of very high
detail, for example, a node of the graph could represent a
single person within a population, and in the case of lower
resolution, it could represent a large group of people with a
specific common characteristic, such as living in the same
block or having the same hobby. In the case of analyses at a
coarser level, an entire network (e.g. electric power system)
can be modelled as a single node of another larger network
(e.g. national power system). The definition of the topological
structure of the graph also immediately identifies the system
boundaries (e.g. which hospitals are to be considered in the anal-
ysis: only those in the potential flood area, those in the district or
those in the region?). To what extent is it necessary to con-
sider elements as nodes of the graph? The topology def-
inition is a necessary step in performing the computational
analysis, and it introduces approximations of open systems
that need to be acknowledged.
Once the graph is conceptually defined, in order to ac-
tually build the graph, it is then necessary to establish the
connection between all the selected elements. The relations
described above determine the existence of connections be-
tween categories of elements, but they do not define how a
single node of one category is linked to a node of another cat-
egory. Therefore, it is necessary to define rules that establish
the connections between each single node. For the sake of
clarity, an example could be the following: the conceptual re-
lationship is defined between students and school (“students
go to school”); subsequently, it is necessary to make the link
between each student and a school in the area, applying a rule
such as “students go to the closest school”. This is an exam-
ple of geographical connection with nodes that are linked by
their spatial proximity.
The connections between the single elements can be rep-
resented either by a list of pairs of nodes or, more frequently,
by the adjacency matrix. Any graph G with N nodes can in fact be
represented by its adjacency matrix A(G) with N × N
elements A_ij, whose value is A_ij = A_ji = 1 if nodes i and j
are connected and 0 otherwise. If the graph is weighted,
A_ij = A_ji can take a value between 0 and 1, expressing the
weight of the connection between the nodes. The properties
Figure 1. Workflow.
of the nodes are represented in both cases by another matrix,
with a column for each property associated with the node
(e.g. name, category, type). In practical terms, the list of all
connections or the adjacency matrix can be automatically ob-
tained via GIS analysis, in the case of geographical connec-
tions, or by database analysis, in the case of other categories
of connections. The list of nodes, together with either the list
of links or the adjacency matrix, are the inputs for building
the mathematical graph.
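As a minimal sketch of this final step, the snippet below assembles a hypothetical node list and link list (placeholders for tables exported from a GIS or database analysis) into a directed graph with the igraph package for R, the package also used later in the paper, and derives the corresponding adjacency matrix. Names and weights are purely illustrative.

```r
# Minimal sketch: building a directed graph from a node list and a link list
# with the igraph package for R (https://igraph.org/r/). The tables below are
# hypothetical placeholders for lists exported from a GIS or database analysis.
library(igraph)

nodes <- data.frame(
  name     = c("school_1", "hospital_1", "block_1", "block_2"),
  category = c("school", "hospital", "block", "block")
)

# Each row is a provider -> receiver link (e.g. "students go to the closest school").
links <- data.frame(
  from   = c("school_1", "hospital_1", "hospital_1"),
  to     = c("block_1",  "block_1",    "block_2"),
  weight = c(1, 1, 1)          # optional weights for a weighted graph
)

g <- graph_from_data_frame(d = links, vertices = nodes, directed = TRUE)

# The adjacency matrix A(G) is an equivalent representation of the same graph.
A <- as_adjacency_matrix(g, attr = "weight", sparse = FALSE)
print(A)
```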
Once a graph has been set up and constructed, it is then
possible to compute and analyse its properties by means of
graph theory and propagate the hazard impact into the graph,
as illustrated in the following sub-sections.
2.2 Analysis of the graph properties
2.2.1 Summary of relevant graph properties
The mathematical properties of a graph can be studied us-
ing graph theory (Biggs et al., 1976), the branch of
mathematics that studies graphs (Barabasi,
2016). Graphs can represent networks of physical elements
in the Euclidean space (e.g. electric power grids and high-
ways) or of entities defined in an intangible space (e.g. col-
laborations between individuals; Wilson, 1996). Since its in-
ception in the 18th century (Euler, 1736), graph theory has
provided answers to questions in different sectors, such as
pipe networks, roads and the spread of epidemics. Over re-
cent decades, studies of graph concepts, connections and re-
lationships have strongly accelerated in every area of knowl-
edge and research (from physics to information technology,
from genetics to mathematics and to building and urban de-
sign), showing the image of a strongly interconnected world
in which relationships between individual objects are often
more important than the objects themselves (Mingers and
White, 2009).
Formally, a complex network can be represented by a
graph G which consists of a finite set of elements V(G)
called vertices (or nodes, in network terminology) and a set
E(G) of pairs of elements of V(G) called edges (or links, in
network terminology; Boccaletti et al., 2006). The graph can
be undirected or directed (Fig. 2a and b). In an undirected
graph, each of the links is defined by a pair of nodes i and j
and is denoted as l_ij. The link is said to be incident in nodes
i and j, or to join the two nodes; the two nodes i and j are re-
ferred to as the end nodes of link l_ij. In a directed graph, the
order of the two nodes is important: l_ij stands for a link from
i to j, node i points to node j, and l_ij ≠ l_ji. Two nodes joined
by a link are referred to as adjacent (Börner et al., 2007; Luce
and Perry, 1949). In addition, a graph could have edges of
different weights representing their relative importance, ca-
pacity or intensity. In this case, a real number representing
the weight of the link is associated to it, and the graph is said
to be weighted (Fig. 2c; Börner et al., 2007).
A short list of the most common node, edge and
graph measures used in graph theory is presented here and
summarized in Table 1 (Nepusz and Csard, 2018; Newman,
2010). There are measures that analyse the properties of
nodes or edges, local measures that describe the neighbour-
hood of a node (single part of the system) and global mea-
sures that analyse the entire graph (whole system). From a
holistic point of view, it is important to note that since some
node and edge measures require the examination of the com-
plete graph, this allows looking at the studied area as a unique
entity that results from the connections and interactions be-
tween its parts and characterizing the whole system.
The degree (or connectivity, k) of a node is the number
of edges incident with the node. If the graph is directed, the
degree of the node has two components: the number of out-
going links (referred to as the degree-out of the node) and the
number of ingoing links (referred to as the degree-in of the
node). The distribution of the degree of a graph is its most
basic topological characterization, while the node degree is
a local measure that does not take into account the global
properties of the graph. On the contrary, path lengths, close-
ness and betweenness centrality are properties that consider
the complete graph. The path length is the geodesic length
from node i to node j: in a given graph, the maximum value
of all path lengths is called diameter and the average shortest
path length is called the characteristic path length. Closeness
is the shortest path length from a node to every other node
in the network, and betweenness is defined as the number
of shortest paths between pairs of nodes that pass through a
given node.
Other relevant characteristics that are commonly analysed
in directed graphs to assess the relative importance of a node,
in terms of the global structure of the graph, are the hub and
authority properties. A node with a high hub value points
to many other nodes, while a node with a high authority
value is linked by many different hubs. Mathematically, the
authority value of a node is proportional to the sum of the
hub values of the nodes pointing to it, and the hub value of a node is
proportional to the sum of the authority values of the nodes it points to
(Nepusz and Csard, 2018; Newman, 2010). In the World
Wide Web, for example, websites (nodes) with higher author-
ities contain the relevant information on a given topic (e.g.
https://www.wikipedia.org/, last access: 2 February 2020),
while websites with higher hubs point to such information
(e.g. https://www.google.com/, last access: 2 February 2020).
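For concreteness, the following sketch computes the measures introduced above (degree, closeness, betweenness, diameter, characteristic path length, hub and authority values) on a small hypothetical directed provider–receiver graph using igraph for R; the node names and links are purely illustrative.

```r
# Sketch: computing the graph measures discussed above on a small hypothetical
# directed graph (provider -> receiver), using igraph for R.
library(igraph)

g <- graph_from_literal(bridge --+ block, hospital --+ block, school --+ block,
                        bridge --+ school, bridge --+ hospital)

degree(g, mode = "in")             # degree-in: number of ingoing links per node
degree(g, mode = "out")            # degree-out: number of outgoing links per node
closeness(g, mode = "all")         # closeness centrality
betweenness(g, directed = TRUE)    # betweenness centrality
diameter(g, directed = TRUE)       # maximum path length
mean_distance(g, directed = TRUE)  # characteristic (average shortest) path length

hub_score(g)$vector                # hub values: high for nodes pointing to many others
authority_score(g)$vector          # authority values: high for nodes pointed to by hubs
```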
The mathematical properties presented above are useful
metrics for analysing the structural (i.e. network topology,
arrangement of a network) and functional (i.e. network dy-
namics, how the network status changes after perturbation)
properties of complex networks. Depending on the statisti-
cal properties of the degree distributions, there are two broad
classes of networks: homogeneous and heterogeneous (Boc-
caletti et al., 2006). Homogeneous networks show a distribu-
tion of the degree with a typically exponential and fast de-
caying tail, such as a Poissonian distribution, while heteroge-
neous networks have a heavy-tailed distribution of the de-
gree, well-approximated by a power-law distribution. Many
real-world complex networks show power-law distribution of
the degree, and these are also known as scale-free networks
because power laws have the same functional form on all
scales (Boccaletti et al., 2006). Networks with highly hetero-
geneous degree distribution have few nodes linked to many
other nodes (i.e. few hubs) and a large number of poorly con-
nected elements.
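As an illustration of this distinction, the sketch below generates a homogeneous random graph and a heterogeneous scale-free graph with standard igraph generators and compares their degree distributions; the sizes and parameters are illustrative only.

```r
# Sketch: homogeneous vs. heterogeneous degree distributions (illustrative parameters).
library(igraph)

n <- 10000
g_random    <- sample_gnp(n, p = 6 / n)          # random graph, Poissonian degree distribution
g_scalefree <- sample_pa(n, power = 1, m = 3,    # preferential attachment, power-law tail
                         directed = FALSE)

# Fraction of nodes with degree k = 0, 1, 2, ...
dd_random    <- degree_distribution(g_random)
dd_scalefree <- degree_distribution(g_scalefree)

# The heterogeneous graph has a heavy tail: a few hubs with very high degree.
max(degree(g_random))      # typically close to the mean degree
max(degree(g_scalefree))   # typically much larger than the mean degree
```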
The properties of the static network structure are not al-
ways appropriate for fully characterizing real-world net-
works that also display dynamic aspects. There are examples
of networks that evolve with time or according to external
environment perturbations (e.g. removal of nodes or links).
Two important properties for exploring the dynamic response
to a perturbation are percolation thresholds and fragmenta-
tion modes.
Percolation was born as the model of a porous medium but
soon became a paradigm model of statistical physics. Wa-
ter can percolate in a medium if a large number of links ex-
ists (i.e. the presence of links means the possibility of water
flowing through the medium), and this depends largely on
the fraction of links that are maintained. When the graph is
characterized by many links, there is a higher probability that
connection between two nodes may exist and, in this case, the
system percolates. Vice versa, if most links are removed, the
network becomes fragmented (Van Der Hofstad, 2009). The
percolation threshold is an important network feature result-
ing from the percolation concept, which is obtained by re-
moving vertices or edges from a graph. When a perturbation
Figure 2. Graph representation of a network. (a) Undirected. (b) Directed. (c) Weighted directed.
Table 1. Properties of a graph G with N nodes defined by its adjacency matrix A(G) with N × N elements a_ij, whose value is a_ij > 0 if
nodes i and j are connected and 0 otherwise.

Degree (k): the number of edges incident with the node.
  k_i = Σ_j a_ij

Diameter (D): the maximum value of all path lengths.
  D = max_{i,j} d_ij, where d_ij is the geodesic length from node i to node j (i.e. path length)

Characteristic path length (d̄): the average shortest path length.
  d̄ = 1 / (N·(N−1)) · Σ_{i,j (i≠j)} d_ij

Closeness (c): the shortest path length from a node to every other node in the network.
  c_i = 1 / l_i, where l_i = 1 / (N−1) · Σ_j d_ij

Betweenness (b): the number of shortest paths between pairs of nodes that pass through a given node.
  b_i = Σ_{j,k} n_jk(i) / n_jk, where n_jk is the number of shortest paths connecting j and k, and n_jk(i) is the number of those paths passing through node i

Authority (x): the value proportional to the sum of the hub values of the nodes pointing to it.
  x_i = α · Σ_j a_ji · y_j (x is the principal eigenvector of A^T·A), where α is a proportionality constant

Hub (y): the value proportional to the sum of the authority values of the nodes it points to.
  y_i = β · Σ_j a_ij · x_j (y is the principal eigenvector of A·A^T), where β is a proportionality constant

Percolation threshold (pc): the minimum value of the fraction of remaining nodes (p) that leads to the connectivity phase of the graph.
  For a random graph, pc = 1 / ⟨k⟩, where ⟨k⟩ is the average degree
is simulated as a removal of nodes or links, the fraction of
nodes removed is defined as f = Nodes_removed / Nodes_total, and the proba-
bility of nodes and links present in a percolation problem is
p = 1 − f = Nodes_remaining / Nodes_total. Consequently, it is possible to de-
fine the percolation threshold (pc) as the minimum value of
p that leads to the connectivity phase of the graph (Gao et al.,
2015). In practical terms, the percolation threshold discrim-
inates between the connected and fragmented phases of the
network. In a random network (i.e. a network with N nodes
where each node pair is connected with probability p), for
example, pc = 1/⟨k⟩, where ⟨k⟩ is the mean of the degree k (Bunde
and Havlin, 1991).
The second property that investigates dynamic evolution
is the fragmentation (i.e. number and size of the portions of
the network that become disconnected). The number and the
size of the sub-networks obtained after removing the ver-
tices and edges provide useful information. In the case of
a so-called giant component fragmentation, the network re-
tains a high level of global connectivity even after a large
amount of nodes have been removed, while in the case of to-
tal fragmentation, the network collapses into small isolated
portions. For this reason, “keeping track of the fragmenta-
tion evolution permits the determination of critical fractions
of removed components (i.e. fraction of component deletion
at which the network becomes disconnected), as well as the
determination of the effect that each removed component has
on network response” (Dueñas-Osorio et al., 2004).
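A simple way to explore both properties numerically is to remove increasing fractions of nodes at random and track the relative size of the giant (largest connected) component, as in the sketch below; the random test graph and all parameters are illustrative, not taken from the pilot study.

```r
# Sketch: exploring percolation and fragmentation by random node removal.
# For each fraction f of removed nodes, the relative size of the giant
# (largest connected) component of the remaining graph is recorded.
library(igraph)

giant_fraction <- function(g, f) {
  n_remove <- round(f * vcount(g))
  g_pert   <- delete_vertices(g, sample(V(g), n_remove))
  if (vcount(g_pert) == 0) return(0)
  max(components(g_pert)$csize) / vcount(g)
}

set.seed(1)
g  <- sample_gnp(10000, p = 4 / 10000)   # random graph with mean degree ~4
fs <- seq(0, 1, by = 0.05)
gc <- sapply(fs, giant_fraction, g = g)

# For a random graph, the connectivity phase is expected to break down around
# p_c = 1 / <k>, i.e. around a removed fraction f_c = 1 - 1 / mean(degree(g)).
cbind(f = fs, giant_component_fraction = gc)
```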
2.2.2 Analogy between graph properties and risk
variables
The proposed graph properties can be used to more thor-
oughly characterize systems of exposed elements. In fact, the
traditional conceptual skeleton to describe risk can still be
adopted within the framework of the proposed graph-based
approach. The properties calculated from a graph constitute a
new layer of information for some of those risk variables, one that
goes beyond their traditional interpretations within the reduc-
tionist paradigm. In particular, they provide a more compre-
hensive characterization of the single nodes (deriving from
their relationships with other nodes) as well as of the sys-
tem as a whole. As such, from the risk variables presented
in Sect. 1, the hazard preserves its traditional definition as
an event that can impact such systems, or parts of them, with
certain intensities and associated probabilities of occurrence.
For the three other variables, namely exposure, vulnerability
and resilience, below we propose and discuss
analogies with the graph
properties presented in the previous sub-section. The analo-
gies are summarized in Table 2.
Exposure
Analogous to the traditional approach but at the same time
extending its concept, the value of each exposed element can
be estimated as the relative importance that is given to it by
the graph, which is measured by the network itself by means
of the connections that point to each node. In graph theory,
this relative importance among elements, based on standard-
ized values, can be investigated through the authority analy-
sis. A high authority value of a node indicates that there are
many other nodes (or otherwise some hubs) that provide ser-
vices (i.e. providers or suppliers) to that node. In other words,
the system privileges it compared with others according to
their connections with the provider nodes. For example, a
factory settled in an industrial district may receive more ser-
vices (e.g. electric power, roads for heavy vehicles, logistic
systems) than a factory located in the old quarter of a city; in
this case, the former is structurally privileged by the system
compared with the latter.
Vulnerability
In the reductionist approach, vulnerability is the propensity
of an asset to be damaged because of a hazardous event. By
adopting a graph perspective, the vulnerability can be esti-
mated both for the single node as well as for the system as a
whole.
In the first case, the vulnerability depends on the relation-
ship that the node has with the others. In particular, the close-
ness represents the likelihood of a node to be affected indi-
rectly by a hazard event due to the lack of services provided
by other nodes. A lower value of closeness, i.e. shorter
path lengths from a node to every other node in the network,
means a higher probability of a node being impacted by a
hazard event. On the other hand, a higher value of closeness,
i.e. longer path lengths from a node to every other node in
the network, means a lower probability of being impacted.
In the second case, the vulnerability can be defined as
the propensity of the network to be split into isolated parts
due to a hazardous event. In that condition, an isolated part
is unable to provide and receive services, which can trans-
late into indirect losses. The system vulnerability, therefore,
can be evaluated by means of the following graph properties:
hubs, betweenness and degree-out distribution. The presence
of nodes with high hub values indicates a propensity of the
network to be indirectly affected more extensively by a haz-
ard event, since a large number of nodes are connected with
the hubs. A network that has nodes with high betweenness
values has a higher tendency to be fragmented because it has
a strong aptitude to generate isolated sub-networks. Finally,
the degree distribution, which expresses network connectiv-
ity of the whole system (i.e. the existence of paths leading to
pairs of vertices), has a strong influence on network vulner-
ability after a perturbation. The shape of the degree distribu-
tion determines the class of a network: heterogeneous graphs
(power-law distribution and scale-free network) are more re-
sistant to random failure, but they are also more vulnerable
to intentional attack (Schwarte et al., 2002). As emphasized
above, scale-free networks have few nodes linked to many
nodes (i.e. few hubs) and a large number of poorly connected
elements. In the case of random failure, there is a low proba-
bility of removing a hub, but if an intentional attack hits the
hub, the consequences for the network could be catastrophic.
Resilience
Resilience differentiates from vulnerability in terms of dy-
namic features of the system as a whole. The properties and
functions used to model vulnerability are static character-
istics that do not consider any time evolution or, using the
words of Sapountzaki (2007), “vulnerability is a state, while
resilience is a process”; in fact the definition of resilience im-
plies a time evolution of the characteristics of the whole sys-
tem. In addition, Lhomme et al. (2013) underline “the need
to move beyond reductionist approaches, trying, instead, to
understand the behaviour of a system as a whole”. These two
features, the dynamic aspect and the whole-system view, distinguish vulner-
ability from resilience and further clarify the need
to develop an approach that is able to consider the dynamics
of the system as a whole.
In this context, the study of the percolation threshold (pc)
can be used to explain the resilience of the network after a
perturbation. The pc value distinguishes between the con-
nectivity phase (above pc) and the fragmented phase (below
pc). In the connectivity phase, the network can lose nodes
without losing the capacity to cope with the perturbation as a
network, while in the fragmented phase, the network does not
Table 2. Analogy of risk variables with graph properties.
Risk variables Analogy with graph properties
Exposure The authority represents how the system privileges the nodes, conferring them more or less
importance compared with others, according to the connections established in the system.
Vulnerability The propensity of parts of the network to be isolated because of hazard events. The closeness of
a node is a measure of the single node vulnerability within the system, while degree distribution,
hub and betweenness are measures of vulnerability of the system as a whole.
Resilience The percolation threshold together with the network fragmentation analysis explain the re-
silience of the network after a perturbation.
actually exist anymore and the remaining nodes are unable to
cope with the disruption alone.
This critical behaviour is a common feature also observed
in disasters induced by natural hazards. In some cases, the
exposed elements withstand some damage and loss, but the
overall system maintains its structure. However, there are
events in which the amount of loss (affected nodes) is so rele-
vant that the system loses the overall network structure. In the
first case, the system has the capacity to cope independently
and tackle the event, while in the second case, the system is
unable to cope.
The dynamic responses are characterized by the network
fragmentation property, which describes the performance of
a network when its components are removed (Dueñas-Osorio
and Vemuru, 2009). For instance, the so-called giant compo-
nent fragmentation (the largest connected sub-network) and
the total fragmentation describe network connectivity and de-
termine the failure mechanism (Dueñas-Osorio et al., 2004).
Keeping track of fragmentation evolution makes it possi-
ble to determine both the critical fraction of components re-
moved (i.e. the smallest component deletion that disconnects
the network) and the effect that each component removed has
on the network response.
For these reasons, we consider percolation threshold and
network fragmentation to be good indicators of resilience,
also because they are able to show the emergent behaviour of
the whole system beyond just considering the single parts of
the network (e.g. node).
2.3 Hazard impact propagation within the graph
While the literature on impact propagation and cascading
effects for critical infrastructures is large (e.g. Pant et al.,
2018; Trucco et al., 2012), applications to the quantification
of natural hazard risk including cascading effects are
scarce. Besides the considerable amount of information that
can be obtained by analysing graph properties from the view-
point of natural hazard risk, the graph itself also provides
an optimal structure for propagating the impacts of a haz-
ard throughout an affected system. Indeed, the use of a graph
allows estimating, besides direct losses to elements directly
affected (such as elements within a flooded area), also indi-
rect losses to elements outside the affected area that rely on
services provided by directly hit elements, which may have
lost some capacity to provide those services as a result. The
propagation and quantification of impacts through a graph
allows understanding the risk mechanisms of the system and
identifying weaknesses that can translate into larger indirect
consequences. It also enables the possibility of quantitatively
estimating risk considering those indirect consequences.
Figure 3 depicts this process through a conceptual
flowchart. In order to propagate the impacts by means of
the graph and quantify indirect losses resulting from second-
order and cascading effects, the modelled graph must first be
integrated with hazard data. These data must include hazard
footprints that allow establishing the hazard intensity (e.g.
water depth) at the location of each element. The direct and
indirect impacts can then be computed according to the pro-
posed methodology, based on three levels of vulnerability:
Level I is the physical vulnerability of a directly affected
element in its traditional definition. The hazard intensity
is the input variable for computing the direct damage of
the element.
Level II is the vulnerability associated with the link be-
tween an affected element and its receivers. The direct
damage as obtained at vulnerability level I is the in-
put for computing the loss of service provided by the
directly damaged element to the elements that receive
it.
Level III is the vulnerability of the service-receiving el-
ement. The loss of service as obtained at vulnerability
level II is the input for estimating the indirect loss of the
element that receives the service.
These vulnerabilities can be represented by vulnerability
functions analogous to the ones adopted within the traditional
risk assessment approach and can be different for each cate-
gory of element and service.
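As a minimal sketch of how these three levels could be chained, the snippet below propagates a single hypothetical flood scenario through a small provider–receiver graph using igraph for R; the element names, water depths and linear vulnerability functions are illustrative placeholders, not the functions used in the pilot study.

```r
# Sketch: three-level propagation of a hazard scenario through the graph.
# All element names, depths and vulnerability functions are illustrative placeholders.
library(igraph)

g <- graph_from_literal(bridge --+ hospital, hospital --+ block_1, hospital --+ block_2)

# Hazard footprint: water depth (m) at each element's location (hypothetical scenario).
depth <- c(bridge = 1.5, hospital = 0.0, block_1 = 0.0, block_2 = 0.3)

# Level I: physical vulnerability, hazard intensity -> direct damage in [0, 1].
direct <- pmin(depth / 2, 1)

# Level II: direct damage of the provider -> loss of the service carried by each link.
edge_ends    <- ends(g, E(g))            # matrix: provider (col 1), receiver (col 2)
service_loss <- direct[edge_ends[, 1]]   # here: service loss equals provider damage

# Level III: loss of received services -> indirect loss at the receiving element,
# here simply the largest loss among the services a node receives.
indirect <- sapply(V(g)$name, function(v) {
  received <- service_loss[edge_ends[, 2] == v]
  if (length(received) == 0) 0 else max(received)
})

# A full cascading analysis would repeat levels II and III, feeding back each
# provider's total (direct plus indirect) loss, until no further change occurs.
data.frame(node = V(g)$name, direct = direct[V(g)$name], indirect = indirect)
```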
By computing impacts for hazard scenarios with different
probabilities of occurrence, and adopting the three levels of
vulnerability functions, a quantitative estimate of risk can be
obtained. An illustrative example of propagation of impacts
Figure 3. Risk framework.
is presented in Sect. 2.4, and more detailed information on
the propagation of impacts through the graph and the estima-
tion of impact is presented in the pilot study in Sect. 3.3.
2.4 Illustrative example
In order to illustrate the application of the graph-based ap-
proach in the characterization of a system exposed to nat-
ural hazards, in Fig. 4 we present an example of a hypo-
thetical city comprising various elements of different types
which provide services. Specifically, our example includes
20 elements: nine blocks of residential buildings, one hos-
pital, two fire stations, three schools, three fuel stations and
two bridges. Blocks are intended to represent the population,
which receives services from the other nodes. Bridges pro-
vide a transportation service, fire stations provide a recovery
service, hospitals provide a healthcare service, schools pro-
vide an education service and fuel stations provide a power
service. Figure 4a shows how the elements are connected in
a graph. The authority and hub values have been computed
using the igraph package for R (http://igraph.org/r/, last access:
2 February 2020). The full library of functions adopted is
documented in Nepusz and Csard (2018).
In Fig. 4b, the size of the elements is proportional to their
authority values. Blocks 6, 18, 19 and 20 have higher author-
ity values than the other elements of this typology because
they receive a service from the hospital (node 16), which is
an important hub. Fire Station 5 and School 9 have high val-
Figure 4. (a) Map of the various elements of a hypothetical municipality in a flood-prone area. (b) Same as (a), with node sizes proportional
to authority values. (c) Same as (a), with node sizes proportional to hub values. (d) Same as (a), with flood area and nodes directly impacted
highlighted with a red cross. (e) Same as (a), with the nodes indirectly impacted also highlighted, with a black cross.
ues of authority because they are serviced by Bridge 3, which
is also an important hub. The importance of a node in graph
theory is closely connected with the concept of topological
centrality. Referring to the illustrative example, Block 6 has
the highest authority value; if a flood hit it, it would there-
fore affect the most central node of the network, or in other
words, the node which is implicitly most privileged by the
system.
In Fig. 4c, the major hubs are the elements with the largest
diameters: Hospital 16, Bridge 3, School 7 and Fuel Station
15. Bridge 3 is an important hub, since it provides its service
to Block 6, which has the highest authority value, and to Fire
Station 5 and School 9. Fuel Station 15 and School 7 are also
important hubs because they provide services to Block 6. The
elements in the south-eastern part of the network inherit a
relative importance (i.e. authority) from the most important
hub in that area (i.e. Hospital 16). Bridge 3 is an exception;
in fact, this bridge connects the southern part (i.e.
Block 6) with the northern part of the city (i.e. Fire Station 5
and School 9). A flood event in the south-eastern part of the
network would likely generate a major indirect impact on the
whole system compared to other parts of the network.
We assume that these elements are located in a flood-
prone area and that Bridge 3 and Block 6 are directly flooded
(Fig. 4d). Since those elements are directly damaged, it is
possible to trace the cascading effects following the direc-
tion of the service within the graph from providers to re-
ceivers. In this artificial example, the transportation service
provided by the bridge is lost, and this has an indirect
consequence to Hospital 16, which is not directly damaged
but cannot provide healthcare services, since people cannot
reach the hospital anymore. The graph allows extending the
impact not only to the elements directly hit by the hazard
but also to all elements that receive services from elements
directly or indirectly affected by the hazard.
Note that similar analyses could be carried out for other
properties of the graph (e.g. betweenness) in order to obtain
additional insight into the properties of the system, which
could be useful for the purpose of a risk assessment. For the
sake of brevity, such analyses have not been included here.
A complete study of all relevant graph properties discussed
above and a more realistic hazard scenario are presented in
the following section.
3 Pilot study: Mexico City
Floods, landslides, subsidence, volcanism and earthquakes
make Mexico City one of the most hazard-prone cities in the
world. Mexico is one of the most seismologically active re-
gions on Earth (Santos-Reyes et al., 2014); floods and storms
are recorded in indigenous documents, and the Popocatépetl
volcano has erupted intermittently for at least 500 000 years.
At present, people settle in hazardous areas such as scarps,
steep slopes, ravines and next to stream channels.
The Mexico City metropolitan area (MCMA) is one of the
largest urban agglomerations in the world (Campillo et al.,
2011). This pilot study focuses on Mexico City (also called
the federal district MCFD), where approximately 8.8 mil-
lion people live. The choice of MCFD as a pilot case allows
showing the importance of modelling connections and inter-
dependencies in a complex urban environment.
Tellman et al. (2018) show how the risk in Mexico City’s
history has become interconnected and reinforced. In fact,
as cities expand spatially and become more interconnected,
the risk becomes endogenous. Urbanization increases the de-
mand for water and land. The urbanized areas inhibit aquifer
recharge, and the increase in water demand exacerbates sub-
sidence due to an increase in pumping activity out of the
aquifer. Subsidence alters the slope of drainage pipes, de-
creasing the efficiency of built infrastructure and the capacity
of the system to both remove water from the basin in floods
as well as deliver drinking water to consumers. This exacer-
bates both water scarcity and flood risk.
3.1 Construction of the graph
Given the very large scale of the city, certain simplifica-
tions and hypotheses had to be assumed for conceptualiz-
ing the network. Furthermore, the choice of element typolo-
gies, the connections between them and the definition of rules
were also made considering the availability of data provided
by the UNAM Institute of Engineering for this study case.
While these data are only partially representative of the en-
tirety of the exposed assets in MCFD (with the exclusion of
three districts for which the data were not available: Álvaro
Obregón, Milpa Alta and Xochimilco), we consider them suit-
able for the specific purpose of this work, which is to illus-
trate the proposed approach and highlight its potential. Note
that the boundaries of the system are defined by the selection
of typologies, connections and the studied geographical area.
These simplifications and hypotheses of real open-ended sys-
tems, while necessary to enable the computational analysis,
should be recognized and taken into account when evaluating
the results of the analysis (Clark-Ginsberg et al., 2018).
Among the possible exposed elements, we selected six ty-
pologies that are representative of both the emergency man-
agement phase (e.g. fire stations) and long-term impacts (e.g.
schools). The typologies of elements considered in this pi-
lot case, which provide and/or receive services reciprocally,
are fire stations, fuel stations, hospitals, schools, blocks and
crossroads. The fire station represents the node type from
which the recovery service is provided to all the other ele-
ments present in the area (except crossroads). The fuel sta-
tion represents the node type that provides the power service,
the hospital provides the healthcare service and the school
provides the education service; the elements with these three
typologies deliver their respective services to all the blocks.
The block is the node type defined as the proxy for the pop-
ulation, which receives services from all the other consid-
ered elements. The simulation uses blocks instead of popula-
tion, as this enables a reduction in computational demand by
lowering the number of nodes from 8 million to a few tens
of thousands. Finally, the analysis considers 17 crossroads,
which provide the transportation service to all the other ele-
ments. The crossroads were identified by selecting the major
intersections between the main highways present in the road
network of MCFD. All the typologies, numbers of elements
and the connections between them are presented in the con-
ceptual graph in Table 3, and Fig. 5 presents the GIS repre-
sentation of the providers and the services that are provided
between them.
Table 3. List of nodes adopted in the network conceptualization.
The link between two elements of two different typologies
was set up based on the geographical proximity rule: each
specific service is received by the nearest provider (e.g. a
block receives the education service from the closest school,
and the school receives the recovery service from the clos-
est fire station). This simple assumption reflects the limited data available at this stage; with more data, it would be possible to define this relation more accurately (e.g. a school providing the education service to its catchment area) without changing the general validity of the method. Note that this hypothesis does not consider the redundancy that might exist between some services, which would necessarily influence the propagation of cascade effects. The service provided by the road network was modelled by considering that each element in the area receives a transportation service from the closest crossroad among the 17 that were identified. This approach does not aim to represent the complete behaviour of the road network system, particularly the paths between nodes or possible alternative paths, but it allows the transportation network to be considered in the analysis in a simplified manner.
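As an illustration of this proximity rule, the following sketch assigns each receiver to its geographically nearest provider using a spatial index; the function and variable names are hypothetical, and planar (projected) coordinates are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_provider_links(receivers_xy, providers_xy, service):
    """Return one (provider_index, receiver_index, service) link per receiver,
    connecting each receiver to its geographically closest provider.
    Coordinates are assumed to be in a planar (projected) reference system."""
    tree = cKDTree(np.asarray(providers_xy))        # spatial index over provider locations
    _, nearest = tree.query(np.asarray(receivers_xy))  # closest provider for each receiver
    return [(int(p), int(r), service) for r, p in enumerate(nearest)]

# Hypothetical usage: schools providing the education service to blocks
# links = nearest_provider_links(block_coords, school_coords, "education")
```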
The list of nodes, which contains all the elements of all typologies, together with the list of links between them, both obtained according to the hypotheses presented above, are the inputs for building the mathematical graph. As in the illustrative example, the graph was obtained using the open-source igraph package for network analysis in the R environment.
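The study itself used the igraph package for R; purely as an illustrative equivalent, the following Python sketch (based on the hypothetical node and link structures introduced above) assembles the same kind of directed provider-to-receiver graph with the networkx library.

```python
import networkx as nx

def build_service_graph(nodes, links):
    """Assemble the directed service graph.
    nodes: iterable of (node_id, typology) tuples.
    links: iterable of (provider_id, receiver_id, service) tuples, e.g. from the
           nearest-provider rule sketched above.
    Edges point from provider to receiver, so degree-out counts services delivered
    and degree-in counts services received."""
    G = nx.DiGraph()
    for node_id, typology in nodes:
        G.add_node(node_id, typology=typology)
    for provider, receiver, service in links:
        G.add_edge(provider, receiver, service=service)
    return G
```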
3.2 Analysis of the graph properties
The following paragraphs present the results from the graph
analysis and show how the properties of the single elements
and the whole system are assessed, from both provider (or
supplier) and receiver (or consumer) perspectives.
3.2.1 Vulnerability of the single elements
As described in Sect. 2.2.2, the systemic vulnerability of a node is its propensity to remain isolated from the whole system when the graph is perturbed. The tendency to observe isolated parts is analysed here through the closeness property, which measures the mean distance from a vertex to the other vertices; Fig. 6 shows the geographical distribution of the closeness-in values of the blocks.
In accordance with the model conceptualization, the blocks increase their distance to the network if their providers are not connected to each other. For example, if a school and a hospital provide services to a block, the closeness-in value of this block will be higher if the school and the hospital receive the transportation service from the same crossroad and this crossroad also serves the block. In this specific case, where the nodes are more interconnected, the distance between the block node and the whole network is lower, and by definition its closeness-in value is higher.
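A minimal sketch of this closeness analysis, again assuming the hypothetical networkx graph and attribute names introduced above (the study itself used the R igraph implementation), is:

```python
import networkx as nx

# Closeness-in of each block: how close the block sits to the nodes that can
# reach it through the service links. For directed graphs, networkx computes
# closeness centrality on incoming paths by default.
closeness_in = nx.closeness_centrality(G)
block_closeness = {n: c for n, c in closeness_in.items()
                   if G.nodes[n].get("typology") == "block"}
```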
Figure 6 shows that the region with the majority of blocks with the highest closeness-in values is the south-eastern part of MCFD. This area of the city is surrounded by few providers, which are the major hubs, as illustrated in the next section and in Fig. 9. The presence of few providers forces them to exchange services among themselves and to serve all the receivers of the area, meaning that the blocks have a lower distance to the providers and can therefore be more vulnerable.
3.2.2 Vulnerability of the whole system
The analysis in this section shows the structural properties
of the whole network (i.e. network topology, arrangement of
a network) and investigates how the network, as a unique
entity, is vulnerable to a potential external perturbation (e.g.
hazardous event).
As mentioned in Sect. 2.2, there are two types of networks, heterogeneous or homogeneous, depending on whether the degree distribution is heavy tailed or not. Heterogeneous networks have few hubs that appear as outliers in the degree distribution; this feature can represent a potential weakness of a system because, if one of the hubs is affected by an event, it will propagate the impacts more extensively than other nodes. Note that this is not an indication of risk per se, which is a function not only of the exposed system but also of the hazard.
Figure 5. Map of nodes and services provided among them. For readability, blocks are not included (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
However, it may be used to evaluate the vulnerability of the system as a whole, similarly to how single-site vulnerability analyses assess the potential impact of an event regardless of its actual likelihood.
There is an objective way to estimate whether the degree distribution is heavy tailed by means of its statistical properties: a distribution is defined as heavy tailed if its tail is not bounded by the exponential distribution. In order to verify whether the degree distribution of a network is heavy tailed, one can fit the generalized Pareto distribution (GPD) to the observations and analyse the shape parameter (Beirlant et al., 1999; Scarrott and Macdonald, 2012). If the shape parameter of the GPD is equal to zero, the tail of the distribution is exponential. Instead, if the shape parameter is greater than zero, the tail of the distribution is fatter than the exponential one, and therefore the distribution is heavy tailed. However, in order to fit the GPD to the data, it is first necessary to select a threshold value and consider only the exceedances. There are different techniques for selecting an appropriate threshold value (Coles, 2001). Figure 7 shows the values of the shape parameter (sp) for the degree-out distribution of the Mexico City network for different threshold values, expressed as data percentiles. The shape parameter is positive for any threshold below the 0.8 percentile.
Figure 6. Geographical distribution of the block closeness-in value (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
Above that threshold, the fitted distribution is no longer meaningful, as it no longer represents the whole network but only the few extreme values still exceeding the threshold. For this reason, we can assert that the degree-out distribution is heavy tailed. This confirms that the network built for Mexico City is strongly non-homogeneous, with few hubs (providers) that are linked to many elements. According to these results, if a hazard event hits one of these few nodes with a high hub value, the consequences for the network could be catastrophic due to the central role of the hub.
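For illustration, a sketch of this heavy-tail check is given below in Python with scipy, fitting a GPD to the degree-out exceedances over a range of percentile thresholds; the function and variable names are hypothetical, and the fitting details (e.g. fixing the location parameter at zero) are assumptions rather than the exact procedure behind Fig. 7.

```python
import numpy as np
from scipy.stats import genpareto

def shape_vs_threshold(degree_out, percentiles=np.arange(0.50, 0.95, 0.05)):
    """Fit a generalized Pareto distribution to threshold exceedances of the
    degree-out values and return the estimated shape parameter for each
    percentile threshold. A shape parameter persistently above zero suggests
    a heavy-tailed (heterogeneous) degree distribution."""
    degree_out = np.asarray(degree_out, dtype=float)
    shapes = []
    for p in percentiles:
        u = np.quantile(degree_out, p)                 # threshold value
        exceedances = degree_out[degree_out > u] - u   # values above the threshold
        c, loc, scale = genpareto.fit(exceedances, floc=0.0)
        shapes.append((float(p), float(c)))
    return shapes

# Hypothetical usage with the graph built above:
# degree_out = [d for _, d in G.out_degree()]
# for p, shape in shape_vs_threshold(degree_out):
#     print(p, shape)
```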
3.2.3 Cascade effects
The analysis of the topological structure of the providers in
the network shows their relative relevance to the system, ac-
cording to their connections with the receivers. In particu-
lar, we propose a comparison between providers through the
analysis of two properties: hub analysis of all nodes that pro-
vide service to the population and betweenness analysis of
the crossroads.
Figure 7. Parameter estimation (sp) against thresholds for degree-
out data (SD: standard deviation).
Providers: role of hubs
The importance of a node in a directed graph, in the context of providers that deliver a service, is closely connected with the concept of topological centrality: the capacity of a node to influence, or be influenced by, other nodes by virtue of its connectivity. In graph theory, the influence of a node in a network can be quantified by eigenvector centrality, of which the hub and authority measures are a natural generalization (Koenig and Battiston, 2009). A node with a high hub value points to many nodes, while a node with a high authority value is pointed to by many different hubs.
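Hub and authority scores of this kind can be computed with the HITS algorithm; the sketch below uses networkx on the same hypothetical graph as before (note that the normalization, and therefore the absolute scale of the scores, may differ from the values reported in the figures).

```python
import networkx as nx

# Hub and authority scores via the HITS algorithm (eigenvector-based).
# Providers with many outgoing service links obtain high hub scores; receivers
# pointed to by important hubs obtain high authority scores.
hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)

# Rank provider nodes (everything except blocks) by hub value
provider_hubs = sorted(
    ((n, h) for n, h in hubs.items() if G.nodes[n].get("typology") != "block"),
    key=lambda item: item[1], reverse=True)
```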
The hub analysis considers all the elements in the graph
that provide services; for this reason, blocks are excluded
from this analysis. Figure 8 reveals outliers that are useful for
identifying the elements in the graph that, in case of poten-
tial failure, could have a large impact on the network due to,
for instance, their role as major hubs. In particular, one hos-
pital has the hub value equal to 1, which by definition is the
highest, immediately followed by a crossroad, with a value
around 0.85, while some schools, fuel stations and fire sta-
tions have hub values around 0.5. The ranking of elements according to their hub values can be very useful for prioritizing intervention actions and maximizing the mitigation effects for the whole network. If an external perturbation were to hit an element with a very high hub value, the cascading effects on the network would be more relevant due to its central role in the system. On the other hand, a mitigation measure applied to the elements with higher hub values would produce a higher benefit for the whole network.
The hub outliers in Fig. 8 are associated with the elements of the network that are geographically located mainly in the south-eastern part of Mexico City; as shown in Fig. 9, the biggest icons are in this part of the city. Based on the available data, the density of elements that provide services in the south-eastern part is much lower compared to the other areas of the city; as such, the few providers existing in this part become important hubs for the whole system. This part of the city has few providers that are central hubs of the city and blocks with very high closeness. Together, these two aspects underline the need for additional providers in this area. This would reduce the respective number of receivers, decreasing the hub values of the providers and reducing the number of blocks depending on each of them.
Crossroads: betweenness analysis
As described in Sect. 2.2.2, a network that has nodes with
high betweenness values has a higher tendency to be frag-
mented because it has a strong aptitude for generating iso-
lated sub-networks. In this case study, transportation is the
only service that allows the analysis of the betweenness val-
ues of the nodes. In fact, vehicles (e.g. fire trucks, family
cars) need to pass through crossroads to go from point A to
point B (e.g. fire trucks going from a fire station to an af-
fected location; a family car going from a block to a school).
The betweenness analysis presented here shows the number
of shortest paths between pairs of nodes that pass through the
selected crossroads. As mentioned previously, the few cross-
roads considered in this pilot study are not intended to repro-
duce the very complex road network of Mexico City but to
present some highlights of the betweenness property.
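A sketch of this betweenness analysis, restricted to the crossroad nodes and again using the hypothetical networkx graph introduced above, is:

```python
import networkx as nx

# Betweenness of the crossroad nodes: the share of shortest paths between pairs
# of nodes that pass through each crossroad. Typology names follow the
# assumptions introduced earlier.
betweenness = nx.betweenness_centrality(G, normalized=True)
crossroad_betweenness = {n: b for n, b in betweenness.items()
                         if G.nodes[n].get("typology") == "crossroad"}
```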
Figure 10 shows the crossroads adopted in the analysis,
where the dimension of the icons is proportional to the value
of betweenness. It can be observed that the crossroads in the
ring road around the city centre have higher values of be-
tweenness, which is due to the fact that they connect the very
large suburban areas and the city centre. In particular, the cross-
roads in the south have the highest values because the number
of nodes in the south is greater than that in the north of the
city. Instead, the crossroads in the city centre connect mostly
the nodes that are inside the ring road, and for this reason
they have lower values of betweenness.
The betweenness value shows which crossroad is more central, or more important and influential in the network, based on the shortest paths between the nodes. For example, if a crossroad is flooded, its transportation service will be reduced or completely interrupted. A crossroad with higher betweenness influences a higher number of nodes, and as such, if its functionality is affected, this will have a higher impact on the network compared to a crossroad with lower betweenness.
3.2.4 Exposure: which elements have higher centrality
in the system?
Regarding the analysis from the receivers’ point of view, we
explore how the system privileges some receivers compared
with others according to their connections with the providers.
In particular, we propose a comparison between receivers
through the authority analysis.
Figure 8. Boxplots of hub values for different typologies of service providers.
Figure 11 shows that the authority of the nodes tends to
be clustered around certain values, presenting discontinuities
between them. This results from the fact that all blocks re-
ceive exactly five services from five providers (i.e. degree-in
is 5), and as such, they have the same values of authority
when they receive services from the same provider nodes.
Nodes with similar authority values should therefore be geo-
graphically located close to one another. This is confirmed
in Fig. 12, where the blocks are represented in space and
coloured according to their authority values.
Figure 12 shows a clear pattern from low values in the
north-west to higher values in the south-eastern part of
MCFD. The blocks with higher authority values are located
in the part of the city that is surrounded by the providers with
highest hub values, as illustrated in Fig. 9. In contrast, the
blocks in the city centre and in the north-west have the low-
est values of authority. In fact, this part of the city has the
highest density of providers, which decreases the number of
receivers for each provider and, consequently, their hub val-
ues. Note that this aspect likely results from the assumption
of not considering redundancy, meaning that each node can
only receive a certain service from its nearest provider. Oth-
erwise, if redundancy were considered, the blocks in the city
centre would receive the same service from many different
providers due to the higher density of such nodes.
According to these results, if a hazardous event hits the
blocks in the south-eastern part of the city, this will impact
the whole system more heavily because there will be more re-
quests to the same few hubs. Such hubs, which are potentially
more overburdened in an ordinary situation due to the high
number of services they provide, can put a considerable part
of the network in crisis after an external perturbation. The strong correlation between hub and authority values explains the results described above. However, it is necessary to underline that these outputs also reflect the proximity rules adopted in this model, where the network has no redundancy by construction. Redundancy can change the hub and authority values of the nodes and therefore influence the magnitude of the cascading impacts presented in the next section.
3.3 Flood impact propagation within the graph
In this section, we present a preliminary analysis of a flood
scenario in the case of Mexico City according to the proposed
graph-based approach. The aim is to show the potential of the
approach to highlight the impacts of a hazard over the whole
system, including indirect consequences to elements outside
the flooded area, based on a graph built for this specific pur-
pose.
The adopted hazard scenario is based on the development
of a simplified model that explicitly integrates the drainage
system and the surface runoff for the estimation of flood area
extension for different return periods, under the condition of
possible failure of the pumping system in the drainage sys-
tem (Arosio et al., 2018). Note that a detailed hazard analy-
sis is not the main goal of this article; therefore, the adopted
flood modelling approach does not intend to be as detailed
as possible but instead to represent an adequate compromise
between accuracy and simplicity. The hydrological and hy-
Figure 9. Map of providers. Icon dimensions are proportional to the hub values (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
draulic simulations are based on the EPA’s Stormwater Man-
agement Model (SWMM; Rossman, 2015) and implemented
on the primary deep drainage system (almost 200 km of net-
work, 14 main channels and 108 manholes). As for the rain-
fall, patterns associated with different return periods were
obtained through the uniform intensity duration frequency
(IDF) curve for the entire MCFD (Amaro, 2005). In particu-
lar, Chicago hyetographs with a duration of 6 h (Artina et al.,
1997) and an intensity peak at 2.1 h were constructed starting
from the IDF curve. For each return period, the flooded ar-
eas are computed based on the volume spilled out of each of
the main manholes of the drainage system. For each drainage
catchment, assumed hydraulically independent from the oth-
ers, a water depth–area relationship extracted from the digital
terrain model (DTM) is used to compute the flood extent
and depth. Figure 13a shows the flooded areas for a return
period of 100 years. The majority of water depth values are
between 0 and 1 m (lighter blues), and only a few raster cells
(darker blue) have higher values that reach up to 9.83 m in
some low-lying areas.
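The conversion from spilled volume to flood depth can be sketched as follows; this is a minimal illustration of inverting a depth-area curve for one catchment, with hypothetical names and simplifying assumptions (trapezoidal integration, linear interpolation), not the exact procedure of Arosio et al. (2018).

```python
import numpy as np

def flood_depth_from_volume(spilled_volume, depths, areas):
    """Estimate the flood depth corresponding to a spilled volume by inverting a
    depth-area curve extracted from the DTM of one catchment.
    depths: increasing water depths [m]; areas: flooded area at each depth [m2]."""
    depths = np.asarray(depths, dtype=float)
    areas = np.asarray(areas, dtype=float)
    # Cumulative depth-volume curve by trapezoidal integration of the areas
    volumes = np.concatenate(
        ([0.0], np.cumsum(0.5 * (areas[1:] + areas[:-1]) * np.diff(depths))))
    # Invert the curve: depth corresponding to the spilled volume
    return float(np.interp(spilled_volume, volumes, depths))
```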
Figure 10. Map of crossroads. Icon dimensions are proportional to the betweenness values (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
Some provider elements are located within the flood area,
as seen in Fig. 13a. These elements provide services to other
elements located both inside and outside flooded areas, as
shown in Fig. 13b. Even if some of these receiver elements
are not directly damaged, they can potentially experience in-
direct consequences due to the reduction or interruption of
services from the providers that are directly affected. Using the hub analysis of the flooded providers, it is possible to identify the nodes that play a more central role and can generate a potentially larger cascade effect for this flood scenario.
Figure 14 shows the hub values of the 17 providers inside the flood area. By integrating the information of the hazard scenario (i.e. the flood area for a specific return period) with the hub and authority analysis of the network,
it is possible to qualitatively assess that the red zone of the
city has a relatively higher risk compared with the rest of the
city. This zone is characterized by few providers with high
hub values, which serve many blocks that have high values
of authority as a result. This result shows the need for new
additional providers in the red zone around the flooded area
in order to reduce the flood impact. As a matter of fact, this
would reduce the number of receivers per provider, reducing
the hub values of flooded providers. Consequently, the num-
ber of affected blocks outside the flood footprint would be
reduced.
For this pilot study, the estimation of the direct impact on the nodes is obtained by adopting simplified binary vulnerability functions. According to this assumption, zero damage occurs in case of no flood, full damage occurs in case of flood regardless of its intensity (vulnerability level I), impacted nodes fully lose their capacity to provide services (vulnerability level II) and receiver elements are fully affected when even a single service is dismissed (vulnerability level III). Despite the availability of many vulnerability functions, for the purpose of this study we prefer to adopt such a simplified assumption, since it does not conceptually affect the correctness of the process.
Figure 11. Boxplot of authority values for different provider services.
As a matter of fact, the focus here is on the mechanism of the propagation of the impacts through the graph rather than on their exact quantification. Thus, the cascading effects are propagated through the graph by accounting for the nodes indirectly impacted, i.e. those that have lost at least one service from their providers. By using the graph properties, this task is straightforward. A new graph (G1) is generated by removing the nodes directly impacted by the flood from the original graph (G0). After that, the degree-in of each node in G1, representing the number of incoming services, is compared with the corresponding degree-in in G0. All the nodes with a reduction of degree-in are removed, and a new graph G2 is generated. This process is repeated until there are no more affected nodes in the graph, and we obtain the final graph with the maximum impact extent, which can be compared with the original graph.
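This iterative removal procedure can be sketched directly on the graph; the following Python function (hypothetical names, networkx graph as above, binary vulnerability assumed) mirrors the G0, G1, G2, ... sequence described in the text.

```python
import networkx as nx

def propagate_impacts(G0, directly_impacted):
    """Propagate cascading impacts through the service graph.
    Starting from the nodes directly hit by the hazard, iteratively remove every
    node whose degree-in (number of received services) drops below its value in
    the original graph G0, until no further node is affected. Returns the final
    graph and the set of indirectly impacted nodes, under the binary
    vulnerability assumption described above."""
    baseline_in = dict(G0.in_degree())
    G = G0.copy()
    G.remove_nodes_from(directly_impacted)
    indirectly_impacted = set()
    while True:
        affected = [n for n, d in G.in_degree() if d < baseline_in[n]]
        if not affected:
            break
        indirectly_impacted.update(affected)
        G.remove_nodes_from(affected)
    return G, indirectly_impacted

# Hypothetical usage:
# G_final, indirect = propagate_impacts(G, flooded_nodes)
```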
Figure 15 shows the number of directly (blue) and indi-
rectly (red) impacted elements due to the flood with a 100-
year return period. The total number of elements affected
is about 31 000, with more than 4000 directly and almost
27 000 indirectly affected. These results, even acknowledging the relevance of the hypotheses adopted (i.e. no service redundancy and a binary vulnerability function), show that indirect damage represents a significant part of the total damage. Furthermore, in Fig. 15 hypothetical direct and indirect impact curves are also plotted for illustrative purposes, as they might result from computing the impacts for other return periods.
The adoption of the graph adds, to the traditional reductionist risk assessment, the opportunity to explore losses not only in terms of elements but also in terms of services. In fact, by comparing the original graph (G0) with the final graph obtained after the impact propagation, the approach also allows the lost services to be computed. Figure 16 shows the number of services lost after the impact propagation, separated by element category and distinguishing between the services lost due to the dismissal of provider (brown) and receiver (green) nodes. In terms of nodes, there is no difference between those affected because of the loss of a received service and those affected because of the loss of a demanded service. Instead, in terms of services (i.e. links), there is a difference between those dismissed because of the loss of a provider and those dismissed because of the loss of a receiver. This distinction can be important in evaluating the relative importance of these two cases.
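A minimal sketch of this link-level accounting, reusing the hypothetical graphs from the propagation sketch above, could look as follows.

```python
def lost_services(G0, G_final):
    """Count the service links lost after impact propagation, separating those
    dismissed because the provider node was removed from those dismissed because
    the receiver node was removed (links losing both ends are counted with the
    provider). Illustrative sketch following the distinction drawn in the text."""
    surviving = set(G_final.nodes())
    provider_lost, receiver_lost = 0, 0
    for provider, receiver in G0.edges():
        if provider not in surviving:
            provider_lost += 1
        elif receiver not in surviving:
            receiver_lost += 1
    return provider_lost, receiver_lost
```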
We acknowledge that these results are affected by the two important assumptions highlighted above and also by the fact that the services are provided only by elements inside the MCFD (as elements outside this area are not considered). Changing these assumptions could result in different cascading impacts. Regardless, the framework illustrated here shows the potential to quantitatively assess indirect impacts, which can subsequently be integrated into collective risk assessments.
Figure 12. Geographical distribution of the block authority value (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
4 Discussion and final considerations
In this paper we looked at the problem of natural hazard risk assessment from a holistic perspective, focussing on the "system" as a whole. We used system as a general term to identify the set of different entities, assets and parts of a mechanism connected to each other in order to operate as, for instance, an organism, an organization or a city. Most such systems are complex because of the high number of elements and the large variety of connections linking them. Nevertheless, our society is structured around these complex systems, which are widespread everywhere. How can the risk of such complex systems be assessed? We believe that a reductionist approach, which separates the parts of a system, computes the risk (losses, impacts, etc.) for each of them and then sums them up to come up with a total estimate of risk, is not adequate. Most of the research on natural hazards and their risk has implicitly adopted the reductionist approach (i.e. "split the problem into small parts and solve it"). However, we also mentioned emerging literature which adopts a different approach ("keep the system as a whole"), a holistic approach.
How can the system be represented as a single, intact and
entire entity? And how can all the connections of its parts be
represented? We believe, as other authors do, that the best ap-
proximation for representing a complex system is the graph.
Many authors have already used the graph to model systems
already organized as networks by construction (e.g. electric
power network) and assess the risk of natural hazards in such
Figure 13. (a) Flooded area for T = 100 years and flooded providers; (b) blocks connected to the flooded providers (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
Figure 14. Hub and authority values of flooded nodes (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
a manner. Fewer authors have used the graph to model systems that are not immediately and manifestly depicted as physical networks and to model risk in this manner. Once the effort is made to "translate" a system with all its components into a graph, there are several advantages and benefits.
Figure 15. The solid coloured bars report the computed directly and indirectly impacted elements at T = 100 years; shaded bars conceptually represent the impacts for other return periods to visualize the complete risk curve.
First of all, there is a mature branch of mathematics, graph theory, which already studies the properties of a graph. Do these graph properties tell us something useful for assessing the risk of natural hazards affecting these complex systems? We showed that some of the graph properties can disclose relevant characteristics of the system related to risk assessment. What are the vulnerability and exposure of the system? We proposed new analogies between some graph properties, such as the authority, hub, betweenness and degree-out values, and the "systemic" exposure and vulnerability. The adoption of these analogies is supported by the recent work published by Clark-Ginsberg et al. (2018): despite having a different scope, they also use certain graph properties to assess the hazards of the companies operating
Figure 16. Services impacted at T = 100 years.
in the case study and promote a network representation of
the risk. In Sects. 2.2 and 3.2 we highlight the importance,
before quantifying the risk, of looking at the single risk com-
ponents through the systemic lens provided by the graph prop-
erties. This information could support more informed DRR
decision-making by strategically suggesting how to prioritize
intervention in order to minimize exposure and vulnerability
from a system point of view.
A second advantage is that the graph can be used as a tool
to propagate the impacts throughout the system from wher-
ever the hazard hits it, including indirect or cascading effects.
The links between nodes allow passing from the direct physi-
cal damage to broader economic and social indirect impacts.
The indirect impact suffered by a certain node may be de-
fined as a function of two factors: (1) the direct damage sus-
tained by one or more of its parent nodes (i.e. traditional im-
pact) and (2) the loss of service the latter provide to the for-
mer (i.e. vulnerability function). The integration of indirect
impact quantification within the graph-based framework has
been addressed in the pilot study using a simplified binary
vulnerability function.
Despite the two advantages of adopting this systemic perspective in risk assessment, Clark-Ginsberg et al. (2018) highlight that there are "questions about the validity of such assessment" regarding the ontological foundations of networked risk and the non-linearity and emergent phenomena that characterize such systems. In fact, the emergence
of the risk system demonstrates that the risk will never be
completely knowable, and for this reason "unknown unknowns are an inseparable part of a risk network"; in fact,
the boundary definition of open systems is by nature artifi-
cial.
The application to the case of urban flooding in Mexico City is a first attempt to demonstrate the feasibility of the proposed approach, and it is also the first example in the literature that tries to quantitatively analyse the propagation of impacts through a network of individual elements that do not explicitly constitute a network. In this study, the complex-
ity of Mexico City is depicted by modelling certain selected
typologies of elements of the urban system and by assum-
ing simplified rules of connection between them. Further-
more, the system complexity acknowledged in this study is
restricted to the elements inside the MCFD and neglects any
potential contributions from outside elements. The definition
of a geographical boundary condition, which is a straight-
forward assumption in the traditional reductionist approaches,
can be controversial in the holistic approaches that aim to
model the emergent characteristics of open-ended systems.
However, the flexibility of our approach allows for a graph to
be designed with any intended level of detail, depending on
the purpose of each specific application and the availability
of data. For instance, if a more comprehensive characteriza-
tion of the road network were required, the graph could be
expanded to include additional elements other than the ma-
jor crossroads. Another example concerns the rules of connection adopted in this study, which do not allow for redundancy, as each node is considered to receive
its services from the nearest provider only. A more detailed
graph could include, for example, influencing areas for each
service, which would allow considering multiple providers
for some of them, provided that the required data were avail-
able. Adopting different rules (e.g. a provider could deliver
its service to all elements within a defined distance)
would allow a degree of redundancy of the network, which
could significantly change the impact of a hazardous event.
We adopted a simple flood scenario to illustrate how some of
the measures of a graph can be used in the context of natu-
ral hazard risk assessment. However, within our framework,
additional potentially relevant information can be obtained.
For example, here we presented the results of the structural
analysis of the graph without looking into functional prop-
erties such as the percolation threshold, which characterizes
the resilience of a network and can therefore provide valu-
able information for practical applications. Another possible
extension consists of studying how the network evolves with
time, following external perturbations at different return pe-
riods.
Furthermore, the proposed approach could provide a common basis for future research on both multi-hazard and integrated risk assessment. Since the graph properties are hazard independent, it is possible to integrate these properties with the characteristics of the single node, such as the physical vulnerability of a building with respect to earthquakes or flooding (as adopted by reductionist approaches), and analyse multiple hazards using the same graph. Besides this, the approach can be applied to physical as well as social or integrated risk. In the former case, the graph has only physical elements (e.g. buildings); in the latter case, the graph also has nodes that reflect social aspects (e.g. population, age, education, etc.).
Further research will aim to fully implement and integrate
the graph-based approach in quantitative risk assessments,
both at the scenario and probabilistic level. One of the chal-
lenges that will need to be addressed is related to data re-
quirements and availability. Currently, most exposure and
vulnerability databases focus on the properties of single el-
ements and tend to contain little to no information on the
connections between them. As we have discussed, this in-
formation is key for more thoroughly understanding and as-
sessing the risk of a system. For this reason, developing and
collecting data with information related to the connections
between the elements is paramount. To promote this perspec-
tive, it is necessary to consider shifting RA from using traditional relational databases to so-called graph databases. In such databases, each node contains, in addition to the traditional characteristics, a list of relationship records which
represent its connections with other nodes. The information
on these links is organized by type and direction and may
hold additional attributes.
Finally, the introduction of the graph-based approach into
the RA for collective disaster risk aims, in the long term, to
be a first step for future developments of agent-based mod-
els and complex adaptive systems in collective risk assess-
ment. In this perspective, the nodes of the network are agents,
with a defined state (e.g. level of damage), and the interac-
tion between the other agents is controlled by specific rules
(e.g. vulnerability and functional functions) inside the envi-
ronment they live in (e.g. natural hazard phenomena).
Data availability. The geospatial vector input data for the Mexico
City case study are available in the Supplement.
Supplement. The supplement related to this article is available on-
line at: https://doi.org/10.5194/nhess-20-521-2020-supplement.
Author contributions. MA, MLVM and RF made substantial con-
tributions to the conception and design, acquisition, analysis, and
interpretation of data. All authors participated in drafting the arti-
cle and revising it critically for important intellectual content. All
authors give final approval of the published version.
Competing interests. The authors declare that they have no conflict
of interest.
Acknowledgements. This research was partly funded by Fon-
dazione Cariplo under the project “NEWFRAME: NEtWork-based
Flood Risk Assessment and Management of Emergencies”, and it
has been developed within the framework of the project “Diparti-
menti di Eccellenza”, funded by the Italian Ministry of Education,
University and Research at IUSS Pavia.
Financial support. This research was partly funded by Fondazione
Cariplo under the project “NEWFRAME: NEtWork-based Flood
Risk Assessment and Management of Emergencies”, and it has been
developed within the framework of the project “Dipartimenti di Ec-
cellenza”, funded by the Italian Ministry of Education, University
and Research at IUSS Pavia.
Review statement. This paper was edited by Bruno Merz and re-
viewed by three anonymous referees.
References
Abele, W. I. and Dunn, M.: International CIIP handbook 2006
(Vol. I) An inventory of 20 national and 6 international critical
information infrastructure protection policies, Zurich, Switzer-
land, 2006.
Albano, R., Sole, A., Adamowski, J., and Mancusi, L.: A GIS-based
model to estimate flood consequences and the degree of accessi-
bility and operability of strategic emergency response structures
in urban areas, Nat. Hazards Earth Syst. Sci., 14, 2847–2865,
https://doi.org/10.5194/nhess-14-2847-2014, 2014.
Alexoudi, M. N., Kakderi, K. G., and Pitilakis, K. D.: Seismic risk
and hierarchy importance of interdependent lifelines. Methodol-
ogy and important issues, in: 11th ICASP International Confer-
ence on Application of Statistics and Probability in Civil Engi-
neering, 1–4 August 2011, Zurich, Switzerland, 2011.
Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A.,
Salamon, P., Wyser, K., and Feyen, L.: Global projections of
river flood risk in a warmer world, Earth’s Future, 5, 171–182,
https://doi.org/10.1002/2016EF000485, 2017.
Amaro, P.: Proposal of an approach for estimating relationships I-d-
Tr from 24 hours rainfalls, Universidad Nacional Autónoma de
México (UNAM), M.C., 2005.
Arosio, M., Martina, M. L. V., Carboni, E., and Creaco, E.: Sim-
plified pluvial flood risk assessment in a complex urban envi-
ronment by means of a dynamic coupled hydrological-hydraulic
model: case study of Mexico City, in: Proc. of the 5th IAHR Eu-
rope Congress New Challenges in Hydraulic Research and Engi-
neering, edited by: Armanini, A. and Nucci, E., 5th IAHR Europe
Congress Organizers, Trento, 429–430, 2018.
Arrighi, C., Brugioni, M., Castelli, F., Franceschini, S., and Maz-
zanti, B.: Urban micro-scale flood risk estimation with parsi-
monious hydraulic modelling and census data, Nat. Hazards
Earth Syst. Sci., 13, 1375–1391, https://doi.org/10.5194/nhess-
13-1375-2013, 2013.
Artina, S., Calenda, G., Calomino, F., Loggia, G. La, Modica, C.,
Paoletti, A., Papiri, S., Rasulo, G., and Veltri, P.: Sistemi di
fognatura. Manuale di progettazione, edited by: Hoepli, Milan,
1997.
Balbi, S., Giupponi, C., Gain, A., Mojtahed, V., Gallina, V., Torre-
san, S. and Marcomini, A.: The KULTURisk Framework (KR-
FWK): A conceptual framework for comprehensive assessment
of risk prevention measures, Project deliverable 1.6, FP7-ENV-2010, Project 265280, 2010.
Barabasi, A. L.: Network Science, edited by: Cambridge Uni-
versity Press, Cambridge, available at: http://barabasi.com/
networksciencebook/ (last access: 10 February 2020), 2016.
Bazzurro, P. and Luco, N.: Accounting for uncertainty and corre-
lation in earthquake loss estimation, 9th Int. Conf. Struct. Saf.
Reliab., Millpress, Rotterdam, 2687–2694, 2005.
Beirlant, J., Dierckx, G., Goegebeur, Y., and Matthys, G.: Tail Index
Estimation and an Exponential Regression Model, Extremes, 2,
177–200, 1999.
Bergström, J., Uhr, C., and Frykmer, T.: A Complexity Framework
for Studying Disaster Response Management, J. Conting. Crisis
Man., 24, 124–135, https://doi.org/10.1111/1468-5973.12113,
2016.
Biggs, N. L., Lloyd, E. K., and Wilson, R. J.: Graph Theory 1736-
1936, edited by: Clarendon Press, Oxford, 1976.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., and Hwang, D.
U.: Complex networks: Structure and dynamics, Phys. Rep., 424,
175–308, https://doi.org/10.1016/j.physrep.2005.10.009, 2006.
Börner, K., Soma, S., and Vespignani, A.: Network Science, in: Annual Review of Information Science and Technology, vol. 41, 537–607, ASIS&T, Medford, New Jersey, 2007.
Bosetti, L., Ivanovic, A., and Menaal, M.: Fragility, Risk,
and Resilience: A Review of Existing Frameworks, avail-
able at: http://i.unu.edu/media/cpr.unu.edu/attachment/2232/
Assessing-Fragility-Risk-and-Resilience-Frameworks.pdf (last
access: 10 February 2020), 2016.
Bouwer, L. M., Crompton, R. P., Faust, E., Höppe, P., and
Pielke, R. A.: Confronting disaster losses, Science, 318, 753,
https://doi.org/10.1126/science.1149628, 2007.
Bruneau, M., Chang, S. E., Eguchi, R. T., Lee, G. C., O’Rourke,
D., Reinhorn, A. M., Shinozuka, M., Tierney, K., Wallace, W.
A., and von Winterfeldt, D.: A framework to quantitatively as-
sess and enhance the seismic resilience of communites, in: 13th
World Conference on Earthquake Engineering, EarthquakeSpec-
tra, Vancouver, Canada, 2004.
Buldyrev, S. V, Parshani, R., Paul, G., Stanley, H. E., and Havlin,
S.: Catastrophic cascade of failures in interdependent networks,
Nature, 464, 1025–1028, https://doi.org/10.1038/nature08932,
2010.
Bunde, A. and Havlin, S.: Fractals and Disordered Systems,
Springer-Verlag, Berlin Heidelberg, 1991.
Burton, C. G. and Silva, V.: Assessing Integrated Earth-
quake Risk in OpenQuake with an Application to
Mainland Portugal, Earthq. Spectra, 32, 1383–1403,
https://doi.org/10.1193/120814EQS209M, 2015.
Campillo, G., Dickson, E., Leon, C., and Goicoechea, A.: Urban
risk assessment Mexico City metropolitan area, Understanding
urban risk: an approach for assessing disaster and climate risk in
cities, Mexico, 2011.
Cardona, O. D.: The Need for Rethinking the Concepts of Vulnera-
bility and Risk from a Holistic Perspective: A Necessary Review
and Criticism for Effective Risk Management, in Mapping vul-
nerability: Disasters, development and people, vol. 3, Earthscan
Publishers, London, 37–51, 2003.
Carreño, M. L., Cardona, O., and Barbat, A.: A disaster risk
management performance index, Nat. Hazards, 41, 1–20,
https://doi.org/10.1007/s11069-006-9008-y, 2007a.
Carreño, M. L., Cardona, O. D., and Barbat, A. H.: Urban seismic
risk evaluation: A holistic approach, Nat. Hazards, 40, 137–172,
https://doi.org/10.1007/s11069-006-0008-8, 2007b.
Carreño, M. L., Cardona, O. D., and Barbat, A. H.: New methodol-
ogy for urban seismic risk assessment from a holistic perspective,
B. Earthq. Eng., 10, 547–565, https://doi.org/10.1007/s10518-
011-9302-2, 2012.
Clark-Ginsberg, A., Abolhassani, L., and Rahmati, E. A.: Com-
paring networked and linear risk assessments: From the-
ory to evidence, Int. J. Disast. Risk Re., 30, 216–224,
https://doi.org/10.1016/j.ijdrr.2018.04.031, 2018.
Coles, S.: An Introduction to Statistical Modeling of Extreme Val-
ues, Springer-Verlag, London, 2001.
Crowley, H. and Bommer, J. J.: Modelling seismic hazard in earth-
quake loss models with spatially distributed exposure, B. Earthq.
Eng., 4, 249–273, https://doi.org/10.1007/s10518-006-9009-y,
2006.
Cutter, S. L., Barnes, L., Berry, M., Burton, C., Evans, E., Tate,
E., and Webb, J.: A place-based model for understanding com-
munity resilience to natural disasters, Global Environ. Chang.,
18, 598–606, https://doi.org/10.1016/j.gloenvcha.2008.07.013,
2008.
Cutter, S. L., Burton, C. G., and Emrich, C. T.: Journal of Homeland
Security and Disaster Resilience Indicators for Benchmarking
Baseline Conditions Disaster Resilience Indicators for Bench-
marking Baseline Conditions, J. Homel. Secur. Emerg., 7, 51,
https://doi.org/10.2202/1547-7355.1732, 2010.
David, C.: The Risk Triangle, available at: https://www.ilankelman.
org/crichton/1999risktriangle.pdf (last access: 1 January 2020),
1999.
Dueñas-Osorio, L. and Vemuru, S. M.: Cascading failures in
complex infrastructure systems, Struct. Saf., 31, 157–167,
https://doi.org/10.1016/j.strusafe.2008.06.007, 2009.
Dueñas-Osorio, L., Craig, J. I., and Goodno, B. J.: Probabilistic re-
sponse of interdependent infrastructure networks, in 2nd annual
meeting of the Asian-pacific network of centers for earthquake
engineering research (ANCER), Honolulu, Hawaii, 2004.
Eakin, H., Bojórquez-Tapia, L. A., Janssen, M. A., Georgescu,
M., Manuel-Navarrete, D., Vivoni, E. R., Escalante, A.
E., Baeza-Castro, A., Mazari-Hiriart, M., and Lerner, A.
M.: Urban resilience efforts must consider social and po-
litical forces, P. Natl. Acad. Sci. USA, 114, 186–189,
https://doi.org/10.1073/pnas.1620081114, 2017.
Euler, L.: Solutio problematis ad geometriam situs pertinentis, Com-
mentarii Academiae Scientiarum Imperialis Petropolitanae, 8,
128–140, 1736.
Falter, D., Schröter, K., Dung, N. V., Vorogushyn, S., Kreibich,
H., Hundecha, Y., Apel, H., and Merz, B.: Spatially coherent
flood risk assessment based on long-term continuous simula-
tion with a coupled model chain, J. Hydrol., 524, 182–193,
https://doi.org/10.1016/j.jhydrol.2015.02.021, 2015.
Gallina, V., Torresan, S., Critto, A., Sperotto, A., Glade, T.,
and Marcomini, A.: A review of multi-risk methodologies for
natural hazards: Consequences and challenges for a climate
change impact assessment, J. Environ. Manage., 168, 123–132,
https://doi.org/10.1016/j.jenvman.2015.11.011, 2016.
Gao, J., Liu, X., Li, D., and Havlin, S.: Recent progress on
the resilience of complex networks, Energies, 8, 12187–12210,
https://doi.org/10.3390/en81012187, 2015.
Grossi, P. and Kunreuther, H.: Catastrophe Modeling: A New
Aproach to Managing Risk Catastrophe Modeling, Springer Sci-
ence + Bussiness Media, Inc., Boston, 2005.
Hammond, M. J., Chen, A. S., Djordjević, S., Butler, D., and Mark, O.: Urban flood impact assessment:
A state-of-the-art review, Urban Water J., 12, 14–29,
https://doi.org/10.1080/1573062X.2013.857421, 2013.
Holmgren, Å. J.: Using Graph Models to Analyze the Vulnera-
bility of Electric Power Networks, Risk Anal., 26, 955–969,
https://doi.org/10.1111/j.1539-6924.2006.00791.x, 2006.
IPCC: Managing the Risks of Extreme Events and Disasters to Ad-
vance Climate Change Adaptation. A Special Report of Work-
ing Groups I and II of the Intergovernmental Panel on Climate
Change, edited by: Field, C. B., Barros, V., Stocker, T. F., Qin,
D., Dokken, D. J., Ebi, K. L., Mastrandrea, M. D., Mach, K. J.,
Plattner, G.-K., Allen, S. K., Tignor, M., and Midgley, P. M.,
Cambridge University Press, Cambridge, UK, and New York,
NY, USA, 582 pp., 2012.
Kakderi, K., Argyroudis, S., and Pitilakis, K.: State of the art lit-
erature review of methodologies to assess the vulnerability of
a “system of systems” Project deliverable D2.9., 2011.
Karagiorgos, K., Thaler, T., Hübl, J., Maris, F., and Fuchs, S.: Multi-
vulnerability analysis for flash flood risk management, Nat.
Hazards, 82, 63–87, https://doi.org/10.1007/s11069-016-2296-y,
2016.
Koenig, M. D. and Battiston, S.: From Graph Theory to Models
of Economic Networks. A Tutorial, in: Networks, Topology and
Dynamics, Springer-Verlag, Berlin, 23–63, 2009.
Lane, J. A. and Valerdi, R.: Accelerating system of sys-
tems engineering understanding and optimization through
lean enterprise principles, 2010 IEEE Int. Syst. Conf.
Proceedings, SysCon 2010, Management of Environ-
mental Quality: An International Journal, 196–201,
https://doi.org/10.1109/SYSTEMS.2010.5482339, 2010.
Lewis, T. G.: Critical Infrastructure Protection in Homeland Secu-
rity: Defending a Networked Nation, John Wiley & Sons, 2014.
Lhomme, S., Serre, D., Diab, Y., and Laganier, R.: Analyzing re-
silience of urban networks: a preliminary step towards more
flood resilient cities, Nat. Hazards Earth Syst. Sci., 13, 221–230,
https://doi.org/10.5194/nhess-13-221-2013, 2013.
Liu, B., Siu, Y. L., and Mitchell, G.: Hazard interaction analysis for
multi-hazard risk assessment: a systematic classification based
on hazard-forming environment, Nat. Hazards Earth Syst. Sci.,
16, 629–642, https://doi.org/10.5194/nhess-16-629-2016, 2016.
Luce, R. D. and Perry, A. D.: A method of matrix anal-
ysis of group structure, Psychometrika, 14, 95–116,
https://doi.org/10.1007/BF02289146, 1949.
Markolf, S. A., Chester, M. V., Eisenberg, D. A., Iwaniec, D.
M., Davidson, C. I., Zimmerman, R., Miller, T. R., Ruddell, B.
L., and Chang, H.: Interdependent Infrastructure as Linked So-
cial, Ecological, and Technological Systems (SETSs) to Address
Lock-in and Enhance Resilience, Earth’s Futur., 6, 1638–1659,
https://doi.org/10.1029/2018EF000926, 2018.
Menoni, S.: Chains of damages and failures in a metropolitan envi-
ronment?: some observations on the Kobe earthquake in 1995, J.
Hazard. Mater., 86, 101–119, 2001.
Menoni, S., Pergalani, F., Boni, M., and Petrini, V.: Lifelines earth-
quake vulnerability assessment: a systemic approach, Soil Dyn.
Earthq. Eng., 22, 1199–1208, https://doi.org/10.1016/S0267-
7261(02)00148-3, 2002.
Mingers, J. and White, L.: A Review of the Recent Contribution
of Systems Thinking to Operational Research and Management
Science Working paper series n. 197, University of Bristol,
Bristol, 2009.
Navin, P. K. and Mathur, Y. P.: Application of graph theory for op-
timal sewer layout generation, Discovery, 40, 151–157, 2015.
Nepusz, T. and Csard, G.: Network Analysis and Visualization Au-
thor, available at: https://cran.r-project.org/web/packages/igraph/
igraph.pdf (last access: 1 February 2020), 2018.
Newman, M. E. J.: Networks An Introduction, Oxford, New York,
2010.
Ouyang, M.: Review on modeling and simulation of interdependent
critical infrastructure systems, Reliab. Eng. Syst. Safe, 121, 43–
60, https://doi.org/10.1016/j.ress.2013.06.040, 2014.
Pant, R., Thacker, S., Hall, J. W., Alderson, D., and Barr, S.: Critical
infrastructure impact assessment due to flood exposure, J. Flood
Risk Manag., 11, 22–33, https://doi.org/10.1111/jfr3.12288,
2018.
Pescaroli, G. and Alexander, D.: Critical infrastructure, panarchies
and the vulnerability paths of cascading disasters, Nat. Hazards,
82, 175–192, https://doi.org/10.1007/s11069-016-2186-3, 2016.
Pescaroli, G. and Alexander, D.: Understanding Com-
pound, Interconnected, Interacting, and Cascading Risks:
A Holistic Framework, Risk Anal., 38, 2245–2257,
https://doi.org/10.1111/risa.13128, 2018.
Reed, D. A., Kapur, K. C., and Christie, R. D.: Methodology for as-
sessing the resilience of networked infrastructure, IEEE Syst. J.,
3, 174–180, https://doi.org/10.1109/JSYST.2009.2017396, 2009.
Rinaldi, S. M.: Modeling and simulating critical infrastructures
and their interdependencies, Big Island, HI, USA IEEE, 8 pp.,
https://doi.org/10.1109/hicss.2004.1265180, 2004.
Rinaldi, S. M., Peerenboom, J. P., and Kelly, T. K.: Iden-
tifying, understanding, and analyzing critical infrastructure
interdependencies, IEEE Contr. Syst. Mag., 21, 11–25,
https://doi.org/10.1109/37.969131, 2001.
Rossman, L. A.: Storm Water Management Model User’s
Manual, EPA United States Environmental Protection
Agency, available at: http://www.epa.gov/water-research/
storm-water-management-model-swmm (last access: 1 Febru-
ary 2020), 2015.
Santos-Reyes, J., Gouzeva, T., and Santos-Reyes, G.: Earthquake
risk perception and Mexico City’s public safety, Procedia Engi-
neer, 84, 662–671, https://doi.org/10.1016/j.proeng.2014.10.484,
2014.
Sapountzaki, K.: Social resilience to environmental risks: A mech-
anism of vulnerability transfer?, Manag. Environ. Qual. An Int.
J., 18, 274–297, https://doi.org/10.1108/14777830710731743,
2007.
Scarrott, C. and Macdonald, A.: A review of extreme value threshold estimation and uncertainty quantification, Stat. J., 10, 33–60,
2012.
Schneiderbauer, S. and Ehrlich, D.: Risk, hazard and people’s
vulnerability to natural hazards: A review of definitions, con-
cepts and data, Eur. Comm. Jt. Res. Centre. EUR, 21410, 40,
https://doi.org/10.1007/978-3-540-75162-5_7, 2004.
Schwarte, N., Cohen, R., Ben-Avraham, D., Barabási, A. L., and
Havlin, S.: Percolation in directed scale-free networks, Phys.
Rev. E, 66, 1–4, https://doi.org/10.1103/PhysRevE.66.015104,
2002.
Setola, R., Rosato, V., Kyriakides, E., and Rome, E.: Managing the
Complexity of Critical Infrastructures, Springer Nature, Poland,
2016.
SFDRR: Sendai Framework for Disaster Risk Reduction
2015–2030, available at: https://www.unisdr.org/we/inform/
publications/43291 (last access: 18 February 2020), 2015.
Tellman, B., Bausch, J. C., Eakin, H., Anderies, J. M., Mazari-
hiriart, M., and Manuel-navarrete, D.: Adaptive pathways and
coupled infrastructure: seven centuries of adaptation to water risk
and the production of vulnerability in Mexico City, Ecol. Soc.,
23, 1, https://doi.org/10.5751/ES-09712-230101, 2018.
Terzi, S., Torresan, S., Schneiderbauer, S., Critto, A., Zebisch,
M., and Marcomini, A.: Multi-risk assessment in moun-
tain regions?: A review of modelling approaches for cli-
mate change adaptation, J. Environ. Manage., 232, 759–771,
https://doi.org/10.1016/j.jenvman.2018.11.100, 2019.
Trucco, P., Cagno, E., and De Ambroggi, M.: Dynamic func-
tional modelling of vulnerability and interoperability of Crit-
ical Infrastructures, Reliab. Eng. Syst. Safe., 105, 51–63,
https://doi.org/10.1016/j.ress.2011.12.003, 2012.
Tsuruta, M., Shoji, Y., Kataoka, S., and G. Y.: Damage propagation caused by interdependency among critical infrastructures, 14th World Conf. Earthq. Eng., 8, 2008.
Van Der Hofstad, R.: Percolation and Random Graphs, in: New Per-
spectives in Stochastic Geometry, edited by: Kendall, W. S. and
Molchanov, I., Eindhoven University of Technology, Eindhoven,
the Netherlands, 2009.
Wahl, T., Jain, S., Bender, J., Meyers, S. D., and Luther, M. E.:
Increasing risk of compound flooding from storm surge and
rainfall for major US cities, Nat. Clim. Change, 5, 1093–1097,
https://doi.org/10.1038/nclimate2736, 2015.
Wilson, R. J.: Introduction to Graph Theory, Oliver & Boyd, Edin-
burgh, 1996.
Zimmerman, R., Foster, S., González, J. E., Jacob, K., Kunreuther,
H., Petkova, E. P., and Tollerson, E.: New York City Panel on
Climate Change 2019 Report Chapter 7: Resilience Strategies for
Critical Infrastructures and Their Interdependencies, Ann. NY
Acad. Sci., 1439, 174–229, https://doi.org/10.1111/nyas.14010,
2019.
Zio, E.: Challenges in the vulnerability and risk analysis of crit-
ical infrastructures, Reliab. Eng. Syst. Safe., 152, 137–150,
https://doi.org/10.1016/j.ress.2016.02.009, 2016.
Zscheischler, J., Westra, S., Van Den Hurk, B. J. J. M., Senevi-
ratne, S. I., Ward, P. J., Pitman, A., Aghakouchak, A., Bresch,
D. N., Leonard, M., Wahl, T., and Zhang, X.: Future climate
risk from compound events, Nat. Clim. Change, 8, 469–477,
https://doi.org/10.1038/s41558-018-0156-3, 2018.
www.nat-hazards-earth-syst-sci.net/20/521/2020/ Nat. Hazards Earth Syst. Sci., 20, 521–547, 2020
... Originating at an elevation of around 2000 m a.s.l. in the Cantabrian Mountains, the river flows from the northwest to the southeast, ultimately draining into the Mediterranean Sea between the cities of Barcelona and Valencia (Almazán-Gómez et al., 2019;Romaní et al., 2011). The Ebro River basin can be divided into three sub-basins: the upper Ebro, extending from Cantabria (limited by the Iberian Range and the Pyrenees) to Miranda de Ebro; the middle Ebro, representing the largest sub-basin from Haro to Mequinenza; and the lower Ebro, measuring 115 km in length, which serves as the confluence point for tributaries of the Ebro flowing from the Cinca-Serge system to the delta into the Mediterranean Sea (Balasch et al., 2019). ...
... Overall, the climate is Mediterranean, with some continental characteristics and a semi-arid climate in the central part of the basin. On average, the annual precipitation was estimated to be 622 mm, averaged from 1920 to 2000(Balasch et al., 2019. The upper Ebro experiences milder temperatures and higher precipitation, ranging between 1000-1500 mm annually. ...
... In the middle Ebro, the average precipitation is lower, varying from 400 to 700 mm annually. Last, the lower Ebro receives less than 400 mm (Balasch et al., 2019). The upper Ebro hydrological regime highly relies on snowfall and snow retention. ...
Article
Full-text available
Climate change increases the risk of wildfires and floods in the Mediterranean region. Yet, wildfire hazards are often overlooked in flood risk assessments and treated in isolation, despite their potential to amplify floods. Indeed, by altering the hydrological response of burnt areas, wildfires can lead to increased runoff and amplifying effects. This study aims to comprehensively assess flood risk using a multi-hazard approach, considering the effect of wildfires on flood risk, and integrating diverse socio-economic indicators with hydrological properties. More specifically, this study investigates current and future flood risks in the Ebro River basin in Spain for the year 2100 under the Shared Socioeconomic Pathway 1-2.6 (SSP1-2.6) and SSP5-8.5 scenarios, taking into account projected socio-economic conditions and the effect of wildfires. An analytical hierarchy process (AHP) approach is employed to assign weights to various indicators and components of flood risk based on insights gathered from interviews with seven experts specializing in natural hazards. Results show that the influence of wildfires on the baseline flood risk is not apparent. Under the SSP1-2.6 scenario, regions with high flood risk are expected to experience a slight risk reduction, regardless of the presence of wildfires, due to expected substantial development in adaptive capacity. The highest flood risk, almost double compared to the baseline, is projected to occur in the SSP5-8.5 scenario, especially when considering the effect of wildfires. Therefore, this study highlights the importance of adopting a multi-hazard risk management approach, as reliance solely on single-risk analyses may lead to underestimating the compound and cascading effects of multi-hazards.
... The holistic imperative considers all factors significant and seeks to cross sectoral boundaries and scales to link multiple hazards and geographies.18,19 However, this increases the number of variables and interrelations that need explaining (i.e., the degrees of freedom), complicating interpretation. In addition, as analyses become more sophisticated, they demand more comprehensive global and local data, which are often lacking. ...
... The limitations imposed by reductionism are better known (e.g., incapacity to explain emergence5,6). In the Anthropocene, new consequences of reductionism come forth: hiding vulnerabilities behind overly general conclusions,20 obfuscating significant yet secondary effects,18 or delimiting the analyzed hazards short of their impacts' extent.9 As risks cascade across society and the environment, their effects may become more prominent in certain social groups,12 as seen during COVID-19, when low-income communities suffered more from the disease and social isolation policies. ...
... Ultimately, reductionism has failed to provide fair and adequate policy advice for hazard mitigation27 because it creates systemic gaps in research and data,9 fails to address second-order effects of hazards,20,28 and erroneously defines independent units of analysis that are in fact empirically entwined.6,18 The shortcomings of the holistic imperative and reductionism thus highlight the significance of integrative approaches. First, integrating social and environmental empirical evidence shows that climate, health, and social crises increasingly interact and amplify their adverse impacts. ...
Article
Full-text available
Summary: Despite the intense hazard interactions in the Anthropocene, risk research is often limited by disciplinary approaches and single-sector or single-scale analyses, skewing policy advice toward biased, misguided, and unfair outcomes. Research has been locked in a trade-off between reductionism, which ignores often conflicting local contexts, and the holistic imperative, which has proved a complex and intractable problem. Here, we provide a framework that embraces the complexities of integrating mixed methods, societal sectors, and analytical scales by using a translator agent-based model. This approach innovates by treating the informational transfers explicitly and dialoguing with different disciplines. We implement it to analyze COVID-19 in Brazil, and our mixed top-down and bottom-up evidence markedly differentiates exposure and vulnerability across social classes. This framework overcomes disciplinary siloing, accounts for cross-sectoral losses, and tracks feedback between environmental and social factors. These innovations are key for promoting evidence-based and context-sensitive policies essential for fairer and more effective adaptation.
... Applying this thesis to the semantic field of hazards, we can say that although science tends to study hazards (their drivers and their effects) one by one, in reality territories suffer them as a whole. The need to take a holistic approach in multi-hazard assessments is widely discussed in the literature because of the complex interactions and interdependencies among hazards, reflected in compound and nonlinear effects [2][3][4]. Indeed, hazards can occur over time in a simultaneous, cascading or cumulative way, as reported in the definition provided by the United Nations Office for Disaster Risk Reduction (UNDRR) [5]. Although many regions worldwide are affected by complex hazard landscapes characterised by their omnipresence, hazards are typically studied in isolation [6]. ...
Article
Full-text available
Many regions worldwide are exposed to multiple omnipresent hazards occurring in complex interactions. However, multi-hazard assessments are not yet fully integrated into current planning tools, particularly when referring to transboundary areas. This work aims to enable spatial planners to include multi-hazard assessments in their climate change adaptation measures using available data. We focus on a set of hazards (e.g., extreme heat, drought, landslide) and propose a four-step methodology to (i) harmonise existing data from different databases and scales for multi-hazard assessment and mapping and (ii) read identified multi-hazard bundles in homogeneous territorial areas. The methodology, whose outputs are replicable in other EU contexts, is applied to the illustrative case of Northeast Italy. The results show a significant difference between hazards with a ‘dichotomous’ spatial behaviour (shocks) and those with a more complex and nuanced one (stresses). The harmonised maps for the single hazards represent a new piece of knowledge for our territory since, to date, there are no comparable maps with this level of definition to understand hazards’ spatial distribution and interactions between transboundary areas. This study does present some limitations, including the combination of data whose definitions differ markedly for some hazards.
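As an illustration of the first and last steps described in this abstract, the short sketch below (hypothetical hazards, areas and threshold, not the authors' workflow) rescales heterogeneous hazard indicators to a common 0-1 range and then reads the multi-hazard "bundle" of each territorial unit as the set of hazards scoring above a chosen threshold.

```python
import pandas as pd

# Hypothetical indicators, in different units, for three territorial units.
raw = pd.DataFrame(
    {"extreme_heat": [32, 35, 30], "drought": [0.2, 0.6, 0.9], "landslide": [5, 1, 8]},
    index=["area_A", "area_B", "area_C"],
)

# Harmonisation: min-max rescaling makes indicators with different units comparable.
harmonised = (raw - raw.min()) / (raw.max() - raw.min())

# Bundle reading: the set of hazards whose harmonised score exceeds 0.5 in each area.
bundles = harmonised.apply(lambda row: set(row[row > 0.5].index), axis=1)
print(bundles)
```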
... Complex dynamics characterize socio-environmental and climate risk: applications may underestimate impacts if they do not take into account the compounding, cascading and amplifying interactions of hazards and their effect on vulnerability and exposure factors. In fact, (i) compounding hazards (co-occurring in the same location and at the same time) can lead to impacts which may be substantially higher than the sum of the single events taken in isolation (Arosio et al., 2020; Zscheischler et al., 2018), (ii) the occurrence of one hazard can itself modify the vulnerability or resilience of the system, exposing assets or communities to higher risks, as in the case of consecutive hazards (de Ruiter & van Loon, 2022), and (iii) impacts and risks can propagate across multiple scales and sectors, extending far beyond the area initially hit and affecting whole systems (Arosio et al., 2021; Pescaroli & Alexander, 2018), as in the case of high-impact, low-probability events (Linkov et al., 2022). For these reasons, the international community (Intergovernmental Panel on Climate Change (IPCC), 2023; UNDRR, 2020) has recently called for a paradigm shift from single-hazard analysis towards a more comprehensive understanding of multiple and interconnected climatic risks (AghaKouchak et al., 2020; De Angeli et al., 2022; Gallina et al., 2020; Šakić Trogrlić et al., 2024; Terzi et al., 2019; Tilloy et al., 2019; Ward et al., 2022). ...
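Point (i) above can be illustrated with a toy calculation (purely illustrative numbers, not from the cited studies): when the damage function is convex in hazard intensity, two moderate co-occurring hazards cause more damage than the sum of the two events taken in isolation.

```python
# Illustrative convex damage function: damage fraction grows with the square of intensity.
damage = lambda intensity: min(1.0, intensity ** 2)

h1, h2 = 0.4, 0.5
print(damage(h1) + damage(h2))  # 0.41 -> reductionist sum of the isolated impacts
print(damage(h1 + h2))          # 0.81 -> impact of the compound event
```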
Preprint
Full-text available
In recent years, interest in data-driven methods, such as machine learning and multivariate statistics, for multi-hazard and multi-risk assessment has surged, due to their ability to integrate vast amounts of data in modelling complex non-linear relationships between hazard and risk factors. This review explores data-driven methods in climate multi-hazard and risk analysis, focusing on four themes: (i) data processing and collection; (ii) hazard identification, prediction and analysis; (iii) risk analysis; and (iv) future risk scenarios under climate change. Key findings highlight the extensive use of machine learning to combine Earth observations and climate data for downscaling and land use and land cover characterisation; the application of deep learning for hazard prediction; the use of ensemble methods for risk analysis; and the growing emphasis on explainable AI frameworks. Training of supervised machine learning approaches on past impacts to model future risk through climate projections also emerged as a significant area. Future research should prioritize multi-hazard interactions, particularly triggering and cascading effects, integrate dynamic vulnerability and exposure factors, and address uncertainties associated with using machine learning for extrapolation. Advancements in Earth observations and textual data integration, alongside the development of open-access disaster catalogues, will be crucial for improving multi-risk analyses and supporting AI-driven early warning systems tailored to regional needs.
... There has been a growing recognition over the last 15 years that natural hazards can interact and occur in conjunction with each other, leading to a potential compounding effect that is greater than the sum of the single-hazard impacts (Kappes et al., 2012; Arosio et al., 2020; Terzi et al., 2019). While the global prevalence of cascading hazards specifically is difficult to quantify reliably, there are increasing calls for effective multi-hazard risk assessments (e.g. Ward et al., 2022). ...
Article
Full-text available
This study introduces a new approach to multi-hazard risk assessment, leveraging hypergraph theory to model the interconnected risks posed by cascading natural hazards. Traditional single-hazard risk models fail to account for the complex interrelationships and compounding effects of multiple simultaneous or sequential hazards. By conceptualising risks within a hypergraph framework, our model overcomes these limitations, enabling efficient simulation of multi-hazard interactions and their impacts on infrastructure. We apply this model to the 2015 Mw 7.8 Gorkha earthquake in Nepal as a case study, demonstrating its ability to simulate the primary and secondary effects of the earthquake on buildings and roads across the whole earthquake-affected area. The model predicts the overall pattern of earthquake-induced building damage and landslide impacts, albeit with a tendency towards over-prediction. Our findings underscore the potential of the hypergraph approach for multi-hazard risk assessment, offering advances in rapid computation and scenario exploration for cascading geo-hazards. This approach could provide valuable insights for disaster risk reduction and humanitarian contingency planning, where the anticipation of large-scale trends is often more important than the prediction of detailed impacts.
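As a rough intuition for the hypergraph idea, not the authors' implementation, the sketch below encodes a cascading scenario as a mapping from each triggering event to the set of elements it can affect (a simplified directed hypergraph) and propagates impacts through it; all names are hypothetical.

```python
# Each (hyper)edge links one triggering event to every element it may affect at once;
# secondary hazards (e.g. an earthquake-triggered landslide) appear as both targets and triggers.
hyperedges = {
    "earthquake": {"building_A", "building_B", "landslide_1"},
    "landslide_1": {"road_X", "building_B"},
}

def propagate(trigger, hyperedges):
    """Collect every element reachable from a trigger through the hyperedges."""
    affected, frontier = set(), [trigger]
    while frontier:
        node = frontier.pop()
        for target in hyperedges.get(node, set()):
            if target not in affected:
                affected.add(target)
                frontier.append(target)  # cascaded events can trigger further impacts
    return affected

print(propagate("earthquake", hyperedges))
# {'landslide_1', 'building_A', 'building_B', 'road_X'} (set order may vary)
```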
... This phenomenon is especially pronounced in strategic sectors like energy, telecommunications, and transportation, where a disruption within one segment of an infrastructure network can swiftly trigger far-reaching consequences, cascading across the network and potentially spilling over into other interconnected systems. For instance, the floods in Thailand in 2011 triggered a worldwide scarcity of computer components, highlighting the interconnectedness of these systems (Arosio et al., 2020). It can be argued that critical infrastructure can operate as a "vulnerability magnifier", whereby its spatial ...
Article
Purpose: This study aimed to address the underexplored domain of organisational vulnerability, with a specific focus on understanding how vulnerability is understood in organisations and the underlying pathways leading to vulnerability. Design/methodology/approach: This study utilised a narrative literature review methodology, using Google Scholar as the primary source, to analyse the concepts of organisational vulnerability in the context of disaster risk studies. The review focused on relevant documents published between the years 2000 and 2022. Findings: The analysis highlights the multifaceted nature of organisational vulnerability, which arises from both inherent weaknesses within the organisation and external risks that expose it to potential hazards. The inherent weaknesses are rooted in internal vulnerability pathways such as organisational culture, managerial ignorance, human resources, and communication weaknesses that compromise the organisation’s resilience. The external dimension of vulnerability is found in cascading vulnerability pathways, e.g. critical infrastructure, supply chains, and customer relationships. Originality/value: As the frequency and severity of disasters continue to increase, organisations of all sizes face heightened vulnerability to unforeseen disruptions and potential destruction. Acknowledging and comprehending organisational vulnerability is a crucial initial step towards enhancing risk management effectiveness, fostering resilience, and promoting sustainable success in an interconnected global environment and an evolving disaster landscape.
Chapter
Natural hazards do not occur in isolation in an area, as they can trigger other events through hazard interrelations, which may result in severe disasters and consequences for a community. With this in mind, a geodatabase of devastating hazards has been constructed to store the locations and dates of occurrence of all types of hazards in Greece since 2014, using local administrative boundaries, with each registered event ascribed to the corresponding hazard type defined in the UNDRR-ISC hazard definitions report. Thereafter, a literature review of hazard interactions is conducted in order to identify the cascading effects that are likely to happen in each particular area. The objective of this research is to propose a method to create a geodatabase of natural hazards and to identify those that were triggered as cascades by other disasters in the past, using historical data. For instance, considering that wildfires increase the probability of landslides and floods, this relationship can be detected in the geodatabase: in the Municipality of Sithonia, a wildfire occurred on June 15, 2014, and several months later landslides and floods were recorded in the same municipality, on October 24, 2014, and October 25, 2014.
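The detection logic described in this chapter can be sketched as a simple query over such a geodatabase: flag a potential cascade whenever a hazard that the literature lists as triggerable follows its trigger in the same administrative unit within a chosen time window. The snippet below is only a sketch with hypothetical column names and window, using the Sithonia dates quoted above.

```python
import pandas as pd

events = pd.DataFrame({
    "municipality": ["Sithonia", "Sithonia", "Sithonia"],
    "hazard": ["wildfire", "landslide", "flood"],
    "date": pd.to_datetime(["2014-06-15", "2014-10-24", "2014-10-25"]),
})

# Interrelations reported in the literature: trigger -> hazards it can cascade into.
triggers = {"wildfire": {"landslide", "flood"}}
window = pd.Timedelta(days=365)

candidates = []
for _, t in events.iterrows():
    for _, e in events.iterrows():
        same_area = t["municipality"] == e["municipality"]
        lag = e["date"] - t["date"]
        if same_area and pd.Timedelta(0) < lag <= window and e["hazard"] in triggers.get(t["hazard"], set()):
            candidates.append((t["hazard"], str(t["date"].date()), e["hazard"], str(e["date"].date())))

# Each tuple: (trigger, trigger date, cascaded hazard, cascade date).
print(candidates)
```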
Article
Flood hazards are increasing as a result of climate change and growing urbanization. Research has shown that people who are socially vulnerable are more exposed to flood risk. Flood disadvantage that exists today is projected to continue in the future: it is stubborn. We present a bold new research agenda for exploring how different physical, social, institutional, and natural resources interact in urban areas to influence social opportunities over time. These interactions must be modeled to understand how the flood-affected area is degraded, in terms of the functions it provides, as a result of varying levels and frequency of flood exposure. Nesting flood exposure within a wider functional view of urban environments enables a place-based, systems approach to resilience. It reveals the underlying mechanisms (and potential remediations) for spatial inequalities. In doing so, resilience strategies can be designed to transform the current trajectory of stubborn disadvantage to flooding.
Conference Paper
Full-text available
This work presents the case study of Mexico City (MC), which could experience an increase in pluvial flooding due to climate change, urbanization and subsidence. We present the development of a simplified model that explicitly integrates the drainage system and the surface runoff to estimate both the hazard and the impacts on population and buildings.
Article
Full-text available
Traditional infrastructure adaptation to extreme weather events (and now climate change) has typically been techno-centric and heavily grounded in robustness—the capacity to prevent or minimize disruptions via a risk-based approach that emphasizes control, armoring, and strengthening (e.g., raising the height of levees). However, climate and nonclimate challenges facing infrastructure are not purely technological. Ecological and social systems also warrant consideration to manage issues of overconfidence, inflexibility, interdependence, and resource utilization—among others. As a result, techno-centric adaptation strategies can result in unwanted tradeoffs, unintended consequences, and underaddressed vulnerabilities. Techno-centric strategies that lock-in today's infrastructure systems to vulnerable future design, management, and regulatory practices may be particularly problematic by exacerbating these ecological and social issues rather than ameliorating them. Given these challenges, we develop a conceptual model and infrastructure adaptation case studies to argue the following: (1) infrastructure systems are not simply technological and should be understood as complex and interconnected social, ecological, and technological systems (SETSs); (2) infrastructure challenges, like lock-in, stem from SETS interactions that are often overlooked and underappreciated; (3) framing infrastructure with a SETS lens can help identify and prevent maladaptive issues like lock-in; and (4) a SETS lens can also highlight effective infrastructure adaptation strategies that may not traditionally be considered. Ultimately, we find that treating infrastructure as SETS shows promise for increasing the adaptive capacity of infrastructure systems by highlighting how lock-in and vulnerabilities evolve and how multidisciplinary strategies can be deployed to address these challenges by broadening the options for adaptation.
Article
Full-text available
In recent years, there has been a gradual increase in research literature on the challenges of interconnected, compound, interacting, and cascading risks. These concepts are becoming ever more central to the resilience debate. They aggregate elements of climate change adaptation, critical infrastructure protection, and societal resilience in the face of complex, high‐impact events. However, despite the potential of these concepts to link together diverse disciplines, scholars and practitioners need to avoid treating them in a superficial or ambiguous manner. Overlapping uses and definitions could generate confusion and lead to the duplication of research effort. This article gives an overview of the state of the art regarding compound, interconnected, interacting, and cascading risks. It is intended to help build a coherent basis for the implementation of the Sendai Framework for Disaster Risk Reduction (SFDRR). The main objective is to propose a holistic framework that highlights the complementarities of the four kinds of complex risk in a manner that is designed to support the work of researchers and policymakers. This article suggests how compound, interconnected, interacting, and cascading risks could be used, with little or no redundancy, as inputs to new analyses and decisional tools designed to support the implementation of the SFDRR. The findings can be used to improve policy recommendations and support tools for emergency and crisis management, such as scenario building and impact trees, thus contributing to the achievement of a system‐wide approach to resilience.
Article
Full-text available
Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a ‘compound event’. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.
Article
Full-text available
Infrastructure development is central to the processes that abate and produce vulnerabilities in cities. Urban actors, especially those with power and authority, perceive and interpret vulnerability and decide when and how to adapt. When city managers use infrastructure to reduce urban risk in the complex, interconnected city system, new fragilities are introduced because of inherent system feedbacks. We trace the interactions between system dynamics and decision-making processes over 700 years of Mexico City’s adaptations to water risks, focusing on the decision cycles of public infrastructure providers (in this case, government authorities). We bring together two lenses in examining this history: robustness-vulnerability trade-offs to explain the evolution of systemic risk dynamics mediated by feedback control, and adaptation pathways to focus on the evolution of decision cycles that motivate significant infrastructure investments. Drawing from historical accounts, archeological evidence, and original research on water, engineering, and cultural history, we examine adaptation pathways of human settlement, water supply, and flood risk. Mexico City’s history reveals insights that expand the theory of coupled infrastructure and lessons salient to contemporary urban risk management: (1) adapting by spatially externalizing risks can backfire: as cities expand, such risks become endogenous; (2) over time, adaptation pathways initiated to address specific risks may begin to intersect, creating complex trade-offs in risk management; and (3) city authorities are agents of risk production: even in the face of new exogenous risks (climate change), acknowledging and managing risks produced endogenously may prove more adaptive. History demonstrates that the very best solutions today may present critical challenges for tomorrow, and that collectively people have far more agency in and influence over the complex systems we live in than is often acknowledged.
Article
Full-text available
Improving urban resilience could help cities better cope with natural disasters, such as neighborhood flood events in Mexico City (image data source: Unidad Tormenta, Sistema de Aguas de la Ciudad de Mexico).
Article
Full-text available
Rising global temperature has put increasing pressure on understanding the linkage between atmospheric warming and the occurrence of natural hazards. While the Paris Agreement has set the ambitious target of limiting global warming to 1.5°C compared to preindustrial levels, scientists are urged to explore scenarios for different warming thresholds and quantify ranges of socioeconomic impact. In this work, we present a framework to estimate the economic damage and population affected by river floods at global scale. It is based on a modeling cascade involving hydrological, hydraulic and socioeconomic impact simulations, and makes use of state-of-the-art global layers of hazard, exposure and vulnerability at 1-km grid resolution. An ensemble of seven high-resolution global climate projections based on Representative Concentration Pathway 8.5 is used to derive streamflow simulations in the present and in the future climate. These were analyzed to assess the frequency and magnitude of river floods and their impacts under scenarios corresponding to 1.5°C, 2°C, and 4°C global warming. Results indicate a clear positive correlation between atmospheric warming and future flood risk at global scale. At 4°C global warming, countries representing more than 70% of the global population and global gross domestic product will face increases in flood risk in excess of 500%. Changes in flood risk are unevenly distributed, with the largest increases in Asia, U.S., and Europe. In contrast, changes are statistically not significant in most countries in Africa and Oceania for all considered warming levels.
Article
Climate change has already led to a wide range of impacts on our society, the economy and the environment. According to future scenarios, mountain regions are highly vulnerable to climate impacts, including changes in the water cycle (e.g. rainfall extremes, melting of glaciers, river runoff), loss of biodiversity and ecosystem services, damage to the local economy (drinking water supply, hydropower generation, agricultural suitability) and human safety (risks of natural hazards). This is due to their exposure to recent climate warming (e.g. temperature regime changes, thawing of permafrost) and the high degree of specialization of both natural and human systems (e.g. mountain species, valley population density, tourism-based economy). These characteristics call for the application of risk assessment methodologies able to describe the complex interactions among multiple hazards, biophysical and socio-economic systems, towards climate change adaptation. Current approaches used to assess climate change risks often address individual risks separately and do not fulfil a comprehensive representation of cumulative effects associated with different hazards (i.e. compound events). Moreover, pioneering multi-layer single risk assessment (i.e. overlapping of single-risk assessments addressing different hazards) is still widely used, causing misleading evaluations of multi-risk processes. This raises key questions about the distinctive features of multi-risk assessments and the available tools and methods to address them. Here we present a review of five cutting-edge modelling approaches (Bayesian networks, agent-based models, system dynamic models, event and fault trees, and hybrid models), exploring their potential applications for multi-risk assessment and climate change adaptation in mountain regions. The comparative analysis sheds light on advantages and limitations of each approach, providing a roadmap for methodological and technical implementation of multi-risk assessment according to distinguished criteria (e.g. spatial and temporal dynamics, uncertainty management, cross-sectoral assessment, adaptation measures integration, data required and level of complexity). The results show limited applications of the selected methodologies in addressing the climate and risks challenge in mountain environments. In particular, system dynamic and hybrid models demonstrate higher potential for further applications to represent climate change effects on multi-risk processes for an effective implementation of climate adaptation strategies.
Article
Disaster risk has long been conceptualized as a complex and non-linear set of interactions. Instead of evaluating risks as isolated entities, ‘networked’ risk assessment methods are being developed to capture interactions between hazards and vulnerabilities. In this article, we address three challenges to networked risk assessments: the limited attention paid to the role of vulnerability in shaping risk networks, the unclear value of networked assessments compared to linear ones, and the potential conflict between linear and networked assessments at the theoretical level. We do so by providing one of the first comparisons between linear and networked assessments in an empirical case, the risks faced by businesses operating in Iran's Razavi Khorasan Province. We find that risk rankings vary depending on whether risks are assessed using linear or networked techniques, and that vulnerabilities feature prominently in networked risk results. We argue that although networked and linear techniques rest on fundamentally different ontological conceptualizations of the world, the approaches are complementary, reflect different dimensions of risk, and can be used in conjunction to provide a more comprehensive view of risk.
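The difference between the two kinds of ranking can be illustrated with a small sketch (hypothetical risks, scores and interaction structure, and just one of many possible networked scoring rules): standalone scores rank the risks in isolation, while rescaling them by a centrality measure in the risk-interaction graph changes the ordering once interdependencies are taken into account.

```python
import networkx as nx

# Standalone (linear) risk scores, assessed in isolation.
standalone = {"flood": 0.9, "power_outage": 0.6, "supply_chain": 0.5}

# Directed risk-interaction graph: an edge u -> v means risk u feeds into risk v.
G = nx.DiGraph()
G.add_edge("flood", "power_outage")
G.add_edge("power_outage", "supply_chain")
G.add_edge("flood", "supply_chain")

# PageRank as a simple proxy for how strongly each risk is fed by the others.
centrality = nx.pagerank(G)

# Networked score: standalone score rescaled by network centrality.
networked = {r: standalone[r] * centrality[r] for r in standalone}

rank = lambda scores: sorted(scores, key=scores.get, reverse=True)
print("linear ranking:   ", rank(standalone))
print("networked ranking:", rank(networked))
```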