Nat. Hazards Earth Syst. Sci., 20, 521–547, 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.
The whole is greater than the sum of its parts: a holistic graph-based
assessment approach for natural hazard risk of complex systems
Marcello Arosio1, Mario L. V. Martina1, and Rui Figueiredo1,2
1Department of Science, Technology and Society, Scuola Universitaria Superiore IUSS Pavia,
Piazza della Vittoria 15, 27100 Pavia, Italy
2CONSTRUCT-LESE, Faculty of Engineering, University of Porto, Porto, Portugal
Correspondence: Marcello Arosio
Received: 27 September 2018 – Discussion started: 8 October 2018
Revised: 16 January 2020 – Accepted: 18 January 2020 – Published: 24 February 2020
Abstract. Assessing the risk of complex systems to natu-
ral hazards is an important but challenging problem. In to-
day’s intricate socio-technological world, characterized by
strong urbanization and technological trends, the connections
and interdependencies between exposed elements are crucial.
These complex relationships call for a paradigm shift in col-
lective risk assessments, from a reductionist approach to a
holistic one. Most commonly, the risk of a system is esti-
mated through a reductionist approach, based on the sum of
the risk evaluated individually at each of its elements. In con-
trast, a holistic approach considers the whole system to be a
unique entity of interconnected elements, where those con-
nections are taken into account in order to assess risk more
thoroughly. To support this paradigm shift, this paper pro-
poses a holistic approach to analyse risk in complex systems
based on the construction and study of a graph, the math-
ematical structure to model connections between elements.
We demonstrate that representing a complex system such as
an urban settlement by means of a graph, and using the tech-
niques made available by the branch of mathematics called
graph theory, will have at least two advantages. First, it is
possible to establish analogies between certain graph metrics
(e.g. authority, degree and hub values) and the risk variables
(exposure, vulnerability and resilience) and leverage these
analogies to obtain a deeper knowledge of the exposed sys-
tem to a hazard (structure, weaknesses, etc.). Second, it is
possible to use the graph as a tool to propagate the damage
into the system, for not only direct but also indirect and cas-
cading effects, and, ultimately, to better understand the risk
mechanisms of natural hazards in complex systems. The fea-
sibility of the proposed approach is illustrated by an applica-
tion to a pilot study in Mexico City.
1 Introduction
We live in a complex world: today’s societies are inter-
connected in complex and dynamic socio-technological net-
works and have become more dependent on the services pro-
vided by critical facilities. Population and assets in natu-
ral hazard-prone areas are increasing, which translates into
higher economic losses (Bouwer et al., 2007). In coming
years, climate change is expected to exacerbate these trends
(Alfieri et al., 2017). In this context, natural hazard risk is
a worldwide challenge that institutions and private individ-
uals must face at both global and local scales. Today, there
is growing attention paid to the management and reduction
of natural hazard risk, as illustrated for example by the wide
adoption of the Sendai Framework for Disaster Risk Reduc-
tion (SFDRR, 2015).
1.1 Collective disaster risk assessment: traditional
The effective implementation of strategies to manage and re-
duce collective risk, i.e. the risk assembled by a collection of
elements at risk, requires support from risk assessment (RA)
studies that quantify the impacts that hazardous events may
have on the built environment, economy and society (Grossi
and Kunreuther, 2005). The research community concerned
with disaster risk reduction (DRR), particularly in the fields
Published by Copernicus Publications on behalf of the European Geosciences Union.
of physical risk, has generally agreed on a common approach
for the calculation of risk (R) as a function of hazard (H),
exposure (E) and vulnerability (V): R = f(H, E, V) (e.g.
Balbi et al., 2010; David, 1999; IPCC, 2012; Schneiderbauer
and Ehrlich, 2004). Hazard defines the potentially damaging
events and their probabilities of occurrence, exposure repre-
sents the population or assets located in hazard zones that are
therefore subject to potential loss, and vulnerability links the
intensity of a hazard to potential losses to exposed elements.
This framework has been in use by researchers and practi-
tioners in the field of seismic risk assessment for some time
(Bazzurro and Luco, 2005; Crowley and Bommer, 2006) and
has more recently also become standard practice for other
types of hazards, such as floods (Arrighi et al., 2013; Falter
et al., 2015).
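As a purely illustrative sketch of this reductionist formulation, the expected annual loss of a set of exposed elements can be computed by evaluating risk individually at each element and summing. All scenario probabilities, asset values and the damage curve below are hypothetical assumptions, not data from this paper:

```python
# Reductionist sketch of R = f(H, E, V): risk evaluated individually at each
# element and summed, with no interconnections between elements.
# All numbers below are hypothetical illustrations.

hazard = [
    {"probability": 0.10, "intensity": 1.0},   # e.g. frequent flood, 1.0 m depth
    {"probability": 0.01, "intensity": 2.5},   # e.g. rare flood, 2.5 m depth
]

# Exposure: value of each element located in the hazard zone
exposure = {"building_A": 1_000_000, "building_B": 500_000}

def vulnerability(intensity):
    """Toy damage curve: fraction of value lost as a function of intensity."""
    return min(1.0, 0.3 * intensity)

def expected_annual_loss(hazard, exposure):
    # Collective risk as the sum over independent elements (reductionist paradigm)
    return sum(
        scenario["probability"] * value * vulnerability(scenario["intensity"])
        for scenario in hazard
        for value in exposure.values()
    )

eal = expected_annual_loss(hazard, exposure)
```

Under a holistic approach, by contrast, the loss at one element would also depend on the state of the elements it is connected to, which this per-element sum cannot capture.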
Despite the consensus on the conceptual definition of
risk, different stakeholders tend to have their own specific
perspectives. For example, while insurance and reinsurance
companies may focus on physical vulnerability and potential
economic losses, international institutions and national gov-
ernments may be more interested in the social behaviour of
society or individuals in coping with or adapting to hazardous
events (Balbi et al., 2010). As such, even though this risk
formulation can be a powerful tool for RA, it has its limits.
For instance, it does not consider social conditions, commu-
nity adaptation or resilience (i.e. a system’s capacity to cope
with stress and failures and to return to its previous state).
In fact, resilience is still being debated, and there is not a
common and consolidated approach for assessing it (Bosetti
et al., 2016; Bruneau et al., 2004; Cutter et al., 2008, 2010).
To overcome some of these limits, different approaches
have been put forward in recent research. For example, Car-
reño et al. (2007a, b, 2012) have proposed including an
aggravating coefficient in the risk equation in order to re-
flect socio-economic and resilience features. Another exam-
ple can be found in the Global Earthquake Model, which
aims to assess so-called integrated risk by combining hazard
(seismic), exposure and vulnerability of structures with met-
rics of socio-economic vulnerability and resilience to seismic
risk (Burton and Silva, 2015). Multi-risk assessment studies
resulting from a combination of multiple hazards and vulner-
abilities are also receiving growing scientific attention (Eakin
et al., 2017; Gallina et al., 2016; Karagiorgos et al., 2016; Liu
et al., 2016; Markolf et al., 2018; Wahl et al., 2015; Zscheis-
chler et al., 2018). These new approaches are seen with in-
creasing international interest, particularly with regard to cli-
mate change adaptation (Balbi et al., 2010; Terzi et al., 2019).
While some research has explored the potential of an in-
tegrated approach to risk and multi-risk assessment of nat-
ural hazards, quantitative collective RA still requires further
development to consider the connections and interactions be-
tween exposed elements. Although holistic approaches are in
strong demand (Cardona, 2003; Carreño et al., 2007b; IPCC,
2012), the majority of methods and especially models devel-
oped so far are based on a reductionist paradigm, which esti-
mates the collective risk of an area as the sum of the risk of its
exposed elements individually, neglecting the links between
them. In fact, the reductionist approaches are neglecting one
of the famous conjectures attributed to Aristotle: “a whole is
greater than the sum of its parts” (384–322 BCE).
1.2 Modelling natural hazard risk in complex systems:
state of the art and limitations
Modern society increasingly relies on interconnections. The
links between elements are now crucial, especially consider-
ing current urbanization and technological trends. Complex
socio-technological networks, which increase the impact of
local events on broader crises, characterize the modern tech-
nology of present-day urban society (Pescaroli and Alexan-
der, 2016). Such aspects support the perception that collec-
tive risk assessment requires a more comprehensive approach
than the traditional reductionist one, as it needs to involve
“whole systems” and “whole life” thinking (Albano et al.,
2014). The reductionist approach, in which the “risks are an
additive product of their constituent parts” (Clark-Ginsberg
et al., 2018), contrasts with the complex nature of disas-
ters. In fact, these tend to be strongly non-linear, i.e. the ul-
timate outcomes (losses) are not proportional to the initial
event (hazard intensity and extent) and are expressed by
emergent behaviour (i.e. macroscopic properties of the com-
plex system) that appears when a large number of single entities
(agents) operate in an environment, giving rise to more com-
plex behaviours as a collective (Bergström, Uhr and Frykmer,
2016). In the last decade, many disasters have shown high
levels of complexity and the presence of non-linear paths and
emergent behaviour that have led to secondary events. Exam-
ples of such large-scale extreme events are the eruption of the
Eyjafjallajökull volcano in Iceland in 2010, which affected
Europe’s entire aviation system, the flooding in Thailand in
2011, which caused a worldwide shortage of computer com-
ponents, and the energy distribution crisis triggered by Hur-
ricane Sandy in New York in 2012.
Secondary events (or indirect losses) due to dependency
and interdependency have been thoroughly analysed in the
field of critical infrastructures such as telecommunications,
electric power systems, natural gas and oil, banking and fi-
nance, transportation, water supply systems, government ser-
vices, and emergency services (Buldyrev et al., 2010). Ri-
naldi et al. (2001), in one of the most quoted papers on this
topic, proposed a comprehensive framework for identifying,
understanding and analysing the challenges and complexi-
ties of interdependency. Since then, numerous works have
focussed on the issue of systemic vulnerability due to the in-
crease in interdependencies in modern society (e.g. Lewis,
2014; Menoni et al., 2002; Setola et al., 2016). Menoni
(2001) defines systemic risk as “the risk of having not
just statistically independent failures, but interdependent, so-
called ‘cascading’ failures in a network of N interconnected
system components.” The article also highlights that “In such
cases, a localized initial failure (‘perturbation’) could have
disastrous effects and cause, in principle, unbounded dam-
age as N goes to infinity.” Ouyang (2014) reviews existing
modelling approaches of interdependent critical infrastruc-
ture systems and categorizes them into six groups: empiri-
cal, agent-based, system dynamics-based, economic-theory-
based, network-based and others. This wide range of mod-
els reflects the different levels of analysis of critical infras-
tructures (physical, functional or socio-economic). Trucco
et al. (2012) propose a functional model aimed at (i) prop-
agating impacts, within and between infrastructures in terms
of disservice due to a wide set of threats, and (ii) applying
it to a pilot study in the metropolitan area of Milan. Pant
et al. (2018) proposed a spatial network model to quantify
flood impacts on infrastructures in terms of disrupted cus-
tomer services both directly and indirectly linked to flooded
assets. These analyses could inform flood risk management
practitioners to identify and compare critical infrastructure
risks on flooded and non-flooded land, to prioritize flood pro-
tection investments, and to improve the resilience of cities.
However, this well-developed branch of research is mostly
focussed on the analysis of a single infrastructure typol-
ogy, and the aim is usually to assess the efficiency of the
infrastructure itself rather than the impact that its failure
may have on society. In particular, “representations of in-
frastructure network interdependencies in existing flood risk
assessment frameworks are mostly non-existent” (Pant et al.,
2018). These interdependencies are crucial for understanding
how the impacts of natural hazards propagate across infras-
tructures and towards society.
An entire branch of research analyses the complex social–
physical–technological relationships of society considering
a system-of-system (SoS) perspective, whereby systems are
merged into one interdependent system of systems. In a SoS,
people belong to and interact within many groups, such as
households, schools, workplaces, transport, healthcare sys-
tems, corporations and governments. In a SoS, the depen-
dencies are therefore distinguished between links within the
same system or between different systems (Alexoudi et al.,
2011). The relations between different systems are mod-
elled in the literature using qualitative graphs or flow dia-
grams (Kakderi et al., 2011) and by matrices (Abele and
Dunn, 2006). Tsuruta and Kataoka (2008) use matrices to de-
termine damage propagation within infrastructure networks
(e.g. electric power, waterworks, telecommunication, road)
due to interdependency, based on past earthquake data and
expert judgement. Menoni (2001) proposes a framework
showing major systems interacting in a metropolitan envi-
ronment based on observations of the Kobe earthquake. Lane
and Valerdi (2010) provide a comparison of various SoS def-
initions and concepts, while Kakderi et al. (2011) have deliv-
ered a comprehensive literature review of methodologies to
assess the vulnerability of a SoS.
1.3 Positioning and aims
The aspects of complexity and interdependency have been
investigated by various models of critical infrastructure as a
single system, or as systems of systems, which are networks
by construction (e.g. drainage system or electric power net-
work; Holmgren, 2006; Navin and Mathur, 2015). However,
the current practice related to both the single system and SoS
needs further research, in particular when it comes to mod-
elling the complexity of interconnections between individual
elements that do not explicitly constitute a network, which
tends to be neglected by traditional reductionist risk assess-
ments. In fact, although several authors have shown how to
model risk in systems which are already networks by con-
struction (Buldyrev et al., 2010; Reed et al., 2009; Rinaldi,
2004; Zio, 2016), fewer have addressed the topic of risk mod-
elling in systems where that is not the case, i.e. systems that
are not immediately and manifestly depicted as a network
(Hammond et al., 2013; Zimmerman et al., 2019). These in-
clude cities, regions or countries, which are complex systems
made of different elements (e.g. people, services, factories)
connected in different ways among each other in order to
carry out their own activities. Therefore, in this paper we
promote an approach, which has previously received
attention from other authors, to model the inter-
connections between the elements that constitute those sys-
tems and assess collective risk in a holistic manner. The ap-
proach involves the translation of the complex system into a
graph, i.e. a mathematical structure used to model relations
between elements. This allows modelling and assessing in-
terconnected risk (due to the complex interaction between
human, environment and technological systems) and cascad-
ing risk (which results from escalation processes). The inter-
actions between elements at risk and their influence on indi-
rect impacts are assessed within the framework of graph the-
ory, the branch of mathematics concerned with graphs. The
results can be used to support more informed DRR decision-
making (Pescaroli and Alexander, 2018).
The aims of this paper can be summarized as follows:
to call for a paradigm shift from a reductionist to a holis-
tic approach to assess natural hazard risk, supported by
the construction of a graph;
to show the potential advantages of the use of a graph,
namely (1) understanding fundamental aspects of com-
plex systems which may have relevant implications to
natural hazard risk, leveraging well-known graph prop-
erties, and (2) using the graph as a tool to model the
propagation of impacts of a natural hazard and, eventu-
ally, assess risk in complex systems;
to present the feasibility of implementing the approach
through a pilot study in Mexico City;
to discuss the limitations, potentialities and future de-
velopments of this approach compared to other more
traditional approaches.
2 Methodology
In this section, which presents the methodology, we aim to
answer the following three questions:
1. How can a complex system that does not explicitly
constitute a network be “translated” into a graph?
2. Which properties of the graph could give us insights on
the risk-related properties of the system?
3. How can the impacts of a natural hazard be propagated
by means of the graph?
The answers to these questions are formulated by proposing the
workflow of the graph-based approach, which is divided into
three main steps, described in Sect. 2.1, “Construction of the
graph”; Sect. 2.2.2, “Analogy between graph properties and
risk variables”; and Sect. 2.3, “Hazard impact propagation
within the graph”.
The workflow is presented in Fig. 1.
2.1 Construction of the graph
The construction of a graph for systems already in the form
of a network is well developed and consolidated in the lit-
erature (e.g. Rinaldi, 2004; Setola et al., 2016). In contrast, the
use of graph theory – and the exploitation of its diagnostic
tools – for systems not already structurally in the form of a
network is relatively new. In this regard, in this section we
propose a procedure to build a graph for a complex system,
such as a city, by linking the individual elements constituting it.
The graph construction phase starts by defining the hypotheses
of the analysis and the system boundaries according to the objects
of each specific context. In particular, it establishes the two
main objects of the graph: vertices (nodes) and edges (links)
and their characteristics.
The nodes can theoretically represent all the entities that
the analysis wants to consider: physical elements like a sin-
gle building, bridge and electric tower; suppliers of services
such as schools, hospitals and fire brigades; or beneficiaries
such as population, students or specific vulnerable groups
such as elderly people. Due to the very wide variety of ele-
ments that can be chosen, it is necessary to select the category
of nodes most relevant to the specific context of analysis. It is
also necessary to define, for each node, the operational state
that can be characterized, from a simplistic Boolean state
(functional or non-functional) to discrete states (30 %, 60 %
or 100 % of service or functionality) or even a complete con-
tinuous function (similarly to vulnerability functions). In a
graph, the states of each node depend both on the states of
the adjacent nodes and on the hazard. In this paper, we use
the term node to refer to its graph characteristics and the term el-
ement to refer to the entity that it represents in the real world.
The links between the nodes that create the graph can range
from physical to geographical, cyber or logical connections
(Rinaldi et al., 2001). According to the different typologies of
connections and nodes selected, it is necessary to define the
direction and weight of the links. The graph will be directed
when the direction of the connection between elements is rel-
evant, and it will be weighted if the links have a different
importance, intensity or capacity.
In defining the topology, it is crucial to set the level of detail
of the analysis coherently with its scope and scale, both for
the selection of elements and for the relationship between el-
ements that need to be considered. In the case of very high
detail, for example, a node of the graph could represent a
single person within a population, and in the case of lower
resolution, it could represent a large group of people with a
specific common characteristic, such as living in the same
block or having the same hobby. In the case of analyses at a
coarser level, an entire network (e.g. electric power system)
can be modelled as a single node of another larger network
(e.g. national power system). The definition of the topological
structure of the graph also immediately identifies the system
boundaries (e.g. which hospitals to consider in the anal-
ysis: only those in the potential flood area, those in the district or
those in the region). To what extent is it necessary to con-
sider elements as nodes of the graph? The topology def-
inition is a necessary step in performing the computational
analysis and introduces approximations of the open systems
that need to be acknowledged.
Once the graph is conceptually defined, in order to ac-
tually build the graph, it is then necessary to establish the
connection between all the selected elements. The relations
described above determine the existence of connections be-
tween categories of elements, but they do not define how a
single node of one category is linked to a node of another cat-
egory. Therefore, it is necessary to define rules that establish
the connections between each single node. For the sake of
clarity, an example could be the following: the conceptual re-
lationship is defined between students and school (“students
go to school”); subsequently, it is necessary to make the link
between each student and a school in the area, applying a rule
such as “students go to the closest school”. This is an exam-
ple of geographical connection with nodes that are linked by
their spatial proximity.
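A geographical linking rule of this kind can be sketched in a few lines; the coordinates and names below are invented for illustration:

```python
import math

# Hypothetical coordinates: the conceptual relation "students go to school"
# becomes concrete links via the rule "students go to the closest school".
schools = {"school_1": (0.0, 0.0), "school_2": (10.0, 0.0)}
students = {"student_a": (1.0, 1.0), "student_b": (9.0, 2.0), "student_c": (4.0, 0.0)}

def closest_school(position, schools):
    # Geographical connection: link each student to the nearest school
    return min(schools, key=lambda s: math.dist(position, schools[s]))

# Directed edge list (student -> school) of the resulting graph
edges = [(name, closest_school(pos, schools)) for name, pos in students.items()]
```

Other rules (e.g. administrative catchment areas instead of pure proximity) would produce different edge lists from the same conceptual relationship.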
The connections between the single elements can be rep-
resented either by a list of pairs of nodes or, more frequently,
by the adjacency matrix. Any graph G with N nodes can in fact be
represented by its adjacency matrix A(G) with N × N
elements A_ij, whose value is A_ij = A_ji = 1 if nodes i and j
are connected and 0 otherwise. If the graph is weighted,
A_ij = A_ji can have a value between 0 and 1, expressing the
weight of the connection between the nodes. The properties
Figure 1. Workflow.
of the nodes are represented in both cases by another matrix,
with a column for each property associated with the node
(e.g. name, category, type). In practical terms, the list of all
connections or the adjacency matrix can be automatically ob-
tained via GIS analysis, in the case of geographical connec-
tions, or by database analysis, in the case of other categories
of connections. The list of nodes, together with either the list
of links or the adjacency matrix, are the inputs for building
the mathematical graph.
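A minimal sketch of this input step, assuming a hypothetical set of urban elements and weighted directed links (the names and weights are invented, not taken from the pilot study), could look as follows:

```python
# Sketch: building the adjacency matrix A(G) from the two inputs described
# above (a node list and a link list). Names and weights are hypothetical.
nodes = ["power_plant", "hospital", "school", "district_pop"]

# (source, target, weight): a weight in (0, 1] expresses the link importance
links = [
    ("power_plant", "hospital", 1.0),
    ("power_plant", "school", 0.5),
    ("hospital", "district_pop", 1.0),
    ("school", "district_pop", 0.8),
]

index = {n: i for i, n in enumerate(nodes)}
N = len(nodes)
A = [[0.0] * N for _ in range(N)]
for src, dst, w in links:
    A[index[src]][index[dst]] = w   # directed: A[i][j] != A[j][i] in general
```

In practice the link list would be produced by GIS or database queries, as described above, rather than written by hand.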
Once a graph has been set up and constructed, it is then
possible to compute and analyse its properties by means of
graph theory and propagate the hazard impact into the graph,
as illustrated in the following sub-sections.
2.2 Analysis of the graph properties
2.2.1 Summary of relevant graph properties
The mathematical properties of a graph can be studied us-
ing graph theory (Biggs et al., 1976), which is the branch of
mathematics that studies the properties of graphs (Barabasi,
2016). Graphs can represent networks of physical elements
in the Euclidean space (e.g. electric power grids and high-
ways) or of entities defined in an intangible space (e.g. col-
laborations between individuals; Wilson, 1996). Since its in-
ception in the 18th century (Euler, 1736), graph theory has
provided answers to questions in different sectors, such as
pipe networks, roads and the spread of epidemics. Over re-
cent decades, studies of graph concepts, connections and relationships have strongly accelerated in every area of knowl-
edge and research (from physics to information technology,
from genetics to mathematics and to building and urban de-
sign), showing the image of a strongly interconnected world
in which relationships between individual objects are often
more important than the objects themselves (Mingers and
White, 2009).
Formally, a complex network can be represented by a
graph G which consists of a finite set of elements V(G)
called vertices (or nodes, in network terminology) and a set
E(G) of pairs of elements of V(G) called edges (or links, in
network terminology; Boccaletti et al., 2006). The graph can
be undirected or directed (Fig. 2a and b). In an undirected
graph, each of the links is defined by a pair of nodes i and j
and is denoted as l_ij. The link is said to be incident in nodes
i and j, or to join the two nodes; the two nodes i and j are re-
ferred to as the end nodes of link l_ij. In a directed graph, the
order of the two nodes is important: l_ij stands for a link from
i to j, node i points to node j and l_ij ≠ l_ji. Two nodes joined
by a link are referred to as adjacent (Börner et al., 2007; Luce
and Perry, 1949). In addition, a graph could have edges of
different weights representing their relative importance, ca-
pacity or intensity. In this case, a real number representing
the weight of the link is associated to it, and the graph is said
to be weighted (Fig. 2c; Börner et al., 2007).
A short list of the most common set of node, edge and
graph measures used in graph theory is presented here and
summarized in Table 1 (Nepusz and Csard, 2018; Newman,
2010). There are measures that analyse the properties of
nodes or edges, local measures that describe the neighbour-
hood of a node (single part of the system) and global mea-
sures that analyse the entire graph (whole system). From a
holistic point of view, it is important to note that since some
node and edge measures require the examination of the com-
plete graph, this allows looking at the studied area as a unique
entity that results from the connections and interactions be-
tween its parts and characterizing the whole system.
The degree (or connectivity, k) of a node is the number
of edges incident with the node. If the graph is directed, the
degree of the node has two components: the number of out-
going links (referred to as the degree-out of the node) and the
number of ingoing links (referred to as the degree-in of the
node). The distribution of the degree of a graph is its most
basic topological characterization, while the node degree is
a local measure that does not take into account the global
properties of the graph. In contrast, path lengths, close-
ness and betweenness centrality are properties that consider
the complete graph. The path length is the geodesic length
from node ito node j: in a given graph, the maximum value
of all path lengths is called diameter and the average shortest
path length is called the characteristic path length. Closeness
is the shortest path length from a node to every other node
in the network, and betweenness is defined as the number
of shortest paths between pairs of nodes that pass through a
given node.
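These local and global measures can be computed directly from an adjacency list. The following sketch, on a toy undirected graph with invented node names, illustrates degree, diameter and characteristic path length using breadth-first search for the geodesic lengths:

```python
from collections import deque

# Toy undirected graph as an adjacency list (names are invented)
adj = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b", "e"],
    "e": ["d"],
}

# Degree: number of edges incident with each node
degree = {node: len(neigh) for node, neigh in adj.items()}

def shortest_path_lengths(adj, source):
    """Geodesic lengths from `source` to every reachable node (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

all_dist = {s: shortest_path_lengths(adj, s) for s in adj}

# Diameter: maximum of all geodesic lengths
diameter = max(d for dist in all_dist.values() for d in dist.values())

# Characteristic path length: average over all ordered pairs i != j
n = len(adj)
char_path_len = sum(d for dist in all_dist.values() for d in dist.values()) / (n * (n - 1))
```

Closeness and betweenness can be derived from the same BFS distances, at the cost of also tracking the shortest paths themselves.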
Other relevant characteristics that are commonly analysed
in directed graphs to assess the relative importance of a node,
in terms of the global structure of the graph, are the hub and
authority properties. A node with a high hub value points
to many other nodes, while a node with a high authority
value is linked by many different hubs. Mathematically, the
authority value of a node is proportional to the sum of the
hub values of the nodes pointing to it, and the hub value of a node is
proportional to the sum of the authority values of the nodes it points
to (Nepusz and Csard, 2018; Newman, 2010). In the World
Wide Web, for example, websites (nodes) with higher author-
ities contain the relevant information on a given topic (e.g., last access: 2 February 2020),
while websites with higher hubs point to such information
(e.g., last access: 2 February 2020).
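The mutual reinforcement between hubs and authorities can be illustrated with a simple power-iteration sketch on a toy directed graph. The node names are invented, and this is a generic illustration of the idea rather than the authors' implementation:

```python
# Power-iteration sketch of hub and authority scores on a toy directed graph.
# Node names are invented; this is not the paper's implementation.
edges = [("h1", "a1"), ("h1", "a2"), ("h2", "a1"), ("a1", "a2")]
nodes = sorted({n for edge in edges for n in edge})

hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}
for _ in range(50):
    # authority grows with the hub values of the nodes pointing to it
    auth = {n: sum(hub[s] for s, t in edges if t == n) for n in nodes}
    norm = sum(v * v for v in auth.values()) ** 0.5
    auth = {n: v / norm for n, v in auth.items()}
    # hub grows with the authority values of the nodes it points to
    hub = {n: sum(auth[t] for s, t in edges if s == n) for n in nodes}
    norm = sum(v * v for v in hub.values()) ** 0.5
    hub = {n: v / norm for n, v in hub.items()}
```

Here "h1", which points to both well-pointed-to nodes, converges to the highest hub score, while nodes with no incoming links keep an authority of zero.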
The mathematical properties presented above are useful
metrics for analysing the structural (i.e. network topology,
arrangement of a network) and functional (i.e. network dy-
namics, how the network status changes after perturbation)
properties of complex networks. Depending on the statisti-
cal properties of the degree distributions, there are two broad
classes of networks: homogeneous and heterogeneous (Boc-
caletti et al., 2006). Homogeneous networks show a distribu-
tion of the degree with a typically exponential, fast-decaying
tail, such as a Poisson distribution, while heteroge-
neous networks have a heavy-tailed distribution of the de-
gree, well-approximated by a power-law distribution. Many
real-world complex networks show power-law distribution of
the degree, and these are also known as scale-free networks
because power laws have the same functional form on all
scales (Boccaletti et al., 2006). Networks with highly hetero-
geneous degree distribution have few nodes linked to many
other nodes (i.e. few hubs) and a large number of poorly con-
nected elements.
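The emergence of such a heavy-tailed degree distribution can be illustrated with a minimal preferential-attachment sketch in the style of the Barabási–Albert model, with one link per new node; the graph size and seed are arbitrary illustrative choices:

```python
import random

random.seed(42)

# Preferential-attachment sketch (Barabási–Albert style, one link per new
# node): existing nodes are drawn as link targets proportionally to degree.
targets = [0, 1]            # endpoint list; each node appears once per degree
degree = {0: 1, 1: 1}
for new_node in range(2, 2000):
    chosen = random.choice(targets)    # degree-proportional choice
    degree[new_node] = 1
    degree[chosen] += 1
    targets += [new_node, chosen]

max_degree = max(degree.values())
average_degree = sum(degree.values()) / len(degree)
# A few hubs end up far above the average degree (heavy-tailed distribution)
```

The average degree stays close to 2, yet a handful of early nodes accumulate degrees an order of magnitude larger, i.e. the few hubs and many poorly connected elements described above.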
The properties of the static network structure are not al-
ways appropriate for fully characterizing real-world net-
works that also display dynamic aspects. There are examples
of networks that evolve with time or according to external
environment perturbations (e.g. removal of nodes or links).
Two important properties for exploring the dynamic response
to a perturbation are percolation thresholds and fragmenta-
tion modes.
Percolation theory originated as a model of a porous medium but
soon became a paradigmatic model in statistical physics. Wa-
ter can percolate in a medium if a large number of links ex-
ists (i.e. the presence of links means the possibility of water
flowing through the medium), and this depends largely on
the fraction of links that are maintained. When the graph is
characterized by many links, there is a higher probability that
connection between two nodes may exist and, in this case, the
system percolates. Vice versa, if most links are removed, the
network becomes fragmented (Van Der Hofstad, 2009). The
percolation threshold is an important network feature result-
ing from the percolation concept, which is obtained by re-
moving vertices or edges from a graph. When a perturbation
Figure 2. Graph representation of a network. (a) Undirected. (b) Directed. (c) Weighted directed.
Table 1. Properties of a graph G with N nodes defined by its adjacency matrix A(G) with N × N elements a_ij, whose value is a_ij > 0 if nodes i and j are connected and is 0 otherwise.

Property | Description | Formula
Degree (k) | The number of edges incident with the node | k_i = Σ_j a_ij
Diameter (D) | The maximum value of all path lengths | D = max_{i,j} d_ij, where d_ij is the geodesic length (i.e. path length) from node i to node j
Average path length (d) | The average shortest path length | d = (1 / (N (N − 1))) Σ_{i,j (i ≠ j)} d_ij
Closeness (c) | Shortest path length from a node to every other node in the network | c_i = 1 / l_i, where l_i = (1 / (N − 1)) Σ_j d_ij
Betweenness (b) | Number of shortest paths between pairs of nodes that pass through a given node | b_i = Σ_{j,k} n_jk(i) / n_jk, where n_jk(i) is the number of shortest paths connecting j and k via i and n_jk is the number of shortest paths connecting j and k
Authority (x) | The value proportional to the sum of the hub values of the nodes pointing to it | x_i = α Σ_j a_ji y_j (leading eigenvector of A^T · A), where α is a proportionality constant
Hub (y) | The value proportional to the sum of the authority values of the nodes it points to | y_i = β Σ_j a_ij x_j (leading eigenvector of A · A^T), where β is a proportionality constant
Percolation threshold (p_c) | The minimum value of the fraction of remaining nodes (p) that leads to the connectivity phase of the graph | For a random graph, p_c = 1 / ⟨k⟩, where ⟨k⟩ is the average degree
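The metrics of Table 1 are available in standard network libraries. The following minimal sketch computes degree, diameter, average path length, closeness and betweenness on a four-node path graph using Python and networkx (the paper itself uses the R igraph package); note that networkx normalizes closeness by N − 1 reachable nodes, a convention equivalent to c_i = 1/l_i of Table 1.

```python
import networkx as nx

# Small undirected graph to exercise the metrics of Table 1.
G = nx.path_graph(4)  # nodes 0-1-2-3 in a line

degrees = dict(G.degree())                     # k_i = number of incident edges
diameter = nx.diameter(G)                      # D = max geodesic length
avg_path = nx.average_shortest_path_length(G)  # d
closeness = nx.closeness_centrality(G)         # c_i
betweenness = nx.betweenness_centrality(G, normalized=False)  # b_i

print(degrees)      # {0: 1, 1: 2, 2: 2, 3: 1}
print(diameter)     # 3 (the path from node 0 to node 3)
print(betweenness)  # middle nodes 1 and 2 lie on the most shortest paths
```

On this graph the two interior nodes have betweenness 2 (each lies on two of the six shortest paths) while the endpoints have betweenness 0, which matches the intuition of Table 1.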
is simulated as a removal of nodes or links, the fraction of nodes removed is defined as f = Nodes_removed / Nodes_Total, and the probability of nodes and links being present in a percolation problem is p = 1 − f = Nodes_remaining / Nodes_Total. Consequently, it is possible to define the percolation threshold (p_c) as the minimum value of
p that leads to the connectivity phase of the graph (Gao et al.,
2015). In practical terms, the percolation threshold discrim-
inates between the connected and fragmented phases of the
network. In a random network (i.e. a network with N nodes
where each node pair is connected with probability p), for
example, p_c = 1/⟨k⟩, where ⟨k⟩ is the mean of the degree k (Bunde
and Havlin, 1991).
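The percolation threshold of a random graph can be checked numerically. The sketch below is a simplified illustration in Python/networkx (the network size and retained fractions are assumptions): it builds an Erdős–Rényi graph with mean degree ⟨k⟩ = 4, so that p_c = 1/⟨k⟩ = 0.25, then measures the relative size of the giant component well above and well below that threshold.

```python
import random
import networkx as nx

random.seed(0)

# Erdos-Renyi random graph: N nodes, mean degree <k> ~= p * (N - 1).
N, k_mean = 2000, 4.0
G = nx.gnp_random_graph(N, k_mean / (N - 1), seed=0)

# Theoretical percolation threshold for a random graph: p_c = 1/<k>.
p_c = 1.0 / k_mean

def giant_fraction(graph, p_keep):
    """Keep each node with probability p_keep and return the size of
    the largest connected component relative to the original N."""
    kept = [n for n in graph if random.random() < p_keep]
    sub = graph.subgraph(kept)
    if sub.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(sub), key=len)
    return len(largest) / N

# Well above p_c the giant component spans a sizeable fraction of the
# network; well below p_c the graph collapses into small fragments.
above = giant_fraction(G, 0.8)   # p >> p_c = 0.25
below = giant_fraction(G, 0.05)  # p << p_c
print(above > 0.3, below < 0.05)
```

The sharp contrast between the two regimes is the "connected versus fragmented phase" behaviour discussed in the text.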
The second property that investigates dynamic evolution
is the fragmentation (i.e. number and size of the portions of
the network that become disconnected). The number and the
size of the sub-networks obtained after removing the ver-
tices and edges provide useful information. In the case of
a so-called giant component fragmentation, the network re-
tains a high level of global connectivity even after a large
amount of nodes have been removed, while in the case of to-
tal fragmentation, the network collapses into small isolated
portions. For this reason, “keeping track of the fragmenta-
tion evolution permits the determination of critical fractions
of removed components (i.e. fraction of component deletion
at which the network becomes disconnected), as well as the
determination of the effect that each removed component has
on network response” (Dueñas-Osorio et al., 2004).
2.2.2 Analogy between graph properties and risk
The proposed graph properties can be used to more thor-
oughly characterize systems of exposed elements. In fact, the
traditional conceptual skeleton to describe risk can still be
adopted within the framework of the proposed graph-based
approach. The properties calculated from a graph consist of a
new layer of information for some of those risk variables that
go beyond their traditional interpretations within the reduc-
tionist paradigm. In particular, they provide a more compre-
hensive characterization of the single nodes (deriving from
their relationships with other nodes) as well as of the sys-
tem as a whole. As such, from the risk variables presented
in Sect. 1, the hazard preserves its traditional definition as
an event that can impact such systems, or part(s) of it, with
certain intensities and associated probabilities of occurrence.
For the three other variables, namely, exposure, vulnerabil-
ity and resilience, below we propose and provide an innova-
tive and original discussion on their analogies with the graph
properties presented in the previous sub-section. The analo-
gies are summarized in Table 2.
Analogous to the traditional approach but at the same time
extending its concept, the value of each exposed element can
be estimated as the relative importance that is given to it by
the graph, which is measured by the network itself by means
of the connections that point to each node. In graph theory,
this relative importance among elements, based on standard-
ized values, can be investigated through the authority analy-
sis. A high authority value of a node indicates that there are
many other nodes (or otherwise some hubs) that provide ser-
vices (i.e. providers or suppliers) to that node. In other words,
the system privileges it compared with others according to
their connections with the provider nodes. For example, a
factory settled in an industrial district may receive more ser-
vices (e.g. electric power, roads for heavy vehicles, logistic
systems) than a factory located in the old quarter of a city; in
this case, the former is structurally privileged by the system
compared with the latter.
In the reductionist approach, vulnerability is the propensity
of an asset to be damaged because of a hazardous event. By
adopting a graph perspective, the vulnerability can be esti-
mated both for the single node as well as for the system as a whole.
In the first case, the vulnerability depends on the relation-
ship that the node has with the others. In particular, the close-
ness represents the likelihood of a node to be affected indi-
rectly by a hazard event due to the lack of services provided
by other nodes. A higher value of closeness, i.e. a shorter
path length from a node to every other node in the network,
means a higher probability of a node being indirectly impacted by a
hazard event. Conversely, a lower value of closeness,
i.e. a longer path length from a node to every other node in
the network, means a lower probability of being impacted.
In the second case, the vulnerability can be defined as
the propensity of the network to be split into isolated parts
due to a hazardous event. In that condition, an isolated part
is unable to provide and receive services, which can trans-
late into indirect losses. The system vulnerability, therefore,
can be evaluated by means of the following graph properties:
hubs, betweenness and degree-out distribution. The presence
of nodes with high hub values indicates a propensity of the
network to be indirectly affected more extensively by a haz-
ard event, since a large number of nodes are connected with
the hubs. A network that has nodes with high betweenness
values has a higher tendency to be fragmented because it has
a strong aptitude to generate isolated sub-networks. Finally,
the degree distribution, which expresses network connectiv-
ity of the whole system (i.e. the existence of paths leading to
pairs of vertices), has a strong influence on network vulner-
ability after a perturbation. The shape of the degree distribu-
tion determines the class of a network: heterogeneous graphs
(power-law distribution and scale-free network) are more re-
sistant to random failure, but they are also more vulnerable
to intentional attack (Schwarte et al., 2002). As emphasized
above, scale-free networks have few nodes linked to many
nodes (i.e. few hubs) and a large number of poorly connected
elements. In the case of random failure, there is a low proba-
bility of removing a hub, but if an intentional attack hits the
hub, the consequences for the network could be catastrophic.
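This asymmetry between random failure and intentional attack can be illustrated numerically. The sketch below (Python/networkx; the network size and removal count are arbitrary assumptions) compares the giant component of a scale-free Barabási–Albert graph after removing 50 random nodes versus the 50 highest-degree hubs.

```python
import random
import networkx as nx

random.seed(1)

# Scale-free network (Barabasi-Albert): few highly connected hubs.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)

def giant_after_removal(graph, nodes_to_remove):
    """Size of the largest connected component after deleting nodes."""
    H = graph.copy()
    H.remove_nodes_from(nodes_to_remove)
    if H.number_of_nodes() == 0:
        return 0
    return len(max(nx.connected_components(H), key=len))

n_remove = 50

# Random failure: remove 50 nodes chosen at random.
random_nodes = random.sample(list(G.nodes), n_remove)

# Intentional attack: remove the 50 highest-degree nodes (the hubs).
top = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:n_remove]
hub_nodes = [n for n, _ in top]

g_random = giant_after_removal(G, random_nodes)
g_attack = giant_after_removal(G, hub_nodes)

# The giant component survives random failure far better than a
# targeted attack on the hubs.
print(g_random > g_attack)
```

With a heavy-tailed degree distribution, a random removal almost never hits a hub, while the targeted removal deletes exactly the nodes that hold the network together.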
Resilience differentiates from vulnerability in terms of dy-
namic features of the system as a whole. The properties and
functions used to model vulnerability are static character-
istics that do not consider any time evolution or, using the
words of Sapountzaki (2007), “vulnerability is a state, while
resilience is a process”; in fact the definition of resilience im-
plies a time evolution of the characteristics of the whole sys-
tem. In addition, Lhomme et al. (2013) underline “the need
to move beyond reductionist approaches, trying, instead, to
understand the behaviour of a system as a whole”. These two
features, the dynamic aspect and whole system, make vulner-
ability different from resilience and further clarify the need
to develop an approach that is able to consider the dynamics
of the system as a whole.
In this context, the study of the percolation threshold (p_c)
can be used to explain the resilience of the network after a
perturbation. The p_c value distinguishes between the connectivity phase (above p_c) and the fragmented phase (below
p_c). In the connectivity phase, the network can lose nodes
without losing the capacity to cope with the perturbation as a
network, while in the fragmented phase, the network does not
Table 2. Analogy of risk variables with graph properties.
Risk variables Analogy with graph properties
Exposure The authority represents how the system privileges the nodes, conferring them more or less
importance compared with others, according to the connections established in the system.
Vulnerability The propensity of parts of the network to be isolated because of hazard events. The closeness of
a node is a measure of the single node vulnerability within the system, while degree distribution,
hub and betweenness are measures of vulnerability of the system as a whole.
Resilience The percolation threshold together with the network fragmentation analysis explain the re-
silience of the network after a perturbation.
actually exist anymore and the remaining nodes are unable to
cope with the disruption alone.
This critical behaviour is a common feature also observed
in disasters induced by natural hazards. In some cases, the
exposed elements withstand some damage and loss, but the
overall system maintains its structure. However, there are
events in which the amount of loss (affected nodes) is so rele-
vant that the system loses the overall network structure. In the
first case, the system has the capacity to cope independently
and tackle the event, while in the second case, the system is
unable to cope.
The dynamic responses are characterized by the network
fragmentation property, which describes the performance of
a network when its components are removed (Dueñas-Osorio
and Vemuru, 2009). For instance, the so-called giant compo-
nent fragmentation (the largest connected sub-network) and
the total fragmentation describe network connectivity and de-
termine the failure mechanism (Dueñas-Osorio et al., 2004).
Keeping track of fragmentation evolution makes it possi-
ble to determine both the critical fraction of components re-
moved (i.e. the smallest component deletion that disconnects
the network) and the effect that each component removed has
on the network response.
For these reasons, we consider percolation threshold and
network fragmentation to be good indicators of resilience,
also because they are able to show the emergent behaviour of
the whole system beyond just considering the single parts of
the network (e.g. node).
2.3 Hazard impact propagation within the graph
While the literature of the impact propagation or cascading
effects for critical infrastructures is large (e.g. Pant et al.,
2018; Trucco et al., 2012), applications on the risk quantifi-
cation of natural hazards including the cascading effects are
scarce. Besides the considerable amount of information that
can be obtained by analysing graph properties from the view-
point of natural hazard risk, the graph itself also provides
an optimal structure for propagating the impacts of a haz-
ard throughout an affected system. Indeed, the use of a graph
allows estimating, besides direct losses to elements directly
affected (such as elements within a flooded area), also indi-
rect losses to elements outside the affected area that rely on
services provided by directly hit elements, which may have
lost some capacity to provide those services as a result. The
propagation and quantification of impacts through a graph
allows understanding the risk mechanisms of the system and
identifying weaknesses that can translate into larger indirect
consequences. It also enables the possibility of quantitatively
estimating risk considering those indirect consequences.
Figure 3 depicts this process through a conceptual
flowchart. In order to propagate the impacts by means of
the graph and quantify indirect losses resulting from second-
order and cascading effects, the modelled graph must first be
integrated with hazard data. These data must include hazard
footprints that allow establishing the hazard intensity (e.g.
water depth) at the location of each element. The direct and
indirect impacts can then be computed according to the pro-
posed methodology, based on three levels of vulnerability:
Level I is the physical vulnerability of a directly affected
element in its traditional definition. The hazard intensity
is the input variable for computing the direct damage of
the element.
Level II is the vulnerability associated to the link be-
tween an affected element and its receivers. The direct
damage as obtained by vulnerability level I is the input for computing the loss of service provided by the
directly damaged element to the elements that receive it.
Level III is the vulnerability of the service-receiving el-
ement. The loss of service as obtained by vulnerability
level II is the input for estimating the indirect loss of the
element that receives the service.
These vulnerabilities can be represented by vulnerability
functions analogous to the ones adopted within the traditional
risk assessment approach and can be different for each cate-
gory of element and service.
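As a minimal sketch of the three-level chain, the toy code below (Python/networkx) propagates a hazard intensity through assumed, uncalibrated vulnerability functions; the linear curves, coefficients and node names are placeholders for illustration, not the functions used in the pilot study.

```python
import networkx as nx

# Directed service graph: edges point from provider to receiver.
G = nx.DiGraph()
G.add_edge("bridge", "hospital", service="transportation")
G.add_edge("hospital", "block_6", service="healthcare")

# Level I (hypothetical): hazard intensity -> direct damage of the
# directly affected element, as a fraction in [0, 1].
def level1_direct_damage(water_depth_m):
    return min(1.0, water_depth_m / 2.0)  # assumed linear curve

# Level II (hypothetical): direct damage -> loss of service on the
# provider -> receiver link.
def level2_service_loss(direct_damage):
    return direct_damage  # assume service degrades with damage

# Level III (hypothetical): loss of service -> indirect loss of the
# receiving element.
def level3_indirect_loss(service_loss):
    return 0.8 * service_loss  # assumed sensitivity to lost service

# The bridge sits in the flooded area with 1 m of water.
losses = {"bridge": level1_direct_damage(1.0)}

# Propagate downstream along the service links (cascading effect).
for provider, receiver in nx.bfs_edges(G, "bridge"):
    service_loss = level2_service_loss(losses[provider])
    losses[receiver] = level3_indirect_loss(service_loss)

print(losses)  # bridge damaged directly; hospital and block hit indirectly
```

Each edge traversal applies the level-II and level-III functions in sequence, so indirect losses attenuate (or amplify, depending on the assumed curves) as the cascade moves away from the directly hit element.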
By computing impacts for hazard scenarios with different
probabilities of occurrence, and adopting the three levels of
vulnerability functions, a quantitative estimate of risk can be
obtained. An illustrative example of propagation of impacts
Figure 3. Risk framework.
is presented in Sect. 2.4, and more detailed information on
the propagation of impacts through the graph and the estima-
tion of impact is presented in the pilot study in Sect. 3.3.
2.4 Illustrative example
In order to illustrate the application of the graph-based ap-
proach in the characterization of a system exposed to nat-
ural hazards, in Fig. 4 we present an example of a hypo-
thetical city comprising various elements of different types
which provide services. Specifically, our example includes
20 elements: nine blocks of residential buildings, one hos-
pital, two fire stations, three schools, three fuel stations and
two bridges. Blocks are intended to represent the population,
which receives services from the other nodes. Bridges pro-
vide a transportation service, fire stations provide a recovery
service, hospitals provide a healthcare service, schools pro-
vide an education service and fuel stations provide a power
service. Figure 4a shows how the elements are connected in
a graph. The authority and hub values have been computed
using the R igraph package; the full library of functions adopted is
available in Nepusz and Csardi (2018).
In Fig. 4b, the size of the elements is proportional to their
authority values. Blocks 6, 18, 19 and 20 have higher author-
ity values than the other elements of this typology because
they receive a service from the hospital (node 16), which is
an important hub. Fire Station 5 and School 9 have high val-
Figure 4. (a) Map of the various elements of a hypothetical municipality in a flood-prone area. (b) Same as (a), with node sizes proportional
to authority values. (c) Same as (a), with node sizes proportional to hub values. (d) Same as (a), with flood area and nodes directly impacted
highlighted with red cross. (e) Same as (a), with also the nodes indirectly impacted highlighted with black cross.
ues of authority because they are serviced by Bridge 3, which
is also an important hub. The importance of a node in graph
theory is closely connected with the concept of topological
centrality. Referring to the illustrative example, Block 6 has
the highest authority value; if a flood hit it, it would there-
fore affect the most central node of the network, or in other
words, the node which is implicitly more privileged by the
In Fig. 4c, the major hubs are the elements with the largest
diameters: Hospital 16, Bridge 3, School 7 and Fuel Station
15. Bridge 3 is an important hub, since it provides its service
to Block 6, which has the highest authority value, and to Fire
Station 5 and School 9. Fuel Station 15 and School 7 are also
important hubs because they provide services to Block 6. The
elements in the south-eastern part of the network inherited a
relative importance (i.e. authority) from the most important
hub in that area (i.e. Hospital 16). Bridge 3 is an exception to
this aspect; in fact, this bridge connects the southern part (i.e.
Block 6) with the northern part of the city (i.e. Fire Station 5
and School 9). A flood event in the south-eastern part of the
network would likely generate a major indirect impact on the
whole system compared to other parts of the network.
We assume that these elements are located in a flood-
prone area and that Bridge 3 and Block 6 are directly flooded
(Fig. 4d). Since those elements are directly damaged, it is
possible to follow the cascading effects following the direc-
tion of the service within the graph from providers to re-
ceivers. In this artificial example, the transportation service
provided by the bridge is lost, and this has an indirect
consequence to Hospital 16, which is not directly damaged
but cannot provide healthcare services, since people cannot
reach the hospital anymore. The graph allows extending the
impact not only to the elements directly hit by the hazard
but also to all elements that receive services from elements
directly or indirectly affected by the hazard.
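Following the service direction from providers to receivers, the set of indirectly impacted elements can be obtained as the descendants of the directly hit nodes. The sketch below (Python/networkx) encodes a simplified subset of the links of the hypothetical city; the edge list is an assumption for illustration, not the exact graph of Fig. 4.

```python
import networkx as nx

# Directed graph of the hypothetical city: edges go from service
# providers to receivers (edge list assumed for illustration).
G = nx.DiGraph()
G.add_edges_from([
    ("bridge_3", "block_6"), ("bridge_3", "fire_station_5"),
    ("bridge_3", "school_9"), ("bridge_3", "hospital_16"),
    ("hospital_16", "block_6"), ("fuel_station_15", "block_6"),
    ("school_7", "block_6"),
])

directly_flooded = {"bridge_3", "block_6"}

# Every node reachable from a flooded node along the service
# direction is indirectly impacted (cascading effect).
indirect = set()
for node in directly_flooded:
    indirect |= nx.descendants(G, node)
indirect -= directly_flooded

print(sorted(indirect))
```

Here the hospital appears among the indirectly impacted nodes because it receives the transportation service from the flooded bridge, mirroring the cascade described in the text.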
Note that similar analyses could be carried out for other
properties of the graph (e.g. betweenness) in order to obtain
additional insight into the properties of the system, which
could be useful for the purpose of a risk assessment. For the
sake of brevity, such analyses have not been included here.
A complete study of all relevant graph properties discussed
above and a more realistic hazard scenario are presented in
the following section.
3 Pilot study: Mexico City
Floods, landslides, subsidence, volcanism and earthquakes
make Mexico City one of the most hazard-prone cities in the
world. Mexico is one of the most seismically active regions on earth (Santos-Reyes et al., 2014); floods and storms
are recorded in indigenous documents, and the Popocatépetl
volcano has erupted intermittently for at least 500 000 years.
At present, people settle in hazardous areas such as scarps,
steep slopes, ravines and next to stream channels.
The Mexico City metropolitan area (MCMA) is one of the
largest urban agglomerations in the world (Campillo et al.,
2011). This pilot study focuses on Mexico City (also called
the federal district – MCFD), where approximately 8.8 mil-
lion people live. The choice of MCFD as a pilot case allows
showing the importance of modelling connections and inter-
dependencies in a complex urban environment.
Tellman et al. (2018) show how the risk in Mexico City’s
history has become interconnected and reinforced. In fact,
as cities expand spatially and become more interconnected,
the risk becomes endogenous. Urbanization increases the de-
mand for water and land. The urbanized areas inhibit aquifer
recharge, and the increase in water demand exacerbates sub-
sidence due to an increase in pumping activity out of the
aquifer. Subsidence alters the slope of drainage pipes, de-
creasing the efficiency of built infrastructure and the capacity
of the system to both remove water from the basin in floods
as well as deliver drinking water to consumers. This exacer-
bates both water scarcity and flood risk.
3.1 Construction of the graph
Given the very large scale of the city, certain simplifica-
tions and hypotheses had to be assumed for conceptualiz-
ing the network. Furthermore, the choice of element typolo-
gies, the connections between them and the definition of rules
were also made considering the availability of data provided
by the UNAM Institute of Engineering for this study case.
While these data are only partially representative of the en-
tirety of the exposed assets in MCFD (with the exclusion of
three districts for which the data were not available: Álvaro
Obregón, Milpa Alta and Xochimilco), we consider them suitable for the specific purpose of this work, which is to illus-
trate the proposed approach and highlight its potential. Note
that the boundaries of the system are defined by the selection
of typologies, connections and the studied geographical area.
These simplifications and hypotheses of real open-ended sys-
tems, while necessary to enable the computational analysis,
should be recognized and taken into account when evaluating
the results of the analysis (Clark-Ginsberg et al., 2018).
Among the possible exposed elements, we selected six ty-
pologies that are representative of both the emergency man-
agement phase (e.g. fire stations) and long-term impacts (e.g.
schools). The typologies of elements considered in this pi-
lot case, which provide and/or receive services reciprocally,
are fire stations, fuel stations, hospitals, schools, blocks and
crossroads. The fire station represents the node type from
which the recovery service is provided to all the other ele-
ments present in the area (except crossroads). The fuel sta-
tion represents the node type that provides the power service,
the hospital provides the healthcare service and the school
provides the education service; the elements with these three
typologies deliver their respective services to all the blocks.
The block is the node type defined as the proxy for the pop-
ulation, which receives services from all the other consid-
ered elements. The simulation uses blocks instead of popula-
tion, as this enables a reduction in computational demand by
lowering the number of nodes from 8 million to a few tens
of thousands. Finally, the analysis considers 17 crossroads,
which provide the transportation service to all the other ele-
ments. The crossroads were identified by selecting the major
intersections between the main highways present in the road
network of MCFD. All the typologies, numbers of elements
and the connections between them are presented in the con-
ceptual graph in Table 3, and Fig. 5 presents the GIS repre-
sentation of the providers and the services that are provided
between them.
Table 3. List of nodes adopted in the network conceptualization.
The link between two elements of two different typologies
was set up based on the geographical proximity rule: each
specific service is received by the nearest provider (e.g. a
block receives the education service from the closest school,
and the school receives the recovery service from the clos-
est fire station). This simple assumption is due to the lack of
data available at this stage; in case of more data, it will be
possible to define this relation more accurately (e.g. school
offers education service to its zoning) but without changing
the general validity of the method. Note that this hypothesis
does not consider the redundancy that might exist between
some services, which would necessarily influence the prop-
agation of cascade effects. The service provided by the road
network was modelled while considering that each element
in the area receives a transportation service from the closest
crossroad among the 17 that were identified. This approach
does not aim to be representative of the complete behaviour
of the road network system, particularly the paths between
nodes or possible alternative paths, but it does allow consid-
ering the transportation network in the analysis in a simpli-
fied manner.
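The geographical proximity rule can be sketched in a few lines. The coordinates and names below are invented for illustration; in the pilot study the rule is applied to the real element locations, once per service type.

```python
import math

# Hypothetical coordinates (assumed, for illustration only).
schools = {"school_A": (0.0, 0.0), "school_B": (10.0, 10.0)}
blocks = {"block_1": (1.0, 2.0), "block_2": (9.0, 8.0)}

def nearest_provider(point, providers):
    """Geographical proximity rule: each element receives a given
    service from the closest provider of that service."""
    return min(providers,
               key=lambda p: math.dist(point, providers[p]))

# Build the provider -> receiver links of the education service.
links = [(nearest_provider(xy, schools), block)
         for block, xy in blocks.items()]
print(links)  # [('school_A', 'block_1'), ('school_B', 'block_2')]
```

Repeating this assignment for every service typology yields the full edge list from which the graph is built.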
The list of nodes, which contains all the elements of all
typologies, together with the list of links between them, both
obtained according to the hypothesis presented above, are
the inputs for building the mathematical graph. As for the
illustrative example, the graph was obtained using the open-
source igraph package for network analysis in the R environment.
3.2 Analysis of the graph properties
The following paragraphs present the results from the graph
analysis and show how the properties of the single elements
and the whole system are assessed, from both provider (or
supplier) and receiver (or consumer) perspectives.
3.2.1 Vulnerability of the single elements
As described in Sect. 2.2.2, the systemic vulnerability of a
node is the aptitude to remain isolated from the whole sys-
tem when the graph is perturbed. The tendency to observe
isolated parts is analysed here by the closeness property,
which measures the mean distance from a vertex to other ver-
tices; Fig. 6 shows the geographical distribution of the
closeness-in values of the blocks.
In accordance with the model conceptualization, the
blocks increase their distance to the network if their providers
are not connected to each other. For example, if a school and
a hospital provide services to a block, the closeness-in value
of this block will be higher if the school and the hospital re-
ceive the transportation service from the same crossroad and this
crossroad also serves the block. In this specific case, where
the nodes are more interconnected, the distance between the
block node and the whole network is lower, and by definition
its closeness-in value is higher.
Figure 6 shows that the region with the majority of blocks
with the highest closeness-in values is in the
south-eastern part of MCFD. This area is the part of the
city that is surrounded by few providers, which are the ma-
jor hubs, as illustrated in the next section and in Fig. 9.
The presence of few providers forces them to exchange ser-
vices between themselves and to serve all the receivers of the
area, meaning that the blocks have a lower distance to the
providers and can therefore be more vulnerable.
3.2.2 Vulnerability of the whole system
The analysis in this section shows the structural properties
of the whole network (i.e. network topology, arrangement of
a network) and investigates how the network, as a unique
entity, is vulnerable to a potential external perturbation (e.g.
hazardous event).
As mentioned in Sect. 2.2, there are two types of networks, heterogeneous or homogeneous, depending on whether the
degree distribution is heavy tailed or not. Het-
erogeneous networks have few hubs that appear as outliers
in the degree distribution; this feature can represent a po-
tential weakness of a system because if one of the hubs is
affected by an event, it will propagate the impacts more ex-
tensively than other nodes. Note that this is not an indication
of risk per se, which is a function of not only the exposed sys-
Figure 5. Map of nodes and services provided among them. For readability, blocks are not included. (© OpenStreetMap contributors 2019.
Distributed under a Creative Commons BY-SA License.)
tem but also the hazard. However, it may be used to evaluate
the vulnerability of the system as a whole, similarly to how
single-site vulnerability analyses assess the potential impact
of an event regardless of its actual likelihood.
There is an objective way to estimate if the degree dis-
tribution is heavy tailed by means of its statistical proper-
ties: a distribution is defined as heavy tailed if its tail is not
bounded by the exponential distribution. In order to verify
if the degree distribution of a network is heavy tailed, one
can fit the generalized Pareto distribution (GPD) to the
observations and analyse the shape parameter (Beirlant et al.,
1999; Scarrott and Macdonald, 2012). If the shape param-
eter of the GPD is equal to zero, the tail of distribution is
exponential. Instead, if the shape parameter is greater than
zero, the tail of the distribution is fatter than the exponen-
tial one, and therefore the distribution is heavy tailed. How-
ever, in order to fit the GPD to the data, it is first necessary
to select a threshold value and consider only the exceeding
values. There are different techniques for selecting the right
threshold value (Coles, 2001). Figure 7 shows the values of
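The threshold-sensitivity analysis of the shape parameter can be sketched as follows. The code uses Python/SciPy on a synthetic Pareto sample standing in for the degree-out data (which are not reproduced here); a consistently positive fitted shape across percentile thresholds indicates a heavy tail.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Synthetic heavy-tailed sample (Pareto with tail index 2), standing
# in for the empirical degree-out data; its true GPD shape is 0.5.
sample = rng.pareto(2.0, size=5000) + 1.0

shapes = []
for q in (0.5, 0.7, 0.9):
    u = np.quantile(sample, q)            # threshold at a data percentile
    exceedances = sample[sample > u] - u  # peaks over the threshold
    # Fit the GPD to the exceedances (location fixed at 0); a shape
    # parameter > 0 means a tail fatter than the exponential one.
    shape, _, _ = genpareto.fit(exceedances, floc=0.0)
    shapes.append(shape)

print([round(s, 2) for s in shapes])  # all positive: heavy tailed
```

Scanning the shape estimate over several thresholds, as in Fig. 7, guards against conclusions that depend on a single arbitrary threshold choice.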
the shape parameter (sp) for the degree-out distribution of
the Mexico City network for different values of threshold in
terms of data percentile. The shape parameter ε is positive
for any value below 0.8; over that value, the degree distribu-
Figure 6. Geographical distribution of the block closeness-in values. (© OpenStreetMap contributors 2019. Distributed under a Creative
Commons BY-SA License.)
tion is meaningless and does not represent the whole network
anymore but only the extreme values that still remain
above the threshold. For this reason, we can assert that the
degree-out distribution is heavy tailed. This confirms that the
network built for Mexico City is strongly non-homogeneous,
with few hubs (providers) that are linked to many elements.
According to these results, if a hazard event hit one of these
few nodes with a high hub value, the consequences for the
network could be catastrophic due to the central role of these hubs.
3.2.3 Cascade effects
The analysis of the topological structure of the providers in
the network shows their relative relevance to the system, ac-
cording to their connections with the receivers. In particu-
lar, we propose a comparison between providers through the
analysis of two properties: hub analysis of all nodes that pro-
vide service to the population and betweenness analysis of
the crossroads.
Figure 7. Parameter estimation (sp) against thresholds for degree-
out data (SD: standard deviation).
Providers: role of hubs
The importance of a node in directed graphs, within the pur-
pose of providers that deliver a service, is closely connected
with the concept of topological centrality: the capacity of a
node to influence, or be influenced by, other nodes by virtue
of its connectivity. In graph theory, the influence of a node
in a network can be provided by the eigenvector centrality,
of which the hub and authority measures are a natural gen-
eralization (Koenig and Battiston, 2009). A node with a high
hub value points to many nodes, while a node with a high
authority value is linked by many different hubs.
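One standard way to compute such hub and authority scores is the HITS power iteration. The sketch below runs it on a toy provider–receiver graph (node names are invented; this is an illustration, not the authors' implementation), normalising so that the top score is 1, as in the results reported later.

```python
def hits(edges, n_iter=100):
    """Simple HITS power iteration on a directed edge list.

    Returns (hub, authority) dicts, each normalised so its maximum is 1.
    """
    nodes = {u for e in edges for u in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(n_iter):
        # authority: sum of the hub scores of the nodes pointing at me
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        # hub: sum of the authority scores of the nodes I point at
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        # normalise to keep the iteration bounded
        amax = max(auth.values()) or 1.0
        hmax = max(hub.values()) or 1.0
        auth = {n: a / amax for n, a in auth.items()}
        hub = {n: h / hmax for n, h in hub.items()}
    return hub, auth

# Toy graph: provider p1 serves three blocks, p2 serves one,
# so p1 should come out as the dominant hub.
edges = [("p1", "b1"), ("p1", "b2"), ("p1", "b3"), ("p2", "b1")]
hub, auth = hits(edges)
print(hub["p1"], hub["p2"])   # p1 has hub value 1.0
```

The block served by both providers (`b1`) ends up with the highest authority, matching the intuition that a node linked by many hubs is a strong authority.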
The hub analysis considers all the elements in the graph
that provide services; for this reason, blocks are excluded
from this analysis. Figure 8 reveals outliers that are useful for
identifying the elements in the graph that, in case of poten-
tial failure, could have a large impact on the network due to,
for instance, their role as major hubs. In particular, one hospital has a hub value equal to 1, which by definition is the highest, immediately followed by a crossroad with a value around 0.85, while some schools, fuel stations and fire stations have hub values around 0.5. The ranking of elements according to their hub values can be very useful for prioritizing intervention actions and maximizing the mitigation
effects for the whole network. If an external perturbation hit
an element with very high hub value, the cascading effects
on the network would be more relevant due to its central role
in the system. On the other hand, a mitigation measure ap-
plied to the elements with higher hub values would produce
a higher benefit in the whole network.
The hub outliers in Fig. 8 are associated with the elements
of the network that are geographically located mainly in the
south-eastern part of Mexico City; as shown in Fig. 9, the
biggest icons are in this part of the city. Based on the avail-
able data, the density of elements that provide services in the south-eastern part is much lower compared to the other areas
of the city; as such, the few providers existing in this part
become important hubs for the whole system.
This part of the city has few providers that are central hubs
of the city and blocks with very high closeness. Together,
these two aspects underline the need for additional providers
in this area. This would reduce the respective number of re-
ceivers, decreasing the hub values of providers and reducing
the number of blocks depending on each of them.
Crossroads: betweenness analysis
As described in Sect. 2.2.2, a network that has nodes with
high betweenness values has a higher tendency to be frag-
mented because it has a strong aptitude for generating iso-
lated sub-networks. In this case study, transportation is the
only service that allows the analysis of the betweenness val-
ues of the nodes. In fact, vehicles (e.g. fire trucks, family
cars) need to pass through crossroads to go from point A to
point B (e.g. fire trucks going from a fire station to an af-
fected location; a family car going from a block to a school).
The betweenness analysis presented here shows the number
of shortest paths between pairs of nodes that pass through the
selected crossroads. As mentioned previously, the few cross-
roads considered in this pilot study are not intended to repro-
duce the very complex road network of Mexico City but to
present some highlights of the betweenness property.
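For a small graph like the one used here, the betweenness of a crossroad can be computed by enumerating all shortest paths between node pairs. The brute-force sketch below (toy adjacency map, invented names) is exponential in the worst case and is only suitable for tiny illustrative graphs; real road networks would require an efficient algorithm such as Brandes'.

```python
from collections import deque
from itertools import permutations

def shortest_paths(adj, s, t):
    """All shortest simple paths from s to t by breadth-first search."""
    best, paths = None, []
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                      # longer than the shortest: stop
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj.get(node, ()):
            if nxt not in path:        # avoid cycles
                queue.append(path + [nxt])
    return paths

def betweenness(adj, v):
    """Fraction of shortest s-t paths passing through v, summed over pairs."""
    score = 0.0
    for s, t in permutations(adj, 2):
        if v in (s, t):
            continue
        paths = shortest_paths(adj, s, t)
        if paths:
            score += sum(v in p for p in paths) / len(paths)
    return score

# Toy road sketch: a single crossroad "x" joining three areas.
adj = {"a": ["x"], "b": ["x"], "x": ["a", "b", "c"], "c": ["x"]}
print(betweenness(adj, "x"))   # every a-b, a-c, b-c route crosses x
```

In this star-shaped example every route between the peripheral nodes must cross `x`, so its betweenness is maximal; removing it (e.g. by flooding) would fragment the toy network into isolated sub-networks, exactly the tendency described above.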
Figure 10 shows the crossroads adopted in the analysis,
where the dimension of the icons is proportional to the value
of betweenness. It can be observed that the crossroads in the
ring road around the city centre have higher values of be-
tweenness, which is due to the fact that they connect the very
large suburb areas and the city centre. In particular, the cross-
roads in the south have the highest values because the number
of nodes in the south is greater than that in the north of the
city. Instead, the crossroads in the city centre connect mostly
the nodes that are inside the ring road, and for this reason
they have lower values of betweenness.
The betweenness value shows which crossroad is more
central, or more important and influent in the network, based
on shortest paths between the nodes. For example, in case a
crossroad is flooded, it will reduce or completely interrupt its
transportation service. A crossroad with higher betweenness
will influence a higher number of nodes, and as such, if its
functionality is affected, this will have a higher impact on the
network compared to a crossroad with lower betweenness.
3.2.4 Exposure: which elements have higher centrality
in the system?
Regarding the analysis from the receivers’ point of view, we
explore how the system privileges some receivers compared
with others according to their connections with the providers.
In particular, we propose a comparison between receivers
through the authority analysis.
Figure 8. Boxplots of hub values for different typologies of service providers.
Figure 11 shows that the authority of the nodes tends to
be clustered around certain values, presenting discontinuities
between them. This results from the fact that all blocks re-
ceive exactly five services from five providers (i.e. degree-in
is 5), and as such, they have the same values of authority
when they receive services from the same provider nodes.
Nodes with similar authority values should therefore be geo-
graphically located close to one another. This is confirmed
in Fig. 12, where the blocks are represented in space and
coloured according to their authority values.
Figure 12 shows a clear pattern from low values in the
north-west to higher values in the south-eastern part of
MCFD. The blocks with higher authority values are located
in the part of the city that is surrounded by the providers with
highest hub values, as illustrated in Fig. 9. In contrast, the
blocks in the city centre and in the north-west have the low-
est values of authority. In fact, this part of the city has the
highest density of providers, which decreases the number of
receivers for each provider and, consequently, their hub val-
ues. Note that this aspect likely results from the assumption
of not considering redundancy, meaning that each node can
only receive a certain service from its nearest provider. Oth-
erwise, if redundancy were considered, the blocks in the city
centre would receive the same service from many different
providers due to the higher density of such nodes.
According to these results, if a hazardous event hits the
blocks in the south-eastern part of the city, this will impact
the whole system more heavily because there will be more re-
quests to the same few hubs. Such hubs, which are potentially
more overburdened in an ordinary situation due to the high
number of services they provide, can put a considerable part
of the network in crisis after an external perturbation. The
strong correlation between hubs and authority explains the
results described above. However, it is necessary to under-
line that these outputs also reflect the assumption of the rules
of proximity adopted in this model, where the network has
no redundancy by construction. The redundancy can change
the values of hub and authority of the nodes and therefore in-
fluence the magnitude of cascade impacts that are presented
in the next section.
3.3 Flood impact propagation within the graph
In this section, we present a preliminary analysis of a flood
scenario in the case of Mexico City according to the proposed
graph-based approach. The aim is to show the potential of the
approach to highlight the impacts of a hazard over the whole
system, including indirect consequences to elements outside
the flooded area, based on a graph built for this specific purpose.
The adopted hazard scenario is based on the development
of a simplified model that explicitly integrates the drainage
system and the surface runoff for the estimation of flood area
extension for different return periods, under the condition of
possible failure of the pumping system in the drainage sys-
tem (Arosio et al., 2018). Note that a detailed hazard analy-
sis is not the main goal of this article; therefore, the adopted
flood modelling approach does not intend to be as detailed
as possible but instead to represent an adequate compromise
between accuracy and simplicity.
Figure 9. Map of providers. Icon dimensions are proportional to the hub values. (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
The hydrological and hydraulic simulations are based on the EPA’s Stormwater Man-
agement Model (SWMM; Rossman, 2015) and implemented
on the primary deep drainage system (almost 200 km of net-
work, 14 main channels and 108 manholes). As for the rain-
fall, patterns associated with different return periods were
obtained through the uniform intensity duration frequency
(IDF) curve for the entire MCFD (Amaro, 2005). In particu-
lar, Chicago hyetographs with a duration of 6 h (Artina et al.,
1997) and an intensity peak at 2.1 h were constructed starting
from the IDF curve. For each return period, the flooded ar-
eas are computed based on the volume spilled out of each of
the main manholes of the drainage system. For each drainage
catchment, assumed hydraulically independent from the oth-
ers, a water depth–area relationship extracted from the digital
terrain model (DTM) is used to compute the flood extension
and depth. Figure 13a shows the flooded areas for a return
period of 100 years. The majority of water depth values are
between 0 and 1 m (lighter blues), and only a few raster cells
(darker blue) have higher values that reach up to 9.83 m in
some low-lying areas.
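The water depth–area relationship described above can be sketched as a level-pool fill: the spilled volume is distributed over the DTM cells of a catchment from the lowest elevation upwards until the volume is exhausted. The code below is a minimal illustration with synthetic numbers and a uniform cell area, not the authors' actual procedure.

```python
def flood_level(elevations, volume, cell_area):
    """Water surface level reached when `volume` fills the lowest cells."""
    z = sorted(elevations)
    filled = 0.0                        # volume stored when level reaches z[k]
    for k in range(1, len(z)):
        # raising the level from z[k-1] to z[k] wets k cells
        step = (z[k] - z[k - 1]) * cell_area * k
        if filled + step >= volume:
            return z[k - 1] + (volume - filled) / (cell_area * k)
        filled += step
    # all cells are wet: spread the remaining volume uniformly
    return z[-1] + (volume - filled) / (cell_area * len(z))

def depths(elevations, volume, cell_area):
    """Water depth per cell, in the same order as the input elevations."""
    level = flood_level(elevations, volume, cell_area)
    return [max(0.0, level - z) for z in elevations]

# Synthetic 4-cell catchment with 100 m2 cells and 300 m3 spilled.
print(depths([10.0, 11.0, 12.0, 20.0], 300.0, 100.0))
```

The two lowest cells are flooded (2 m and 1 m of water), while the higher ones stay dry, and the wet depths times the cell area sum back to the spilled volume, i.e. mass is conserved.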
Figure 10. Map of crossroads. Icon dimensions are proportional to the betweenness values. (© OpenStreetMap contributors 2019. Distributed
under a Creative Commons BY-SA License.)
Some provider elements are located within the flood area,
as seen in Fig. 13a. These elements provide services to other
elements located both inside and outside flooded areas, as
shown in Fig. 13b. Even if some of these receiver elements
are not directly damaged, they can potentially experience in-
direct consequences due to the reduction or interruption of
services from the providers that are directly affected. Using
the hub analysis of the providers that are flooded, it is possible to identify the nodes that have a more central role and can generate a potentially larger cascade effect for this flood scenario.
Figure 14 shows the hub values of the 17 providers inside the flood area. By integrating the informa-
tion of the hazard scenario (i.e. flood area for a specific return
period) with the hub and authority analysis of the network,
it is possible to qualitatively assess that the red zone of the
city has a relatively higher risk compared with the rest of the
city. This zone is characterized by few providers with high
hub values, which serve many blocks that have high values
of authority as a result. This result shows the need for new
additional providers in the red zone around the flooded area
in order to reduce the flood impact. As a matter of fact, this
would reduce the number of receivers per provider, reducing
the hub values of flooded providers. Consequently, the number of affected blocks outside the flood footprint would be reduced as well.
For this pilot study, the estimation of the direct impact
of the nodes is obtained by adopting simplified binary vulnerability functions. According to this assumption, zero damage occurs in case of no flood, full damage occurs in
case of flood regardless of its intensity (vulnerability level
I), impacted nodes fully lose their capacity to provide ser-
vices (vulnerability level II) and receiver elements are fully
affected when even a single service is dismissed (vulnera-
bility level III).
Figure 11. Boxplot of authority values for different provider services.
Despite the availability of many vulnerability functions, for the purpose of this study we prefer to adopt such a simplified assumption, since it does not conceptually affect the correctness of the process. As a matter of fact, the focus here is on the mechanism of the propagation
of the impacts through the graph rather than the correct quan-
tification of them. Thus, the cascading effects are propagated
through the graph by accounting for the nodes indirectly im-
pacted, i.e. those that have lost at least one service from their
providers. By using the graph properties this task is straight-
forward. A new graph (G1) is generated by removing the
nodes directly impacted by the flood from the original graph
(G0). After that, the degree-in of each node in G1, represent-
ing the number of incoming services, is compared with the
corresponding degree-in in G0. All the nodes with a reduc-
tion of degree-in are removed, and a new graph G2 is gen-
erated. This process is repeated until there are no more af-
fected nodes in the graph and we obtain the final graph with
the maximum impact extension that can be compared with
the original graph.
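The iterative procedure just described (G0, G1, G2, … until no more nodes lose a service) can be sketched directly on a provider-to-receiver edge list. The code below is a minimal, self-contained interpretation of that loop under the paper's assumptions (no redundancy, binary vulnerability); node names are invented.

```python
def propagate(edges, flooded):
    """Propagate impacts through a provider->receiver graph (G0, G1, G2, ...).

    edges   : iterable of (provider, receiver) service links
    flooded : set of directly impacted nodes
    Returns the set of all impacted nodes (direct + indirect), under the rule
    that any node losing at least one incoming service is removed next.
    """
    edges = list(edges)
    nodes = {u for e in edges for u in e}
    # degree-in of the original graph G0
    degree_in_g0 = {n: sum(1 for _, v in edges if v == n) for n in nodes}
    removed = set(flooded)
    while True:
        alive = nodes - removed
        # degree-in of the current graph Gk (links whose both ends survive)
        degree_in = {n: sum(1 for u, v in edges
                            if v == n and u not in removed)
                     for n in alive}
        # nodes with a reduced degree-in are removed in the next step
        newly = {n for n in alive if degree_in[n] < degree_in_g0[n]}
        if not newly:
            return removed
        removed |= newly

# Chain example: flooding the pumping station cascades downstream.
edges = [("pump", "hospital"), ("hospital", "block")]
print(propagate(edges, {"pump"}))   # all three nodes end up impacted
```

Because removals only accumulate, the loop always terminates, and the final set corresponds to the maximum impact extension that the text compares against the original graph.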
Figure 15 shows the number of directly (blue) and indi-
rectly (red) impacted elements due to the flood with a 100-
year return period. The total number of elements affected
is about 31 000, with more than 4000 directly and almost
27 000 indirectly affected. These results, even acknowledg-
ing the relevance of the hypothesis adopted (i.e. no service
redundancy and binomial vulnerability function), show that
indirect damage represents a significant part of the total dam-
age. Furthermore, in Fig. 15 the hypothetical direct and indi-
rect impact curves are also plotted for illustrative purposes,
as they could result from computations for other return periods.
The adoption of the graph adds, to the traditional reductionist risk assessment, the opportunity to explore the loss not only in terms of elements but also in terms of services. In fact, comparing the original graph (G0) with the fi-
nal graph obtained after the impact propagation, the approach
allows also computing the services lost. Figure 16 shows the
number of services lost after the impact propagation sepa-
rated within the categories of the elements and between the
services lost due to the dismissal of providers (brown) and
receivers (green) nodes. In terms of nodes there is no dif-
ference between those affected because of loss of received
service and those affected because of loss of demanded ser-
vice. Instead, in terms of services (i.e. links) there is a differ-
ence between those dismissed because of loss of a provider
and those dismissed because of loss of a receiver. This differ-
ence can be important in evaluating the relative importance
of these two different cases.
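The distinction drawn above between the two kinds of dismissed links can be sketched as a simple classification over the edge list: each lost service link is attributed to its provider side if the provider was removed, and to its receiver side otherwise. This is an illustrative reading of the text, with invented node names.

```python
def lost_services(edges, removed):
    """Classify dismissed service links by which endpoint was lost.

    A link is counted once: as a provider-side loss if its provider was
    removed, otherwise as a receiver-side loss.
    """
    provider_side = [(u, v) for u, v in edges if u in removed]
    receiver_side = [(u, v) for u, v in edges
                     if u not in removed and v in removed]
    return provider_side, receiver_side

# Toy example: the pump, the hospital and the block are all impacted,
# while the school survives but loses its receiver.
edges = [("pump", "hospital"), ("hospital", "block"), ("school", "block")]
p, r = lost_services(edges, {"pump", "hospital", "block"})
print(len(p), len(r))   # 2 provider-side losses, 1 receiver-side loss
```

Summing the two lists per element category would reproduce the kind of breakdown shown in the services-lost figure.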
We acknowledge that these results are affected by the two
important assumptions highlighted above and also by the fact that the services are provided only by the elements in-
side the MCFD (as elements outside this area are not con-
sidered). Changing these assumptions could result in differ-
ent cascading impacts. Regardless, the framework illustrated
here shows the potential to quantitatively assess indirect
impacts, which can subsequently be integrated into collec-
tive risk assessments.
Figure 12. Geographical distribution of the block authority value. (© OpenStreetMap contributors 2019. Distributed under a Creative Com-
mons BY-SA License.)
4 Discussion and final considerations
In this paper we looked at the problem of risk assessment
of natural hazards in a holistic perspective, focussing on the
“system” as a whole. We used system as a general term to
identify the set of the different entities, assets and parts of a
mechanism connected to each other in order to operate as, for
instance, an organism, an organization or a city. Most of such
systems are complex because of the high number of elements
and the large variety of connections linking them. Nevertheless, our society is structured around these complex systems, which are ubiquitous. How can the risk of such com-
plex systems be assessed? We believe that a reductionist ap-
proach that separates the parts of a system, computes the risk
(losses, impacts, etc.) for each of them and then sums them
up to come up with a total estimate of risk is not adequate.
Most of the research on natural hazards and their risk has implicitly adopted the reductionist approach (i.e. “split the problem into small parts and solve it”). However, we also mentioned emerging literature that adopts a different, holistic approach (“keep the system as a whole”).
How can the system be represented as a single, intact and
entire entity? And how can all the connections of its parts be
represented? We believe, as other authors do, that the best ap-
proximation for representing a complex system is the graph.
Many authors have already used the graph to model systems
already organized as networks by construction (e.g. electric
power network) and assess the risk of natural hazards in such a manner.
Figure 13. (a) Flooded area for T=100 and flooded providers; (b) blocks connected to the flooded providers. (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
Figure 14. Hub and authority values of flooded nodes. (© OpenStreetMap contributors 2019. Distributed under a Creative Commons BY-SA License.)
Fewer authors have used the graph to model sys-
tems not immediately and manifestly depicted as physical
networks and proceed in this manner to model the risk. Once
the effort to “translate” a system with all its components into
a graph is made, there are several advantages and benefits.
Figure 15. The full-coloured bars report the computed direct and indirect impacted elements at T=100 years; shaded bars conceptually represent the impacts for other return periods to visualize the complete risk curve.
First of all, there is a mature theory of mathematics, the
graph theory, that already studies the properties of a graph.
Are these graph properties telling us something useful for
assessing the risk of natural hazards affecting these complex
systems? We showed that some of the graph properties can
disclose some relevant characteristics of the system related
to the risk assessment. What is the vulnerability and expo-
sure of the system? We proposed new analogies between
some graph properties such as authority, hub, betweenness
and degree-out values and the “systemic” exposure and vul-
nerability. The adoption of these analogies is supported by
the recent work published by Clark-Ginsberg et al. (2018):
despite having a different scope, they also use certain graph
Figure 16. Services impacted at T=100 years.
properties to assess the hazards of the companies operating in the case study and promote a network representation of
the risk. In Sects. 2.2 and 3.2 we highlight the importance,
before quantifying the risk, of looking at the single risk com-
ponents through the systemic lens provided by the graph prop-
erties. This information could support more informed DRR
decision-making by strategically suggesting how to prioritize
intervention in order to minimize exposure and vulnerability
from a system point of view.
A second advantage is that the graph can be used as a tool
to propagate the impacts throughout the system from wher-
ever the hazard hit it, including indirect or cascading effects.
The links between nodes allow passing from the direct physi-
cal damage to broader economic and social indirect impacts.
The indirect impact suffered by a certain node may be de-
fined as a function of two factors: (1) the direct damage sus-
tained by one or more of its parent nodes (i.e. traditional im-
pact) and (2) the loss of service the latter provide to the for-
mer (i.e. vulnerability function). The integration of indirect
impact quantification within the graph-based framework has
been addressed in the pilot study using a simplified binary
vulnerability function.
Despite the two advantages in adopting this system’s per-
spective in risk assessment, Clark-Ginsberg et al. (2018)
highlight that there are “questions about the validity of
such assessment” regarding the ontological foundations of
networked risk, and the non-linearity and emergent phenomena that characterize such systems. In fact, the emergence
of the risk system demonstrates that the risk will never be
completely knowable, and for this reason the “unknown un-
knowns are an inseparable part of a risk network”; in fact, the boundary definition of open systems is by nature artificial.
The application to the case of urban flooding in Mexico City is a first attempt to demonstrate the feasibility of the proposed approach, and it is also the first example in the literature that tries to quantitatively analyse the propagation of
impact into a network of individual elements that do not
explicitly constitute a network. In this study, the complex-
ity of Mexico City is depicted by modelling certain selected
typologies of elements of the urban system and by assum-
ing simplified rules of connection between them. Further-
more, the system complexity acknowledged in this study is
restricted to the elements inside the MCFD and neglects any
potential contributions from outside elements. The definition
of a geographical boundary condition, which is a straight-
forward assumption in the traditional reductionist approaches,
can be controversial in the holistic approaches that aim to
model the emergent characteristics of open-ended systems.
However, the flexibility of our approach allows for a graph to
be designed with any intended level of detail, depending on
the purpose of each specific application and the availability
of data. For instance, if a more comprehensive characteriza-
tion of the road network were required, the graph could be
expanded to include additional elements other than the ma-
jor crossroads. Another example takes into consideration the
rules of connections adopted in this study, which do not al-
low for redundancy, as each node is considered to receive
its services from the nearest provider only. A more detailed
graph could include, for example, influencing areas for each
service, which would allow considering multiple providers
for some of them, provided that the required data were avail-
able. Adopting different rules (e.g. a provider could deliver
its service to as many elements as inside a defined distance)
would allow a degree of redundancy of the network, which
could significantly change the impact of a hazardous event.
We adopted a simple flood scenario to illustrate how some of
the measures of a graph can be used in the context of natu-
ral hazard risk assessment. However, within our framework,
additional potentially relevant information can be obtained.
For example, here we presented the results of the structural
analysis of the graph without looking into functional prop-
erties such as the percolation threshold, which characterizes
the resilience of a network and can therefore provide valu-
able information for practical applications. Another possible
extension consists of studying how the network evolves with
time, following external perturbations at different return periods.
Furthermore, the proposed approach could introduce a
common base for future research on both multi-hazard and
integrated risk assessment. Since the graph properties are
hazard independent, it is possible to integrate these properties
with the characteristics of the single node, such as the phys-
ical vulnerability of a building with respect to earthquakes
or flooding (adopted by reductionist approaches), and anal-
yse multiple hazards using the same graph. Besides this, the
use of this approach can be applied to physical as well as so-
cial or integrated risk. In the former case, the graph has only
physical elements (e.g. buildings); in the latter case the graph
has nodes that reflect also social aspects (e.g. population, age,
education, etc.).
Further research will aim to fully implement and integrate
the graph-based approach in quantitative risk assessments,
both at the scenario and probabilistic level. One of the challenges that will need to be addressed is related to data re-
quirements and availability. Currently, most exposure and
vulnerability databases focus on the properties of single el-
ements and tend to contain little to no information on the
connections between them. As we have discussed, this in-
formation is key for more thoroughly understanding and as-
sessing the risk of a system. For this reason, developing and
collecting data with information related to the connections
between the elements is paramount. To promote this perspec-
tive, it is necessary to consider shifting the RA from using tra-
ditional relational databases to so-called graph databases. In
such databases, each node contains, in addition to the tradi-
tional characteristics, also a list of relationship records which
represent its connections with other nodes. The information
on these links is organized by type and direction and may
hold additional attributes.
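The node-with-relationship-records structure just described can be sketched with plain data classes. This is a generic illustration of the graph-database idea, not the API of any specific product; all field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    """A typed, directed link record with optional attributes."""
    rel_type: str        # e.g. "PROVIDES"
    direction: str       # "out" or "in"
    target: str          # id of the node at the other end
    attrs: dict = field(default_factory=dict)

@dataclass
class Node:
    """A node storing its traditional properties plus its relationship records."""
    node_id: str
    properties: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

# A hospital node that provides a health service to a block.
hospital = Node("hospital_42", {"category": "health", "beds": 120})
hospital.relationships.append(
    Relationship("PROVIDES", "out", "block_17", {"service": "health care"}))
print(hospital.relationships[0].target)   # block_17
```

Unlike a row in a relational table, each node carries its own typed, directed links, so traversing from an element to everything it serves requires no join.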
Finally, the introduction of the graph-based approach into
the RA for collective disaster risk aims, in the long term, to
be a first step for future developments of agent-based mod-
els and complex adaptive systems in collective risk assess-
ment. In this perspective, the nodes of the network are agents,
with a defined state (e.g. level of damage), and the interaction with the other agents is controlled by specific rules (e.g. vulnerability and functionality functions) inside the environment they live in (e.g. natural hazard phenomena).
Data availability. The geospatial vector input data for the Mexico
City case study are available in the Supplement.
Supplement. The supplement related to this article is available on-
line at:
Author contributions. MA, MLVM and RF made substantial con-
tributions to the conception and design, acquisition, analysis, and
interpretation of data. All authors participated in drafting the arti-
cle and revising it critically for important intellectual content. All
authors give final approval of the published version.
Competing interests. The authors declare that they have no conflict
of interest.
Acknowledgements. This research was partly funded by Fon-
dazione Cariplo under the project “NEWFRAME: NEtWork-based
Flood Risk Assessment and Management of Emergencies”, and it
has been developed within the framework of the project “Diparti-
menti di Eccellenza”, funded by the Italian Ministry of Education,
University and Research at IUSS Pavia.
Financial support. This research was partly funded by Fondazione
Cariplo under the project “NEWFRAME: NEtWork-based Flood
Risk Assessment and Management of Emergencies”, and it has been
developed within the framework of the project “Dipartimenti di Ec-
cellenza”, funded by the Italian Ministry of Education, University
and Research at IUSS Pavia.
Review statement. This paper was edited by Bruno Merz and re-
viewed by three anonymous referees.
References
Abele, W. I. and Dunn, M.: International CIIP handbook 2006 –
(Vol. I) – An inventory of 20 national and 6 international critical
information infrastructure protection polices, Zurich, Switzer-
land, 2006.
Albano, R., Sole, A., Adamowski, J., and Mancusi, L.: A GIS-based
model to estimate flood consequences and the degree of accessi-
bility and operability of strategic emergency response structures
in urban areas, Nat. Hazards Earth Syst. Sci., 14, 2847–2865,, 2014.
Alexoudi, M. N., Kakderi, K. G., and Pitilakis, K. D.: Seismic risk
and hierarchy importance of interdependent lifelines. Methodol-
ogy and important issues, in: 11th ICASP International Confer-
ence on Application of Statistics and Probability in Civil Engi-
neering, 1–4 August 2011, Zurich, Switzerland, 2011.
Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A.,
Salamon, P., Wyser, K., and Feyen, L.: Global projections of
river flood risk in a warmer world, Earth’s Future, 5, 171–182,, 2017.
Amaro, P.: Proposal of an approach for estimating relationships I-d-
Tr from 24 hours rainfalls, Universidad Nacional Autónoma de
México (UNAM), M.C., 2005.
Arosio, M., Martina, M. L. V, Carboni, E., and Creaco, E.: Sim-
plified pluvial flood risk assessment in a complex urban envi-
ronment by means of a dynamic coupled hydrological-hydraulic
model: case study of Mexico City, in: Proc. of the 5th IAHR Eu-
rope Congress New Challenges in Hydraulic Research and Engi-
neering, edited by: Armanini, A. and Nucci, E., 5th IAHR Europe
Congress Organizers, Trento, 429–430, 2018.
Arrighi, C., Brugioni, M., Castelli, F., Franceschini, S., and Maz-
zanti, B.: Urban micro-scale flood risk estimation with parsi-
monious hydraulic modelling and census data, Nat. Hazards
Earth Syst. Sci., 13, 1375–1391,
13-1375-2013, 2013.
Artina, S., Calenda, G., Calomino, F., La Loggia, G., Modica, C.,
Paoletti, A., Papiri, S., Rasulo, G., and Veltri, P.: Sistemi di
fognatura. Manuale di progettazione, Hoepli, Milan, 1997.
Balbi, S., Giupponi, C., Gain, A., Mojtahed, V., Gallina, V., Torre-
san, S. and Marcomini, A.: The KULTURisk Framework (KR-
FWK): A conceptual framework for comprehensive assessment
of risk prevention measures – Project deliverable 1.6., FP7-ENV-
2010 |Project 265280, 2010.
Barabasi, A. L.: Network Science, edited by: Cambridge Uni-
versity Press, Cambridge, available at:
networksciencebook/ (last access: 10 February 2020), 2016.
Nat. Hazards Earth Syst. Sci., 20, 521–547, 2020
M. Arosio et al.: The whole is greater than the sum of its parts 545
Bazzurro, P. and Luco, N.: Accounting for uncertainty and corre-
lation in earthquake loss estimation, 9th Int. Conf. Struct. Saf.
Reliab., Millpress, Rotterdam, 2687–2694, 2005.
Beirlant, J., Dierckx, G., Goegebeur, Y., and Matthys, G.: Tail Index
Estimation and an Exponential Regression Model, Extremes, 2,
177–200, 1999.
Bergström, J., Uhr, C., and Frykmer, T.: A Complexity Framework
for Studying Disaster Response Management, J. Conting. Crisis
Man., 24, 124–135,,
Biggs, N. L., Lloyd, E. K., and Wilson, R. J.: Graph Theory 1736-
1936, edited by: Clarendon Press, Oxford, 1976.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., and Hwang, D.
U.: Complex networks: Structure and dynamics, Phys. Rep., 424,
175–308,, 2006.
Börner, K., Soma, S., and Vespignani, A.: Network Science, in: An-
nual Review of Information & Technology, vol. 41, edited by:
Medford 537–607, asis&t, New Jersey, 2007.
Bosetti, L., Ivanovic, A., and Menaal, M.: Fragility, Risk,
and Resilience: A Review of Existing Frameworks, avail-
able at:
Assessing-Fragility-Risk-and-Resilience-Frameworks.pdf (last
access: 10 February 2020), 2016.
Bouwer, L. M., Crompton, R. P., Faust, E., Höppe, P., and
Pielke, R. A.: Confronting disaster losses, Science, 318, 753,, 2007.
Bruneau, M., Chang, S. E., Eguchi, R. T., Lee, G. C., O’Rourke,
D., Reinhorn, A. M., Shinozuka, M., Tierney, K., Wallace, W.
A., and von Winterfeldt, D.: A framework to quantitatively as-
sess and enhance the seismic resilience of communites, in: 13th
World Conference on Earthquake Engineering, EarthquakeSpec-
tra, Vancouver, Canada, 2004.
Buldyrev, S. V, Parshani, R., Paul, G., Stanley, H. E., and Havlin,
S.: Catastrophic cascade of failures in interdependent networks,
Nature, 464, 1025–1028,,
Bunde, A. and Havlin, S.: Fractals and Disordered Systems,
Springer-Verlag, Berlin Heidelberg, 1991.
Burton, C. G. and Silva, V.: Assessing Integrated Earth-
quake Risk in OpenQuake with an Application to
Mainland Portugal, Earthq. Spectra, 32, 1383–1403,, 2015.
Campillo, G., Dickson, E., Leon, C., and Goicoechea, A.: Urban
risk assessment Mexico City metropolitan area, Understanding
urban risk: an approach for assessing disaster and climate risk in
cities, Mexico, 2011.
Cardona, O. D.: The Need for Rethinking the Concepts of Vulnera-
bility and Risk from a Holistic Perspective: A Necessary Review
and Criticism for Effective Risk Management, in: Mapping vulnerability: Disasters, development and people, vol. 3, Earthscan
Publishers, London, 37–51, 2003.
Carreño, M. L., Cardona, O., and Barbat, A.: A disaster risk
management performance index, Nat. Hazards, 41, 1–20, 2007a.
Carreño, M. L., Cardona, O. D., and Barbat, A. H.: Urban seismic
risk evaluation: A holistic approach, Nat. Hazards, 40, 137–172, 2007b.
Carreño, M. L., Cardona, O. D., and Barbat, A. H.: New methodol-
ogy for urban seismic risk assessment from a holistic perspective,
B. Earthq. Eng., 10, 547–565, 2012.
Clark-Ginsberg, A., Abolhassani, L., and Rahmati, E. A.: Com-
paring networked and linear risk assessments: From the-
ory to evidence, Int. J. Disast. Risk Re., 30, 216–224, 2018.
Coles, S.: An Introduction to Statistical Modeling of Extreme Val-
ues, Springer-Verlag, London, 2001.
Crowley, H. and Bommer, J. J.: Modelling seismic hazard in earth-
quake loss models with spatially distributed exposure, B. Earthq.
Eng., 4, 249–273, 2006.
Cutter, S. L., Barnes, L., Berry, M., Burton, C., Evans, E., Tate,
E., and Webb, J.: A place-based model for understanding com-
munity resilience to natural disasters, Global Environ. Chang.,
18, 598–606, 2008.
Cutter, S. L., Burton, C. G., and Emrich, C. T.: Disaster Resilience Indicators for Benchmarking Baseline Conditions, J. Homel. Secur. Emerg., 7, 51, 2010.
David, C.: The Risk Triangle, available at: https://www.ilankelman.org/crichton/1999risktriangle.pdf (last access: 1 January 2020), 1999.
Dueñas-Osorio, L. and Vemuru, S. M.: Cascading failures in
complex infrastructure systems, Struct. Saf., 31, 157–167, 2009.
Dueñas-Osorio, L., Craig, J. I., and Goodno, B. J.: Probabilistic re-
sponse of interdependent infrastructure networks, in: 2nd Annual Meeting of the Asian-Pacific Network of Centers for Earthquake Engineering Research (ANCER), Honolulu, Hawaii, 2004.
Eakin, H., Bojórquez-Tapia, L. A., Janssen, M. A., Georgescu,
M., Manuel-Navarrete, D., Vivoni, E. R., Escalante, A.
E., Baeza-Castro, A., Mazari-Hiriart, M., and Lerner, A.
M.: Urban resilience efforts must consider social and po-
litical forces, P. Natl. Acad. Sci. USA, 114, 186–189, 2017.
Euler, L.: Solutio problematis ad geometriam situs pertinentis, Com-
mentarii Academiae Scientiarum Imperialis Petropolitanae, 8,
128–140, 1736.
Falter, D., Schröter, K., Dung, N. V., Vorogushyn, S., Kreibich,
H., Hundecha, Y., Apel, H., and Merz, B.: Spatially coherent
flood risk assessment based on long-term continuous simula-
tion with a coupled model chain, J. Hydrol., 524, 182–193, 2015.
Gallina, V., Torresan, S., Critto, A., Sperotto, A., Glade, T.,
and Marcomini, A.: A review of multi-risk methodologies for
natural hazards: Consequences and challenges for a climate
change impact assessment, J. Environ. Manage., 168, 123–132, 2016.
Gao, J., Liu, X., Li, D., and Havlin, S.: Recent progress on
the resilience of complex networks, Energies, 8, 12187–12210, 2015.
Grossi, P. and Kunreuther, H.: Catastrophe Modeling: A New Approach to Managing Risk, Springer Science + Business Media, Inc., Boston, 2005.
Hammond, M. J., Chen, A. S., Djordjević, S., Butler, D., and Mark, O.: Urban flood impact assessment: A state-of-the-art review, Urban Water J., 12, 14–29, 2013.
Holmgren, Å. J.: Using Graph Models to Analyze the Vulnera-
bility of Electric Power Networks, Risk Anal., 26, 955–969, 2006.
IPCC: Managing the Risks of Extreme Events and Disasters to Ad-
vance Climate Change Adaptation. A Special Report of Work-
ing Groups I and II of the Intergovernmental Panel on Climate
Change, edited by: Field, C. B., Barros, V., Stocker, T. F., Qin,
D., Dokken, D. J., Ebi, K. L., Mastrandrea, M. D., Mach, K. J.,
Plattner, G.-K., Allen, S. K., Tignor, M., and Midgley, P. M.,
Cambridge University Press, Cambridge, UK, and New York,
NY, USA, 582 pp., 2012.
Kakderi, K., Argyroudis, S., and Pitilakis, K.: State of the art lit-
erature review of methodologies to assess the vulnerability of
a “system of systems” – Project deliverable D2.9., 2011.
Karagiorgos, K., Thaler, T., Hübl, J., Maris, F., and Fuchs, S.: Multi-
vulnerability analysis for flash flood risk management, Nat.
Hazards, 82, 63–87, 2016.
Koenig, M. D. and Battiston, S.: From Graph Theory to Models
of Economic Networks. A Tutorial, in: Networks, Topology and
Dynamics, Springer-Verlag, Berlin, 23–63, 2009.
Lane, J. A. and Valerdi, R.: Accelerating system of sys-
tems engineering understanding and optimization through
lean enterprise principles, in: 2010 IEEE International Systems Conference Proceedings, SysCon 2010, 196–201, 2010.
Lewis, T. G.: Critical Infrastructure Protection in Homeland Secu-
rity: Defending a Networked Nation, John Wiley & Sons, 2014.
Lhomme, S., Serre, D., Diab, Y., and Laganier, R.: Analyzing re-
silience of urban networks: a preliminary step towards more
flood resilient cities, Nat. Hazards Earth Syst. Sci., 13, 221–230, 2013.
Liu, B., Siu, Y. L., and Mitchell, G.: Hazard interaction analysis for
multi-hazard risk assessment: a systematic classification based
on hazard-forming environment, Nat. Hazards Earth Syst. Sci.,
16, 629–642, 2016.
Luce, R. D. and Perry, A. D.: A method of matrix anal-
ysis of group structure, Psychometrika, 14, 95–116, 1949.
Markolf, S. A., Chester, M. V., Eisenberg, D. A., Iwaniec, D.
M., Davidson, C. I., Zimmerman, R., Miller, T. R., Ruddell, B.
L., and Chang, H.: Interdependent Infrastructure as Linked So-
cial, Ecological, and Technological Systems (SETSs) to Address
Lock-in and Enhance Resilience, Earth’s Futur., 6, 1638–1659, 2018.
Menoni, S.: Chains of damages and failures in a metropolitan environment: some observations on the Kobe earthquake in 1995, J.
Hazard. Mater., 86, 101–119, 2001.
Menoni, S., Pergalani, F., Boni, M., and Petrini, V.: Lifelines earth-
quake vulnerability assessment: a systemic approach, Soil Dyn.
Earthq. Eng., 22, 1199–1208, 2002.
Mingers, J. and White, L.: A Review of the Recent Contribution
of Systems Thinking to Operational Research and Management
Science – Working paper series n. 197, University of Bristol,
Bristol, 2009.
Navin, P. K. and Mathur, Y. P.: Application of graph theory for op-
timal sewer layout generation, Discovery, 40, 151–157, 2015.
Nepusz, T. and Csárdi, G.: Network Analysis and Visualization, available at: igraph.pdf (last access: 1 February 2020), 2018.
Newman, M. E. J.: Networks: An Introduction, Oxford University Press, Oxford, New York, 2010.
Ouyang, M.: Review on modeling and simulation of interdependent
critical infrastructure systems, Reliab. Eng. Syst. Safe, 121, 43–
60, 2014.
Pant, R., Thacker, S., Hall, J. W., Alderson, D., and Barr, S.: Critical
infrastructure impact assessment due to flood exposure, J. Flood
Risk Manag., 11, 22–33, 2018.
Pescaroli, G. and Alexander, D.: Critical infrastructure, panarchies
and the vulnerability paths of cascading disasters, Nat. Hazards,
82, 175–192, 2016.
Pescaroli, G. and Alexander, D.: Understanding Com-
pound, Interconnected, Interacting, and Cascading Risks:
A Holistic Framework, Risk Anal., 38, 2245–2257, 2018.
Reed, D. A., Kapur, K. C., and Christie, R. D.: Methodology for as-
sessing the resilience of networked infrastructure, IEEE Syst. J.,
3, 174–180, 2009.
Rinaldi, S. M.: Modeling and simulating critical infrastructures
and their interdependencies, Big Island, HI, USA, IEEE, 8 pp., 2004.
Rinaldi, S. M., Peerenboom, J. P., and Kelly, T. K.: Iden-
tifying, understanding, and analyzing critical infrastructure
interdependencies, IEEE Contr. Syst. Mag., 21, 11–25, 2001.
Rossman, L. A.: Storm Water Management Model User’s
Manual, EPA – United States Environmental Protection Agency, available at:
storm-water-management-model-swmm (last access: 1 Febru-
ary 2020), 2015.
Santos-Reyes, J., Gouzeva, T., and Santos-Reyes, G.: Earthquake
risk perception and Mexico City’s public safety, Procedia Engineer., 84, 662–671, 2014.
Sapountzaki, K.: Social resilience to environmental risks: A mech-
anism of vulnerability transfer?, Manag. Environ. Qual. An Int.
J., 18, 274–297, 2007.
Scarrott, C. and MacDonald, A.: A review of extreme value threshold estimation and uncertainty quantification, REVSTAT Stat. J., 10, 33–60, 2012.
Schneiderbauer, S. and Ehrlich, D.: Risk, hazard and people’s
vulnerability to natural hazards: A review of definitions, con-
cepts and data, Eur. Comm. Jt. Res. Centre. EUR, 21410, 40, 2004.
Schwartz, N., Cohen, R., Ben-Avraham, D., Barabási, A. L., and Havlin, S.: Percolation in directed scale-free networks, Phys. Rev. E, 66, 1–4, 2002.
Setola, R., Rosato, V., Kyriakides, E., and Rome, E.: Managing the
Complexity of Critical Infrastructures, Springer Nature, Poland, 2016.
M. Arosio et al.: The whole is greater than the sum of its parts 547
SFDRR: Sendai Framework for Disaster Risk Reduction
2015–2030, available at:
publications/43291 (last access: 18 February 2020), 2015.
Tellman, B., Bausch, J. C., Eakin, H., Anderies, J. M., Mazari-
hiriart, M., and Manuel-navarrete, D.: Adaptive pathways and
coupled infrastructure: seven centuries of adaptation to water risk
and the production of vulnerability in Mexico City, Ecol. Soc.,
23, 1, 2018.
Terzi, S., Torresan, S., Schneiderbauer, S., Critto, A., Zebisch,
M., and Marcomini, A.: Multi-risk assessment in mountain regions: A review of modelling approaches for climate change adaptation, J. Environ. Manage., 232, 759–771, 2019.
Trucco, P., Cagno, E., and De Ambroggi, M.: Dynamic func-
tional modelling of vulnerability and interoperability of Crit-
ical Infrastructures, Reliab. Eng. Syst. Safe., 105, 51–63, 2012.
Tsuruta, M., Shoji, Y., Kataoka, S., and Goto, Y.: Damage propagation caused
by interdependency among critical infrastructures, 14th World
Conf. Earthq. Eng., 8, 2008.
Van Der Hofstad, R.: Percolation and Random Graphs, in: New Per-
spectives in Stochastic Geometry, edited by: Kendall, W. S. and
Molchanov, I., Eindhoven University of Technology, Eindhoven,
the Netherlands, 2009.
Wahl, T., Jain, S., Bender, J., Meyers, S. D., and Luther, M. E.:
Increasing risk of compound flooding from storm surge and
rainfall for major US cities, Nat. Clim. Change, 5, 1093–1097, 2015.
Wilson, R. J.: Introduction to Graph Theory, Oliver & Boyd, Edin-
burgh, 1996.
Zimmerman, R., Foster, S., González, J. E., Jacob, K., Kunreuther,
H., Petkova, E. P., and Tollerson, E.: New York City Panel on
Climate Change 2019 Report Chapter 7: Resilience Strategies for
Critical Infrastructures and Their Interdependencies, Ann. NY
Acad. Sci., 1439, 174–229, 2019.
Zio, E.: Challenges in the vulnerability and risk analysis of crit-
ical infrastructures, Reliab. Eng. Syst. Safe., 152, 137–150, 2016.
Zscheischler, J., Westra, S., Van Den Hurk, B. J. J. M., Senevi-
ratne, S. I., Ward, P. J., Pitman, A., Aghakouchak, A., Bresch,
D. N., Leonard, M., Wahl, T., and Zhang, X.: Future climate
risk from compound events, Nat. Clim. Change, 8, 469–477,, 2018. Nat. Hazards Earth Syst. Sci., 20, 521–547, 2020
... The debate on what makes a complex system resilient, and on which variables to measure and monitor in order to improve it, has not yet been settled at either the theoretical or the applied level [12]. In this open discussion, Arosio et al. (2020) [36] proposed a paradigm shift from a reductionist to a holistic approach to assessing natural hazard risk, supported by the construction of a graph. The graph is constructed by identifying its two main objects, i.e., nodes and links, and their characteristics. ...
... The connections between exposed elements constitute a network that, in the case of a hazard, propagates the impacts along the system (the so-called cascading effects). Being a network, the system can be mathematically represented by a graph [36]. Figure 1 conceptually shows how the impacts of a hazardous event are propagated: at first, only directly impacted elements are accounted for, then elements indirectly impacted by them, and thereafter, in sequence, all the other elements connected to those. ...
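The stepwise propagation described in this excerpt (direct impacts first, then successive waves of indirectly impacted elements) amounts to a breadth-first traversal of the dependency graph. A minimal sketch, assuming an invented toy system; the node names and the `propagate_impacts` helper are illustrative, not the cited model:

```python
from collections import deque

def propagate_impacts(dependents, directly_hit):
    """Map each impacted element to its propagation step:
    0 = direct impact, 1 = first wave of indirect impacts, and so on."""
    step = {node: 0 for node in directly_hit}
    queue = deque(directly_hit)
    while queue:
        node = queue.popleft()
        for nxt in dependents.get(node, ()):  # elements that rely on `node`
            if nxt not in step:               # visit each element once
                step[nxt] = step[node] + 1
                queue.append(nxt)
    return step

# Toy system: a substation feeds a pump and a hospital; the pump feeds homes.
dependents = {"substation": ["pump", "hospital"], "pump": ["homes"]}
impacts = propagate_impacts(dependents, ["substation"])
# impacts == {"substation": 0, "pump": 1, "hospital": 1, "homes": 2}
```

The step number recovers exactly the "waves" of Figure 1: step 0 is the direct impact, and each later step is one further ring of indirectly affected elements.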
... Therefore, the total, overall, impact of the event over the system is much larger than only the direct impact. The graph model proposed by Arosio et al. [36] is designed to assess the total impact. In this paper, we present a step forward with regard to the previous model that enables the graph to take into account the capacity of the system to resile to the hazardous event. ...
In the last decades, resilience has officially become the worldwide cornerstone to reduce the risk of disasters and improve preparedness, response, and recovery capacities. Although the concept of resilience is now clear, it is still under debate how to model and quantify it. The aim of this work was to quantify the resilience of a complex system, such as a densely populated and urbanized area, by modelling it with a graph, the mathematical representation of the system's elements and connections. We showed that the graph can account for the resilience characteristics included in its definition according to the United Nations General Assembly, considering two significant aspects of this definition in particular: (1) resilience is a property of a system and not of single entities, and (2) resilience is a property of the system's dynamic response. We proposed to represent the exposed elements of the system and their connections (i.e., the services they exchange) with a weighted and redundant graph. By means of it, we assessed the systemic properties, such as authority and hub values, and highlighted the centrality of some elements. Furthermore, we showed that after an external perturbation, such as a hazardous event, each element can dynamically adapt, and a new graph configuration is set up, taking advantage of the redundancy of the connections and the capacity of each element to supply lost services. Finally, we proposed a quantitative metric for resilience as the actual reduction of the impacts of events at different return periods when resilient properties of the system are activated. To illustrate the proposed methodology step by step and show its practical feasibility, we applied it to a pilot study: the city of Monza, a densely populated urban environment exposed to river and pluvial floods.
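The authority and hub values mentioned in this abstract are the classic HITS centrality scores. A minimal power-iteration sketch on an unweighted toy graph; the `hits` function and node names are illustrative assumptions, whereas the cited study works on a weighted, redundant graph of exchanged services:

```python
def hits(edges, nodes, iters=50):
    """Plain HITS power iteration: authorities are nodes that receive
    many services, hubs are nodes that supply many well-served nodes."""
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return auth, hub

# Toy service graph: a plant supplies two districts; one district feeds a hospital.
edges = [("plant", "district_a"), ("plant", "district_b"), ("district_a", "hospital")]
nodes = {"plant", "district_a", "district_b", "hospital"}
auth, hub = hits(edges, nodes)
# The plant emerges as the top hub; the two districts share the top authority score.
```

Normalizing after each step keeps the scores bounded; on this toy graph the main service supplier dominates the hub ranking, which is the sense in which such metrics highlight "the centrality of some elements".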
... (Arosio et al., 2020; Ogie et al., 2017; Pant et al., 2016; Yadav et al., 2020), earthquake impacts (Bono and Gutiérrez, 2011; Carvalho et al., 2017; Dueñas-Osorio et al., 2007; Dunant et al., 2021; Fragiadakis and Christodoulou, 2014), volcanic impacts (Wilkinson et al., 2012), landslide impacts (Postance et al., 2017), hurricane impacts (Winkler et al., 2011), drought impacts (Lim-Camacho et al., 2017). ...
... Being the common denominator among the various graphical methods identified, edges have different connotations in the literature: Monetary flows and services in SNs connecting affected actors in a socio-economic system (e.g. Arosio et al., 2020); chronological sequence of cascading events triggered by natural hazards in ETs and LTs (e.g. Marzocchi and Bebbington, 2012); (in)dependencies in BNs (e.g. ...
... most nodes with similar degree) and heterogeneous (i.e. few high-degree nodes also known as hubs) networks (Arosio et al., 2020). The latter is of special importance in the natural hazards' literature as it includes a special network type known as scale-free network characterised by a heavy-tailed distribution with a large number of poorly connected nodes and a few highly important hubs that when hit by a hazard event the consequences for the network could be catastrophic due to the central role of the hubs (Arosio et al., 2020;Chopra and Khanna, 2015). ...
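The homogeneous-versus-heterogeneous distinction in this excerpt can be made concrete by tabulating node degrees and flagging unusually well-connected nodes as hubs. A small illustrative sketch; the threshold-based hub criterion and node names are invented simplifications for demonstration, not a test for scale-free structure:

```python
from collections import Counter

def degree_distribution(edges):
    """Undirected degree: count how many edges touch each node."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def hubs(deg, factor=2.0):
    """Flag nodes whose degree is at least `factor` times the mean degree,
    a crude illustrative hub criterion rather than a scale-free test."""
    mean = sum(deg.values()) / len(deg)
    return {n for n, d in deg.items() if d >= factor * mean}

# A star-like network: one depot serving four sites, plus one side link.
edges = [("depot", s) for s in ("s1", "s2", "s3", "s4")] + [("s1", "s2")]
deg = degree_distribution(edges)
# deg["depot"] == 4; the mean degree is 2.0, so only the depot is flagged as a hub.
```

In a homogeneous network no node would clear such a threshold, whereas in a heterogeneous (heavy-tailed) network a few nodes stand far above the mean, which is why losing a hub can be catastrophic for the whole system.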
The popular concept of wellbeing has added multiple dimensions to the current socioeconomic measures of vulnerability from natural hazards. Due to the wellbeing concept's relevance in various policy agendas, there is a need for a stronger integration of what is predominantly a socio-economic concept into the natural hazards space. Graphical methods have been used as transdisciplinary engagement tools to translate verbal descriptions of socio-ecological systems into simulation models able to test hypotheses. The purpose of this article is to identify the graphical methods that have been used in the literature to graphically represent, structure and model different segments of the hazard risk chain. A thorough review of the literature on natural hazards was performed using a set of keywords and filters that resulted in a total of 94 articles, which were then categorised based on the graphical methods used, broad families, properties, hazard types, and segments along the risk chain considered. A case study on volcanic hazards in Mount Taranaki, New Zealand showcased ways forward by conceptually combining methods to link hazards to impacts on wellbeing. Out of the review it was identified that the most widely used methodologies in the natural hazards space are probabilistic graphs (e.g. Bayesian networks) representing the random nature of hazards while mapping methods based on System Dynamic principles (SD) (e.g. causal loop diagrams) are used to characterise the dynamically emergent behaviours of socio-economic agents. While studies linking hazards to wellbeing using graphs are scarce, there is a nascent literature on the characterisation of wellbeing's multi-dimensionality using networks and SD diagrams. 
Hence, the possibilities to use common methods, or combinations of these, are numerous potentially enabling the creation of graph-based, distilled simulation models that can be used by experts from different backgrounds to quantitatively model the wellbeing impacts exerted by natural hazards.
... V describes the vulnerability to cascading effects in a network arrangement and can highlight specific locations or sectors for further assessment. Additional sources on the derivation of other network characteristics can be found here: [21,51]. ...
Critical infrastructure (CI) networks are essential for the survival and functionality of society and the economy. Disruptions to CI services and the cascading effects of these disruptions are not currently included in flood risk management (FRM). The work presented in this study integrates CI into every step of FRM, including flood risk analysis, risk mitigation and risk communication. A CI network modelling technique enables the flood consequences for CI to be quantified as part of the flood risk analysis. The CI consequences derived from this analysis include spatial overviews and the temporal succession of CI disruptions. The number of affected CI end-users and the duration of the disruption are arranged in a risk matrix and in a decision-making matrix. Thus, the total flood risk is extended with CI consequences. By integrating CI and CI network characteristics into the flood risk assessment and the mitigation steps, a wider range of measures for action can be considered. Additionally, the continuous participation of CI operators is introduced as beneficial for every step of the FRM. A case study in Accra, Ghana proves the benefits of CI integration for all FRM steps. During participatory CI stakeholder engagements for this study six CI sectors were identified for the assembly of the CI network. The backbone of the analysis is a multisectoral, layered CI network model with 433 point elements, 1216 connector elements and 486 polygon elements.
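The "number of affected CI end-users" consequence described in this abstract can be approximated by removing failed elements from the network and counting the end-users that lose every path to a supply source. A minimal reachability sketch; the network, names and the `affected_users` helper are invented for illustration, not the study's layered model:

```python
def reachable(adj, sources, removed):
    """Nodes still reachable from any supply source once `removed` elements fail."""
    seen, stack = set(), [s for s in sources if s not in removed]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(n for n in adj.get(node, ()) if n not in removed)
    return seen

def affected_users(adj, sources, users, removed):
    """End-users that lose every path to a source when `removed` elements fail."""
    ok = reachable(adj, sources, removed)
    return [u for u in users if u not in ok]

# Toy water network: a source feeds two junctions; each junction serves a district.
adj = {"source": ["j1", "j2"], "j1": ["district_a"], "j2": ["district_b"]}
lost = affected_users(adj, ["source"], ["district_a", "district_b"], removed={"j1"})
# lost == ["district_a"]: flooding junction j1 cuts off district_a only.
```

Running this for each flood scenario and timestep would yield the affected-user counts that the abstract arranges, together with outage duration, in a risk matrix.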
... Many studies are still grounded in single-hazard isolation, which has shown shortcomings in capturing the interactions that shape multi-hazard risk. To overcome some of these limitations, multi-hazard studies are nowadays receiving growing scientific attention (Arosio et al. 2020). Many researchers have considered multi-hazard zoning in multiple countries, taking administrative boundaries into account (Siddique & Schwarz 2015; Durlević et al. 2021; Rusk et al. 2022). ...
Landslides, floods, fires, windstorms, hailstorms, and earthquakes are major dangers in Bhutan due to historical events and their potential damage. At present, systematic collection of data is scarce and no multi-hazard zoning is reported in the existing literature for Bhutan. In addition, for proper disaster management, recognizing the existence of the hazards and identifying the vulnerable areas are the first important tasks for any multi-hazard risk studies. To fill the gap, the main objective of this study is to prepare the multi-hazard zoning and assess the multi-hazard population risk for Bhutan on seven historical hazard events. To achieve this, we first collected data on the historical events of different periods based on the data availability and created a district-level database. A total of 1224 hazard events were retrieved. We then calculated the weighted score for individual hazards based on the number of occurrences and the degree of impact through a multi-criteria decision analysis model (MCDA) using the analytic hierarchy process (AHP). The district-wise individual hazard scores are then obtained using the weighted scores. The total hazard score (THS) was aggregated and normalized to obtain the district-wise multi-hazard scores. A multi-hazard zoning map was created in the open-source software QGIS, highlighting 70% of districts with moderate to severe multi-hazard vulnerability. Considering the population distribution in each district at the local levels, the multi-hazard score is integrated and the multi-hazard population risk is mapped.
... This paradigm for the study of social and/or economic ecosystems not only allows the interaction of people, organizations or intangible elements of the ecosystem (such as an event that occurs within it) to be measured but also creates the possibility of using computational simulations and quantitative techniques to understand complex dynamics that would otherwise be difficult to analyse, such as the development of strongly connected communities, the growth of the ecosystem, its resilience to disruptive events, its propensity to collapse, or the general evaluation of the health of an ecosystem (Arosio et al., 2020;Huang et al., 2018). ...
The benefits of using complex network analysis (CNA) to study complex systems, such as an economy, have become increasingly evident in recent years. However, the lack of a single comparative index that encompasses the overall wellness of a structure can hinder the simultaneous analysis of multiple ecosystems. A formula to evaluate the structure of an economic ecosystem is proposed here, implementing a mathematical approach based on CNA metrics to construct a comparative measure that reflects the collaboration dynamics and its resultant structure. This measure provides the relevant actors with an enhanced sense of the social dynamics of an economic ecosystem, whether related to business, innovation, or entrepreneurship. Available graph metrics were analysed, and 14 different formulas were developed. The efficiency of these formulas was evaluated on real networks from 11 different innovation-driven entrepreneurial economic ecosystems in six countries from Latin America and Europe and on 800 random graphs simulating similarly constructed networks.
The sustainable transition to resilient cities is linked to the evaluation of their citizens' habits. Understanding Built Environment (BE) use is fundamental to planning effective risk-mitigation strategies, and users' features and behaviors deeply affect the way BEs are used. Recent studies are moving toward the definition of typological (idealized) scenarios, namely Built Environment Typologies (BETs), for simulation-based analyses aimed at the assessment of real-life BE safety and resilience. Rapid surveys are available to collect data on typological features and hazard/physical vulnerability factors, but not to adequately assess BE users' vulnerability and exposure to single/multiple natural and human-related risks, and their spatiotemporal variability. Within this framework, this work aims at providing an expeditious survey form to quantify, collect and represent such data. The form is based on remote analyses for rapid evaluations of critical hourly/daily user-related conditions. Among BEs/BETs, the attention is here focused on squares, which represent meeting spaces par excellence and host main functions for communities. A real-world square (Piazza Duomo in Reggio Calabria, Italy) is selected for the form application because of its geomorphological and riskiness characterization in correlation with its previously defined BET type. Results are assessed through Key Performance Indicators (KPIs) summarizing daily trends according to the users' age, position and familiarity with outdoor and indoor areas. Promoted in the BE S2ECURe Italian Research Project, the form can also support safety planners and local administrations in simulation-based assessment and the development of risk-mitigation strategies. Keywords: Survey form; Users; Squares; Users' exposure; Users' vulnerability; Built environment; Multi-hazards
Unlike other studies on wind-precipitation compound events, station data was employed from all seasons 1961–2020 to analyze the frequency and seasonal distribution of these events in Central Europe. The spatial pattern of the annual frequency is mainly determined by the cold half-years when the frequency generally decreases with increasing longitude (due to the decreasing effect of extratropical cyclones), but it also increases with increasing altitude (probably due to the orographic precipitation enhancement effected by strong winds). Nevertheless, wind-precipitation compound events are also generated by convective storms mainly in summer, when compound events are more equally distributed in Central Europe, with generally higher frequencies in lowlands. Five types of weather stations were distinguished according to the seasonal distribution of wind-precipitation compound events, with the percentage of summer events as the main criterion. Mostly winter type dominates in the west, mostly autumn type at the coast of the North Sea, mixed type in north-east Germany, mostly summer type in central part of Germany, and summer type in eastern part of Czechia and in south-east Austria. We also demonstrate on selected examples that compound events frequently occur at a station only in the season when both abnormal winds and abnormal precipitation events appear and are related to the same circulation conditions. This is the reason why wind-precipitation compound events are very rare at some stations, mainly in the highlands in the eastern part of the study region. We also discuss the role of the threshold for selecting wind-precipitation compound events and prove that the higher their frequency is at a station, the higher the percentage of stronger events among them. This finding highlights wind-precipitation compound events as a significant natural hazard mainly in exposed areas.
Network analysis is a useful tool to analyse the interactions and structure of graphs that represent the relationships among entities, such as sectors within an urban system. Connecting entities in this way is vital in understanding the complexity of the modern world, and how to navigate these complexities during an event. However, the field of network analysis has grown rapidly since the 1970s to produce a vast array of available metrics that describe different graph properties. This diversity allows network analysis to be applied across myriad research domains and contexts, however widespread applications have produced polysemic metrics. Challenges arise in identifying which method of network analysis to adopt, which metrics to choose, and how many are suitable. This paper undertakes a structured review of literature to provide clarity on raison d'etre behind metric selection and suggests a way forward for applied network analysis. It is essential that future studies explicitly report the rationale behind metric choice and describe how the mathematics relates to target concepts and themes. An exploratory metric analysis is an important step in identifying the most important metrics and understanding redundant ones. Finally, where applicable, one should select an optimal number of metrics that describe the network both locally and globally, so as to understand the interactions and structure as holistically as possible. Supplementary information: The online version contains supplementary material available at 10.1007/s41109-022-00476-w.
The paper discusses how the Urban System Abstraction Hierarchy (USAH) can be used as an informative hazard-agnostic tool to understand interdependencies between shocks which impact tangible parts of the city system, and longer-term stressors which impact intangible outcomes of the city system. To create resilient cities, we must grapple with such complex interdependencies. Effective solutions that foster resilience require acknowledging the interplay between sectors (e.g. healthcare systems and ecosystem services), between scales (e.g. local and regional), between timeframes (e.g. immediate shocks and longer-term stresses), and between what we can and cannot see in the physical world (e.g. tangible resources and abstract purposes). These critical ‘systems thinking’ areas can be explored by mapping urban interdependencies through their functionality, rather than their geospatial connectivity. The aim of this paper is to build and validate the USAH as a resilience tool to do just this. The analysis demonstrates how the USAH tool can make interactions explicit whilst keeping urban complexity tractable. By quantifying interdependencies, fresh perspectives on urban functionality are provided. It concludes that the USAH tool fills an important gap in the resilience literature by helping to operationalise the complexity within urban systems.
The development of strategies to adapt to and mitigate the potential adverse consequences of natural hazards requires support from risk assessment studies that quantify the impacts of hazardous events on our society. A comprehensive analysis of risk commonly evaluates the elements exposed to the hazard probabilistic scenarios and their vulnerabilities. However, while significant advances have been made in the assessment of direct losses, indirect impacts are less frequently examined. This work assesses the indirect consequences of two hydrologic hazards, i.e., pluvial and fluvial floods, in an urban context from a system perspective. It presents a methodology to estimate the services accessibility risk (SAR) that considers the accessibility of roads and the connection between providers and users of services in a city. The feasibility of the proposed approach is illustrated by an application to a pilot study in Monza city (northern Italy) considering pluvial and fluvial flood hazard with different return periods. The results in terms of the social and economic impacts are analyzed considering features of age, disability, and the different economic sectors.
This work presents the case study of Mexico City (MC), which could experience an increase in pluvial flooding due to climate change, urbanization and subsidence. We present the development of a simplified model that explicitly integrates the drainage system and the surface runoff to estimate both the hazard and its impacts on population and buildings.
Traditional infrastructure adaptation to extreme weather events (and now climate change) has typically been techno-centric and heavily grounded in robustness—the capacity to prevent or minimize disruptions via a risk-based approach that emphasizes control, armoring, and strengthening (e.g., raising the height of levees). However, climate and nonclimate challenges facing infrastructure are not purely technological. Ecological and social systems also warrant consideration to manage issues of overconfidence, inflexibility, interdependence, and resource utilization—among others. As a result, techno-centric adaptation strategies can result in unwanted tradeoffs, unintended consequences, and underaddressed vulnerabilities. Techno-centric strategies that lock-in today's infrastructure systems to vulnerable future design, management, and regulatory practices may be particularly problematic by exacerbating these ecological and social issues rather than ameliorating them. Given these challenges, we develop a conceptual model and infrastructure adaptation case studies to argue the following: (1) infrastructure systems are not simply technological and should be understood as complex and interconnected social, ecological, and technological systems (SETSs); (2) infrastructure challenges, like lock-in, stem from SETS interactions that are often overlooked and underappreciated; (3) framing infrastructure with a SETS lens can help identify and prevent maladaptive issues like lock-in; and (4) a SETS lens can also highlight effective infrastructure adaptation strategies that may not traditionally be considered. Ultimately, we find that treating infrastructure as SETS shows promise for increasing the adaptive capacity of infrastructure systems by highlighting how lock-in and vulnerabilities evolve and how multidisciplinary strategies can be deployed to address these challenges by broadening the options for adaptation.
In recent years, there has been a gradual increase in research literature on the challenges of interconnected, compound, interacting, and cascading risks. These concepts are becoming ever more central to the resilience debate. They aggregate elements of climate change adaptation, critical infrastructure protection, and societal resilience in the face of complex, high‐impact events. However, despite the potential of these concepts to link together diverse disciplines, scholars and practitioners need to avoid treating them in a superficial or ambiguous manner. Overlapping uses and definitions could generate confusion and lead to the duplication of research effort. This article gives an overview of the state of the art regarding compound, interconnected, interacting, and cascading risks. It is intended to help build a coherent basis for the implementation of the Sendai Framework for Disaster Risk Reduction (SFDRR). The main objective is to propose a holistic framework that highlights the complementarities of the four kinds of complex risk in a manner that is designed to support the work of researchers and policymakers. This article suggests how compound, interconnected, interacting, and cascading risks could be used, with little or no redundancy, as inputs to new analyses and decisional tools designed to support the implementation of the SFDRR. The findings can be used to improve policy recommendations and support tools for emergency and crisis management, such as scenario building and impact trees, thus contributing to the achievement of a system‐wide approach to resilience.
Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a ‘compound event’. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.
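The underestimation that motivates compound-event analysis can be illustrated with a toy Monte Carlo experiment: two standard-normal climate drivers whose joint extreme is far more likely when the drivers are correlated than an independence assumption would suggest. The correlation coefficient, threshold and sample size below are arbitrary illustrative choices, not values from the article.

```python
import math
import random

def joint_exceedance(rho, thr, n=200_000, seed=42):
    """Monte Carlo estimate of P(X > thr and Y > thr) for two standard
    normal drivers with correlation rho (toy compound-event model)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(0, 1)
        # Build Y with the desired correlation to X.
        y = rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        if x > thr and y > thr:
            hits += 1
    return hits / n

thr = 2.0  # roughly a 1-in-44 event for each driver alone
p_independent = joint_exceedance(0.0, thr)  # drivers treated as independent
p_compound = joint_exceedance(0.7, thr)     # spatially/temporally dependent drivers
print(p_compound / p_independent)  # dependence makes the joint extreme far likelier
```

This is the core of the argument above: assessing one driver at a time implicitly multiplies marginal probabilities, which can understate the joint-extreme probability by an order of magnitude.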
Infrastructure development is central to the processes that abate and produce vulnerabilities in cities. Urban actors, especially those with power and authority, perceive and interpret vulnerability and decide when and how to adapt. When city managers use infrastructure to reduce urban risk in the complex, interconnected city system, new fragilities are introduced because of inherent system feedbacks. We trace the interactions between system dynamics and decision-making processes over 700 years of Mexico City’s adaptations to water risks, focusing on the decision cycles of public infrastructure providers (in this case, government authorities). We bring together two lenses in examining this history: robustness-vulnerability trade-offs to explain the evolution of systemic risk dynamics mediated by feedback control, and adaptation pathways to focus on the evolution of decision cycles that motivate significant infrastructure investments. Drawing from historical accounts, archeological evidence, and original research on water, engineering, and cultural history, we examine adaptation pathways of human settlement, water supply, and flood risk. Mexico City’s history reveals insights that expand the theory of coupled infrastructure and lessons salient to contemporary urban risk management: (1) adapting by spatially externalizing risks can backfire: as cities expand, such risks become endogenous; (2) over time, adaptation pathways initiated to address specific risks may begin to intersect, creating complex trade-offs in risk management; and (3) city authorities are agents of risk production: even in the face of new exogenous risks (climate change), acknowledging and managing risks produced endogenously may prove more adaptive. History demonstrates that the very best solutions today may present critical challenges for tomorrow, and that collectively people have far more agency in and influence over the complex systems we live in than is often acknowledged.
Improving urban resilience could help cities better cope with natural disasters, such as neighborhood flood events in Mexico City pictured here. Data source: Unidad Tormenta, Sistema de Aguas de la Ciudad de Mexico.
Rising global temperature has put increasing pressure on understanding the linkage between atmospheric warming and the occurrence of natural hazards. While the Paris Agreement has set the ambitious target of limiting global warming to 1.5°C compared to preindustrial levels, scientists are urged to explore scenarios for different warming thresholds and quantify ranges of socioeconomic impact. In this work, we present a framework to estimate the economic damage and population affected by river floods at the global scale. It is based on a modeling cascade involving hydrological, hydraulic and socioeconomic impact simulations, and makes use of state-of-the-art global layers of hazard, exposure and vulnerability at 1-km grid resolution. An ensemble of seven high-resolution global climate projections based on Representative Concentration Pathway 8.5 is used to derive streamflow simulations in the present and future climate. These simulations were analyzed to assess the frequency and magnitude of river floods and their impacts under scenarios corresponding to 1.5°C, 2°C, and 4°C global warming. Results indicate a clear positive correlation between atmospheric warming and future flood risk at the global scale. At 4°C global warming, countries representing more than 70% of the global population and of global gross domestic product will face increases in flood risk in excess of 500%. Changes in flood risk are unevenly distributed, with the largest increases in Asia, the U.S., and Europe. In contrast, changes are statistically not significant in most countries in Africa and Oceania for all considered warming levels.
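The risk aggregation behind such projections can be sketched with the standard expected-annual-damage (EAD) calculation: trapezoidal integration of damage against annual exceedance probability across return periods. The return periods and damage figures below are invented for illustration and are not taken from the study.

```python
def expected_annual_damage(rp_damages):
    """Trapezoidal integration of the damage-probability curve.
    rp_damages maps a return period in years to the damage of that event."""
    # Convert to (annual exceedance probability, damage), most frequent first.
    pts = sorted(((1.0 / rp, d) for rp, d in rp_damages.items()),
                 reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead

# Illustrative damages in M€ for three return periods.
present = {10: 5.0, 100: 40.0, 500: 90.0}
warmer = {10: 9.0, 100: 80.0, 500: 200.0}  # amplified under warming
print(expected_annual_damage(warmer) / expected_annual_damage(present))
# Annualized risk roughly doubles in this invented example.
```

Comparing EAD between present-day and warmer-climate damage curves is what yields statements like "flood risk increases in excess of 500%" at a given warming level.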
Climate change has already led to a wide range of impacts on our society, the economy and the environment. According to future scenarios, mountain regions are highly vulnerable to climate impacts, including changes in the water cycle (e.g. rainfall extremes, melting of glaciers, river runoff), loss of biodiversity and ecosystem services, damages to the local economy (drinking water supply, hydropower generation, agricultural suitability) and human safety (risks of natural hazards). This is due to their exposure to recent climate warming (e.g. temperature regime changes, thawing of permafrost) and the high degree of specialization of both natural and human systems (e.g. mountain species, valley population density, tourism-based economy). These characteristics call for the application of risk assessment methodologies able to describe the complex interactions among multiple hazards, biophysical and socio-economic systems, towards climate change adaptation. Current approaches used to assess climate change risks often address individual risks separately and do not provide a comprehensive representation of the cumulative effects associated with different hazards (i.e. compound events). Moreover, pioneering multi-layer single-risk assessment (i.e. the overlapping of single-risk assessments addressing different hazards) is still widely used, causing misleading evaluations of multi-risk processes. This raises key questions about the distinctive features of multi-risk assessments and the available tools and methods to address them. Here we present a review of five cutting-edge modelling approaches (Bayesian networks, agent-based models, system dynamic models, event and fault trees, and hybrid models), exploring their potential applications for multi-risk assessment and climate change adaptation in mountain regions.
The comparative analysis sheds light on the advantages and limitations of each approach, providing a roadmap for the methodological and technical implementation of multi-risk assessment according to distinct criteria (e.g. spatial and temporal dynamics, uncertainty management, cross-sectoral assessment, integration of adaptation measures, data requirements and level of complexity). The results show limited application of the selected methodologies to the climate and risk challenges of mountain environments. In particular, system dynamic and hybrid models demonstrate higher potential for further applications representing climate change effects on multi-risk processes, towards an effective implementation of climate adaptation strategies.
Disaster risk has long been conceptualized as a complex and non-linear set of interactions. Instead of evaluating risks as isolated entities, ‘networked’ risk assessment methods are being developed to capture interactions between hazards and vulnerabilities. In this article, we address three challenges to networked risk assessments: the limited attention paid to the role of vulnerability in shaping risk networks, the unclear value of networked assessments compared to linear ones, and the potential conflict in linear and networked assessments at theoretical level. We do so by providing one of the first comparisons between linear and networked assessments in an empirical case, the risks faced by businesses operating in Iran's Razavi Khorasan Province. We find that risk rankings vary depending on whether risks are assessed using linear or networked techniques, and that vulnerabilities feature prominently in networked risk results. We argue that although networked and linear techniques rest on fundamentally different ontological conceptualizations of the world, approaches are complementary and reflect different dimensions of risk, and can be used in conjunction to provide a more comprehensive view of risk.
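The contrast between linear and networked assessments described above can be sketched in a few lines: a linear view ranks risks by their standalone scores, while a networked view lets risk propagate along directed links between them, so a downstream risk inherits part of the score of whatever can trigger it. The propagation rule, damping factor and toy scores below are illustrative assumptions, not the method used in the article.

```python
def linear_ranking(risks):
    """Linear view: rank risks by their standalone score alone."""
    return sorted(risks, key=risks.get, reverse=True)

def networked_ranking(risks, links, rounds=20, damping=0.5):
    """Networked view (toy): each risk's score is its own score plus a
    damped share of the scores of upstream risks that can trigger it."""
    score = dict(risks)
    for _ in range(rounds):
        score = {node: risks[node] + damping * sum(
                     score[src] for src, dst in links if dst == node)
                 for node in risks}
    return sorted(score, key=score.get, reverse=True)

# Illustrative scores and cascade: flood -> power outage -> supply delay.
risks = {"flood": 3.0, "power outage": 1.0, "supply delay": 2.5}
links = [("flood", "power outage"), ("power outage", "supply delay")]
print(linear_ranking(risks))            # 'flood' tops the standalone ranking
print(networked_ranking(risks, links))  # 'supply delay' overtakes it once
                                        # upstream triggers are counted
```

The rankings disagree, which is exactly the empirical finding reported above: the two techniques reflect different dimensions of risk and are best used in conjunction.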