Energies 2015, 8, 9211-9265; doi:10.3390/en8099211 OPEN ACCESS
energies
ISSN 1996-1073
www.mdpi.com/journal/energies
Review
A Critical Review of Robustness in Power Grids Using Complex
Networks Concepts
Lucas Cuadra 1, Sancho Salcedo-Sanz 1, Javier Del Ser 2, Silvia Jiménez-Fernández 1 and
Zong Woo Geem 3,*
1 Department of Signal Processing and Communications, University of Alcalá, Alcalá de Henares,
Madrid 28805, Spain; E-Mails: lucas.cuadra@uah.es (L.C.); sancho.salcedo@uah.es (S.S.-S.);
silvia.jimenez@uah.es (S.J.-F.)
2 OPTIMA Area, TECNALIA, 48160 Derio, Bizkaia, Spain; E-Mail: javier.delser@tecnalia.com
3 Department of Energy IT, Gachon University, Seongnam 461-701, Korea
*Author to whom correspondence should be addressed; E-Mail: geem@gachon.ac.kr;
Tel.: +82-31-750-5586.
Academic Editor: Stefan Gößling-Reisemann
Received: 30 May 2015 / Accepted: 19 August 2015 / Published: 28 August 2015
Abstract: This paper reviews the most relevant works that have investigated robustness
in power grids using Complex Networks (CN) concepts. In this broad field there are two
different approaches. The first one is based solely on topological concepts, and uses metrics
such as mean path length, clustering coefficient, efficiency and betweenness centrality,
among many others. The second, hybrid approach consists of introducing (into the CN
framework) some concepts from Electrical Engineering (EE) in the effort of enhancing the
topological approach, and uses novel, more efficient electrical metrics such as electrical
betweenness, net-ability, and others. There is however a controversy about whether these
approaches are able to provide insights into all aspects of real power grids. The CN
community argues that the topological approach does not aim to focus on the detailed
operation, but to discover the unexpected emergence of collective behavior, while part of
the EE community asserts that this leads to an excessive simplification. Beyond this open
debate, there seems to be no predominant structure (scale-free, small-world) in high-voltage
transmission power grids, the vast majority of power grids studied so far. Most of them have
in common that they are vulnerable to targeted attacks on the most connected nodes and
robust to random failures. In this respect there are only a few works that propose strategies to
improve robustness, such as intentional islanding, restricted link addition, microgrids and
smart grids, for which novel studies suggest that small-world networks seem to be the
best topology.
Keywords: robustness; power grid; complex network
1. Introduction
In the context of power grids, a cascading outage is a sequence of failures and disconnections triggered
by an initial event, which can be caused by natural phenomena (e.g., high wind, flooding or a lightning
strike shorting a line), human actions (attacks) or the emergence of imbalances between load and generation.
An outage that affects a wide area or even the whole power grid is also called a "blackout" [1], and usually
unfolds on a time scale too short to be stopped by human intervention.
In this respect, most major blackouts in power grids have generally been caused by an initial
event (for instance, critical loads) that unchains a series of "cascading failures" [2–7], with very severe
consequences. This is the reason why the study of cascading failures in power grids (in power
transmission grids [2,8], distributed generation [9] and smart grids [10]) is currently a vibrant topic
which is being profusely investigated [2–4,8–15]. Some historic blackouts—such as the recent one that
occurred in India at the end of July 2012 [15], those in the north-east area of the US and Canada (August 14,
2003) [16–18], and the one affecting a large portion of Italy (September 28, 2003) and other countries in the
European Union [19,20]—have been widely studied using both Complex Networks (CN) and Electrical
Engineering (EE) tools [2,7,8,12–14,21–38]. However, there seems to exist no single framework capable
of uncontroversially explaining either their inner nonlinear dynamics or their pervasiveness [7,11,33],
not only due to the complexity of the topic in itself [7,39] but also because of the disconnection between
the CN and EE communities [11], and the scientific controversy about whether pure CN theory is
able to provide insights into real power grids.
Due to the complexity of these situations and their different theoretical approaches, some extremely
important and beneficial properties of power grids, such as "reliability", "resilience" and "robustness",
which are different although related concepts, have been tackled with different approaches [21,40–44].
“Reliability” is a beneficial property for a power grid that refers to its ability to supply electric loads with
a high level of probability, during a given time interval [40]. Further details about its technical definitions
and references therein can be found in Table 1, which, for the sake of clarity, summarizes this and other
concepts that will be used throughout this paper. Likewise, “robustness” or “vulnerability” (its opposite
concept) are often used to measure to what extent a power grid has high reliability or low reliability,
respectively. In this review we follow the approach in [12] by using the definition that considers the
vulnerability of a power grid as the performance drop when a disruptive event emerges. The performance
can be measured by using a number of metrics; if ξ labels the metric under consideration, the power grid
vulnerability to an unexpected event that removes an element j (a line, a generator, or any other
component) can be defined as

  V_ξ(j) ≐ (ξ − ξ_j) / ξ    (1)

where ξ and ξ_j represent the value of the metric before and after the event affecting element j,
respectively. This generic formula will be particularized for different metrics throughout this paper.
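As a brief illustration of Equation (1), the sketch below (plain Python, using a hypothetical five-node toy grid that is not taken from any reviewed work) takes global efficiency as the metric ξ and evaluates the vulnerability to the removal of a single line; any other topological metric could be plugged in the same way.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src; unreachable nodes are simply absent."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    """Global efficiency: mean of 1/d_ij over all ordered pairs (0 if unreachable)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = shortest_paths(adj, s)
        for t in nodes:
            if t != s and t in dist:
                total += 1.0 / dist[t]
    return total / (n * (n - 1))

def vulnerability(adj, edge):
    """V(edge) = (xi - xi_j) / xi per Equation (1), with xi = global efficiency."""
    xi = efficiency(adj)
    u, v = edge
    damaged = {k: set(s) for k, s in adj.items()}
    damaged[u].discard(v)
    damaged[v].discard(u)
    return (xi - efficiency(damaged)) / xi

# Hypothetical toy grid: triangle {0,1,2} connected to a pendant chain 3-4
grid = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(round(vulnerability(grid, (2, 3)), 3))  # → 0.442
```

Removing the line (2, 3) disconnects nodes 3 and 4 from the rest, so the efficiency drops sharply and the vulnerability score is large, as expected for a bridge link.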
The “resilience” of a power grid [4446] is the ability to recover quickly after high-impact,
low-probability disruptive events, and is related to the potential to adapt its structure and operation for
mitigating or even preventing the impact of similar events in the future [41,45,47]. Accordingly, there
is a relationship between robustness (which establishes how much damage occurs as a consequence
of an unexpected perturbation) and resilience (which is related to how quickly the power grid can
recover from such damage). Specifically, a power grid that lacks robustness will often collapse
before recovery, thus having little or even no resilience. As shown in [45], the concept of resilience
is broader than that of robustness, and in fact encompasses not only robustness but also redundancy,
adaptive self-organization, and rapidity. The interested reader is referred to [45] for a deeper introduction
to the resilience framework.
Table 1. Summary of definitions related to robustness in power grids and their references.
Reliability: Probability that an electric power grid can perform a required function under
given conditions for a given time interval (IEC definition) [45]; the probability of its
satisfactory operation over the long run (IEEE definition) [48].
Disturbance: An unexpected event that produces an anomalous system condition [45].
Contingency: The unexpected failure or outage of a network component, such as a generator,
transmission line, or other electrical element [45].
Robustness: Degree to which a network is able to withstand an unexpected event without degradation in
performance. It quantifies how much damage occurs as a consequence of such an unexpected
perturbation [49].
Vulnerability: The lack of robustness. Vulnerability is often used to score low reliability of power grids.
It can be quantitatively defined by Equation (1) [12].
Resilience: The ability of a power system to recover quickly after a disaster or, more generally, the
ability to anticipate extraordinary, high-impact, low-probability events, quickly recover from
these disruptive events, and adapt its operation and structure to prevent or mitigate the
impact of similar events in the future [45].
Resilience vs. robustness: Robustness measures how much damage occurs as a consequence of an
unexpected perturbation, while resilience measures how quickly the network can recover from
such damage [49].
Resilience vs. reliability: Resilience is related to low-probability, high-impact events and is a dynamic
concept; reliability is related to high-probability, low-impact events and is a static concept [41,49].
Stability: The ability to maintain or to recover a state of equilibrium after disturbances or
contingencies [40].
Critical infrastructure: Infrastructure whose unavailability or destruction would have an extensive
impact on the economy, Government services and, in general, on everyday life, with severe
consequences for a nation. Examples of critical infrastructures are power grids, telecommunication
networks, transportation networks, water supply systems, and natural gas and oil pipelines [50–53].
In this context it is insightful to note that in Figure 1a the random failure of the marked link does not
affect the network functionality (since nodes 1 and 2 remain linked to the rest of the network), while the
targeted attack on the marked node in Figure 1b will make the network disintegrate into many unconnected
parts before recovery. Thus, its lack of robustness results in negligible resilience. For more details about
methodologies for resilience analysis in large networked infrastructures (including power grids), we
refer the interested reader to the recent works [41,44,45,47].
Figure 1. (a) Example of a robust network; (b) Example of a scale-free network, vulnerable
to attacks on nodes with many links; (c) Node degree probability density function of a
network similar to that represented in (b).
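The contrast between random failures and targeted attacks illustrated by Figure 1 can be reproduced numerically. The sketch below (plain Python; the ten-node hub-and-spoke graph is a hypothetical stand-in, not any real grid) measures the fraction of nodes remaining in the largest connected component after each kind of node removal.

```python
def giant_component_fraction(adj):
    """Fraction of nodes contained in the largest connected component."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best / len(adj)

def remove_node(adj, node):
    """Return a copy of the graph with one node (and its links) deleted."""
    return {u: {v for v in nbrs if v != node}
            for u, nbrs in adj.items() if u != node}

# Hypothetical hub-and-spoke grid: node 0 is a hub serving nodes 1..9
star = {0: set(range(1, 10))}
for i in range(1, 10):
    star[i] = {0}

hub_attack = giant_component_fraction(remove_node(star, 0))   # targeted attack on the hub
leaf_failure = giant_component_fraction(remove_node(star, 5)) # random failure of a leaf
print(round(hub_attack, 3), leaf_failure)  # → 0.111 1.0
```

Removing the hub shatters the network into isolated nodes, while removing a leaf leaves the remaining grid fully connected, mirroring the robust-yet-fragile behavior discussed above.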
Although robustness and resilience are different albeit related concepts, as will be shown throughout
this survey they are sometimes used interchangeably because, from a practical point of view, robustness
is a necessary but not sufficient condition to make the power grid resilient. In this survey we will focus
on robustness of power grids using concepts extracted from CN Science and approximations from EE,
as will be motivated below.
Modeling the robustness of power grids against cascading failures is the main reason why
many scientists and engineers have decided to apply the CN approach [51,54–62] to power
grids [8,13,14,22–25,27,30,34–36,63–66], these being a representative selection of the latest works in the
field. Although it will be explained in Section 2 to make the manuscript self-contained, we introduce at
this point the simple mathematical concept of a network graph because it will assist us in better motivating
the purpose of this paper and in explaining its structure. A network is simply a set of "entities" called
nodes or vertices (in our case, stations in a power grid, or routers in the Internet, one of the clearest examples
of a complex network) that are connected to each other by means of links or edges (correspondingly, lines
in a power grid). The CN approach has been used to explore the robustness, stability (see definition in
Table 1), and resilience of different networks in highly cited papers such as [19,38,61,67–73] and, more
recently, in [74–88]. The key topic of network robustness (or vulnerability, the opposite property) has
strongly attracted the attention of researchers in very different scientific and technological fields (physics,
mathematics, biology, telecommunications, energy, economy, ...) [51,55,57–62]. In fact, cascading
failures, described before as an apparently inherent potential weakness of electricity grids, are nevertheless
common in complex networks regardless of whether they are integrated circuits, the Internet, or transport
networks, to name a few where the phenomenon is more noticeable [51,57–61,89–92].
Thus robustness in power grids is critical not only to ensure their own functionality against random
failures or intentional attacks ("threats" [40] in the wide sense), but also to ensure the robustness of other
infrastructures that are mutually dependent. This is the concept of "interdependent networks" [93–99],
which relates to the wider ones of "multilayer networks" [100–103] and "networks of networks" [104,105].
The concept of "interdependent networks", as will be shown later on, has also been used recently to
investigate power grids [106–108], since the power grid operates as a network of networks defined by the
country geography [107,108]. In particular, the interdependence among a class of large networks called
"critical infrastructures" (Table 1) [52,109] has currently become a vibrant research topic. The key point
in this respect is that the power grid stands out as one of the most critical ones. The rationale behind this
lies in the observation that most of the other large-scale infrastructures delivering essential services
and goods (namely communications, emergency services, health, transport, water, energy, financial
services, food, and Government services) require electric power for their operation. For instance, as
pointed out in [40], in the 2003 Northeast Blackout [16] the outage collapsed several services and
facilities one after another: all trains arriving at and departing from New York City were shut down; the
water pressure was reduced because water pumps had no electricity; mobile communications went out
of service; and so on. Reference [53] is a good introduction to interdependent networks, while we refer
the reader to [95–99,103,105] for further details outside the scope of this paper. The protection of
critical infrastructures has become a priority for Governments since terrorist groups may potentially
take advantage of vulnerabilities and interdependencies in power grids [50,110–113], threats that make
robustness and resilience even more crucial.
With this complex scenario in mind, the purpose of this paper is to review the works that have
tackled the robustness of power grids by using the CN approach, not only those based solely on
topological CN concepts (“pure topological approach”), but also those that enhance the CN approach
by including concepts from EE (“hybrid approaches”), in which the so-called “extended topological
model” developed by Bompard et al. [8,78] plays a key role. This is a similar approach to the one
adopted by the useful recent reviews [8,12,14].
The differential contributions of this paper are: (1) a summary of the fundamental concepts of
complex networks on which the review is based, in an effort to make the paper self-contained as
far as possible; (2) an analysis of recent papers aimed at suppressing cascading failures in power
grids [106–108] by modeling power grids as networks of networks; (3) an extension of the review to
works that apply CN concepts to smart grids [64,114,115] (which, as will be shown, are much less
numerous than those devoted to high-voltage transmission power grids, but in which, however, CN
theory is very useful to propose new structures [64]); (4) a classification of the reviewed works according
to different useful metrics, in an approach similar to [14], including novel criteria that will be explained
later; (5) a critical analysis of the feasibility of CN theory to provide insights into real power grids,
which is still under debate in the literature [8,11]. Regarding this controversy, Luo and Rosas-Casals
have very recently proposed a study [116] that aims to correlate novel vulnerability metrics (based on
the extended topological approach mentioned before) with real malfunction data for several European
power transmission grids (Germany, Italy, France and Spain), and which opens a research line to find
a more meaningful connection between CN-based metrics and the empirical data of power grids.
Finally, we would like to emphasize that there are a great many works related to the analysis of robustness
in power grids based on both pure topological and hybrid approaches. In fact, there is a huge number
of contributions, not only those directly focused on power grids, but also those emerging from
multidisciplinary works centered on collateral yet related topics, which involve other sciences (graph
theory, chaos, ecology, economics, telecommunication and computer science, and critical infrastructures
science, among many others). Thus the methodological approach we have adopted in our review hinges
on selecting and analyzing the most cited references (those which provide the scientific basis) along
with the most recent ones of the highest quality when explaining the concepts involved.
With these considerations in mind, the structure of the rest of this paper is as follows: Section 2
introduces the basic concepts that help in understanding the works that gravitate around
robustness/vulnerability in power grids from the CN point of view. Grounded on these concepts,
Section 3—the core of this paper—focuses on reviewing the most important works dealing with the
analysis of power grid robustness by resorting to CN theory. As already mentioned, these can be grouped
into two classes: those works that study the power grid based only on topological CN concepts, and those
that additionally include electrical concepts within the CN framework. Section 4 analyzes the reviewed
papers as a function of the vulnerability metric used, critically discusses the ability of CN theory to
provide insights into real power grids, summarizes the topological structures found, and suggests
strategies to mitigate vulnerability. The paper concludes with Section 5, which summarizes the work
and synthesizes its main findings.
2. Complex Networks Fundamentals: An Introduction
The purpose of this section is to make this paper stand by itself by providing an introduction to the
necessary concepts of complex networks science (Subsection 2.1), the vulnerability metrics most
commonly used in the literature (Subsection 2.2), and the cascading failures issue in the more general
context of complex networks (Subsection 2.3).
2.1. Complex Network Concepts
We have previously mentioned that a power grid is nothing more than a network in which nodes (or
“vertices”) are stations (generators, transmission substations, loads), while links (“edges”) correspond
to the transmission lines between the nodes. This representation (sometimes with weighted links) is
adequate for both high-voltage transmission grids (the vast majority of the reviewed contributions focus
on high-voltage transmission power grids) and medium- and low-voltage distribution grids [117], as
well as smart grids [10,64,114,115,118,119] (these two latter classes of grids having been studied to a
lesser extent).
In turn, a network can be represented mathematically by using a "graph" G = (N, L), where N
represents the set of nodes (or vertices) and L denotes the set of links (edges). This is the simplest
graph. However, as will be explained, sometimes it is necessary for the graph to contain information
about the links (for instance, line impedance), this information being represented by weighted links.
The following list summarizes some important concepts and definitions [6,57,59,61] that will help to
better understand the review in Section 3 and the discussion in Section 4. The key concepts are:
- An “undirected” graph is a graph for which the relationships between pairs of nodes are symmetric,
so that each link has no directional character (unlike in a “directed” graph). Unless otherwise
indicated, the term “graph” is assumed to refer to an undirected graph.
- An undirected graph is “connected” if there is a path between any two different nodes of G. A
disconnected graph can be partitioned into at least two subsets of nodes so that there is no link
connecting the two components (“connected subgraphs”) of the graph.
- A “simple graph” is an unweighted, undirected graph containing neither loops nor multiple edges.
- The “order” of a graph G = (N, L) is the number of nodes in the set N, that is, the cardinality of N.
We label the order of a graph as N = |N| ≡ card(N).
- The “size” of a graph G = (N, L) is the number of links in the set L, |L|, and can be defined (≐) as:

  M ≐ Σ_i Σ_j a_ij    (2)

  where a_ij = 1 if node i is linked to node j, and a_ij = 0 otherwise. The elements a_ij are the
  entries of the “adjacency matrix”.
- The “degree” of a node i is the number of links connecting i to any other node. The degree of
node i, denoted as k_i, is simply:

  k_i ≐ Σ_{j=1}^{N} a_ij    (3)

  The node degree is characterized by a probability density function P(k) indicating the probability
  that a randomly selected node has k links.
- A “geodesic path” is the shortest path through the network from one node to another; in other
words, a geodesic path is the path with the minimal number of links between two nodes. Note
that there may be, and often is, more than one geodesic path between two nodes [61].
- The “distance” between two nodes i and j, d_ij, is the length of the shortest path (geodesic path)
between them, that is, the minimum number of links traversed when going from one node to the other [120].
- The “average path length” of a network is the mean value of the distances between all pairs of
nodes in the network [57]:

  ℓ ≐ (1/(N(N−1))) Σ_{i≠j} d_ij    (4)

  where d_ij is the distance between node i and node j.
- The “diameter” of a network is the length (in number of links) of the longest geodesic path between
any two vertices [61].
- The “clustering coefficient” is a local property capturing the density of triangles in a network, that
is, the extent to which two nodes that are connected to a third node are also directly connected to
each other. A node i in a network has k_i links that connect it to k_i other nodes. The clustering
coefficient of node i is defined as the ratio between the number M_i of links that exist between
these k_i vertices and the maximum possible number of such links: C_i ≐ 2M_i/(k_i(k_i − 1)).
The clustering coefficient of the whole network is [57]:

  C ≐ (1/N) Σ_i C_i    (5)

  Put simply, for a given node we compute the fraction of pairs of neighboring nodes that are
  connected to each other, and average this quantity over all the nodes in the network.
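The metrics defined above can be computed directly from an adjacency structure. The following sketch (plain Python; the four-node toy graph is chosen purely for illustration) implements node degree, the average path length of Equation (4) via breadth-first search, and the clustering coefficient of Equation (5), with C_i set to 0 by convention for nodes of degree below 2.

```python
from collections import deque
from itertools import combinations

def degrees(adj):
    """Node degree k_i = number of neighbors, per Equation (3)."""
    return {u: len(nbrs) for u, nbrs in adj.items()}

def average_path_length(adj):
    """Equation (4): mean shortest-path distance over all ordered node pairs."""
    n, total = len(adj), 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:                      # BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def clustering(adj):
    """Equation (5): network average of C_i = 2*M_i / (k_i*(k_i - 1))."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)            # convention for degree-0 and degree-1 nodes
            continue
        m = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cs.append(2.0 * m / (k * (k - 1)))
    return sum(cs) / len(cs)

# Toy graph: a triangle (0,1,2) with a pendant node 3 attached to node 2
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(degrees(g))                     # → {0: 2, 1: 2, 2: 3, 3: 1}
print(round(average_path_length(g), 4), round(clustering(g), 4))
```

On this toy graph ℓ = 4/3 (the triangle pairs are at distance 1, the pendant node at distance 1 or 2) and C = 7/12, since the two pure triangle nodes have C_i = 1, node 2 has C_i = 1/3, and the pendant node contributes 0.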
Most recent studies reveal that several complex networks—such as some power grids or the
Internet—have a heterogeneous topology [57,69] like the one represented in Figure 1b. This leads to
a probability density function P(k) like the one represented in Figure 1c. Note that, since most nodes
have only a few connections and only a few nodes (often referred to as “hubs”) possess a high number
of links, the network is said to have no characteristic “scale” [121]. This is why such networks are called
“scale-free” networks. Many of these networks with heterogeneous node degree distributions follow a
power-law distribution P(k) ∝ k^(−γ) for large k. In particular:
- For illustrative purposes, Figure 1c shows the probability density function P(k) of a scale-free
network we have generated. Note that there are many nodes with few links; for instance, about
66% of the nodes have only 1 link. However, there is an extremely low number of nodes with many
links (“hubs”). It is more likely that a random failure affects a node with very few links (such as
node “2” in Figure 1b), which has a minimal impact on the operation of the network as a whole.
However, a targeted attack on a hub (node “1” in Figure 1b) may disconnect the network into many
parts, severely affecting its operation. This exemplifies the fact that scale-free networks are robust to
random failures at most of their constituent nodes, but fragile when undergoing targeted attacks on
a single hub or a few hubs [121]. In contrast, the network in Figure 1a is intuitively more robust, as
mentioned before. This is the “random” or “Erdős-Rényi” (ER) network.
- A scale-free network can be generated by progressively adding nodes to an existing network,
introducing links to nodes with “preferential attachment” [69,122] so that the probability of linking
to a given node i is proportional to the number of existing links k_i of that node. This is the so-called
Barabási-Albert (BA) model. In contrast, in ER networks the connection of the nodes is
completely random, with a given connection probability p.
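A minimal sketch of both generation models follows (plain Python; the parameter values are illustrative, and this compact preferential-attachment routine is only one of several equivalent formulations of the BA model). It shows that hubs, i.e., nodes of unusually high degree, emerge under preferential attachment but not in an ER graph of comparable mean degree.

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a graph by preferential attachment: each new node attaches up to m
    links to existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))     # start by wiring node m to the m seed nodes
    repeated = []                # node list where each node i appears k_i times
    for new in range(m, n):
        for t in set(targets):   # set(): duplicate targets collapse to one link
            adj[new].add(t)
            adj[t].add(new)
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def erdos_renyi(n, p, seed=42):
    """ER model: each possible link exists independently with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

ba = barabasi_albert(500, 2)
er = erdos_renyi(500, 4 / 499)   # same expected mean degree, about 4
max_ba = max(len(v) for v in ba.values())
max_er = max(len(v) for v in er.values())
print(max_ba, max_er)            # the BA maximum degree is far larger: hubs
```

The design choice that makes this work is the `repeated` list: a node of degree k appears k times in it, so drawing targets uniformly from the list is exactly attachment with probability proportional to degree.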
In addition, there are some complex networks that exhibit the “small-world” property. Figure 2 will
help introduce this concept.
- A small-world network is a complex network in which the mean distance or average path length ℓ
is small when compared to the total number of nodes N in the network: ℓ = O(log N) as N → ∞.
That is, there is a relatively short path between any pair of nodes [51,70]. The term “small-world
networks” is often used to refer to Watts-Strogatz (WS) networks, first studied in [70]. Figure 2a
shows the aspect and P(k) of a WS network we have generated with N = 1000 nodes and M = 2000
links. It has a short mean distance, ℓ ≈ 6.44, and high clustering, C ≈ 0.22. Most small-world
networks have exponential degree distributions [123]. As will be shown, there are some
power grids that exhibit the small-world property [111], and this has very recently been found to be
a beneficial property for smart grids [64].
- A key feature of a small-world network is that it can be generated by taking a small fraction of
the links in a regular (ordered) network and “rewiring” them. The rewiring algorithm involves
going through each link and, with “rewiring probability” p, disconnecting one end of that link and
connecting it to a new node chosen at random, with the only restriction that no double edges or
self-edges are ever generated [61]. Figure 2b illustrates this procedure: link l_13, which
was connecting node 1 to node 3, is disconnected (from node 3) and rewired to connect node
1 to node 9. This means that, in the new network, going from node 1 to node 9 only requires one
jump via the rewired link (and thus d^new_{1,9} = 1). However, in the original regular network, going
from node 1 to node 9 through the geodesic or shortest path (1 → 3 → 5 → 7 → 9) involves 4
links (d_{1,9} = 4). That is, the rewired link can be viewed as a “shortcut” between nodes 1 and 9,
which avoids having to go through intermediate nodes. In general, creating a few shortcuts can
greatly reduce the average path length [124].
This method, applied to networks with a large number of nodes, leads to topologies like the one
represented in Figure 2a (N = 1000 and p = 0.25). This also illustrates that the architecture of
real small-world networks is extremely heterogeneous: the vast majority of the elements are poorly
connected, but simultaneously a few have a large number of connections [124]. The robustness of
small-world networks has been explored in [125,126], leading to the conclusion that, in non-sparse
WS networks (M ≥ 2N), simultaneously increasing both the rewiring probability and the average
degree ⟨k⟩ = (1/N) Σ_{i=1}^{N} k_i significantly improves the robustness of the small-world network.
- An important variant of the WS model is the one proposed by Newman and Watts [127] (the NW
small-world model), in which one does not break any connection between any two nearest
neighbors but, instead, adds a connection between a pair of nodes with probability p. It has been
found that for sufficiently small p and sufficiently large N, the NW model is basically equivalent
to the WS model [128]. Currently, these two models are commonly termed small-world
models. As will be shown in Section 4, a feasible strategy to improve the robustness of power
grids is to add a controlled number of links between distant nodes (shortcuts, understood as links
that allow going from one node to another without having to pass through others), similar to the
NW small-world model.
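The shortcut effect can be sketched along the lines of the NW model: start from a regular ring lattice, add a few random links without removing any, and compare average path lengths. The code below is a simplified illustration in plain Python (the network size, neighborhood range and probability are arbitrary choices, not values taken from the reviewed works).

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring of n nodes, each linked to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def add_shortcuts(adj, p, seed=1):
    """NW-style step: for each node, with probability p add one random shortcut.
    No link is ever removed, unlike in the WS rewiring procedure."""
    rng = random.Random(seed)
    nodes = list(adj)
    out = {u: set(v) for u, v in adj.items()}
    for u in nodes:
        if rng.random() < p:
            v = rng.choice(nodes)
            if v != u:               # forbid self-edges; sets forbid double edges
                out[u].add(v)
                out[v].add(u)
    return out

def avg_path_length(adj):
    """Equation (4) via BFS from every node (graph assumed connected)."""
    n, total = len(adj), 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

ring = ring_lattice(200, 2)
sw = add_shortcuts(ring, 0.2)
print(round(avg_path_length(ring), 2), round(avg_path_length(sw), 2))
```

For the pure ring the average path length is exactly 5050/199 ≈ 25.38, and the roughly forty shortcuts added at p = 0.2 cut it down sharply while leaving the locally clustered ring structure intact, which is precisely the small-world effect.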
Finally, it is worth mentioning that complex networks emerge not only in power grids and other
human-made systems—including the Internet [129,130], the topology of web pages (where the nodes
are individual web pages and the edges are hyperlinks) [57,62], airline routes [131], electronic circuits [132]
and socioeconomic systems [133]—but also in systems stemming from Nature, e.g., evolution [134],
metabolic networks [135], protein interactions [136] and food webs [137]. For more details regarding
the description and bibliographic references of these complex networks, which are outside the scope of
this paper, we refer the interested reader to the recent books [62,90].
A way to quantify the extent to which a complex power grid is robust is to use vulnerability metrics.
The following subsection summarizes the key metrics that have appeared in our review, which will help
us understand it better.
Figure 2. (a) Example of a Watts-Strogatz small-world network with N = 1000 nodes and M = 2000 links, and its corresponding node degree probability density function, P(k). The small-world network has a short mean distance and high clustering: ℓ ≈ 6.44, C ≈ 0.22. (b) First step in the creation of a small-world network: the link that connected node 1 to node 3 is disconnected from node 3 and rewired to connect node 1 to node 9.
2.2. CN-Based Vulnerability Metrics
As pointed out in [12], the concept "vulnerability" has many meanings in the literature [138,139]. We have mentioned in Section 1 that in this review we follow [12] in defining vulnerability as the drop in performance of a power grid when a disruptive event emerges. The key point is that such performance can be measured by using a variety of metrics. As will be shown throughout this paper, the particular metrics applied in the different works will be used to categorize all the reviewed works in Table 3. The CN topology-based metrics that appear most frequently in the reviewed papers are summarized below, both to structure our review and to ease the understanding of the novel metrics introduced later when each paper is discussed. These topology-based metrics are:
Energies 2015, 8, 9221
1. The average path length $\ell$ and the clustering coefficient $C$, stated by Equations (4) and (5), respectively.
2. The "relative size of the largest connected component", which is defined as
$$ G \doteq \frac{N'}{N}, \qquad (6) $$
where $N$ and $N'$ are the numbers of nodes in the largest connected component before and after the event.
3. The "efficiency" $E$ of a network is the communication effectiveness of a networked system [140],
$$ E \doteq \frac{1}{N(N-1)} \sum_{i \neq j} \frac{1}{d_{ij}}, \qquad (7) $$
which is a measure of the network performance under the assumption that the efficiency for sending load (electricity, information, packets) between two nodes $i$ and $j$ is proportional to the reciprocal of their distance. Based on this definition, and following [12], the vulnerability of a network can be defined as the drop in efficiency when link $j$ is removed from the network, that is,
$$ V_E(j) \doteq \frac{E - E_j}{E}. \qquad (8) $$
4. The "betweenness centrality" quantifies how much a node $v$ is found between the paths linking other pairs of nodes, that is,
$$ C_B(v) \equiv B_v \doteq \sum_{s \neq v \neq t \in \mathcal{V}} \frac{\sigma_{st}(v)}{\sigma_{st}}, \qquad (9) $$
where $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(v)$ is the number of those paths that pass through $v$. A high $C_B$ value for node $v$ means that this node is critical to support node connections for certain paths. The attack or failure of $v$ would leave a number of node pairs either disconnected or connected via longer paths.
5. The "degree centrality" of a node $i$ is defined as [30]
$$ C_D(i) \doteq \frac{k_i}{N-1}, \qquad (10) $$
and can be interpreted in terms of the number of vertices and edges that are directly influenced by the status of node $i$.
6. The "eccentricity" (eccentricity centrality) of a node $i$ is
$$ C_E(i) \doteq \max_{j \in \mathcal{N}} d_{ij}. \qquad (11) $$
Note that a low eccentricity of node $i$ suggests that all other nodes are close to it [30].
7. The "centroid centrality" of node $i$ is
$$ C_C(i) \doteq d(i) - \min_{j \neq i} d(j), \qquad (12) $$
with $d(j) = \sum_{i \in \mathcal{N}} d_{ij}$ [30]. It indicates that a node has a central position within a region characterized by a high density of interacting nodes.
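To make these definitions concrete, the two metrics most used in the reviewed works, Equations (6) and (7), can be computed directly from an adjacency list with breadth-first search. The following pure-Python sketch (the five-node toy graph is ours, chosen only for illustration, not taken from any reviewed grid) shows both:

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s to every reachable node (BFS)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    """Equation (7): E = (1/(N(N-1))) * sum over i != j of 1/d_ij."""
    n = len(adj)
    total = 0.0
    for i in adj:
        d = bfs_dist(adj, i)
        # unreachable pairs simply contribute 0 to the sum
        total += sum(1.0 / h for j, h in d.items() if j != i)
    return total / (n * (n - 1))

def largest_component_fraction(adj, removed):
    """Equation (6): G = N'/N after deleting the nodes in `removed`."""
    kept = {u: [v for v in nbrs if v not in removed]
            for u, nbrs in adj.items() if u not in removed}
    seen, best = set(), 0
    for s in kept:
        if s not in seen:
            comp = set(bfs_dist(kept, s))
            seen |= comp
            best = max(best, len(comp))
    return best / len(adj)

# Toy five-node grid: a hub (node 0) plus a short tail.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
print(round(efficiency(adj), 3))                       # 0.667
print(largest_component_fraction(adj, removed={0}))    # 0.4: hub removal fragments the graph
```

Removing the hub leaves only the pair {3, 4} connected, so G drops from 1 to 0.4, whereas removing a leaf such as node 4 leaves G = 0.8.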
Metrics $C_B$, $C_D$, $C_E$, and $C_C$ are "centrality measures" and quantify to what extent a node is "central" in a network. We will show that "electrical centrality metrics" have been proposed based on $C_B$, $C_D$, $C_E$, and $C_C$.
2.3. Cascading Failures in Complex Networks
Cascading failures in large complex networks have been explored in a broad general context [17,141–144] (with application to power grids, the Internet and transportation networks, among others). Although these networks look very different from each other at first glance, they all have in common the fact that the flow of a physical quantity through the network (electric power in power grids, packets on the Internet) plays a key role. In these works, the load at a node is simply the betweenness centrality, Equation (9), while its capacity is the maximum load that the node can handle. Indeed, it is pointed out that for those complex networks in which loads can be redistributed among nodes, intentional attacks can trigger a cascade of failures which, in turn, may yield the collapse of large network areas, or even the whole network. In particular, [141] emphasizes that this effect is of great importance in networks with high heterogeneity (Figure 1). The study of cascades in these networks, regardless of the physical quantity flowing through them (electric power in a power grid, vehicles in a transportation network, packets in a communication network), evinces that while the scale-free property makes many man-made and natural networks robust against random node failure, the existence of hub nodes may make the network vulnerable to a cascade of overload failures, which may end up splitting the network
into isolated fragments. In a similar line of reasoning, the "fiber-bundle" model for scale-free networks with power-law degree distribution has been proposed to model cascading failures [144]. In a fiber-bundle model, a set of $N \gg 1$ fibers (elements) is placed on the sites of a network, and a random strength threshold (drawn from a given probability distribution, frequently the Weibull distribution) is assigned to each. When the load increases, those elements with smaller thresholds fail. The consequence is that the individual load of each of the malfunctioning (or even broken) nodes is then redistributed among their non-damaged nearest neighbors. Thus the breakdown of a node may lead to other failures which, in turn, may trigger and catastrophically propagate further faults. The analogy with a complex network is as follows [144]: any fiber may be viewed as a node, the directions of the load transfers are equivalent to the links connecting the nodes, and the load represents the intensity of the physical quantity flowing into the nodes. Failures are quantified by the relative size of the largest connected component formulated in Equation (6). The model in [144] also predicts that a scale-free network has an abrupt transition in its connectivity: as the load is increased, the cascading failure reaches more and more nodes up to a "critical point", beyond which the network collapses into many small parts.
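The local load-sharing rule of the fiber-bundle picture can be sketched in a few lines. The sketch below is deliberately deterministic (fixed thresholds on a small chain, rather than random Weibull draws) so that the avalanche it produces is reproducible; the chain and the threshold values are ours, chosen only for illustration:

```python
def fbm_cascade(adj, thresholds, sigma):
    """Local-load-sharing fiber-bundle sketch: every fiber (node) starts with
    load sigma; a fiber whose load exceeds its threshold breaks and sheds its
    load equally onto its intact nearest neighbors (the load is lost if none
    survive). Iterates until no further fiber fails; returns the broken set."""
    load = {u: sigma for u in adj}
    broken = set()
    while True:
        failing = [u for u in adj if u not in broken and load[u] > thresholds[u]]
        if not failing:
            return broken
        for u in failing:
            broken.add(u)
            alive = [v for v in adj[u] if v not in broken]
            for v in alive:
                load[v] += load[u] / len(alive)
            load[u] = 0.0

# Five fibers on a chain; fiber 1 is the weakest element.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
thresholds = {0: 2.0, 1: 1.4, 2: 2.0, 3: 10.0, 4: 10.0}
print(fbm_cascade(chain, thresholds, 1.0))   # set(): every fiber holds
print(fbm_cascade(chain, thresholds, 1.5))   # {0, 1, 2}: a small avalanche
```

At load 1.5 the weakest fiber breaks first, its shed load overloads both neighbors, and the avalanche stops only when it reaches the two strong fibers, illustrating how a single local failure propagates through load redistribution.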
In [145], the capacity of a node $i$, $C_i$, is assumed to be proportional (via a tolerance parameter) to its initial load, $L_i(0)$. The efficiency has been selected as the appropriate CN metric to quantify the grid performance. The proposed model is based on the dynamical redistribution of the flow triggered by the sudden initial overload or failure of a node. If the affected node is among those with the highest load, the model predicts that its failure suffices to degrade the efficiency of the power grid up to its complete collapse. This is particularly important in real-world networks such as electrical power grids, and in networks with a highly heterogeneous node degree distribution, such as Barabási-Albert (BA) scale-free networks [69,121]. The results suggest that (1) the failure of a small number of selected nodes (those with many connections, or hubs) suffices to collapse the entire network; and (2) failures in most of the nodes (which have a small number of connections) do not produce any major fault at the global level of the network.
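The load-capacity cascade mechanism discussed above can be sketched as follows: take the betweenness of Equation (9) as each node's load (as in [141–143]), fix every capacity at $(1+\alpha)$ times the initial load, remove one node, and iteratively fail every node whose recomputed load exceeds its capacity. The toy two-community graph below is ours, chosen only for illustration:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for shortest-path betweenness, Equation (9)
    (each unordered pair is counted in both directions)."""
    bc = {u: 0.0 for u in adj}
    for s in adj:
        stack, pred = [], {u: [] for u in adj}
        sigma = {u: 0 for u in adj}
        sigma[s] = 1
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            stack.append(u)
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:   # v lies one step beyond u
                    sigma[v] += sigma[u]
                    pred[v].append(u)
        delta = {u: 0.0 for u in adj}
        while stack:                          # back-propagate dependencies
            w = stack.pop()
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def cascade(adj, attacked, alpha):
    """Fail `attacked`, then iteratively fail every node whose load
    (betweenness) exceeds its capacity (1 + alpha) * initial load."""
    cap = {u: (1 + alpha) * load for u, load in betweenness(adj).items()}
    dead = {attacked}
    while True:
        live = {u: [v for v in nbrs if v not in dead]
                for u, nbrs in adj.items() if u not in dead}
        load = betweenness(live)
        over = {u for u in live if load[u] > cap[u] + 1e-9}
        if not over:
            return dead
        dead |= over

# Two triangles bridged by the parallel relay nodes 3 and 4.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 5], 4: [2, 5],
     5: [3, 4, 6, 7], 6: [5, 7], 7: [5, 6]}
print(cascade(g, attacked=3, alpha=0.2))  # {3, 4}: losing one relay overloads the other
print(cascade(g, attacked=3, alpha=2.0))  # {3}: a generous margin stops the cascade
```

With a tight tolerance the surviving relay inherits all cross-community traffic, exceeds its capacity and fails, splitting the grid in two; a larger tolerance margin absorbs the redistributed load, exactly the trade-off the load-capacity models formalize.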
3. Review of Robustness in Power Grids as Complex Networks
As mentioned before, and following the categorization carried out in recent works [12,14], we have structured our review into two groups of approaches. The first one corresponds to those works that only consider the structural vulnerability of the power grid, and will be reviewed first in Subsection 3.1. The second group of works, as will be shown in Subsection 3.2, postulates that these approaches can be enhanced by including models and metrics from electrical engineering. Despite this enhancement, there is a controversy [11] between the complex networks and electrical engineering research communities, centered on whether CN approaches are able to capture and fully explain all robustness issues occurring in power grids. The recent work by Rosas-Casals [11] illustrates very clearly the relationship between electricity networks and complex networks. This will be discussed in our critical analysis in Section 4.
3.1. Topological Approaches
The works within this subsection are said to belong to the pure topological approach because they focus on structural vulnerabilities based only on the mathematical graph of the power network: a set of nodes or vertices connected by a set of links or edges. These works resort to CN metrics (Subsection 2.2) such as efficiency, degree and betweenness, and hence they do not consider any electrical concept whatsoever. As mentioned in Section 1, many of these works have been motivated by the emergence of cascading failures in power transmission grids. In this regard, [17] proposes a very simple model to study the behavior of avalanches, in which each node is characterized by a "load" value and can operate up to a maximum value of such load. The model, which does not consider electrical properties, assumes that the load is distributed so that neighboring nodes with larger degree can operate with larger loads. Despite its apparent simplicity, the model leads to results in line with the analysis of the disturbances in the US power grid [146], one of the most studied systems in the related literature.
In this respect, within the body of work studying US power grids, the blackout of August 2003 is one of the major events that has been studied by conceiving the power grid as a complex network [147]. This topological study, centered exclusively on the grid structure (N = 14,099 substations and M = 19,657 transmission lines), is based on evaluating the grid's ability to transfer electric power between generators and consumers when certain nodes are removed. The load of a node is related to the number of links it has, that is, to its node degree. The concept of "connectivity loss" is used to quantify the average decrease in the number of generators connected to a distribution substation. The investigation concludes that the power grid is robust against most perturbations (random failures) that impact the more abundant nodes (those with a small number of links), while disturbances (for instance, targeted attacks) affecting key substations ("hubs" with many connections) may critically impact the network operation, and even collapse it. The work concludes that this vulnerability is inherent to the topological structure of the power grid. Specifically, the results indicate that the topological structure is extremely vulnerable to the removal of the nodes with the highest load (hubs): if only 4% of the nodes with the highest load are removed simultaneously, the performance of the grid drops by 60%.
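One plausible reading of the "connectivity loss" metric, the average fraction of generators that each distribution substation can no longer reach after a disturbance, can be sketched as follows (the five-node toy grid is ours, not the grid of [147]):

```python
from collections import deque

def reachable(adj, s):
    """Set of nodes reachable from s (BFS)."""
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def connectivity_loss(adj, generators, dist_subs, removed):
    """Average fraction of generators that each distribution substation
    can no longer reach once the nodes in `removed` are deleted."""
    kept = {u: [v for v in nbrs if v not in removed]
            for u, nbrs in adj.items() if u not in removed}
    served = 0.0
    for d in dist_subs:
        if d in kept:
            reach = reachable(kept, d)
            served += sum(g in reach for g in generators) / len(generators)
    return 1.0 - served / len(dist_subs)

# Toy grid: generators 0 and 1 feed substations 3 and 4 through hub 2;
# substation 3 also has a direct line to generator 0.
grid = {0: [2, 3], 1: [2], 2: [0, 1, 3, 4], 3: [2, 0], 4: [2]}
print(connectivity_loss(grid, [0, 1], [3, 4], removed=set()))  # 0.0
print(connectivity_loss(grid, [0, 1], [3, 4], removed={2}))    # 0.75
```

Removing the hub leaves substation 3 served by only one of its two generators and substation 4 by none, so three quarters of the generator-substation service is lost, which is the kind of sharp degradation [147] reports for hub removals.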
The analysis of topological aspects of the Italian power grid using the complex network approach, neglecting electricity transmission details, has been tackled in [148]. The authors demonstrate that, regardless of power-flow concepts, the grid structure by itself contains enough information on its vulnerability to cascading failures. In this work, the power grid is represented as a graph with N = 341 stations (generators and distribution substations) as nodes, and M = 517 transmission lines as links. A particular aspect of this contribution is the use of weighted links: every link between nodes $i$ and $j$ is additionally modeled with a real number $e_{ij} \in [0,1]$ which quantifies how efficient the transmission between nodes $i$ and $j$ is. As such, $e_{ij} = 1$ represents that the link between node $i$ and node $j$ is working faultlessly, while $e_{ij} = 0$ means that problems along the line disable the transmission of power from node $i$ to node $j$. The results evince that the analyzed grid exhibits a high heterogeneity in the node load distribution (see Figure 1c): while most of the nodes receive small loads, a small number of nodes (hubs) must convey extremely high loads. It is precisely the failure of one of these hubs that triggers large-scale blackouts. This is a common finding with the aforementioned study [147], which focuses on the US grid.
In a similar approach, [149] has focused on studying the cascading failure problem in power grids (and in artificially created BA scale-free networks) by including, in the CN framework, a model in which the capacity of a link is a function of its load. The motivation of this model is that, in a power grid with a highly heterogeneous load distribution, the elements with the strongest loads should be better protected by assigning them larger capacities. This approach differs from others in which the capacity of the $i$-th link ($C_i$) is assumed to be proportional to its load ($L_i$) by means of a constant value $\lambda$: $C_i = \lambda \cdot L_i$. In [149], the novelty is that $\lambda$ is not a constant but an increasing function of the link load, $\lambda(L_i) \equiv \lambda_{\alpha,\beta}(L_i)$, depending on two parameters $\alpha$ and $\beta$: $\alpha$ is the step height of the Heaviside step function, which has been used for simplicity, and $\beta$ is the step position. When tested on real power grids and artificial BA networks, the results reveal that it is possible to make the network more robust, while reducing cost, by assigning large capacities to the most heavily loaded links.
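The cost argument behind this step-shaped tolerance can be illustrated in a few lines; the baseline tolerance, step height, step position and the toy load values below are ours, chosen only for illustration, not the values used in [149]:

```python
def capacity(load, alpha, beta, lambda0=1.1):
    """Load-dependent capacity in the spirit of a Heaviside-step tolerance:
    the tolerance is lambda0 plus a step of height alpha placed at load beta,
    so only heavily loaded links receive the extra protection margin."""
    lam = lambda0 + (alpha if load > beta else 0.0)
    return lam * load

# Heterogeneous link loads: many light links, one heavy hub link.
loads = [1.0, 1.0, 1.0, 1.0, 20.0]
step_cost = sum(capacity(L, alpha=0.5, beta=10.0) for L in loads)
uniform_cost = sum(1.6 * L for L in loads)  # protecting every link equally
print(round(step_cost, 2), round(uniform_cost, 2))
```

The heavy link gets the full 1.6 tolerance in both schemes, but the step rule spends only the baseline margin on the four light links, so the total installed capacity (a proxy for cost) is lower while the critical element remains equally protected.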
The tolerance analysis of scale-free networks against targeted attacks that trigger cascades of failures [150] has been motivated by the question of how to design scale-free networks of finite capacity so that they are resistant to cascading failures. To this end, the load (or betweenness) of a node is considered as the total number of shortest paths through that node. The capacity of a node is assumed to be the maximum load the node can carry, and proportional to its initial load, as in [141–143]. A failure of a node (that is, the removal of this node from the graph) may affect the loads on the other neighboring nodes. If the load arriving at a neighbor node increases beyond its limiting capacity, the node will collapse. Therefore any failure leads to a redistribution of loads over the network, and consequently succeeding failures can emerge. Failures may stop without affecting the network connectivity to a great extent, or may propagate widely and collapse a considerable fraction of, or even the whole, network. In this work, cascading failures are quantified by the relative size of the largest connected component G defined by Equation (6). The integrity of the network is maintained if $G \approx 1$, while global collapse emerges if $G \approx 0$ [150]. By analyzing the dynamics of load redistribution obtained by selectively removing a small subset of low-degree nodes, the authors have found the minimum value of the capacity parameter that prevents a scale-free network from suffering cascading failures.
The dependability of the North American eastern and western power transmission grids has been investigated using a scale-free Barabási-Albert model of the network topology [151]. Prior to the analysis, the authors confirm experimentally that the topologies of the Eastern Interconnect and Western System power transmission grids are scale-free in nature. Based on this fact, and using only the most general topological data about the transmission grids, the authors successfully prove the accuracy of the proposed Barabási-Albert network model. Additionally, the loss-of-load probability reliability index has been applied to the Barabási-Albert network model using a simple failure propagation model. The results are similar to those computed using standard power engineering methods, and confirm the validity of the scale-free network model.
The topological vulnerability of three European power grids (the Spanish 400 kV, French 400 kV, and Italian 380 kV grids) has been analyzed in [152] by evaluating the impact on vulnerability when nodes and/or edges are removed. An interesting point of this work is that it proposes a method that intelligently adds edges so as to reduce vulnerability. This study also differs from others adopting the same approach in that the particularly elongated geography of Italy makes its power grid very different from those of Spain and France. Specifically, it is shown to be so vulnerable that the joint removal of only three links suffices to dramatically fragment the grid and to cause a drop in efficiency of about 30%. The counterpart, however, is that it is also the power grid whose robustness can be increased the most by the simple addition of a single edge [152].
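The idea of intelligently adding a single edge can be sketched as a brute-force search: test every absent link and keep the one that raises the efficiency of Equation (7) the most. The six-node ring below is our own toy topology, not any of the three grids studied in [152]:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Hop distances from s (BFS)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    """Equation (7): average of 1/d_ij over all ordered pairs."""
    n = len(adj)
    tot = 0.0
    for i in adj:
        d = bfs_dist(adj, i)
        tot += sum(1.0 / h for j, h in d.items() if j != i)
    return tot / (n * (n - 1))

def best_link_to_add(adj):
    """Greedy single-link addition: try every absent link and keep the
    one whose addition raises the global efficiency the most."""
    best_gain, best_edge = 0.0, None
    for u, v in combinations(sorted(adj), 2):
        if v in adj[u]:
            continue
        trial = {w: list(nbrs) for w, nbrs in adj.items()}
        trial[u].append(v)
        trial[v].append(u)
        gain = efficiency(trial) - efficiency(adj)
        if gain > best_gain:
            best_gain, best_edge = gain, (u, v)
    return best_edge, best_gain

# A 6-node ring, the simplest "stretched" topology.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
edge, gain = best_link_to_add(ring)
print(edge, round(gain, 4))
```

Interestingly, on this toy ring the greedy search prefers a two-hop chord over the antipodal one: the harmonic averaging in Equation (7) rewards many moderately shortened paths more than a few strongly shortened ones, which hints at why the best reinforcement link found in [152] need not be the geographically obvious one.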
The North American power grid is studied again in [153] by using its real topology and feasible assumptions about the load and overload of transmission substations. The substations can be classified into three different groups: the set of generation substations $G_G$, whose $N_G = 1633$ elements produce electric power to distribute; the set of transmission substations $G_T$, whose $N_T = 10,287$ elements transfer power along high-voltage lines; and the set of distribution substations $G_D$, whose $N_D = 2179$ elements distribute power to small, local grids. The efficiency from Equation (7) is the metric used as a measure of performance, being defined in this particular case as
$$ E \doteq \frac{1}{N_G N_D} \sum_{i \in G_G} \sum_{j \in G_D} \epsilon_{ij}, \qquad (13) $$
where $\epsilon_{ij}$ is the efficiency of the most efficient path between the generator $i$ and the distribution substation $j$, calculated as the harmonic composition of the efficiencies of the component edges. The damage $D$ that a failure causes is defined in [153] as the normalized efficiency loss,
$$ D = \frac{E(G_0) - E(G_f)}{E(G_0)}, \qquad (14) $$
where $E(G_0)$ is the efficiency of the network before the emergence of any breakdown and $E(G_f)$ is the final efficiency reached by the network after the end of the transient caused by the failure, that is, when the grid efficiency reaches a new stable state. The results point out that the loss of a single substation can lead to a 25% efficiency reduction because it triggers an overload cascade. While the loss of a single node can yield significant damage, the subsequent removals have only incremental effects.
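Equations (13) and (14) can be sketched for the simplest case of unit-efficiency lines, where the most efficient generator-substation path has efficiency $1/d_{ij}$; the five-node toy grid below is ours, chosen only for illustration:

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s (BFS)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def gen_dist_efficiency(adj, gens, dists):
    """Equation (13) with unit-efficiency lines: eps_ij = 1/d_ij, with the
    denominator kept at the original N_G * N_D so lost service counts as 0."""
    tot = 0.0
    for g in gens:
        if g in adj:
            d = bfs_dist(adj, g)
            tot += sum(1.0 / d[j] for j in dists if j in d and j != g)
    return tot / (len(gens) * len(dists))

def damage(adj, gens, dists, failed):
    """Equation (14): normalized loss of generator-to-substation efficiency."""
    e0 = gen_dist_efficiency(adj, gens, dists)
    kept = {u: [v for v in nbrs if v not in failed]
            for u, nbrs in adj.items() if u not in failed}
    return (e0 - gen_dist_efficiency(kept, gens, dists)) / e0

# Generators 0, 1; transmission hub 2; distribution substations 3, 4.
grid = {0: [2], 1: [2, 4], 2: [0, 1, 3, 4], 3: [2], 4: [2, 1]}
print(round(damage(grid, gens=[0, 1], dists=[3, 4], failed={2}), 2))  # 0.6
```

Losing the single transmission hub wipes out 60% of the generator-to-substation efficiency on this toy grid, the same qualitative outcome as the 25% single-substation drop reported for the real North American grid.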
The topological properties of two very different power grids have also been studied in the light of CN theory [111]. The first power grid investigated in [111] is the Nordic power grid, which includes the national transmission grids of Sweden, Finland, Norway and the main part of Denmark (Sjaelland). Its order and size are N = 4789 and M = 5571, respectively. The second grid explored is the US Western States Electricity Transmission (WECC) grid, which extends from Alberta (in the north) to Mexico (in the south), and from California (in the west) to Montana (in the Midwest). The corresponding order and size of its graph are N = 4941 and M = 6594. The Nordic grid is more scattered than the grid of the US western states. Both transmission grids have a clustering coefficient C significantly larger than that of equivalent random graphs, while the average path length ℓ is more than twice as large as in the equivalent random graph. These power grids exhibit the "small-world nature" explained in Section 2.1. Their structural vulnerability has been studied in [111] by means of numerical simulations of the error and attack tolerance, leading to the conclusion that both power grids have comparable disintegration patterns. In particular, both studied grids collapse appreciably faster when the nodes are removed deliberately (targeted attack) than randomly (failures). The conclusion is that the analyzed power grids are more sensitive to attacks than random networks are.
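The asymmetry between targeted attacks and random failures reported in [111] can be reproduced on a toy hub-and-spoke grid: since most nodes in a heterogeneous grid are low-degree leaves, a random failure typically hits a leaf, while a targeted attack removes the hubs. The thirteen-node graph below is ours, chosen only for illustration:

```python
from collections import deque

def largest_component_fraction(adj, removed):
    """Equation (6): G = N'/N after deleting the nodes in `removed`."""
    kept = {u: [v for v in nbrs if v not in removed]
            for u, nbrs in adj.items() if u not in removed}
    seen, best = set(), 0
    for s in kept:
        if s not in seen:
            comp = {s}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in kept[u]:
                    if v not in comp:
                        comp.add(v)
                        q.append(v)
            seen |= comp
            best = max(best, len(comp))
    return best / len(adj)

# Hub-and-spoke toy grid: root 0, sub-hubs 1-3, nine leaves.
adj = {0: [1, 2, 3]}
for h, leaves in [(1, [4, 5, 6]), (2, [7, 8, 9]), (3, [10, 11, 12])]:
    adj[h] = [0] + leaves
    for leaf in leaves:
        adj[leaf] = [h]

top4 = set(sorted(adj, key=lambda u: -len(adj[u]))[:4])  # targeted attack on hubs
print(largest_component_fraction(adj, top4))             # 1/13: the grid shatters
print(largest_component_fraction(adj, {4, 5, 7, 10}))    # 9/13: four leaf failures
```

Removing the four hubs isolates every leaf and G collapses to 1/13, whereas removing the same number of leaves leaves a connected core of 9/13 of the grid: the disintegration pattern observed for both the Nordic and WECC grids.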
Power grid outages and vulnerability have been tackled by using topological CN estimators such as the average path length ℓ and the clustering coefficient C [154]. Based on the notion that the U.S. Western Systems Coordinating Council (WSCC) grid is a small-world network, with its 300 kV sub-network as a pseudo-small world, this research is based on the idea of obtaining two different graphs and comparing the way a cascade outage progresses in each. The first one is the graph that represents the structure of the WSCC grid when the lines that triggered the 1996 blackout are removed; that is, it represents the graph in the early time instants of the event that caused the blackout. We label this graph "G1". The second graph investigated ("G2") is based on the undamaged WSCC power grid, but with the same number of lines removed as in G1, selected at random. A key finding of this work is that ℓ(G1) > ℓ(G2), i.e., the mean path length of the graph G1, which represents the initial moments of the event that provoked the blackout, is higher than that of graph G2 (the initial graph in which the same number of lines as in G1 have been removed at random). This means that the disrupting event triggering the 1996 blackout could progress because, apparently, it degraded the small-world structure of the initial undamaged network by increasing ℓ, that is, by removing lines that acted as shortcuts (remember Figure 2). The problem was not only the number of links damaged but also their role in the small-world context: removing the same number of lines at random (leading to G2) does not affect the network as much as in G1, since ℓ(G2) remains small, ℓ(G2) < ℓ(G1) (see Subsection 2.1).
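The role of shortcut lines can be made concrete on a toy 12-node ring carrying two chords that play the role of the WSCC shortcuts (our own construction, not the WSCC topology): g1 mimics the blackout by removing the two shortcuts, while g2 removes the same number of ordinary ring links instead.

```python
from collections import deque

def avg_path_length(adj):
    """Average shortest-path length over all connected ordered pairs."""
    tot, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        tot += sum(dist.values())
        pairs += len(dist) - 1
    return tot / pairs

def ring_with(chords, removed_ring_edges=(), n=12):
    """n-node ring plus the given chords, minus the given ring edges."""
    adj = {i: set() for i in range(n)}
    edges = ({frozenset((i, (i + 1) % n)) for i in range(n)}
             - {frozenset(e) for e in removed_ring_edges}
             | {frozenset(c) for c in chords})
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    return {u: sorted(nbrs) for u, nbrs in adj.items()}

g1 = ring_with(chords=[])                               # both shortcuts removed
g2 = ring_with(chords=[(0, 6), (3, 9)],
               removed_ring_edges=[(1, 2), (7, 8)])     # two ring links removed
print(round(avg_path_length(g1), 3), round(avg_path_length(g2), 3))
```

Although both graphs lose exactly two lines relative to the ring-plus-chords network, losing the shortcuts yields a noticeably larger ℓ than losing ordinary ring links, the same ℓ(G1) > ℓ(G2) signature that [154] observes for the 1996 blackout.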
The efficiency and other topological properties of high-voltage electrical power transmission grids in three EU countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been analyzed in [155]. The vulnerability analysis has been carried out by measuring the efficiency degradation generated by the removal of links. This analysis has unveiled a number of topological properties which are common to these networks independently of their specific structure, and which are typical of "small-world" networks. In fact, albeit very different, the three power grids explored exhibit a very large clustering coefficient and a relatively small path length (though larger than that of random networks). Other authors who have analyzed the US electrical transmission lines [147] have reported similar results.
Similarly, [20] analyzes the topological structure and static tolerance to random failures (errors) and attacks of thirty-three different European power grids using data from the Union for the Coordination of Transport of Electricity (UCTE). The study has been carried out over transmission grids (voltage levels ranging from 110 kV to 400 kV, ignoring distribution grids), and focuses on analyzing the tolerance to random failures and to selective attacks on the most connected nodes (highest node degree). The results reveal that the grids are robust enough against the random loss of nodes but fragile when the most connected nodes are deliberately attacked. That is, although the explored grids seem to have exponential degree distributions, and most of them lack the small-world property, these grids nevertheless show a behavior similar to that of scale-free networks when nodes are removed. The authors thus conclude that this behavior is not unique to scale-free networks. They also conclude that node vulnerability can be logarithmically related to the size of the power grid, and suggest that a feasible method to prevent the propagation of disturbances would be to design the network to allow for intentional separation into stable small islands. This important topic of power grid size has recently been investigated in [28], which suggests that there may be an optimal size for the power grid based on a balance between efficiency and the risk of large failures.
The robustness of the European power grid under intentional attacks has been studied in [19] based on CN arguments along with a mean-field theory approach. The purpose is to analytically predict the fragility of the networks against the selective removal of nodes. The European power grid seems to comprise two different classes of grids: robust and fragile. Although the networks in the robust group represent only 33% of the UCTE nodes under study, and they manage an amount of power similar to that of the networks in the fragile class, they account for a much smaller percentage of the whole UCTE average interruption time, power loss and undelivered energy. How this can be related to the internal topological structure of the networks and the abundance of "subgraphs" is a key issue the study does not reveal. What it does reveal is that fragility (measured by the undelivered energy and the total power loss) increases with γ, the parameter that characterizes the degree probability distribution [19]. From a structural point of view, increasing γ implies, rather counter-intuitively, a deviation towards more connected and less random topologies [156]. The authors conjecture that it seems as if the same criterion that favors connectivity (a measure originally intended to avoid interruptions in power service) would simultaneously complicate the "islanding" of disturbances (preventing their spread).
In this respect, [107] seems to have found an explanation for this apparent contradiction. The
novelty of [107], when compared to other works belonging to the pure topological approach, consists
of studying how the interconnectivity (interdependence) between networks affects the sizes of their
cascades. Explicitly, this work focuses on networks abstracted from two interdependent power grids
in the southeastern United States. The first power grid has 439 nodes and 527 internal links, while the second grid has 504 nodes and 734 internal links. These two networks are interconnected by 8 external links. Thus, the complete grid, viewed as the interconnection of both power grids (“1” and “2”),
has 943 nodes and 1261 links. The model in [107] is based on applying the classic “sandpile model” of
Bak-Tang-Wiesenfeld [157,158] to the corresponding network graph composed of nodes and links, each
node having a capacity for keeping sand grains (viewed as load for power grids). The model is as follows:
sand grains are dropped randomly on nodes, and whenever a node receives more grains than its capacity, it topples and sheds all its grains onto its neighbors which, in turn, may end up having too many grains and thus collapsing. Consequently, dropping a single grain can cause an avalanche (cascade).
These cascades, like blackouts in power grids, are characterized by a power law distribution: they are
often tiny but very occasionally huge. Applying this model to the two aforementioned interdependent power grids in the southeastern United States (and to an idealization of them, which is easier to work with), the authors arrive at the key result that interdependence can have a stable minimum at a critical amount of interconnectivity p*. On the one hand, some interconnectivity (0 < p < p*) is beneficial for an individual network since the other network acts as a reservoir for extra load. In fact, the probability of a large cascade in a network can be reduced to a great extent by slightly increasing the interconnectivity p (as long as p < p*). Thus, a way to mitigate cascades hinges on operating close to this critical optimum point p* by adding (or removing) interconnections. On the other hand, too much interdependence may, however, become harmful [107]: too many interconnections open paths for the neighboring network to inject extra load. Therefore, networks that interconnect to one another to mitigate their own cascades may accidentally cause larger global cascades in the whole network. This is the reason why the authors warn against the construction of a great number of interconnections among different power grids to balance production (renewable sources of energy, for instance wave energy converters and wind turbines placed offshore) and consumption (highly populated areas far from these regions). The idea is to add a controlled number of interconnections to keep the global network of networks close to the critical amount of interconnectivity p*.
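The sandpile dynamics described above can be sketched in a few lines of code. The following is a minimal toy illustration of the Bak-Tang-Wiesenfeld rule on a graph, not the exact model of [107]: the graph, the capacities, and the dissipation probability (which plays the role of the open boundary that lets grains leave the system, so that cascades terminate) are all illustrative assumptions.

```python
import random

def sandpile_cascade(adj, capacity, grains, start, rng, dissipation=0.05):
    """Drop one grain on node `start` and relax the pile; return the
    avalanche size (number of topplings). With probability `dissipation`
    a shed grain leaves the system, which guarantees termination."""
    grains[start] += 1
    unstable = [start]
    topplings = 0
    while unstable:
        node = unstable.pop()
        if grains[node] <= capacity[node]:
            continue
        load, grains[node] = grains[node], 0   # the node sheds all its grains
        topplings += 1
        for _ in range(load):
            if rng.random() < dissipation:
                continue                        # grain lost at the "boundary"
            nb = rng.choice(adj[node])
            grains[nb] += 1
            if grains[nb] > capacity[nb]:
                unstable.append(nb)
    return topplings

# Toy usage: a ring of 20 nodes with uniform capacity 3.
rng = random.Random(0)
n = 20
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
cap = {i: 3 for i in range(n)}
grains = {i: 0 for i in range(n)}
sizes = [sandpile_cascade(adj, cap, grains, rng.randrange(n), rng)
         for _ in range(500)]
```

Collecting the avalanche sizes over many drops reproduces the qualitative behavior quoted above: most cascades are tiny, a few are large.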
Again focusing on European grids, [156] uses topological CN measures to evaluate the robustness of
the European electricity transmission grid, which is a large networked infrastructure formed by almost
2800 substations and more than 200,000 km of transmission lines. This work aims at finding evidence
to relate unexpected blackouts and cascading failures—in the form of reliability indexes: energy not
supplied, total power loss, restoration time, and equivalent interruption time—with the topological
structure of the grid. A key finding is that the grid fragility increases as the topology deviates from that
of a random network. The authors found that national grids might have very different local structure.
Specifically, this local structure can be characterized by the existence of some patterns named “network
motifs” or subgraphs. These are shown to arise at a much higher frequency than expected in random
networks. As a consequence, grid fragility increases as motifs (e.g., stars and triangles) begin to
appear [156].
A key issue discussed in [159] is based on the fact that many studies usually compute the load on a node (or a link) by using its degree or betweenness, and the redistribution of such a load is usually forwarded along the shortest path (for instance, the works [141–143] reviewed above). [159] argues
that this principle based on betweenness is only reasonable for small- or medium-sized networks because it requires structural information on the complete network. The authors in [159] combine
the CN approach with a more realistic distribution of load among the neighboring nodes. In this work
the distribution of load among neighboring nodes is carried out so that the neighbor with the higher load receives a higher share of the load from the broken node. The model incorporates an adjustable parameter α
that governs the strength of the initial load of a node, which permits investigating the response of the US
power grid under attacks causing cascading propagation.
A local preferential rule for redistributing the load of a broken node [159–161] has recently been incorporated into the CN approach [162], in an attempt at analyzing cascading failures in power grids. In this
rule, the load on the affected node is redistributed to its neighboring nodes according to the preferential
probability (the one with a higher degree receives more load). Specifically, the weight of a node is
correlated with its link degree kas kβ. As argued in [162], this is different from other models because the
load on a node is usually estimated by using its degree or betweenness (as in the above revised [141143])
so that the load redistribution is forwarded following the shortest path routing strategy, which may be
not practical for large power networks. The proposed rule has been tested on different standard IEEE
test power networks (IEEE 300, 162, 145, 118, 57, 30 bus test systems) as small power systems, and in
the European power grid as a large real power system. The metric used to quantify the robustness of the
whole network is the “normalized avalanche size”, given by

$$S_N = \frac{\sum_{i \in N} S_i}{N(N-1)} \qquad (15)$$

where $S_i$ is the avalanche size after removing node $i$. The experimental work reveals that the larger $\beta$ is, the more robust the power network becomes.
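As a concrete illustration, the preferential redistribution rule and the normalized avalanche size of Equation (15) can be sketched as follows. This is a hedged toy reading of [159,162], not their exact model: it assumes the initial load is L_i = k_i^α, the capacity is a tolerance multiple of the initial load, and a failed node's load is shared among its surviving neighbors in proportion to k_j^β.

```python
def cascade_after_removal(adj, alpha, beta, tolerance, broken):
    """Toy preferential-redistribution cascade: initial load L_v = k_v**alpha,
    capacity C_v = tolerance * L_v; a failed node's load is shared among its
    live neighbours j in proportion to k_j**beta. Returns the avalanche
    size S_i (number of nodes failing beyond the initially broken one)."""
    deg = {v: len(adj[v]) for v in adj}
    load = {v: float(deg[v]) ** alpha for v in adj}
    cap = {v: tolerance * load[v] for v in adj}
    failed, queue = {broken}, [broken]
    while queue:
        v = queue.pop()
        alive = [u for u in adj[v] if u not in failed]
        total = sum(float(deg[u]) ** beta for u in alive)
        if total == 0:
            continue                     # nowhere left to shed load
        for u in alive:
            load[u] += load[v] * (float(deg[u]) ** beta) / total
        load[v] = 0.0
        for u in alive:
            if load[u] > cap[u] and u not in failed:
                failed.add(u)
                queue.append(u)
    return len(failed) - 1

def normalized_avalanche_size(adj, alpha, beta, tolerance):
    """Eq. (15): S_N = sum_i S_i / (N (N - 1))."""
    n = len(adj)
    return sum(cascade_after_removal(adj, alpha, beta, tolerance, v)
               for v in adj) / (n * (n - 1))
```

On a 10-node ring, for instance, a tolerance of 1.6 absorbs any single failure (S_N = 0), while a tolerance of 1.2 lets every single failure bring down the whole ring (S_N = 1).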
The work [163] is especially useful since it provides a very clear analysis of the most important
features that power grids exhibit based on CN concepts. The work was motivated by the question about
what patterns would arise in the European power grid when analyzing data corresponding to a six-year interval (2002 to 2008). Data refer to three malfunction indicators: energy not supplied, total loss of
power, and restoration time. It has been shown that fragility (measured by energy not delivered and
total loss of power for a particular grid) increases with γ, the parameter that characterizes the degree
probability distribution. Based on the previous result in [19] that the European power grid is composed of both fragile and robust grids, the corresponding cumulative distribution functions for
the robust grids present a higher probability of occurrence than that of the fragile ones for the same
measure. Although robust grids accumulate many fewer events than fragile ones, the values for the robust power grids are significantly higher than those of the fragile grids. The authors hypothesize that failures
affecting robust grids lead to higher risks and more important consequences than those striking fragile
grids, although disruptive events in the latter are more frequent. The authors have found neither a plausible nor a general explanation for this phenomenon.
Switching again to US power grids, [84] analyzes the robustness of power grids under random and
selective node removals. In particular the authors analytically estimate the thresholds corresponding to
the removal of critical nodes that make the grid collapse: a selective node breakdown is much more
effective at disintegrating the grid because removing even a small fraction of high-degree nodes can destroy the grid as a whole. Although the empirical thresholds under random node breakdowns accurately match
the theoretical values, those thresholds corresponding to selective attacks differ slightly from those
predicted in [19].
While the aforementioned references focus on high-voltage transmission power grids, the work
in [117] shifts the scope onto the medium- and low-voltage grids in the northern Netherlands, with the aim of understanding their potential as a feasible infrastructure for delocalized electricity distribution. The study employs a number of statistical topological measures for this purpose. The second key
difference when compared to most works (as will be summarized later on in Table 3) is that it proposes
to utilize a weighted link topological model applied to the lower—medium- and low-voltage—layers of
the power grid. The authors have found that the node degree distributions tend to approach a power-law,
that is, there are a few nodes that have many connections, while the majority has a very limited number
of links. This result is similar to those found in the papers reviewed above, yet with some details: there
are high-voltage power grids whose node degree distribution better matches an exponential distribution,
while others with many more nodes approach a power-law distribution [151]. The work suggests that
the power-law distribution of the medium- and low-voltage grids may be caused by the relatively small
number of nodes (in good agreement with [151]) that receive electricity from the high-voltage grid,
and have to distribute it to many more substations at lower voltages. Another finding is that the
betweenness distribution follows an exponential decay unlike the usual power-law of high-voltage grids.
Another remarkable aspect is the relatively higher tolerance of the medium-voltage network: since
the medium-voltage network is more densely meshed, it is less prone to failures than its low-voltage
counterpart [117].
Likewise, [63] delves into the Florida high-voltage power grid as a network with strong geographical constraints that embed it in space. This power grid is a relatively small network consisting of N = 84 vertices (N_g = 31 generators and N_l = 53 loads) with strong geometrical constraints (a “spatial network”, as the Italian power grid [152]). The nodes are connected by M = 200 weighted links (power
transmission lines), the “electrical conductance weight” being the magnitude associated with each link.
In this work, the electrical conductance between two nodes has been assumed to be proportional to the
number of links and inversely proportional to the corresponding geographical distance. The conductance
matrix, W, is thus the weighted version of the adjacency matrix A. The research shows that the Florida
high-voltage power grid seems to have a complex architecture quite different from random-graph models
usually considered. It seems to be optimized not only by reducing the construction cost (measured by
the total length of power lines), but also through reducing the total pairwise link resistance in the grid,
which increases the robustness of power transmission between generators and loads against random line
failures. The modeling of power grids as spatial networks suggests that the Florida power grid has been
organized such that (1) the deployment cost of transmission lines and the total resistance of lines are
both minimized to some degree; and (2) there is a relatively high clustering so that the grid connectivity
is robust against random failures of both stations and power lines.
Finally, and related to the power law distribution used to fit data corresponding to some huge
blackouts, the analysis of the distribution of three reliability indicators (Energy Not Supplied, Total Loss
of Power and Restoration Time) in electric power grids (Table 2)—using real data from the major failures
occurred in the European power grid between 2002 and 2012 (and also in the US)—has been carried out in [164]. The research shows that the Lomax distribution (or Pareto II distribution) [165] describes these
indicators more accurately than the power law distribution (or Pareto distribution [166]). This is the
key contribution of this work because most of the research papers exploring power grids from the CN
viewpoint use the power law distribution to fit data corresponding to huge blackouts in the United States
and in the European Union.
Table 2. Summary of reliability indicators by ENTSO-E (European Network of Transmission System Operators for Electricity).

Acronym | Definition | Ref
ENS | Estimation of the Energy Not Supplied to the final customers, due to incidents in the transmission network, given in MWh. | [167]
TLP | Total Loss of Power, given in MW; a measure of generation shortfall. | [167]
RT | Restoration Time, measured in minutes; the time from the disturbance until the system frequency returns to its nominal value. | [167]
3.2. Hybrid Approaches: Combining CN and Electric Engineering Concepts
As emphasized in [8,14,78,85,168], the purely topological approach may lead to inaccurate results, since it is not able to capture some of the peculiarities of power networks described by Kirchhoff's laws. Although this will be shown later in more detail, there are some basic ideas that motivate the introduction of electrical power engineering concepts. The first one, unlike in general-purpose CNs, is that a power grid is a flow-based network in which the physical quantity (electric power) flowing between two nodes involves most links, not only those along the shortest path. From the electrical engineering viewpoint, the metric of distance in CN theory should be substituted by an “electrical distance” involving line impedances [8]. The second reason is that in conventional CN analysis all elements are usually identical, an assumption that does not hold in practice over power transmission networks due to the existence of different types of nodes, such as generation and load buses. Finally, in power grids transmission lines are subject to flow limits, which restrict their ability to transport power. As a consequence, links should reflect this restriction. Based on this rationale, [8] argues that, when applied to power networks, the graph must be weighted (impedance, maximum power) and directed (since electric power flows from generators to loads).
For the sake of clarity, we have organized this Section into three Subsections: Subsection 3.2.1 introduces the Electrical Engineering concepts used in hybrid models, that is, models that include simplified electric power flow models in the CN analysis (Subsection 3.2.2). Finally, Subsection 3.2.3 overviews novel electric metrics inspired by their topological counterparts.
3.2.1. Electrical Engineering Framework
Given a power grid with N nodes and M links—which may be referred to as “buses” and “lines” (or “branches”) in power analysis—each link between nodes i and k, $l = (i, k) \equiv l_{ik}$, has a line impedance

$$z_{ik}(l) = r_{ik}(l) + j\,x_{ik}(l) \qquad (16)$$

where $r_{ik}(l)$ is the resistance and $x_{ik}(l)$ the reactance. The line admittance is obtained from the inverse of the impedance, i.e.,

$$y_{ik}(l) = g_{ik}(l) + j\,b_{ik}(l) = \frac{1}{z_{ik}(l)} \qquad (17)$$

with $g_{ik}$ being the conductance and $b_{ik}$ the susceptance. With these magnitudes, power flow models
aim to obtain complete information on voltage angles and magnitudes at each bus i of a power system
at given loads and generation [1]. A possible formulation of the alternating current (AC) flow problem reduces to the solution of a system of N equations [30]:

$$P_i = \sum_{k=1}^{N} |V_i||V_k|\left[g_{ik}\cos(\theta_i - \theta_k) + b_{ik}\sin(\theta_i - \theta_k)\right] \qquad (18)$$

$$Q_i = \sum_{k=1}^{N} |V_i||V_k|\left[g_{ik}\sin(\theta_i - \theta_k) - b_{ik}\cos(\theta_i - \theta_k)\right] \qquad (19)$$

with $i = 1, \cdots, N$, and where: $P_i$ and $Q_i$ represent the real power and the reactive power, respectively, at bus i; $|V_i|$ is the voltage magnitude at bus i; $g_{ik}$ is the conductance of the link connecting buses i, k; $b_{ik}$ is the susceptance of that link; and $(\theta_i - \theta_k)$ is the voltage angle difference between buses i and k.
Thus, for an AC model, the power balance equations can be written for each bus (nodes of the
network). Real and reactive power flow on each branch (links of the network) and the generator reactive
power output can be analytically computed [1]. However, due to the non-linearity of the above formulae, numerical methods are required to obtain a solution. Note that this problem is very time-consuming if
the power grid has a large number of nodes. This is the reason why many works resort to simplified
direct current (DC) power flow models, assuming that all the power is basically active power (i.e.,
reactive power is assumed to be negligible). The AC power flow model is more accurate than the DC
approximation, but at the expense of requiring more computational load.
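A minimal sketch illustrates why the DC simplification is attractive: with flat voltage magnitudes, lossless lines and small angle differences, Equation (18) collapses to the linear system P = B·θ, which is solved directly instead of iteratively. The three-bus example below is purely illustrative; the function and variable names are our own, not from the reviewed works.

```python
import numpy as np

def dc_power_flow(lines, injections, slack=0):
    """Toy DC power flow. `lines` is a list of (i, k, b_ik) tuples with
    line susceptances b_ik > 0; `injections` holds the net active power
    P_i at each bus (summing to ~0). Returns the bus angles theta and
    the active flow b_ik * (theta_i - theta_k) on every line."""
    n = len(injections)
    B = np.zeros((n, n))
    for i, k, b in lines:                 # build the susceptance Laplacian
        B[i, i] += b
        B[k, k] += b
        B[i, k] -= b
        B[k, i] -= b
    keep = [i for i in range(n) if i != slack]   # ground the slack bus angle
    theta = np.zeros(n)
    p = np.asarray(injections, dtype=float)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], p[keep])
    flows = [(i, k, b * (theta[i] - theta[k])) for i, k, b in lines]
    return theta, flows

# Triangle grid: one generator (bus 0, +1 p.u.), one load (bus 2, -1 p.u.).
theta, flows = dc_power_flow([(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)],
                             [1.0, 0.0, -1.0])
```

In this example the power splits over the two paths: 2/3 p.u. over the direct line (0, 2) and 1/3 p.u. over the two-hop path via bus 1.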
3.2.2. Power Flow Models on CN Graphs
An example of a hybrid approach involving a flow model is [169], which delves into the robustness of
power grids by using a model that combines CN with power engineering concepts such as line impedance
and DC flow models. The complex network is a synthetic Watts-Strogatz network with N ≈ 200 nodes and 400 weighted links. The resilience analysis is carried out in terms of edge attack, line overload,
cascade effects, and network disruption. By using the small-world network model this work concludes
that line congestion decreases as the density of shortcuts increases. In other words, a power grid with
more shortcuts in its interconnection topology—that is, with the small-world property—tends to be more
robust than regular grids. This result has been recently proven by [108], as mentioned before.
A DC flow model has also been included in the CN approach to study the vulnerability of a power
grid in North China [170]. The novelty of this work is that it utilizes a directional, weighted graph
(the power flow direction in the power grid is considered). The graph has N= 2256 nodes and
M= 2892 links. The tolerance of the power grid to random errors and targeted attacks has been
analyzed by the conventional method of node and/or edge removal. The resilience analysis is based
on the size of the largest power supply region under an edge attack strategy. The authors suggest some
possible solutions to cascading failures: (1) to remove a small part of the loads to maintain the stability
of the whole network; and (2) to create a number of self-healing islands to avoid large scale blackouts.
Based also on the maximum power flow through the links, [171] includes line admittances in the
pure topological model of a synthetic power grid (IEEE 39 bus system) with N = 39 nodes and M = 46 weighted links. It is inspired by the fact that in power grids electric power does not necessarily flow only through the shortest path, so this work proposes a centrality index based on the maximum power flow through the links. The links which carry a larger portion of power from the source (generator) to the sink (load) are given a higher weight in this analysis. The resilience analysis has been carried out in terms of flow availability. In a similar approach, [172] makes use of power flow in the analysis of a synthetic (IEEE bus test system) high-voltage network with N = 550 nodes and M = 800 unweighted links. The resilience is assessed in terms of the influence on network connectivity and power degradation.
Besides, [173] includes line impedances and a DC flow model in a CN model of a North American power grid with N = 29,500 nodes and M = 50,000 weighted links. The DC power flow model is used to simulate the power grid dynamics and the network vulnerability under the failure of a few nodes (not more than 10 nodes). Connectivity loss and blackout size have been selected as vulnerability metrics. DC and AC power flows are also used in [7] to analyze the complexity of a real high power grid in China (Shanghai Power Grid) with N = 210 nodes and M = 320 links. After having carried out a number of criticality analyses and blackout simulations, an interesting result suggests that the explored power grid seems to
have the small-world property. Also located in China, [174] focuses on a real high-voltage power grid that has N ≈ 900 nodes and M ≈ 1150 links, by including the reactance of the lines. The work analyses the characteristic path length, node degree, betweenness, and resilience to loss of load and node attacks. Similarly, but focusing on the blackout that occurred in India on 30 and 31 July 2012, [15] combines the network concepts (N = 572 nodes and M = 871 links) with those from electrical engineering, such as the active (P) and reactive (Q) power loads and the locally preferential load redistribution rule.
The active and reactive power load capacities of a given node j have been modeled, respectively, as P_j = (1 + β)P_j(0) and Q_j = (1 + γ)Q_j(0), where P_j(0) and Q_j(0) are their initial values, and β and γ are the tolerance parameters of the active and reactive power loads, respectively. The main conclusion is that the probability of a cascading failure is small when the tolerance parameters β and γ are both larger than some thresholds β* and γ*, which, however, increases the cost of the infrastructure in the power grid.
In a more generic context, the authors in [175] investigate the structural vulnerability of
scale-free grids (synthetic IEEE 14, IEEE 24, IEEE 30, IEEE 57, IEEE 118, and IEEE 300
bus networks) by comparing physical power flow models and scale-free CN metrics. This work
provides a useful discussion of the utilization of several metrics in scale-free graphs for vulnerability
assessment, specifically:
1. The “geodesic vulnerability” v, which measures the functionality of the network when it suffers a node disruption with respect to its steady condition (“base case”), and is defined as [175]:

$$v \doteq 1 - \frac{\sum_{i \neq j} 1/d_{ij}^{LC}}{\sum_{i \neq j} 1/d_{ij}^{BC}} \qquad (20)$$

where $d_{ij}^{LC}$ is the shortest geodesic distance between nodes i and j after the node failure, and $d_{ij}^{BC}$ is the shortest geodesic distance between nodes i and j in the base case.
2. The “impact on connectivity” of the network, S, can be computed by calculating the number of nodes that remain connected, as

$$S \doteq 1 - \frac{N_{LC}}{N} \qquad (21)$$

with $N_{LC}$ being the number of connected nodes after the node failure.
3. The “load shedding”, LS, which aims at estimating the total apparent power that remains connected after the node failure, is defined as

$$LS \doteq 1 - \frac{\sum_{i=1}^{N} \left[(P_{Di}^{LC})^2 + (Q_{Di}^{LC})^2\right]^{1/2}}{\sum_{i=1}^{N} \left[(P_{Di}^{BC})^2 + (Q_{Di}^{BC})^2\right]^{1/2}} \qquad (22)$$

where $P_{Di}^{LC}$ is the active power load that remains electrically connected after the disruption of node i; $Q_{Di}^{LC}$ is the reactive power load that remains electrically connected; $P_{Di}^{BC}$ denotes the active power load under the base case (before the disruption); and $Q_{Di}^{BC}$ stands for the reactive power load under the base case.
Two main conclusions are drawn in [175]: (1) the proposed geodesic vulnerability index v is useful for carrying out comparative connectivity and functionality benchmarks among different network topologies in power grids; and (2) an added value of v is that it makes assessing the vulnerability of power grids less time-consuming.
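For unweighted graphs, the indices v and S can be computed directly from breadth-first-search distances. The plain-Python sketch below illustrates Equations (20) and (21) under one extra assumption of ours: pairs that become disconnected after the failure contribute 0 to the sum of inverse distances (i.e., d_ij = ∞).

```python
from collections import deque

def _bfs_dists(adj, removed=frozenset()):
    """All-pairs hop distances by BFS, ignoring the nodes in `removed`."""
    dist = {}
    for s in adj:
        if s in removed:
            continue
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in removed and w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        dist[s] = d
    return dist

def efficiency_sum(adj, removed=frozenset()):
    """sum over i != j of 1/d_ij; unreachable pairs contribute 0."""
    dist = _bfs_dists(adj, removed)
    return sum(1.0 / d for ds in dist.values() for t, d in ds.items() if d > 0)

def geodesic_vulnerability(adj, removed):
    """Eq. (20): v = 1 - (sum of 1/d_ij after failure) / (base-case sum)."""
    return 1.0 - efficiency_sum(adj, frozenset(removed)) / efficiency_sum(adj)

def impact_on_connectivity(adj, removed):
    """Eq. (21): S = 1 - N_LC / N, with N_LC the size of the largest
    connected component surviving the failure."""
    removed = frozenset(removed)
    best, seen = 0, set()
    for s in adj:
        if s in removed or s in seen:
            continue
        comp, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in removed and w not in comp:
                    comp.add(w)
                    q.append(w)
        seen |= comp
        best = max(best, len(comp))
    return 1.0 - best / len(adj)
```

On a 5-node path graph, removing the middle node yields S = 1 − 2/5 = 0.6 and v = 53/77 ≈ 0.69, consistent with the intent of both indices: higher values mean a larger loss of functionality.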
The approach in [36] deserves special attention since it hybridizes the CN approach with the more elaborate electrical DC-based OPA model (from the US Oak Ridge National Laboratory, the Power System Engineering Research Center at the University of Wisconsin-Madison, and the University of Alaska), in which blackouts are modeled by overloads and outages of transmission lines in the context of DC flow dispatch. It focuses on real power grids that have a globally inhomogeneous structure but contain a number of relatively homogeneous regions, which are coupled to each other like pearls on a string [36]. The described results suggest that in some cases highly inhomogeneous power grids can have a higher risk of large blackouts than both uncoupled individual grids and homogeneous grids of comparable size. The authors suggest that this result might change as the size of the individual homogeneous regions gets larger within the global inhomogeneous grid. In fact, the unit size of the homogeneous parts embedded in the inhomogeneous global network seems to be critical in determining whether large blackouts will become more likely as the system evolves toward more homogeneous or more inhomogeneous structures.
The recent work [1] uses Pahwa's model—a novel model to study cascade failures in power
grids in which the grid is modeled as a complex network (nodes represent buses and links represent
electrical branches) and the power flows on lines are calculated using a DC power flow model—to
study two extreme setups. The first one is a scenario characterized by a load growth, which models
the ever-increasing user demand along time. The second limiting setup focuses on power fluctuations
mimicking the effects of intermittent renewable energy sources. The obtained results determine that
increasing the power grid size can abruptly trigger blackouts. This is the reason why the authors
recommend taking into account this effect in planned grid layouts so as to integrate national power
grids into “super-grids” [176].
Recent research in [30] utilizes electric concepts (a detailed AC electric power model) along with
CN metrics (i.e., degree centrality, eccentricity, betweenness centrality and centroid centrality). It aims
at quantifying the impact of premeditated physical attacks that generate breakdowns in the electric power grid. The power model developed is used to describe the operating state of the electric power
network under the assumption that the system operates under balanced conditions. Specifically, in a
power grid having Nnodes (buses) the AC load flow problem reduces to solving a set of Nequations,
as those stated by Equations (18) and (19). This power model along with the aforementioned metrics has
been applied to the graph representing the Swiss power grid transmission system. The target is to detect
and rank the most critical elements of a power grid under a variety of premeditated attack scenarios, both deterministic (targeted) and stochastic. The effect of each attack scenario has been quantified in
terms of the blackout size (electric-power-not-served). The first conclusion is that the effect of targeted
attacks on a node (substation) is much more harmful than that of a random removal of a node (substation) or a link (transmission line). The highest threat arising from a targeted attack
seems to be the appearance of frequency instability.
To complete this set of works that combine the CN approach with power-flow models, we would
like to refer the reader to the very recent contribution in [21]. The power flow model applied on the
corresponding graph is based on a Kuramoto model [177] and a linearized DC power flow model [178].
One of the novel aspects of this work is that network resilience is characterized in terms of the
“backup capacity”. This metric is defined as the additional link capacity (overcapacity) that needs to
be supplied to secure the proper network operation when the most-loaded link suffers from a failure
or attack. Four different networks are modeled and set under test: the British transmission power grid
(N = 120 nodes—synchronous machines (both generators and motors)—and M = 165 transmission lines), and three classes of random networks, namely, Erdős-Rényi random graphs, Erdős-Rényi random graphs with a fixed number of links, and spatial networks in which the nodes are embedded in a
two-dimensional plane. In the experimental work, the probability density functions of the backup
capacity P_B have been computed for the mentioned networks. In particular, special emphasis has been put on investigating the probability density functions down to their tails, in the effort of gaining physical insight into resilient networks. This has been done using large-deviation techniques, which help study the extremely low probabilities (down to P_B ∼ 10^{−100}) in the tails of the distribution of P_B. The proposed method makes use of an additional Boltzmann factor exp(−P_B(G)/T) in a Markov-chain Monte Carlo (MC) simulation, which generates the network instances. The parameter T models an artificial temperature, which allows sampling different regions of P_B. This work reveals two important conclusions: the first
is that very resilient networks are basically characterized by a small diameter. This is of practical
importance because it means that generators should be placed near power consumers, a strategy that can currently be implemented by fostering distributed generation via renewable energies [179]. This
strategy would also reduce the costs for creating or upgrading power transmission grids. The second
important conclusion of [21] is that networks can be made more resilient by adding more links, which
has also been pointed out in [108]. As suggested in [108,152,169], adding a sufficient number of links between distant nodes makes the grid more robust. This is also in line with the virtues of small-world
networks [86,123,140,174,180]. Since the power grid operates as a network of networks circumscribed
by the country geography (embedded network), in order to reduce the risk of cascade failures [108]
suggests the deployment of a small number of longer transmission lines that form shortcuts to different
parts of the grid.
Finally, very recent works [22–25] propose novel removal/attack strategies that can concurrently
occur on substations and transmission lines either simultaneously [22,25] or sequentially [24]. These
joint substation-line attack strategies have been tested using node degree and node load (the sum of
the absolute values of power injected into it by all generation-demand-node pairs) on the IEEE 39
bus system. These strategies have been found to be useful in finding more power grid vulnerabilities
when compared to conventional approaches where attacks affect either nodes (substations) or links
(transmission lines) separately. The new model in [23] introduces a metric called “risk graph”, which
aims at describing the hidden relationship among potential target nodes (prone to cascading failures), so
that if several nodes are closely linked together, the simultaneous failure of these nodes is more likely
to cause large cascading failures. This has been tested on IEEE 57 and 118 bus systems and the Polish
transmission network. The results obtained in this work unveil the potential of the proposed risk graph
to efficiently characterize the real vulnerability of the power grid.
3.2.3. Novel Electrical Metrics Inspired by Topological Metrics
Many of the contributions that have applied power flow models on graph networks have also
elaborated novel “electrical metrics” that, although inspired by topological metrics, are found to be
more effective in identifying critical components in power grids [8], a task deemed crucial when exploring their robustness. We begin with the so-called “electrical centrality” of a given node a, which is defined as [173,181]

$$c_a \doteq \frac{1}{e_a} \qquad (23)$$
where $e_a$ is a measure of the connectivity distance for each node a, defined as

$$e_a \doteq \sum_{b=1,\, b \neq a}^{N} \frac{e_{ab}}{N-1} \qquad (24)$$

with $e_{ab}$ denoting the matrix elements of the absolute value of the inverse of the grid admittance matrix, i.e.,

$$E = |Y^{-1}| \doteq D \qquad (25)$$

where D is called the “electrical distance” matrix [182,183]. This electrical centrality has been computed over a
synthetic high-voltage transmission network (the IEEE 300-bus grid) with N = 300 nodes and M = 411
links. The resilience analysis has been based on the sensitivity of the relationship between voltages and currents, defined by the admittance matrix Y. The main finding is that the power grid seems to exhibit a
scale-free structure, having a number of highly-connected “hub” buses.
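Equations (23)–(25) translate into a few lines of linear algebra. In the sketch below, a small shunt admittance is added on every bus so that Y is invertible; this is an assumption of this illustration (the branch part of Y alone is a singular Laplacian-like matrix), although real bus admittance matrices do contain shunt terms from line charging and transformers.

```python
import numpy as np

def electrical_centrality(lines, n, shunt=1e-3):
    """Eqs. (23)-(25): E = |Y^{-1}| = D, e_a = sum_{b != a} e_ab / (N - 1),
    c_a = 1 / e_a. `lines` holds (i, k, y_ik) complex branch admittances;
    `shunt` is the per-bus shunt admittance that keeps Y invertible."""
    Y = np.zeros((n, n), dtype=complex)
    for i, k, y in lines:                 # assemble the bus admittance matrix
        Y[i, i] += y
        Y[k, k] += y
        Y[i, k] -= y
        Y[k, i] -= y
    Y += shunt * np.eye(n)
    E = np.abs(np.linalg.inv(Y))          # electrical distance matrix D
    e = (E.sum(axis=1) - np.diag(E)) / (n - 1)
    return 1.0 / e                        # one centrality value per bus

# Toy usage: a 5-bus star with unit branch admittances, bus 0 in the center.
c = electrical_centrality([(0, j, 1.0 + 0.0j) for j in range(1, 5)], 5)
```

By symmetry, the four leaf buses of the star obtain identical centrality values, which is a quick sanity check of the implementation.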
A novel betweenness index that employs the reactance of the transmission lines as weights of the
network edges has been explored in [184]. Specifically, the weights of the links that model the electricity
transmission lines have been defined as the reactance of the electric path from one node to another. The
aim is to introduce into the graph the physical principle that more power is transmitted through those lines that have lower reactance. Accordingly, the model assigns a higher weight to those edges with lower reactance.
Experiments have been carried out over the IEEE-39 and the IEEE-118 bus system. The betweenness
index has been discovered to identify critical lines through the network, either because of their location
in the grid or by the amount of power they convey.
The modification of topological metrics to obtain new metrics that better describe the operation
of power grids has ignited a series of recent contributions [34,115,118,119,185,186], which have
explored the performance of novel electrical metrics to quantify the extent to which a node i of a power
grid is critical. The two metrics that have been found to work best are the “electrical degree
centrality” and the “electrical betweenness centrality”. The “electrical degree centrality” C_D^E(i) of a node
i hinges on the degree centrality given by Equation (10), and is given by

C_D^E(i) \doteq \frac{\sum_{i \sim j} P_{ij}}{N-1}    (26)

where i \sim j represents that node i is linked to node j, and P_{ij} is the electric power that flows in the
line linking nodes i and j. Similarly, the “electrical betweenness centrality” stems from its topological
counterpart formulated in Equation (9), i.e.,

C_B^E(i) \doteq \sum_{s \neq i \neq t \in V} \frac{P_{st}(i)}{P_{st}}    (27)

where the ratio r_{st}(i) \doteq P_{st}(i)/P_{st} is a measure of the level at which the line linking s to t needs i
to transmit power between them along the shortest electrical path. The feasibility of C_D^E(\cdot) and C_B^E(\cdot)
has been tested in different power grids: IEEE 30, 57, and 118 bus systems [34,115], the IEEE 30 bus
system [119], and the IEEE 300 bus system [186]. All the results reported in these references pinpoint C_B^E(i)
as the most useful metric to quantify the extent to which a node i of a power grid is critical.
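A minimal numeric sketch of Equation (26): given a symmetric matrix P of (illustrative, made-up) active power flows on existing lines, the electrical degree centrality of each node is simply the normalized sum of the flows on its incident lines.

```python
import numpy as np

# Hypothetical 4-bus system; P[i, j] is the power flowing on line i-j
# (zero where no line exists). Values are illustrative only, not the
# output of a real power-flow study.
P = np.array([[0.0, 1.2, 0.8, 0.0],
              [1.2, 0.0, 0.0, 0.4],
              [0.8, 0.0, 0.0, 0.6],
              [0.0, 0.4, 0.6, 0.0]])

N = P.shape[0]
C_ED = P.sum(axis=1) / (N - 1)   # Eq. (26): sum over lines incident to i
print(C_ED)                      # node 0 carries the most power here
```

Computing C_B^E of Equation (27) would additionally require P_st(i), the power that transits node i for each source–sink pair, which comes from a per-pair flow model rather than from a single flow snapshot.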
Following [87], assuming that for high-voltage transmission networks x(l) \gg r(l), and that a unit
current flows along the link l = (s, t) from node s to t, the resulting voltage difference between the
ends of the link equals u = U(s) - U(t) = z_{pr}(l) (or, equivalently, u = 1/y_{pr}(l)). Therefore z_{pr}(l) is
interpreted as the electrical distance between nodes s and t, and y_{pr}(l) as the “coupling strength” between
the two end nodes. These considerations lead to the electrical degree centrality defined as [87]

C_D^E(v) \doteq \frac{\| Y(v,v) \|}{N-1}    (28)
The work in [87] has investigated a number of centrality measures when applied to power grids,
and generalized their analysis based on centrality in graph theory to that of power grids (including
its electrical parameters). The analysis has been performed over the NYISO-2935 system (New York
Independent System Operator’s transmission network), containing 2935 nodes and 6567 links, and on
the IEEE-300 system. It has been found that when the electrical parameters are included in the
definitions of the centrality metrics, the distribution of the degree centrality and other measures of
centrality become considerably different from those based solely on the topological structure, resulting
in an easier identification of important nodes that could not be identified otherwise.
More recently, the works in [5,187] have proposed a novel metric, coined “effective graph
resistance” or R_G, as an alternative vulnerability measure to determine the critical transmission lines
in a power grid. This parameter is given by

R_G = \sum_{i=1}^{N} \sum_{j=i+1}^{N} R_{ij}    (29)

where, in a DC model, R_{ij} is the effective resistance between buses i and j, and is equal to the equivalent
impedance Z_{eq,ij} between these buses. The proposed approach has been tested over the IEEE 118 power
system, and the results have been compared to the traditional average shortest path length topological
metric, proving the feasibility of R_G to efficiently evaluate the vulnerability of power grids.
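The effective graph resistance of Equation (29) can be computed without solving each bus pair separately, via the Moore–Penrose pseudo-inverse of the weighted graph Laplacian, using the standard identity R_ij = L+_ii + L+_jj − 2 L+_ij. The sketch below uses a toy 4-bus ring with unit line weights; in a DC model the edge weights would be the line susceptances.

```python
import numpy as np

# Toy 4-bus ring with unit weights (illustrative, not a real grid).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
N = 4

# Build the weighted Laplacian L.
L = np.zeros((N, N))
for i, j, w in edges:
    L[i, i] += w
    L[j, j] += w
    L[i, j] -= w
    L[j, i] -= w

Lp = np.linalg.pinv(L)                                # pseudo-inverse of L
R = np.add.outer(np.diag(Lp), np.diag(Lp)) - 2 * Lp   # pairwise R_ij
R_G = R[np.triu_indices(N, k=1)].sum()                # Eq. (29): sum over i < j
print(R_G)   # 5.0 for a unit-weight 4-cycle
```

For a unit-weight cycle on n nodes the known closed form is R_G = (n^3 − n)/12, which gives 5.0 for n = 4 and serves as a quick sanity check of the computation.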
Finally, we would like to highlight three important electric-based metrics that have been proposed
in [8,35,78,85,188], which have laid the foundations for the so-called “extended topological approach”.
Among other contributions, the model improves the topological CN-based approach with novel
metrics referred to as “net-ability”, “electrical betweenness” and “entropic degree”, whose definitions
we postpone to the following paragraphs since they require some prior comments. According
to [8], the motivation of these contributions is that the pure topological concepts and metrics of the
general CN approach ignore the electrical properties and the working restrictions of power grids;
hence their straightforward application without any further consideration may fail to capture specific
electrical aspects under certain network topologies and operational circumstances. For instance, in
a general-purpose complex network, each node (either source or sink) has the same function when
the physical magnitude at hand (e.g., packets or power) is transmitted over the network.
However, in power grids buses are completely different depending on whether they are generation, load,
or transmission buses. In these references G denotes the set of generation buses (|G| = N_G), L is the
set of load buses (|L| = N_L), and T represents the set of transmission lines (|T| = N_T = M). Note
that G \cup L = \mathcal{N} and N_G + N_L = N, the total number of nodes in the network. With these definitions in
mind, the following metrics are defined:
The “electrical extended betweenness”,

B^E \doteq \frac{1}{2} \sum_{g \in G} \sum_{d \in L} C_g^d \sum_{l \in T} |f_l^{gd}|, \quad v \neq g \neq d \in \mathcal{N}    (30)

with C_g^d denoting the “power transmission capacity” from bus g to bus d, given by

C_g^d = \min\left( \frac{P_1^{\max}}{|f_1^{gd}|}, \ldots, \frac{P_{N_T}^{\max}}{|f_{N_T}^{gd}|} \right)    (31)

where P_l^{\max} is the power flow limit of line l (l = 1, ..., N_T), which is a physical constraint of line
l, unrelated to operational conditions; and f_l^{gd} is the change of the power flow on line l for injection at
generation bus g and withdrawal at load bus d.
The “net-ability”, proposed to evaluate the global performance of a grid by including electrical
magnitudes such as capacity or impedance, is defined as [188]

A \doteq \frac{1}{N_G N_L} \sum_{g \in G} \sum_{d \in L} \frac{C_g^d}{Z_g^d}    (32)

where Z_g^d is the electrical distance (impedance) between generation bus g and load bus d.

The vulnerability of line l, interpreted as the net-ability drop caused by an outage (cut) of line l,
is thus

V_A(l) \doteq \frac{A - A_l}{A}    (33)
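To make Equations (31) and (32) concrete, the sketch below evaluates the transmission capacity C_g^d and the net-ability A for one generator and two load buses. The per-line sensitivities f_l^{gd}, the limits P_l^max and the electrical distances Z_g^d are illustrative placeholders; in a real study they would come from a DC power-flow model of the grid.

```python
import numpy as np

# Flow limit of each of the three lines (illustrative values, in p.u.).
P_max = np.array([1.0, 0.8, 0.6])

# f[l] = change of flow on line l per unit injection at g / withdrawal at d,
# for each (g, d) pair. Placeholder values, not from a real PTDF computation.
f = {
    ("g1", "d1"): np.array([0.7, 0.3, 0.0]),
    ("g1", "d2"): np.array([0.4, 0.2, 0.4]),
}
Z = {("g1", "d1"): 0.10, ("g1", "d2"): 0.15}   # electrical distances Z_g^d

def capacity(fgd, p_max):
    """Largest g->d transaction before some line hits its limit, Eq. (31)."""
    mask = fgd != 0
    return (p_max[mask] / np.abs(fgd[mask])).min()

N_G, N_L = 1, 2
A = sum(capacity(fgd, P_max) / Z[gd] for gd, fgd in f.items()) / (N_G * N_L)
print(A)
```

The line vulnerability V_A(l) of Equation (33) would then follow by recomputing A with line l removed (i.e., with the sensitivities of the reduced grid) and taking the relative drop.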
The “entropic degree” of a node i, denoted as S_i, aims at including three elements in the definition
of node degree when computed over a weighted network [35]: (1) the strength of the connection
between nodes i and j in terms of the link weight w_{ij}; (2) the number of links connected to the node;
and (3) the distribution of weights among the links. The entropic degree of node i is defined
as [35]

S_i \doteq \left( 1 - \sum_j p_{ij} \log p_{ij} \right) \sum_j w_{ij}    (34)

where p_{ij} \doteq w_{ij} / \sum_j w_{ij} is the normalized weight of the link l_{ij} connecting nodes i and j.
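Equation (34) is straightforward to evaluate once the weighted adjacency matrix is known. The sketch below computes the entropic degree for a toy 4-node weighted network; the weights are illustrative placeholders, whereas in [35] they would be electrical quantities such as line flow limits.

```python
import numpy as np

# W[i, j] = weight of link i-j (zero where no link exists); toy values.
W = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 3.0],
              [0.0, 1.0, 3.0, 0.0]])

strength = W.sum(axis=1)                 # total link weight at each node
p = W / strength[:, None]                # normalized weights p_ij
plogp = np.where(p > 0, p * np.log(p), 0.0)   # 0*log(0) taken as 0
S = (1.0 - plogp.sum(axis=1)) * strength      # entropic degree S_i, Eq. (34)
print(S)
```

Because the entropy term is non-negative, S_i is at least the node strength and grows when the weight is spread evenly over many links, which is exactly the behavior the metric is designed to reward.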
Based on this framework, [188] has elaborated on the vulnerability of a synthetic IEEE network with
N= 90 and M= 120 weighted links. Three different methods to assess the impact of line outages
have been explored: (1) the method based on efficiency; (2) the new method based on net-ability; and
(3) the computation of line overloads by DC power flow. The first vulnerability measure—using the
definition of the network efficiency E[140] given by (7)—is based on the efficiency decrease when line
lis removed (Equation (8)). The second method is based on the vulnerability of line linterpreted as the
net-ability drop caused by an outage (cut) of the line (Equation (33)). Since the third approach based on
the computation of line overloads by DC power flow is the one that most realistically captures the details
of the power grid under analysis, it has been considered by the authors as the baseline method. Results
reveal that the net-ability metric is able to successfully identify the most critical lines.
Also within the extended topological approach, [35] has explored the feasibility of net-ability and
entropic degree metrics to quantify to what extent a power grid is robust, and compared them to their
counterpart structural metrics (node degree or network efficiency). The net-ability and entropic degree
metrics have been applied to real power grids, and have been found to provide a good characterization
of the power grid. The network explored has a graph with N= 550 nodes and M= 700 weighted links,
and corresponds to a real high-voltage power grid in Italy. As mentioned, resilience has been studied in
terms of the global efficiency, net-ability, and overload. Similarly the work described in [85] considers a
synthetic high-voltage power grid modeled as a CN (N= 32,M= 422 links) enhanced with a model
of power injection/withdrawal at buses, and the electrical betweenness defined in Equation (30). The
resilience analysis has been carried out in terms of unserved energy/load based on a node and/or edge
attack scheme. These extended electrical-based metrics have been proven to be more effective than their
corresponding topological counterparts when identifying critical components in a power grid. Therefore
the overview in [8] concludes that net-ability should be used instead of efficiency, entropic degree instead
of node degree, and electrical betweenness instead of topological betweenness.
4. Discussion
The critical discussion of this survey starts with Table 3, where a cross-comparison of the results and
suggested strategies to make power grids more robust is summarized. In particular, the table presents
a comparative study of selected works according to the adopted metrics which, for the sake of clarity,
have been listed in Tables 2 and 4 along with their corresponding equations. Those references belonging
to the CN approach have been specifically defined in Subsection 2.2. Table 4 permits easily finding the
symbols, equations and references corresponding to any given metric. The reasons why works in Table 3
have been selected for the discussion are:
1. They are directly comparable to each other by using the metrics summarized in Tables 2and 4.
2. They are either the most cited (best known and most representative) or the most recent studies
applying concepts from CN to power grids.
3. They investigate large, real power grids in the US, the European Union, China and India, the latter
two being selected because they correspond to nations with emerging economies, dense populations,
and significant electricity needs.
4. Some of the works in Table 3 analyze synthetic topologies as key study cases, such as IEEE bus
networks, or WS, BA and ER networks, the latter being appropriate structures either
to study asymptotic behaviors or to simplify the studies.
Table 3 does not list papers which, although having been revised for the sake of a global understanding
of the main topics of the paper—cascading failure models, AC-based power flow models [189], hidden
failure models [190] and stochastic models [191]—do not directly tackle the issue of power grid
robustness. The acronyms and symbols in the first row of this table are as follows: N (order, second
column) and M (size, third column) are approximations, typically rounded to the order of tens, with the aim of
giving a rough idea of the order and size of the network without getting lost in unnecessary details. ND
and B stand for node degree analysis and betweenness statistical analysis, respectively. “AT” stands for
attack strategy. “w/u?” denotes whether weighted or unweighted links have been used in the graph that
models the network. Finally, the last column lists the metrics that the surveyed works have used to
study the vulnerability of the grid.
Based on the columns in Table 3we discuss the results by going through the following aspects:
Removing/attacking strategies to perform vulnerability analysis (Subsection 4.1).
Unweighted versus weighted graph analysis (Subsection 4.2).
Analysis of vulnerability metrics (Subsection 4.3).
Ability of CN to explain power grids (Subsection 4.4).
Analysis of power grid structures (Subsection 4.5).
Strategies to improve robustness (Subsection 4.6).
The last two subsections aim at answering interrelated questions such as: What types of grid
structures are more robust? What can be done to improve the robustness of existing networks? What are
the implications of these conclusions in terms of future grid design?
4.1. Removing/Attacking Strategies to Perform Vulnerability Analysis
The analysis of vulnerability in power grids is typically based on removing either nodes (node attack
strategy) or links (link attack strategy). Recently, attack strategies that act on both nodes and links,
either simultaneously [22,25] or sequentially [24], have been found to be useful in discovering
more power grid vulnerabilities when compared to the aforementioned conventional approaches.
Regarding the conventional strategies (based on removing either nodes or links, but not both),
there seems to be no prevailing scheme in either approach: node attack has been used
in [19,20,84,107,111,147,148,153,159,173,174], which indistinctly include both topological- and
hybrid-based approaches. Link attack has been adopted in [35,152,155,169,172,184,188,192,193],
which also include both topological- and hybrid-based approaches. A few works have
carried out both analyses [7,85,112,117,151,170].
Table 3. Comparative study of selected references according to the metrics and indicators in
Tables 2 and 4. “✓” indicates that a metric is used. SN stands for several networks. “–” means
not available/not used.
Reference Order Size ND BAttack w/u? Electrical concepts Metrics/indicators used in
N M strategy into the CN approach vulnerability analysis
[17] SN SN   NA u`,G
[141] 4941 –   NA u`,G
[147]14100 19660   NA u`, Connectivity loss
[145]14100 19660   NA u`,E
[150]3000 12000   NA u`,G
[20]3000 3800 NA u`,G
[19]3000 3800 NA u`,G, ENS, TLP
[111]4800 5500 NA u`,C,G
[152]380 570 NA, LA uD
[154]6400 8700 NA u`,C
[155]370 570 LA u`,C
[147]14000 19600   NA u Connectivity loss
[151]31400 NA, EA u Probability of load loss
[156] 2700 3300 NA u Motifs (sub-graph) size, ENS, TPL, RT
[84]8500 13900 NA uG
[159]4900 6600 NA uSN
[107]940 1260 NA u Blackout size
[149]4940 6600   NA uG
[117]4850 5300   NA both `,G
[148]340 520   NA w E
[153]14000 19600 NA w E,D
[35]550 700 NA w Impedance, DC flow E,A, overload
[85]32 420 NA, LA w Impedance, DC flow BE, ENS
[188]90 120 LA w Impedance, DC flow E,A
[169]200 400 LA w Line impedance, DC flow Overload, cascade
[87] 2930 6570 NA w Line impedance, DC power flow C_D^E
[173]29500 50000 NA w Line impedance and DC flow `, connectivity level
[172]550 800 NA w DC flow Connectivity, TLP
[7]210 320 NA both DC and AC power flow Blackout size, C,`
[174]900 1150   NA w Line reactance Loss of load, `
[15]570 870   NA w Active, reactive power loads Loss of load
[175] SN SN   NA w AC model v,S, LD
[170]2560 2890 NA, LA w Impedance Largest power supply region
[181]300 410 NA both Line impedance Impedance matrix sensitivity
[184]150 46 NA w Line reactance, active power E
[171]39 46 NA w Line admittance, power flow Flow availability
[30] 240 310 NA, LA w AC power flow model C_D^E, C_B^E, ENS
[36] SN SN LA w DC-based OPA model LS
[21] 120 165 NA w DC power-flow PB
[34] SN SN NA w DC power flow C_D^E, C_B^E
[115] 30 41 NA w DC power flow S, connectivity loss
[118] NA w DC power flow S, connectivity loss, C_B^E
Table 4. Summary of metrics and their corresponding equations, references and approaches
in relation to Table 3.
Metric Equation or definition Reference
Average path length, ℓ (4) [61]
Clustering coefficient, C (5) [57]
Size of the largest connected component, G (6) [61]
Efficiency, E (definition 1) (7) [140]
Network efficiency, E (definition 2) (13) [153]
Betweenness centrality, C_B(v) ≡ B_v (9) [61]
Degree centrality, C_D (10) [30]
Damage, D (14) [153]
Normalized avalanche size, S_N (15) [162]
Geodesic vulnerability, v (20) [175]
Impact on connectivity, S (21) [175]
Connectivity loss: average decrease in the number of generators connected to a distributing substation [153]
Connectivity level: average fraction of generators connected by each load substation [153]
Backup capacity, P_B: additional link capacity (overcapacity) that needs to be supplied to secure the proper network operation when the most loaded link suffers from a failure or attack [21]
Load shedding, LS (22) [175]
Electrical centrality, c_a (23) [173,181]
Electrical distance, D (25) [182,183]
Electrical degree centrality (def. 1), C_D^E(i) (26) [115,118,119,186]
Electrical degree centrality (def. 2), C_D^E(v) (28) [87]
Electrical betweenness centrality, C_B^E(i) (27) [34,115]
Electrical betweenness, B^E (30) [8,78,85,168]
Net-ability, A (32) [8,78,168]
Entropic degree, S_i (34) [8,78,168]
Effective graph resistance, R_G (29) [5,187]
4.2. Unweighted versus Weighted Graph Analysis
An interesting point of discussion is whether the graph representing the particular grid under
consideration uses weighted or unweighted links. In this respect, an interesting point to
note in Table 3 is that, at first glance, it is naturally divided into three sub-tables:
the first sub-table corresponds to those references in which the graph associated to the network
under analysis has unweighted links. We have highlighted them in bold (“u”, unweighted)
to facilitate the visual inspection of the table and the subsequent discussion. The works
in [19,20,107,111,117,147,151,152,154–156] have in common that each power network under study
has been represented using the simplest graph model: undirected and unweighted. This is because these
approaches do not include any characterization of the link weights, a key difference with respect to
the contributions contained in the other two sub-tables, where links have been weighted to enhance the
representation of the power grid. In particular, in the references of the third sub-table, weights are related
to electric concepts such as the maximum power flow that can be transmitted through the link.
Unweighted graphs are by far the most used representation in the group of references that tackle
robustness in power grids from the pure topological CN viewpoint. It should be remarked that within this
group [148,153] have used weighted links, but not to represent electrical principles. Another interesting
finding is that most of the works using unweighted graphs analyze the node degree distribution of the
network so as to determine the class of network that best fits the power grid under study, e.g., scale-free
network or random network. On the contrary, most of the hybrid approaches, which include power flow
models and/or electric-based metrics, make use of weighted graphs. However, they do not undertake
any statistical analysis of the node degree distribution, even though it might exhibit differences when
compared to that of the unweighted approaches, as Pagani et al. have noted in [14].
A deeper insight into the role of weighted links is provided in [8], where it is noted that in power grids
transmission lines have power flow limits, which must be represented by weights w_ij standing for the flow
limit on line l_ij ≡ l(i, j) linking nodes i and j. The authors in [8] argue that, when applying CN analysis
to power grids, the electrical power grid must be represented as a weighted and directed graph
G = (N, L, W), where W is the set of weight elements w_ij. This is in contrast to the general approach
stated in Section 2.1, where the graph is defined as G = (N, L) and does not require weighted links.
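A minimal sketch of that weighted, directed representation G = (N, L, W), using a plain adjacency dictionary; the bus names and flow limits below are hypothetical placeholders, not data from any of the surveyed grids.

```python
# Directed edges (i, j) mapped to their weight w_ij, here interpreted as
# the flow limit of line l(i, j). All names and values are illustrative.
W = {
    ("g1", "t1"): 2.0,   # generator bus -> transmission bus
    ("t1", "d1"): 1.5,   # transmission bus -> load bus
    ("t1", "d2"): 0.8,
}
nodes = {bus for edge in W for bus in edge}   # the node set N

def out_capacity(i):
    """Total flow limit of the lines leaving bus i."""
    return sum(w for (s, _), w in W.items() if s == i)

print(sorted(nodes))
print(out_capacity("t1"))
```

Direction matters here: out_capacity("t1") only counts lines leaving t1, which an undirected, unweighted model G = (N, L) could not express.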
4.3. Analysis of Vulnerability Metrics
In Table 3one of the criteria to classify the selected references is the class of utilized metrics.
Performance metrics can be grouped into two classes in a similar fashion to [12]. The first group
of metrics corresponds to the topology-based performance measurements, which quantify the grid
performance based only on the underlying structure of the network. The most frequent topology metrics are the average
path length, node degree distribution, betweenness, size of the largest component, and network efficiency,
the rest having appeared to a lesser extent. In particular, regarding the above listed metrics:
1. The average path length formulated in Equation (4) has been utilized to analyze the topological
aspects of the Italian power grid [148]; the topological vulnerability of three European electric
power grids (Spanish 400 kV grid, French 400 kV grid, and Italian 380 kV grid) [152]; the
vulnerability of the European Nordic power grid (which includes the national transmission grids
of Sweden, Finland, Norway and the main part of Denmark) [111]; the western USA power
grid [70,111,154]; the topological structure and static tolerance to errors and attacks of thirty-three
different European power grids [20]; a synthetic Watts-Strogatz power grid [169]; the IEEE 300
power grid [155,173]; the medium and low-voltage grids in northern Netherlands [117]; the IEEE
118 bus test systems [172]; and high-voltage power grids in China [7,174].
2. The node degree distribution has been used in the study of the August 2003 blackout in
US [147]; the topological aspects of the Italian power grid [148]; the dependability of North
American eastern and western power transmission grids [151]; the topological properties of Nordic
power grid and US Western States Electricity Transmission (WECC) grid [111]; the robustness
of the European electricity transmission grid [19,20,155,156]; the US power grid [84]; the
medium- and low-voltage grids in northern Netherlands [117]; and real high-voltage power grids in
China [170,174].
3. The betweenness from Equation (9) has been utilized to analyze the August 2003 blackout in US
[147]; the topological aspects of the Italian power grid [148]; the IEEE 300-bus grid [181]; the
medium- and low-voltage grids in northern Netherlands [117]; and a real high-voltage power grid
in China [174].
4. The size of the largest component, i.e., the fraction of nodes belonging to the largest connected
sub-network (in which there is at least one path between any two nodes), has
been used in [19,20,111,117,141,142,144,150].
5. The network efficiency has been used to analyze the Italian power grid [145,148], the North
American power grid [153], the European power grid [155], several synthetic IEEE power
grids [184,188], a real high-voltage power grid in Italy [35], to name the most cited.
The second class of metrics is based on power flow models and on novel electric metrics inspired
by their topological counterparts. Although there are many models in the literature aimed at capturing
power flow redistribution after node/link failure [12], we mention here those that have appeared in this
review, which correspond to the most cited contributions with the highest scientific relevance:
the direct current-based OPA models [12,36,194], the AC-based power flow model [189] and its DC
approximation [8,35,78,85,188]. Based on these power flow redistribution models it is possible to
compute flow-based performance and vulnerability metrics. The most used electric metrics in our
review have been:
1. The net-ability, which has been used in [8,35,78,85,188].
2. The electrical degree centrality C_D^E(i) of a node i, used in [8,35,78,85,188].
3. The electrical betweenness centrality C_B^E, used in [8,35,78,85,188].
4. The entropic degree Siof a node i, used in [8,35,78].
5. The effective graph resistance RG, utilized in [5,187].
A key point to note about electrical-based metrics is that they capture important features of power grids
(not considered by topological approaches), and are more effective in identifying critical components in
a power grid [8], which is crucial when exploring its robustness. Thus, in this respect, net-ability is used
instead of efficiency, entropic degree is used instead of node degree, or electrical betweenness is used