© IFIP, 2018. This is the author's version of the work. It is posted here by permission of IFIP for your personal use. Not for redistribution. The definitive version is to appear in the 22nd International Conference on Optical Network Design and Modeling (ONDM), May 14-17, 2018, Dublin, Ireland.
Control Plane Robustness in Software-Defined
Optical Networks under Targeted Fiber Cuts
Jing Zhu∗,†, Carlos Natalino†, Lena Wosinska†, Marija Furdek†, Zuqing Zhu∗
∗School of Information Science and Technology, University of Science and Technology of China, Hefei, China
†Optical Networks Laboratory (ONLab), KTH Royal Institute of Technology, Sweden
∗Email: zhujinng@mail.ustc.edu.cn, {carlosns, wosinska, marifur}@kth.se, zqzhu@ieee.org
Abstract—The Software-Defined Optical Networking (SDON)
paradigm enables programmable, adaptive and application-
aware backbone networks. However, aside from the manifold
advantages, the centralized Network Control and Management
in SDONs also gives rise to a number of security concerns at dif-
ferent network layers. As communication between the control and
the data plane devices in an SDON utilizes the common optical
fiber infrastructure, it can be subject to various targeted attacks
aimed at disabling the underlying optical network infrastructure
and disrupting the services running in the network.
In this work, we focus on the threats from targeted fiber
cuts to the control plane (CP) robustness in an SDON under
different link cut attack scenarios with diverse damaging poten-
tial, modeled through a newly defined link criticality measure
based on the routing of control paths. To quantify the robustness
of a particular CP realization, we propose a metric called
Average Control Plane Connectivity (ACPC) and analyze the
CP robustness for a varying number of controller instances in
master/slave configuration. Simulation results indicate that CP
enhancements in terms of controller addition do not necessarily
yield linear improvements in CP robustness but require tailored
CP design strategies.
Index Terms—Control plane robustness, Physical-layer secu-
rity, Software-defined optical networks, Targeted fiber cuts.
I. INTRODUCTION
Optical backbone networks are critical communication in-
frastructure supporting a variety of vital network services.
In order to enable programmable, scalable and flexible net-
work control and management (NC&M), Software-Defined
Networking (SDN) has been proposed to decouple the network
control and data planes (CP and DP), such that the NC&M
tasks are handled by logically centralized controllers while the
DP devices only take care of packet forwarding/data transmis-
sion [1], [2]. Hence, implementing Software-Defined Optical
Networks (SDONs) enables flexible and programmable optical
backbone networks, and significantly shortens the time-to-
market of new services [3], [4]. Similar to its packet-based
counterparts, the CP of an SDON uses centralized controllers
to collect the statuses and configure the operation of DP
devices (e.g., optical transponders and switches) [5].
One of the essential aspects in SDON planning is the CP
design [6]. As each fiber link in an SDON can carry Tb/s
traffic, a well-designed CP should be able to simultaneously
satisfy the requirements on low communication latency and
high reliability of the control channels [7]. In general, the CP
comprises one or multiple controller instances and each of
them controls a subset of DP devices. Each DP device can
connect to multiple controller instances, typically two, with
one serving as master and the other as slave (Fig. 1). Several
studies have addressed resilient SDN control plane design [6],
[8]–[11]. Nevertheless, all of them considered CP disruptions
due to random failures only, whereas the damage caused by
deliberate attacks has not been investigated yet.
Optical networks are subject to physical-layer vulnerabilities
which can be leveraged by malicious users to launch attacks
aimed at service disruption [12]. In SDON, such attacks can
affect not only the data plane communication, but may seri-
ously disrupt the control plane as well. As the control plane is the network 'brain', its robustness is an important prerequisite for robust SDON deployment. The damaging potential of attacks can be boosted by careful design of the attack technique, e.g., by
targeting the most critical components. In particular, we focus
on deliberate fiber cut attacks where an attacker cuts the most
critical links in an effort to maximize the CP communication
disruption. Targeted fiber cuts have a larger disruptive effect
than random failures [13], and are more challenging to address
through careful network design.
In our previous work [14], we have investigated the robust-
ness of data plane communication to targeted link cuts. In this
paper, we consider the threats from targeted fiber cuts to the
control plane and evaluate the CP robustness in an SDON from
the perspectives of connectivity and transmission distance. Our
evaluation is based on two newly proposed metrics: (i) a link
criticality measure that quantifies the importance of links to
support the CP connections and (ii) the Average Control Plane
Connectivity (ACPC) that evaluates the robustness of a specific
CP realization (i.e., the controller placement and the routing of
control channels over the optical fiber topology). We consider
Fig. 1. An example of a software-defined optical network: CP controllers exert master/slave control over the DP switches.
two attack scenarios: one, where the attacker is not aware of
the CP realization and, thus, uses general knowledge of the
topology to select the targeted links to cut; and the other, where
the attacker is aware of the CP realization and, thus, selects the
most critical fibers to cut. Extensive simulation experiments
are conducted for three realistic backbone topologies, where
we analyze the CP robustness depending on the number of
controller instances in the network and assess whether adding
master/slave controller configuration to the switches can en-
hance the CP robustness. Results show that adding controller
instances or considering master/slave configuration might not
always lead to an increase in CP robustness, especially when
the knowledge of the CP realization is available to the attacker.
The remainder of the paper is organized as follows. Section
II reviews the related work. The proposed control plane
connectivity measures are presented in Section III. Sections
IV and V analyze network performance in the two considered
attack scenarios, while Section VI concludes the paper.
II. RELATED WORK
Since the inception of SDN, there have been intensive
efforts on control plane design. The fundamental problem of
CP design, i.e., how many controllers to deploy and where
to place them, has been addressed in [15]. A comprehensive
survey on fault management in SDN can be found in [16].
Control plane resiliency was investigated under various failure
scenarios in [6], [8]–[11]. In [8], the authors proposed a
method for controller placement aimed at maximizing the
number of protected SDN switches. The work in [9] compared
several controller placement schemes in terms of CP connec-
tivity. The study in [10] considered failures of fiber links,
switches and controllers, and designed an algorithm for Pareto-
optimal controller placement with load balancing. Resilience to cascading controller failures was addressed in [11] by
designing several algorithms to balance and redistribute the
load among controllers. In [6], a survivable CP establishment
scheme was proposed to protect SDONs against single node
failures, utilizing a mutual backup model for the controllers.
However, these studies did not consider failure scenarios
caused by malicious man-made attacks.
Robustness of large-scale network topologies in the pres-
ence of targeted attacks was evaluated in [17]. Santos et
al. [18] investigated the identification of critical nodes in a
telecommunication network, i.e., nodes whose removal would
minimize the network connectivity. The work in [14] studied
the robustness of optical content delivery networks in the
presence of targeted fiber cuts, gauged by average content
accessibility. As the aforementioned investigations only ad-
dressed survivability issues concerning the data plane, they
cannot be directly mapped to assess the control plane ro-
bustness in SDONs. Attacks aimed at disabling control plane
elements were investigated in [19], where the authors proposed
a cost-efficient controller assignment algorithm to protect
an SDN with multiple controllers from Byzantine attacks
targeting controllers and control channels. They assumed that
the attacker has complete knowledge about the CP realization,
i.e., the controller location and connectivity. The assumption
of complete CP realization knowledge might not always be
applicable because network operators typically try to avoid disclosing operational details. In this paper, we consider both
cases, i.e., the scenario where the attacker is aware only of
the network topology, and the case where the attacker is also
aware of the CP realization.
III. CONTROL PLANE CONNECTIVITY MEASURES
We consider a backbone SDON with topology modeled as a graph G(V, E), where V denotes the set of nodes hosting switching elements and E the set of undirected fiber links. We assume that the CP and DP of the SDON are supported by the same physical infrastructure, which means that the controllers are co-located with the optical switches, while the control channels share fiber links with data plane connections (i.e., in-band control). There are |U| controller instances in the SDON, and the set U (U ⊂ V) represents their locations. To realize CP resiliency, each controller manages several optical switches, and each switch may connect to one or two controller instances, i.e., one master and one slave [7]. To reduce the control latency, each optical switch is assumed to connect to the physically closest controller instances.
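For illustration, this closest-controller assignment rule can be sketched in a few lines. The snippet below is only an expository rendering in Python/networkx (the evaluation in this paper uses a custom Java-based tool); the function name, the `length` edge attribute, and the returned data structure are assumptions made for the example.

```python
import networkx as nx

def assign_controllers(G, controllers, use_slave=False, weight="length"):
    """Assign each switch its closest controller(s) by shortest-path distance.

    Returns a dict mapping each node (switch) to a list of controller
    nodes: the master first, then the slave if use_slave is True.
    Assumes G is connected and the edge attribute `weight` holds fiber lengths.
    """
    assignment = {}
    for v in G.nodes:
        # Rank candidate controller locations by their distance from switch v.
        ranked = sorted(
            controllers,
            key=lambda u: nx.shortest_path_length(G, v, u, weight=weight),
        )
        k = 2 if use_slave and len(ranked) > 1 else 1
        assignment[v] = ranked[:k]
    return assignment
```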
In a targeted fiber link cut attack, the attacker deliberately
chooses certain fiber links to cut. Link selection can be guided
by different policies, depending on the knowledge and the aim
of the attacker. However, a generalized strategy of an attacker
would be to try to maximize the level of resulting disruption
at a minimal effort, by selecting the links deemed most
critical. Link criticality can be evaluated according to different
criteria, such as topological properties or the number of carried
connections. Link cuts can be performed simultaneously or
sequentially. In this paper, we examine the former approach
of simultaneously cutting a certain number of the most critical
links, under different criticality considerations. If the set of
intact fiber links upon an attack is denoted by E', the attack intensity can be quantified by the link cut ratio r:

r = \frac{|E| - |E'|}{|E|}.    (1)
Note that the targeted fiber cuts can disrupt the connectivity
between switches and controllers, among the switches, and
among the controllers. We focus on the case where the con-
nectivity between switches and controllers is disrupted, which
affects CP robustness in the SDON, i.e., the survivability of
the control channels [6]. Here, we assume that the connectivity
between a switch and its controller is lost if no path exists
between them in G(V, E') after the attack.
The following notation is used throughout the paper to assist
CP robustness evaluation in SDONs.
• x_{u,v}: boolean variable that equals 1 if the optical switch at node v connects to the controller at node u, and 0 otherwise.
• P_{u,v}: the shortest path between the controller at node u and the optical switch at node v before the attack.
• z_{u,v,e}: boolean variable that equals 1 if link e is traversed by P_{u,v}, and 0 otherwise.
• y_{u,v,r}: boolean variable that equals 0 if, after an attack with cut ratio r, the connectivity between the optical switch at node v and the controller at node u is lost, and 1 otherwise.
• P_{u,v,r}: the shortest path between the controller at node u and the optical switch at node v after an attack with cut ratio r.
• d_{u,v,r}: the transmission distance of path P_{u,v,r}.
Using these notations, we define three metrics to measure link
criticality with respect to the control plane, and to evaluate the
CP robustness upon an attack with cut ratio r.
1) Link Criticality (L_c)
If the attacker is aware of the CP realization, the cut fiber links can be selected according to their importance to the CP. So far, there are no metrics that define the criticality of a link based on its importance to the CP. Therefore, we define the link criticality metric L_c to quantify the importance of each link in the network proportionally to the number of traversing control channels. Formally, the metric is defined as:

L_c(e) = \sum_{u \in U, v \in V} x_{u,v} \cdot z_{u,v,e}.    (2)
2) Average Control Plane Connectivity (ACPC)
The ACPC quantifies the portion of network switches that can still connect to any of their controller instances (master or slave) after an attack. Formally, the ACPC upon an attack with cut ratio r can be calculated as:

ACPC(r) = \frac{\sum_{u \in U, v \in V} x_{u,v} \cdot y_{u,v,r}}{|V|}.    (3)
3) Average Transmission Distance (ATD)
Besides connectivity, the latency of control channels is also a critical enabler of the efficient operation of an SDON. In optical networks, a significant portion of the latency is related to the propagation of the optical signal in the fiber. Hence, the transmission distance is a major factor in the latency. We define the ATD as:

ATD(r) = \frac{\sum_{u \in U, v \in V} d_{u,v,r} \cdot x_{u,v} \cdot y_{u,v,r}}{|V|}.    (4)
Fig. 3. L_c(e) in the Sprint topology with controllers placed according to NBC, shown per link index (1-18) for NBC(1) and NBC(2).
Note that ATD is computed only for working control paths,
i.e., those disrupted by the attack are not taken into account.
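One plausible way to evaluate Eqs. (3) and (4) for a single attack instance is sketched below (Python/networkx, for illustration only). It reads the metrics as follows: a switch counts as connected if any of its assigned controllers remains reachable in G(V, E'), its post-attack distance is taken to the closest surviving controller, and disrupted switches contribute zero to the ATD sum while the denominator remains |V|, as in Eq. (4).

```python
import networkx as nx

def acpc_and_atd(G, assignment, cut_links, weight="length"):
    """Evaluate ACPC(r) and ATD(r) after removing `cut_links` from G.

    `assignment` maps each switch to its controller node(s) (master, and
    slave if configured). `cut_links` is a collection of (u, v) tuples.
    Returns the pair (acpc, atd).
    """
    G_after = G.copy()
    G_after.remove_edges_from(cut_links)  # the surviving graph G(V, E')
    connected, total_dist = 0, 0.0
    for v, ctrls in assignment.items():
        reachable = [u for u in ctrls if nx.has_path(G_after, u, v)]
        if reachable:  # y_{u,v,r} = 1 for at least one assigned controller
            connected += 1
            total_dist += min(
                nx.shortest_path_length(G_after, u, v, weight=weight)
                for u in reachable
            )  # d_{u,v,r} to the closest surviving controller
    n = G.number_of_nodes()
    return connected / n, total_dist / n
```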
IV. ATTACK SCENARIO WITH NO CP REALIZATION KNOWLEDGE
Our simulation experiments are carried out using a custom-
built Java-based tool that leverages GraphStream [20] for
graph manipulation. We consider three realistic topologies
whose characteristics are summarized in Table I. We consider
two controller placement schemes, i.e., the Node Degree Cen-
trality (NDC) and the Node Betweenness Centrality (NBC).
The NDC scheme places the controller instances at the nodes with the highest nodal degree, while the NBC scheme places them at the nodes with the highest node betweenness centrality, i.e., the number of all-node-pair shortest paths traversing a node [21]. We first analyze how the number
of controller instances in an SDON affects the CP robustness
in the case where each optical switch only connects to its
master controller (i.e., no slave controller is used). Then,
we investigate whether considering master/slave controller
configuration improves the CP robustness.
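The two placement schemes reduce to a centrality ranking; a minimal sketch is given below (Python/networkx, for illustration only; the tie-breaking rule among equally central nodes is not stated in the paper and is left arbitrary here).

```python
import networkx as nx

def place_controllers(G, k, scheme="NBC"):
    """Select k controller locations by node centrality.

    NDC: nodes with the highest nodal degree.
    NBC: nodes with the highest betweenness centrality, i.e. the number
    of all-node-pair shortest paths traversing the node.
    """
    if scheme == "NDC":
        score = dict(G.degree())
    elif scheme == "NBC":
        score = nx.betweenness_centrality(G)
    else:
        raise ValueError("scheme must be 'NDC' or 'NBC'")
    # Rank nodes by decreasing centrality; ties broken arbitrarily.
    return sorted(G.nodes, key=lambda n: score[n], reverse=True)[:k]
```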
This section considers the less sophisticated attack scenario, denoted the unavailable knowledge scenario (UKS), in which the attacker knows the physical network topology G(V, E) but does not know the details of the CP realization.
TABLE I
TOPOLOGY CHARACTERISTICS
Topology Nodes Links Degree (±Deviation) Diameter (hops)
Sprint [22] 11 18 3.27 (±1.42) 4
USNET [23] 30 36 2.4 (±0.6) 11
Germany [24] 50 88 3.5 (±1.04) 9
Fig. 2. Average Control Plane Connectivity in the UKS scenario with single switch-controller assignment: (a) Sprint, (b) USNET, (c) Germany. Each panel plots ACPC versus the cut ratio r for NDC(1)-(3) and NBC(1)-(3).
TABLE II
LINKS TO BE CUT WITH r = 0.5 IN SPRINT (CP-CRITICAL LINKS IN RED)
Scenario Link Index
UKS(1) [1, 2, 3, 5, 8, 12, 14, 15, 17]
UKS(2) [1, 2, 3, 5, 8, 12, 14, 15, 17]
According to [21], one effective scheme for selecting the most
critical links is utilizing the link betweenness centrality, which
is defined as the number of the shortest paths between all
node pairs that traverse a specific link. Hence, in UKS, we
assume that the attacker aims to maximize the disruption potential of the attack by targeting the fiber links with the highest link betweenness centrality.
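Under this assumption, the UKS cut set can be generated as sketched below (illustration only; the number of cut links is taken as ⌊r · |E|⌋, mirroring the selection rule stated for AKS in Section V).

```python
import math
import networkx as nx

def uks_targets(G, r):
    """UKS attacker: cut the links with the highest edge betweenness
    centrality, using only knowledge of the topology G(V, E)."""
    ebc = nx.edge_betweenness_centrality(G)
    n_cuts = math.floor(r * G.number_of_edges())
    # Rank links by decreasing betweenness centrality and keep the top ones.
    ranked = sorted(ebc, key=ebc.get, reverse=True)
    return ranked[:n_cuts]
```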
Fig. 2 shows the ACPC in the UKS scenario with single switch-controller assignment. This means that each switch is
statically assigned to one (the closest) controller, and does
not connect to any other controller even in the presence of
attacks. Here, the curves in each plot correspond to a controller
placement scheme with a certain number of controllers, e.g.,
“NDC(1)” represents the case where the SDON has one
controller placed according to the NDC scheme. We observe
that for a given number of controllers and a placement scheme,
ACPC decreases with higher cut ratio r until it reaches the minimum, where each controller is reachable only by the optical switch(es) placed at the same node. However,
there is a large variation in the impact of link cuts depending
on the network topology. For instance, in USNET, when there
is one controller, a drastic ACPC decrease occurs at around
r = 0.2, while for Sprint and Germany the ACPC does not drop significantly until about r = 0.4. The lower connectivity
of USNET (as listed in Table I) makes this topology more
vulnerable to targeted fiber cuts.
Interestingly, in the UKS scenario with static single switch-controller assignment, a larger number of controllers does not guarantee a higher ACPC, and in some cases the ACPC can even degrade as controllers are added. For example, in Fig. 2(a), when up to 7 links are cut (r ≤ 0.39), the ACPC results are the same regardless of the number of controllers for both placement strategies. When r is within [0.44, 0.61], the ACPC for NBC(1) is higher than that for NBC(2). The same phenomenon can be observed by comparing the ACPC results for NDC(1) and NDC(3) at r = 0.67. These situations occur because when a controller
is added to the network, the routing of control paths changes
significantly. The control channels tend to be distributed more
evenly over the links, which makes targeted attacks based on
link betweenness centrality more effective.
To verify the above analysis, Fig. 3 shows the link criticality with respect to the control plane, L_c(e), in the Sprint network for the scenarios that place 1 and 2 controllers with the NBC scheme. Table II lists the links selected by link betweenness centrality with r = 0.5 in the UKS case. By cross-referencing the results in Fig. 3 and Table II, we find that link betweenness centrality selects 6 and 7 links that are truly critical for the control plane in the two scenarios, respectively. Hence, placing more controllers in an SDON that statically assigns a single controller per switch does not necessarily improve the SDON robustness.
The ATD values for UKS with single switch-controller assignment are plotted in Fig. 4. A general observation is that the CP needs to use longer paths as links are cut, leading to an increase in ATD. Recall that only working control paths are accounted for: by ignoring the disrupted control paths, the ATD captures the transmission distance of the control paths that remain connected. The drops in ATD shown in Fig. 4 are associated with drops in ACPC at the same cut ratio, i.e., cutting links tends to disrupt the control paths of the farthest switch(es), which leads to a decrease in the ATD for the remaining working control paths. For instance, in Fig. 4(a), when r increases from 0.06 to 0.11, the value of ACPC stays at 1 although the ATD increases. Nevertheless, when r changes from 0.39 to 0.44, the ATD for both NBC(2) and NBC(3) decreases due to a drop in ACPC, reflecting the fact that the topology is no longer fully connected and the surviving control channels can only take relatively shorter paths.
We also analyze whether CP robustness can be improved
by considering a master/slave controller configuration for each
optical switch. Fig. 5 shows the ACPC for the cases with single or master/slave switch-controller assignment. The number of controller instances placed in the network is set to 3. In the master/slave controller configuration, two controllers are assigned to each optical switch, and every controller instance can simultaneously act as the master for one set of switches and the slave for another set.
Fig. 4. Average Transmission Distance in the UKS scenario with single switch-controller assignment: (a) Sprint, (b) USNET, (c) Germany. Each panel plots ATD (km) versus the cut ratio r for NDC(1)-(3) and NBC(1)-(3).
Fig. 5. Comparison of the ACPC for UKS with single and master/slave switch-controller assignment (3 controllers in the SDON): (a) Sprint, (b) USNET, (c) Germany. Each panel plots ACPC versus the cut ratio r for NDC and NBC without (w/o) and with (w) a slave controller.
Fig. 6. Results on ACPC for AKS with single switch-controller assignment: (a) Sprint, (b) USNET, (c) Germany. Each panel plots ACPC versus the cut ratio r for NDC(1)-(3) and NBC(1)-(3).
Fig. 7. ACPC for the AKS scenario with single and master/slave switch-controller assignment (3 controllers in the SDON): (a) Sprint, (b) USNET, (c) Germany. Each panel plots ACPC versus the cut ratio r for NDC and NBC without (w/o) and with (w) a slave controller.
Results indicate that considering
master/slave controller assignment tends to increase the ACPC.
However, such benefits are observed at different cut ratios
depending on the network topology. These results suggest
that by considering master/slave controller configuration
in UKS, the ACPC can be enhanced. This can be easily
understood since in UKS, the importance of links targeted by
the attack is independent of the existence of slave controllers.
V. ATTACK SCENARIO WITH FULL CP REALIZATION KNOWLEDGE
In this section, we analyze the available knowledge scenario (AKS), where we assume that the attacker knows the details of the CP realization, is able to calculate L_c(e), and selects the ⌊r · |E|⌋ links with the highest value of L_c(e) to cut. Apart from the link selection strategy, this experiment follows the same setup as in Section IV.
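For comparison with the UKS sketch in Section IV, the AKS selection can be written as below, reusing the hypothetical link-criticality map of Eq. (2); links that carry no control channel have L_c(e) = 0 and are targeted last.

```python
import math

def aks_targets(G, Lc, r):
    """AKS attacker: cut the floor(r * |E|) links with the highest L_c(e).

    `Lc` is the link-criticality map of Eq. (2), keyed by frozenset edges.
    """
    n_cuts = math.floor(r * G.number_of_edges())
    # Rank links by decreasing criticality to the control plane.
    ranked = sorted(G.edges(), key=lambda e: Lc.get(frozenset(e), 0),
                    reverse=True)
    return ranked[:n_cuts]
```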
Fig. 6 shows the obtained ACPC for AKS with statically assigned single controllers. In AKS, the general trend of ACPC with respect to r is similar to that of the UKS scenario. When
comparing the curves for different number of controllers, it can
be observed that adding more controllers does not always
improve the ACPC. However, gains are visible in most cases.
For instance, for Sprint and USNET, higher gains are observed
when moving from 1 to 2 controller instances. Further addition
of controllers still provides gains, but less pronounced. For
example, the results in Fig. 6(a) indicate that for r = 0.17
and the NBC placement scheme, the ACPC decreases when
the number of controllers increases from 2 to 3 (compare the
curves of NBC(2) and NBC(3)). Moreover, in Fig. 6(a), the
ACPC obtained for NBC(2) and NBC(3) is the same when r
changes within [0.22, 0.39], while for r = 0.5, the ACPC for
NBC(2) is higher than that of NBC(3). This phenomenon can
be explained as follows. When more controllers are placed
in the SDON, the control channels of switches to different
controllers may traverse the same links. Hence, when these
links are cut, multiple control channels are interrupted. In Fig.
6, we observe that this phenomenon occurs more frequently in
Sprint and Germany than in USNET. This is because they have
larger deviations in nodal degree, which makes link sharing
among control channels more common.
The ATD for AKS follows similar trends as in the UKS
case and is omitted for conciseness. Fig. 7 shows the ACPC
for scenarios with single and master/slave switch-controller
assignment when there are 3 controllers in the SDON. The
results indicate that considering master/slave controller con-
figuration might not improve ACPC if the attacker has
the knowledge of the CP realization. At certain values of r,
adding slave controllers can even degrade ACPC. For instance,
when r ranges within [0.06, 0.28] in Fig. 7(a), there is no improvement in ACPC for either controller placement scheme
after considering a master/slave controller configuration. This
can be explained by the fact that considering master/slave
controller configuration generates more control channels and
in turn makes certain links more vulnerable to targeted fiber
cuts by increasing their L_c(e).
VI. CONCLUSION
The paper considers the threats from targeted fiber cuts
and evaluates control plane robustness in SDONs in terms
of Average Control Plane Connectivity (ACPC) and Average
Transmission Distance (ATD). Two attack scenarios were
considered with different extents of control plane realization
knowledge available to the attacker, and the impact of the
number of controller instances on CP robustness was assessed.
Moreover, two controller assignment configurations were con-
sidered: single or master/slave switch-controller assignment.
For attacks with unknown CP realization and single controller
assignment, adding more controllers does not guarantee an
increase in ACPC, but adopting master/slave controller config-
uration benefits the CP robustness. When the attacker has the
CP realization details, considering master/slave configuration
or adding more controllers does not ensure improved ACPC.
The extensive simulation results indicate a strong need to protect the information related to the CP realization.
ACKNOWLEDGMENTS
This work was supported in part by the NSFC Project
61701472, CAS Key Project (QYZDY-SSW-JSC003), NGB-
WMCN Key Project (2017ZX03001019-004), China Postdoc-
toral Science Foundation (2016M602031), and Fundamental
Research Funds for the Central Universities (WK2100060021).
C. Natalino, L. Wosinska and M. Furdek were supported
in part by the RESyST project funded by the Unity through
Knowledge Fund of the Croatian Ministry of Science, grant
agreement no. 1/17, and the COST Action 15127 RECODIS.
REFERENCES
[1] D. Kreutz, F. M. V. Ramos, P. E. Veríssimo, C. E. Rothenberg,
S. Azodolmolky, and S. Uhlig, “Software-defined networking: A com-
prehensive survey,” Proc. IEEE, vol. 103, pp. 14–76, Jan. 2015.
[2] S. Li, D. Hu, W. Fang, S. Ma, C. Chen, H. Huang, and Z. Zhu, “Pro-
tocol oblivious forwarding (POF): Software-defined networking with
enhanced programmability,” IEEE Netw., vol. 31, pp. 12–20, Mar./Apr.
2017.
[3] Z. Zhu, C. Chen, S. Ma, L. Liu, X. Feng, and S. Yoo, “Demonstration of
cooperative resource allocation in an OpenFlow-controlled multidomain
and multinational SD-EON testbed,” J. Lightw. Technol., vol. 33, pp.
1508–1514, Apr. 2015.
[4] C. Chen, X. Chen, M. Zhang, S. Ma, Y. Shao, S. Li, M. S. Suleiman, and
Z. Zhu, “Demonstrations of efficient online spectrum defragmentation in
software-defined elastic optical networks,” J. Lightw. Technol., vol. 32,
pp. 4701–4711, Dec. 2014.
[5] Z. Zhu, X. Chen, C. Chen, S. Ma, M. Zhang, L. Liu, and S. Yoo,
“OpenFlow-assisted online defragmentation in single-/multi-domain
software-defined elastic optical networks,” J. Opt. Commun. Netw.,
vol. 7, pp. A7–A15, Jan. 2015.
[6] B. Zhao, X. Chen, J. Zhu, and Z. Zhu, “Survivable control plane
establishment with live control service backup and migration in SD-
EONs,” J. Opt. Commun. Netw., vol. 8, pp. 371–381, Jun. 2016.
[7] X. Chen, B. Zhao, S. Ma, C. Chen, D. Hu, W. Zhou, and Z. Zhu,
“Leveraging master-slave openflow controller arrangement to improve
control plane resiliency in SD-EONs,” Opt. Express, vol. 23, pp. 7550–
7558, Mar. 2015.
[8] N. Beheshti and Y. Zhang, “Fast failover for control traffic in software-
defined networks,” in Proc. of GLOBECOM, Dec. 2012, pp. 2665–2670.
[9] Y. Hu, W. Wendong, X. Gong, X. Que, and C. Shiduan, “Reliability-
aware controller placement for software-defined networks,” in Proc. of
IFIP/IEEE IM, May 2013, pp. 672–675.
[10] D. Hock, M. Hartmann, S. Gebert, M. Jarschel, T. Zinner, and P. Tran-
Gia, “Pareto-optimal resilient controller placement in SDN-based core
networks,” in Proc. of ITC, Sept. 2013, pp. 1–9.
[11] G. Yao, J. Bi, and L. Guo, “On the cascading failures of multi-controllers
in software defined networks,” in Proc. of ICNP, Oct. 2013, pp. 1–2.
[12] N. Skorin-Kapov, M. Furdek, S. Zsigmond, and L. Wosinska, “Physical-
layer security in evolving optical networks,” IEEE Commun. Mag.,
vol. 54, pp. 110–117, Aug. 2016.
[13] R. Albert, H. Jeong, and A. Barabasi, “Error and attack tolerance of
complex networks,” Nature, vol. 406, pp. 378–382, Jul. 2000.
[14] C. Natalino, A. Yayimli, L. Wosinska, and M. Furdek, “Content acces-
sibility in optical cloud networks under targeted link cuts,” in Proc. of
ONDM, May 2017, pp. 1–6.
[15] B. Heller, R. Sherwood, and N. McKeown, “The controller placement
problem,” in Proc. of HotSDN, Aug. 2012, pp. 7–12.
[16] P. Fonseca and E. Mota, “A survey on fault management in software-
defined networks,” IEEE Commun. Surveys Tut., vol. 19, pp. 2284–2321,
Fourth Quarter 2017.
[17] S. Iyer, T. Killingback, B. Sundaram, and Z. Wang, “Attack robustness
and centrality of complex networks,” PLoS ONE, vol. 8, pp. 1–17, Apr.
2013.
[18] D. Santos, A. Sousa, and P. Monteiro, “Compact models for critical
node detection in telecommunication networks,” in Proc. of INOC, Feb.
2017, pp. 1–10.
[19] H. Li, P. Li, S. Guo, and A. Nayak, “Byzantine-resilient secure software-
defined networks with multiple controllers in cloud,” IEEE Trans. Cloud
Comput., vol. 2, pp. 436–447, Oct. 2014.
[20] Y. Pigné, A. Dutot, F. Guinand, and D. Olivier, “GraphStream: A tool
for bridging the gap between complex systems and dynamic graphs,” in
Proc. of ECCS, Sep. 2007, pp. 1–10.
[21] D. Rueda, E. Calle, and J. Marzo, “Robustness comparison of 15 real
telecommunication networks: Structural and centrality measurements,”
J. Netw. Syst. Manag., vol. 25, pp. 269–289, Apr. 2017.
[22] S. Knight, H. X. Nguyen, N. Falkner, R. Bowden, and M. Roughan,
“The internet topology zoo,” IEEE J. Sel. Areas Commun., vol. 29, pp.
1765–1775, Oct. 2011.
[23] J. Simmons, Optical Network Design and Planning, 2nd ed. Springer,
2014.
[24] S. Orlowski, M. Pióro, A. Tomaszewski, and R. Wessäly, “SNDlib 1.0: Survivable network design library,” in Proc. of INOC, Apr. 2007, pp. 1–11.