Automated Bootstrapping of
A Fault-Resilient In-Band Control Plane
Ermin Sakic
Mirza Avdic
Siemens AG
Technical University Munich
Amaury Van Bemten
Wolfgang Kellerer
Technical University Munich
Adoption of Software-dened Networking (SDN) in critical envi-
ronments, such as factory automation, avionics and smart-grid
networks, will require in-band control. In such networks, the out-
of-band control model, prevalent in data center deployments, is
inapplicable due to high wiring costs and installation eorts. Exist-
ing designs for seamlessly enabling in-band control plane cater only
for single-controller operation, assume proprietary switch modi-
cations, and/or require a high number of manual conguration
steps, making them non-resilient to failures and hard to deploy.
To address these concerns, we design two nearly completely
automated bootstrapping schemes for a multi-controller in-band
network control plane resilient to link, switch, and controller fail-
ures. One assumes hybrid OpenFlow/legacy switches with (R)STP
and the second uses an incremental approach that circumvents
(R)STP. We implement both schemes as OpenDaylight extensions,
and qualitatively evaluate their performance with respect to: the
time required to converge the bootstrapping procedure; the time
required to dynamically extend the network; and the resulting ow
table occupancy. The proposed schemes enable fast bootstrapping
of a robust, in-band managed network with support for seamless re-
dundancy of control ows and network extensions, while ensuring
interoperability with o-the-shelf switches. The presented schemes
were demonstrated successfully in an operational industrial net-
work with critical fail-safe requirements.
CCS CONCEPTS
• Networks → Network management; • Computer systems organization → Dependable and fault-tolerant systems and networks.

KEYWORDS
Software Defined Networking, network bootstrapping, distributed control plane
ACM Reference Format:
Ermin Sakic, Mirza Avdic, Amaury Van Bemten, and Wolfgang Kellerer. 2020. Automated Bootstrapping of A Fault-Resilient In-Band Control Plane. In Symposium on SDN Research (SOSR '20), March 3, 2020, San Jose, CA, USA. ACM, New York, NY, USA, 13 pages.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from
SOSR '20, March 3, 2020, San Jose, CA, USA
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-7101-8/20/03. . . $15.00
1 Introduction
Adoption of the SDN architecture in industrial scenarios, especially as the enabler of Time Sensitive Networking mechanisms, has recently started gaining traction [ ]. Initial SDN deployments (i.e., in data centers) have relied on out-of-band control (OOBC), with which control traffic is exchanged between switches and SDN controller(s) using dedicated links and switch management ports. Bootstrapping with OOBC is trivial, as controllers are capable of controlling the switches without requiring connectivity to any other switches. OOBC, however, is often undesirable in large-scale and critical industrial environments due to the associated CAPEX and installation efforts [4, 18, 39]. For example, deploying an additional network in machine tool manufacturing or drive technology networks is immensely expensive, due to the associated cabling costs for the shielding required for RF/EMI noise suppression. Similarly, avionics and automotive networks benefit greatly from decreased cabling weight [18].
With in-band control (IBC), control traffic is forwarded along the same links used for the data plane traffic. This makes IBC networks the preferable solution from the cost and operational perspectives. However, their implementation is challenging. For example, the currently most popular open-source controller platforms, OpenDaylight (ODL) [ ] and ONOS [ ], provide no mechanisms to support or automate IBC bootstrapping, nor do they allow for protection of control plane traffic against arbitrary link and node failures, both of which are of critical importance in industrial networks [ ]. In summary, the bootstrapping of a reliable IBC softwarized network must address the following challenges:
(1) Establishing robust switch-controller and controller-controller connections using IBC and an initially unconfigured control plane;
(2) Preserving the control plane stability in case of multiple link, switch and/or controller failures;
(3) Support for addition of new nodes and links to the previously reliably bootstrapped IBC network;
(4) Minimization of manual per-device pre-configuration.
The few automated bootstrapping approaches to date address the above-mentioned challenges partially or in isolation of each other. For instance, they neglect the single-controller single-point-of-failure issue [ ], do not guarantee reliable control connections [ ], omit the practical constraint of controllers' state synchronization prior to switch reconfigurations [ ], or assume functional extensions of switch components, e.g., the DHCP client or OpenFlow agent [ ]. This limits their applicability
Design | Auto. Switch Management IP | Auto. Controller List | Resilience of Control Flows | (R)STP Not Required | No Proprietary Switch Extensions
Sharma et al. [39-41] | /(reactive)
Schiff et al. I [36-38] | /(timeout) ✓✓✓
Schiff et al. II [6] | ? ? (timeout/reactive) ✓✓✓
Heise et al. [18] | (proactive replication) ✗ ✗
Canini et al. [7] | (reactive) ?
Su et al. [43] | (reactive) ?
Bentstuen et al. [4] | ✗ ✗
Hybrid Switch (HSW) | (proactive replication)
Hop-by-Hop (HHC) | (proactive replication) ✓✓✓
Table 1: Comparison of HSW and HHC to existing schemes. Additional details are provided in Sec. 9.
in existing infrastructures. Most concerning, the reproducibility of state-of-the-art proposals in real environments is often limited, due to unclear pre-configuration assumptions or closed-source implementations. An overview of the relevant literature is presented in Table 1, with detailed comparisons covered in Sec. 9.
Our contributions. This paper tackles the challenge of bootstrapping a reliable IBC softwarized network while satisfying challenges (1)-(4). Succinctly, our contributions are:
We compare the existing bootstrapping approaches w.r.t. their reliance on legacy protocols, the need for manual configuration and proprietary extensions, the support for control flow resilience and multi-controller configuration.
We propose two novel automated bootstrapping schemes targeting reliable IBC networks with multi-controller support and solving the above-mentioned challenges.
We evaluate the proposed designs in realistic industrial topologies with varying sizes and controller placements w.r.t. (i) the time required to converge the bootstrapping procedure, (ii) the time required to dynamically extend the network, and (iii) the flow table occupancy. We limit our evaluation to the two schemes proposed in this paper, due to the unavailability and/or ambiguity of closed-source state-of-the-art solutions.
We discuss implementation aspects relevant for the successful introduction of the designs in networks equipped with off-the-shelf OpenFlow agents.
In order to ensure reproducibility, we have released the complete source code for both our approaches (a critically missing aspect of existing works)¹.
Organization. Sec. 2 provides the motivation for automated and
reliable in-band bootstrapping. Sec. 3 presents the system model
and requirements and summarizes the terminology. Sec. 4 and 5
describe the two novel bootstrapping schemes. Sec. 6 discusses
selected design & evaluation aspects, Sec. 7 presents the evaluation
methodology and Sec. 8 accordingly discusses the results. Sec. 9
compares the presented designs with prior works. Finally, Sec. 10
concludes the paper.
2 Motivation
(1) Establishing robust switch-controller and controller-controller communication using IBC and an initially unconfigured control plane:
In industrial networks, OOBC imposes a requirement for ruggedized wiring immune to electromagnetic interference and heavy electrical surges typical for electric utility substations, factory floors and traffic control cabinets, further raising the costs. Alternatively,
¹Automated Bootstrapping of In-Band Controlled Software Defined Networks [GitHub]: automated-bootstrapping
IBC is the preferable way of controlling SDN switches. It involves sharing the same physical network for data plane and control plane traffic forwarding. However, in contrast to OOBC, IBC requires a more complex initial setup, because: i) unpredictable data plane traffic could impact the control plane responsiveness; ii) data plane failures may impact the availability of the control plane; and iii) establishing deterministic control plane flows in a non-configured data plane is non-trivial, and typically requires manual configuration.
(2) Preserving the control plane stability in case of an arbitrary number of link, switch and/or controller failures:
To protect against the controller single point of failure, state-of-the-art network controllers deploy and replicate their state across multiple replicas in a strongly and/or eventually consistent manner [ ]. ODL [ ] and ONOS [ ] leverage the strong consistency model relying on RAFT consensus [ ]. RAFT guarantees linearizable reads and writes at all times, including network partition and controller failure scenarios. In the context of a distributed SDN control plane, each controller instance is a RAFT node with a data-store that stores information about the network topology and switches. Prior to allowing data-store modifications, RAFT elects a leader replica, responsible for committing and ordering state updates in the distributed data-store and for their replication to followers. If a leader fails, a new leader is elected, resulting in a temporary downtime [ ]. Only a replica with an up-to-date data-store may become the new leader [20, 27].
Ensuring consistent topology and switch state views relies on controller-to-controller communication, thus making it crucial for the bootstrapping procedure to enable robust control channels by design. The physical dislocation of controllers additionally imposes transmission of the synchronization traffic using IBC flows. ODL's southbound module (OpenFlowPlugin) requires a prior controller-to-controller channel establishment and a successful consensus round even for a "trivial" population of OpenFlow rules during bootstrapping. Similarly, the per-switch controller role (i.e., "backup" or "master") must be decided using consensus among the contending controllers. Hence, the configuration of switches for multi-controller support and the support for controller state synchronization must be targeted in tandem.
Enabling a resilient distributed control plane thus requires the bootstrapping process to ensure resilience in case of the following failures: (1) failures of k out of 2k+1 controllers [14, 34]: no managed switch may be left unmanaged due to individual replica failures; (2) given k+1 disjoint paths between each two controllers, the failure of k switches / links must not result in control plane partitioning [14, 47].
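The replica-count arithmetic behind these requirements can be sketched in a few lines: a RAFT cluster of n = 2k+1 replicas keeps making progress while a quorum of ⌊n/2⌋+1 replicas stays reachable.

```python
def raft_fault_tolerance(n_replicas: int) -> int:
    """Number of replica failures an n-replica RAFT cluster survives:
    progress requires a reachable majority (quorum)."""
    quorum = n_replicas // 2 + 1
    return n_replicas - quorum

# A 2k+1 cluster tolerates exactly k failed replicas:
assert raft_fault_tolerance(3) == 1   # k = 1
assert raft_fault_tolerance(5) == 2   # k = 2
```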
(3) Support for node and link extensions of the previously reliably
bootstrapped IBC network:
Dynamic network topology changes, i.e., attachment of new end-devices (host discovery) and packet forwarding devices to existing managed networks, should be natively supported by the proposed designs. In particular, dynamic plugging and unplugging of machine networks (i.e., network cells) in industrial backbones, reconfiguration of modular machine network assemblies and addition/removal of production line networks are typical Industry 4.0 use cases [11, 28].
(4) Minimization of manual per-device pre-configuration:
The involved configuration effort should be minimized: configuration of controller lists (comprising IP address / port number pairs) and switch identities should be automated so as to enable agile device deployment / network extensions (e.g., for VNF on-boarding). To this end, controllers or external DHCP servers should be charged with providing switches with management IPs and thus automated identity binding.
3 System Model
3.1 General Model
We assume the deployment of 2k+1 controller replicas in order to cater for k non-Byzantine [ ] controller failures. A failed controller replica is an inactive, unreachable replica. Apart from individualized identity configuration, each replica is an exact clone and thus implements the OpenFlow southbound logic and is capable of executing the bootstrapping procedure. Updates to a replica's distributed data-store are synchronized among all reachable and active replicas using RAFT.
Switches are operated in the OpenFlow Master-Slave mode [ ], where any controller replica may become the Master of a switch. Forwarding table modification messages initiated by a Slave controller are proxied through the switch's Master. All switches are controlled in IBC manner, using OpenFlow v1.3+ [ ]. To enable the (optional) automated identity (IP address) and controller list assignment, each switch deploys a DHCP client and an SSH server. The controllers accordingly execute DHCP server instances for automated IP address roll-out. For the purpose of automated authentication, prior to the bootstrapping step, switches and controllers may have their certificates or symmetric keys on-boarded so as to enable PKI- or MAC-based authentication [10, 12], respectively. Depending on the target scheme, the switches either all support (R)STP or they do not (i.e., they have it disabled). Moreover, each switch is capable of forwarding traffic via traditional MAC learning (using NORMAL-mode forwarding). We assume an IPv4-enabled infrastructure.
The k link / switch fault resilience model requires an adequate underlying topology, where k+1 disjoint paths can be found for any controller-switch / controller-controller pair. For simplification, we relax this condition and model a link failure between an SDN controller and the switch directly attached to it as a controller failure. We duplicate traffic on disjoint paths for each communication pair. Higher layer protocols are then tasked with duplicate elimination, i.e., we rely on TCP at the transport layer. This is feasible, since both OpenFlow and controller-to-controller synchronization in OpenDaylight transmit TCP traffic. A generalized solution should instead deploy robust duplicate frame elimination measures (e.g., as per TSN 802.1CB [30]).
3.2 Secure and Standalone Modes
In Standalone mode, a switch forwards packets autonomously, i.e., it behaves like a non-managed MAC learning forwarder. Namely, it contains one generic OpenFlow rule that matches all traffic and forwards it to the reserved NORMAL (ref. [ ]) port. After establishing a connection to a controller, the NORMAL rule is automatically removed. In Secure mode, it is non-existent. Following an established controller connection in Secure mode, the flow table remains unmodified, and the forwarding behavior continues according to controller-added rules. In case of an empty flow table and non-configured controller lists, the switch in Secure mode drops all arriving traffic. Depending on the scheme, switches must support either Secure and Standalone modes or Secure mode only.
3.3 In-band Mode
With In-band mode, a switch is initialized with generic rules that ensure controller-destined traffic is forwarded according to the MAC learning table (i.e., using the NORMAL port).
In switches basing their OpenFlow implementation on the Open vSwitch (OVS) agent, the In-band rules have a priority higher than any flow rules that may be configured by the controller. There, In-band mode automatically pre-configures the rules for forwarding: i) ARP requests and DHCP Discover messages generated by the switch; and ii) ARP replies and DHCP Offer messages destined for the switch. For each controller, rules matching the following traffic are added: i) ARP replies destined for the controller's MAC address; ii) ARP requests generated with the controller's MAC address as source; iii) ARP replies containing the controller's IP address as target; iv) ARP requests containing the controller's IP address as source; v) traffic destined for the controller's IP / TCP port pair; and vi) traffic with the controller's IP and TCP port as source.
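The per-controller matches enumerated above can be sketched as follows. The field names follow OpenFlow match conventions; the helper and its dict encoding are illustrative, not the OVS-internal representation of In-band rules.

```python
def inband_matches(controllers):
    """Enumerate In-band style matches for a list of controller endpoints.
    Each controller is a dict: {"mac": ..., "ip": ..., "port": ...}."""
    rules = [
        {"proto": "udp", "udp_src": 68},  # DHCP Discover sent by the switch
        {"proto": "udp", "udp_src": 67},  # DHCP Offer destined for the switch
    ]
    for c in controllers:
        rules += [
            {"proto": "arp", "arp_tha": c["mac"]},  # ARP replies to controller MAC
            {"proto": "arp", "arp_sha": c["mac"]},  # ARP requests from controller MAC
            {"proto": "arp", "arp_tpa": c["ip"]},   # ARP replies targeting controller IP
            {"proto": "arp", "arp_spa": c["ip"]},   # ARP requests sourced at controller IP
            {"proto": "tcp", "ip_dst": c["ip"], "tcp_dst": c["port"]},  # to controller
            {"proto": "tcp", "ip_src": c["ip"], "tcp_src": c["port"]},  # from controller
        ]
    return rules
```

Each controller thus contributes six matches on top of the two switch-local DHCP matches.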
We rst introduce the Hybrid Switch (HSW) bootstrapping scheme
that heavily relies on existence of (R)STP, Standalone, In-band
modes, and
forwarding. HSW leverages (R)STP to establish
an acyclic graph and thus remedy the potential storm issues stem-
ming from trac broadcasts (e.g., from ARP and DHCP requests).
Fig. 1 summarizes the workow of HSW. In
Phase 0
, controllers
establish the controller-to-controller communication over the span-
ning tree computed by the network. In
Phase 1
, switches are pro-
vided dedicated management IP addresses and controllers’ IP/port
pairs. In
Phase 2a
, controllers establish the control over the con-
nected switches, install a set of initial rules and eventually disable
(R)STP in switches so to enable blocked ports. Explicit resilient ow
rules are embedded in
Phase 2b
, so to fulll the fault-tolerance
requirements. To support network extensions at runtime (
Phase 3
not depicted), a virtual spanning tree is maintained in controllers
and enforced for discovery trac, allowing for safe forwarding
of (broadcast) trac initiated by newly attached nodes, even after
disabling (R)STP.
HSW requires the following assumptions to hold at startup: i) controllers are aware of the IP addresses of other controllers, or are capable of resolving these using standardized DNS queries; ii) switches are initialized in Standalone mode with (R)STP and In-band mode enabled; iii) switches are provisioned with the controllers' public certificates or symmetric MAC keys [10, 12].
[Figure 1 (message sequence diagram): controllers C1/C2/C3 and switch S exchange, per phase: Phase 1 - DHCP (switch IP), SSH (controllers' IP:port); Phase 2a - OF handshake, OF initial rules, SSH disable In-band mode, SSH disable (R)STP; Phase 2b - OF resilient rules. Inter-controller synchronization and leader election are done concurrently.]
Figure 1: HSW - Message sequence diagram of the bootstrapping procedure as described in Sec. 4.
4.1 Phase 0 - Network Startup
Phase 0a - In-Band Flow Rules. Switches boot with In-band flow rules set up, enabled to bidirectionally forward traffic between switches and controllers. The traffic matching In-band flow rules is forwarded using the reserved NORMAL OpenFlow port. After a switch establishes a connection with the controller, due to the enabled Standalone mode, the switch automatically deletes any rules which are of a priority lower than the In-band rules. Without enabling the In-band mode, this behavior would lead to switch flow tables becoming empty, and thus to traffic drops, including the controller-initiated OpenFlow traffic. Instead, the In-band mode rules (ref. Sec. 3.3) ensure the incoming traffic is forwarded even after a successful initial controller-switch connection.
Phase 0b - Controller Synchronization. The switches initially boot with Standalone mode and (R)STP enabled. Prior to taking switch ownership, the controllers elect a leader and establish an active RAFT cluster using the established spanning tree, thus enabling synchronization.
4.2 Phase 1 - Distribution of Switch and Controller Connection Identifiers
As mentioned before, a fully-automated address distribution to the switches' management interfaces is an optional feature of our schemes. To this end, a DHCP server instance, e.g., realized as a replicated application of the controller, distributes the IPv4 addresses to switches on an FCFS basis. To omit potential address conflicts, the RAFT leader replica is charged with address leasing, using an automatically derived list of free addresses given the reserved controller IPs. Follower replicas, on the other hand, ignore DHCP requests as long as the current leader is active. In case of duplicate DHCP requests, the DHCP server replies only to the first request [39].
The leader then establishes a management session with the provisioned switches (using SSH / OVSDB). Since in this phase all switches forward control traffic according to the NORMAL data-path (Standalone mode), the connection establishment succeeds. The leader next proceeds to provision the switches with the redundant controllers' IP addresses and TCP ports.
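The leader-gated, FCFS leasing logic described above can be sketched as follows. Class and method names are hypothetical; a real implementation sits behind a DHCP server frontend and the controller's RAFT leadership check.

```python
import ipaddress

class LeaderDhcp:
    """Leases management IPs to switches, FCFS, from the RAFT leader only
    (illustrative sketch; names are not from the paper's implementation)."""

    def __init__(self, subnet, controller_ips, is_leader):
        self.is_leader = is_leader                      # callable -> bool
        reserved = {ipaddress.ip_address(ip) for ip in controller_ips}
        # Automatically derived free list: all hosts minus controller IPs.
        self.free = [h for h in ipaddress.ip_network(subnet).hosts()
                     if h not in reserved]
        self.leases = {}                                # switch MAC -> IP

    def on_discover(self, mac):
        if not self.is_leader():                        # followers ignore DHCP
            return None
        if mac in self.leases:                          # duplicate Discover:
            return self.leases[mac]                     # reuse the first lease
        self.leases[mac] = self.free.pop(0)             # FCFS assignment
        return self.leases[mac]
```

A follower instance simply returns nothing, so only the active leader ever answers, avoiding duplicate offers and address conflicts.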
4.3 Phase 2 - Enabling a Functional and Resilient Control Plane
The goal of Phase 2 is to provide the discovered switches with the rules necessary to enable a resilient control plane. Phase 2 is divided into two sub-steps: in Phase 2a, initial control plane rules are installed to allow communication with adjacent switches; and in Phase 2b, resilient control plane rules are installed. To avoid broadcast storms, (R)STP remains functional until all initially booted switches are discovered. (R)STP is disabled in Phase 2b and, after all links have been discovered, the RAFT leader computes and embeds resilient paths.
Phase 2a - Initial OpenFlow Flow Rules. For each connected switch, the leader controller installs the rules depicted in Table 2. The matching semantics correspond to the OpenFlow fields defined in [ ]. The leader next disables the In-band mode flow rules using the SSH / OVSDB management channel. This is done in order to enable full control over control plane traffic forwarding based on the initial rules. Otherwise, e.g., in OVS, the switches would continue to forward OpenFlow, ARP and DHCP traffic according to the In-band mode imposed rules, since they have the highest available priority.
On rule semantics (Table 2): DHCP and SSH rules enable a continued successful execution of Phase 1 for switches adjacent to the configured switch. ARP rules prevent ARP table cache timeouts. LLDP rules allow the controllers to discover topology updates. An additional per-controller ARP rule is required for controller placement discovery. To this end, controllers periodically generate probe packets.
Phase 2b - Enabling Control Plane Resilience. After the initial rule embedding, the leader controller disables (R)STP on each switch in order to fully discover the underlying topology. To ensure the topology is entirely discovered, the leader controller waits for a predefined period (ref. Sec. 6.2) and eventually computes and installs the resilient rules. Resilient control flows are installed for each switch-controller and controller-controller pair as per Sec. 4.5. Finally, the controller removes the initial Phase 2a rules (except for LLDP and ARP).
After finalizing Phase 2b, the control plane is resilient according to the Sec. 2 requirements. Data plane failures are covered using the disjoint paths. Leader controller failures are covered by deploying multiple controller backup instances.
Purpose | Packet Type | Matching | Action
Dynamic switch IP address configuration | DHCP | udp, udp_src=68; udp, udp_src=67 | Send to NORMAL
Remote switch configuration | SSH | tcp, tcp_src=22; tcp, tcp_dst=22 | Send to NORMAL
OpenFlow interaction | OpenFlow | tcp, tcp_src=6653; tcp, tcp_dst=6653 | Send to NORMAL
Switch IP resolution | ARP | arp, arp_tpa=control plane IP prefix; arp, arp_spa=control plane IP prefix | Send to NORMAL
Topology discovery | LLDP | eth_type=0x88cc | Send to CONTROLLER
Controller IP resolution | ARP | arp, arp_tpa=c1 IP; arp, arp_spa=c1 IP; ...; arp, arp_tpa=cN IP; arp, arp_spa=cN IP | Send to NORMAL
Controller self-discovery | ARP | arp, arp_tpa=arbitrary IP | Send to CONTROLLER
Controller-controller synchronization | TCP | ip, ip_src=c1 IP, ip_dst=c2 IP; ip, ip_src=c2 IP, ip_dst=c1 IP; ...; ip, ip_src=cX IP, ip_dst=cY IP; ip, ip_src=cY IP, ip_dst=cX IP | Send to NORMAL
NEXT: Dynamic switch IP address configuration | DHCP | in_port=TREE port, udp, udp_src=68; in_port=TREE port, udp, udp_src=67 | Send to other TREE ports
NEXT: Remote switch configuration | SSH | in_port=TREE port, tcp, tcp_src=22; in_port=TREE port, tcp, tcp_dst=22 | Send to other TREE ports
NEXT: Controller-Switch OpenFlow interaction | OpenFlow | in_port=TREE port, tcp, tcp_dst=6633; in_port=TREE port, tcp, tcp_src=6633 | Send to other TREE ports
NEXT: Any ARP traffic | ARP | in_port=TREE port, arp | Send to other TREE ports
NEXT: Network extension discovery | DHCP | in_port=INACTIVE port, udp, udp_src=68 | Send to CONTROLLER
Table 2: HSW - Initial and Network Extension (NEXT) flow rules installed on switches throughout Phase 2.
[Figure 2 (topology sketches with switches S1-S4 and the controllers): (a) S-C resilient paths; (b) C-C resilient paths.]
Figure 2: Exemplary resilient path output for k=1 for: (a) switch-controller paths between S4 and all controllers; and (b) all controller-controller pairs.
4.4 Phase 3 - Dynamic Network Extensions
To enable dynamic network extensions at runtime, the managed switches which are direct neighbors of the newly booted switches must forward the control plane traffic between the newly connected switches and the controllers. In particular, DHCP, SSH, OpenFlow and ARP traffic forwarding must be enabled so that newly added switches can be bootstrapped. Since HSW disables (R)STP prior to executing Phase 2b, the rules that should match the above-mentioned traffic may not rely on MAC-learning based NORMAL forwarding, due to potential broadcast storms. Instead, the leader controller maintains a virtual tree topology, used to compute and install the network extension (NEXT) rules that broadcast the control plane traffic to and from newly attached switches on the tree links only. Due to their generic match semantics, NEXT rules are installed with a lower priority than the Phase 2b rules.
A special discovery rule matches packets arriving on inactive ports (i.e., the last rule of Table 2). An inactive port is a port without an active neighbor, i.e., initially unconnected to another switch. If a new switch is connected to an already bootstrapped network, the discovery rule will forward its DHCP Discover message to the controllers, encapsulated as a PACKET_IN message. The PACKET_IN contains the information about the ingress port the message arrived on, i.e., corresponding to the previously inactive port. The leader replica processes the PACKET_IN by extracting the newly attached switch's MAC address, and adding the previously inactive port to the existing tree. If the new switch is connected to the existing network with multiple links, only the first activated port is used. Fig. 3 illustrates this scenario.
[Figure 3 (topology sketches): (a) initial virtual spanning tree with inactive ports; (b) addition of a switch with 2 links; (c) resulting virtual spanning tree.]
Figure 3: Exemplary network extension with one switch and two links. (c) depicts the resulting tree.
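The leader-side handling of such discovery PACKET_INs can be sketched as follows. The names are hypothetical; the sketch only captures the "first activated port joins the tree" update, not the subsequent bootstrapping of the new switch.

```python
class VirtualTree:
    """Leader-maintained virtual spanning tree for NEXT forwarding
    (illustrative sketch, not the paper's implementation)."""

    def __init__(self):
        self.tree_ports = set()       # (switch_id, port_no) pairs on the tree
        self.known_switches = set()   # MACs of already-attached new switches

    def on_discovery_packet_in(self, switch_id, in_port, src_mac):
        """Handle a DHCP Discover PACKET_IN from a previously inactive port."""
        if src_mac in self.known_switches:
            return False              # second link of the same switch: ignore
        self.known_switches.add(src_mac)
        self.tree_ports.add((switch_id, in_port))
        return True                   # port joined the tree; bootstrap switch
```

When the same new switch raises a second Discover over its other link, the handler recognizes the MAC and leaves the tree unchanged, matching the "only the first activated port is used" rule.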
4.5 Control/Data Plane Failure Handling
Impact of Data Plane Failures on Control Plane Flows: To tolerate switch node/link failures, Phase 2b installs k+1 resilient paths for each controller-controller and controller-switch connection pair, necessary to tolerate k arbitrary data plane failures. We implement a simple algorithm to find such paths. The leader controller executes k+1 subsequent Dijkstra runs per connection pair. After each run, the link metrics of the found path are multiplied by a factor of 1000. Since typical industrial networks have diameters of at most dozens of hops [19, 46], this ensures previously used links are reused only if no other path is available. Thus, through this iterative metric inflation, k+1 maximally disjoint paths are effectively computed (as in 45.3 of 802.1Q-2018 [ ]). The leader controller then maps and installs these paths. Exemplary resilient paths for k=1 are depicted in Fig. 2.
Resilient flow rules are installed for ARP, OpenFlow and SSH flows. OpenFlow and SSH use TCP at the transport layer, which takes care of duplicate packet elimination. Duplicate ARP requests and replies, on the other hand, are delivered duplicated to sink nodes, which does not negatively impact correctness. Thus, with the "seamless" replication mechanism, individual data plane failures never lead to packet loss, as long as disjoint alternative paths exist.
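A sketch of this iterative metric-inflation heuristic, assuming a simple weighted adjacency-map graph (the helper names are ours, not the paper's implementation):

```python
import heapq

def dijkstra(adj, src, dst):
    """Min-cost path in adj: {node: {neighbor: weight}}; returns node list."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj[u].items():
            if dist.get(v, float("inf")) > d + w:
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

def maximally_disjoint_paths(adj, src, dst, k):
    """k+1 subsequent Dijkstra runs; inflate used link metrics by 1000 so
    later runs reuse a link only when no alternative exists."""
    paths = []
    for _ in range(k + 1):
        p = dijkstra(adj, src, dst)
        paths.append(p)
        for a, b in zip(p, p[1:]):
            adj[a][b] *= 1000
            adj[b][a] *= 1000
    return paths
```

On a small ring, two runs with k=1 yield two link-disjoint paths, since the second run pays a 1000x penalty for any link of the first path.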
In [ ], the resulting global inter-controller traffic generated by topology state exchange was shown to grow quadratically with the number of controllers and linearly with the number of network elements. For moderate control plane sizes, the imposed control plane load for a routing application was determined to be close to negligible: 6.7 Mbps of per-replica load in a 5-replica cluster [ ]. Hence, we propose a form of conditional traffic policing for misbehaving redundant switch-controller and controller-controller flows, but do not investigate this issue further in this paper.
Impact of Data Plane Failures on the Network Extension Tree: Failures of individual data plane elements must result in an adaptation of the network extension tree. The approach we followed in Sec. 4.4 assumes initial tree computation and embedding, followed by the controllers computing an alternative tree for each possible data plane failure in the previously embedded tree. When a link in the underlying topology fails that is mapped to the currently active tree, the leader controller refreshes the rules with those of an alternative tree, proactively computed for that particular link / node change. During the transition period, no new switches can be admitted, but since data plane failures are rare events, we consider the additional delay in network extension, incurred by reactive tree re-embedding, a negligible disadvantage.
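The proactive per-failure tree precomputation can be sketched as follows, using BFS trees over an undirected adjacency map (names are illustrative; the sketch covers single-link failures and assumes the pruned graph stays connected):

```python
def spanning_tree(adj, root):
    """BFS spanning tree of an undirected graph {node: set(neighbors)};
    returns the tree as a set of frozenset edges."""
    seen, edges, frontier = {root}, set(), [root]
    while frontier:
        u = frontier.pop(0)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                edges.add(frozenset((u, v)))
                frontier.append(v)
    return edges

def precompute_backup_trees(adj, root, active_tree):
    """For every link of the active tree, precompute the alternative tree
    the leader swaps to if that particular link fails."""
    backups = {}
    for edge in active_tree:
        pruned = {n: {m for m in nbrs if frozenset((n, m)) != edge}
                  for n, nbrs in adj.items()}
        backups[edge] = spanning_tree(pruned, root)
    return backups
```

On a failure of an active tree link, the leader can then install the matching precomputed tree immediately instead of recomputing it reactively.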
Handling Control Plane Failures: If the leader replica fails, the remaining controllers elect a new leader using the distributed leader election procedure, thus incurring a short interruption period during which no client requests relying on consensus can be handled by the controllers [ ]. If a follower replica fails, the current leader and the operation of the system remain unaffected. The resilient control plane remains operational as long as the controller majority remains active.
Note: Tolerating transitional faults during Phases 0-2 is currently unsupported and will be investigated in future work.
Our Hop-by-Hop scheme (HHC) realizes an iterative approach to switch discovery. In contrast to HSW, it alleviates the need for (R)STP and thus for the (R)STP expiration timer (ref. Sec. 6.2). As before, a number of minimum constraints must hold at network startup: i) Controllers are aware of the IP addresses of other participants, or are capable of discovering them using standardized DNS queries; ii) Switches are initialized in Secure mode without (R)STP and with disabled In-band mode; iii) Switches are provisioned with the controllers' public certificates (PKI) or symmetric MAC keys [10, 12].
Fig. 4 depicts the abstract sequence diagram of HHC. In Secure mode, a switch relies on the initial generic (non-customized) flow table rules, available at boot time. By leveraging these, in Phase 0 the controllers establish bilateral connections. In Phase 1, switches are assigned management IP addresses and controller lists. Using the generic rules, the switches adjacent to a controller establish their OpenFlow sessions. Accordingly, in Phase 2a, the leader rolls out the control plane flow rules to these switches. The provisioned rules realize the spanning tree forwarding functionality, used for iterative propagation of the next-hop switch's control traffic to the controller. With each newly discovered network element, HHC iteratively updates the tree. In Phase 2b, the leader computes and installs resilient paths for all control plane flows, whenever such paths become feasible. The attachment of new switches to an already bootstrapped network is possible by gradually expanding the spanning tree.
5.1 Phase 0 - Network Startup
Phase 0a - Pre-Configured Flow Rules. HHC assumes a set of initially preconfigured generic OpenFlow rules, necessary: i) to allow an initial connection with the controllers while in Secure mode; ii) to prevent broadcast storms in non-bootstrapped parts of a network.
[Figure 4 shows controllers C1-C3 and the bootstrapped switches, per phase: Phase 0 - inter-controller synchronization and leader election (done concurrently with the remaining phases); Phase 1 - DHCP assignment of the switch IP and SSH delivery of the controllers' IP:port list; Phase 2a - OpenFlow handshake and installation of the initial tree rules; Phase 2b - installation of the resilient rules.]
Figure 4: HHC - Message sequence diagram of the bootstrapping procedure as described in Sec. 5.
In contrast to HSW, which bootstraps individual switches concurrently in FCFS manner, HHC bootstraps the network iteratively hop-by-hop, starting from the switches adjacent to the leader controller. The non-customized generic rules (ref. Table 3) allow receiving traffic addressed to the switch itself, i.e., dropping any other traffic, except for the traffic generated by the switch itself. Traffic initiated by a switch is flooded on all its ports. This traffic should only be allowed to reach the leader controller, and is therefore dropped by any neighboring switches. These rules thus prevent the occurrence of broadcast storms (a special case being the inter-controller flow rules, ref. Sec. 5.1.2). Furthermore, they allow the controller to configure the switches connected directly to it.
Purpose / Packet type: Matching → Action
Dynamic switch IP address configuration / DHCP:
  in_port=LOCAL, eth_src=switch_mac, udp, udp_src=68 → Send to ALL
  udp, udp_src=67 → Send to LOCAL
Remote switch configuration / SSH:
  in_port=LOCAL, eth_src=switch_mac, tcp, tcp_src=22 → Send to ALL
  eth_dst=switch_mac, tcp, tcp_dst=22 → Send to LOCAL
OpenFlow interaction / OpenFlow:
  in_port=LOCAL, eth_src=switch_mac, tcp, tcp_dst=6633 → Send to ALL
  eth_dst=switch_mac, tcp, tcp_src=6633 → Send to LOCAL
IP resolution / ARP:
  in_port=LOCAL, eth_src=switch_mac, arp, arp_op=1 → Send to ALL
  eth_dst=switch_mac, arp, arp_op=2 → Send to LOCAL
  in_port=LOCAL, eth_src=switch_mac, arp, arp_op=2 → Send to ALL
Controller-switch / controller IP resolution / ARP:
  arp, arp_op=1 → Send to ALL
  arp, arp_op=2 → Send to ALL
Controller synchronization / TCP:
  tcp, tcp_src=2550 → Send to NORMAL
  tcp, tcp_dst=2550 → Send to NORMAL
Table 3: HHC - Pre-configured OpenFlow rules
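For illustration, generic rules of this kind can be rendered as classic ovs-ofctl-style flow strings. The sketch below is an assumption for exposition (hypothetical switch MAC, no priorities, a subset of the 13 rules), not the paper's exact rule set:

```python
# Illustrative sketch: render a subset of the Table 3 generic rules as
# ovs-ofctl-style flow strings. SWITCH_MAC is a hypothetical per-switch MAC.
SWITCH_MAC = "aa:bb:cc:00:00:01"

def preconfigured_rules(switch_mac: str) -> list[str]:
    """Generic rules letting a switch reach controllers while in Secure mode."""
    return [
        # DHCP: switch-originated Discover/Request flooded on all ports.
        f"in_port=LOCAL,dl_src={switch_mac},udp,tp_src=68,actions=ALL",
        # DHCP: server replies delivered to the switch's local stack.
        "udp,tp_src=67,actions=LOCAL",
        # SSH: remote configuration of the switch itself.
        f"in_port=LOCAL,dl_src={switch_mac},tcp,tp_src=22,actions=ALL",
        f"dl_dst={switch_mac},tcp,tp_dst=22,actions=LOCAL",
        # OpenFlow: switch-initiated session towards TCP port 6633.
        f"in_port=LOCAL,dl_src={switch_mac},tcp,tp_dst=6633,actions=ALL",
        f"dl_dst={switch_mac},tcp,tp_src=6633,actions=LOCAL",
    ]

rules = preconfigured_rules(SWITCH_MAC)
```

Each string could then be handed to a flow-programming front end; the exact match syntax accepted by a given OVS version may differ.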
Phase 0b - Controller Synchronization. Excluding (R)STP implies we should avoid forwarding controller synchronization traffic using the ALL or NORMAL ports, so as to avoid broadcast storms. However, Secure mode implies that this traffic must be handled with additional initial preconfigured rules that match and forward this traffic type (ref. last four rules of Table 3), prior to establishing the OpenFlow connection with a controller. Thus, it is impossible to come up with a generic set of preconfigured flow rules that does not leverage the ALL or NORMAL ports. Using either, however, initially results in broadcast storms. Therefore, apart from the preconfigured flow rules, we deploy a mechanism to cope with broadcast storms for controller-to-controller traffic (ref. Sec. 6.5).
5.2 Phase 1 - Distribution of Switch and Controller Connection Identifiers
The leader controller assigns the IP addresses initially only to its direct neighbor switches, and subsequently provisions them with controller lists. In order for the switches located two hops away from the leader to receive their IP addresses, the generic preconfigured rules in the direct neighbor switches must first be extended with a new set of rules in Phase 2a.
5.3 Phase 2 - Enabling a Functional and Resilient Control Plane
In contrast to HSW, which computes the resilient control plane flows after disabling (R)STP, HHC tries to compute and deploy resilient flow rules whenever feasible. Namely, it installs the resilient flow rules as soon as there exists more than one disjoint path for a single switch-controller communication pair. If the currently discovered topology does not allow for identifying all required paths, flow rules are provisioned for a single path only. Whenever there is a change in topology, the leader retries computing the remaining disjoint paths.
Phase 2a - Initial OpenFlow Flow Rules. In this sub-step, the leader provides the direct neighbor switches with rules that allow the next-hop switches to communicate with all controllers (ref. Table 4). These rules have a lower priority than the resilient rules computed in Phase 2b. Since (R)STP is unavailable, broadcast storms must be avoided. Thus, in addition to the base topology discovered by LLDP- and ARP-probing, HHC maintains a virtual spanning tree.
The tree topology is updated and enforced upon the switches on every topology change using the Table 4 rules. Packets used in topology and controller discovery are sent directly to the controllers. In OVS, packets forwarded to NORMAL leverage the MAC-learning data-path, which initially may seem problematic. However, since Standalone operation is not relied upon, the MAC-learning tables are empty and every packet is flooded instead. The flooded traffic cannot create broadcast storms, as it can only reach two types of switches: i) those with TREE rules installed and, ii) those with generic preconfigured rules, which drop all traffic except their own. The discovery traffic is hence broadcast only within the tree, since the PACKET-INs match the OpenFlow type.
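The virtual spanning tree and the per-switch TREE ports it induces can be sketched as a BFS tree rooted at the leader's attachment switch; the topology encoding (node → {neighbor: local port}) and the 4-switch ring are assumptions for illustration:

```python
from collections import deque

def bfs_tree_ports(adj, root):
    """Compute a BFS spanning tree of `adj` (node -> {neighbor: local_port})
    rooted at `root`; return, per switch, the set of ports lying on the tree."""
    parent = {root: None}
    order = deque([root])
    tree_ports = {n: set() for n in adj}
    while order:
        u = order.popleft()
        for v, port_u in adj[u].items():
            if v not in parent:
                parent[v] = u
                tree_ports[u].add(port_u)     # downstream port on u
                tree_ports[v].add(adj[v][u])  # upstream port on v
                order.append(v)
    return tree_ports

# Hypothetical 4-switch ring s1-s2-s3-s4-s1, ports numbered per switch.
adj = {
    "s1": {"s2": 1, "s4": 2},
    "s2": {"s1": 1, "s3": 2},
    "s3": {"s2": 1, "s4": 2},
    "s4": {"s3": 1, "s1": 2},
}
ports = bfs_tree_ports(adj, "s1")  # one ring link stays out of the tree
```

The "other TREE ports" action of Table 4 would then, for each switch, forward over `ports[switch]` minus the ingress port.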
Purpose / Packet type: Matching → Action
(NEXT) Dynamic switch IP address configuration / DHCP:
  in_port=TREE port, udp, udp_src=67 → Send to other TREE ports
  in_port=TREE port, udp, udp_src=68 → Send to other TREE ports
(NEXT) Remote switch configuration / SSH:
  in_port=TREE port, tcp, tcp_dst=22 → Send to other TREE ports
  in_port=TREE port, tcp, tcp_src=22 → Send to other TREE ports
(NEXT) Controller-switch OpenFlow interaction / OpenFlow:
  in_port=TREE port, tcp, tcp_src=6633 → Send to other TREE ports
  in_port=TREE port, tcp, tcp_dst=6633 → Send to other TREE ports
(NEXT) Any ARP traffic / ARP:
  in_port=TREE port, arp → Send to other TREE ports
Topology discovery / LLDP:
  eth_type=0x88cc → Send to CONTROLLER
Controller self-discovery / ARP:
  arp, arp_tpa=arbitrary IP → Send to CONTROLLER
NEXT: Network extension discovery / DHCP:
  in_port=INACTIVE port, udp, udp_src=68 → Send to CONTROLLER
Table 4: HHC - Initial and Network Extension (NEXT) Flow Rules Installed on Switches in Phase 2a.
Phase 2b - Enabling Control Plane Resilience. To compute resilient paths, as in Sec. 4.3.2, we deploy Dijkstra's algorithm. HHC does not assume visibility of the entire topology to compute per-switch resilient paths. Instead, resilient paths are installed iteratively, whenever disjoint paths become available. This results in quicker control plane resilience, as confirmed by the experimental results in Sec. 8.
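One simple way to obtain such disjoint switch-controller paths (a sketch, not necessarily the paper's exact algorithm; greedy peeling of shortest paths can miss disjoint pairs that a max-flow formulation would find) is to search for a shortest path, remove its edges, and search again:

```python
from collections import deque

def shortest_path(edges, src, dst):
    """BFS shortest path over a set of undirected edges; None if disconnected."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def edge_disjoint_paths(edges, src, dst, want=2):
    """Greedily peel off up to `want` edge-disjoint paths; fall back to fewer
    (e.g. a single path) when the current topology offers no disjoint pair."""
    remaining, paths = set(edges), []
    while len(paths) < want:
        p = shortest_path(remaining, src, dst)
        if p is None:
            break
        paths.append(p)
        for a, b in zip(p, p[1:]):        # remove used edges, both orientations
            remaining.discard((a, b))
            remaining.discard((b, a))
    return paths

ring = {("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s1")}
paths = edge_disjoint_paths(ring, "s1", "s3")  # two disjoint paths in a ring
```

On a topology change, re-running the search over the updated edge set naturally yields the "retry remaining disjoint paths" behavior described above.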
5.4 Phase 3 - Dynamic Network Extensions
To support dynamic network extensions, HHC adopts the same idea as HSW. The main difference to HSW is that HHC enforces the virtual tree topology together with the remaining initial flows, i.e., the spanning tree is (re-)enforced iteratively during the bootstrapping procedure itself. Compared to the rules installed in Phase 2, a single discovery rule per inactive switch port must additionally be installed, in order for new switches to successfully register with the controllers (ref. last rule of Table 4). Whenever an element of the tree fails, HHC refreshes the rules with the alternative tree (ref. Sec. [ ]).
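Deriving the per-port NEXT discovery rules can be sketched as follows; the port numbering, function names, and rule syntax are illustrative assumptions:

```python
def next_discovery_rules(all_ports, tree_ports, host_ports=frozenset()):
    """One DHCP-Discover match per inactive port, punted to the controller,
    so that a newly attached switch can register (cf. last rule of Table 4)."""
    inactive = set(all_ports) - set(tree_ports) - set(host_ports)
    return {
        p: f"in_port={p},udp,tp_src=68,actions=CONTROLLER"
        for p in sorted(inactive)
    }

# A switch with 4 ports, of which ports 1 and 2 lie on the spanning tree.
rules = next_discovery_rules(all_ports={1, 2, 3, 4}, tree_ports={1, 2})
```

When a new switch attaches to one of the inactive ports, its DHCP Discover reaches the controller, which can then extend the tree over that port.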
We next discuss selected design & evaluation aspects of the two
automated bootstrapping schemes.
6.1 Flow Table Occupancy
Both bootstrapping schemes enforce a non-negligible number of forwarding rules. The exact flow table occupancy (FTO) can be predetermined only for loop-free topologies. In non-loop-free topologies, the FTO varies depending on the outputs of the used tree computation and routing algorithms. However, the lower and upper FTO bounds can always be calculated from derived expressions. For brevity, we next provide only the upper FTO bounds for both schemes.
6.1.1 HSW. An HSW-bootstrapped switch has its FTO upper-bounded by F_HSW rules:

  F_HSW ≤ n·4 + (5 + 3·|C|) + i·7 + m·j·6 + k,
  with n ∈ [0, |C|−2], i ∈ [1, D_Tree], m ∈ [0, |C|], j ∈ [0, |S|−1], k ∈ ℕ.

|C| denotes the number of deployed controllers, D_Tree the maximum node degree of the computed spanning tree, and |S| the number of switches.
The 4xed rule types of Table
are TCP and ARP rules that al-
low for controller synchronization. The 5xed rules are composed
of: ARP, SSH, OpenFlow rules for forwarding incoming trac from
controllers to the
port; and 2 discovery rules (LLDP, ARP). 3
ow rules (ARP, SSH, OpenFlow) are embedded per controller so to
forward local trac toward the respective controller (ref. Table 2).
Index idenotes the degree of a switch in the virtual spanning tree
used for network extensions. The 7xed rules are the NEXT discov-
ery rules. Index jdenotes how many resilient paths that start/end
in a particular controller traverse a switch. Index mdenotes the
number of controller replicas. The 6xed rules are the resilient
ow rules (ARP, SSH, OpenFlow) used for trac relaying, in direc-
tions to/from other switches.
denotes the no. of inactive ports,
imposing a discovery rule per port.
6.1.2 HHC. HHC's FTO is upper-bounded by F_HHC:

  F_HHC ≤ 13 + n·4 + (5 + 3·|C|) + i·7 + m·j·6 + k

HHC's flow table has the same composition as HSW's, except for the additional 13 preconfigured rules (ref. Table 3).
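Under this reconstruction of the two bounds, the difference between the schemes can be computed directly; the index values below are purely illustrative:

```python
def fto_upper_bound(n, num_ctrl, i, m, j, k, preconfigured=0):
    """Upper-bound flow-table occupancy: n*4 controller-synchronization rules,
    5 fixed + 3 per controller, i*7 NEXT discovery rules, m*j*6 resilient
    relay rules, k inactive-port rules, plus HHC's preconfigured rules."""
    return preconfigured + n * 4 + (5 + 3 * num_ctrl) + i * 7 + m * j * 6 + k

# Illustrative indices for a 3-controller deployment.
hsw = fto_upper_bound(n=1, num_ctrl=3, i=2, m=3, j=1, k=2)
hhc = fto_upper_bound(n=1, num_ctrl=3, i=2, m=3, j=1, k=2, preconfigured=13)
```

As expected from the two expressions, F_HHC exceeds F_HSW by exactly the 13 preconfigured rules for identical index values.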
6.2 (R)STP Timer Parametrization
HSW assumes that in Phase 2a all available switches are discovered and provided with the necessary flow rules. Hence, in Phase 2b the leader controller proceeds to disable (R)STP on all switches, so as to enable the blocked ports and populate the network topology view. A safe expiration timer T_STP after which to disable (R)STP can be estimated as:

  T_STP = T_R + T_DHCP + T_Clist + T_HS + T_RSTP
T_R represents the transmission interval of DHCP Discover packets by the switches' DHCP clients; T_DHCP is the delay imposed by a successful DHCP handshake; T_Clist is the time required to deliver the controllers' IP address list; T_HS is the OpenFlow session establishment time plus the time required to install and confirm the initial flows (ref. Phase 2a); and T_RSTP is the time required for (R)STP to recover from potential network failures during bootstrapping. The worst-case time necessary to recover the spanning tree after switch / link failures with STP may reach up to 50 s [ ]. In corner cases, the recovery time for RSTP may increase up to 120 s (ref. the Count-to-Infinity problem [13]). In experiments conducted in [13], however, the worst-case RSTP recovery time in a 16-switch topology peaked at 50 s. If a new switch sends a DHCP Discover message before the timer expiration, HSW preempts the timer. On a successful expiration, HSW disables (R)STP on the switches as per Phase 2b. The parameters used in the evaluation were deduced empirically (ref. Table 5).
Parameter: T_R | T_DHCP | T_Clist | T_HS | T_RSTP
Value:     1 s | 1 s    | 1 s     | 2 s  | 50 s [13]
Table 5: Parametrization of the (R)STP timer in HSW.
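Summing the Table 5 values reproduces the 55 s expiration timer referenced in Sec. 8.1.3:

```python
# Safe (R)STP expiration timer from the Table 5 parametrization (seconds).
T_R, T_DHCP, T_CLIST, T_HS, T_RSTP = 1, 1, 1, 2, 50
T_STP = T_R + T_DHCP + T_CLIST + T_HS + T_RSTP  # lower bound on HSW's GBCT
```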
6.3 Network Topology Discovery
ODL's OpenFlowPlugin relies on LLDP flow rules to learn the network topology. The controllers periodically output OpenFlow PACKET_OUT messages with encapsulated LLDP data units (LLDPDUs) on each switch port. Neighbor switches then receive and forward these packets back to their Master. Thus, the controllers learn about the topology adjacency. By default, OpenFlowPlugin transmits the LLDPDUs every 5 s. If no probes are received for 3 consecutive periods, the link originating in that port is considered unavailable and is removed from the data-store. To facilitate faster discovery of switches, we increase the rate of LLDPDU transmissions to one per 1 s. Since the additional control plane load is generated only on unused ports, this optimization imposes no overhead.
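The resulting link-failure detection latency follows directly from the probe interval and the number of tolerated silent periods:

```python
def link_loss_detection_latency(probe_interval_s, missed_periods=3):
    """Worst-case time until a dead link is purged from the data-store:
    the link is declared down after `missed_periods` silent probe intervals."""
    return probe_interval_s * missed_periods

default = link_loss_detection_latency(5)  # ODL default interval: 15 s
tuned = link_loss_detection_latency(1)    # with 1 s probes: 3 s
```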
6.4 Overhead of Controller Clustering
On a newly discovered switch, the OpenFlowPlugin module of an ODL replica attempts to take its ownership by initiating a role request to become its Master. Another controller replica may, however, be elected as the RAFT leader of the node inventory data-store used to serialize (i.e., reach consensus on and order) switches' state modifications. The election of the OpenFlow Master is thus independent of the inventory data-store ownership and of the leader of the bootstrapping procedure. According to the OpenFlow specification, only the Master of a switch may directly modify its flow table. For this reason, multiple controllers may exchange data in order to apply changes to a switch's flow table. Fig. 5 illustrates the worst case relevant for our evaluation.
[Figure 5 shows three OpenDaylight replicas, each hosting the bootstrapping logic, the inventory data-store (DS), and the OpenFlow plugin.]
Figure 5: An exemplary data flow of a FLOW_MOD RPC with each entity's leader on a different controller.
6.5 Coping with broadcast storms in HHC
To solve the issues related to Phase 0b, where particular flows may initially cause broadcast storms, we rely on rate-limiting mechanisms provided by the data plane (e.g., OpenFlow's Metering [ ] or Linux's Traffic Control). Namely, we police the following inter-controller flows (ref. Table 3): i) controller-initiated ARP traffic; ii) TCP SYNs destined for the inter-controller TCP port; iii) TCP SYNs with the inter-controller TCP port as source. The rate limit for the policers may be configured to a very low value, e.g., we used 1 packet/s for metering both ARP requests and replies. Similarly, a low maximum rate can be chosen for TCP SYN packets. It suffices to match and rate-limit only SYN traffic (as only these packets may generate broadcasts) and not the complete TCP flow.
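The effect of such a policer can be sketched as a token bucket; this minimal model (not OVS's metering implementation) illustrates the 1 packet/s limit applied to broadcast-capable ARP and TCP SYN packets:

```python
class TokenBucket:
    """Minimal token-bucket policer: `rate` tokens/s, burst of `burst` tokens.
    Only broadcast-capable packets (ARP, TCP SYN) need to pass through it."""
    def __init__(self, rate, burst, now=0.0):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, burst=1)
verdicts = [bucket.allow(t) for t in (0.0, 0.2, 1.1)]  # pass, drop, pass
```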
7.1 Evaluation Environment
We have implemented HSW and HHC as extensions of ODL and have evaluated their performance against emulated industrial topologies [8, 17, 19, 46]. We focus on the impact of the topology type on the bootstrapping efficiency and not on the impact of the network size. This said, we did successfully validate our approaches in topologies comprising 50 switches in a single Layer-2 domain, which are larger than the average industrial topologies with requirements on strict QoS guarantees [19, 46]. Our network emulator generates the input topology by isolating OVS instances in Docker containers and interconnecting them as per the target topology. In the single-controller scenario, the controller and the topology were hosted on a PC equipped with a recent Intel Core i7 CPU. Multi-controller scenarios were executed on a dual-CPU Intel Xeon E5 machine.
7.2 Evaluated Metrics
The schemes are compared along the following KPIs:
Global bootstrapping convergence time (GBCT): Defined as the difference between i) the time instant at which all switches are provisioned with resilient flow rules; and ii) the instant when the first switch was observed by any controller.
Time required to extend the network (TEXT): For each added switch, we measure and average the difference between the instants i) when the switch is provided with the control flow rules; and ii) when the switch was first observed by a controller.
Flow table occupancy (FTO): The number of active flow entries in a flow table after a successful bootstrapping procedure.
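The first two KPIs can be computed from per-switch event timestamps; the sketch below (with made-up timestamps) mirrors the definitions above:

```python
def gbct(first_observed, resilient_done):
    """Global bootstrapping convergence time: last switch fully provisioned
    minus first switch observed by any controller (timestamps in seconds)."""
    return max(resilient_done.values()) - min(first_observed.values())

def text_avg(first_observed, rules_installed):
    """Average network-extension time over all newly added switches."""
    diffs = [rules_installed[s] - first_observed[s] for s in rules_installed]
    return sum(diffs) / len(diffs)

g = gbct({"s1": 0.0, "s2": 1.0}, {"s1": 6.0, "s2": 9.0})
```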
Industrial topologies, in particular those often found in the process industry and factory automation, were selected due to their requirement on redundant communication and on reliable and dynamic network adaptation [ ]. Fig. 6 depicts the evaluated topologies, with the corresponding suffixes denoting the topology size. line-N and star-N topologies were evaluated with a single controller only, while the remaining topologies deploy up to 3 controllers. Note that while line-N, star-N and 1-ring-N topologies do not satisfy the conditions for path disjointness, we also evaluate these topologies to gain additional insights. In single-controller scenarios, the replica was always placed adjacent to S1. For scenarios involving 3 controllers, the controllers were placed according to Table 6. Indeed, in general, the controller placement influences the observed KPIs in both single- and multi-controller scenarios. For brevity, we however limit our discussion to the impact of the RAFT leader placement.
Topology  Controller placement | Topology   Controller placement
ring-4    S1, S2, S3           | 1-ring-5   S1, S4, S5
ring-8    S1, S4, S6           | 1-ring-7   S1, S3, S7
ring-16   S1, S6, S11          | 1-ring-10  S1, S3, S10
grid-4    S1, S2, S3           | 2-ring-6   S1, S3, S6
grid-9    S1, S5, S9           | 2-ring-11  S1, S7, S11
Table 6: Controllers' placement with 3 controllers.
8.1 Bootstrapping Convergence Time
8.1.1 Single-Controller Setup. Global Bootstrapping Convergence Times (GBCTs) are depicted in Fig. 7. The GBCT of HSW is not impacted by the type of the underlying topology. Instead, the time to embed resilient paths increases with the overall topology size. This is due to the fact that HSW's Phases 1 and 2 do not execute concurrently. HHC's GBCT, on the other hand, depends on the design of the underlying topology. In particular, its convergence time scales with the number of additional hops the control traffic must traverse to reach a certain switch in the network.
Figure 6: Considered industrial topologies [8, 17]: (a) line-N, (b) star-N, (c) ring-N, (d) 1-ring-5, (e) 1-ring-7, (f) 1-ring-10, (g) 2-ring-6, (h) 2-ring-11, (i) grid-N.
HHC bootstraps the switches with the same hop distance from the controller concurrently. For example, in grid topologies, the maximum number of hops between a switch and the controller does not increase at the same rate as the number of switches. Thus, going from grid-4 to grid-9, the absolute number of switches increases by 5, but the number of hops required for HHC's controller to reach the most distant switches increases only by 2, leading to an increased performance gap over HSW. Additionally, in contrast to HSW, HHC does not suffer from an artificially introduced lower bound (due to its non-reliance on (R)STP).
8.1.2 Multi-Controller Setup. HHC relies heavily on the controllers' data store during its operation - each read / write operation requires proxying the request through the current RAFT leader. Additionally, the controllers frequently block and synchronize their data-stores, thus adding processing latency to sequential code execution. In contrast to HHC, HSW does not rely on the data-store as much, providing for better scaling with increasing control plane size, hence the reduced performance gap in the 3-controller setup.
With HSW, the RAFT leader's placement does not impact the resulting performance, since the switches are bootstrapped in the FCFS manner. Thus, HSW's values in Fig. 7b are generally less spread than HHC's. For example, with HHC, grid-9 may be bootstrapped as quickly as grid-4. If the leader for grid-9 is elected adjacent to S5 (ref. Fig. 6), the leader will require 2 hops to reach any switch, i.e., the same number of hops as required by grid-4 for a leader placed on any of its 4 switches. Note, however, that such a comparison does not hold in ring topologies, since the leader placement there does not influence the max. hop distance to the leader.
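The hop-distance argument can be verified mechanically; the sketch below (assuming 4-neighbor grid adjacency) confirms that grid-9's center switch, like any switch in grid-4, reaches every other switch within 2 hops:

```python
from collections import deque

def max_hops(adj, src):
    """Eccentricity of `src`: BFS distance to the farthest reachable switch."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def grid(n):
    """n x n grid of switches named (row, col) with 4-neighbor adjacency."""
    adj = {(r, c): [] for r in range(n) for c in range(n)}
    for (r, c) in adj:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) in adj:
                adj[(r, c)].append((r + dr, c + dc))
    return adj

center9 = max_hops(grid(3), (1, 1))  # grid-9, leader adjacent to the center
corner4 = max_hops(grid(2), (0, 0))  # grid-4, leader adjacent to any switch
```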
Figure 7: Observed GBCT for single- and 3-controller scenarios. Measured time is normalized with respect to the minimum observed GBCT means for 1 and 3 controllers, respectively. HHC outperforms HSW for all evaluated topologies, mostly due to HSW being lower-bounded by the (R)STP timer (ref. Sec. 6.2).
8.1.3 Discussion. In general, HHC outperforms HSW for all evaluated topologies. This is due to the lower bound on the GBCT in HSW, as imposed by the (R)STP timer (set to 55 s, ref. Sec. 6.2). In HSW, the GBCT increases linearly with the topology size, contrary to HHC, where the GBCT exhibits a non-linear relation with the maximal hop distance between the leader controller and the switch. The larger the distance, the larger the increase in necessary bootstrapping time. Thus, the performance gap between HSW and HHC depends on the placement of the RAFT leader and the overall topology size. In 3-controller scenarios, HHC's extensive reliance on the distributed data-store reduces the performance gap.
8.2 Network Extension Time
8.2.1 Single-Controller Setup. In single-controller scenarios, HHC requires a nearly constant Time to EXTend (TEXT) a topology, i.e., to deploy a new switch, independent of the existing topology and the number of new switches (ref. Fig. 8). The slight increase for larger topology sizes relates to the accumulated CPU load from having to consider additional switches, e.g., additional LLDP packets to process when refreshing the topology, and the spanning tree computation overhead. For HSW, just as with the GBCT, TEXT grows linearly with the topology size. This is due to the waiting period related to disabling the (R)STP timer and contention in sequential rule installation. An optimistic case is the single-switch extension, where no contention impacts the rule installation order (not depicted).
8.2.2 Multi-Controller Setup. A newly added switch in a multi-controller setup must, on average, wait longer for its control flow rules. The inter-controller synchronization results in a higher TEXT degradation for HHC than for HSW. Additionally, compared to the single-controller case, where TEXT remains mostly constant, in scenarios with 3 controllers HHC's TEXT grows linearly with the topology size, due to a larger resulting controller-controller separation.
8.2.3 Discussion. HHC outperforms HSW in both single- and 3-controller scenarios. However, due to the distributed synchronization overhead, and the fact that control flow rules can be provided only once a controller is discovered, the performance gap between HSW and HHC is reduced.
8.3 Flow Table Occupancy
Fig. 9 portrays the substantial growth in the flow table occupancy (FTO) when deploying 3 controllers instead of a single one. This is due to switches being provided with resilient flow rules for connections to all controllers. Additionally, some switches contain rules used to forward the inter-controller traffic. However, the change in FTO is influenced not only by the number of controllers, but also by the topology size, the degree of connectivity, and the controller placement. The ratios of FTOs are summarized in Table 7.
In HSW, the placement of the leader does not influence the FTO. This is due to the resilient and tree rules being installed only after a successful discovery of the entire network. Thus, the FTO depends only on the output of the routing and tree computation algorithm. On the contrary, in HHC the placement of the RAFT leader controller influences the output of the iteratively-built spanning tree, producing fluctuating FTOs for repeated executions. Notably, in grid-N and x-ring-N topologies, the leader controller placement influences the FTO fluctuations, while in ring-N topologies, different leader placements have no effect on the FTO (visual results omitted due to space considerations). In all evaluated scenarios, HSW results in lower minimal, average, and maximal FTOs. The average difference in FTO between HSW and HHC approximately equals the number of preconfigured rules in the HHC scheme. Indeed, both bootstrapping schemes enforce a non-negligible number of forwarding rules. Investigation of methods for shrinking the number of active flow rules leveraged by the schemes, e.g., by means of flow table compression [3, 31], should be considered in future studies.
Topology           HSW                 HHC
ring-{4, 8, 16}    {2.44, 2.62, 2.77}  {2.2, 2.43, 2.43}
grid-{4, 9}        {2.44, 2.22}        {2.2, 2.07}
1-ring-{5, 7, 10}  {2.19, 2.31, 2.36}  {2.0, 2.12, 2.16}
2-ring-{6, 8, 11}  {2.29, 2.27, 2.44}  {2.08, 2.07, 2.34}
Table 7: Ratios of observed average per-switch FTOs. Values are normalized with respect to the FTO in the 1-controller case for the same scheme and topology.
Figure 8: TEXT values of the two schemes for configurations deploying 1 and 3 controllers. The Y-axis depicts the (per-topology) normalized TEXT, relative to the lowest obtained mean TEXT, i.e., 6.5 s and 33.5 s for 1 and 3 controllers, respectively.
Figure 9: Bar charts comparing the average FTO for HSW
and HHC bootstrapping schemes.
Sharma et al. [ ] were the first to propose an automatic bootstrapping scheme and evaluate its performance for various in-band controlled (IBC) topologies. [ ] highlight the advantages of a proactive protection scheme (using fast-failover groups [ ]), allowing the controller to proactively compute duplicate paths for control plane flows. Upon successful failure discovery, the detecting switch automatically re-routes the incoming traffic over the assigned backup port, without needing to involve the controller in the loop. In all three works, Sharma et al. assume proprietary modifications to the DHCP client hosted in the switches, with the goal of provisioning the controller list. No multi-controller support was considered.
Schi et al. [
] present a design of a self-organizing multi-
controller control plane that relies exclusively on OpenFlow. Con-
trary to HSW and HHC schemes, the authors do not consider the
necessity of controller state synchronization prior to switch recon-
gurations (ref. Sec.
). In [
], the authors extend their approach
to include a timeout-based fault-tolerance approach where rules
corresponding to failed paths eventually time out, thus preventing
permanent switch cut-os. Instead, we propose constant duplica-
tion of control ows incurring zero-packet-loss in case of failures.
Follow-up works [
] propose a timeout-free approach to ensure
resilience against data plane failures, based on assumption of a
controller-initiated switch discovery and OpenFlow equal role [
controller association with switches. Similar to above works, we
compute and iteratively expand the spanning tree so to enable loop-
less forwarding of control trac, both after disabling (R)STP in
HSW, and in Phase 2 and 3of HHC.
[ ] proposes atomic transactions for coordinated concurrent switch configurations by multiple controllers. The approach is orthogonal to our work, but we additionally assume the requirement for distributed consensus [ ], as imposed by the ODL [ ] and ONOS [5] implementations.
Heise et al. [ ] propose the usage of network calculus, i.e., rate- and burst-policed control traffic, for providing upper-bound guarantees on the bootstrapping convergence time. They leverage fast-failover groups to implement the restoration of control flows in the face of failures. Their bootstrapping concept assumes (R)STP in the switches and no multi-controller support. Bentstuen et al. [ ] propose an approach that relies on intent-based control flow definitions targeting ONOS [5], so as to simplify the management of control flows in a single-controller environment. The authors also stumble upon a number of practical issues related to IBC bootstrapping and ultimately fall back to modifying OVS's source code. To support existing OpenFlow implementations, workarounds for these limitations are discussed in this work.
[ ] minimizes congestion in a single-controller IBC control plane by means of centralized control port load monitoring. The control port is switched to a more fitting port upon an exceeded threshold or in case of link failures. The authors deduce that their fail-over approach results in non-negligible packet loss, related to OpenFlow's back-off interval in the case of repeated unsuccessful connection attempts. We consider this aspect in the design of the (R)STP timer. In [ ], the authors propose a method for re-routing IBC control flows based on the observed controller load and IBC channel congestion. To this end, the authors leverage control flow shifting and splitting. Both methods are orthogonal to HSW and HHC.
This work describes the design of the first two bootstrapping schemes that autonomously bootstrap a multi-controller SDN with a resilient control plane and with automatic IP and controller list provisioning to the switches. Besides evidencing the practicability of these two approaches and quantifying the trade-offs they reveal (implementation complexity, legacy protocols needed, convergence time, flow table occupancy, network extension time), our work opens the door towards two important directions.
First, it nally enables SDN in environments where out-of-band
connections are not possible, e.g., industrial networks, and hence
the deployment of recent SDN-based advances for such environ-
ments [15, 19, 22, 45]. In fact, the presented schemes were demon-
strated successfully in an operational industrial network with fail-
safe requirements [35, 45].
Second, having the control and data plane share the same infras-
tructure motivates investigation of proper isolation of both trac
types and ensuring non-starvation of control trac. This is espe-
cially relevant in industrial environments where data plane trac
often has stringent QoS requirements.
Outlook: Our evaluation currently targets industrial topologies and focuses on evaluating the impact of the topology type on the achievable performance, and less so on that of its size. We leave the investigation of the applicability of our approach to large-scale topologies for future work. This said, HHC has been successfully validated with 50-switch topologies in a single broadcast domain. Finally, supporting IPv6 networks should be a straightforward extension using mechanisms already developed for the IPv4 case, but is currently left as future work.
We thank the anonymous reviewers and our shepherd Junaid Khalid for their feedback and useful inputs on our work. We thank Sean Rohringer and Reinhard Frank for their help in the early stages of our work. This work has received funding from the European Commission's Horizon 2020 research and innovation programme under grant agreement number 780315 SEMIoTICS. This work reflects only the authors' view and the funding agency is not responsible for any use that may be made of the information it contains.
2018. IEEE Standard for Local and Metropolitan Area Network–Bridges and
Bridged Networks. IEEE Std 802.1Q-2018 (Revision of IEEE Std 802.1Q-2014) (2018),
Astrit Ademaj, Thomas Enzinger, and Marius Stanica. 2018. TSN System Re-
quirements v0.2. IEC/IEEE.les/public/docs2018/
60802-stanica- tsn-system- requirements-0518- v02.pdf
S. Banerjee and K. Kannan. 2014. Tag-In-Tag: Ecient ow table management in
SDN switches. In 10th International Conference on Network and Service Manage-
ment (CNSM) and Workshop. 109–117.
Ole Ingar Bentstuen and Joakim Flathagen. 2018. On Bootstrapping In-Band
Control Channels in Software Dened Networks. In 2018 IEEE International
Conference on Communications Workshops (ICC Workshops). IEEE, 1–6.
Pankaj Berde, Matteo Gerola, Jonathan Hart, Yuta Higuchi, et al
2014. ONOS:
towards an open, distributed SDN OS. In Proceedings of the third workshop on Hot
topics in software dened networking. ACM, 1–6.
Marco Canini, Iosif Salem, Liron Schi, Elad M Schiller, and Stefan Schmid.
2017. A self-organizing distributed and in-band SDN control plane. In 2017 IEEE
37th International Conference on Distributed Computing Systems (ICDCS). IEEE,
Marco Canini, Iosif Salem, Liron Schi, Elad Michael Schiller, and Stefan Schmid.
2018. Renaissance: A self-stabilizing distributed SDN control plane. In 2018 IEEE
38th International Conference on Distributed Computing Systems (ICDCS). IEEE,
Dick Caro. 2009. Automation Network Selection: A Reference Manual (2nd ed.). International Society of Automation, USA.
Cisco Systems, Inc. 2017. Spanning Tree Protocol Problems and Related Design Considerations. spanning-tree-protocol/10556-16.html.
Tom St Denis and Simon Johnson. 2007. Chapter 6 - Message Authentication Code Algorithms. In Cryptography for Developers, Tom St Denis and Simon Johnson (Eds.). Syngress, Burlington, 251–296.
Josef Dorr. 2018. IEC/IEEE P60802 JWG TSN Industrial Profile: Use Cases Status Update 2018-05-14. IEC/IEEE.
Michael Eischer and Tobias Distler. 2017. Scalable Byzantine Fault Tolerance on Heterogeneous Servers. In 2017 13th European Dependable Computing Conference (EDCC). IEEE.
K. Elmeleegy, A. L. Cox, and T. S. Eugene Ng. 2009. Understanding and Mitigating the Effects of Count to Infinity in Ethernet Networks. IEEE/ACM Transactions on Networking 17, 1 (2009), 186–199.
Seth Gilbert and Nancy Lynch. 2002. Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services. ACM SIGACT News 33, 2 (2002).
Jochen W. Guck, Amaury Van Bemten, and Wolfgang Kellerer. 2017. DetServ: Network models for real-time QoS provisioning in SDN-based industrial environments. IEEE Transactions on Network and Service Management 14, 4 (2017).
B. Görkemli, S. Tatlıcıoğlu, A. M. Tekalp, S. Civanlar, and E. Lokman. 2018.
Dynamic Control Plane for SDN at Scale. IEEE Journal on Selected Areas in
Communications 36, 12 (2018), 2688–2701.
Peter Heise. 2018. Real-time guarantees, dependability and self-configuration in future avionic networks. Ph.D. Dissertation.
Peter Heise, Fabien Geyer, and Roman Obermaisser. 2017. Self-configuring deterministic network with in-band configuration channel. In 2017 Fourth International Conference on Software Defined Systems (SDS). IEEE, 162–167.
Dominik Henneke, Lukasz Wisniewski, and Jürgen Jasperneite. 2016. Analysis of realizing a future industrial network by means of Software-Defined Networking (SDN). In 2016 IEEE World Conference on Factory Communication Systems (WFCS). IEEE, 1–4.
Heidi Howard, Malte Schwarzkopf, Anil Madhavapeddy, and Jon Crowcroft. 2015. Raft refloated: do we have consensus? ACM SIGOPS Operating Systems Review 49, 1 (2015), 12–21.
Dan Levin, Andreas Wundsam, Brandon Heller, Nikhil Handigol, and Anja Feldmann. 2012. Logically centralized?: State distribution trade-offs in software defined networks. In Proceedings of the first workshop on Hot topics in software defined networks. ACM.
Dong Li, Ming-Tuo Zhou, Peng Zeng, Ming Yang, Yan Zhang, and Haibin Yu. 2016. Green and reliable software-defined industrial networks. IEEE Communications Magazine 54, 10 (2016), 30–37.
Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner. 2008. OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review 38, 2 (2008), 69–74.
J. Medved, R. Varga, A. Tkacik, and K. Gray. 2014. OpenDaylight: Towards a
Model-Driven SDN Controller architecture. In Proceeding of IEEE International
Symposium on a World of Wireless, Mobile and Multimedia Networks 2014. 1–6.
Abubakar Siddique Muqaddas, Andrea Bianco, Paolo Giaccone, and Guido Maier. 2016. Inter-controller traffic in ONOS clusters for SDN networks. In 2016 IEEE International Conference on Communications (ICC). IEEE, 1–6.
Abubakar Siddique Muqaddas, Paolo Giaccone, Andrea Bianco, and Guido Maier. 2017. Inter-controller traffic to support consistency in ONOS clusters. IEEE Transactions on Network and Service Management 14, 4 (2017), 1018–1031.
Diego Ongaro and John K. Ousterhout. 2014. In search of an understandable consensus algorithm. In USENIX Annual Technical Conference. 305–319.
P60802 Project: TSN Profile for Industrial Automation (TSN-IA). 2018. Use Cases IEC/IEEE 60802 v1.3. IEC/IEEE.
Aurojit Panda, Wenting Zheng, Xiaohe Hu, Arvind Krishnamurthy, and Scott
Shenker. 2017. SCL: Simplifying Distributed SDN Control Planes. In 14th USENIX
Symposium on Networked Systems Design and Implementation (NSDI 17). 329–345.
Shifei Qian, Feng Luo, and Jinpeng Xu. 2017. An Analysis of Frame Replication and Elimination for Time-Sensitive Networking. In Proceedings of the 2017 VI International Conference on Network, Communication and Computing. ACM, 166–.
M. Rifai, N. Huin, C. Caillouet, F. Giroire, D. Lopez-Pacheco, J. Moulierac, and G.
Urvoy-Keller. 2015. Too Many SDN Rules? Compress Them with MINNIE. In
2015 IEEE Global Communications Conference (GLOBECOM). 1–7.
Ermin Sakic, Nemanja Ðerić, and Wolfgang Kellerer. 2018. MORPH: An adaptive framework for efficient and Byzantine fault-tolerant SDN control plane. IEEE Journal on Selected Areas in Communications 36, 10 (2018), 2158–2174.
Ermin Sakic and Wolfgang Kellerer. 2018. Impact of Adaptive Consistency on
Distributed SDN Applications: An Empirical Study. IEEE Journal on Selected
Areas in Communications 36, 12 (2018), 2702–2715.
Ermin Sakic and Wolfgang Kellerer. 2018. Response time and availability study of
RAFT consensus in distributed SDN control plane. IEEE Transactions on Network
and Service Management 15, 1 (2018), 304–318.
Ermin Sakic, Vivek Kulkarni, Vasileios Theodorou, Anton Matsiuk, Simon Kuen-
zer, Nikolaos E Petroulakis, and Konstantinos Fysarakis. 2018. VirtuWind: An
SDN and NFV-based architecture for softwarized industrial networks. In Inter-
national Conference on Measurement, Modelling and Evaluation of Computing
Systems. Springer, 251–261.
Liron Schiff, Stefan Schmid, and Marco Canini. 2015. Medieval: Towards A Self-Stabilizing, Plug & Play, In-Band SDN Control Network. In ACM Sigcomm Symposium on SDN Research (SOSR).
Liron Schiff, Stefan Schmid, and Marco Canini. 2016. Ground control to major faults: Towards a fault tolerant and adaptive SDN control network. In 2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshop. IEEE, 90–96.
Liron Schiff, Stefan Schmid, and Petr Kuznetsov. 2016. In-band synchronization for distributed SDN control planes. ACM SIGCOMM Computer Communication Review 46, 1 (2016).
Sachin Sharma, Dimitri Staessens, Didier Colle, Mario Pickavet, and Piet Demeester. 2013. Automatic bootstrapping of OpenFlow networks. In 2013 19th IEEE Workshop on Local & Metropolitan Area Networks (LANMAN). IEEE, 1–6.
Sachin Sharma, Dimitri Staessens, Didier Colle, Mario Pickavet, and Piet Demeester. 2013. A demonstration of automatic bootstrapping of resilient OpenFlow networks. In 13th IFIP/IEEE International Symposium on Integrated Network Management (IM). IEEE, 1066–1067.
Sachin Sharma, Dimitri Staessens, Didier Colle, Mario Pickavet, and Piet Demeester. 2013. Fast failure recovery for in-band OpenFlow networks. In 2013 9th International Conference on the Design of Reliable Communication Networks (DRCN). IEEE, 52–59.
Open Networking Foundation. 2015. OpenFlow Switch Specification, Version 1.5.1. Standard.
Yu-Lun Su, I-Chih Wang, Yao-Tsung Hsu, and Charles H.-P. Wen. 2017. FASIC: A Fast-Recovery, Adaptively Spanning In-Band Control Plane in Software-Defined Network. In GLOBECOM 2017 IEEE Global Communications Conference. IEEE.
Niels L. M. Van Adrichem, Benjamin J. Van Asten, Fernando A. Kuipers, et al. 2014. Fast Recovery in Software-Defined Networks. EWSDN 14 (2014), 61–66.
Petra Vizarreta, Amaury Van Bemten, Ermin Sakic, Khawar Abbasi, Nikolaos E
Petroulakis, Wolfgang Kellerer, and Carmen Mas Machuca. 2019. Incentives for a
Softwarization of Wind Park Communication Networks. IEEE Communications
Magazine 57, 5 (2019), 138–144.
Martin Wollschlaeger, Thilo Sauter, and Juergen Jasperneite. 2017. The future of industrial communication: Automation networks in the era of the internet of things and industry 4.0. IEEE Industrial Electronics Magazine 11, 1 (2017), 17–27.
Yang Zhang, Eman Ramadan, Hesham Mekky, and Zhi-Li Zhang. 2017. When Raft Meets SDN: How to Elect a Leader and Reach Consensus in an Unruly Network. In Proceedings of the First Asia-Pacific Workshop on Networking. ACM.