Advanced Study of SDN/OpenFlow controllers
Alexander Shalimov
Applied Research Center for
Computer Networks
Moscow State University
ashalimov@arccn.ru
Dmitry Zuikov
Applied Research Center for
Computer Networks
dzuikov@arccn.ru
Daria Zimarina
Moscow State University
zimarina@lvk.cs.msu.su
Vasily Pashkov
Applied Research Center for
Computer Networks
Moscow State University
vpashkov@arccn.ru
Ruslan Smeliansky
Applied Research Center for
Computer Networks
Moscow State University
smel@arccn.ru
ABSTRACT
This paper presents an independent comprehensive analysis of the efficiency indexes of popular open source SDN/OpenFlow controllers (NOX, POX, Beacon, Floodlight, MuL, Maestro, Ryu). The analysed indexes include performance, scalability, reliability, and security. For testing purposes we developed a new framework called hcprobe. The test bed and the methodology we used are discussed in detail so that anyone can reproduce our experiments. The results of the evaluation show that modern SDN/OpenFlow controllers are not ready to be used in production and have to be improved in order to strengthen all the above-mentioned characteristics.
Categories and Subject Descriptors
C.2.1 [Computer-Communication Networks]: Network
Architecture and Design; C.4 [Performance of Systems]
General Terms
Design, Measurement, Performance, Reliability, Security
Keywords
SDN, OpenFlow, Controllers evaluation, Throughput, La-
tency, Haskell
1. INTRODUCTION
Software Defined Networking (SDN) has been the most discussed networking technology of recent years [1]. It brings many new capabilities and makes it possible to solve many hard problems of legacy networks. The approach proposed by the SDN paradigm is to move the network's intelligence out of the packet switching devices and into a logically
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
CEE-SECR ’13 Moscow, Russia
Copyright 20XX ACM X-XXXXX-XX-X/XX/XX ...$15.00.
centralized controller. Forwarding decisions are made first in the controller and then pushed down to the overseen switches, which simply execute them. This yields many benefits, such as global control and a view of the whole network at once, which helps automate network operations, improve server/network utilization, and so on. Recognizing these benefits, Google and Microsoft have recently switched to SDN in their data centers [2, 3].
In the SDN/OpenFlow paradigm, switches expose a common, vendor-agnostic protocol, called OpenFlow [4], to a remote controller. This protocol provides the controller a way to discover OpenFlow-compatible switches, define matching rules for the switching hardware, and collect statistics from switching devices. A switch presents a flow table abstraction: each flow table entry contains a set of packet fields to match and an action (such as send-out-port, modify-field, or drop). When an OpenFlow switch receives a packet it has never seen before, for which it has no matching flow entries, it sends this packet to the controller via a packet_in message. The controller then decides how to handle this packet. It can drop the packet, send the packet out a port via a packet_out message, or add a flow entry via a flow_mod message directing the switch on how to forward similar packets in the future.
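The controller-side decision just described can be sketched as follows. This is a minimal illustrative sketch in Python, not an excerpt from any of the evaluated controllers; the function name and return encoding are invented for illustration:

```python
def handle_packet_in(mac_table, in_port, src_mac, dst_mac):
    """React to a packet_in with one of the options above:
    packet_out (flood) or flow_mod (sketch, not a real controller API)."""
    mac_table[src_mac] = in_port            # learn where src_mac lives
    out_port = mac_table.get(dst_mac)
    if out_port is None:
        return ("packet_out", "FLOOD")      # unknown destination: flood
    # Known destination: install a flow entry so the switch forwards
    # similar packets itself, without asking the controller again.
    return ("flow_mod", out_port)

table = {}
print(handle_packet_in(table, 1, "aa", "bb"))  # ('packet_out', 'FLOOD')
print(handle_packet_in(table, 2, "bb", "aa"))  # ('flow_mod', 1)
```

The second call finds the destination learned from the first packet, so the controller can push a flow_mod instead of flooding again.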
An SDN/OpenFlow controller for networks is like an op-
erating system for computers. But the current state of the
SDN/OpenFlow controller market may be compared to early
operating systems for mainframes, which existed 40-50 years
ago. Operating systems were control software that provided
libraries of support code to assist programs in runtime oper-
ations just as today’s controllers are special control software,
which interacts with switching devices and provides a pro-
grammatic interface for user-written network management
applications.
Early operating systems were extremely diverse, with each
vendor or customer producing one or more operating sys-
tems specific to their particular mainframe computers. Ev-
ery operating system could have radically different models
of command and operating procedures. The question of op-
erating system efficiency was of vital importance and, ac-
cordingly, was studied a lot [5]. However, it was difficult to
reproduce efficiency evaluation experiments applied to oper-
ating systems because of a lack of methodology descriptions,
a mixture of running applications that created the workload
in the experiments, unique hardware configurations, etc [6].
The same is true for today’s SDN/OpenFlow controllers.
At present, there are more than 30 different OpenFlow controllers created by different vendors, universities, and research groups, written in different languages, and using different runtime multi-threading techniques. Thus, the issue of efficiency evaluation has emerged.
The latest evaluation of SDN/OpenFlow controllers, published in 2012 [7], covered a limited number of controllers and focused on controller performance only. During the last year new controllers have appeared, and the old ones have been updated. This paper is intended as an impartial comparative analysis of the effectiveness of the most popular and widely used controllers. In our research we treat the term “controller effectiveness” as a list of indexes, namely: performance, scalability, reliability, and security. Later on we will introduce the meaning of every index from this list.
For testing controllers we used the well-known tool cbench [8]. However, the capabilities of cbench, the traditional benchmark used for controller evaluation, cover only a small range of possible test cases: it fits rough performance testing well but does not allow fine-tuning of test parameters. We therefore developed our own framework for SDN/OpenFlow controller testing, which we called hcprobe [9]. We use hcprobe to create a set of advanced testing scenarios, which allow us to get a deeper insight into controller reliability and security issues: the number of failures/faults during long-term testing, uptime for a given workload profile, and the processing of malformed OpenFlow messages. With cbench and hcprobe we performed an experimental evaluation of seven freely available OpenFlow controllers: NOX [10], POX [11], Beacon [12], Floodlight [13], MuL [14], Maestro [15], and Ryu [16].
2. HCPROBE
Similar to conventional benchmarks, hcprobe emulates any number of OpenFlow switches and hosts connected to a controller. Using hcprobe, one can test and analyse several indexes of controller operation in a flexible manner: hcprobe allows the user to specify patterns for generating OpenFlow messages (including malformed ones), to set a traffic profile, etc. It is written in Haskell and allows users to easily create their own scenarios for controller testing.
The key features of hcprobe are:
- correct OpenFlow and network stack packet generation;
- implementation in a high-level language, which makes it easier to extend;
- an API for custom test design;
- an embedded domain-specific language (eDSL) for creating tests.
The architecture of hcprobe (see Figure 1) comprises:
- Network datagram: the library that provides a typeclass-based interface for easy creation of packets for network protocols: Ethernet frames, ARP packets, IP packets, UDP packets, TCP packets. It allows setting a custom value for any header field and generating a custom payload.
- OpenFlow: the library for parsing and creating OpenFlow messages.
- Configuration: the library that provides the routines for configuring the test's parameters using command line options or a configuration file.
- FakeSwitch: the basic “fake switch”, which implements the initial handshake, manages the interaction with the controller, and provides a callback/queue-based interface for writing tests.
- eDSL: provides an easier interface for creating tests.

Figure 1: The hcprobe architecture
Using these modules, it is possible to create specific tests for SDN/OpenFlow controllers, including functional, security and performance tests. The most convenient way to create tests is to use the embedded domain-specific language. The hcprobe domain-specific language makes it possible to:
- create OpenFlow switches;
- write programs describing the switch logic and run programs for different switches simultaneously;
- create various types of OpenFlow messages with custom field values and also generate custom payloads for PacketIn messages;
- aggregate statistics on the message exchange between the OpenFlow controller and the switches.

For a basic example, which illustrates the above features, see Figure 2.
Figure 2: The hcprobe test example (incorrect PacketIn OF message)
Thus hcprobe provides a framework for creating various test cases to study the behaviour of OpenFlow controllers processing different types of messages. One can generate all types of switch-to-controller OpenFlow messages and replay different communication scenarios, even the most sophisticated ones, which cannot be easily reproduced using hardware and software OpenFlow switches. The tool can be useful for developers performing functional, regression and safety testing of controllers.
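As an illustration of the kind of packet generation hcprobe performs (hcprobe itself does this in Haskell; the sketch below is Python and is not part of hcprobe), a raw Ethernet frame can be assembled with every field, including a deliberately inconsistent EtherType, set freely:

```python
import struct

def ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Raw Ethernet II frame: 6-byte destination, 6-byte source,
    2-byte big-endian EtherType, then payload. No consistency
    checks on purpose: that is what malformed-packet tests need."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

# Claim the payload is ARP (EtherType 0x0806) while carrying
# arbitrary bytes, the kind of inconsistency the security tests use.
frame = ethernet_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55",
                       0x0806, b"not really ARP")
print(len(frame))  # 14-byte header + 14-byte payload = 28
```

A real test harness would hand such frames to an emulated switch as PacketIn payloads.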
3. EXPERIMENTAL SETUP
This section explains our testing methodology and tech-
nical details of experimental evaluation.
3.1 Test bed
The test bed consists of two servers connected by a 10Gb link. Each server has two Intel Xeon E5645 processors (6 cores / 12 threads each, 2.4GHz) and 48 GB RAM. Both servers run Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64) with default network settings. One of the servers is used to launch the controllers. We bind all controllers to the cores of the same processor in order to decrease the influence of communication between the processors. The second server is used for traffic generation according to a certain test scenario. We use a two-server setup instead of a single-server configuration, in which both the controller and the traffic generator would run on the same server: our previous measurements have shown that the throughput of internal traffic is rather unpredictable and unstable, surging or dropping by an order of magnitude. Also, using two servers guarantees that traffic generation does not affect the controller operation.
3.2 Controllers
Based on the study of available materials on twenty-four SDN/OpenFlow controllers, we chose the following seven open source controllers:
- NOX [10] is a multi-threaded C++-based controller written on top of the Boost library.
- POX [11] is a single-threaded Python-based controller. It is widely used for fast prototyping of network applications in research.
- Beacon [12] is a multi-threaded Java-based controller that relies on the OSGi and Spring frameworks.
- Floodlight [13] is a multi-threaded Java-based controller that uses the Netty framework.
- MuL [14] is a multi-threaded C-based controller written on top of libevent and glib.
- Maestro [15] is a multi-threaded Java-based controller.
- Ryu [16] is a Python-based controller that uses the gevent wrapper of libevent.
Initially, we also included the Trema [18], FlowER [17] and Nettle [19] controllers in our study, but after several experiments we found that they were not ready for evaluation under heavy workloads.
Each tested controller runs the L2 learning switch application provided with the controller. There are several reasons for this choice. It is quite simple and at the same time representative: it exercises all of the controller's internal mechanisms, and also shows how efficient the chosen programming language is, since the application amounts to a single hash lookup.
We relied on the latest available sources of all controllers, dated April 2013. We ran all controllers with the recommended settings for performance and latency testing, where available.
4. METHODOLOGY
Our testing methodology includes performance and scala-
bility measurements as well as advanced functional analysis
such as reliability and security. The meaning of each term
is explained below.
4.1 Performance/Scalability
Performance of an SDN/OpenFlow controller is defined
by two characteristics: throughput and latency. The goal
is to obtain maximum throughput (number of outstanding
packets, flows/sec) and minimum latency (response time,
ms) for each controller. The changes in this parameters
when adding more switches and hosts to the network or
adding more CPU cores to the server where the controller
runs show the scalability of the controller.
We analyse the correlation between controller’s through-
put and the following parameters:
1. the number of switches connected to the controller (1, 4, 8, 16, 64, 256), having a fixed number of hosts per switch (10^5);
2. the number of unique hosts per switch (10^3, 10^4, 10^5, 10^6, 10^7), having a fixed number of switches (32);
3. the number of available CPU cores (1-12).
The latency is measured with one switch, which sends
requests, each time waiting for the reply from the controller
before sending the next request.
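This latency mode can be sketched as a serial request-reply loop. The sketch below is an illustration of the measurement idea only, not cbench's actual code; `send_and_wait` is a hypothetical stand-in for one request-reply exchange with the controller:

```python
import time

def measure_latency(send_and_wait, n=1000):
    """Emulate one switch: send a request, block until the reply
    arrives, repeat n times; report the mean response time per
    request. Serializing requests is what isolates latency from
    throughput effects."""
    start = time.perf_counter()
    for _ in range(n):
        send_and_wait()            # stands in for request + reply
    return (time.perf_counter() - start) / n
```

With a real controller, `send_and_wait` would write a PacketIn to the controller's socket and block on the matching PacketOut.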
All experiments are performed with cbench. Each cbench
run consists of 10 test loops with the duration of 10 secs.
We run each experiment three times and take the average
number as the result.
The scripts implementing this methodology can be found in the repository: https://github.com/ARCCN/ctltest.
4.2 Reliability
By reliability we understand the ability of the controller
to work for a long time under an average workload without
accidentally closing connections with switches or dropping
OpenFlow messages from the switches.
To evaluate the reliability, we measure the number of fail-
ures during long-term testing under a given workload profile.
The traffic profile is generated using hcprobe. As a sample workload profile we use the flow setup rate recorded on the Stanford campus in 2011 (Figure 3). This profile represents a typical workload in a campus or office network during a 24-hour period: the request rate is highest in the middle of the day and decreases at night.
In our test case we use five switches sending OpenFlow
PacketIn messages with the rate varying from 2000 to 18000
requests per second. We run the test for 24 hours and record
the number of errors during the test. By error we mean
either a session termination or a failure to receive a reply
from the controller.
Figure 3: The sample workload profile
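A day-shaped profile of this kind can be approximated, for illustration, by a cosine curve between the minimum and maximum request rates. The real Stanford trace is of course not this smooth; the cosine shape and the function below are assumptions for the sketch:

```python
import math

def rate_at(hour, low=2000, high=18000):
    """Requests/sec at a given hour of day: lowest at midnight,
    highest at midday (cosine approximation of a campus workload)."""
    daylight = (1 - math.cos(2 * math.pi * hour / 24)) / 2  # 0..1
    return low + (high - low) * daylight

print(round(rate_at(0)))   # 2000  (night trough)
print(round(rate_at(12)))  # 18000 (midday peak)
```

A test driver would sample this function each second to decide how many PacketIns the emulated switches send.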
4.3 Security
To identify security issues, we also examined how the controllers handle malformed OpenFlow messages. In our test cases we send OpenFlow messages with incorrect or inconsistent fields, which could lead to a controller failure.
4.3.1 Malformed OpenFlow header.
Incorrect message length. We generated PacketIn messages with a modified length field in the OpenFlow header. According to the protocol specification [4], this field holds the length of the whole message (including the header). Its value was set as follows: 1) the length of the OpenFlow header; 2) the length of the OpenFlow header plus 'a', where 'a' is smaller than the header of the encapsulated OpenFlow message; 3) a number larger than the length of the whole message. This test case examines how the controller parses the queue of incoming messages.
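The OpenFlow 1.0 header layout (1-byte version, 1-byte type, 2-byte length, 4-byte xid) makes these cases easy to construct by hand. The sketch below is illustrative Python, not hcprobe code; `packet_in` and its fake zero-filled body are invented for the example:

```python
import struct

OFP_HEADER_FMT = "!BBHI"   # version, type, length, xid (OpenFlow 1.0)
OFPT_PACKET_IN = 10        # PacketIn message type in OpenFlow 1.0

def packet_in(claimed_length, body=b"\x00" * 20):
    """PacketIn whose 'length' field may disagree with the real size."""
    header = struct.pack(OFP_HEADER_FMT, 0x01, OFPT_PACKET_IN,
                         claimed_length, 0)
    return header + body

hdr = struct.calcsize(OFP_HEADER_FMT)    # 8 bytes
case1 = packet_in(hdr)                   # case 1: length covers header only
case3 = packet_in(hdr + 20 + 100)        # case 3: length exceeds the message
print(len(case1), struct.unpack("!H", case1[2:4])[0])  # 28 8
```

A controller that trusts the claimed length when slicing its receive buffer will desynchronize on such messages, which is exactly what this test case probes.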
Invalid OpenFlow version. All controllers under test support OpenFlow protocol v1.0. In this test case we start the session by announcing OpenFlow v1.0 during the handshake between the switch and the controller, and then set an invalid OpenFlow protocol version in PacketIn messages.
Incorrect OpenFlow message type. We examine how the controller processes OpenFlow messages in which the value of the 'type' field of the general OpenFlow header does not correspond to the actual type of the encapsulated OpenFlow message. As an example, we send messages with OFPT_PACKET_IN in the 'type' field that actually contain PortStatus messages.
4.3.2 Malformed OpenFlow message.
PacketIn message: incorrect protocol type in the encapsulated Ethernet frame. This test case shows how the controller's application processes an Ethernet frame with a malformed header. We put the code of the ARP protocol (0x0806) in the Ethernet header, although the frame contains an IP packet.
Port status message: malformed 'name' field in the port description. The 'name' field in the port description structure is an ASCII string, so a simple way to corrupt this field is not to put '\0' at the end of the string.
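A robust parser has to bound its read by the fixed field length rather than scan for the terminator. A Python sketch of that defensive read (OFP_MAX_PORT_NAME_LEN is 16 in OpenFlow 1.0; the function itself is illustrative, not taken from any tested controller):

```python
OFP_MAX_PORT_NAME_LEN = 16  # fixed size of the 'name' field in OF 1.0

def parse_port_name(raw):
    """Read at most the fixed field length, stopping at '\\0' if one
    is present; a missing terminator cannot cause an overrun."""
    field = raw[:OFP_MAX_PORT_NAME_LEN]
    return field.split(b"\x00", 1)[0].decode("ascii", errors="replace")

print(parse_port_name(b"eth0" + b"\x00" * 12))  # normal field: eth0
print(parse_port_name(b"A" * 16))               # no terminator, still 16 chars max
```

This is the behaviour the controllers under test exhibited: reading exactly the field length regardless of the terminator.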
5. RESULTS
In this section we present the results of the experimen-
tal study of the controllers performed with cbench (perfor-
mance/scalability tests) and hcprobe (reliability and secu-
rity tests).
5.1 Performance/Scalability
5.1.1 Throughput
CPU core scalability (see Figure 4). The Python controllers (POX and Ryu) do not support multi-threading, so they show no scalability across CPU cores. Maestro's scalability is limited to 8 cores, as the controller does not run with more than 8 threads. The performance of Floodlight increases steadily in line with the number of cores, flattening slightly at 8-12 cores. Beacon shows the best scalability, achieving a throughput of nearly 7 million flows per second.
The difference in throughput scalability between multi-
threaded controllers depends on two major factors: the first
one is the algorithm of distributing incoming messages be-
tween threads, and the second one is the mechanism or the
libraries used for network interaction.
NOX, MuL and Beacon pin each switch connection to a certain thread; this strategy performs well when all switches send approximately equal numbers of requests per second. Maestro distributes incoming packets using a round-robin algorithm, so this approach is expected to show better results under an unbalanced load. Floodlight relies on the Netty library, which also associates each new connection with a certain thread.
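The two dispatch strategies can be contrasted with a small sketch. This is illustrative only; the actual controllers implement dispatch in C or Java around their respective I/O libraries, and both helper functions here are invented:

```python
from itertools import cycle

def pin_per_switch(conn_id, n_threads):
    """NOX/MuL/Beacon-style: a connection is always served by the
    same thread, here chosen by a simple modulo hash of its id."""
    return conn_id % n_threads

def round_robin_dispatch(n_threads):
    """Maestro-style: successive packets go to successive threads,
    regardless of which switch sent them."""
    return cycle(range(n_threads))

rr = round_robin_dispatch(4)
print([next(rr) for _ in range(6)])              # [0, 1, 2, 3, 0, 1]
print([pin_per_switch(7, 4) for _ in range(3)])  # [3, 3, 3]
```

Pinning keeps per-connection state thread-local and lock-free; round-robin balances an uneven load but makes per-switch state shared, which matches the trade-off observed in the results.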
Figure 4: The average throughput achieved with different numbers of threads (with 32 switches, 10^5 hosts per switch)
Correlation with the number of connected switches (see Figure 5). The performance of POX and Ryu does not depend on the number of switches, as they do not support multi-threading. For multi-threaded controllers, adding more switches leads to better utilization of the available CPU cores, so their throughput increases until the number of connected switches exceeds the number of cores. Due to its round-robin packet distribution algorithm, Maestro shows better results with a small number of switches, but the drawback of this approach is that its performance decreases when a large number of switches (256) is connected.
Figure 5: The average throughput achieved with different numbers of switches (with 8 threads, 10^5 hosts per switch)

Correlation with the number of connected hosts (see Figure 6). The number of hosts in the network has little influence on the performance of most of the controllers under test. Beacon's throughput decreases from 7 to 4 million flows per second with 10^7 hosts, and the performance of Maestro drops significantly when more than 10^6 hosts are connected. This is caused by specific details of the Learning Switch application implementation, namely the implementation of its lookup table.
Figure 6: The average throughput achieved with different numbers of hosts (with 8 threads, 32 switches)
5.1.2 Latency
The average response time of the controllers shows insignificant correlation with the number of connected hosts. For the average response time with one connected switch and 10^5 hosts, see Table 1. The smallest latency was demonstrated by the MuL and Beacon controllers, while the largest latency is typical of the Python-based controller POX.
NOX        91.531
POX        323.443
Floodlight 75.484
Beacon     57.205
MuL        50.082
Maestro    129.321
Ryu        105.58

Table 1: The minimum response time (10^-6 secs/flow)
The results of the performance/scalability testing show that the Beacon controller achieves the best throughput (about 7 million flows/sec) and has the potential for further throughput scaling as the number of available CPU cores increases. The scalability of the other multi-threaded controllers is limited to 4-8 CPU cores. The Python-based controllers POX and Ryu are more suitable for fast prototyping than for enterprise deployment.
5.2 Reliability
The experiments have shown that most of the controllers successfully cope with the test load, although two controllers, MuL and Maestro, start to drop PacketIns after several minutes of work: MuL dropped 660,271,177 messages and closed 214 connections, and Maestro dropped 463,012,511 messages without closing the connections. For MuL, the failures are caused by problems with the Learning Switch module, which cannot add new entries to its table; this leads to packet loss and to the closing of connections with switches.
5.3 Security
5.3.1 Malformed OpenFlow header
Incorrect message length (1). Most of the controllers do not crash on receiving this type of malformed message but close the connection with the switch that sends it. The two controllers that crash upon receiving messages with an incorrect value in the length field are Maestro and NOX. Maestro crashes only when these messages are sent without delay; if we set a delay of 0.01 second between two messages, the controller replies with PacketOuts. Ryu is the only controller whose operation is not affected by the malformed messages: it ignores them without closing the connection.
Invalid OpenFlow version (2). POX, MuL and Ryu close the connection with the switch upon receiving a message with an invalid protocol version. The other controllers process the messages regardless of the invalid protocol version and reply with PacketOuts with the version set to 0x01.
Incorrect OpenFlow message type (3). MuL and Ryu ignore these messages; MuL additionally sends another FeaturesRequest to the switch and, after receiving the reply, closes the connection with the switch. The other controllers parse them as PacketIn messages and reply with PacketOuts.
5.3.2 Malformed OpenFlow message
PacketIn message (4). MuL closes the connection with the switch after sending an additional FeaturesRequest and receiving the reply. The other controllers try to parse the encapsulated packet as ARP and reply with a PacketOut message. POX is the only controller that detects the invalid values of the ARP header fields.
Port status message (5). All controllers process messages with malformed ASCII strings correctly, reading a number of characters equal to the field length.
The summary of the security test results is shown in Table 2, where a red cell means the controller crashed; an orange cell, that the controller closed the connection; and a green cell, that the controller passed the test. A yellow cell means the controller processed the message without crashing or closing the connection, but did not detect the error, which is a possible security vulnerability.
            1        2        3        4        5
NOX         crash    missed   missed   missed   pass
POX         closed   closed   missed   pass     pass
Floodlight  closed   missed   missed   missed   pass
Beacon      closed   missed   missed   missed   pass
MuL         closed   closed   closed   closed   pass
Maestro     crash    missed   missed   missed   pass
Ryu         pass     closed   pass     missed   pass

Table 2: Security tests results (reconstructed from the text: “crash” = red cell, “closed” = orange, “missed” = yellow, i.e. the error was not detected, “pass” = green)

Thus, the experiments demonstrate that sending malformed OpenFlow messages to a controller can cause
termination of a TCP session with the switch or even the
controller’s failure resulting in a failure of a network segment
or even of the whole network. The expected behaviour for
the OpenFlow controller on receiving a malformed message
is ignoring it and removing from the incoming queue with-
out affecting other messages. If the controller proceeds the
malformed message without indicating an error, this could
possibly cause incorrect network functioning. So it is impor-
tant to verify not only the OpenFlow headers, but also the
correctness of the encapsulated network packets.
6. CONCLUSION
We present an impartial experimental analysis of the cur-
rently available open source OpenFlow controllers: NOX,
POX, Floodlight, Beacon, MuL, Maestro, and Ryu.
The key results of the work:
The list of controllers under test has been extended compared to previous studies.
We present a detailed description of the experiment
setup and methodology.
Controller performance indexes have been updated using advanced hardware with a 12-CPU-thread configuration (previous research used at most 8 threads). The maximum throughput is demonstrated by Beacon, with 7 million flows per second.
Reliability analysis showed that most of the controllers are capable of coping with an average workload during long-term testing. Only the MuL and Maestro controllers are not ready for 24/7 operation.
Security analysis highlighted a number of possible security vulnerabilities in the tested controllers which can make them targets for malware attacks.
The results show that these aspects have to be taken into account during the development of new SDN/OpenFlow controllers and should be improved in the current ones. The current controllers scale poorly across cores and will not be able to meet the increasing communication demands of future data centers. Security and reliability have the highest priority for enterprise networks.
We also present hcprobe, a framework which fits the demands of fine-grained functional and security testing of controllers. It is a good base for a full test suite for checking SDN/OpenFlow controllers' compliance with the OpenFlow specification (similar to OFTest for OpenFlow switches [20]).
7. REFERENCES
[1] M. Casado, T. Koponen, D. Moon, S. Shenker.
Rethinking Packet Forwarding Hardware. In Proc. of
HotNets, 2008
[2] S. Jain, A. Kumar, S. Mandal. B4: Experience with a Globally-Deployed Software Defined WAN. In Proc. of ACM SIGCOMM, 2013
[3] C. Hong, S. Kandula, R. Mahajan. Achieving High Utilization with Software-Driven WAN. In Proc. of ACM SIGCOMM, 2013
[4] Open Networking Foundation. OpenFlow Switch Specification. https://www.opennetworking.org/sdn-resources/onf-specifications/openflow
[5] D. Ferrari. Computer Systems Performance Evaluation. Englewood Cliffs, N.J.: Prentice-Hall, 1978
[6] D. Ferrari, M. Liu. A general purpose software measurement tool. Software Practice and Experience, v.5, 1985
[7] Amin Tootoonchian, Sergey Gorbunov, Martin Casado,
Rob Sherwood. On Controller Performance in
Software-Defined Networks. In Proc. of HotICE, April 2012
[8] Cbench. http://docs.projectfloodlight.org/display/floodlightcontroller/Cbench
[9] Hcprobe. https://github.com/ARCCN/hcprobe/
[10] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado,
N. McKeown, and S. Shenker. NOX: towards an
operating system for networks. SIGCOMM Computer
Communication Review 38, 3 (2008), 105-110.
[11] POX. http://www.noxrepo.org/pox/about-pox/
[12] Beacon.
https://openflow.stanford.edu/display/Beacon
[13] Floodlight. http://Floodlight.openflowhub.org/
[14] Mul. http://sourceforge.net/p/mul/wiki/Home/
[15] Zheng Cai, Maestro: Achieving Scalability and
Coordination in Centralized Network Control Plane,
Ph.D. Thesis, Rice University, 2011
[16] Ryu. http://osrg.github.com/ryu/
[17] FlowER. https://github.com/travelping/flower
[18] Trema. http://trema.github.com/trema/
[19] Nettle. http://haskell.cs.yale.edu/nettle/
[20] Big Switch Networks. OFTest: Validating OpenFlow Switches. http://www.projectfloodlight.org/oftest/
... While network traffic flow rules have been created by the controller, these devices are set up and observed by the measurement layer. There are several switch-transferring models that employ protocols like OpFlex [200], ForCES [201], OpenFlow [202], Protocol-Oblivious Forwarding (POF) [203], Path-Computation Element Path-Computation Client (PCEPCC) [204], OpenState [205], and others. ...
... On the other hand, using protocols like OpenFlow, ForCES, OpFlex, and others, the southbound interface serves as a bridge between the infrastructure layer and the control layer. It is used to transmit raw facts from the data forwarding components to the controller and to convey flow rules from the control layer to the infrastructure layer equipment [202]. Additionally, in order to have an overall perspective of the network, east-westbound interfaces like ALTO [212], Hyperflow [213], ONOS [210], Onix [214], etc. allow communication between the physically scattered controllers. ...
... Knowledge Store knowledge KIF [166], OIL [167], OWL [169], RDFS [168], RDF [161] Rule modeling and dissemination RuleML [171], RIF [172], SWRL [173] Knowledge querying only KGQL [181], KQML [182], SQWRL [180] Knowledge querying and modifying SPARQL [178], GraphQL [179] Rule/knowledge assessment RETE [174], Bossam [175], Jess [176], Drools [177] Management Network management OF-CONFIG [184], SNMP [185], PACMAN [187], CONMan [186] Network monitoring Payless [188], HONE [189], OpenNetMon [190], OpenSample [191] Data collection IQP [192,235], packet sampling [193], adaptive data collection [194], sensor measurement collection [195] Data storage YANG [196], CIM [197] Data Forwarding models OpenFlow [202], ForCES [201], OpFlex [200], POF [203], PCE-PCC [204], OpenState [205] Control Northbound API Adhoc [208], RESTful [209], intent-based [210], language-based API [211] East-Westbound API ALTO [212], Hyperflow [213], ONOS [210], Onix [214] Southbound API OpenFlow [202], ForCES [201], OpFlex [200], POF [203], PCE-PCC [204], OpenState [205] Logically and physically centralized control NOX [208], Trema [215], Ryu [216], Meridian [217] Logically centralized and physically distributed control SMaRtLight [218], HyperFlow [213], ONOS [210], Onix [214], Kandoo [219], Orion [220] Hybrid control DevoFlow [221], Fibbing [222], HybridFlow [223] Logically and physically distributed control DISCO [224], D-SDN [225], Cardigan [226] Application Policy definition and update Procera [211], Nettle [232], Frenetic [233], Kinetic [234] 3. ...
Article
Full-text available
Knowledge-Defined Networking (KDN) necessarily consists of a knowledge plane for the generation of knowledge, typically using machine learning techniques, and the dissemination of knowledge, in order to make knowledge-driven intelligent network decisions. In one way, KDN can be recognized as knowledge-driven Software-Defined Networking (SDN), having additional management and knowledge planes. On the other hand, KDN encapsulates all knowledge-/intelligence-/cognition-/machine learning-driven networks, emphasizing knowledge generation (KG) and dissemination for making intelligent network decisions, unlike SDN, which emphasizes logical decoupling of the control plane. Blockchain is a technology created for secure and trustworthy decentralized transaction storage and management using a sequence of immutable and linked transactions. The decision-making trustworthiness of a KDN system is reliant on the trustworthiness of the data, knowledge , and AI model sharing. To this point, a KDN may make use of the capabilities of the blockchain system for trustworthy data, knowledge, and machine learning model sharing, as blockchain transactions prevent repudiation and are immutable, pseudo-anonymous, optionally encrypted, reliable, access-controlled, and untampered, to protect the sensitivity, integrity, and legitimacy of sharing entities. Furthermore, blockchain has been integrated with knowledge-based networks for traffic optimization, resource sharing, network administration, access control, protecting privacy, traffic filtering, anomaly or intrusion detection, network virtualization, massive data analysis, edge and cloud computing, and data center networking. Despite the fact that many academics have employed the concept of blockchain in cognitive networks to achieve various objectives, we can also identify challenges such as high energy consumption, scalability issues, difficulty processing big data, etc. that act as barriers for integrating the two concepts together. 
Academicians have not yet reviewed blockchain-based network solutions across application categories for knowledge-defined networks in general, which consider knowledge generation and dissemination using various techniques such as machine learning, fuzzy logic, and meta-heuristics. Therefore, this article fills a void in the literature by first reviewing the existing blockchain-based applications in diverse knowledge-based networks, analyzing and comparing the existing works, describing the advantages and difficulties of using blockchain systems in KDN, and, finally, providing propositions based on the identified challenges and presenting prospects for the future.
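The property this abstract relies on (immutable, hash-linked transactions) can be illustrated with a minimal hash chain. This is a toy sketch, not any production blockchain; the record strings are invented placeholders for shared data, knowledge, or model artifacts:

```python
import hashlib
import json


def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_block(chain, data):
    """Link a new block to the previous one by embedding its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})
    return chain


def verify(chain):
    """Altering any non-final block breaks the next link in the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


chain = []
for record in ["model-v1", "dataset-A", "knowledge-rule-7"]:
    append_block(chain, record)

assert verify(chain)
chain[1]["data"] = "tampered"   # tampering an inner block is detectable
assert not verify(chain)
```

Real blockchains add consensus, signatures, and access control on top of this linking, which is what makes the sharing non-repudiable and trustworthy.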
... OpenFlow is the most widely used protocol for the Southbound API standardized by the Open Network Foundation (ONF), which is non-vendor-specific, allowing OpenFlow-enabled devices from different vendors to be interoperable [291]. OpenFlow standardization is driven by the decoupling of the control and data planes. ...
... Forwarding Models As reviewed in Section 5.2.2, there are numerous proposals for the implementation of switch forwarding models using protocols such as OpenFlow [291], ForCES [292], OpFlex [293], POF [294], PCE-PCC [295], OpenState [296], etc. OpenFlow and ForCES both have a completely logically separated control plane from the data plane, whereas in OpenFlow, the control plane is typically physically separated from the forwarding devices, while in ForCES, the control plane can exist in the same device. In OpFlex, a part of the control plane is redistributed to forwarding devices in order to improve the scalability of KDN, even though the policies are logically centralized. ...
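The control/data-plane split that OpenFlow standardizes can be illustrated with a toy flow table: the controller installs match-action entries, and the switch only performs lookups, with table misses going back to the controller. This is a drastically simplified model, not the OpenFlow wire protocol; field names and actions are invented:

```python
def install_rule(flow_table, match, action, priority=0):
    """Controller-side: push a (match, action) entry to a switch's table."""
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda e: -e["priority"])  # highest priority first


def forward(flow_table, packet):
    """Switch-side: pure table lookup, no routing logic of its own."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "send_to_controller"   # table miss -> packet-in in OpenFlow


table = []
install_rule(table, {"dst": "10.0.0.2"}, "output:2", priority=10)
install_rule(table, {"dst": "10.0.0.3"}, "output:3", priority=10)

assert forward(table, {"src": "10.0.0.1", "dst": "10.0.0.2"}) == "output:2"
assert forward(table, {"dst": "10.0.0.9"}) == "send_to_controller"
```

The ForCES and OpFlex variants discussed above differ mainly in where the `install_rule` side lives: in a remote controller, in the same chassis, or partly redistributed to the forwarding devices.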
Article
Full-text available
Traditional networking is hardware-based, having the control plane coupled with the data plane. Software-Defined Networking (SDN), which has a logically centralized control plane, has been introduced to increase the programmability and flexibility of networks. Knowledge-Defined Networking (KDN) is an advanced version of SDN that takes one step forward by decoupling the management plane from control logic and introducing a new plane, called a knowledge plane, decoupled from control logic for generating knowledge based on data collected from the network. KDN is the next-generation architecture for self-learning, self-organizing, and self-evolving networks with high automation and intelligence. Even though KDN was introduced about two decades ago, it had not gained much attention among researchers until recently. The reasons for delayed recognition could be due to the technology gap and difficulty in direct transformation from traditional networks to KDN. Communication networks around the globe have already begun to transform from SDNs into KDNs. Machine learning models are typically used to generate knowledge using the data collected from network devices and sensors, where the generated knowledge may be further composed to create knowledge ontologies that can be used in generating rules, where rules and/or knowledge can be provided to the control, management, and application planes for use in decision-making processes, for network monitoring and configuration, and for dynamic adjustment of network policies, respectively. Among the numerous advantages that KDN brings compared to SDN, enhanced automation and intelligence, higher flexibility, and improved security stand tall. However, KDN also has a set of challenges, such as reliance on large quantities of high-quality data, difficulty in integration with legacy networks, the high cost of upgrading to KDN, etc. 
In this survey, we first present an overview of the KDN architecture and then discuss each plane of the KDN in detail, such as sub-planes and interfaces, functions of each plane, existing standards and protocols, different models of the planes, etc., with respect to examples from the existing literature. Existing works are qualitatively reviewed and assessed by grouping them into categories and assessing the individual performance of the literature where possible. We further compare and contrast traditional networks and SDN against KDN. Finally, we discuss the benefits, challenges, design guidelines, and ongoing research of KDNs. Design guidelines and recommendations are provided so that identified challenges can be mitigated. Therefore, this survey is a comprehensive review of architecture, operation, applications, and existing works of knowledge-defined networks.
... Changing the stated scale, on the other hand, would result in a different conclusion. Advanced research on OpenFlow controllers in SDN was conducted in this work [18]. The efficacy of NOX, Beacon, POX, Mul, Floodlight, Ryu, and Maestro, which are commonly used SDN controllers, is compared. ...
Article
Software Defined Networking (SDN) is a modern network architectural model that manages network traffic using software. SDN is a networking scenario that modifies the conventional network design by combining all control features into a single place and making all choices centrally. Controllers are the "brains" of the SDN architecture, since they are responsible for making control decisions and routing packets at the same time. The capacity for centralized decision-making on routing improves the performance of the network. SDN's growing functionality and uses have led to the development of many controller systems. Every SDN controller idea or design must prioritize the control plane, since it is the most crucial part of the SDN architecture. Studies have been done to examine, analyze, and evaluate the relative advantages of the many controllers that have been created in recent years. In this paper, to find a suitable controller based on derived requirements (for example, that the controller must have a "Java" or "Python" interface), a matching process compares controller features against those requirements.
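The matching process described in the last sentence can be sketched as a simple feature comparison. The controller names are real, but the feature values and the requirement format are simplified assumptions for illustration, not data from the cited paper:

```python
# Hypothetical feature catalogue; values are illustrative, not authoritative.
CONTROLLERS = {
    "Ryu":          {"language": "Python", "gui": False},
    "Floodlight":   {"language": "Java",   "gui": True},
    "OpenDaylight": {"language": "Java",   "gui": True},
}


def matching_controllers(requirements):
    """Return every controller whose features satisfy all requirements."""
    return sorted(
        name for name, feats in CONTROLLERS.items()
        if all(feats.get(k) == v for k, v in requirements.items())
    )


assert matching_controllers({"language": "Java", "gui": True}) == \
    ["Floodlight", "OpenDaylight"]
assert matching_controllers({"language": "Python"}) == ["Ryu"]
```

A real matching process would weight requirements and score partial matches rather than demand exact equality, but the all-requirements-satisfied filter captures the core idea.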
... SDN technology was the most recognized subject in the network world over the previous decade because of the solution it promised for large-scale networks. Centralized monitoring and control using software can increase network speed and provide a better quality of service, both in voice and data transmission [35,36]. Compared to classic networks, SDN can provide an improved three-layer network architecture under the same infrastructure. ...
Article
Full-text available
Emergency Communication Systems (ECS) are network-based systems that may enable people to exchange information during crises and physical disasters when basic communication options have collapsed. They may be used to restore communication in off-grid areas or even when normal telecommunication networks have failed. These systems may use technologies such as Low-Power Wide-Area Networks (LPWAN) and Software-Defined Wide Area Networks (SD-WAN), which can be specialized as software applications and Internet of Things (IoT) platforms. In this article, we present a comprehensive discussion of the existing ECS use cases and current research directions regarding the use of unconventional and hybrid methods for establishing communication between a specific site and the outside world. The ECS proposed and simulated in this article consists of an autonomous wireless 4G/LTE base station and a LoRa network utilizing a hybrid IoT communication platform combining LPWAN and SD-WAN technologies. The LoRa-based wireless network was simulated using Network Simulator 3 (NS3), focusing on firm and sufficient data transfer between an appropriate gateway and LPWAN sensor nodes to provide trustworthy communications. The proposed scheme provided efficient data transfer with low data losses by optimizing the installation of the gateway within the premises, while the SD-WAN scheme, simulated using the MATLAB simulator and LTE Toolbox in conjunction with an ADALM PLUTO SDR device, proved to be an outstanding alternative communication solution as well. Its performance was measured after recombining all received data blocks, leading to a beneficial proposal to researchers and practitioners regarding the benefits of using an on-premises IoT communication platform.
... Therefore, different SDN distributed controllers have been presented to provide some amount of performance, security, availability, and scalability, as indicated in Figure 2. Controllers such as Hyperflow [38], Kandoo [39], and ONOS [22] provide a series of distributed controllers, and each controller has an equivalent global view of the network topology [40]. The distributed controller controls the entire network while preserving sophisticated requirements such as performance metrics, security, load balancing, efficiency, good features, stable architecture, availability, fault tolerance, and efficient convergence time [23,[33][34][35][36][37][38][39][40][41]. ...
Article
Full-text available
The increasing need for automated networking platforms like the Internet of Things, as well as network services like cloud computing, big data applications, wireless networks, mobile Internet, and virtualization, has driven existing networks to their limits. A software-defined network (SDN) is a modern programmable network architecture that allows network administrators to control the entire network consistently from logically centralized, software-based controllers, while network devices become simple packet-forwarding devices. The controller, which is the network's brain, is mostly based on the OpenFlow protocol and has distinct characteristics that vary depending on the programming language. Its function is to control network traffic and increase network resource efficiency. Therefore, selecting the right controllers and monitoring their performance is required to increase resource usage and enhance network performance metrics. For network performance metrics analysis, the study proposes an implementation of the SDN architecture utilizing the open-source OpenDaylight (ODL) distributed SDN controller. The proposed work evaluates the performance of the distributed SDN controller deployment on three distinct customized network topologies based on the SDN architecture for node-to-node performance metrics such as delay, throughput, packet loss, and bandwidth use. The experiments are conducted using the Mininet emulation tool. Wireshark is used to collect and analyse packets in real time. The results obtained from the comparison of networks are presented to provide useful guidelines for SDN research and deployment initiatives.
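The node-to-node metrics this abstract names (delay, throughput, packet loss) reduce to simple arithmetic over timing samples. A minimal sketch in Python, with invented sample values rather than real Mininet or Wireshark output:

```python
def metrics(samples, sent_total):
    """Compute avg delay (s), throughput (bytes/s), and loss ratio
    from (sent_time, recv_time, size_bytes) samples; recv_time is
    None for packets that were lost."""
    received = [s for s in samples if s[1] is not None]
    delays = [recv - sent for sent, recv, _ in received]
    duration = max(r for _, r, _ in received) - min(s for s, _, _ in received)
    return {
        "avg_delay": sum(delays) / len(delays),
        "throughput": sum(size for *_, size in received) / duration,
        "loss": 1 - len(received) / sent_total,
    }


# three 1000-byte packets sent; the third one is lost
samples = [(0.0, 0.1, 1000), (1.0, 1.3, 1000), (2.0, None, 1000)]
m = metrics(samples, sent_total=3)

assert round(m["avg_delay"], 2) == 0.2
assert abs(m["loss"] - 1 / 3) < 1e-9
```

In practice the timestamps would come from Wireshark captures on the two end hosts, which is essentially what the study's per-topology comparisons aggregate.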
... To evaluate the controller performance in a detailed way, the paper (Cabarkapa D. et al., 2021) presented different performance aspects of the RYU and POX controllers, such as throughput and latency, under simple tree-based and complex fat-tree-based network topologies. Work (Shalimov A. et al., 2013) presented a framework named hcprobe to compare seven different SDN controllers. To compare the effectiveness of these controllers, the authors performed additional measurements of scalability, reliability, and security along with latency and throughput. ...
Chapter
Full-text available
Purpose: The purpose of this paper is to analyze some of the most important international legal instruments governing the fight against corruption. The existing international legal framework includes international and regional agreements and other legal acts adopted under the auspices of the United Nations, the European Union, the Council of Europe, the Organization for Economic Cooperation and Development, the Organization of American States, the African Union and other important international organizations. Design/Methods/Approach: Using the comparative, normative, teleological and linguistic method, the author analyzes the main provisions of international legal instruments concerning the use of common terms and definitions of corruption, its prohibition and incrimination, jurisdiction of judicial bodies, determination of legal responsibility, sanctions, monitoring of preventive and other measures, all in order to get a more realistic picture of the functional relationship that exists between the various legal regimes that regulate corruption in the international legal field as one of the most problematic forms of crime in the contemporary world. Findings: The paper finds that there are certain differences between international legal instruments that regulate corruption offenses. This knowledge may be important for further harmonization of the international legal framework on the fight against corruption. This finding can be useful for the consistent incorporation of international anti-corruption standards into national legislation, in order to avoid situations where corrupt acts are treated unequally due to the application of different legal standards at the national level, which may be crucial for their incrimination and punishment especially when corruption acquires transnational characteristics. 
Thus, for example, by implementing the standards present in the OECD Anti-Bribery Convention, States can opt for a much narrower approach that calls exclusively for the incrimination of so-called “active bribery”. On the other hand, if States implement standards from some other international legal instruments, such as the Criminal Law Convention on Corruption of the Council of Europe, then States will sanction a number of different corrupt activities with their internal legislation. Considering that in modern conditions, corrupt activities are taking more and more forms of transnational organized crime, according to the authors, only institutionalized mechanisms of international police and judicial cooperation can help in their suppression and punishment. Originality/Value: The scientific value of this paper derives from a comparative legal analysis of the most important international legal instruments and mechanisms used against corruption at the international legal level. The results obtained by the author during the analysis may be important in the implementation of international legal standards on the prevention and punishment of corruption in the domestic legal order. The paper has some original value as it points to the harmful consequences of non-application or inconsistent application of adopted international legal standards on the fight against corruption for social security, good governance and rule of law, which are basic preconditions for developing any democratically stable and economically sustainable societies.
... The error of the solution does not exceed 5-7% on average. The heuristic insertion method based on the "nearest neighbor" principle, as well as its offshoot, "tabu search", was considered in [23,24]. However, despite the simplicity of the solution, these approaches are based on formally unfounded considerations, so it is difficult to prove that the heuristic algorithm finds optimal solutions for every set of initial data. ...
Article
Full-text available
The article proposes an approach to ensuring the functioning of Software-Defined Networks (SDN) under cyber attack conditions based on the analytical modeling of cyber attacks using the method of topological transformation of stochastic networks. Unlike other well-known approaches, the proposed approach combines SDN resilience assessment based on analytical modeling with SDN state monitoring based on a neural network. The mathematical foundations of this assessment are considered, which make it possible to calculate the resilience indicators of SDN using analytical expressions. As the main indicator, it is proposed to use the correct operation coefficient for the resilience of SDN. The approach under consideration involves the development of verbal models of cyber attacks, followed by the construction of their analytical models. In order to build analytical models of cyber attacks, the method of topological transformation of stochastic networks (TTSN) is used. To obtain initial data for the simulation, an SDN simulation bench was justified and deployed in the EVE-NG (Emulated Virtual Environment Next Generation) virtual environment. The result of the simulation is the time distribution function and the average time of the cyber attack implementation. These results are then used to evaluate the SDN resilience indicators, which are found by using the theory of Markov processes. In order to ensure the resilience of SDN functioning, the article substantiates an algorithm for monitoring the state of controllers and automatically restructuring them, built on the basis of a neural network. When choosing a neural network, a comparative evaluation of a convolutional neural network and an LSTM neural network is carried out.
The experimental results of analytical modeling and simulation are presented and comparatively evaluated, showing that the proposed approach has sufficiently high accuracy and completeness of the obtained solutions and requires little time to obtain a result.
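The abstract does not define its "correct operation coefficient". As a rough stand-in, the classical steady-state availability formula illustrates the kind of indicator involved; this is an assumption for illustration, not the paper's actual formula:

```python
def correct_operation_coefficient(mtbf, mttr):
    """Steady-state fraction of time the control plane works correctly.
    This is the classical availability formula A = MTBF / (MTBF + MTTR),
    used here as an assumed stand-in for the paper's coefficient."""
    return mtbf / (mtbf + mttr)


# illustrative numbers: attacks disable a controller every 200 h on
# average, and automatic restructuring restores service in 0.5 h
k = correct_operation_coefficient(mtbf=200.0, mttr=0.5)

assert abs(k - 200.0 / 200.5) < 1e-12
assert 0.99 < k < 1.0
```

The paper's Markov-process treatment effectively derives such a ratio from the attack-time distributions produced by the TTSN models instead of from fixed MTBF/MTTR constants.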
Article
Full-text available
Network monitoring allows network administrators to facilitate network activities and to resolve issues in a timely fashion. Monitoring techniques in software-defined networks are either (i) active, where probing packets are sent periodically, or (ii) passive, where traffic statistics are collected from the network forwarding elements. The centralized nature of software-defined networking implies the implementation of monitoring techniques imposes additional overhead on the network controller. We propose Graph Modeling for OpenFlow Switch Monitoring (GMSM), which is a lightweight monitoring technique. GMSM constructs a flow-graph overview using two types of asynchronous OpenFlow messages: packet-in and flow-removed, which improve monitoring and decision making. It classifies new flows based on the class of service. Experimental findings suggest that using GMSM leads to a decrease in network overhead resulting from the communication between the controller and the switches, with a reduction of 5.7% and 6.7% compared to state-of-the-art approaches. GMSM reduces the controller’s CPU utilization by more than 2% compared to other monitoring methods. Overhead reduction comes with a slight reduction of approximately 0.17 units in the estimation accuracy of links utilization because GMSM allows the user to monitor the network subject to a selected class of service, as opposed to having an exact view of the network utilization.
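The event-driven flow-graph idea described above can be sketched minimally: add a flow on a packet-in, drop it on a flow-removed. The event dictionaries here are invented stand-ins, not real OpenFlow packet-in/flow-removed messages:

```python
def apply_event(flows, event):
    """packet-in adds a flow edge keyed by (src, dst); flow-removed
    deletes it. `flows` approximates GMSM's flow-graph overview."""
    key = (event["src"], event["dst"])
    if event["type"] == "packet_in":
        flows[key] = {"class": event.get("cos", "best-effort")}
    elif event["type"] == "flow_removed":
        flows.pop(key, None)
    return flows


flows = {}
apply_event(flows, {"type": "packet_in", "src": "h1", "dst": "h2", "cos": "video"})
apply_event(flows, {"type": "packet_in", "src": "h1", "dst": "h3"})
apply_event(flows, {"type": "flow_removed", "src": "h1", "dst": "h3"})

assert set(flows) == {("h1", "h2")}
assert flows[("h1", "h2")]["class"] == "video"
```

Because both message types are already sent asynchronously by switches, maintaining this view adds no polling traffic, which is where the overhead reduction the abstract reports comes from.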
Chapter
In today’s world and surrounding cyber-physical systems, the most adopted communication devices are Internet-enabled devices embedded with control functions for gaining various self-capabilities. The Internet of Things (IoT) emerged to address and manage the challenges of connecting and communicating with a massive volume of Internet-enabled devices running various services and applications. These devices can collect data from a set of sensors monitoring physical objects, and they are embedded with actuators controlling the machine systems. These quick developments were the reason for the rise of the 5G mobile network to support these large data transfers. Moreover, due to the rapid evolution in communications technology, the dynamicity of IoT networks, and the emerging use of many heterogeneous IoT devices together, the control and management architectures adopted by the traditional Internet are not able to efficiently and effectively manage networking operations and the required QoS of running services and applications. Accordingly, this raises the need for the Software Defined Network (SDN) paradigm. The SDN definition is based on the dissociation of the control plane from the data plane. This separation ensures flexibility and simplification of network management through control plane programmability. The control layer is often comprised of a centralized unit known as an SDN controller, which is regarded as the network's brain since it holds all of the network's policies and regulations. In this chapter, the control layer is designed and studied according to the number of controllers used, their services support, and their cooperation to provide efficient data communications in the data plane. There are different SDN control plane architectures for managing the operations and cooperation of the controllers in such a control plane.
Centralized control, distributed control, and Logically Centralized-Physically Distributed (LC-PD) control planes are the most well-known control plane architectures. The LC-PD control plane architecture is investigated in an IoT environment that uses 5G cellular network technology. The tests conducted show that implementing the LC-PD control plane architecture in 5G systems increases network performance and the QoS of operating network services. Keywords: Software Defined Network; 5G; LC-PD control plane; QoS; IoT
Article
Purpose of research. To investigate a software method for load balancing in a distributed network via an Nginx proxy server. Methods. Load balancing is an important aspect of a computer network: depending on how load is balanced, the transmission delay and the jitter (the spread around its average value) may decrease or increase, so load balancing affects the timing characteristics and the bandwidth of the network. Load balancing can be managed and optimized in both software and hardware. The article focuses on load balancing at the application level. Hardware load balancing, which is solved within the network equipment itself, for example in switches, is briefly considered: it is handled by the queue manager in the Ethernet switch, which manages bandwidth and queues. Cyclic algorithms are described, as well as an algorithm with time-based frame selection in the switch dispatcher, which implement effective hardware load balancing. Software load balancing in the network is then considered. A web server and an Nginx reverse proxy server were used as the software load balancer, with three Docker containers based on Asp.net applications running in different environments. Results. The network was configured and the cyclic load-balancing algorithm was used in the Nginx server. The network was studied with varying numbers of environments, web servers, and data requests. Experiments showed that cyclic load balancing in Nginx is more efficient than the random algorithm. Conclusion. Hardware and software load-balancing algorithms in a distributed network were considered and investigated. Cyclic load balancing made it possible to increase the network's bandwidth, efficiency, and performance.
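The cyclic (round-robin) policy the study evaluates can be modelled in a few lines of Python. The backend names are invented placeholders for the three Docker containers behind the Nginx proxy:

```python
import itertools


def round_robin(backends):
    """Yield backends cyclically, as a round-robin balancer would
    assign successive incoming requests."""
    return itertools.cycle(backends)


backends = ["app1:5000", "app2:5000", "app3:5000"]
chooser = round_robin(backends)
assigned = [next(chooser) for _ in range(6)]

# each backend receives exactly two of the six requests
assert assigned == backends * 2
```

The even per-backend request count is why the cyclic policy outperforms random assignment in the study's experiments: random assignment only equalizes load in expectation, while round-robin equalizes it deterministically.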
Article
Full-text available
For routers and switches to handle ever-increasing bandwidth requirements, the packet "fast-path" must be handled with specialized hardware. There have been two approaches to building such packet forwarding hardware.
Conference Paper
We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work.
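The path-splitting behaviour described here (dividing an application's flows among multiple paths against available capacity) can be illustrated with a toy proportional split. The proportional-to-spare-capacity rule and the numbers are invented simplifications, not B4's actual traffic-engineering algorithm:

```python
def split_demand(demand, spare_capacity):
    """Allocate one application's demand across paths in proportion
    to each path's spare capacity (toy model of multipath splitting)."""
    total = sum(spare_capacity)
    return [demand * c / total for c in spare_capacity]


# 90 units of demand over two paths with 30 and 60 units of headroom
alloc = split_demand(demand=90.0, spare_capacity=[30.0, 60.0])

assert alloc == [30.0, 60.0]
assert sum(alloc) == 90.0
```

B4 additionally arbitrates between applications by priority before splitting, which is how it drives links to near-full utilization without starving high-priority flows.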
Conference Paper
Cloud computing realises the vision of utility computing. Tenants can benefit from on-demand provisioning of computational resources according to a pay-per-use model and can outsource hardware purchases and maintenance. Tenants, however, have only limited ...
Conference Paper
We present SWAN, a system that boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and frequently re-configuring the network's data plane to match current traffic demand. But done simplistically, these re-configurations can also cause severe, transient congestion because different switches may apply updates at different times. We develop a novel technique that leverages a small amount of scratch capacity on links to apply updates in a provably congestion-free manner, without making any assumptions about the order and timing of updates at individual switches. Further, to scale to large networks in the face of limited forwarding table capacity, SWAN greedily selects a small set of entries that can best satisfy current demand. It updates this set without disrupting traffic by leveraging a small amount of scratch capacity in forwarding tables. Experiments using a testbed prototype and data-driven simulations of two production networks show that SWAN carries 60% more traffic than the current practice.
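The greedy entry-selection idea SWAN uses under limited forwarding-table capacity can be sketched in a few lines. The flow names and demand values are invented, and this is a simplification of the actual mechanism, which also stages updates through scratch capacity:

```python
def greedy_select(flows, capacity):
    """Pick up to `capacity` flow entries, largest demand first,
    approximating 'best satisfy current demand' greedily."""
    ranked = sorted(flows.items(), key=lambda kv: -kv[1])
    return dict(ranked[:capacity])


flows = {"f1": 40, "f2": 10, "f3": 25, "f4": 5}
chosen = greedy_select(flows, capacity=2)

assert chosen == {"f1": 40, "f3": 25}
```

Flows not selected fall back to default routing; SWAN re-runs this selection as demand shifts, swapping entries in and out without disrupting carried traffic.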
D. Ferrary, M. Liu. A general purpose software measurement tools. Software Practice and Experience, v.5, 1985