Performances of OpenFlow-Based Software-
Defined Networks: An overview
Fouad Benamrane, Mouad Ben mamoun, and Redouane Benaini
LRI, Faculty of Sciences of Rabat, Mohammed V University, Rabat, Morocco
Email: benamranefouade@gmail.com, {ben_mamoun, benaini}@fsr.ac.ma
Abstract—Software Defined Networking (SDN) has gained significant attention from network researchers and industry in recent years. Indeed, the SDN concept provides many advantages such as programmability and easy management of the network. However, it also raises new challenges such as scalability and performance issues, and understanding in depth the performance and limitations of the SDN concept is a prerequisite to its implementation and deployment in real networks. In this paper, we aim to present, in a comprehensive way, the most important works that focus on the performance of SDN. As SDN separates the control plane from the data plane, we first present the research efforts made to enhance the performance of data plane devices, then we give an overview of the different solutions proposed to improve controller performance. We also provide an overview of recent control plane architectures with multiple controllers that have been proposed to meet performance and scalability constraints. Finally, we present the different techniques and tools used in the literature to evaluate the performance of software defined networks.
I. INTRODUCTION
Software-Defined Networking (SDN) is a new concept of network architecture that has emerged in recent years. Contrary to traditional architectures, where the control plane and the data plane coexist in network devices such as routers and switches, this concept basically consists in moving the control plane outside the network devices and leaving only the data plane inside. The control plane is supported by a software application called the controller. Network devices become simple packet forwarding devices that can be programmed through rules set by the controller. In SDN, communication between the control plane (the controller or controllers) and the data plane (networking devices) is today mainly established by the OpenFlow protocol.
After the proposal of the SDN concept and the implementation of OpenFlow by researchers from Stanford University around 2008, SDN has aroused great interest and has become an important topic for research, development, and standardization in the networking area. This is due, on the one hand, to the limitations of current network architectures in meeting the requirements of new IT trends such as cloud computing, virtualization, the Internet of Things and the explosion of mobile devices, and on the other hand, to the potential and the various benefits which Software-Defined Networks may offer.
In fact, the coupling between the control plane and the data plane in network elements prevents quick and easy deployment of new network functionalities and services, because they must be implemented directly in the infrastructure. This depends on the production cycles of manufacturers, which may be very long. Thus, traditional networks are relatively static and lack the flexibility to adapt to the dynamic nature of traffic, applications, cloud and virtual environments. The interest of SDN lies in the ability to program network devices in a unified and centralized way through software applications. This gives administrators large flexibility to dynamically control, manage, secure and optimize their network resources. Moreover, the ability of operators, enterprises and users to program the network will accelerate innovation in networks. New network services will be introduced at software development speed, without the need to wait for new hardware products from manufacturers.
The SDN approach certainly has several advantages, as explained above. However, some questions and issues arise with this approach. In fact, the first propositions and deployments of SDN networks use a single centralized controller. This poses the problem of having a single point of failure, and raises concerns about the performance of the controller, its capacity to handle a large amount of flow requests and, by consequence, its ability to scale when the network size grows. The first OpenFlow controller was NOX; it has a flow install time of less than 10 ms and can handle up to 30,000 flow requests per second [1]. Many other OpenFlow controllers were developed afterwards with considerably better performance, such as Floodlight [2] and OpenDayLight [3]. In this article, we provide an overview of the progress that has been made regarding controller performance and of the techniques that have enabled this progress. On the other hand, to avoid the problem of a single point of failure and to address the scalability issue, control planes with multiple controllers have been proposed. But this generates new questions and issues, such as the number of controllers needed, the placement of controllers and the communication between controllers. In this context, we can distinguish two main propositions of control plane designs: logically centralized and logically distributed control planes. Both propositions have their advantages and disadvantages. Later in this paper, we present these
propositions and discuss their advantages and
disadvantages.
Since the emergence of the Software Defined Networking concept, many works have investigated this topic in the networking area. Several recent papers [4], [5], [6], and [7] provide interesting surveys about SDN and the work performed. These papers present the different contributions that helped to promote programmable networks before SDN. They explain in detail the concept, the motivations and the architecture of SDN. They give detailed information about the OpenFlow implementation and its different releases. They present and discuss the current deployments and challenges, as well as the future directions for SDN.
Our goal in this paper is different: we aim to give the reader especially interested in the performance of SDN networks an overview of the main studies and enhancement propositions that have been made in the literature regarding the performance of SDN networks. In fact, on the one hand, the survey papers mentioned above cover all aspects of SDN networks and do not focus on performance issues and improvement solutions; the information on performance is embedded in a large amount of information and details about SDN. On the other hand, some articles are exclusively devoted to performance but are specific to a given controller or a given performance aspect. For instance, the paper [8] is dedicated to the performance of the NOX-MT controller, the paper [9] analyses the impact of the latency between an OpenFlow switch and its controller, and the authors of [10] compare the throughput of different controllers and show the impact of multi-threading on their performance.
The paper is organized as follows: Section II briefly describes the concept of SDN and the OpenFlow protocol. Section III is devoted to the performance of OpenFlow switches. Section IV treats the performance of OpenFlow controllers. Section V surveys the emerging propositions based on multiple controllers. Section VI describes the various techniques that have been used to evaluate the performance of OpenFlow networks. The last section is dedicated to a conclusion and some ideas for future work.
II. SDN AND OPENFLOW
In this section, we review the SDN architecture and the OpenFlow protocol, then we briefly present the main performance metrics associated with an OpenFlow-based Software-Defined Network.
A. SDN Architecture
As illustrated in Figure 1, SDN separates the network architecture into three layers:
Transmission layer (or data plane): composed of the forwarding devices (switches); its main function is to execute the rules provided by the remote controller.
Control layer (or control plane): composed of one or more controllers, which constitute the core of the infrastructure and concentrate all the intelligence of the network. This layer offers a unified and centralized view of the network, and it hides the complexity of the underlying physical network from the applications.
Application layer: thanks to the centralized view provided by the control plane, this layer constitutes a platform for the implementation of all kinds of new services and applications designed specifically for users' needs.
B. OpenFlow Protocol
The communication between the control plane and the data plane in the SDN architecture is essentially performed by the OpenFlow protocol. It is an open protocol standardized by the Open Networking Foundation (ONF) [32], which was founded by leaders of the IT industry such as Cisco, Facebook, Google, HP and Microsoft.
Before presenting the components of the OpenFlow protocol, note that there are several versions of this protocol. Indeed, since its first release, and thanks to the growing interest in SDN/OpenFlow, several versions have been developed with the objective of improving the capabilities of previous versions. The main additions proposed in each version are summarized below:
Figure 1. SDN architecture
1.1: Support for MPLS, QoS, VLANs, multipath, multiple tables, virtual ports, port groups.
1.2: Support for extensible headers (in match, packet_in, set_field), IPv6.
1.3: Support for tunneling, per-flow traffic meters, Provider Backbone Bridging, the notion of a group, and multiple controllers (Master, Slave or Equal).
1.4: Provides better management of switches by controllers by adding a number of new interactions between them. "Role status" keeps controllers synchronized with regard to which is primary versus secondary. Group/meter change notifications are sent to all controllers associated with a switch.
In OpenFlow-based Software-Defined Networks, the traffic is treated as flows. Each OpenFlow-compliant switch contains one or more flow tables that include information about flows. Each flow table entry has the following fields:
Match field: defines all the information about the flow that is used to match incoming packets.
Statistics field: maintains the number of packets and bytes for each flow.
Action field: defines how the packets belonging to a
flow should be processed and forwarded.
With the OpenFlow protocol, the switches use flow tables to forward packets. The controller may set up new forwarding rules in a switch flow table at any time, if necessary. The communication between the controller and the switches is established over a secure channel.
Figure 2 illustrates the communication process between an OpenFlow switch and a controller when a new rule must be installed in the flow table: when a packet arrives at a switch, its header fields are examined and compared with the match fields of the flow table entries. If a match is identified, the switch executes the action specified in the corresponding flow table entry. Otherwise, if there is no match (1), a flow request is sent to the controller (2), then the controller decides on an action and sends a new forwarding rule to the switch (3). Finally, the switch's flow table is updated (4).
Figure 2. Switch-Controller communication process
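To make this exchange concrete, the sketch below shows how a controller application might react to steps (1)-(4) above: it handles a packet_in reported by a switch and installs a new rule. It is written against the Ryu controller API, it simply floods the packet, and it is only an illustrative simplification rather than the behavior of any specific controller discussed in this paper.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissHandler(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # (2) The switch had no matching entry and sent us a packet_in.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # Decide on an action; here we simply flood (illustration only).
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]

        # (3)/(4) Push a rule so future packets on this port match in the table.
        match = parser.OFPMatch(in_port=in_port)
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))

        # Also forward the packet that triggered the request.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))
```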
Obviously, the traffic for which the intervention of the controller is necessary to install a new rule will be penalized in terms of delay compared to the traffic that can be forwarded directly by the switch. In addition, if the controller is heavily solicited by the switches, it could become a bottleneck and affect the performance of the whole network.
C. Key Performance Metrics
From the above subsection, it appears that the SDN/OpenFlow architecture raises the problem of controller performance and of its capacity to handle the requests of a large number of network devices. Two performance metrics are of primary importance:
Throughput: the number of flow requests handled per second.
Latency: the delay to respond to flow requests.
Currently, several OpenFlow controllers are available, for example NOX, Floodlight, Ryu and ODL (OpenDayLight). The main difference between these controllers lies in their performance. For instance, NOX, the first developed controller, could handle just 30K flows per second with a flow install time (latency) of 10 ms, whereas more recent controllers like Beacon can achieve a throughput of 7 million flows per second. To reach such results, many studies, propositions and solutions have been made in the literature to understand and improve the performance of OpenFlow-based Software-Defined Networks. Our goal in this paper is to present and discuss the main studies performed. In the remainder of this paper, we classify them into three categories:
Performance studies of OpenFlow switches
Performance studies of OpenFlow controllers
Studies of distributed controllers
We note that the distributed controllers' studies could be treated under the category of performance studies of OpenFlow controllers, but we prefer to discuss them in a separate section due to their importance.
III. PERFORMANCES OF OPENFLOW SWITCHES
Recently, flow-based switch models (OpenFlow) have emerged, offering more flexibility than Ethernet switches. This is a promising technology that can enhance network virtualization towards a more flexible future Internet architecture. Two types of OpenFlow switches can be identified. The first is the hardware-based commercial switch, which typically uses TCAMs and flow tables to store all information about flows. The second is the software-based switch, which uses Linux systems to perform the OpenFlow switch operations. In this section, we mention a study that compares OpenFlow switches with traditional switches in terms of performance, and then present some interesting contributions that have been made to the data plane of an OpenFlow network. We classify them into two categories: software-based enhancements and hardware-based enhancements.
A. Comparison with Traditional Switches
Since the arrival of the OpenFlow switch model, a comparison with traditional switches in terms of performance has been needed. In [11], the authors evaluate the achievable performance of an OpenFlow switch and of traditional L2 and L3 switches by comparing the results of single-flow and multiple-flow experiments. Two parameters are considered for the evaluation: throughput and latency. In the single-flow case, the authors focus on exact-match packets of different sizes; they find that, starting from 256-byte packets, all three forwarding techniques are able to achieve full wire speed and maximum throughput, and the latencies are quite small (less than 30 µs). In the multi-flow case, where they consider the effects of the forwarding table size, the performance degradation of the L3 and OpenFlow switches is quite limited, while the effect on the L2 switch is disastrous. Overall, the OpenFlow switch offers very good performance in both cases compared to traditional switches.
B. Software-based Enhancements
OpenFlow switches have two types of lookup tables: hash and linear. The hash table, also called the exact-match table, uses a hashing function and can contain up to
131,072 entries. The linear table exploits wildcards in the packet header fields to match packets to flows; it is a small table and contains only 100 entries. In [12], the authors introduce the Flow Director technique, which is used to direct incoming packets to receive queues. The Flow Director is considered as an additional lookup table, which stores a mapping between receive queue and output port. The authors observe that, unlike the linear table size, neither the hash table size nor the Flow Director table size affects OpenFlow switching performance, and that the modified OpenFlow switch improves switching throughput by up to 25% compared to the throughput of regular software-based OpenFlow switching.
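The interplay between the exact-match (hash) table and the wildcard (linear) table can be pictured with a short sketch of a software switch lookup. The structures and field names below are hypothetical simplifications used for illustration; they are not the code of the modified switch in [12].

```python
# Hypothetical two-stage lookup of a software OpenFlow switch:
# probe an exact-match hash table first, then scan the small
# wildcard table in priority order.

EXACT_TABLE = {}      # full header tuple -> action (up to ~131072 entries)
WILDCARD_TABLE = []   # list of (priority, fields_dict, action), ~100 entries


def lookup(headers):
    """headers: dict of packet header fields, e.g. {'ip_dst': '10.0.0.2', 'tcp_dst': 80}."""
    # Stage 1: O(1) exact match on the full header tuple.
    key = tuple(sorted(headers.items()))
    if key in EXACT_TABLE:
        return EXACT_TABLE[key]
    # Stage 2: linear scan of the wildcard table, highest priority first.
    for _prio, fields, action in sorted(WILDCARD_TABLE, key=lambda e: e[0], reverse=True):
        if all(headers.get(f) == v for f, v in fields.items()):
            return action
    return 'packet_in'   # table miss: forward the packet to the controller
```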
Another contribution that we mention in this subsection concerns the counters and statistics maintained in an OpenFlow switch. Mogul et al. [13] point out that one of the advantages of an SDN network is that it allows fine-grained flow control by maintaining per-flow counters (e.g., received packets, received bytes, and duration) in the data plane, and by making them visible to the external controllers. They propose Software-Defined Counters (SDC), which migrate all SDN counters from the ASIC to the switch-local CPU; SDC thus promises increased flexibility, more efficient access to counters, and a reduction in ASIC space and complexity.
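A rough way to picture the SDC idea is to keep the per-flow statistics in ordinary data structures on the switch-local CPU and to expose them to the controller on demand. The sketch below is our own simplified illustration of such software counters, not the design of [13].

```python
import time
from collections import defaultdict


class FlowStats:
    """Per-flow statistics held in switch-local CPU memory instead of ASIC counters."""
    def __init__(self):
        self.packets = 0
        self.bytes = 0
        self.created = time.time()


counters = defaultdict(FlowStats)


def on_packet(flow_key, pkt_len):
    """Called by the forwarding path for every packet of a known flow."""
    st = counters[flow_key]
    st.packets += 1
    st.bytes += pkt_len


def stats_for(flow_key):
    """What the switch could report for an OpenFlow flow-statistics request."""
    st = counters[flow_key]
    return {'packet_count': st.packets,
            'byte_count': st.bytes,
            'duration_sec': int(time.time() - st.created)}
```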
C. Hardware-Based Enhancements
An OpenFlow switch generally makes two important decisions: queue management and scheduling. The first concerns how long a queue may grow, and the second consists in deciding which packet should be sent next when an outgoing link is free. An interesting work in [14] treats these two questions. The authors propose to extend SDN's flexibility to cover queuing and scheduling decisions made in the data plane, by adding a small FPGA to the fast path of a hardware switch, with a simple interface to the switch's packet queues. This solution allows queuing and scheduling to be reconfigured by the network operator, and it aligns the queuing and scheduling behavior with application objectives. As a result, they bring both high performance and flexibility to the data plane.
At least two studies have proposed additional ways to take advantage of a CPU connected to the switch. The authors of [15] attempt to increase the size of both the forwarding table and the packet buffer by proposing some modifications to the current OpenFlow switch design. The main idea is to combine ASIC and CPU processing. In their solution, the switch is not a dumb component; it swaps flows, monitors the queues and decides on traffic redirection. They propose to use the CPU as a traffic co-processor in switches to address the limited size of the forwarding table and packet buffer. This makes network devices more programmable and offers more network functions. A prototype was developed and a throughput of 3.9 Gb/s was achieved. In the same direction, Luo et al. in [16] implement network-processor-based acceleration cards to perform OpenFlow switching. They show a 20 percent reduction in packet delay compared to conventional designs.
From the previous studies, we see that various parameters may influence the performance of an OpenFlow-enabled switch, including the processing time in modern all-in-one ASICs, the lookup table delay, and the queue-management and scheduling operations. The proposed enhancements lead to better performance of OpenFlow switches and encourage enterprises to use them in production networks. In the next section, we present performance studies of OpenFlow controllers.
IV. PERFORMANCES OF OPENFLOW CONTROLLERS
SDN controllers are special control software which interacts with switching devices and provides a programmatic interface. Nowadays, there are more than 30 different OpenFlow controllers, created by different groups, written in different languages and offering different performance. The performance of an SDN controller is characterized by several metrics, but throughput and latency are the most considered. Since NOX, the first developed controller, several studies and improvement solutions have addressed the performance of controllers. These solutions are essentially based on the use of virtualization and multi-threading techniques. We begin this section by mentioning some interesting performance studies, then we present the improvement solutions.
A. Some Performance Studies
To identify the best controller in terms of latency, the authors of [10] measure the latency, considered as the average response time, with one connected switch and 10^5 hosts. They show that the smallest latency is obtained by the MuL and Maestro controllers, while the largest latency is typical of the Python-based controllers POX and Ryu.
In [9], the authors consider Floodlight, a high-performing OpenFlow controller that can handle a large amount of flows from a large number of devices, and show that the link between a switch and its controller is of primary importance for the performance of the whole network. In fact, the controller cannot process requests faster than it receives them. In this study, the authors focus on how the controller and the network behave under bandwidth and latency constraints on the control link. They adjust the bandwidth with a traffic shaper, and the latency by increasing the time a packet has to wait in the egress queue before reaching the controller. Increasing the bandwidth leads to a more reactive network, as the controller works at full capacity. Conversely, a low bandwidth will cause some packet losses as the egress queues fill up. The latency has a very different effect on the performance. In particular, the authors show that a high throughput combined with a high latency leads to bad performance.
As each switch has to complete a handshake with the controller before being able to send requests, no operation is possible during this time window. For instance, when the latency increases, the time to complete the handshake phase grows, up to 7 s for a latency of 100 ms in a 32-switch network. Hence, with a high latency, the packets have to wait longer, thus increasing the probability of
packet loss, which reduces the overall performance of the network. In conclusion, the bandwidth determines how many flows the controller can process, as well as the loss rate when the system is under heavy load, while the latency drives the overall behavior of the network.
B. Virtualization-Based Improvements
Network virtualization is a particular abstraction of a physical network that allows multiple logical networks to run on a common physical substrate. SDN effectively separates the data plane and the control plane, whereas network virtualization separates logical and physical networks. The two concepts are certainly distinct, but SDN is a useful tool for implementing virtual networks. To benefit from both SDN and network virtualization, and to improve the performance of an SDN network, researchers have added a new layer between the control plane and the data plane, called FlowVisor [17]. FlowVisor basically virtualizes the network control by letting experimental traffic run in parallel with real production traffic on the production network. FlowVisor specifies the subset of the traffic that an experimenter is allowed to control. This is achieved using a concept called flow space, where some subset of traffic flows, based for example on IP addresses or ports, is specified as being controlled by an experimental network controller instead of the production network control.
Whereas FlowVisor can be compared to a full virtualization technology, FlowN [18] is a virtualization solution in which each tenant (instance) has the illusion of having its own address space, topology, and controller. This enables the tenants to deploy any network abstraction and application on top of the controller platform, and it allows a single shared controller platform to be used for managing multiple domains in a cloud environment. By comparing FlowN and FlowVisor on scalable virtual networks, the authors conclude that FlowN has a higher overhead (latency) due to its database but scales better than FlowVisor (in the case of 100 virtual networks or more).
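The flow-space idea can be sketched as a classifier that maps a packet's header fields to the slice (i.e., the controller) allowed to handle it. The rules and field names below are hypothetical and only illustrate the concept; they are not the FlowVisor implementation or its configuration syntax.

```python
import ipaddress

# Hypothetical flow-space table: each entry maps header constraints
# to the slice (tenant controller) that owns that traffic.
FLOWSPACE = [
    ({'tcp_dst': 80}, 'production_controller'),
    ({'ip_src_prefix': '10.0.2.0/24'}, 'experimental_controller'),
]


def _matches(fields, pkt):
    for field, value in fields.items():
        if field == 'ip_src_prefix':
            if ipaddress.ip_address(pkt['ip_src']) not in ipaddress.ip_network(value):
                return False
        elif pkt.get(field) != value:
            return False
    return True


def slice_for(pkt):
    """Return the controller allowed to see and control this packet."""
    for fields, controller in FLOWSPACE:
        if _matches(fields, pkt):
            return controller
    return 'production_controller'   # default slice


# Web traffic stays in the production slice; traffic from 10.0.2.0/24
# is handed to the experimental slice.
print(slice_for({'ip_src': '10.0.2.7', 'tcp_dst': 22}))   # experimental_controller
```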
C. Multithreading-Based Improvements
The authors of [8] attempt to understand the implications of using multi-threading techniques in SDNs. They introduce a new multi-threaded controller called NOX-MT (a new release of the NOX OpenFlow controller). The main conclusion is that NOX-MT improves the controller throughput by more than 30 times. Indeed, NOX-MT takes full advantage of multi-core processors, using multi-threading techniques that run several tasks concurrently, hence making optimal use of the available resources.
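The gain reported for NOX-MT comes from spreading message processing over several cores. A generic way to picture this (not the NOX-MT code itself) is a pool of worker threads that each drain a share of the incoming packet_in messages, for instance hashed by switch so that per-switch ordering is preserved; note that in CPython the GIL limits true parallelism, which is consistent with the poor scaling of the Python-based controllers reported in [10].

```python
import queue
import threading

NUM_WORKERS = 4                       # typically one worker per CPU core
queues = [queue.Queue() for _ in range(NUM_WORKERS)]


def handle_packet_in(switch_id, packet_in):
    pass                              # placeholder for the controller application logic


def worker(q):
    while True:
        switch_id, packet_in = q.get()
        handle_packet_in(switch_id, packet_in)   # flow setup decision, FlowMod, ...
        q.task_done()


def dispatch(switch_id, packet_in):
    """Hash messages by switch: one switch is always served by the same thread,
    which preserves per-switch ordering while all cores stay busy."""
    queues[hash(switch_id) % NUM_WORKERS].put((switch_id, packet_in))


for q in queues:
    threading.Thread(target=worker, args=(q,), daemon=True).start()
```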
An important study that investigates the impact of multi-threading on performance is given in [10]. The authors perform a comparison between different controllers in terms of performance, especially throughput. In fact, two major factors distinguish controllers: the first is the algorithm used to distribute incoming messages between threads, and the second is the mechanism or libraries used for network interaction. Using different numbers of threads (from 1 up to 12) shows that single-threaded controllers (NOX and Ryu) are very limited regarding throughput because they cannot handle a large number of flows, while the Beacon controller, which is multi-threaded, can handle a large number of flows per second. Since the Python controllers (POX and Ryu) do not support multi-threading, they show no scalability regardless of the number of CPU cores. Maestro's scalability is limited to 8 cores, as the controller does not run with more than 8 threads. The performance of Floodlight increases steadily with the number of cores. Finally, Beacon achieves a throughput of nearly 7 million flows per second and shows the best scalability.
From the previous studies, we conclude that the characteristics and performance of SDN controllers may be very different, so it is important to carefully choose the OpenFlow controller.
At the end of this section, we give in Table I a summary of some well-known controllers and their characteristics. In particular, we specify the OpenFlow versions that they support, their throughput, whether they are multi-threaded, and whether they can be distributed (distributed controllers will be considered in the next section). We also indicate whether each controller supports OpenStack, an open source platform for creating and managing public or private clouds.
V. PERFORMANCE IMPROVEMENT USING MULTIPLE CONTROLLERS
With the application of SDN/OpenFlow in large networks, the controller could become a performance bottleneck due to the large amount of incoming flow requests. To prevent bottlenecks, and more generally to improve performance, architectures with distributed controllers have been proposed for SDN networks. In this section, we describe various design options for a control plane with multiple controllers.
TABLE I. SDN CONTROLLERS' CHARACTERISTICS

Characteristic            NOX                         POX       Ryu                  Floodlight                  ODL
Language                  C++                         Python    Python               Java                        Java
Developer                 Nicira                      Nicira    NTT, OSRG group      BigSwitch                   Linux Foundation
OpenFlow versions         1.0 (CPQD: 1.1, 1.2, 1.3)   1.0       1.0, 1.1, 1.3, 1.4   1.0 (Floodlight plus: 1.3)  1.0, 1.3
Throughput (kflows/s)     30                          30        33                   1500                        –
Multi-threaded            No                          No        No                   Yes                         Yes
OpenStack support         No                          No        Yes                  Yes                         Yes
Distributed               No                          No        Yes                  Yes                         Yes
Learning curve            Moderate                    Easy      Moderate             Steep                       Steep
A. Number and Location of Controllers
Using multiple controllers offers many advantages, but it also raises several design questions, such as the number of controllers needed in a given topology and their placement. In [19], the authors focus on WAN deployments where latency dominates. The best controller placement relies on the propagation latency, which bounds the control reactions that can be executed at reasonable speed and stability with a remote controller. They conclude that there are no placement rules that apply to every network, and that the number of controllers depends on the network topology and the choice of metric. The authors of [10] focus on the reliability of the network, by measuring the number of failures during long-term testing under a given workload profile. The experiments show that most of the controllers successfully cope with the test load, although two controllers, MuL and Maestro, start to drop PacketIn messages after several minutes of work.
The controller responsiveness is the primary factor in deciding whether additional controllers should be deployed. For wide-area SDN deployments, multiple controllers are often required, and the placement of these controllers influences every aspect of an SDN. The authors of [20] search for the most reliable controller placements for SDNs. They first present a novel metric that reflects the reliability of the SDN control network, namely the expected percentage of valid control paths when network failures happen. They show that the placement of controllers should be carefully chosen and depends on the specific algorithm used, and that a greedy algorithm provides solutions close to optimal. Another similar study [21] attempts to optimize the reliability of the SDN control network. The authors define the reliability metric as above, where the control path loss is the number of broken control paths due to network failures. The optimization target is then to minimize the expected percentage of control path loss. To evaluate reliability, they use a brute-force algorithm: they measure the cost of each placement of K controllers and keep the best one. They conclude that using too few controllers reduces reliability; however, for all topologies, past a certain number of controllers, adding more controllers has an adverse effect. Hence the best number of controllers K lies in the range [0.035N, 0.117N], where N is the number of nodes.
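The brute-force evaluation used in such placement studies can be pictured as enumerating every placement of K controllers and scoring each one. The sketch below scores placements by the average switch-to-controller propagation latency (the metric emphasized in [19]); it is only an illustration of the search with made-up latencies, not the authors' code or their reliability metric.

```python
from itertools import combinations


def best_placement(nodes, latency, k):
    """nodes: list of node ids; latency[u][v]: propagation latency between u and v;
    k: number of controllers to place. Brute-force search (viable for small topologies)."""
    best, best_cost = None, float('inf')
    for placement in combinations(nodes, k):
        # Each switch is assumed to attach to its nearest controller.
        cost = sum(min(latency[s][c] for c in placement) for s in nodes) / len(nodes)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost


# Tiny example: a 4-node line topology with hypothetical latencies in ms.
lat = {0: {0: 0, 1: 5, 2: 10, 3: 15},
       1: {0: 5, 1: 0, 2: 5, 3: 10},
       2: {0: 10, 1: 5, 2: 0, 3: 5},
       3: {0: 15, 1: 10, 2: 5, 3: 0}}
print(best_placement([0, 1, 2, 3], lat, k=2))   # -> ((0, 2), 2.5)
```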
Recently, proposals have been made to physically distribute controllers. We present next two main classes of solutions that aim to avoid having a Single Point Of Failure (SPOF) for the entire network while allowing a scalable architecture. The first category proposes a physically distributed control plane that remains logically centralized, by synchronizing all network information and balancing the load among several controllers. The second category suggests a logically distributed control plane, where each controller manages its own domain, distributes useful information to the other instances within the cluster, and communicates if necessary with the neighboring domains.
B. Logically Centralized Control Plane
This kind of solution focuses on improving the performance of SDN networks by sharing the load between controllers. Each controller must synchronize all the information about its portion of the network. These solutions are well suited to data centers, where the controller instances share a huge amount of information to ensure fine-grained network-wide consistency. The first attempt that suggests using a physically distributed control plane is HyperFlow [22]. In this project, all controllers synchronize their network-wide view using a distributed file system. To facilitate cross-controller communication, they employ publish/subscribe messaging. As a result, HyperFlow can handle more flow events while keeping the flow setup latency minimal. With the same concept, Onix [23] provides a distributed system which runs on a cluster. It is responsible for giving the control logic programmatic access to the network, and it distributes the network state to the other instances within the cluster. The Onix team identifies four components in a network controlled by Onix: the physical infrastructure (switches and routers), the connectivity infrastructure (control channel), Onix itself, and the control logic. Similarly, Kandoo [24] introduces a hierarchical controller distribution based on two layers of control plane: the bottom layer contains a group of local controllers that manage local applications with no knowledge of the network-wide state, and a centralized root controller forms the top layer, whose role is to determine the desired network behavior and to run non-local applications. As a result, they reduce the number of events that reach the control plane of the network. All these solutions are physically distributed but logically centralized; they propose different architectures and layerings of an SDN network, and they offer a simplified central view of the network and decrease the look-up overhead by enabling communication with local controllers. However, they are not adapted to large networks with several Autonomous Systems (AS), and they require extensive traffic between controllers to keep a network-wide view.
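HyperFlow's cross-controller synchronization relies on publish/subscribe channels. The sketch below only illustrates that pattern with an in-memory broker and hypothetical event names; it is not the HyperFlow implementation, which uses a distributed file system as its messaging substrate.

```python
from collections import defaultdict


class Broker:
    """Toy publish/subscribe bus standing in for the messaging layer that
    controller instances use to exchange network events."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)


class ControllerInstance:
    def __init__(self, name, broker):
        self.name = name
        self.broker = broker
        self.view = {}                                  # local copy of the network-wide view
        broker.subscribe('link_event', self.on_remote_event)

    def local_link_up(self, link):
        self.view[link] = 'up'
        # Propagate the local state change so every instance converges.
        self.broker.publish('link_event', {'link': link, 'state': 'up', 'src': self.name})

    def on_remote_event(self, event):
        self.view[event['link']] = event['state']


bus = Broker()
c1, c2 = ControllerInstance('c1', bus), ControllerInstance('c2', bus)
c1.local_link_up(('s1', 's2'))
print(c2.view)   # {('s1', 's2'): 'up'}: c2 has learned the event published by c1
```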
C. Logically Distributed Control Plane
Another approach to using multiple controllers is to logically distribute the control plane. Thales Communications proposes logically DIstributed SDN COntrollers (DISCO) [25]: DISCO controllers administrate their own network domain and communicate with each other using a unique manageable control channel called an agent. Hence, DISCO provides an open distributed control plane for multi-domain networks based on a unique message-oriented communication. The key idea behind DISCO is the separation of the control plane into two parts: the intra-domain part, which gathers the main functionalities of the controllers, and the inter-domain part, which manages the communication with other controllers by sending the selected agent types necessary for reservation, topology state and disruption operations. This solution has been tested in inter-domain topology interruption, end-to-end priority service request and VM migration
cases. For instance, the VM migration process can be made feasible using a Reachability agent between domains.
Currently, the IETF is working on an east-west protocol called SDNi to achieve interconnection between SDN controllers. In this project, the authors describe the interfaces for exchanging information among multiple SDN domains in order to synchronize network databases and coordinate their decisions (Figure 3).
Figure 3. Distributed architecture of SDN networks
Like any distributed architecture, each controller collects the local physical state and exchanges its network information with neighboring controllers in order to distribute its domain state to the other controller domains. The main advantage of this type of solution is its suitability for large distributed networks that contain several ASes, such as the Internet. However, many open questions remain, such as the communication protocol between controllers, the synchronization cost, and AS policy agreements. In this context, we are working on a special interface that could answer some of the above questions. This interface is implemented between SDN controllers and shares useful information based on the communication modes of each controller in the network. These modes depend on the desired behavior of the controller, its critical position, and its performance.
VI. PERFORMANCE EVALUATION TECHNIQUES OF
OPENFLOW NETWORKS
The three general techniques to evaluate the performance of a network are analytical models, simulations, and measurements on the physical network or on an experimentation platform. In this section, we give an overview of the techniques that have effectively been used in the particular case of OpenFlow networks. All three techniques have been applied in this context, but the most commonly used approach is to rely on a simulation or emulation tool.
A. Analytical Models
To the best of our knowledge, there are very few performance studies of SDN/OpenFlow networks based on analytical models. We mention here two works. The first, based on queueing theory [26], allows evaluating the packet drop probability and the packet delay in the system; however, the model is quite basic and does not capture all the complexity of the SDN architecture. The second work [27] is based on the network calculus formalism and allows analyzing the delay and queue size boundaries of SDN switches and controllers.
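To give a flavor of the queueing-theoretic approach, a controller that receives flow requests at rate λ and serves them at rate μ can, under Markovian assumptions, be approximated by an M/M/1 queue. The formulas below are the standard M/M/1 results and are only a simplified illustration of the kind of reasoning used in [26], whose model is more elaborate:

```latex
% lambda: flow-request arrival rate, mu: controller service rate, rho = lambda/mu < 1
\rho = \frac{\lambda}{\mu}, \qquad
E[T] = \frac{1}{\mu - \lambda} \ \text{(mean flow-setup sojourn time)}, \qquad
E[N] = \frac{\rho}{1 - \rho} \ \text{(mean number of pending requests)}
```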
B. Experimentation Platforms
In order to help researchers test and validate new mechanisms and applications in the SDN/OpenFlow domain, many experimentation platforms with real hardware have been developed, such as GENI [28] in the USA, AKARI [29] in Japan, and FEDERICA [30], NOVI and OFELIA [31] in Europe. For example, the OFELIA community has created a real-world experimental networking substrate and provides external experimenters with several island facilities (i2CAT, IBBT, ETHZ) for free, where each island supplies different OpenFlow switch and server capabilities. Performance metrics measured on this kind of large testbed are close to reality and give a good idea of the behavior of the studied system before eventually moving to the stage of industrialization. Many projects have been tested on these platforms, such as the OpenFlow protocol [32] on GENI and data plane performance [11] on FEDERICA.
C. Simulation and Emulation Tools
Several simulation and emulation tools have been developed to implement OpenFlow-based networks on a single machine and to test new applications. For example, Mininet [33], an SDN emulation environment developed at Stanford University, can be used to deploy a virtual network and estimate performance metrics for various network topologies and sizes. Another option is to use the NS3 [34] network simulator, which supports OpenFlow within its environment.
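For instance, a few lines of Mininet's Python API suffice to emulate a small OpenFlow network attached to an external controller. The sketch below assumes a controller is already listening on 127.0.0.1:6633 and that Mininet is run with root privileges; it uses standard Mininet classes.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

# One OpenFlow switch with three hosts, driven by an external controller
# (e.g. Ryu or Floodlight) assumed to listen on 127.0.0.1:6633.
topo = SingleSwitchTopo(k=3)
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
net.start()
net.pingAll()                                  # reachability test; also triggers flow setups
h1, h2 = net.get('h1', 'h2')
print(h1.cmd('ping -c 3 %s' % h2.IP()))        # per-flow latency as seen by the hosts
net.stop()
```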
For the performance analysis of OpenFlow switches, an open framework called OFLOPS [35] has been developed. It permits the development of tests for OpenFlow switches (such as CPU utilization and packet counters) and helps determine the bottlenecks between the switch and the remote control application. On the other hand, for the performance analysis of controllers, an OpenFlow controller benchmarker called Cbench [36] has also been developed. Cbench emulates a set of OpenFlow switches connected to a controller and then computes performance metrics such as the throughput, response time, and latency of an SDN controller. Another tool, OpenSketch [37], proposes a software-defined traffic measurement architecture; it collects measurement information
from the data plane and the control plane. In the data plane, OpenSketch provides a simple three-stage pipeline (hashing, filtering, and counting), while in the control plane it provides a measurement library that automatically configures the pipeline and allocates resources for the different measurement tasks.
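OpenSketch's counting stage builds on compact sketch data structures. To give an idea of how a hashing-and-counting stage can summarize flows in a small, fixed amount of memory, here is a minimal count-min sketch in Python; it is our own illustration of the underlying idea, not OpenSketch's data-plane pipeline.

```python
import hashlib


class CountMinSketch:
    """Approximate per-flow counters in fixed memory: every update increments
    one counter per row; a query returns the minimum over the rows."""
    def __init__(self, rows=4, cols=1024):
        self.rows, self.cols = rows, cols
        self.table = [[0] * cols for _ in range(rows)]

    def _index(self, row, key):
        digest = hashlib.sha256(f'{row}:{key}'.encode()).hexdigest()
        return int(digest, 16) % self.cols

    def update(self, flow_key, count=1):
        for r in range(self.rows):
            self.table[r][self._index(r, flow_key)] += count

    def estimate(self, flow_key):
        return min(self.table[r][self._index(r, flow_key)] for r in range(self.rows))


cms = CountMinSketch()
for _ in range(42):
    cms.update('10.0.0.1->10.0.0.2:80')
print(cms.estimate('10.0.0.1->10.0.0.2:80'))   # 42 here; in general an overestimate
```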
VII. CONCLUSION
In this paper, we have attempted to highlight the most important studies that provide solutions to improve the performance of an SDN network, for both the data plane and the control plane. In fact, since the emergence of Software Defined Networks, several works have focused on their performance. This does not mean that they pose more performance problems than traditional networks; it rather reflects the great interest in this new technology, which allows programmability and easy management of networks.
Through the different studies presented, we note that the CPU processing rate, the throughput, and the processing delays are the key factors that determine the performance of the data plane. We also find that the control plane, which is the core layer of the SDN architecture, has received much attention regarding performance, and recent controllers are becoming more and more efficient. However, we think that multiple-controller deployments, especially for WANs, require deeper analysis and more investigation:
The impact of coordination tasks across multiple controllers on performance, scalability and elasticity must be studied in depth.
New data sharing mechanisms are needed to allow controllers to share information about their networks more completely, with more useful information such as the traffic load of neighboring domains.
End-to-end Quality of Service (QoS) must be ensured in distributed architectures.
The SDN paradigm and the OpenFlow protocol are advantageous for WAN networks, as shown by Google, which has deployed centralized Traffic Engineering using SDN/OpenFlow in its inter-data-center network [38]. This may be an important future research direction.
REFERENCES
[1] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N.
McKeown, and S. Shenker, “NOX: towards an operating
system for networks,” ACM SIGCOMM Comput. Commun.
Rev., vol. 38, no. 3, pp. 105–110, 2008.
[2] “Floodlight OpenFlow Controller -Project Floodlight.”
[Online]. Available:
http://www.projectfloodlight.org/floodlight/. [Accessed:
27-Apr-2015].
[3] “OpenDaylight - An Open Source community and
Meritocracy for Software-Defined Networking.”
[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K.
Obraczka, and T. Turletti, “A Survey of Software-Defined
Networking: Past, Present, and Future of Programmable
Networks,” IEEE Commun. Surv. Tutor., vol. 16, no. 3, pp.
1617–1634, Third 2014.
[5] W. Xia, Y. Wen, C. H. Foh, D. Niyato, and H. Xie, “A
Survey on Software-Defined Networking,” IEEE Commun.
Surv. Tutor., vol. 17, no. 1, pp. 27–51, Firstquarter 2015.
[6] D. Kreutz, F. M. Ramos, P. Esteves Verissimo, C. Esteve
Rothenberg, S. Azodolmolky, and S. Uhlig, “Software-
defined networking: A comprehensive survey,” Proc. IEEE,
vol. 103, no. 1, pp. 14–76, 2015.
[7] A. Lara, A. Kolasani, and B. Ramamurthy, “Network
Innovation using OpenFlow: A Survey,” IEEE Commun.
Surv. Tutor., vol. 16, no. 1, pp. 493–512, First 2014.
[8] A. Tootoonchian, S. Gorbunov, Y. Ganjali, M. Casado,
and R. Sherwood, “On Controller Performance in
Software-defined Networks,” in Proceedings of the 2Nd
USENIX Conference on Hot Topics in Management of
Internet, Cloud, and Enterprise Networks and Services,
Berkeley, CA, USA, 2012, pp. 10–10.
[9] K. Phemius and M. Bouet, “OpenFlow: Why latency does
matter,” in 2013 IFIP/IEEE International Symposium on
Integrated Network Management (IM 2013), 2013, pp.
680–683.
[10] A. Shalimov, D. Zuikov, D. Zimarina, V. Pashkov, and R.
Smeliansky, “Advanced Study of SDN/OpenFlow
Controllers,” in Proceedings of the 9th Central &
Eastern European Software Engineering Conference in
Russia, New York, NY, USA, 2013, pp. 1:1–1:6.
[11] A. Bianco, R. Birke, L. Giraudo, and M. Palacin,
“OpenFlow Switching: Data Plane Performance,” in 2010
IEEE International Conference on Communications (ICC),
2010, pp. 1–5.
[12] V. Tanyingyong, M. Hidell, and P. Sjodin, “Improving PC-
based OpenFlow switching performance,” in 2010
ACM/IEEE Symposium on Architectures for Networking
and Communications Systems (ANCS), 2010, pp. 1–2.
[13] J. C. Mogul and P. Congdon, “Hey, You Darned Counters!:
Get off My ASIC!,” in Proceedings of the First Workshop
on Hot Topics in Software Defined Networks, New York,
NY, USA, 2012, pp. 25–30.
[14] A. Sivaraman, K. Winstein, S. Subramanian, and H.
Balakrishnan, “No Silver Bullet: Extending SDN to the
Data Plane,” in Proceedings of the Twelfth ACM Workshop
on Hot Topics in Networks, New York, NY, USA, 2013, pp.
19:1–19:7.
[15] G. Lu, R. Miao, Y. Xiong, and C. Guo, “Using CPU As a
Traffic Co-processing Unit in Commodity Switches,” in
Proceedings of the First Workshop on Hot Topics in
Software Defined Networks, New York, NY, USA, 2012, pp.
31–36.
[16] Y. Luo, P. Cascon, E. Murray, and J. Ortega, “Accelerating
OpenFlow Switching with Network Processors,” in
Proceedings of the 5th ACM/IEEE Symposium on
Architectures for Networking and Communications
Systems, New York, NY, USA, 2009, pp. 70–71.
[17] R. Sherwood, M. Chan, A. Covington, G. Gibb, M. Flajslik,
N. Handigol, T.-Y. Huang, P. Kazemian, M. Kobayashi, J.
Naous, S. Seetharaman, D. Underhill, T. Yabe, K.-K. Yap,
Y. Yiakoumis, H. Zeng, G. Appenzeller, R. Johari, N.
McKeown, and G. Parulkar, “Carving Research Slices out
of Your Production Networks with OpenFlow,”
SIGCOMM Comput Commun Rev, vol. 40, no. 1, pp. 129–
130, Jan. 2010.
[18] D. Drutskoy, E. Keller, and J. Rexford, “Scalable Network
Virtualization in Software-Defined Networks,” IEEE
Internet Comput., vol. 17, no. 2, pp. 20–27, Mar. 2013.
[19] B. Heller, R. Sherwood, and N. McKeown, “The
Controller Placement Problem,” in Proceedings of the
First Workshop on Hot Topics in Software Defined
Networks, New York, NY, USA, 2012, pp. 7–12.
[20] Y. Hu, W. Wang, X. Gong, X. Que, and S. Cheng, “On the placement of controllers in software-defined
networks,” J. China Univ. Posts Telecommun., vol. 19,
Supplement 2, pp. 92–171, Oct. 2012.
[21] Y. Hu, W. Wendong, X. Gong, X. Que, and C. Shiduan,
“Reliability-aware controller placement for Software-
Defined Networks,” in 2013 IFIP/IEEE International
Symposium on Integrated Network Management (IM 2013),
2013, pp. 672–675.
[22] A. Tootoonchian and Y. Ganjali, “HyperFlow: A
distributed control plane for OpenFlow,” in Proceedings of
the 2010 internet network management conference on
Research on enterprise networking, 2010, pp. 3–3.
[23] T. Koponen, M. Casado, N. Gude, J. Stribling, L.
Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T.
Hama, and S. Shenker, “Onix: A Distributed Control
Platform for Large-scale Production Networks,” in
Proceedings of the 9th USENIX Conference on Operating
Systems Design and Implementation, Berkeley, CA, USA,
2010, pp. 1–6.
[24] S. Hassas Yeganeh and Y. Ganjali, “Kandoo: A
Framework for Efficient and Scalable Offloading of
Control Applications,” in Proceedings of the First
Workshop on Hot Topics in Software Defined Networks,
New York, NY, USA, 2012, pp. 19–24.
[25] K. Phemius, M. Bouet, and J. Leguay, “DISCO:
Distributed multi-domain SDN controllers,” in 2014 IEEE
Network Operations and Management Symposium (NOMS),
2014, pp. 1–4.
[26] K. Mahmood, A. Chilwan, O. N. Østerbø, and M. Jarschel,
“On the Modeling of OpenFlow-based SDNs: The Single
Node Case,” ArXiv Prepr. ArXiv14114733, 2014.
[27] S. Azodolmolky, R. Nejabati, M. Pazouki, P. Wieder, R.
Yahyapour, and D. Simeonidou, “An analytical model for
software defined networking: A network calculus-based
approach,” in 2013 IEEE Global Communications
Conference (GLOBECOM), 2013, pp. 1397–1402.
[28] “GENI.”
[29] H. Harai, “AKARI architecture design for new generation
network,” in Summer Topical Meeting, 2009. LEOSST ’09.
IEEE/LEOS, 2009, pp. 155–156.
[30] M. Campanella and F. Farina, “The FEDERICA
Infrastructure and Experience,” Comput Netw, vol. 61, pp.
176–183, Mar. 2014.
[31] “Ofelia - Home.” [Online]. Available: http://www.fp7-
ofelia.eu/. [Accessed: 24-Jun-2015].
[32] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar,
L. Peterson, J. Rexford, S. Shenker, and J. Turner,
“OpenFlow: enabling innovation in campus networks,”
ACM SIGCOMM Comput. Commun. Rev., vol. 38, no. 2,
pp. 69–74, 2008.
[33] B. Lantz, B. Heller, and N. McKeown, “A Network in a
Laptop: Rapid Prototyping for Software-defined Networks,”
in Proceedings of the 9th ACM SIGCOMM Workshop on
Hot Topics in Networks, New York, NY, USA, 2010, pp.
19:1–19:6.
G. J. Carneiro, “NS3.”
[35] C. Rotsos, N. Sarrar, S. Uhlig, R. Sherwood, and A. W.
Moore, “OFLOPS: An Open Framework for Openflow
Switch Evaluation,” in Proceedings of the 13th
International Conference on Passive and Active
Measurement, Berlin, Heidelberg, 2012, pp. 85–95.
[36] R. Sherwood and Y. Kok-Kiong, “Cbench: an OpenFlow controller benchmarker,” 2010.
[37] M. Yu, L. Jose, and R. Miao, “Software Defined Traffic
Measurement with OpenSketch,” in Proceedings of the
10th USENIX Conference on Networked Systems Design
and Implementation, Berkeley, CA, USA, 2013, pp. 29–42.
[38] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A.
Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, J. Zolla,
U. Hölzle, S. Stuart, and A. Vahdat, “B4: Experience with
a Globally-deployed Software Defined Wan,” in
Proceedings of the ACM SIGCOMM 2013 Conference on
SIGCOMM, New York, NY, USA, 2013, pp. 3–14.
Fouad Benamrane received his B.Sc. degree (2009) in physics
at the Faculty of Sciences of Rabat (FSR), Morocco, and his
M.Sc. degree (2011) in telecommunication sciences at the
National School of Applied Sciences of Fez, Morocco (ENSAF).
He is currently a Ph.D. candidate at the FSR. His current
research focuses on the performance and scalability of software-
defined networks and the OpenFlow protocol.
Mouad Ben Mamoun is currently a Professor in the Department of Computer Science at Mohammed V University, Rabat, Morocco. He received his M.Sc. and Ph.D. degrees in Computer Science from the University of Versailles, France, in 1998 and 2002, respectively. His research interests concern the performance evaluation of networks.
Redouane Benaini has been an Assistant Professor in Computer Networking at Mohammed V University, Rabat, Morocco, since 2006. Before that, he served as a Temporary Assistant Professor at Marne-la-Vallée University in Paris and at the National School of Engineers in Bourges, France. He received a Ph.D. degree in Computer Science from TELECOM SudParis in 2005. His main activities are focused on network protocols.