Network Virtualization and Software Defined Networking for Cloud Computing: A Survey

Raj Jain and Subharthi Paul, Washington University

ABSTRACT

Network virtualization is the key to the current and future success of cloud computing. In this article, we explain key reasons for virtualization and briefly explain several of the networking technologies that have been developed recently or are being developed in various standards bodies. In particular, we explain software defined networking, which is the key to network programmability. We also illustrate SDN's applicability with our own research on OpenADN — application delivery in a multi-cloud environment.
IEEE Communications Magazine • November 2013
0163-6804/13/$25.00 © 2013 IEEE

This work was supported in part by a grant from the Cisco University Research Program and NSF CISE Grant #1249681.
INTRODUCTION
The Internet has resulted in virtualization of all
aspects of our life. Today, our workplaces are
virtual, we shop virtually, get virtual education,
entertainment is all virtual, and of course, much
of our computing is virtual. The key enabler for
all virtualizations is the Internet and various
computer networking technologies. It turns out
that computer networking itself has to be virtual-
ized. Several new standards and technologies
have been developed for network virtualization.
This article is a survey of these technologies.
WHY VIRTUALIZE?
There are many reasons why we need to virtualize
resources. The five most common reasons are:
1. Sharing: When a resource is too big for a
single user, it is best to divide it into multi-
ple virtual pieces, as is the case with today’s
multi-core processors. Each processor can
run multiple virtual machines (VMs), and
each machine can be used by a different
user. The same applies to high-speed links
and large-capacity disks.
2. Isolation: Multiple users sharing a resource
may not trust each other, so it is important
to provide isolation among users. Users
using one virtual component should not be
able to monitor the activities or interfere
with the activities of other users. This may
apply even if different users belong to the
same organization since different depart-
ments of the organization (e.g., finance and
engineering) may have data that is confi-
dential to the department.
3. Aggregation: If individual resources are too small, it is possible to combine many of them into a virtual resource that behaves like one large resource.
This is the case with storage, where a large
number of inexpensive unreliable disks can
be used to make up large reliable storage.
4. Dynamics: Often resource requirements
change fast due to user mobility, and a way
to reallocate the resource quickly is
required. This is easier with virtual
resources than with physical resources.
5. Ease of management: Last but probably the
most important reason for virtualization is
the ease of management. Virtual devices
are easier to manage because they are soft-
ware-based and expose a uniform interface
through standard abstractions.
VIRTUALIZATION IN COMPUTING
Virtualization is not a new concept to computer
scientists. Memory was the first among the com-
puter components to be virtualized. Memory was
an expensive part of the original computers, so
virtual memory concepts were developed in the
1970s. Study and comparison of various page
replacement algorithms was a popular research
topic then. Today’s computers have very sophis-
ticated and multiple levels of caching for memo-
ry. Storage virtualization was a natural next step
with virtual disks, virtual compact disk (CD)
drives, leading to cloud storage today. Virtual-
ization of desktops resulted in thin clients, which
resulted in significant reduction of capital as well
as operational expenditure, eventually leading to
virtualization of servers and cloud computing.
Computer networking is the plumbing of
computing, and like plumbing in all beautiful
buildings, networking is the key to many of the
features offered by new computing architectures.
Virtualization in networking is also not a new
concept. Virtual channels in X.25-based telecom-
munication networks and all subsequent net-
works allow multiple users to share a large
physical channel. Virtual local area networks
(VLANs) allow multiple departments of a com-
pany to share a physical LAN with isolation.
Similarly, virtual private networks (VPNs) allow
companies and employees to use public net-
works with the same level of security they enjoy
in their private networks.
However, there has been significant renewed
interest in network virtualization fueled primari-
ly by cloud computing. Several new standards
have been developed and are being developed.
Software defined networking (SDN) also helps
in network virtualization. These recent standards
and SDN are the topics of this article.
We discuss several recent network virtualiza-
tion technologies. Software defined networking
is discussed in detail. Our own research on open
application delivery using SDN is described.
Finally, a summary follows.
NETWORK VIRTUALIZATION
A computer network starts with a network inter-
face card (NIC) in the host, which is connected
to a layer 2 (L2) network segment (Ethernet, WiFi, etc.). Several L2 network segments may be
interconnected via switches (a.k.a. bridges) to
form an L2 network, which is one subnet in a
layer 3 (L3) network (IPv4 or IPv6). Multiple L3
networks are connected via routers (a.k.a. gate-
ways) to form the Internet. A single data center
may have several L2/L3 networks. Several data
centers may be interconnected via L2/L3 switch-
es. Each of these network components — NIC,
L2 network, L2 switch, L3 networks, L3 routers,
data centers, and the Internet — needs to be vir-
tualized. There are multiple, often competing,
standards for virtualization of several of these
components. Several new ones are being devel-
oped.
When a VM moves from one subnet to anoth-
er, its IP address must change, which compli-
cates routing. It is well known that IP addresses
are both locators and system identifiers, so when
a system moves, its L3 identifier changes. In
spite of all the developments of mobile IP, it is
significantly simpler to move systems within one
subnet (within one L2 domain) than between
subnets. This is because the IEEE 802 addresses
used in L2 networks (both Ethernet and WiFi)
are system identifiers (not locators) and do not
change when a system moves. Therefore, when a
network connection spans multiple L2 networks
via L3 routers, it is often desirable to create a
virtual L2 network that spans the entire network.
In a loose sense, several IP networks together
appear as one Ethernet network.
VIRTUALIZATION OF NICS
Each computer system needs at least one L2
NIC (Ethernet card) for communication. There-
fore, each physical system has at least one physi-
cal NIC. However, if we run multiple VMs on
the system, each VM needs its own virtual NIC.
As shown in Fig. 1, one way to solve this problem is for the "hypervisor" software that provides processor virtualization to also implement as many virtual NICs (vNICs) as there are VMs.
These vNICs are interconnected via a virtual
switch (vSwitch) which is connected to the physi-
cal NIC (pNIC). Multiple pNICs are connected
to a physical switch (pSwitch). We use this nota-
tion of using p-prefix for physical and v-prefix
for virtual objects. In the figures, virtual objects
are shown by dotted lines, while physical objects
are shown by solid lines.
Virtualization of the NIC may seem straight-
forward. However, there is significant industry
Sidebar 1. Genesis of cloud computing.
Although discussions of providing computing as a utility have been around
for quite some time, the real physical implementation of cloud computing
came when Amazon announced the Elastic Compute Cloud (EC2) on August 25,
2006. The (unverified) folklore is that when Amazon’s CEO visited the compa-
ny data center, he was amazed by the number of computers. Since data cen-
ters, like most other computing facilities, are designed to avoid crashes when
overloaded, the normal utilization of systems is low. The Amazon CEO therefore asked for a way to manage the hardware in a programmatic manner, where all the management could be done easily and remotely using appli-
cation programming interfaces (APIs). This allowed them to rent out the
unused capacity; so began the computer rental business we now call cloud
computing. The concept was immediately successful since it relieved cus-
tomers of all the headaches of managing equipment that has to be continu-
ously updated to keep up with the latest technologies. Sharing an
underutilized resource is good for cloud service customers as well as for the
cloud service providers.
Figure 1. Three approaches to NIC virtualization.
competition. Different segments of the network-
ing industry have come up with competing stan-
dards. Figure 1 shows three different approaches.
The first approach, providing a software vNIC
via hypervisor, is the one proposed by VM soft-
ware vendors. This virtual Ethernet bridge
(VEB) approach has the virtue of being trans-
parent and straightforward. Its opponents point
out that there is significant software overhead,
and vNICs may not be easily manageable by
external network management software. Also,
vNICs may not provide all the features today’s
pNICs provide. So pNIC vendors (or pNIC chip
vendors) have their own solution, which provides
virtual NIC ports using single-root I/O virtualization (SR-IOV) on the peripheral component
interconnect (PCI) bus [1]. The switch vendors
(or pSwitch chip vendors) have yet another set
of solutions that provide virtual channels for
inter-VM communication using a virtual Ether-
net port aggregator (VEPA), which passes the
frames simply to an external switch that imple-
ments inter-VM communication policies and
reflects some traffic back to other VMs in the
same machine. IEEE 802.1Qbg [2] specifies both
VEB and VEPA.
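The behavioral difference between the two 802.1Qbg forwarding modes can be sketched in a few lines of Python. This is a toy model; the class names, port names, and return strings below are invented for illustration, not taken from any real vSwitch implementation:

```python
# Toy model contrasting the two IEEE 802.1Qbg forwarding modes.

class Veb:
    """Virtual Ethernet Bridge: the hypervisor's vSwitch switches
    inter-VM frames locally, never showing them to the pSwitch."""
    def __init__(self, local_vnics):
        self.local_vnics = set(local_vnics)

    def forward(self, dst_mac):
        return "local vSwitch" if dst_mac in self.local_vnics else "pNIC uplink"

class Vepa:
    """Virtual Ethernet Port Aggregator: every frame goes to the
    external pSwitch, which applies policy and may 'hairpin' the
    frame back to another VM on the same physical machine."""
    def __init__(self, local_vnics):
        self.local_vnics = set(local_vnics)

    def forward(self, dst_mac):
        # Even VM-to-VM traffic is sent out; the pSwitch reflects it back.
        return "pSwitch (hairpin)" if dst_mac in self.local_vnics else "pSwitch"

veb, vepa = Veb({"vm1", "vm2"}), Vepa({"vm1", "vm2"})
print(veb.forward("vm2"))   # inter-VM frame stays inside the hypervisor
print(vepa.forward("vm2"))  # same frame is reflected by the external switch
```

The trade-off is visible in the sketch: VEB keeps inter-VM traffic off the wire, while VEPA pays one round trip to the pSwitch in exchange for letting the external switch enforce inter-VM policies.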
VIRTUALIZATION OF SWITCHES
A typical Ethernet switch has 32–128 ports. The
number of physical machines that need to be
connected on an L2 network is typically much
larger than this. Therefore, several layers of
switches need to be used to form an L2 network.
IEEE Bridge Port Extension standard 802.1BR
[3], shown in Fig. 2, allows forming a virtual
bridge with a large number of ports using port
extenders that are simple relays and may be
physical or virtual (like a vSwitch).
VIRTUAL LANS IN CLOUDS
One additional problem in the cloud environ-
ment is that multiple VMs in a single physical
machine may belong to different clients and thus
need to be in different virtual LANs (VLANs).
As discussed earlier, each of these VLANs may
span several data centers interconnected via L3
networks, as shown in Fig. 3.
Again, there are a number of competing pro-
posals to solve this problem. VMware and sever-
al partner companies have proposed virtual
extensible LANs (VXLANs) [4]. Network virtu-
alization using generic routing encapsulation
(NVGRE) [5] and the Stateless Transport Tun-
neling (STT) protocol [6] are two other propos-
als being considered in the Network
Virtualization over L3 (NVO3) working group
of the Internet Engineering Task Force (IETF).
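As a concrete illustration, VXLAN (per RFC 7348) wraps each L2 frame in a UDP/IP packet carrying an 8-byte header whose 24-bit VXLAN Network Identifier (VNI) distinguishes tenants, lifting the 4094-ID limit of classic VLAN tags. A minimal sketch of just that header, using only the header layout from the RFC:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the I bit set,
    24 reserved bits, a 24-bit VNI, and 8 more reserved bits."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the VNI from a VXLAN header."""
    flags, vni_field = struct.unpack("!II", header)
    assert flags >> 24 == 0x08, "I flag must be set"
    return vni_field >> 8

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))  # 8 5000
```

The inner Ethernet frame would follow this header, and the outer IP addresses belong to the tunnel endpoints (VTEPs) rather than the VMs.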
VIRTUALIZATION FOR
MULTI-SITE DATA CENTERS
If a company has multiple data centers located
in different parts of a city, it may want to be able
to move its VMs anywhere in these data centers
quickly and easily. That is, it may want all its
VMs to be connected to a single virtual Ethernet
spanning all these data centers. Again, a medi-
um access control (MAC) over IP approach like
the ones proposed earlier may be used. Trans-
parent Interconnection of Lots of Links (TRILL)
[8], which was developed to allow a virtual LAN
to span a large campus network, can also be
used for this.
NETWORK FUNCTION VIRTUALIZATION
Standard multi-core processors are now so fast
that it is possible to design networking devices
using software modules that run on standard
processors. By combining many different func-
tional modules, any networking device — L2
switch, L3 router, application delivery controller,
and so on — can be composed cost effectively
and with acceptable performance. The Network
Function Virtualization (NFV) group of the
European Telecommunications Standards Insti-
tute (ETSI) is working on developing standards
to enable this [9].
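A toy sketch of the NFV idea (all function names and addresses below are invented for illustration): each network function becomes an ordinary software module, and a "device" is just a composition of such modules running on a standard processor:

```python
# Illustrative NFV-style composition: network functions as software modules.

def firewall(pkt):
    return None if pkt.get("dport") == 23 else pkt  # drop telnet traffic

def nat(pkt):
    if pkt and pkt["src"].startswith("10."):
        pkt = dict(pkt, src="203.0.113.1")          # rewrite private source
    return pkt

def compose(*functions):
    """Build a virtual device as a pipeline of function modules."""
    def device(pkt):
        for fn in functions:
            if pkt is None:     # a module dropped the packet
                break
            pkt = fn(pkt)
        return pkt
    return device

edge_device = compose(firewall, nat)
print(edge_device({"src": "10.0.0.5", "dport": 80}))  # NATed and forwarded
print(edge_device({"src": "10.0.0.5", "dport": 23}))  # None (dropped)
```

Swapping, adding, or reordering modules in `compose` is the software analogue of recabling hardware appliances, which is what makes composition cost effective.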
Figure 2. IEEE 802.1BR bridge port extension.
Sidebar 2. Growth of cloud computing.
The second recent development that is partly responsible for the growth of
cloud computing and is fueling a need for networking innovations is smart
phone apps. On June 29, 2007, Apple released the iPhone, later followed by the associated App Store. Although there were several generations of smart phones
before then, the app store was a marketing innovation that changed the
landscape for application developers. Today, all businesses including banks,
retail stores, and service providers have their own apps, and each of these
apps needs to serve a global audience. Cloud computing provides an easy
way for these application service providers to obtain computing services
worldwide. However, networking features required for application partition-
ing over multiple clouds owned by multiple cloud service providers are still
lacking. Hence, there is a need for virtualization of the Internet, as discussed
further in this article.
SOFTWARE DEFINED NETWORKING
Software defined networking is the latest revolu-
tion in networking innovations. All components
of the networking industry, including network
equipment vendors, Internet service providers,
cloud service providers, and users, are working
on or looking forward to various aspects of SDN.
This section provides an overview of SDN.
SDN consists of four innovations:
1. Separation of the control and data planes
2. Centralization of the control plane
3. Programmability of the control plane
4. Standardization of application programming interfaces (APIs)
Each of these innovations is explained briefly
below.
SEPARATION OF CONTROL AND DATA PLANE
Networking protocols are often arranged in
three planes: data, control, and management.
The data plane consists of all the messages that
are generated by the users. To transport these
messages, the network needs to do some house-
keeping work, such as finding the shortest path
using L3 routing protocols such as Open Short-
est Path First (OSPF) or L2 forwarding proto-
cols such as Spanning Tree. The messages used
for this purpose are called control messages and
are essential for network operation. In addition,
the network manager may want to keep track of
traffic statistics and the state of various network-
ing equipment. This is done via network man-
agement. Management, although important, is
different from control in that it is optional and is
often not done for small networks such as home
networks.
One of the key innovations of SDN is that
the control should be separated from the data
plane. The data plane consists of forwarding the
packets using the forwarding tables prepared by
the control plane. The control logic is separated
and implemented in a controller that prepares
the forwarding table. The switches implement
data plane (forwarding) logic that is greatly sim-
plified. This reduces the complexity and cost of
the switches significantly.
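A minimal sketch of this split, assuming a toy three-switch topology (this is not any real controller's API): the controller computes shortest-path forwarding tables centrally, and each switch's data plane reduces to a table lookup:

```python
from collections import deque

def compute_tables(adjacency):
    """Controller side: BFS shortest paths toward every destination;
    returns {switch: {dst: next_hop}}."""
    tables = {s: {} for s in adjacency}
    for dst in adjacency:
        frontier, prev = deque([dst]), {dst: None}
        while frontier:                      # BFS tree rooted at dst
            node = frontier.popleft()
            for nbr in adjacency[node]:
                if nbr not in prev:
                    prev[nbr] = node
                    frontier.append(nbr)
        for s in adjacency:
            if s != dst and s in prev:
                tables[s][dst] = prev[s]     # first hop back toward dst
    return tables

def forward(tables, switch, dst):
    """Data plane side: a switch just looks up the next hop."""
    return tables[switch].get(dst)

net = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
tables = compute_tables(net)
print(forward(tables, "A", "C"))  # A reaches C via B
```

All the path-finding logic lives in `compute_tables`; the switches need only the trivial `forward` lookup, which is why the data plane hardware can be simple and cheap.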
CENTRALIZATION OF THE CONTROL PLANE
The U.S. Department of Defense funded
Advanced Research Project Agency Network
(ARPAnet) research in the early 1960s to
counter the threat that the entire nationwide
communication system could be disrupted if the
telecommunication centers, which were highly
centralized and owned by a single company at
that time, were to be attacked. ARPAnet
researchers therefore came up with a totally dis-
tributed architecture in which the communica-
tion continues and packets find the path (if one
exists) even if many of the routers become non-
operational. Both the data and control planes
were totally distributed. For example, each
router participates in helping prepare the rout-
ing tables. Routers exchange reachability infor-
mation with their neighbors and neighbors’
neighbors, and so on. This distributed control
paradigm was one of the pillars of Internet
design and remained unquestioned until a few years
ago.
Centralization, which was considered a bad
thing until a few years ago, is now considered
good, and for good reason. Most organizations
and teams are run using centralized control. If
an employee falls sick, he/she simply calls the
boss, and the boss makes arrangements for the
work to continue in his/her absence. Now con-
sider what would happen in an organization that
is totally distributed. The sick employee, say
John, will have to call all his co-employees and
tell them that he is sick. They will tell other
employees that John is sick. This will take quite
a bit of time before everyone will know about
John’s sickness, and then everyone will decide
what, if anything, to do to alleviate the problem
until John recovers. This is quite inefficient, but
is how current Internet control protocols work.
Centralization of control makes sensing the state
and adjusting the control dynamically based on
state changes much faster than with distributed
protocols.
Of course, centralization has scaling issues
but so do distributed methods. For both cases,
we need to divide the network into subsets or
areas that are small enough to have a common
control strategy. A clear advantage of central-
ized control is that the state changes or policy
changes propagate much faster than in a totally
distributed system. Also, standby controllers can
be used to take over in case of failures of the
main controller. Note that the data plane is still
fully distributed.
Figure 3. Different virtual machines may be in different VLANs.
PROGRAMMABLE CONTROL PLANE
Now that the control plane is centralized in a
central controller, it is easy for the network man-
ager to implement control changes by simply
changing the control program. In effect, with a
suitable API, one can implement a variety of
policies and change them dynamically as the sys-
tem states or needs change.
This programmable control plane is the most
important aspect of the SDN. A programmable
control plane in effect allows the network to be
divided into several virtual networks that have
very different policies and yet reside on a shared
hardware infrastructure. Dynamically changing
the policy would be very difficult and slow with a
totally distributed control plane.
STANDARDIZED APIS
As shown in Fig. 4, SDN consists of a central-
ized control plane with a southbound API for
communication with the hardware infrastructure
and a northbound API for communication with
the network applications. The control plane can
be further subdivided into a hypervisor layer and
a control system layer. A number of controllers
are already available. Floodlight [10] is one
example. OpenDaylight [11] is a multi-company
effort to develop an open source controller. A
networking hypervisor called FlowVisor [12] that
acts as a transparent proxy between forwarding
hardware and multiple controllers is also avail-
able.
The main southbound API is OpenFlow [13],
which is being standardized by the Open Net-
working Foundation. A number of proprietary
southbound APIs also exist, such as OnePK [14]
from Cisco. These proprietary APIs are especially suitable for legacy equipment from the respective vendors. Some argue that a number of previously
existing control and management protocols, such
as Extensible Messaging and Presence Protocol
(XMPP), Interface to the Routing System
(I2RS), Software Driven Networking Protocol
(SDNP), Active Virtual Network Management
Protocol (AVNMP), Simple Network Management
Protocol (SNMP), Network Configuration (Net-
Conf), Forwarding and Control Element Separa-
tion (ForCES), Path Computation Element
(PCE), and Content Delivery Network Intercon-
nection (CDNI), are also potential southbound
APIs. However, given that each of these was
developed for another specific application, they
have limited applicability as a general-purpose
southbound control API.
Northbound APIs have not been standard-
ized yet. Each controller may have a different
programming interface. Until this API is stan-
dardized, development of network applications
for SDN will be limited. There is also a need for
an east-west API that will allow different con-
trollers from neighboring domains or in the
same domain to communicate with each other.
FLOW-BASED CONTROL
Over the last 30 years (since the standardization
of the first Ethernet standard), disk and memory
sizes have grown exponentially following Moore's
law, and so have the file sizes. The packet size,
however, has remained the same (approximately
1518-byte Ethernet frames). Therefore, much of
the traffic today consists of a sequence of pack-
ets rather than a single packet. For example, a
large file may require transmission of hundreds
of packets. Streaming media generally consists of
a stream of packets exchanged over a long peri-
od of time. In such cases, if a control decision is
made for the first packet of the flow, it can be
reused for all subsequent packets. Thus, flow-
based control significantly reduces the traffic
between the controller and the forwarding ele-
ment. The control information is requested by
the forwarding element when the first packet of
a flow is received and is used for all subsequent
packets of the flow. A flow can be defined by
any mask on the packet headers and the input
port from which the packet was received. A typi-
cal flow table entry is shown in Fig. 5. The con-
trol table entry specifies how to handle the
Figure 4. Software defined networking APIs. Applications (one per ASP) sit on top and use the northbound API to talk to the control plane (network controller software such as Floodlight or OpenDaylight, layered over FlowVisor), which in turn uses the southbound API (OpenFlow) to program the forwarding hardware in the data plane; an east-west API connects peer controllers.
packets with the matching header. It also con-
tains instructions about which statistics to collect
about the matching flows.
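The economics of flow-based control can be sketched as follows. This is a toy model, not the OpenFlow wire protocol: the switch consults the controller only on a table miss, so a 100-packet flow costs a single controller query:

```python
# Toy model of flow-based control: decide once per flow, not per packet.

class Controller:
    def __init__(self):
        self.queries = 0

    def decide(self, flow):
        self.queries += 1
        # Invented policy: drop telnet, forward everything else to port 1.
        return "drop" if flow[3] == 23 else "forward:1"

class Switch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}   # flow tuple -> cached action
        self.counters = {}     # per-flow packet counters

    def packet_in(self, flow):
        if flow not in self.flow_table:                 # table miss
            self.flow_table[flow] = self.controller.decide(flow)
        self.counters[flow] = self.counters.get(flow, 0) + 1
        return self.flow_table[flow]

ctrl = Controller()
sw = Switch(ctrl)
flow = ("10.0.0.1", "10.0.0.2", 40000, 80)   # src, dst, sport, dport
for _ in range(100):                          # a 100-packet flow
    sw.packet_in(flow)
print(ctrl.queries)  # 1 -- the controller was asked only once
```

The per-flow counters mirror the statistics a real flow table entry collects for the controller.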
SDN IMPACT AND FUTURE
The networking industry has shown enormous interest in SDN. SDN is expected to make the net-
works programmable and easily partitionable
and virtualizable. These features are required for
cloud computing where the network infra-
structure is shared by a number of competing
entities. Also, given the simplified data plane, the
forwarding elements are expected to be very
cheap standard hardware. Thus, SDN is expect-
ed to reduce both capital expenditure and opera-
tional expenditure for service providers, cloud
service providers, and enterprise data centers
that use lots of switches and routers.
SDN is like a tsunami that is taking over
other parts of the computing industry as well.
More and more devices are following the soft-
ware defined path with most of the logic imple-
mented in software over standard processors.
Thus, today we have software defined base sta-
tions, software defined optical switches, software
defined routers, and so on.
Regardless of what happens to current
approaches to SDN, it is certain that the net-
works of tomorrow will be more programmable
than today. Programmability will become a com-
mon feature of all networking hardware so that
a large number of devices can be programmed
(a.k.a. orchestrated) simultaneously. The exact
APIs that will become common will be decided
by transition strategies since billions of legacy
networking devices will need to be included in
any orchestration.
It must be pointed out that NFV and SDN
are highly complementary technologies. They are
not dependent on each other.
OPEN APPLICATION
DELIVERY USING SDN
While current SDN-based efforts are mostly
restricted to L3 and below (network traffic), it
may be extended to manage L3 and above appli-
cation traffic as well. Application traffic manage-
ment involves enforcing application deployment
and delivery policies on application traffic flows
that may be identified by the type of application,
application deployment context (application par-
titioning and replication, intermediary service
access for security, performance, etc.), user and
server contexts (load, mobility, failures, etc.),
and application QoS requirements. This is
required since delivering modern Internet-scale
applications has become increasingly complex
even inside a single private data center.
The application service may be replicated over
multiple hosts. Also, the service may be parti-
tioned for improved performance, with each par-
tition hosted on a different group of servers. A
service may be partitioned based on:
• Content: For example, even for the same service (e.g., videos.google.com), accounting messages, recommendation requests, and video requests are all sent to different server groups.
• Context: User context, network context, or server context may require the application messages to be routed differently.
An example of user context is a mobile smart
phone user vs. a desktop user. An example of
network context is the geographical location of
the user and the state of the network links. An
example of server context is the load on various
servers and whether they are up/down. Furthermore, most services split the user-server connection into multiple TCP segments, since accessing the service actually requires going through a sequence of middleboxes providing security (e.g., firewalls, IDS),
Sidebar 3. Technology hype.
Every new technology is like a new marriage. Before marriage, life is made of
dreams. Both sides think there is this other person who has all the right quali-
ties he/she needs, and that all his/her problems will be solved by this person.
After marriage both parties realize that not all their beliefs were correct. There
was a bit of hype. Similarly, all new technologies have a hype phase. This is
when a lot of money is invested in the technology. As a result, the best the
technology can do is developed. This is the time when researchers, startups,
and all vendors need to pay attention since this is the opportunity to make an
impact. This is when the fates of many companies are decided. Often, several
competing approaches to get the same effect are developed, and the one
requiring the least changes gets accepted. For example, asynchronous transfer
mode (ATM) promised to solve many problems in networking. The key feature
was guaranteed quality of service (QoS). Multiprotocol label switching (MPLS)
offered this feature without a major replacement of legacy architecture, and
survived while ATM went away.
Figure 5. OpenFlow table entries. Each entry consists of match fields (input port; L2 VLAN ID, source address, destination address, and type; L3 source address, destination address, and protocol; L4 source and destination ports and mask), a priority, counters (packet and byte counters), instructions (e.g., forward to port n, encapsulate and forward to the controller, drop, send to the normal processing pipeline, or modify fields), timeouts, and a cookie.
transformation/translation (e.g., transcoders,
data compression) and performance enhance-
ment (e.g., SSL off loaders, WAN optimizers)
functions to the service deployment. In general,
a user-server connection is no longer end-to-end;
it consists of many segments. Each of these seg-
ments can be served by multiple destinations
(based on replication, partitioning). The applica-
tion service providers (ASPs) therefore imple-
ment complex application policy routing (APR)
mechanisms inside their private data centers.
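A hypothetical sketch of such APR logic (the rule table and server-group names are invented, loosely following the videos.google.com example above): messages are first routed by content type and then adjusted by user context:

```python
# Invented application policy routing (APR) rules for illustration only.

CONTENT_RULES = {
    "accounting": "accounting-servers",
    "recommend":  "recommendation-servers",
    "video":      "video-servers",
}

def route(message_type, user_context):
    """Pick a server group: content rule first, then context adjustment."""
    group = CONTENT_RULES.get(message_type, "default-servers")
    # Context rule: mobile clients get the partition with transcoders.
    if group == "video-servers" and user_context == "mobile":
        group = "video-servers-mobile"
    return group

print(route("video", "mobile"))       # video-servers-mobile
print(route("accounting", "desktop")) # accounting-servers
```

A real APR engine would also consult network and server context (location, link state, load, liveness), but the two-stage shape stays the same.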
PROBLEM STATEMENT
Most applications now (including games on
smart phones) need to serve global audiences
and need servers located all around the world.
They can easily get computing and storage facili-
ties using cloud services from multiple cloud
providers distributed throughout the world.
However, routing according to ASPs' policies in a highly dynamic multi-cloud environment is not currently possible, since Internet service providers (ISPs) offer no service to dynamically route messages to a different server using an ASP's policies.
SOLUTION APPROACH
Our vision is to design a new session-layer
abstraction called Open Application Delivery
Network (OpenADN) [15] that allows ASPs to
express and enforce application traffic manage-
ment policies and application delivery con-
straints at the granularity of application messages
and packets. It allows them to achieve all the
application delivery services they use today in
private data centers in the global multi-cloud
environment. OpenADN is based on the stan-
dardized data plane, diversified control plane
design framework proposed by SDN. Using
OpenADN-aware data plane entities, ISPs can
offer application delivery services to ASPs.
To achieve this we combine the following six
innovations: OpenFlow, SDN, session splicing,
cross-layer communication, indirection, and MPLS-
like application flow labels (which we call APLS,
application label switching).
As shown in Fig. 6, OpenADN allows ASPs’
controllers to communicate with the ISP’s con-
troller and provide the ISP with their server
policies and server states so that the ISP’s con-
troller can program the control plane according-
ly. In addition to requiring a northbound API,
OpenADN also requires some extensions to the
southbound API — OpenFlow.
KEY FEATURES OF OPENADN
1. OpenADN takes network virtualization to the extreme of making the global Internet look like a single virtual data center to each ASP.
2. Proxies can be located anywhere on the global Internet. Of course, they should be located in proximity to users and servers for optimal performance.
3. Backward compatibility: legacy traffic can pass through OpenADN boxes, and OpenADN traffic can pass through legacy boxes.
4. No changes to the core Internet are necessary, since only some edge devices need to be OpenADN/SDN/OpenFlow-aware. The remaining devices and routers can remain legacy.
5. Incremental deployment can start with just a few OpenADN-aware OpenFlow switches.
6. There are economic incentives for first adopters: ISPs that deploy a few of these switches and the ASPs that use OpenADN benefit immediately from the technology.
7. ISPs keep complete control over their network resources, while ASPs keep complete control over their application data, which may be confidential and encrypted.
SUMMARY
The key messages of this article are:
1. Cloud computing is a result of advances in virtualization in computing, storage, and networking.
2. Network virtualization is still in its infancy. Numerous standards related to network virtualization have recently been developed in the IEEE and Internet Engineering Task Force (IETF), and several are still being developed.
3. One of the key recent developments in this direction is software defined networking. The key innovations of SDN are separation of the control and data planes, centralization of control, programmability, and standard southbound, northbound, and east-west APIs. These will allow a large number of devices to be easily orchestrated (programmed).
4. OpenFlow is the standard southbound API being defined by the Open Networking Foundation.
5. We are working on OpenADN, a network application based on SDN that enables application partitioning and delivery in a multi-cloud environment.
Figure 6. In OpenADN, ASPs' controllers convey their policies to an ISP's controller in the control plane. (The figure shows ASP 1's and ASP 2's controllers sending state and policies to the ISP's controller, which programs a mix of OpenADN-aware and legacy middleboxes in the ISP's network.)
REFERENCES
[1] PCI-SIG, "Single Root I/O Virtualization and Sharing 1.1 Specification," http://www.pcisig.com/members/downloads/specifications/iov/sr-iov1_1_20Jan10.pdf, available only to members.
[2] IEEE Std. 802.1Qbg-2012, "IEEE Standard for Local and Metropolitan Area Networks — Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks — Amendment 21: Edge Virtual Bridging," July 5, 2012, 191 pages, http://standards.ieee.org/getieee802/download/802.1Qbg-2012.pdf.
[3] R. Perlman et al., "Routing Bridges (RBridges): Base Protocol Specification," IETF RFC 6325, July 2011, 99 pages, http://tools.ietf.org/html/rfc6325.
[4] M. Sridharan et al., "NVGRE: Network Virtualization Using Generic Routing Encapsulation," IETF Draft draft-sridharan-virtualization-nvgre-03.txt, Aug. 2013, 17 pages, http://tools.ietf.org/html/draft-sridharan-virtualization-nvgre-03.
[5] M. Mahalingam et al., "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks," IETF Draft draft-mahalingam-dutt-dcops-vxlan-04.txt, May 8, 2013, 22 pages, http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-04.
[6] B. Davie, Ed., and J. Gross, "A Stateless Transport Tunneling Protocol for Network Virtualization (STT)," IETF Draft draft-davie-stt-03.txt, Mar. 12, 2013, 19 pages, http://tools.ietf.org/html/draft-davie-stt-03.
[7] IEEE Std. 802.1BR-2012, "IEEE Standard for Local and Metropolitan Area Networks — Virtual Bridged Local Area Networks — Bridge Port Extension," July 16, 2012, 135 pages, http://standards.ieee.org/getieee802/download/802.1BR-2012.pdf.
[8] T. Narten et al., "Problem Statement: Overlays for Network Virtualization," IETF Draft draft-ietf-nvo3-overlay-problem-statement-04, July 31, 2013, 24 pages, http://datatracker.ietf.org/doc/draft-ietf-nvo3-overlay-problem-statement/.
[9] ETSI, "NFV Whitepaper," Oct. 22, 2012, http://portal.etsi.org/NFV/NFV_White_Paper.pdf.
[10] Floodlight OpenFlow Controller, http://www.projectfloodlight.org/floodlight/.
[11] OpenDaylight, http://www.opendaylight.org/resources.
[12] FlowVisor Wiki, https://github.com/OPENNETWORKINGLAB/flowvisor/wiki.
[13] Open Networking Foundation, "OpenFlow Switch Specification, V1.3.2," Apr. 25, 2013, 131 pages, https://www.opennetworking.org/sdn-resources/onf-specifications/openflow.
[14] Cisco's One Platform Kit (onePK), http://www.cisco.com/en/US/prod/iosswrel/onepk.html.
[15] S. Paul and R. Jain, "OpenADN: Mobile Apps on Global Clouds Using OpenFlow and Software Defined Networking," 1st Int'l. Wksp. Management and Security Technologies for Cloud Computing, Dec. 7, 2012.
BIOGRAPHIES
RAJ JAIN [F] (jain@cse.wustl.edu) is a Fellow of ACM and AAAS, a winner of the ACM SIGCOMM Test of Time Award and the CDAC-ACCS Foundation Award 2009, and ranks among the top 100 in CiteSeerX's list of Most Cited Authors in Computer Science. He is currently a professor in the Department of Computer Science and Engineering at Washington University. Previously, he was one of the co-founders of Nayna Networks, Inc., a next-generation telecommunications systems company in San Jose, California. He was a senior consulting engineer at Digital Equipment Corporation in Littleton, Massachusetts, and then a professor of computer and information sciences at Ohio State University, Columbus. He is the author of Art of Computer Systems Performance Analysis, which won the 1991 Best Advanced How-to Book, Systems award from the Computer Press Association.

SUBHARTHI PAUL [S] (pauls@cse.wustl.edu) received his B.S. degree from the University of Delhi, India, and his Master's degree in software engineering from Jadavpur University, Kolkata, India. He is presently a doctoral student in the Department of Computer Science and Engineering at Washington University. His primary research interests are in the area of future Internet architectures.