Virtual Network Function Placement For Resilient
Service Chain Provisioning
Ali Hmaity, Marco Savi, Francesco Musumeci, Massimo Tornatore, Achille Pattavina
Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milan, Italy
E-mail: firstname.lastname@polimi.it
Abstract—Virtualization technologies are changing the way network operators deploy and manage Internet services. In particular, in this study we focus on the new Network Function Virtualization (NFV) paradigm, which consists in instantiating Virtual Network Functions (VNFs) on Commercial-Off-The-Shelf (COTS) hardware. By adopting NFV, network operators can dynamically instantiate Network Functions (NFs) based on current demands and network conditions, saving capital and operational costs. Typically, VNFs are concatenated in a sequential order to form Service Chains (SCs) that provide specific Internet services to the users. In this paper we study different approaches to provide resiliency of SCs against single-link and single-node failures. We propose three Integer Linear Programming (ILP) models to solve the VNF placement problem with VNF service chaining while guaranteeing resiliency against single-node/link, single-link and single-node failures. Moreover, we evaluate the impact of the latency requirements of SCs on the VNF distribution. We show that providing resiliency against both single-link and single-node failures requires activating twice the amount of resources in terms of nodes, and that for latency-critical services providing resiliency against single-node failures comes at the same cost as resiliency against both single-link and single-node failures.
I. INTRODUCTION
For network operators, offering new bandwidth-intensive and latency-constrained Internet services (e.g., cloud gaming or video streaming) is a challenging task due to the adoption of proprietary hardware appliances and the high cost of offering, maintaining and integrating these services. Network Functions Virtualization (NFV) is a new architectural paradigm that was proposed to improve the flexibility of network service provisioning and reduce the time to market of new services [1]. NFV can revolutionize how network operators design their infrastructure by leveraging virtualization to separate software instances from hardware appliances, and by decoupling functionalities from locations for faster service provisioning.
NFV supports the instantiation of Virtual Network Functions (VNFs) through software virtualization techniques and runs them on Commercial-Off-The-Shelf (COTS) hardware. Hence, the virtualization of network functions opens the way to the provisioning of new services without the installation of new equipment. It is clear that NFV brings a whole new dimension to the landscape of the telecommunication industry, thanks to the possibility of reducing capital investments and energy consumption by consolidating network functions, and of introducing tailored services based on customers' needs.
Moreover, NFV simplifies service deployment by exploiting
the concept of service chaining [2]: a Service Chain (SC) is a
sequential concatenation of VNFs and/or hardware appliances
to provide a specific Internet service (e.g., VoIP, Web Service,
etc.) to the users.
Before deploying NFV solutions in operational networks, several challenges regarding performance, availability, security and survivability need to be tackled. In this work we focus on SC resiliency against single-link/node failures. To the best of our knowledge, this is the first study to investigate the resiliency of SC provisioning. Our main objective is to model a survivable placement of VNFs while minimizing resources, in terms of the number of VNF instances placed in the network. We develop three different Integer Linear Programming (ILP) models to solve the VNF placement problem with service chaining while guaranteeing resiliency against single-node/single-link, single-link and single-node failures. We quantify the number of nodes needed in each case and compare these results with an Unprotected scenario. Furthermore, we investigate the effect of latency on the proposed protection schemes.
The rest of this paper is organized as follows. Section II discusses NFV and the service-chaining concept and overviews existing works in the literature. In Section III we present the network model used, while in Section IV we present the resilient design scenarios and discuss their failure prevention potential. In Section V the resilient SC provisioning problem is formally stated and the ILP models are shown. In Section VI we present the case studies and show the obtained numerical results. Finally, conclusions and future work are discussed in Section VII.
II. RELATED WORKS
NFV is still a concept under standardization. Currently, a number of standardization activities in the NFV area are carried out by ETSI and IETF [3] [4] [5]. ETSI has defined an architectural framework that enables VNFs to be deployed and executed on a Network Functions Virtualization Infrastructure (NFVI), which comprises commodity servers logically separated/partitioned by a software layer. Above the hypervisor layer, which is the component in charge of mapping the VNFs to physical resources, a VNF is mapped to a Virtual Machine (VM) in the NFVI, and its deployment and management are handled by the Management and Orchestration (MANO) system [6].
[Figure: two SCs (SC1, SC2) between start point v1 and end point v6; Phase 1 maps VNF instances (VNF1-VNF4) to NFV nodes, Phase 2 maps VNF requests 1-4 to the NFV nodes hosting the requested VNFs.]
Fig. 1. Two service chains, each having different VNFs, embedded in the physical network.
The problem of embedding SCs into a physical infrastructure can be considered as an extended version of two NP-hard problems: the Virtual Network Embedding (VNE) problem [7], [8]
and the Location-Routing Problem (LRP) [9]. The similarity with VNE resides in the fact that SCs can be considered as virtual networks characterized by a chain topology, where VNFs represent virtual nodes, chained together through virtual links that must be mapped to physical paths. The similarity with LRP consists in jointly considering the problem of finding the optimal placement of VNFs, among a set of potential locations, along with the routing between VNFs. The LRP combines these two planning tasks and solves them with the objective of reducing the costs of nodes, edges or paths.
Ref. [10] formalizes the VNF and SC concepts and develops an ILP model for the optimal placement of VNFs and SCs. An extended version of this model [11] considers that the upscaling of an existing VNF introduces additional cost, whereas hosting multiple VNFs within the same physical node introduces context switching costs. Our model leverages and extends both the above mentioned works. Ref. [12] develops an ILP model for the efficient placement of VNFs considering processing-resource sharing. In [13] an online algorithm that jointly considers VM placement and routing is proposed. Finally, the authors of [14] focus on the deployment of VNFs in a hybrid environment where some NFs are virtualized and others use dedicated hardware appliances. In this work we focus on the resiliency of deployed SCs and the impact of the Quality of Service (QoS) requirements on the protection scheme adopted. Some research efforts have focused on the resiliency of VMs. Ref. [15] presents a VM placement method to achieve redundancy against host server failures with a minimum set of servers. The idea is to minimize the resources needed to provide a certain protection level. With respect to our work, no consideration is made of resource sharing or of the performance requirements of the VNFs that run on the VMs. Moreover, the authors focus only on failures that occur within physical nodes, while we also include failures of physical links. Finally, Ref. [16] proposes a model to describe the components of services, along with a management system to deploy such an information model, with the objective of providing an automated and resilient deployment. Apart from the differences in the general approach, Ref. [16] focuses on the resiliency of a single VNF, whereas we consider the resiliency of the whole SC.
III. SERVICE CHAINS AND NETWORK MODEL
A. Network model
We model the physical network as a directed graph composed of a set of physical nodes (which can host VNFs or only act as forwarding nodes) and a set of physical links representing the fiber links. Each physical link is associated with a bandwidth capacity. The physical nodes equipped with COTS hardware are referred to as NFV nodes and can have different amounts of processing capacity in terms of the number of VMs that they can host.
B. Service chains model
Service chains are composed of a sequential concatenation of multiple VNFs. To deploy a SC, an operator needs to find the right placement of the VNFs into the NFV nodes of the physical network (VNF placement process) and to chain them through a physical path. Different SCs can share multiple VNFs, and different VNFs can be placed into the same physical NFV node. As shown in Fig. 1, two SCs composed of different VNFs both have physical node v1 as start point and physical node v6 as end point. In addition, VNF1 is shared among the two SCs and mapped to physical node v2, which must be equipped with enough processing capacity to host such a VNF.
C. VNF model
Generally, a VNF is an abstracted object that performs operations on input traffic. Each VNF has a processing capability which corresponds to the number of CPU cores assigned to the VM that hosts that VNF. Moreover, we assume that each service corresponds to one SC modeled through a simple line graph composed of a pair of start/end-points, a set of virtual nodes representing the VNFs and a set of virtual links chaining consecutive VNF requests within the SC1. In order to simplify the modeling, the concept of requests is decoupled from the VNFs that compose the service chains. In other words, as shown in Fig. 1 (phases 1 and 2), a SC is considered as a chain of VNF requests. In order to deploy SCs in the network, VNF instances are mapped to NFV nodes (phase 1) and, successively, VNF requests are mapped to the NFV nodes that host the requested VNFs (phase 2). The same applies to the mapping of end-points, which we assume have fixed locations, known a priori, and cannot host VNFs. Furthermore, we assume that each SC serves the aggregated traffic of a set of users requesting a specific service from a specific physical location.
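The two-phase mapping described above can be sketched as follows; all node and VNF names are illustrative (echoing Fig. 1's notation), not taken from an actual implementation.

```python
# Sketch of the two-phase SC model: a SC is a line graph of VNF requests
# between fixed endpoints; phase 1 places VNF instances on NFV nodes,
# phase 2 maps each request to a node hosting the requested VNF.

# Phase 1: VNF instances placed on NFV nodes (hypothetical placement).
vnf_placement = {"VNF1": {"v2"}, "VNF2": {"v3"}, "VNF3": {"v4"}, "VNF4": {"v5"}}

# Two SCs sharing VNF1, both starting at v1 and ending at v6 (as in Fig. 1).
sc1 = {"endpoints": ("v1", "v6"), "requests": ["VNF1", "VNF2", "VNF3"]}
sc2 = {"endpoints": ("v1", "v6"), "requests": ["VNF1", "VNF4"]}

def map_requests(sc, placement):
    """Phase 2: map each VNF request to some NFV node hosting that VNF."""
    mapping = []
    for f in sc["requests"]:
        hosts = placement.get(f)
        if not hosts:
            raise ValueError(f"no NFV node hosts {f}")
        mapping.append((f, sorted(hosts)[0]))  # pick one host deterministically
    return mapping

print(map_requests(sc1, vnf_placement))
# [('VNF1', 'v2'), ('VNF2', 'v3'), ('VNF3', 'v4')]
```

Note how VNF1 is instantiated once in phase 1 but serves the requests of both SCs in phase 2, which is exactly the sharing shown in Fig. 1.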
IV. RESILIENT DESIGN PROTECTION SCHEMES
One important aspect for network operators is to guarantee
service continuity in case of failures. To achieve such an objective, resiliency must be taken into account in the design phase.
1We use the term virtual node to indicate the start/end points and the VNFs composing the SC, and refer to the segment used to chain two consecutive VNFs within the same SC as a virtual link.
[Figure: (a) service chain to be embedded; (b) end-to-end protection (primary and backup VNFs on disjoint nodes and paths); (c) virtual-link protection (working path and protection path); (d) virtual-node protection (primary and backup paths might share the same physical link).]
Fig. 2. Proposed protection schemes.
This means deploying redundant VNF instances, which are kept in standby mode and activated upon the occurrence of a node or link failure that compromises the service of the primary VNFs. The redundancy schemes depend on the type of
failures. In this section we present the three protection schemes proposed in this work. The redundancy approaches are divided into the following two categories:
a) On-Site Redundancy: Critical VNFs supporting critical services and customers require fast switchover to backup VNFs in order to ensure availability. In order to meet latency expectations, backup VNFs need to be instantiated on-site (i.e., Centralized Redundancy). Critical VNFs may necessitate a 1+1 level of redundancy, while less critical functions can tolerate 1:1 redundancy. The main benefits of centralized redundancy are a reduced switchover time, which speeds up the recovery process, and a reduced amount of VNF internal state information that needs to be transferred from primary to backup VNFs. Note that this approach does not provide resiliency against node failures, since primary and backup VNFs share the same physical location.
b) Off-Site Redundancy: An off-site redundancy architecture involves having redundant VNFs placed in (hot or cold) standby mode in selected remote locations or NFVI nodes in the network operator's serving region. The intent is to instantiate them when there are failed VNFs in many NFVI-Points-of-Presence (NFVI-PoPs). Moreover, this approach can guarantee resiliency against link and node failures, since backup VNFs do not share the same physical locations as primary VNFs. Hence, based on the service criticality and the targeted resiliency guarantees, the operator can choose between an on-site and an off-site redundancy approach [17].
In this work we propose three resiliency protection schemes. The first consists of an end-to-end protection of the entire SC. The idea behind such a design is to have a SC that is resilient against single-link and single-node failures. To achieve this goal, a primary SC is embedded in the physical network to support the related service in normal conditions, and it is protected through a backup SC whose VNFs are embedded in different physical locations. The physical paths used to chain primary and backup VNFs must be node disjoint. Fig. 2(b) shows an example of such a protection scheme, where the SC illustrated in Fig. 2(a), composed of four VNFs, is embedded into the physical network. This protection scheme can be considered an off-site redundancy strategy, since all backup VNFs are instantiated in different locations from those hosting the primary ones. In this case, both the 1+1 and 1:1 redundancy strategies are possible, depending on the service latency requirement and the operator's design objective in terms of resource utilization. Note that both the primary and backup physical paths resulting from the embedding must meet the latency requirement of the service. We refer to this protection strategy as End-to-end protection (E2E-P).
The second protection scheme can be considered an on-site redundancy protection scheme, with the objective of protecting the virtual links used to concatenate the VNFs of a certain SC, hence providing resiliency against physical link failures. Each virtual link of the SC is embedded through two physical paths, one primary path and one backup path, which must not share any physical link, while different primary/backup virtual links of the same SC can share common physical links. An example of such a scenario is shown in Fig. 2(c). We refer to this protection scheme as Virtual-link protection (Vl-P).
Finally, the last protection scheme provides resiliency against single-node failures. Each VNF composing the SC is instantiated in two disjoint physical locations, whereas the physical paths used to concatenate the primary and backup VNFs might share physical links. This protection scheme suits operators' needs when node failures occur with higher probability than link failures. An example of this scenario is shown in Fig. 2(d). We refer to this scenario as Virtual-node protection (Vn-P).
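The disjointness requirements behind the three schemes can be sketched as simple checks on a candidate embedding; the helper names and node labels below are hypothetical.

```python
# Vn-P needs node-disjoint primary/backup VNF placements; Vl-P needs
# link-disjoint primary/backup paths per virtual link; E2E-P needs both.

def node_disjoint(primary_nodes, backup_nodes):
    """True if no physical node hosts both a primary and a backup VNF."""
    return not set(primary_nodes) & set(backup_nodes)

def link_disjoint(primary_path, backup_path):
    """True if the two paths (node lists) share no directed physical link."""
    links = lambda p: {(p[i], p[i + 1]) for i in range(len(p) - 1)}
    return not links(primary_path) & links(backup_path)

# Vn-P example: disjoint hosting nodes; chaining paths may still share links.
assert node_disjoint(["v2", "v3"], ["v4", "v5"])
# Vl-P example: the two embeddings of a virtual link must not share links.
assert link_disjoint(["v1", "v2", "v6"], ["v1", "v3", "v6"])
```

E2E-P would simply require both checks to pass for the whole chain.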
V. PROBLEM STATEMENT
A. Modeling the physical topology
We model the physical network as a directed graph $G = (V, E)$, where $V$ represents the set of physical nodes $v \in V$, which can host VNFs or act as forwarding nodes, while $E$ represents the set of physical links $(v, v') \in E$, which model high-capacity fiber links. Each physical link is associated with a latency contribution due to signal transmission and propagation, denoted by $\lambda(v, v')$, and a bandwidth capacity $\beta(v, v')$. The physical nodes equipped with COTS hardware are referred to as NFV nodes and can have different amounts of processing capacity in terms of the number of Virtual Machines (VMs) that they can host. Finally, we consider a processing-related latency $\omega(v),\ v \in V$, introduced by NFV nodes. This latency contribution is proportional to the number of SCs sharing the same VNF; hence, if a VNF is shared among a high number of SCs, the context switching latency impacts the total latency more.
B. VNF and service chain modeling
Generally, a VNF is an abstracted object that performs operations on input traffic. Each VNF $f \in F$ has a processing capability which corresponds to the number of CPU cores assigned to the VM that hosts the VNF $f$. We assume that a VNF shared among different SCs must run on a VM with enough capacity in terms of CPUs, and that each VNF requires one CPU core of the VM.
Moreover, we assume that each service corresponds to one SC modeled through a simple line graph $S_c = (E_c \cup U_c, G_c)$, where $E_c$ is the set of end-points of the SC, $U_c$ is the set of VNF requests $u$, while $G_c$ is the set of virtual links $(u, u')$ chaining requests $u$ and $u' \in U_c$. In order to simplify the modeling, the concept of requests is decoupled from the actual network functions that compose the service chains. In other words, VNFs are mapped to requests through a mapping parameter $\gamma^c_u$ that specifies the network function $f \in F$ requested by request $u \in U_c$, while requests are mapped to physical nodes through a decision variable. The same applies to the mapping of end-points, which we assume have fixed locations known a priori. Furthermore, we assume that each SC serves a set of users requesting a specific service from a specific physical location, and that each virtual link composing the SC is characterized by a bandwidth requirement $\gamma(u, u'),\ u, u' \in U_c,\ c \in C$. In addition, each SC is associated with a maximum tolerated latency, referred to as $\varphi(c),\ c \in C$.
TABLE I
PARAMETER DESCRIPTION FOR THE ILP MODELS

| Parameter | Domain | Description |
| $\eta^c_u$ | $c \in C,\ u \in U_c$ | Physical start/end point where $u$ is mapped for SC $c$ |
| $\gamma^c_u$ | $c \in C,\ u \in U_c$ | Network function requested by $u$ for SC $c$, $\gamma^c_u \in F$ |
| $\beta_{v,v'}$ | $(v, v') \in E$ | Bandwidth capacity of physical link $(v, v')$ |
| $\lambda_{v,v'}$ | $(v, v') \in E$ | Latency of physical link $(v, v')$ |
| $\omega_v$ | $v \in V$ | Context switching latency of node $v$ |
| $\tau^c_u \in F$ | $c \in C,\ u \in U_c$ | VNF $f$ requested by request $u$ in SC $c$ |
| $\varphi_c$ | $c \in C$ | Maximum tolerated latency for SC $c$ |
| $N_{req}(f)$ | $f \in F$ | Maximum number of requests of different SCs that VNF $f$ can handle |
| $N_{VM}(v)$ | $v \in V$ | Maximum number of virtual machines that node $v$ can host |
| $M$ | | Big-M parameter |
C. ILP models
We now formulate the ILP models for resilient placement of
VNFs. In Table I and Table II we summarize the parameters
and the variables used. Given a physical topology, a set of SCs
TABLE II
VARIABLE DESCRIPTION FOR THE ILP MODELS

| Variable | Domain | Description |
| $m^c_{u,v} \in \{0,1\}$ | $c \in C,\ u \in U_c,\ v \in V$ | 1 iff the primary VNF request $u$ of SC $c$ is mapped to physical node $v$ |
| $n^c_{u,v} \in \{0,1\}$ | $c \in C,\ u \in U_c,\ v \in V$ | 1 iff the backup VNF request $u$ of SC $c$ is mapped to physical node $v$ |
| $w^c_{v,v',x,y,u,u'} \in \{0,1\}$ | $c \in C,\ (v,v') \in E,\ x, y \in V,\ (u,u') \in G_c$ | 1 iff physical link $(v,v')$ belongs to the path between nodes $x$ and $y$ where primary VNF requests $u$ and $u'$ of SC $c$ are mapped; 0 otherwise |
| $p^c_{v,v',x,y,u,u'} \in \{0,1\}$ | $c \in C,\ (v,v') \in E,\ x, y \in V,\ (u,u') \in G_c$ | 1 iff physical link $(v,v')$ belongs to the path between nodes $x$ and $y$ where backup VNF requests $u$ and $u'$ of SC $c$ are mapped; 0 otherwise |
| $i_{f,v} \in \{0,1\}$ | $f \in F,\ v \in V$ | 1 iff VNF $f$ is hosted by physical node $v$; 0 otherwise |
| $a_v \in \{0,1\}$ | $v \in V$ | 1 iff node $v$ hosts at least one VNF |
to be deployed in the network, we want to find the optimal placement of VNFs such that:
- The number of VNF nodes is minimized.
- Latency requirements of SCs are met.
- Resiliency is achieved according to the goals of the above mentioned scenarios (see Fig. 2 of Section IV).
Objective function:

$\text{Minimize} \quad \sum_{v \in V} a_v$  (1)
We consider three types of constraints to solve this problem, namely placement constraints, routing constraints and performance constraints. Due to space limitations, we show only the constraints for the E2E-P protection scenario and give a brief description of what differs in the other two scenarios, Vl-P and Vn-P.
Placement constraints
Constraints (2a) and (2b) force each primary/backup VNF request to be mapped to one single node. Constraints (2c) and (2d) state that a corresponding VNF $f$ is mapped to physical node $v$ only if there is a primary/backup VNF request. Constraint (2e) enforces that the primary and backup VNF request $u$ cannot be mapped to the same node (node disjointness).

$\sum_{v \in V} m^c_{u,v} = 1 \quad \forall c \in C,\ u \in U_c$  (2a)

$\sum_{v \in V} n^c_{u,v} = 1 \quad \forall c \in C,\ u \in U_c$  (2b)

$i_{f,v} \le \sum_{u \in U_c : \gamma^c_u = f} (m^c_{u,v} + n^c_{u,v}) \quad \forall f \in F,\ v \in V$  (2c)

$\sum_{u \in U_c : \gamma^c_u = f} (m^c_{u,v} + n^c_{u,v}) \le M \cdot i_{f,v} \quad \forall f \in F,\ v \in V$  (2d)

$m^c_{u,v} + n^c_{u,v} \le 1 \quad \forall u \in U_c,\ c \in C,\ v \in V : v \ne \eta^c_u$  (2e)
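As a toy illustration of constraints (2a), (2b) and (2e), the following exhaustive search finds a placement that maps every request to exactly one primary and one backup node, with primary and backup on different nodes, while minimizing the number of active nodes. The instance is hypothetical; a real run would use an ILP solver such as CPLEX, as in this paper.

```python
# Brute-force sketch of the placement constraints: each request gets exactly
# one (primary, backup) node pair, primary != backup per (2e), and we keep the
# assignment touching the fewest distinct nodes (objective (1) in miniature).
from itertools import product

nodes = ["v1", "v2", "v3"]
requests = ["u1", "u2"]  # VNF requests of one SC (hypothetical instance)

best = None
for assign in product(product(nodes, repeat=2), repeat=len(requests)):
    # assign[k] = (primary node, backup node) for request k
    if any(p == b for p, b in assign):   # violates node disjointness (2e)
        continue
    active = {n for pb in assign for n in pb}
    if best is None or len(active) < len(best[1]):
        best = (assign, active)

assign, active = best
print(len(active))  # minimum number of active nodes: 2
```

With two requests, no placement can use fewer than two nodes, since (2e) forbids collapsing a primary and its backup onto one node.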
Routing constraints
Constraints (3a) [(3b)] ensure that a physical link $(v, v')$ can belong to a path between two nodes $x$ and $y$ for a virtual link $(u, u')$ of the SC $c$ only if two consecutive primary [backup] VNF requests $u$ and $u'$ are mapped to these nodes, respectively. Note that equations (3a)-(4d) contain products of binary variables, which we linearize in order to solve the ILP models.
$w^c_{v,v',x,y,u,u'} \le m^c_{u,x} \cdot m^c_{u',y} \quad \forall c \in C,\ (v,v') \in E,\ x, y \in V,\ (u,u') \in G_c$  (3a)

$p^c_{v,v',x,y,u,u'} \le n^c_{u,x} \cdot n^c_{u',y} \quad \forall c \in C,\ (v,v') \in E,\ x, y \in V,\ (u,u') \in G_c$  (3b)
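The linearization mentioned in the text is the standard one for a product of two binary variables: an auxiliary binary $z$ replaces $x \cdot y$ and is pinned to the product's value by linear constraints.

```latex
% Standard linearization of z = x \cdot y with x, y, z \in \{0,1\}:
z \le x, \qquad z \le y, \qquad z \ge x + y - 1
```

The first two constraints force $z = 0$ whenever either factor is 0, and the last forces $z = 1$ when both factors are 1, so the linear system reproduces the product exactly over binary values.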
Equations (4a)-(4b) [(4c)-(4d)] are source and destination constraints for primary [backup] VNF requests, respectively. They ensure that a virtual link starts in the node $x$ where the primary [backup] start-point request $u$ of SC $c$ is mapped, and that the virtual link ends in the node $y$ where the primary [backup] end-point request $u'$ of SC $c$ is mapped.
$\sum_{(x,v) \in E :\ x,y \in V} w^c_{x,v,x,y,u,u'} \cdot m^c_{u,x} \cdot m^c_{u',y} = 1 \quad \forall c \in C,\ (u,u') \in G_c$  (4a)

$\sum_{(v,y) \in E :\ x,y \in V} w^c_{v,y,x,y,u,u'} \cdot m^c_{u,x} \cdot m^c_{u',y} = 1 \quad \forall c \in C,\ (u,u') \in G_c$  (4b)

$\sum_{(x,v) \in E :\ x,y \in V} p^c_{x,v,x,y,u,u'} \cdot n^c_{u,x} \cdot n^c_{u',y} = 1 \quad \forall c \in C,\ (u,u') \in G_c$  (4c)

$\sum_{(v,y) \in E :\ x,y \in V} p^c_{v,y,x,y,u,u'} \cdot n^c_{u,x} \cdot n^c_{u',y} = 1 \quad \forall c \in C,\ (u,u') \in G_c$  (4d)
During the mapping of primary/backup VNF requests on a physical path between $x$ and $y$, no incoming link of node $x$ is considered (constraint (5a)) and no outgoing link of node $y$ is considered (constraint (5b)).
$\sum_{(v,x) \in E :\ v \in V} w^c_{v,x,x,y,u,u'} = \sum_{(v,x) \in E :\ v \in V} p^c_{v,x,x,y,u,u'} = 0 \quad \forall c \in C,\ x, y \in V : x \ne y,\ (u,u') \in G_c$  (5a)

$\sum_{(y,v) \in E :\ v \in V} w^c_{y,v,x,y,u,u'} = \sum_{(y,v) \in E :\ v \in V} p^c_{y,v,x,y,u,u'} = 0 \quad \forall c \in C,\ x, y \in V : x \ne y,\ (u,u') \in G_c$  (5b)
Constraints (6a)-(6d) are transit constraints for primary/backup VNF requests. In particular, constraints (6a) and (6b) ensure that, for any intermediate node $w$ within the physical path between $x$ and $y$, if one of its incoming links belongs to the primary/backup physical path, then one of its outgoing links also belongs to that path. Constraints (6c) [(6d)] avoid the use of multiple incoming [outgoing] links of the intermediate node.
$\sum_{(v,w) \in E :\ v \in V} w^c_{v,w,x,y,u,u'} = \sum_{(w,v') \in E :\ v' \in V} w^c_{w,v',x,y,u,u'} \quad \forall c \in C,\ w \in V,\ x, y \in V : x \ne w,\ y \ne w,\ (u,u') \in G_c$  (6a)

$\sum_{(v,w) \in E :\ v \in V} p^c_{v,w,x,y,u,u'} = \sum_{(w,v') \in E :\ v' \in V} p^c_{w,v',x,y,u,u'} \quad \forall c \in C,\ w \in V,\ x, y \in V : x \ne w,\ y \ne w,\ (u,u') \in G_c$  (6b)

$\sum_{(v,w) \in E :\ v \in V} w^c_{v,w,x,y,u,u'} \le 1 \quad \forall c \in C,\ w \in V,\ x, y \in V : x \ne w,\ y \ne w,\ (u,u') \in G_c$  (6c)

$\sum_{(v,w) \in E :\ v \in V} p^c_{v,w,x,y,u,u'} \le 1 \quad \forall c \in C,\ w \in V,\ x, y \in V : x \ne w,\ y \ne w,\ (u,u') \in G_c$  (6d)
Finally, constraint (7a) ensures that a physical link $(v, v')$ is part of either the primary physical path or the backup physical path used for the embedding of all VNF requests of SC $c$.
$\sum_{(u,u') \in G_c} \left( w^c_{v,v',x,y,u,u'} + p^c_{v,v',x,y,u,u'} \right) \le 1 \quad \forall c \in C,\ x, y, v, v' \in V : (v,v') \lor (v',v) \in E$  (7a)
Latency and capacity constraints
$\sum_{f \in F} i_{f,v} \le M \cdot a_v \quad \forall v \in V$  (8a)

$a_v \le \sum_{f \in F} i_{f,v} \quad \forall v \in V$  (8b)

$\sum_{c \in C,\ (u,u') \in G_c,\ x, y \in V} \left( w^c_{v,v',x,y,u,u'} + p^c_{v,v',x,y,u,u'} \right) \cdot \beta_{u,u'} \le C_{v,v'} \quad \forall (v,v') \in E$  (8c)

$\sigma^c_w = \sum_{v \in V,\ u \in U_c} m^c_{u,v} \cdot \omega_v \quad \forall c \in C$  (8d)

$\sigma^c_p = \sum_{v \in V,\ u \in U_c} n^c_{u,v} \cdot \omega_v \quad \forall c \in C$  (8e)

$\sum_{x,y \in V,\ (u,u') \in G_c,\ (v,v') \in E} w^c_{v,v',x,y,u,u'} \cdot \lambda_{v,v'} + \sigma^c_w \le \varphi_c \quad \forall c \in C$  (8f)

$\sum_{x,y \in V,\ (u,u') \in G_c,\ (v,v') \in E} p^c_{v,v',x,y,u,u'} \cdot \lambda_{v,v'} + \sigma^c_p \le \varphi_c \quad \forall c \in C$  (8g)

$\sum_{f \in F} i_{f,v} \le N_{VM}(v) \quad \forall v \in V$  (8h)

$\sum_{c \in C,\ u \in U_c : \gamma^c_u = f} (m^c_{u,v} + n^c_{u,v}) \le N_{req}(f) \quad \forall v \in V,\ f \in F$  (8i)
Constraints (8a)-(8b) select the active NFV nodes: a node is considered active if it hosts at least one VNF. Constraint (8c) ensures that link capacity is not exceeded, whereas constraints (8d) and (8e) compute the context switching latency contributions $\sigma^c_w$ and $\sigma^c_p$ for the primary and backup embeddings of SC $c$, respectively. The maximum latency of the primary/backup embedding of SC $c$ is constrained in (8f)-(8g). Finally, the maximum number of VMs that node $v$ can host is bounded by (8h), and the number of parallel requests that a given VNF can serve is constrained in (8i).
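The latency constraint (8f) can be read as a simple feasibility check on one embedded SC: sum the link latencies $\lambda$ along the physical path, add the context switching contribution $\sigma$ of the hosting nodes, and compare against the budget $\varphi$. The link latency values below are invented for illustration; the 4 ms context switching delay and 60 ms budget echo the case study in Section VI.

```python
# Feasibility check mirroring constraint (8f) for a single embedded SC.
lam = {("v1", "v2"): 10.0, ("v2", "v6"): 15.0}   # link latency λ(v, v') in ms
omega = {"v2": 4.0}                               # context switching ω(v) in ms

def sc_latency(path, hosting_nodes):
    """Link-latency sum along the path plus σ = Σ ω(v) over hosting nodes."""
    link_part = sum(lam[(path[i], path[i + 1])] for i in range(len(path) - 1))
    switch_part = sum(omega.get(v, 0.0) for v in hosting_nodes)
    return link_part + switch_part

phi = 60.0  # maximum tolerated latency φ, e.g. the Online Gaming budget
total = sc_latency(["v1", "v2", "v6"], ["v2"])
print(total, total <= phi)  # 29.0 True
```

The same check with `omega` scaled by the number of SCs sharing each VNF would reflect the proportional context-switching model described in Section V-A.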
D. Modeling other scenarios
With respect to E2E-P, in Vl-P we must ensure that the primary and backup physical paths used to map a certain virtual link of a SC do not share any physical link, and no node disjointness constraint is required. Here, link disjointness is applied considering only one single virtual link at a time. Finally, for the Vn-P scenario, only the node disjointness constraint applies, and no disjointness constraints between primary/backup physical paths are needed, since they can use the same physical links.
E. Problem complexity
The total number of variables and constraints of the E2E-P optimization problem can be calculated using the following formulas:

$N_{vars} = |V| \cdot (2 \cdot |C| \cdot |U_c| + 2 \cdot |C| \cdot |E| \cdot |V| \cdot |G_c| + |F| + 1)$  (9)

$N_{const} = |C| \cdot (2 \cdot |U_c| + |U_c| \cdot |V| + 2 \cdot |E| \cdot |G_c| \cdot |V|^2 + 4 \cdot |G_c| + |V|^2 + 4 \cdot |V|^3 \cdot |G_c| + |V|^2 \cdot |E| + 4) + 3 \cdot |V| \cdot (|F| + 1) + |E|$  (10)

In both equations we observe that the dominant term for variables and constraints is $2 \cdot |E| \cdot |G_c| \cdot |V|^2$. Thus, the problem complexity for all proposed protection scenarios, given by the sum of the number of variables and the number of constraints, is in the order of $O(|G_c| \cdot |E| \cdot |C| \cdot |V|^2)$.
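Formula (9) can be sanity-checked numerically. For the NSFNET case study (14 nodes, 22 bidirectional links, hence 44 directed links, 2 SCs per run), and assuming for illustration 5 VNF requests and 6 virtual links per SC with 6 distinct VNF types (these last three values are not stated per-run in the paper):

```python
# Plugging assumed case-study sizes into formula (9) for the variable count.
V, E, C, Uc, Gc, F = 14, 44, 2, 5, 6, 6

n_vars = V * (2 * C * Uc + 2 * C * E * V * Gc + F + 1)
print(n_vars)  # 207354
```

Even for this small instance the path variables ($2|C||E||V||G_c|$ per node) dominate, which is consistent with the $2 \cdot |E| \cdot |G_c| \cdot |V|^2$ dominant term noted above and explains why only two SCs per run are tractable.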
VI. CASE STUDY AND RESULTS
In this section we present and discuss the results of the ILP models shown in Section V. To solve the ILP problems we used CPLEX 12.6.1.0, installed on a hardware platform equipped with an 8×2 GHz processor and 8 GB of RAM. In order to evaluate the impact of latency requirements on the protection scenarios, we investigated the embedding of two types of service chains: the first one with a stringent latency requirement (Online Gaming) and the second one with a non-stringent latency requirement (Web Service). The maximum end-to-end tolerated latency for these services has been set to 500 ms for Web Service and 60 ms for Online Gaming [11]. Table III shows the VNFs composing both SCs, their bandwidth requirements and maximum allowed latency. Due to the hardness of the optimization problem, we considered only two SCs in each optimization run and solved the ILP models for two homogeneous cases, in which 2 SCs of the same type are embedded in the network, and for one heterogeneous case, in which the two SC types are embedded in the network. As physical topology we considered the NSFNET network (14 nodes and 22 bidirectional links). In addition, we assume that all the physical nodes are NFV nodes and can act as start/end points of SCs. Each NFV node is assumed to have the same capacity in terms of the VMs it can accommodate. We set the context switching delay to 4 ms per VNF and assume that link capacity is 1 Gbps (i.e., link capacity is not a strict constraint). Moreover, we assume that the bandwidth requirement of the virtual links chaining VNFs is the same for the whole SC (i.e., the data rate does not change at the output of the VNFs). The results were obtained by averaging the results of 10 instances, for each value of node capacity and each protection scenario, considering different pairs of start/end points in each instance. Fig. 3(a), Fig. 3(b) and Fig. 3(c) show the average number of active nodes needed to support the proposed protection scenarios for different values of node capacity (number of VMs that a node can host).
TABLE III
PERFORMANCE REQUIREMENTS FOR THE SERVICE CHAINS

| Service Chain | Chained VNFs | $\beta$ | $\varphi_c$ |
| Web-Service | NAT-FW-TM-WOC-IDPS | 100 kbit/s | 500 ms |
| Online-Gaming | NAT-FW-VOC-WOC-IDPS | 50 kbit/s | 60 ms |

NAT: Network Address Translator, FW: Firewall, TM: Traffic Monitor, WOC: WAN Optimization Controller, IDPS: Intrusion Detection and Prevention System, VOC: Video Optimization Controller
A. Impact of latency
Fig. 3(a) presents the number of active nodes for the less
stringent SC in terms of latency (Web-Service). We observe
that all protection scenarios are possible and that the Vl-P
scenario activates the same amount of Unprotected Scenario.
We note that a service with low requirements on latency can be
protected against single-link failures (Vl-P) with no additional
NFV nodes with respect to the Unprotected case (baseline).
On the other hand, providing protection against both single-
link and single failure (E2E-P) requires the activation of twice
the amount of NFV nodes. In case of SCs with high latency
requirements, in Fig. 3(b), we observe that all scenarios lead
to infeasible solutions when only two VMs are allowed per
node, mainly due to the fact that distributing VNFs among high
number of nodes increases the latency of physical paths needed
to chain the VNFs and consequently violates the latency
constraint. We also observe that the unprotected scenario,
considered as baseline case, requires at least three VM per
VNF node to meet latency requirement. Different results were
obtained for the Vl-P case which is infeasible independently
from node capacity. This means that the operator is constrained
to place backup VNFs “Off-site” to provide resiliency against
only single-link failures, when only latency critical SCs are
deployed. In this case, it is preferable to provide resiliency
against both node and link failures (E2E-P) rather than provide
protection against only node failures (Vn-P) since both sce-
narios activates the same number of NFV nodes independently
from node capacity. For the heterogeneous scenario shown
in Fig. 3(c), all protection scenarios are possible with at
0
2
4
6
8
10
12
2 3 4 5 6 7 8
Number of active NFV nodes
Node capacity
UnPro Vl-P Vn-P Tot-P
(a) Web-service
0
2
4
6
8
10
12
2 3 4 5 6 7 8
Number of active NFV nodes
Node capacity
(b) Heterogeneous
0
2
4
6
8
10
12
2 3 4 5 6 7 8
Number of active NFV nodes
Node cpacity
(c) Online-gaming
Fig. 3. Comparison of the proposed protection scenarios for different latency
requirements
least 2 VMs except from the Vl-P scenario which is only
possible starting from 5 VMs. In terms of latency, it means
that deploying SCs with different latency requirements and sharing VNFs among them can guarantee resiliency with a small number of VMs per node and, consequently, a smaller failure impact within NFV nodes. On the other hand, for the Vn-P and E2E-P protection scenarios, deploying SCs with identical or different latency requirements does not affect the number of active NFV nodes: similar results were obtained in both the homogeneous and heterogeneous cases starting from 3 VMs per node, except for the case of On-line gaming SCs when only 2 VMs are allowed per node.
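The latency-driven infeasibilities above can be illustrated with a toy model: the end-to-end latency of an SC grows with the number of physical hops needed to chain its VNFs, while consolidating VNFs on fewer nodes adds context-switching overhead. The function and all numeric parameters below are hypothetical, chosen only for illustration; they are not taken from the paper's ILP models.

```python
# Toy latency model for a service chain (SC); all parameters are hypothetical.
def chain_latency(hops, n_vnfs, link_delay_ms=1.0,
                  proc_ms=0.5, ctx_switch_ms=0.2, vms_per_node=2):
    """End-to-end SC latency: propagation over physical hops plus VNF
    processing, with a context-switching penalty that grows with the
    number of VMs sharing each hosting node."""
    processing = n_vnfs * (proc_ms + ctx_switch_ms * (vms_per_node - 1))
    return hops * link_delay_ms + processing

# Spreading 4 VNFs over many nodes needs more hops to chain them ...
spread = chain_latency(hops=6, n_vnfs=4, vms_per_node=2)        # 8.8 ms
# ... while consolidating them trades hops for context-switching overhead.
consolidated = chain_latency(hops=2, n_vnfs=4, vms_per_node=4)  # 6.4 ms
```

With these made-up numbers, a 6 ms latency bound would rule out both placements, mirroring the situation where latency-critical SCs become infeasible with only two VMs per node.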
B. Effect of node capacity
As can be seen from Figs. 3(a), 3(b) and 3(c), increasing node capacity decreases the number of active NFV nodes irrespective of the type of SCs deployed. In general, we observe that the number of active nodes is halved for all protection scenarios when increasing the number of VMs per node from 2 to 5. A further increase of node capacity does not impact the number of active nodes, which means that VNF consolidation is limited by latency, as consolidating more VNFs into fewer nodes would increase the impact of context-switching latency.
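Ignoring latency, the minimum number of active NFV nodes is a simple packing bound: the total number of VNF instances (primary plus any dedicated backups) divided by the node capacity, rounded up. The sketch below is our own illustrative arithmetic with an assumed instance count, not the paper's ILP; it reproduces the halving effect when capacity grows from 2 to 5 VMs per node.

```python
from math import ceil

def min_active_nodes(vnf_instances: int, vms_per_node: int) -> int:
    """Capacity-only lower bound on active NFV nodes, assuming one VNF
    instance per VM and at most vms_per_node VMs per node."""
    return ceil(vnf_instances / vms_per_node)

# E.g., 10 VNF instances (primary plus dedicated backup copies):
bounds = {cap: min_active_nodes(10, cap) for cap in range(2, 9)}
# bounds == {2: 5, 3: 4, 4: 3, 5: 2, 6: 2, 7: 2, 8: 2}
```

The bound halves from 5 nodes (capacity 2) to 2 nodes (capacity 5) and then flattens, consistent with the observation that beyond this point latency, not capacity, limits further consolidation.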
[Fig. 4. Primary/backup path lengths with respect to node capacity. The plot shows the average hop count (0–60) for each node capacity (2–8 VMs per node), with working and backup path contributions for the E2E-P, Vn-P and Vl-P scenarios and the path length of the unprotected (Unpro) case.]
C. Impact of node capacity on the average hop count
We analyzed the impact of node capacity on the average length of the primary/backup physical paths for all proposed protection strategies. In Fig. 4 we show the primary/backup path lengths when 2 SCs with relaxed latency requirements are deployed. These results were obtained by averaging the path lengths of 5 randomly-selected start/end point pairs, tested for all protection scenarios. We observe that, as node capacity increases, the length of the primary path does not change significantly for any protection strategy. Different results are observable for the backup paths, where it is clear that increasing node capacity does not necessarily reduce backup path lengths. This is shown by the fact that allowing more than 5 VMs per node does not reduce the average backup path length, meaning that a trade-off between consolidation of VNFs and link capacity exists.
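The longer backup paths have a simple structural explanation: a node-disjoint backup (as in Vn-P) must detour around every intermediate node of the primary path. The BFS sketch below, on a small hypothetical topology of our own choosing, illustrates this; it is not the paper's routing model.

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path (in hops) from src to dst, avoiding `banned` nodes."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                queue.append(v)
    return None                           # dst unreachable

# Hypothetical 5-node topology (adjacency lists):
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
primary = shortest_path(adj, 0, 3)                            # [0, 1, 3]: 2 hops
# Node-disjoint backup (Vn-P style): ban the primary's intermediate nodes.
backup = shortest_path(adj, 0, 3, banned=set(primary[1:-1]))  # [0, 2, 4, 3]: 3 hops
```

Here the backup needs one extra hop; on larger topologies the detour grows, which is consistent with the longer backup paths observed in Fig. 4 and with their insensitivity to node capacity.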
VII. CONCLUSIONS
In this work we proposed three different protection strategies to provide resilient SC deployment against single-node failures, single-link failures, and combined single-node and single-link failures. We reported the formulation of one of them through ILP, solved the ILP models considering a small number of SCs with different latency requirements, and found that a trade-off exists between node capacity and the latency of the deployed SCs. In our small-scale scenario, we conclude that, in order to provide resiliency to SCs against both single-link and single-node failures, up to 107% more NFV nodes are needed with respect to the unprotected scenario and the case where only single-link failures are targeted. Future steps of this work aim at developing a heuristic to allow solving larger instances (a large number of SCs) in reasonable time. We also aim at extending the proposed models with a shared protection scheme.
ACKNOWLEDGMENT
This research has received funding from the European Community Seventh Framework Programme FP7/2013-2015 under grant agreement no. 317762 (COMBO project), and from COST Action 15127 RECODIS (Resilient Communication Services Protecting End-user Applications from Disaster-based Failures).
Conference Paper
Moving services to the cloud is a trend that has been going on for years now, with a constant increase in the sophistication and complexity of such services. Today, even critical infrastructure operators are considering moving their services and data to the cloud; most prominent among them are telecommunication operators, who are calling for running their services as so-called virtual network services. These services are usually composed from a set of components, each with individual resilience and scalability requirements. Hence, two problems need to be solved: how to describe the blueprint for building a service from its components, including the components' requirements, and how to derive an actual deployment from such a blueprint. In this paper, we present a first step in this direction. We have developed an information model to describe the resources and components of complex composite services, and a management system that maps such a description into a deployment model. We have based our prototype on OpenStack and have identified some of its shortcomings that need to be overcome to make an automated, resilience-aware deployment and operation system a reality.
Article
Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.
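The placement-and-chaining optimization described in this abstract (and in the ILP models of the present paper) can be illustrated with a toy sketch. The code below brute-forces a minimum-latency placement of a small service chain onto capacitated nodes; all names and numbers (nodes, capacities, delays, the chain itself) are hypothetical, and a real formulation would use an ILP/MIQCP solver rather than enumeration:

```python
# Toy brute-force sketch of VNF placement with service chaining.
# Hypothetical data; real models solve this as an ILP/MIQCP.
from itertools import product

nodes = {"A": 2, "B": 1, "C": 2}             # node -> CPU capacity
link_delay = {("A", "B"): 1, ("B", "A"): 1,  # symmetric link latencies
              ("A", "C"): 3, ("C", "A"): 3,
              ("B", "C"): 1, ("C", "B"): 1}
chain = ["firewall", "nat", "ids"]           # ordered service chain
cpu = {"firewall": 1, "nat": 1, "ids": 1}    # VNF -> CPU demand

def chain_latency(placement):
    """Sum link delays between consecutive VNFs (0 if co-located)."""
    return sum(link_delay[(u, v)]
               for u, v in zip(placement, placement[1:]) if u != v)

def feasible(placement):
    """Check that no node's CPU capacity is exceeded."""
    used = {n: 0 for n in nodes}
    for vnf, n in zip(chain, placement):
        used[n] += cpu[vnf]
    return all(used[n] <= nodes[n] for n in nodes)

# Enumerate all placements of the chain onto nodes, keep the best.
best = min((p for p in product(nodes, repeat=len(chain)) if feasible(p)),
           key=chain_latency)
print(best, chain_latency(best))
```

With these numbers no single node can host the whole chain, so the optimum splits it across two adjacent nodes; the same structure (capacity constraints plus a latency objective over consecutive chain hops) is what the ILP models in this paper encode at scale.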
Article
The design of distribution systems raises hard combinatorial optimization problems. For instance, facility location problems must be solved at the strategic decision level to place factories and warehouses, while vehicle routes must be built at the tactical or operational levels to supply customers. In fact, location and routing decisions are interdependent, and studies have shown that the overall system cost may be excessive if they are tackled separately. The location-routing problem (LRP) integrates the two kinds of decisions. Given a set of potential depots with opening costs, a fleet of identical vehicles, and a set of customers with known demands, the classical LRP consists in opening a subset of depots, assigning customers to them, and determining vehicle routes, to minimize a total cost including the cost of open depots, the fixed costs of vehicles used, and the total cost of the routes. Since the last comprehensive survey on the LRP, published by Nagy and Salhi (2007), the number of articles devoted to this problem has grown quickly, calling for a review of new research. This paper analyzes the recent literature (72 articles) on the standard LRP and new extensions such as several distribution echelons, multiple objectives, or uncertain data. Results of state-of-the-art metaheuristics are also compared on standard sets of instances for the classical LRP, the two-echelon LRP, and the truck and trailer problem.